VideoHelp Forum
  1. Member yoda313's Avatar
    Join Date
    Jun 2004
    Location
    The Animus
OK, I was wondering: if we've had 64-bit processors for a long time now, why don't we have 128-bit processors? (Or is that what those i7/i-whatever Intel processors are?) Is it a miniaturization issue, or a lack of operating system support? Is it a physical impossibility right now?

    Or has the jump to more and more bits on a CPU been supplanted by the race to cram as many cores as possible onto one chip?

    Would there be any benefit to a single-core 128-bit CPU versus, say, a six-core 64-bit processor?

    Is there a bottleneck now between the processor and the controllers that keeps things from going ever faster? Is that what is keeping the bit width of processors relatively low? I mean, weren't the 386 processors 32-bit chips back in the late '80s and early '90s?

    Shouldn't we have a 1Mb CPU by now? Or is the bottleneck issue preventing that escalation? And is it even practical? I know the CPU is the guts of the computer, and the faster the CPU, the faster everything else can go. But if the information can't get from point A to point B, does that power go underutilized?

    Am I misreading the speed race?

    Is this similar to how the chip companies (Intel and AMD) slowly dropped out of the GHz race in favor of the core race? In the '90s the main selling point was the clock speed of the chip, and the same was true in the early 2000s. But it's now at the point where you can't even easily see what the GHz speed of a single core is.

    Is a 64-bit chip the pinnacle of current technology? Will it go higher? Has it gone higher and it's just called something I'm not familiar with? Is there any need to go faster? Would it be useless until the rest of the bus system catches up?

    Please note that some of the terminology here I'm just using because I'm vaguely familiar with it. I'll be honest: I don't fully understand how bus systems work. I'm also not sure how multicore processors work, other than that they are essentially two or more processors in one chip.

    So am I just asking the wrong questions, or is there a real technical hurdle to getting to that next level? Is that where nanotechnology comes in?
    Donatello - The Shredder? Michelangelo - Maybe all that hardware is for making coleslaw?
  2. Banned
    Join Date
    Oct 2004
    Location
    Freedonia
    There's actually some work being done now on 128 bit CPUs. AMD's Bulldozer is rumored to be such a beast. It's a good post with a lot of interesting questions, but someone who keeps up with this stuff more than I do can maybe chime in with more information.
  3. Member yoda313's Avatar
    Join Date
    Jun 2004
    Location
    The Animus
    Originally Posted by jman98
    There's actually some work being done now on 128 bit CPUs. AMD's Bulldozer is rumored to be such a beast
Interesting. I'm sure Intel is too.

    Originally Posted by jman98
    It's a good post with a lot of interesting questions, but someone who keeps up with this stuff more than I do can maybe chime in with more information.
    Thanks. I'm sure they'll start rolling in.
  4. There's no need for 128 bit CPUs on the desktop. There's hardly any performance difference between 32 bit and 64 bit CPUs, the only real benefit being the ability to address additional memory.
  5. Originally Posted by jagabo View Post
    There's no need for 128 bit CPUs on the desktop. There's hardly any performance difference between 32 bit and 64 bit CPUs, the only real benefit being the ability to address additional memory.
Well, obviously you have no idea how computers and CPUs work. 99.99% of calculations are just data movement from A to B, that is, from register to register.
    16, 32, or 64 bits is just the size of a register in the CPU; the more bits, the better, since you can store more information in it at once and don't have to wait for data to become available.

    Well, we will see those 128-bit CPUs when the time comes, basically when software gets more and more complex and becomes slow because the CPU can't handle it in a timely manner.
  6. Originally Posted by lenti_75 View Post
we will see those 128-bit CPUs when the time comes
    Yes, in 10 years or so. The question was why we don't have them today.
Because it would just be a waste right now; we can barely make use of 64 bits.

    All software would have to be rewritten for it so it can address more memory and more registers; that takes time and MONEY.
  8. Member yoda313's Avatar
    Join Date
    Jun 2004
    Location
    The Animus
So just because it CAN be done doesn't mean it will be done, huh? Only if enough companies clamor for it?
  9. Banned
    Join Date
    Nov 2005
    Location
    United States
    @yoda313

Current CPUs are 64-bit CPUs that are capable of 128-bit operations through the use of SIMD instructions; all current SIMD instructions are 128-bit operations. All current CPUs execute a 128-bit SIMD instruction in a single cycle, and said instructions are capable of working on four 32-bit pieces of data, two 64-bit pieces of data, or one 128-bit piece of data per cycle.

    Furthermore, Sandy Bridge and Bulldozer will support the new 256-bit AVX SIMD instruction set, which will allow a single 256-bit instruction to operate on eight 32-bit pieces of data, four 64-bit pieces of data, two 128-bit pieces of data, or one 256-bit piece of data.

    Bulldozer ups the ante by featuring 128-bit integer units and 256-bit floating point units. Since "normal" integer operations are 32-bit operations, as are "normal" floating point operations (these two points require additional clarification, but I won't have the time until this weekend), Bulldozer seems poised to do for non-SIMD instructions what SSE/AVX did/will do for SIMD operations.

    As for the OS, Windows is technically a hybrid OS, with the main parts being 32- or 64-bit and some APIs, like DirectX, being composed of a mixture of 32/64-bit instructions and 128-bit instructions (DirectX is thoroughly SSE optimized).

    As a general rule of thumb, the bigger an app is (i.e. 32-bit, 64-bit, etc.), the more RAM it needs and the bigger the installed footprint. If the OS were compiled for 128 bits, first, no desktop CPU other than Bulldozer (when it's released) would be able to run it, and second, the RAM requirements would exceed the max that most motherboards currently support (assuming most consumer boards support a max of 2-4 GB of RAM).

    There are some 128-bit CPUs and OSes currently available, but they are targeted at the HPC/high-end server market.
  10. HCenc author
    Join Date
    Dec 2006
    Location
    Netherlands
Originally Posted by deadrats
    As a general rule of thumb, the bigger an app is (i.e. 32-bit, 64-bit, etc.), the more RAM it needs and the bigger the installed footprint. If the OS were compiled for 128 bits, first, no desktop CPU other than Bulldozer (when it's released) would be able to run it, and second, the RAM requirements would exceed the max that most motherboards currently support (assuming most consumer boards support a max of 2-4 GB of RAM).
    In general there's no relation between the size of the application and the memory requirement; the application decides how much memory needs to be allocated for what it is supposed to do.
    Also, most consumer 1366-socket motherboards support 24 GB of DDR3 RAM.
  11. 64 bit addressing will probably be sufficient for the lifetime of most of us here discussing this (for desktop systems, maybe not HPC).
  12. Banned
    Join Date
    Nov 2005
    Location
    United States
    Originally Posted by hank315 View Post
In general there's no relation between the size of the application and the memory requirement; the application decides how much memory needs to be allocated for what it is supposed to do.
    Also, most consumer 1366-socket motherboards support 24 GB of DDR3 RAM.
A lot depends on the type of data declarations made within the app. If, for example, you were to take your encoder (assuming you used int and float declarations) and simply recompiled it with GCC as a 64-bit app, then yes, the executable would be bigger but the amount of RAM used would stay about the same. However, if you first went in and re-declared int as long long and float as double and then compiled it as a 64-bit app, you would see the RAM footprint increase along with the installed footprint. You can see the size of each data type here:

    http://msdn.microsoft.com/en-us/library/s3f49ktz%28VS.80%29.aspx

    http://softpixel.com/~cwright/programming/datatypes.c.php

This is very easy to verify by looking at any number of open source projects for which 32-bit and 64-bit versions exist; the Linux kernel is a great example. Run either version and notice the increased RAM usage (the same difference exists between 32-bit and 64-bit Windows), though the increased RAM usage is offset by fuller use of the CPU by virtue of the 64-bit registers coming into play.

Re: motherboards with 24 GB RAM support, I think I have seen all of two non-server motherboards in my life that support anywhere near that amount, and they both cost over $400. A quick walk through Micro Center shows that most consumer boards hover around 4 GB of RAM support, with the cheaper boards supporting 2 GB and the "high-end" enthusiast boards supporting 8 GB.
How much memory motherboards support depends on what you mean by "support". If you could get 8 GB DDR3 DIMMs, then many of those 3-DIMM-slot motherboards would support 24 GB. Of course, you can't get 8 GB DIMMs, so they don't currently support 24 GB.

If you're talking about what the current chipsets can address, most support 36- to 48-bit addressing (the rest of the address pins are tied to ground).
  14. Banned
    Join Date
    Nov 2005
    Location
    United States
    Originally Posted by jagabo View Post
How much memory motherboards support depends on what you mean by "support". If you could get 8 GB DDR3 DIMMs, then many of those 3-DIMM-slot motherboards would support 24 GB. Of course, you can't get 8 GB DIMMs, so they don't currently support 24 GB.

    If you're talking about what the current chipsets can address, most support 36- to 48-bit addressing (the rest of the address pins are tied to ground).
By "support" I mean that the motherboard comes equipped with 2-4 RAM slots and each slot will accept a max of 2 GB DIMMs. My current motherboard can only use a max DIMM size of 2 GB and has 4 slots, thus it can only support a total of 8 GB of RAM.

    Most of the 1366 motherboards I have seen support a max of either 6 GB or 12 GB of RAM, but the other boards I have seen max out at 8 GB if you're lucky.
  15. Member yoda313's Avatar
    Join Date
    Jun 2004
    Location
    The Animus
    Originally Posted by deadrats
Current CPUs are 64-bit CPUs that are capable of 128-bit operations through the use of SIMD instructions; all current SIMD instructions are 128-bit operations. All current CPUs execute a 128-bit SIMD instruction in a single cycle, and said instructions are capable of working on four 32-bit pieces of data, two 64-bit pieces of data, or one 128-bit piece of data per cycle.
SIMDs?

    So are these 64-bit CPUs pseudo-128-bit then? Not "full" 128-bit?
128-bit is not here because of economics, that's it.

    In the next 15 years or so 64-bit will be saturated, and then we will see 128-bit CPUs and 128 GB of RAM as standard.
  17. Banned
    Join Date
    Nov 2005
    Location
    United States
    Originally Posted by yoda313 View Post
    Originally Posted by deadrats
Current CPUs are 64-bit CPUs that are capable of 128-bit operations through the use of SIMD instructions; all current SIMD instructions are 128-bit operations. All current CPUs execute a 128-bit SIMD instruction in a single cycle, and said instructions are capable of working on four 32-bit pieces of data, two 64-bit pieces of data, or one 128-bit piece of data per cycle.
    SIMDS?

    So are these 64bit cpu's seudo 128bit then? Not "full" 128bit?
SIMD stands for Single Instruction, Multiple Data; it refers to any number of specialized instruction sets that allow a programmer to manipulate more than one piece of data with a single instruction. Examples include AVX, AltiVec, MMX, and SSE. This contrasts with non-SIMD instructions, which require a single instruction per piece of data (under normal conditions).

    All current desktop CPUs are hybrid 32/64/128-bit architectures. A true 64-bit CPU would not be able to run 32-bit code (I believe the old Alphas fit this category); likewise, a true 128-bit CPU would require 128-bit code and wouldn't run anything else.

    For the desktop, it would be silly to make a fully 64-bit or 128-bit CPU, as that would prevent legacy code from running.
There is really no need for 128-bit CPUs just yet. Hardware is way ahead of what today's software is capable of (for the most part, specifically in games). In gaming you will see that even with beefy CPUs, framerates and performance depend mostly on the GPU (video card) and the RAM. Years ago, physics calculations were done on the CPU, and many applications still rely on this method. However, as more complex algorithms allow for more efficient physics calculations, and as more GPUs allow computation and physics operations to be done on the GPU itself, the need for ever more powerful processors has diminished.

The bus width on today's GPUs (specifically the Radeon HD and GeForce platforms) commonly exceeds 128 bits and can be as high as 384 bits for a single card. Dual-GPU setups and dual-chip video cards allow for combined bus widths upwards of 512 bits. Now, you may say that bus width and bit-handling capability are two different things, and that would be true if I were comparing CPUs to GPUs, but I am comparing GPU buses to GPU cores: the bus lets data 128 or more bits wide cross onto the GPU at a time. A GPU is also unlike a CPU in that it doesn't have 2, 4, or 6 general-purpose cores; it has a huge array of shader units (historically split into vertex, geometry, and pixel shaders, now unified). Take the Radeon HD 6970 for example: it has 1536 unified shader (stream) processors, plus 96 texture units and 32 ROPs, all running at 880 MHz at stock speeds, while the memory runs at 1375 MHz. With that many units running at even 880 MHz, calculations can be done far faster and far more data can be processed at once, making the scalability that much higher compared to a regular CPU such as the Sandy Bridge chips, or even the i7-990X (to an extent).

By no means are CPUs going to become obsolete in the near future. While shader cores are good at processing media, hard data such as statistics, estimations, and data routing (from the content source to its processing sector: graphics processed on the GPU, sound through the sound processor, etc.) are still the strong point of the modern CPU. Therefore, the need for consumer 128-bit CPUs depends on the availability of consumer content that will scale to 128 bits, of which there is next to none.
  19. aBigMeanie aedipuss's Avatar
    Join Date
    Oct 2005
    Location
    666th portal
    Originally Posted by deadrats View Post
    Originally Posted by jagabo View Post
How much memory motherboards support depends on what you mean by "support". If you could get 8 GB DDR3 DIMMs, then many of those 3-DIMM-slot motherboards would support 24 GB. Of course, you can't get 8 GB DIMMs, so they don't currently support 24 GB.

    If you're talking about what the current chipsets can address, most support 36- to 48-bit addressing (the rest of the address pins are tied to ground).
    By "support" I mean that the motherboard comes equipped with 2-4 RAM slots and each slot will accept a max of 2 GB DIMMs. My current motherboard can only use a max DIMM size of 2 GB and has 4 slots, thus it can only support a total of 8 GB of RAM.

    Most of the 1366 motherboards I have seen support a max of either 6 GB or 12 GB of RAM, but the other boards I have seen max out at 8 GB if you're lucky.
Intel P67 and Q67 boards support 32 GB of RAM. I have 16 GB in my P67.
    --
    "a lot of people are better dead" - prisoner KSC2-303
  20. Member Cornucopia's Avatar
    Join Date
    Oct 2001
    Location
    Deep in the Heart of Texas
I think you're putting the cart (hardware) before the horse (applications). The need for bigger and bigger hardware capabilities is driven by the need for IN-DEMAND FILES and APPLICATIONS to be operated on in a "timely" manner (aka "split-second").

    Computers have had 128-bit CPUs for a long time, just not MICROCHIP/DESKTOP CPUs.

    Think back:
    In the (late) '70s, the jump from 4-bit to 8-bit micros came with the first personal computers and standard apps like word processing, spreadsheets, etc.
    In the '80s, the jump from 8 to 16 bits came with the IBM PC and the need for desktop publishing, graphics/photo apps, and GUIs.
    In the '90s, the jump from 16 to 32 bits came with the rise of multimedia apps (audio, video, etc.), the Internet, and expanded multitasking.

    The thing is that there isn't a current or near-future consumer app or file type that necessitates the HUGE contiguous blocks provided by the jump from 32 to 64 bits, nothing that isn't ALREADY covered by 32-bit chips.
    Maybe HD/2k/4k/Stereo3D video, maybe voice recognition/handwriting recognition/AI UI stuff, maybe something dealing with extremely complex visual or physics modelling (that isn't already done on GPUs). But a lot of that can still be PAGED in and out of memory, so the impetus isn't there yet.

    And as was mentioned above about the SIMD/non-SIMD pipelines and GPUs, it doesn't all have to be done the same monolithic way, so the burden is off the CPU for much of that.

    When you see an app or paradigm that REQUIRES huge bit capability, then you'll know it's time for consumers to make the shift to 64 bits (fully) or beyond...

    Scott