...and man, does it look like AMD will be in a perfect position with Bulldozer/Bobcat:

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3674

A short while ago, in another post, I made a number of predictions, and one poster decided to take me to task; evidently he thought my predictions were too outrageous to ever come to pass. One of my predictions was that I expected the hybrid FPU/SSE unit that CPUs have now to go the way of the dodo, and that all floating point operations would be handled by the integrated GPU. To me it was an obvious prediction. Well, it seems that I was correct:

"Doubling the integer resources but not the FP resources works even better when you look at AMD’s whole motivation behind Fusion. Much heavy FP work is expected to be moved to the GPU anyway, there’s little sense in duplicating FP hardware on the Bulldozer core when it has a fully capable GPU sitting on the same piece of silicon. Presumably the Bulldozer cores and the GPU will share the L3 cache. It’s really a very elegant design and the basis for what AMD, Intel and NVIDIA have been talking about for years now. The CPU will do what it does best while the GPU does what it is good at."

Save up your pennies, boys and girls. Though still a year away, Bulldozer looks to be the same type of performance jump for AMD that Conroe was for Intel.

The only slightly bad thing is that it doesn't look like OpenCL will live up to its promise. For those that don't know, OpenCL is a framework that allows applications to be coded/compiled to run across a variety of different processors simultaneously. There's already an ffmpeg-based OS X app that runs on both the CPU and GPU for MPEG-2 encoding and adds 40 fps to encoding speed over CPU-only encoding. But once CPUs hand off the heavy floating point calculations to integrated GPUs, it will be redundant to write software to do the same thing.
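For those curious what OpenCL actually looks like from the host side, here's a minimal sketch in C (not from the article, just an illustration) that asks the first available OpenCL platform for every device it exposes, CPUs and GPUs alike. On OS X you'd include <OpenCL/opencl.h> and link against the OpenCL framework; elsewhere it's typically <CL/cl.h> with -lOpenCL. Device names and counts will obviously vary by machine.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_uint num_platforms = 0;

    /* Grab the first OpenCL platform (e.g. an AMD or NVIDIA driver stack). */
    if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }

    /* Ask for every device type the platform exposes: CPUs, GPUs, whatever is there. */
    cl_device_id devices[8];
    cl_uint num_devices = 0;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &num_devices) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL devices found\n");
        return 1;
    }

    /* Print each device's name and whether it is a CPU or a GPU. */
    for (cl_uint i = 0; i < num_devices; i++) {
        char name[256];
        cl_device_type type;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
        printf("Device %u: %s (%s)\n", i, name,
               (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
               (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
    }
    return 0;
}

The point being that one API hands you the CPU and the GPU through the same interface, which is exactly the kind of thing an on-die GPU already handling the FP work would make redundant for a lot of apps.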

It also looks like Nvidia's CUDA will likely face a similar fate, as will Microsoft's DX Compute.

Personally, I think it's for the best; I'm not a big fan of being beholden to any one company's proprietary software technology. Instead of having to buy an Nvidia card to take advantage of a CUDA application, or having to upgrade to Win7 to take advantage of DX11's DX Compute capabilities, a guy could stick with XP. A developer could use ANSI standard C, C++, VB, Pascal, Fortran, Java, whatever he wants, without having to worry about learning proprietary coding techniques and without limiting his software to a target audience on a particular platform, and the end user would still get the performance he desires.

Can't wait for 2011...