for a while now i have said that clarkdale will be the chip to get and build a system around, and based on this intel preview it looks like it will be even better than i was expecting:
http://www.legitreviews.com/article/1091/1/
here are a couple of quotes from the article that should convince most people that clarkdale will be awesome:
Just days ago AMD got to say that they had the World's first graphics card that supported bitstreaming of high definition Blu-ray audio codecs. We touched on this in our ATI Radeon HD 5870 review in case you missed it. The new Clarkdale processor's integrated graphics core supports protected audio paths and can properly handle the bitstreaming of Dolby TrueHD and DTS-HD Master Audio. The demo above was showing 8-channel LPCM output, but they also showed Dolby TrueHD pass-through.
aside from the fact that the integrated gpu supports the above mentioned audio features, and the ridiculously low power consumption, a look at the benchmarks shows that these chips will kick ass: under SPEC fp_rate 2006 the clarkdale scores a performance rating of 1.48 versus 1.26 for the Q9400 (both normalized to an E8500), and under pcmark vantage it smacks both the E8500 and the Q9400 around.
The next demo that Intel showed us was a Clarkdale HTPC system that was using a Dell LCD. The point of this demo was that the new Clarkdale system consumes less power at idle than the monitor did.
At idle the system's power consumption was under 27.9 Watts. At 100% CPU load, thanks to Cinebench R10 being run, the entire system consumed just 69.9 Watts. The LCD alone consumed 40.2 Watts as you can see in the image above.
couple that with a rumored list price of $130 for the 3.46GHz chip (a faster version than the one previewed) and i know i will be picking one of these up when they finally hit the market later this year.
-
Not that I'm suspicious of Intel, but:
This motherboard is fully HDCP compliant, so it should be ideal for Blu-Ray playback or for any other protected content.
Or am I just being paranoid?
-
Originally Posted by redwudz
http://en.wikipedia.org/wiki/High-bandwidth_Digital_Content_Protection
http://arstechnica.com/old/content/2005/08/hdcp-vista.ars
now technically you are correct: HDCP is DRM sponsored by Hollywood and it does require hardware and software support to work, but its main purpose is to close the so-called "analog loophole". if your display, graphics card and blu-ray player aren't all HDCP compliant then you won't be able to play protected blu-rays on your pc.
so yes, the integrated gpu is HDCP compliant, but if it wasn't then people would be screaming bloody murder when they tried to view a blu-ray on the brand new computer they just bought/built and found that they couldn't watch the movie in full HD (more than likely the software player would downscale it to 480p).
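to put that in plain terms, the playback decision boils down to an all-or-nothing check on the whole chain. the little C sketch below is purely illustrative (the struct and function names are mine, and there is no real HDCP api that looks like this), it just models the policy described above:

#include <stdio.h>

/* hypothetical model of the policy: every link in the chain must be
   HDCP capable, otherwise the player constrains the output */
struct playback_chain {
    int player_hdcp;    /* software player supports the protected path */
    int gpu_hdcp;       /* graphics output is HDCP compliant           */
    int display_hdcp;   /* monitor accepts the HDCP handshake          */
};

static const char *allowed_output(const struct playback_chain *c)
{
    if (c->player_hdcp && c->gpu_hdcp && c->display_hdcp)
        return "full HD over the protected path";
    return "constrained image (roughly 480p)";
}

int main(void)
{
    struct playback_chain pc = { 1, 1, 0 };   /* non-HDCP monitor */
    printf("%s\n", allowed_output(&pc));      /* prints the constrained case */
    return 0;
}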
btw, for a piece of hardware to be certified DX11 compliant it has to be HDCP compliant (it's part of the spec, and in fact i think it's part of the DX10 spec as well), so chances are you already have some HDCP compliant hardware in your computer and just don't know it. well, you do now. -
1 word "AnyDVD"
AnyDVD HD removes the "need" for HDCP compliancy
ps. Did you know HDCP doesn't "allow" cloned monitors"? lol
ocgw
peace
i7 2700K @ 4.4GHz | 16GB DDR3 1600 | Samsung Pro 840 128GB | Seagate 2TB HDD | EVGA GTX 650
https://forum.videohelp.com/topic368691.html -
I don't know how I feel about an integrated GPU on the CPU. I'd rather see video cards go away from being packaged, proprietary platforms and instead go to a sort of riser card that has a socket on it to allow swapping GPUs and VRAM. This market is being directed by the hardware manufacturers now and not the consumers. SLi and CF were big losses for the consumer in this respect. And they're still marketing multi-core systems when most people barely utilize a second core. Whatever, let the sheeple have what they think they need I guess.
FB-DIMM are the real cause of global warming -
Originally Posted by rallynavvie
if you remember, there was a time when the floating point unit was a separate chip called the x87 co-processor, and you could mix and match which cpu you wanted with which co-processor. all floating point operations were handled by the x87 unit, and if you needed better performance in a floating point intensive app you simply swapped in a faster co-processor.
then intel integrated the co-processor onto the same package as the cpu, but they were still 2 distinct entities. this had 2 consequences, one "negative" and one positive; the negative was that you could no longer just upgrade the x87 chip if you needed faster fp performance, you had to upgrade the whole cpu, but the positive far outweighed the negative: the fp performance increase of the new design was significant.
the next generation of cpu's took the natural evolutionary step: the x87 chip became the floating point unit, fully integrated into the cpu, so it was now just another part of the cpu like the alu (arithmetic logic unit).
as more floating point and integer performance was needed, the vector processing unit made an appearance and allowed the use of special 128 bit instructions capable of operating on multiple pieces of data simultaneously; most people simply refer to this as SSE and friends.
starting with the P4, the floating point unit and the vector processing unit were integrated into one unit, and all the big players in the computing industry started pushing developers to use SSE instructions rather than the outdated x87 instructions (there has been, and continues to be, a lot of resistance from developers to doing this, primarily because they are lazy bastards).
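to make the "multiple pieces of data" point concrete, here's a minimal C sketch (my own illustration, nothing from the article, and the function names are made up): the same loop written the old scalar way, which compilers historically lowered to x87 code, and then with SSE intrinsics, where one instruction adds four floats at a time.

#include <xmmintrin.h>   /* SSE intrinsics, available since the Pentium III */

/* plain scalar loop: compilers historically emitted x87 code for this */
void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* SSE version: each _mm_add_ps adds four packed floats in one instruction */
void add_sse(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);             /* load 4 floats */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));  /* add and store 4 at once */
    }
    for (; i < n; i++)                               /* leftover elements */
        out[i] = a[i] + b[i];
}

that four-at-a-time throughput is exactly the win the big players have been pushing developers toward.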
we are now seeing the same thing happen with gpu's: first clarkdale will integrate a DX11 class gpu onto the same package as the cpu, then both intel and amd have announced that their next generation chips will feature a gpu fully integrated onto the cpu die. what you are going to see is floating point operations offloaded from the existing fp/sse unit to the gpu for much faster performance, but it gets even better: DX11 compliance for a gpu requires that it also be capable of hardware AI, and AI is heavily dependent on branched code.
i will make the following predictions right now:
1) there will be no SSE5. intel has already announced that clarkdale will support hardware encryption/decryption via a special instruction set (see the sketch after this list for what using it looks like), but no SSE evolution, not even just a faster unit. what will happen with sandy bridge and its descendants, as i said, is that all x87/sse instructions will be translated to assembler understood by the integrated gpu and executed by that unit at 3+ times the speed they run at now.
2) we will not see more than 6 core cpu's, at least not on the desktop, and in fact i would be surprised if even hexa-core cpu's get more than 15 minutes of fame. the next generation of cpu's will be at most 4 cores, and if more performance is required they will just start adding graphics cores (i really think the top of the line models will be 2 cores, 2 threads per core and 2 gpu's, which by then won't be called "gpu's" anymore).
3) the discrete graphics card will cease to exist as we know it. with the industry moving toward less proprietary hardware, more homogeneous systems, OpenCL, and greater interoperability, i think the days of companies like nvidia are numbered.
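a quick footnote to prediction 1: the "special instruction set" is AES-NI, which is what the Westmere-based clarkdale parts ended up shipping with. here's a minimal C sketch of how it gets used; the helper name is mine and the round keys are assumed to have been expanded already, so treat it as an illustration rather than a complete implementation:

#include <emmintrin.h>   /* SSE2 load/store/xor intrinsics */
#include <wmmintrin.h>   /* AES-NI intrinsics, compile with -maes */

/* encrypt one 16-byte block with AES-128; the caller supplies the 11
   expanded round keys (the key schedule, normally built with
   _mm_aeskeygenassist_si128, is omitted here for brevity) */
void aes128_encrypt_block(const unsigned char in[16], unsigned char out[16],
                          const __m128i rk[11])
{
    __m128i block = _mm_loadu_si128((const __m128i *)in);
    block = _mm_xor_si128(block, rk[0]);             /* initial AddRoundKey */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);      /* rounds 1 through 9 */
    block = _mm_aesenclast_si128(block, rk[10]);     /* final round */
    _mm_storeu_si128((__m128i *)out, block);
}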
for those who think i'm out of my mind and that cpu's will just keep getting more and more cores, with software simply becoming more and more threaded to take advantage of said cores, just look at one of the fastest supercomputers in existence:
http://en.wikipedia.org/wiki/IBM_Roadrunner
it's made up of dual core opterons that handle I/O and Cell processors that handle the actual calculations. with CUDA gaining traction, the promise of OpenCL, and the availability of Cell based add-in cards in the form of the SpursEngine, developers could remove intel's and amd's high end products from the equation if they simply adopted the OpenCL api and wrote their apps with it as their target.
the only way to stop such a thing is if intel and amd beat everyone to the punch and release products that remove the incentive for developers to start seriously looking at OpenCL for a speed boost.
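just to show that writing an app with OpenCL as its target isn't some exotic undertaking, here's a minimal OpenCL 1.0-era host program in C. it's my own sketch (kernel and variable names made up, error checking and cleanup stripped for brevity), not anything from the article:

/* minimal OpenCL vector add in C: link with -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* CL_DEVICE_TYPE_DEFAULT picks whatever the vendor exposes:
       a cpu, a discrete gpu or an integrated gpu */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %f\n", c[10]);   /* expect 30.0 */
    return 0;                        /* clRelease* cleanup omitted */
}

swap CL_DEVICE_TYPE_DEFAULT for CL_DEVICE_TYPE_GPU or CL_DEVICE_TYPE_CPU and nothing else changes, which is the whole appeal of the api.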
clarkdale is the first step in that direction. -
Alternatively:
http://www.nvidia.com/object/tesla_computing_solutions.html
FB-DIMM are the real cause of global warming -
Originally Posted by rallynavvie
but as great as i think CUDA is, and could potentially be, i don't think intel is going to give nvidia the chance to make the kind of headway into the desktop computing arena that nvidia would like. between cpu's with integrated gpu's and larrabee (both of which will use the existing x86 instruction set), OpenCL (which would allow developers to spread execution of their apps across a cpu/larrabee combo with little effort), and the move towards using ray tracing for 3d graphics, i give nvidia a year before their significance on the desktop starts to erode, 18 months before they start having serious financial difficulties, 2 years before they file for bankruptcy and less than 3 years before they go the way of 3DFX.
bookmark this post: by 2012 nvidia will have closed its doors. and i will take it one step further: DX11 will be the last api of any significance from microsoft. -
Originally Posted by deadrats
You seem to have one excellent crystal ball. Could the plans and actions of others, or other unknown factors, play into your predictions and cause Intel not to achieve the domination that you speak of? Could nVidia and/or Microsoft weather this great storm in some way, shape or form and not fall to the demise that you speak of?
The technology industry has had so many anal(ists) over the years make so many great predictions of dominance and failure that no one close to the industry even gives it a second glance. Check out AST Computer. Their stock symbol is... -
Originally Posted by Video Head
as for my predictions: a few years ago i did a couple of reviews for a now defunct site, and i remember reviewing an ati video card (i think it was the 9600 pro), testing ati's then-new avivo encoder and finding that it encoded 10 times faster than a pentium d 2.8 ghz (dual core). at the time i made the prediction that intel and amd would conspire to put ati out of business, primarily because if avivo took off it would effectively eliminate one of the killer apps driving the need for faster and faster cpu's: video transcoding.
during that review i also took people on a trip down memory lane, reminding them that 3dfx once hurt intel's cpu sales by releasing the first 3d graphics accelerator (prior to the voodoo, if you wanted faster gaming the only option was to buy a faster cpu) and that intel effectively put the final nail in 3dfx's coffin by not supporting the voodoo 5 (and the never-released 6) on intel chipsets for the P4. by the time VIA released their P4 chipsets, 3dfx was gone.
people posted to tell me i was out of my mind, that ati wasn't going anywhere; one person posted that my predictions gave him a chill because he had been a big fan of 3dfx (and glide) and had been shocked by how fast 3dfx went under.
less than a year after i did that review AMD had bought all of ATI's assets and for all practical purposes ceased all work on AVIVO and all graphics editing capabilities of ATI's cards.
if you recall, ATI had hardware encoding capabilities as far back as the 9700 PRO AIW (all-in-wonder) using what they termed the COBRA engine:
http://www.anandtech.com/showdoc.aspx?i=1740&p=2
that was 7 YEARS AGO that ATI had consumer level hardware mpeg-2 encoding and hardware accelerated video editing up and running, yet here we are with only a few hardware accelerated apps that barely scratch the surface of what our graphics cards are capable of. as soon as AMD bought ATI, that was the end of the All-In-Wonder line; its AVIVO encoding app is little more than a sophomoric class-project level encoder, and we are still dependent on AMD and INTEL releasing faster cpu's that require new motherboards and ram.
and lastly, don't forget that INTEL recently had a very public dispute with NVIDIA because INTEL refused to allow NVIDIA to manufacture and sell chipsets for its BLOOMFIELD line of cpu's; NVIDIA claimed its license for making chipsets for previous architectures extended to the CORE i7, INTEL said the license did not extend to cpu's with integrated memory controllers, and i'm not sure the dispute has been resolved yet. -
Originally Posted by deadrats
So, one has to ask: with all your wisdom, experience and insight, why are you not on Wall Street as a senior technology analyst? -
Originally Posted by Video Head
as for why i am not on Wall Street, the answer is twofold:
1) i made a number of really stupid choices in my life, the first being following my dream of being a physicist. when i went to college, instead of majoring in a more marketable, and quite frankly easier, discipline, i decided to major in physics; then i decided to change my major to computer science, and before i could finish my degree i decided to drop out and get certified as a Unix System Administrator, which i did. the idea was that i would get a job in the IT field, save up some money, go back to school (maybe have the company that hired me pay for college) and finish my 2 degrees. unfortunately 9/11 hit shortly after i received my Unix cert and i was never able to break into the IT field. i ended up becoming an exterminator, which i still do to this day (hence the screen name).
2) even if the above events had not unfolded as they did, i have a deep philosophical problem with capitalism as it currently exists (which is ironic considering i owned my own business for 5 years and currently work for a large company), so it is highly unlikely that you would ever have found me working on Wall Street.
any other questions? -
Originally Posted by deadrats
-
Originally Posted by Video Head
We are sitting down at the NVIDIA GPU Conference in San Jose, California (Live Webcast is linked on this page.) listening to Senior Vice President Dan Vivoli talk about how far reaching the word of GPU computing is going to be in the future. But I can't help but wonder about issues that are a bit more evident to the hardware enthusiast and gamer. Where is NVIDIA's next generation technology for the gamer? What is NVIDIA's answer to ATI Eyefinity technology? Why does NVIDIA detect AMD GPUs in Batman: AA and turn off AntiAliasing? Why do new NVIDIA drivers punish AMD GPU owners who want to leverage an NVIDIA card to compute PhysX? Hmmm.
Most interesting is that NVIDIA is showing off some demos with incredible fidelity, namely a Bugatti Veyron, that cannot be distinguished from an actual photograph. Sadly though, it does take about 18 seconds to render a single frame using ray tracing, and most disappointing is that this is being demonstrated on the currently available retail GPUs. No next generation is being shown off at NVIDIA's biggest event of the year. That said, the tech used to render the car is incredibly impressive and we remember that not very long ago it would take a bank of computers hours to do this.
Jensen Huang does make some incredibly efficient points about parallel computation possibly using a GPU as a co-processor though. There is no doubt in my mind that GPUs will find a huge place in our economy as a needed component, but all this makes me think that NVIDIA is on the way out as a gaming company and on the way in as a "CPU" company.
lastly, if i were somehow "involved" and wished to "sway people to my advantage", i think i could find a better platform than the videohelp forum (where fewer than 100 people have seen this thread) to further my nefarious goals, don't you think?
edit: i just had to add this quote from nvidia's vice president of product marketing:
Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "******* hard".
these guys are toast.