VideoHelp Forum

  1. deadrats (Banned, Nov 2005, United States)
    i ran across this article:

    http://www.legitreviews.com/article/978/1/

    which showed a $100 video card encoding video faster than a $1000 cpu (a 9800 gtx+ vs an i7 965), so i decided to download the trial version of media show espresso and try it out for myself. while looking around cyberlink's site i noticed that the latest version of power director 7 also supports gpu encoding and downloaded that trial as well:

    http://www.cyberlink.com/products/powerdirector/overview_en_US.html?affid=2581_490_479...paign=Homepage

    http://www.cyberlink.com/products/mediashow-espresso/overview_en_US.html

    media show espresso is a nice piece of software but it is purposely castrated in options because at $40 cyberlink is not about to give us the same encoding options that the $80 power director offers.

    looking at the legit reviews article as well as cyberlink's web site, it seems obvious that cyberlink pulled out all the stops in optimizing their software for both the i7 architecture and nvidia's gpu's.

    my own tests using this software gave very impressive results. starting with a 2 hour (2 hours, 6 seconds to be exact) 1280 x 720 adult movie (latin sinsations 2009, great movie if anyone is interested), i converted it to a 720 x 480 dvd-compliant mpeg-2.

    the power director software offers the option of enabling/disabling gpu acceleration, so a direct apples-to-apples comparison is possible. using just the cpu (an e7400 at stock 2.66ghz running on a 1066 mhz fsb) it completed the encode in a touch under 1hr 25min - faster than real time, very impressive.

    activating gpu acceleration dropped the encode time to 1hr 10min, an impressive 15 minutes faster, which is even more impressive considering that the video card used is a 9400gt with 512mb ddr2. the 9400gt only has 16 stream processors and cost me just $50. considering that encoding performance with nvidia gpu's is said to increase almost linearly with the number of stream processors, and considering a 9800gtx+ with 128 stream processors was able to convincingly beat an i7 965, i think it's obvious that rather than spending $220 for a core i7 920, $200 for a motherboard that supports it and another $150 on ddr3, it makes a bunch more sense to spend less than $100 on a 9800gt with 112 stream processors and smoke an i7 920 setup, let alone my e7400.
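    if you want to see how many stream processors your own card has, here's a minimal sketch of mine using nvidia's cuda runtime api (it assumes you have the cuda toolkit installed, and the 8 stream processors per multiprocessor figure only holds for compute capability 1.x era cards like the 9400gt and 9800gt):

    Code:
    // minimal sketch: count the stream processors on a cuda-capable card
    // assumes the cuda toolkit is installed; compile with: nvcc sp_count.cu -o sp_count
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int n = 0;
        if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
            printf("no cuda-capable gpu found\n");
            return 1;
        }
        for (int i = 0; i < n; ++i) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            // 8 stream processors per multiprocessor is specific to compute capability 1.x
            printf("%s: %d multiprocessors, ~%d stream processors\n",
                   p.name, p.multiProcessorCount, p.multiProcessorCount * 8);
        }
        return 0;
    }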

    as for image quality, i could see no difference between the results from the cpu encode and the gpu encode, so i personally think intel should start worrying, or bring cpu's with integrated gpu's to market soon, which i expect them to do fairly quickly.

    until then, it looks like gpu acceleration will continue gaining traction in the marketplace. i highly recommend you guys download the trial for power director and give it a shot: with gpu accelerated mpeg-2 and avc encoding, as well as gpu accelerated effects and filters, it's everything tmpg express should be and quite frankly sets the bar by which other consumer grade video apps should be judged.

    and no, i don't own stock in cyberlink.

    edit:

    just one update: power director (and espresso) will open any file supported by ffdshow. all you have to do is type the first couple of letters of the file's name into the window that opens up when browsing for a source file, and when windows shows you a list, choose the file. in other words, certain file types like .mkv are not officially supported but work just fine as source files.
  2. rallynavvie (contrarian, Sep 2002, Minnesotan in Texas)
    It isn't just video encoding; you can do a lot of stuff with GPUs. I've been following advances in nVidia's CUDA programming and have played with a facial recognition API that uses nVidia GPUs. These applications will start moving into the consumer markets as the year goes on. I'm hoping to see some well-honed implementations out by the holidays.
  3. deadrats (Banned, Nov 2005, United States)
    Originally Posted by rallynavvie
    It isn't just video encoding; you can do a lot of stuff with GPUs. I've been following advances in nVidia's CUDA programming and have played with a facial recognition API that uses nVidia GPUs. These applications will start moving into the consumer markets as the year goes on. I'm hoping to see some well-honed implementations out by the holidays.
    i know. in all honesty gpu's have advanced to the point where they can do almost anything a cpu can do, so long as the programmer knows what he's doing. it's just that up until now gpu acceleration has been confined to the high end scientific and business markets, with some headway made into the distributed computing (like folding at home) and encryption segments, but now it's becoming more and more mainstream.

    eventually i figure the gpu will replace the fp/sse unit, after it's integrated within the cpu that is.
  4. SatStorm (The Old One, Aug 2000, Hellas (Greece), E.U.)
    Well, my experience with the current GPU solutions is not positive so far. Too many crashes. But, eventually, those alternatives are gonna improve.

    I believe Nvidia is gonna push those solutions more than ATI.
  5. ocgw (Member, Feb 2009, United States)
    Originally Posted by SatStorm
    Well, my experience with the current GPU solutions is not positive so far. Too many crashes. But, eventually, those alternatives are gonna improve.

    I believe Nvidia is gonna push those solutions more than ATI.
    On what kinda' card? Was this temp related? Maybe a gaming card w/ an over-sized cooler is needed

    ocgw

    peace
  6. deadrats (Banned, Nov 2005, United States)
    Originally Posted by SatStorm
    Well, my experience with the current GPU solutions is not positive so far. Too many crashes. But, eventually, those alternatives are gonna improve.

    I believe Nvidia is gonna push those solutions more than ATI.
    sounds like a driver issue to me, my card, a sparkle 9400gt 512mb fanless card, is as stable as they come, no matter how much gaming or encoding i put it through.

    are you using an ati or nvidia based card?
  7. DereX888 (Banned, Aug 2002, beautiful)
    Money-wise it is better to spend $ on a better CPU than on an over-priced GPU that won't do half of what a better/faster CPU can...
    To me it is as pointless as the need for a gazillion gigabytes of memory just to start up Vista and read 10-kilobyte emails on it
  8. deadrats (Banned, Nov 2005, United States)
    Originally Posted by DereX888
    Money-wise it is better to spend $ on a better CPU than on an over-priced GPU that won't do half of what a better/faster CPU can...
    huh?!?!? how is a video card that costs $100 "over-priced", and how is a cpu "better/faster", when the benchmarks show that a 9800gtx+ is faster using cyberlink's software than the fastest cpu available today, the i7 965?

    and don't forget that along with that $1000 cpu you also need a $200-$500 motherboard, about $150 worth of ram not to mention a top of the line power supply, another couple of hundred bucks.

    Originally Posted by DereX888
    To me it is as pointless as the need for a gazillion gigabytes of memory just to start up Vista and read 10-kilobyte emails on it
    what does vista have to do with the topic at hand, and how exactly does your (inaccurate) claim of it needing a "gazillion gigabytes of memory just to start up" prove that a cpu is "better/faster" than a gpu?

    seriously, were you drunk when you posted your comments, just trolling, or new to the english language and didn't understand what you read?

    i'd really like to know.
  9. Member (Oct 2004, United States)
    Originally Posted by SatStorm
    Well, my experience with the current GPU solutions is not positive so far. Too many crashes. But, eventually, those alternatives are gonna improve.

    I believe Nvidia is gonna push those solutions more than ATI.
    Nvidia has an investment program to invest in start-up companies that are developing GPU-based product solutions.

    http://www.nvidia.com/object/gpu_ventures_program.html
  10. DereX888 (Banned, Aug 2002, beautiful)
    Originally Posted by deadrats
    Originally Posted by DereX888
    Money-wise it is better to spend $ on a better CPU than on an over-priced GPU that won't do half of what a better/faster CPU can...
    huh?!?!? how is a video card that costs $100 "over-priced", and how is a cpu "better/faster", when the benchmarks show that a 9800gtx+ is faster using cyberlink's software than the fastest cpu available today, the i7 965?

    and don't forget that along with that $1000 cpu you also need a $200-$500 motherboard, about $150 worth of ram not to mention a top of the line power supply, another couple of hundred bucks.
    I'll type slower for you:
    buying a $100 more expensive CPU is a better option than investing the same $100 in a GPU to do the job of the CPU.
    There
    If you still need it spelled out you may try Microsoft Sam...

    Originally Posted by deadrats
    Originally Posted by DereX888
    To me it is as pointless as the need for a gazillion gigabytes of memory just to start up Vista and read 10-kilobyte emails on it
    what does vista have to do with the topic at hand, and how exactly does your (inaccurate) claim of it needing a "gazillion gigabytes of memory just to start up" prove that a cpu is "better/faster" than a gpu?

    seriously, were you drunk when you posted your comments, just trolling, or new to the english language and didn't understand what you read?

    i'd really like to know.
    Can't help ya with that, sorry.
    Apparently reading and understanding a single sentence is too much of a task for you.
    I suggest asking your parents or paying someone to teach you that...
    (for their convenience I already highlighted and underscored the key "ingredient" of the sentence you probably have trouble comprehending)
  11. ocgw (Member, Feb 2009, United States)
    I am interested in what a couple of 4870's in crossfire can do w/ this Power Director program

    ocgw

    peace
  12. rallynavvie (contrarian, Sep 2002, Minnesotan in Texas)
    Originally Posted by DereX888
    Money-wise it is better to spend $ on a better CPU than on an over-priced GPU that won't do half of what a better/faster CPU can...
    To me it is as pointless as the need for a gazillion gigabytes of memory just to start up Vista and read 10-kilobyte emails on it
    While I completely agree with the last half of your statement, I can only partially agree on the first. As more and more developers start tapping into the power of GPUs, the GPU will essentially become a specialized, extra CPU "core". Due to the differing architecture it's hard to make comparisons, and the technology is still developing, so right now you're right about spending more on the CPU, since more software can take advantage of that. However, if you're a bit of a tinkerer like myself then it's good to have both, though my CPUs are really only one step above entry-level.
  13. SatStorm (The Old One, Aug 2000, Hellas (Greece), E.U.)
    My experience so far is with ATI cards, which are behind on this matter.
    I tested an nvidia 6800 AGP and it also crashed - probably not a good sample for testing those solutions.

    The GPUs are gonna be a great help with video filters. Neatvideo and Deemon's videoenhancer are 2 programs that come first to my mind.
  14. lordsmurf (Video Restorer, Jun 2003, dFAQ.us/lordsmurf)
    ATI was ahead 10 years ago ... sad. AMD screwed it up.
  15. deadrats (Banned, Nov 2005, United States)
    Originally Posted by SatStorm
    My experience so far is with ATI cards, which are behind on this matter.
    I tested an nvidia 6800 AGP and it also crashed - probably not a good sample for testing those solutions.
    that explains it: ati cards do trail at the moment in gpu acceleration, and the 6800 does not support cuda; you need a g80 or greater based gpu, i.e. an 8xxx series or better.
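    a quick way to check is to ask the cuda runtime itself; here's a minimal sketch (again assuming the cuda toolkit is installed; a pre-g80 card like the 6800 simply won't be enumerated at all):

    Code:
    // quick check for cuda support; pre-g80 cards (like a 6800) won't show up here
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int n = 0;
        if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
            printf("no cuda support (pre-g80 card or missing driver)\n");
            return 1;
        }
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, 0);
        // g80 (the 8xxx series) is compute capability 1.0
        printf("%s: compute capability %d.%d\n", p.name, p.major, p.minor);
        return 0;
    }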
  16. DereX888 (Banned, Aug 2002, beautiful)
    Originally Posted by rallynavvie
    Originally Posted by DereX888
    Money-wise it is better to spend $ on a better CPU than on an over-priced GPU that won't do half of what a better/faster CPU can...
    To me it is as pointless as the need for a gazillion gigabytes of memory just to start up Vista and read 10-kilobyte emails on it
    While I completely agree with the last half of your statement, I can only partially agree on the first. As more and more developers start tapping into the power of GPUs, the GPU will essentially become a specialized, extra CPU "core". Due to the differing architecture it's hard to make comparisons, and the technology is still developing, so right now you're right about spending more on the CPU, since more software can take advantage of that. However, if you're a bit of a tinkerer like myself then it's good to have both, though my CPUs are really only one step above entry-level.
    *When* it becomes practical/functional - sure!
    But not at this moment

    I don't pay to be a beta-tester. I expect to be paid for it.
    If nVidia/ATI/etc will PAY ME to beta-test their GPU co-processors then, perhaps, I would do that (depends on how much they'd pay).
  17. edDV (Member, Mar 2004, Northern California, USA)
    No hurry about all this GPU assist. It was first discussed ten years ago and may offer real support in another year or two as applications join in. For the consumer this is bleeding edge. Closed systems like "Seti" can use this tech faster. I shudder to think of those Seti followers running their GPUs at full power. Yikes 300-700 Watts of heat, double that for SLI.
  18. deadrats (Banned, Nov 2005, United States)
    Originally Posted by lordsmurf
    ATI was ahead 10 years ago ... sad. AMD screwed it up.
    there was a now defunct site that i used to submit linux and hardware reviews to, and i remember testing ati's avivo transcoding software when it was first released (i think i tested it with a 9600 pro) and discovering that transcoding with avivo was 10 times faster than transcoding using a purely software based solution on a dual core pentium 2.8ghz.

    at the time i predicted that because encoding/transcoding was the so-called "killer app" on the desktop, i.e. one of the primary tasks that drove the demand for faster and faster cpu's, amd and intel would conspire to put ati out of business, most likely in less than a year, and i used the example of how intel helped put the final nail in the coffin of 3dfx*. less than a year later amd bought ati, and that was effectively the end of avivo development for a long time; even now we see that amd has purposely crippled ati cards' video capabilities**.

    *for those that don't know, up until 3dfx introduced the first graphics accelerator all 3d was done via software, i.e. rendered on the cpu. if you wanted faster graphics, i.e. gaming, you had to buy a faster cpu (and usually motherboard), which allowed intel to command a premium price for a relatively minor bump in speed. 3dfx's products changed all that and brought about the current situation where you upgrade to a faster graphics card if you want better gaming performance. when intel released the pentium 4, the intel p4 supporting chipsets didn't support the voodoo 5, which was the straw that broke the already weary camel's back, seeing all the problems 3dfx was having. intel could have thrown 3dfx a bone and supported the voodoo 5 (and the much anticipated voodoo 6) with the intel p4 chipsets; there was nothing technically preventing intel from supporting them, as the later released via chipsets supported them just fine (by then 3dfx had already gone under). but intel saw the opportunity to exact revenge on the company that effectively altered the computing landscape (and thereby somewhat marginalized intel's position in the market place) and took that opportunity.

    **avivo was not ati's first attempt at gpu accelerated encoding: the 9700 all-in-wonder had the "cobra" engine, which accelerated encoding of mpeg-2 video up to 720x480. it only worked when capturing video, and then it only offloaded 10 percent of the work from the cpu, but even still it was a major leap forward. amd has put itself in a weird position with the acquisition of ati: if it starts emphasizing gpu accelerated computing, such as encoding, it will cannibalize its cpu sales; if it purposely holds back development of gpu accelerated computing (which it seems to be doing at the moment) it will find nvidia blowing past it on the gpu computing front (which is what has happened). what i think amd plans on doing is replacing their current "wide floating point accelerator" (a hybrid sse/fp unit) with a gpu (which is much faster at fp operations than x87 or sse can ever hope to be), and i think intel is obviously going to try a different attack by trying to slow down the adoption of cuda and nvidia gpu's by offering developers a competing target, i.e. larrabee. even though intel road maps indicate that by 2011 it will have a cpu with an integrated gpu, i don't see intel abandoning sse, which i think will prove to be intel's downfall. whoever manages to replace the x87/sse unit with a gpu for floating point operations will have a significant advantage, and i think amd is the one hungry enough to go that route.

    or maybe nvidia will release an x86 compatible gpu/cpu hybrid of its own and kick both of their asses, what do i know?
  19. Originally Posted by SatStorm
    My experience so far is with ATI cards, which are behind on this matter.
    I tested an nvidia 6800 AGP and it also crashed - probably not a good sample for testing those solutions.

    The GPUs are gonna be a great help with video filters. Neatvideo and Deemon's videoenhancer are 2 programs that come first to my mind.
    the crashes were most likely because the gpu encoding cuda engine started with the nvidia 8000 series. the 6800 doesn't have it and therefore can't do gpu encoding.
  20. deadrats (Banned, Nov 2005, United States)
    Originally Posted by DereX888
    buying a $100 more expensive CPU is a better option than investing the same $100 in a GPU to do the job of the CPU.
    There
    what an absolutely absurd thing to think let alone say (or type in this case):

    if the absolute fastest cpu you can currently buy, the $1000 i7 965, is slower than a mid range $100 video card, then the following is true:

    1) it's not possible to spend an extra $100 to buy a more expensive cpu because there is no faster option.

    2) if there were a faster option, it would be the height of stupidity to spend $1100 on it instead of $100 on a video card.

    3) all sub-$1000 cpu's are even poorer choices than the i7 965 as compared to simply buying a 9800 gtx+.

    there yourself.
  21. deadrats (Banned, Nov 2005, United States)
    Originally Posted by edDV
    No hurry about all this GPU assist. It was first discussed ten years ago and may offer real support in another year or two as applications join in. For the consumer this is bleeding edge. Closed systems like "Seti" can use this tech faster. I shudder to think of those Seti followers running their GPUs at full power. Yikes 300-700 Watts of heat, double that for SLI.
    where on earth did you get 300-700 watts? the maximum power draw of a gtx 280 is only 236 watts:

    http://www.nvidia.com/object/product_geforce_gtx_280_us.html

    and that's with 1 gig of on board ram.

    when you take into account just how much faster seti (and folding at home) runs on a gpu versus a cpu, you would need a dual socket nehalem based xeon setup to match the performance of a single video card, and how much juice do you think a dually sucks down?
  22. edDV (Member, Mar 2004, Northern California, USA)
    Originally Posted by deadrats
    Originally Posted by edDV
    No hurry about all this GPU assist. It was first discussed ten years ago and may offer real support in another year or two as applications join in. For the consumer this is bleeding edge. Closed systems like "Seti" can use this tech faster. I shudder to think of those Seti followers running their GPUs at full power. Yikes 300-700 Watts of heat, double that for SLI.
    where on earth did you get 300-700 watts? the maximum power draw of a gtx 280 is only 236 watts:

    http://www.nvidia.com/object/product_geforce_gtx_280_us.html

    and that's with 1 gig of on board ram.

    when you take into account just how much faster seti (and folding at home) runs on a gpu versus a cpu, you would need a dual socket nehalem based xeon setup to match the performance of a single video card, and how much juice do you think a dually sucks down?
    OK 236 watts for one and 472 watts for SLI plus the host computer. That is still a lot of watts that add up to real power bills.
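    To put rough numbers on the power bill, here is a back-of-the-envelope sketch; the around-the-clock duty cycle and the $0.12/kWh rate are my illustrative assumptions, not measurements:

    Code:
    // back-of-the-envelope power cost; all inputs are illustrative assumptions
    #include <cstdio>

    int main() {
        double watts = 472.0;         // two gtx 280s in sli at full load (236 W each)
        double hours_per_day = 24.0;  // a seti/folding box typically runs around the clock
        double rate = 0.12;           // assumed utility rate, $/kWh
        double kwh_per_day = watts * hours_per_day / 1000.0;
        printf("%.1f kWh/day -> $%.2f/day, roughly $%.0f/month\n",
               kwh_per_day, kwh_per_day * rate, kwh_per_day * rate * 30.0);
        return 0;
    }

    That works out to about 11.3 kWh and $1.36 a day, or roughly $41 a month, before counting the rest of the system.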

    I was secretly writing a script for a smackdown of extreme anti-carbon enviros vs. power sucking extraterrestrial listeners. Oops the secret is out there now.
  23. DereX888 (Banned, Aug 2002, beautiful)
    Originally Posted by deadrats
    if the absolute fastest cpu you can currently buy, the $1000 i7 965, is slower than a mid range $100 video card
    What $100 GPU is faster than the "absolute fastest CPU you can currently buy" (on the market)?
    Where the **** do you get such nonsense, from trade show pamphlets for CUDA or what...
    Try to run even the tiniest operating system kernel on that GPU and let's see how far you'd get, hahaha

    Sure, for a specific job, using code specially written for that specific GPU, you may get a faster result for that specific task than using a CPU,
    but WTF, that makes as much sense as stating that a math co-processor is better than a CPU because it can do some specific calculations faster than the CPU could, geez, what a moronism.
    Are you still drunk or what, forgot your meds or something...
  24. SatStorm (The Old One, Aug 2000, Hellas (Greece), E.U.)
    I'll jump to CUDA - or something like that - when and if neat video supports those solutions.
    As is, I don't plan to buy a new card. Somehow an ATI 2400HD Pro ended up in my new PC, and it is enough for my current needs.

    But actually, I will welcome and adopt any kind of assistance for the "heavy" filters I use (something that doesn't happen yet). I mean, 12 to 14 hours for a 3 hour VHS tape is too much time.
  25. deadrats (Banned, Nov 2005, United States)
    Originally Posted by DereX888
    Originally Posted by deadrats
    if the absolute fastest cpu you can currently buy, the $1000 i7 965, is slower than a mid range $100 video card
    What $100 GPU is faster than the "absolute fastest CPU you can currently buy" (on the market)?
    Where the **** do you get such nonsense, from trade show pamphlets for CUDA or what...
    i'm guessing you didn't actually read the article i linked to, where legit reviews tested a core i7 965 against a 9800 gtx+:

    http://www.legitreviews.com/article/978/1/

    if you read the article you will note that a core i7 965 converted a 93 sec 25 mbps mpeg-2 to 1080p in 143 seconds, and when coupled with the 9800gtx+ the encode was done roughly 2.5 times faster, at 57 secs.

    likewise, when the source was a 20 mbps h264, the i7 965 did the job in 145 seconds, and when coupled with the 9800gtx+ it did it in 58 seconds.

    as a matter of fact, looking at the results obtained in the test, it's obvious that when using the 9800gtx+ for encoding the bottleneck is the cpu, as it seems that it can't feed the data to the gpu fast enough.
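    just to make the arithmetic explicit, here's a trivial sketch (the times are the ones quoted above from the article; nothing else is assumed):

    Code:
    // speedup math using the encode times quoted from the legit reviews article
    #include <cstdio>

    int main() {
        // mpeg-2 source: 143 s on the i7 965 alone vs 57 s with the 9800gtx+ helping
        printf("mpeg-2 job: %.1fx faster\n", 143.0 / 57.0);  // ~2.5x
        // h.264 source: 145 s vs 58 s
        printf("h.264 job:  %.1fx faster\n", 145.0 / 58.0);  // ~2.5x
        return 0;
    }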

    Originally Posted by DereX888
    Try to run even the tiniest operating system kernel on that GPU and let's see how far you'd get, hahaha
    and because you can't run an os on a video card, that means what exactly? in all honesty, it may be possible to custom compile a small distro to run on a video card; no one has ever tried to install a linux distro on a card's frame buffer, so who knows if it can be done or not?

    Originally Posted by DereX888
    Sure, for a specific job, using code specially written for that specific GPU, you may get a faster result for that specific task than using a CPU,
    but WTF, that makes as much sense as stating that a math co-processor is better than a CPU because it can do some specific calculations faster than the CPU could, geez, what a moronism.
    Are you still drunk or what, forgot your meds or something...
    are you aware of 3 things:

    1) that there is no such word as "moronism"?

    2) that your sentences are the written equivalent of a "chinese fire drill"?

    3) that there is no such thing as a "math co-processor" anymore? what used to be referred to as the "math co-processor" was eventually integrated into the cpu itself and became what people referred to as the floating point unit, which itself was eventually combined with what used to be referred to as the vector processing unit (or simd unit) into what is now a hybrid sse/fp unit?
  26. deadrats (Banned, Nov 2005, United States)
    Originally Posted by edDV
    OK 236 watts for one and 472 watts for SLI plus the host computer. That is still a lot of watts that add up to real power bills.

    I was secretly writing a script for a smackdown of extreme anti-carbon enviros vs. power sucking extraterrestrial listeners. Oops the secret is out there now.
    you're way overestimating how much juice a pc uses; check out this tom's article:

    http://www.tomshardware.com/reviews/geforce-gtx-radeon,2326-17.html

    look at the total power consumption of a complete system featuring a core i7 920 o/c'd to 3.8ghz, 6 gigs of ram, 2 hdd's and a gtx 260: with both the cpu and graphics card at full load, the power draw at the wall was only 295 watts.

    try it with your pc: buy a cheap $20 meter and measure the power draw, you'll be surprised how low it is.
  27. edDV (Member, Mar 2004, Northern California, USA)
    Originally Posted by deadrats
    Originally Posted by edDV
    OK 236 watts for one and 472 watts for SLI plus the host computer. That is still a lot of watts that add up to real power bills.

    I was secretly writing a script for a smackdown of extreme anti-carbon enviros vs. power sucking extraterrestrial listeners. Oops the secret is out there now.
    you're way overestimating how much juice a pc uses; check out this tom's article:

    http://www.tomshardware.com/reviews/geforce-gtx-radeon,2326-17.html

    look at the total power consumption of a complete system featuring a core i7 920 o/c'd to 3.8ghz, 6 gigs of ram, 2 hdd's and a gtx 260: with both the cpu and graphics card at full load, the power draw at the wall was only 295 watts.

    try it with your pc: buy a cheap $20 meter and measure the power draw, you'll be surprised how low it is.
    Yes, I have a watt meter on my UPS and idle is below 200W, but CPU encoding runs the proc at 100% and power consumption rises. The same happens with extreme gaming for the GPUs. That is why they have min and typical power supply specs for display cards. I haven't tested cuda encoding modes yet. They probably aren't running the GPU flat out yet, but the seti apps are getting there.