http://www.sonycreativesoftware.com/vegaspro10
This should be the true litmus test of where GPU-accelerated encoding currently stands quality-wise, as Vegas will feature the MainConcept GPU-powered AVC encoder. I have to assume that Sony's product will be equipped with the best MainConcept has to offer, so if this app doesn't impress quality-wise, people may start giving up on GPU encoding.
I guess we'll know in 4 days.
-
If past versions are any indication, Vegas usually gets nearly the lowest-end, bottom-of-the-barrel MainConcept AVC encoder, with very few features and controls enabled from the SDK.
GPU decoding would be more valuable IMO (like Adobe does), since the primary function of Vegas is editing. -
Isn't Vegas something like a $600 app? Do you really think they will use the bare minimum?
Regardless, after numerous test encodes, using at least a dozen different sources and trying every combination of settings I could imagine, I have come to the conclusion (and I can't believe I am actually going to say this) that most people are probably better served by x264 with the "veryfast" or "ultrafast" preset, coupled with the "film" tuning option plus some good filters (preferably GPU-powered).
One thing I did notice across all my tests is that the decoder used makes a ton of difference. When I coupled either the CUDA H.264 encoder or x264 with mencoder (for decoding duties), the quality stank to high heaven; switching to ffmpeg made a world of difference.
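To give a concrete idea of what I mean, here's a minimal sketch of that kind of pipeline, with ffmpeg doing nothing but decoding and handing x264 raw frames over a Y4M pipe. The file names and CRF value are just placeholders, and it assumes reasonably recent ffmpeg and x264 command-line builds rather than my exact setup:

ffmpeg -i input.mpg -an -pix_fmt yuv420p -f yuv4mpegpipe - | x264 --demuxer y4m --preset veryfast --tune film --crf 20 -o output.264 -

The point is that the decoding front end (mencoder, ffmpeg, or something GPU-based) is swappable while the x264 settings stay identical, so you can isolate how much the decoder itself is contributing to the final quality.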
That decoder difference led me to wonder two things:
1) What if someone coded a high-quality decoder based on Nvidia's PureVideo? Maybe even the x264 developers could make their own decoder rather than relying on the ffmpeg project.
2) I know the x264 developers' well-established stance on porting x264 over to CUDA, but what if they decided to use everything they have learned developing x264 to improve the existing CUDA H.264 encoder quality-wise? The basic encoder already exists; perhaps they could add some in-loop deblocking, improve the I/P/B quantizer routines, or even add/improve multi-pass encoding.
But after thinking about it, I realized that they can't develop for CUDA and stay true to the open-source, GPL principles they obviously espouse. CUDA is a proprietary technology, as are DirectX and PureVideo, and in the end I think this is the real reason they have no interest in porting x264 to CUDA: it goes against everything they believe in. -
Yes, and very close to the low end. The current MainConcept AVC encoder bundled with Vegas Pro 9 is very handicapped. But who knows, they might offer a better implementation for the GPU version in 10.
-
With all the tests I've run recently (all CRF encodes, mostly around Q=20), x264 at "veryfast" delivers smaller files than "slower" or "veryslow", sometimes even "placebo". The quality is very slightly lower if I examine enlarged still frames, but I find the tradeoff worth it. At "ultrafast" the files get much larger and the small increase in speed over "veryfast" isn't worth it. I've settled on CRF 20, "veryfast", with a few tweaks.
And x264 at "veryfast" is both faster (Athlon 64 X2, 3.2 GHz) and better quality than CUDA encoding (Nvidia 8600GT) with MediaCoder. My quad-core Q6600 is even faster, obviously. -
"CRF" is just a rough estimation of "quality" . You can't compare 2 encodes that end up at different file sizes. Just because using a different preset results in a smaller filesize at a given CRF , you cannot conclude the quality is better (or worse) . But you know this already.
"placebo" is useless , but the difference between say, "slower" and "veryfast" at a given bitrate should be noticable, unless you have relatively saturated conditions (using a relatively high bitrate for that content complexity) -
Yes, in theory. But in my tests it's not really noticeable at normal playback speed, and only barely noticeable when looking closely at enlarged still frames. Note that the veryfast encode has a lower bitrate than the slower encode (both CRF 20).
"x264 --version" reports "x264 0.104.1703 cd21d05 built on Aug 24 2010, gcc: 4.4.4
configuration: --bit-depth=8". -
On my system, a Phenom II X4 620 with 4 GB of DDR2 and a 1 GB GTS 250, using MediaCoder (it's quickly becoming my favorite video app; only XMedia Recode comes close, when it doesn't have those ridiculous ffmpeg-related bugs), x264 with the "ultrafast" preset is within 20 fps (130 fps vs 150 fps) of the CUDA encoder when converting 720x480 interlaced MPEG-2 sources to M2TS (H.264/AC3) at full D1 resolution with the following filters: yadif, deringing, auto balance=normal, and denoise=temporal. The quality is significantly higher with x264 and the "film" tune.
"veryfast" cuts the fps in half but doesn't seem to improve image quality by any noticeable amount (most likely because of all the filtering).
I wonder what "Bulldozer" (if it's ever released) will do under similar tests. -
Both the veryfast and slower encodes are using the same AVS script. With the veryfast encodes on the Q6600, Mpeg2Source() with deblocking was probably starting to be an issue.
What I think is going on:
At veryfast, x264 performs a narrower motion search and less subpixel motion estimation. In my experience wide ME searches usually don't decrease final file size by much (a few percent), and more subpixel ME increases file size a bit. On balance, the increased bitrate from higher subme outweighs the decreased bitrate from wider ME. I suspect the veryfast encode has less smooth motion with things like slow scrolling credits, film bounce, and such.
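One way to check that theory is to start from veryfast and override only the motion-estimation settings, since options given after --preset replace the preset's values. The file names, CRF and the specific numbers below are just illustrative:

x264 --preset veryfast --crf 20 -o baseline.264 input.avs
x264 --preset veryfast --me umh --merange 24 --crf 20 -o wider-me.264 input.avs
x264 --preset veryfast --subme 9 --crf 20 -o more-subme.264 input.avs

Comparing the resulting file sizes, and the slow pans and credits, should show which of the two knobs is responsible.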
-
Well, I just got through testing Vegas 10 and it blows. GPU-accelerated encoding is only available with the Sony AVC encoder (it surprised me that MainConcept's GPU AVC isn't available), and it is slow: SD AVC encoding was about half real time, and that's without any filtering. For comparison, x264 using ultrafast and the previously mentioned four filters does the same encode at nearly 150 fps, and I don't have to spend a dime on it.
Very disappointed. -
Only "officially" for CS5 ; there is a "hack". You don't an expensive card for MPE to work. Many people use $80-100 cards as long as you have 768MB memory and CUDA enabled and it will be sufficient for even 2-3 streams with GPU effects all in realtime
I mentioned this earlier, but decoding is (was) the bottleneck for NLE's. Without MPE, just scrubbing the timeline can increase CPU usage 40-90% on your average quad core without MPE . When MPE is in use, almost all that CPU usage can go to encoding instead of being wasted on decoding. Not only is editing faster, but the same encoding tasks , on the same hardware, are 2-4x faster on average in CS5 with MPE.
According to some blog posts, vegas 10 has made some software improvements in decoding, but apparently nothing close to MPE and real GPU decoding -
It's not the video card. Using my GTS 250, Espresso 6 is able to do faster-than-real-time 1080p MPEG-2 and AVC encoding, and with the CUDA encoder supplied with MediaCoder I can do VOB to AVC at full D1 resolution at close to 200 fps. The GPU is fast enough; Sony's GPU encoder is what sucks.
-
Adobe has to maintain the pretense of being high-end software for professionals, hence the artificial limits on hardware.
-
Hmm, Vegas Pro usually upgrades after Christmas. I don't have this budgeted, especially if new hardware is needed.