  1. Banned (Join Date: Nov 2005, Location: United States)
    http://www.sonycreativesoftware.com/vegaspro10

    this should be the true litmus test of where gpu accelerated encoding currently stands (quality wise), as vegas will feature the MainConcept gpu-powered avc encoder. i have to assume that sony's product will be equipped with the best MainConcept has to offer, so if this app doesn't impress quality-wise, people may start giving up on gpu encoding.

    i guess we'll know in 4 days.
  2. Originally Posted by deadrats View Post
    http://www.sonycreativesoftware.com/vegaspro10

    this should be the true litmus test of where gpu accelerated encoding currently stands (quality wise), as vegas will feature the MainConcept gpu-powered avc encoder. i have to assume that sony's product will be equipped with the best MainConcept has to offer, so if this app doesn't impress quality-wise, people may start giving up on gpu encoding.

    i guess we'll know in 4 days.
    If past versions are any indication, vegas usually gets nearly the lowest-end, bottom-of-the-barrel MainConcept AVC encoder, with very few features and controls enabled from the SDK.

    GPU decoding would be more valuable IMO, like Adobe does it, since the primary function of vegas is editing.
  3. Banned (Join Date: Nov 2005, Location: United States)
    Originally Posted by poisondeathray View Post
    If past versions are any indication, vegas usually gets nearly the lowest-end, bottom-of-the-barrel MainConcept AVC encoder, with very few features and controls enabled from the SDK.

    GPU decoding would be more valuable IMO, like Adobe does it, since the primary function of vegas is editing.
    isn't vegas something like a $600 app? you really think that they will use the bare minimum?

    regardless, after numerous test encodes, using at least a dozen different sources and trying every combination of settings i could imagine, i have come to the conclusion, and i can't believe i am actually going to say this, that most people are probably better served using x264 with the "veryfast" or "ultrafast" preset coupled with the "film" tuning option + some good filters (preferably gpu powered).
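
    for the record, a command line along those lines would look roughly like this (the file names and the CRF value are just placeholders, not settings from this thread; .avs input needs an avisynth-enabled x264 build):

        x264 --preset veryfast --tune film --crf 20 --output out.264 input.avs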

    one thing i did notice with all my tests is that the decoder used makes a ton of difference: when i coupled either cuda h264 or x264 with mencoder (for decoding duties) the quality stank to high heaven; switching to ffmpeg made a world of difference.

    this led me to wonder two things:

    1) what if someone coded a high quality decoder based on nvidia's "PureVideo"? maybe even the x264 developers could make their own decoder rather than relying on the ffmpeg project.

    2) i know the x264 developers' well-established stance on porting x264 over to cuda, but what if they decided to use everything they have learned developing x264 to improve the existing cuda h264 encoder, quality-wise? the basic encoder already exists; perhaps they could add some in-loop deblocking, improve the i/p/b quantizer routines, or add/improve multi-pass encoding.

    but after thinking about it i realized that they can't develop for cuda and stay true to the open source, gpl principles they obviously espouse. cuda is a proprietary technology, as are DirectX and PureVideo, and in the end i think this is the real reason they have no interest in porting x264 to cuda: it goes against everything they believe in.
  4. Originally Posted by deadrats View Post
    isn't vegas something like a $600 app? you really think that they will use the bare minimum?
    yes, and very close to the low end. The current MainConcept AVC encoder bundled with Vegas 9 Pro is very handicapped. But who knows, they might offer a better implementation for the GPU version in 10.

    regardless, after numerous test encodes, using at least a dozen different sources and trying every combination of settings i could imagine, i have come to the conclusion, and i can't believe i am actually going to say this, that most people are probably better served using x264 with the "veryfast" or "ultrafast" preset coupled with the "film" tuning option + some good filters (preferably gpu powered).
    Those fast presets might be good enough for some purposes. You can definitely tell a difference on high quality material when using higher quality presets. The beauty of x264 is that you have a choice; it's very adjustable and configurable.

    one thing i did notice with all my tests is that the decoder used makes a ton of difference: when i coupled either cuda h264 or x264 with mencoder (for decoding duties) the quality stank to high heaven; switching to ffmpeg made a world of difference.
    That decoder is broken then, or something in the process chain is broken. A decoder has to produce the same image, bit for bit. There may be differences in speed and in integration, but the decoded image has to be bit-for-bit identical.
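
    One way to check that is to hash every decoded frame; a sketch with ffmpeg (the file names, resolution, and pixel format here are assumptions, not from this thread):

        rem per-frame MD5 hashes of ffmpeg's own decode
        ffmpeg -i clip.m2v -f framemd5 ffmpeg_frames.md5
        rem decode the same clip to raw YUV with the decoder under test, then hash that
        ffmpeg -f rawvideo -pix_fmt yuv420p -s 720x480 -i other_decoder.yuv -f framemd5 other_frames.md5
        rem compare the hash columns; bit-exact decoders produce identical hashes
        rem (timestamps may differ if the declared frame rates differ)
        fc ffmpeg_frames.md5 other_frames.md5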


    1) what if someone coded a high quality decoder based on nvidia's "PureVideo"? maybe even the x264 developers could make their own decoder rather than relying on the ffmpeg project.
    DGNVTools uses AviSynth and the PureVideo HD VP2/VP3 engine. It offloads decoding, so it is faster in many cases, but in other cases it's actually slower because it has a frame rate cap; whether it is beneficial depends on the system configuration and the encoding settings.

    2) i know the x264 developers' well-established stance on porting x264 over to cuda, but what if they decided to use everything they have learned developing x264 to improve the existing cuda h264 encoder, quality-wise? the basic encoder already exists; perhaps they could add some in-loop deblocking, improve the i/p/b quantizer routines, or add/improve multi-pass encoding.
    AFAIK, it's only motion estimation for the x264 cuda development in the GSOC project. I don't know how far along they are.

    but after thinking about it i realized that they can't develop for cuda and stay true to the open source, gpl principles they obviously espouse. cuda is a proprietary technology, as are DirectX and PureVideo, and in the end i think this is the real reason they have no interest in porting x264 to cuda: it goes against everything they believe in.
    That could be a big reason, but you could have multiple development forks. There are plenty of branches now with patches that do various things that are not in the main branch.
    Last edited by poisondeathray; 7th Oct 2010 at 19:28.
  5. Originally Posted by deadrats View Post
    after numerous test encodes, using at least a dozen different sources and trying every combination of settings i could imagine, i have come to the conclusion, and i can't believe i am actually going to say this, that most people are probably better served using x264 with the "veryfast" or "ultrafast" preset coupled with the "film" tuning option + some good filters (preferably gpu powered).
    With all the tests I've run recently (all CRF encodes, mostly around Q=20), x264 at "veryfast" delivers smaller files than "slower" or "veryslow", sometimes even "placebo". The quality is very slightly lower if I examine enlarged still frames, but I find the tradeoff worth it. At "ultrafast" the files get much larger and the small increase in speed over "veryfast" isn't worth it. I've settled on CRF 20, "veryfast", with a few tweaks.

    And x264 at "veryfast" is both faster (Athlon 64 x2 3.2 GHz) and better quality than cuda encoding (Nvidia 8600GT) with mediacoder. My quad core Q6600 is even faster, obviously.
  6. Originally Posted by jagabo View Post
    With all the tests I've run recently (all CRF encodes, mostly around Q=20), x264 at "veryfast" delivers smaller files than "slower" or "veryslow", sometimes even "placebo". The quality is very slightly lower if I examine enlarged still frames, but I find the tradeoff worth it. At "ultrafast" the files get much larger and the small increase in speed over "veryfast" isn't worth it. I've settled on CRF 20, "veryfast", with a few tweaks.

    And x264 at "veryfast" is both faster (Athlon 64 x2 3.2 GHz) and better quality than cuda encoding (Nvidia 8600GT) with mediacoder. My quad core Q6600 is even faster, obviously.

    "CRF" is just a rough estimation of "quality" . You can't compare 2 encodes that end up at different file sizes. Just because using a different preset results in a smaller filesize at a given CRF , you cannot conclude the quality is better (or worse) . But you know this already.

    "placebo" is useless , but the difference between say, "slower" and "veryfast" at a given bitrate should be noticable, unless you have relatively saturated conditions (using a relatively high bitrate for that content complexity)
  7. Originally Posted by poisondeathray View Post
    the difference between, say, "slower" and "veryfast" at a given bitrate should be noticeable
    Yes, in theory. But in my tests it's not really noticeable at normal playback speed, and only barely noticeable when looking closely at enlarged still frames. Note that the veryfast encode has a lower bitrate than the slower encode (both CRF 20).

    "x264 --version" reports "x264 0.104.1703 cd21d05 built on Aug 24 2010, gcc: 4.4.4
    configuration: --bit-depth=8".
  8. Banned (Join Date: Nov 2005, Location: United States)
    Originally Posted by jagabo View Post
    And x264 at "veryfast" is both faster (Athlon 64 x2 3.2 GHz) and better quality than cuda encoding (Nvidia 8600GT) with mediacoder. My quad core Q6600 is even faster, obviously.
    on my system, a phenom 2 x4 620, 4 gigs of ddr2 and a gts250 1 gig, using mediacoder (it's quickly becoming my favorite video app; only xmedia recode comes close, when it doesn't have those ridiculous ffmpeg-related bugs), x264 with the "ultrafast" preset is within 20 fps (130 fps vs 150 fps) of the cuda encoder when converting 720x480 interlaced mpeg-2 sources to m2ts (h264/ac3) at full d1 resolution with the following filters: yadif, deringing, auto balance=normal and denoise=temporal, but the quality is significantly higher with x264 with the "film" tune.
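
    a roughly equivalent command-line pipeline for the deinterlace-plus-encode part (a sketch only; mediacoder's exact filter chain isn't reproduced here, and the file names are placeholders):

        rem ffmpeg deinterlaces with yadif and pipes raw frames to x264
        ffmpeg -i input.vob -vf yadif -an -f yuv4mpegpipe - | x264 --demuxer y4m --preset ultrafast --tune film -o out.264 -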

    "very fast" cuts the fps down to half but doesn't seem to improve image quality by any amount (most likely because of all the filtering).

    i wonder what "bulldozer" (if it's ever released) will do under similar tests.
  9. I was using the x264 CLI via an AVS script (DGIndex, Mpeg2Source() with Y deblocking), keeping the 720x480 DVD frame, 23.976 fps progressive, and getting over 60 fps at veryfast on the A64 X2. I don't remember exactly what the Q6600 was delivering, but it was over 100 fps.
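
    Roughly that workflow, for reference (a sketch; the file names are hypothetical, and the cpu parameter shown is assumed to be how DGDecode enables its luma deblocking):

        rem movie.avs, made from a DGIndex project, contains something like:
        rem   Mpeg2Source("movie.d2v", cpu=2)   <- assumed: cpu=2 = luma deblocking
        x264 --preset veryfast --crf 20 -o movie.264 movie.avs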
  10. DGIndex (DGDecode.dll) is single-threaded; in some cases it can be a bottleneck for encoding (esp. HD MPEG2 sources). Neuron2 is supposed to be working on an MT version.
  11. Originally Posted by poisondeathray View Post
    DGIndex (DGDecode.dll) is single-threaded; in some cases it can be a bottleneck for encoding (esp. HD MPEG2 sources).
    Both the veryfast and slower encodes used the same AVS script. With the veryfast encodes on the Q6600, Mpeg2Source() with deblocking was probably starting to become an issue.

    What I think is going on:

    At veryfast, x264 performs a narrower motion search and less sub-pixel motion search. In my experience wide ME searches usually don't decrease final file size by much (a few percent), and subpixel ME increases file size a bit. On balance, the increased bitrate from more SubME outweighs the decreased bitrate from wider ME. I suspect the veryfast encode has less smooth motion with things like slow scrolling credits, film bounce, and such.
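
    For reference, "x264 --fullhelp" documents what each preset actually changes; approximately, for builds of that era (treat these mappings as illustrative, not exact):

        rem --preset veryfast ~ --me hex --subme 2 --ref 1 --trellis 0 --no-mixed-refs --rc-lookahead 10
        rem --preset slower   ~ --me umh --subme 9 --ref 8 --trellis 2 --b-adapt 2 --direct auto --partitions all --rc-lookahead 60
        x264 --fullhelp | more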
    Last edited by jagabo; 8th Oct 2010 at 06:22.
  12. Banned (Join Date: Nov 2005, Location: United States)
    well, i just got through testing vegas 10 and it blows. gpu-accelerated encoding is only available with the sony avc encoder (which surprised me, since mainconcept's gpu avc isn't available) and it is slow: SD avc encoding ran at about half real time, and that's without any filtering. for comparison, x264 using ultrafast and the previously mentioned four filters does the same encode at nearly 150 fps, and i don't have to spend a dime on it.

    very disappointed.
  13. aedipuss (aBigMeanie, Join Date: Oct 2005, Location: 666th portal)
    unlike adobe cs5, vegas 10 just assumes you have the horsepower to make gpu acceleration work well. a gtx 285, 480, or quadro 4000 or better is required for cs5 gpu acceleration; there is no such requirement for vegas.
  14. Originally Posted by aedipuss View Post
    unlike adobe cs5, vegas 10 just assumes you have the horsepower to make gpu acceleration work well. a gtx 285, 480, or quadro 4000 or better is required for cs5 gpu acceleration; there is no such requirement for vegas.
    Only "officially" for CS5 ; there is a "hack". You don't an expensive card for MPE to work. Many people use $80-100 cards as long as you have 768MB memory and CUDA enabled and it will be sufficient for even 2-3 streams with GPU effects all in realtime

    I mentioned this earlier, but decoding is (or was) the bottleneck for NLEs. Without MPE, just scrubbing the timeline can raise CPU usage 40-90% on your average quad core. When MPE is in use, almost all of that CPU time can go to encoding instead of being wasted on decoding. Not only is editing faster, but the same encoding tasks, on the same hardware, are 2-4x faster on average in CS5 with MPE.

    According to some blog posts, vegas 10 has made some software improvements in decoding, but apparently nothing close to MPE and real GPU decoding.
  15. Banned (Join Date: Nov 2005, Location: United States)
    Originally Posted by aedipuss View Post
    unlike adobe cs5, vegas 10 just assumes you have the horsepower to make gpu acceleration work well. a gtx 285, 480, or quadro 4000 or better is required for cs5 gpu acceleration; there is no such requirement for vegas.
    it's not the video card: using my gts250, espresso 6 is able to do faster-than-real-time 1080p mpeg-2 and avc encoding, and using the cuda encoder supplied with mediacoder i can do vob to avc at full D1 resolution at close to 200 fps. the gpu is fast enough; sony's gpu encoder is what sucks.
  16. Adobe has to maintain the pretense of being high-end software for professionals. Hence the artificial limits on hardware.
  17. edDV (Member, Join Date: Mar 2004, Location: Northern California, USA)
    Hmm, Vegas Pro usually upgrades after Christmas. I don't have this budgeted, especially if new hardware is needed.
  18. aedipuss (aBigMeanie, Join Date: Oct 2005, Location: 666th portal)
    Originally Posted by edDV View Post
    Hmm, Vegas Pro usually upgrades after Christmas. I don't have this budgeted, especially if new hardware is needed.
    hardware specs are still low:
    2 GHz processor (multicore or multiprocessor CPU recommended for HD or stereoscopic 3D)


