VideoHelp Forum
1. CursedLemon (Chicken McNewblet)
I've been having a hell of a time finding information about this topic, either from official or consumer sources. A lot of the information is contradictory in nature - "Click here to learn how to enable GPU acceleration in Vegas!" followed by comments of, "This doesn't work at all!"

    Right now I use Sony Vegas 12 to do Youtube projects, and I've always been annoyed that there doesn't seem to be a way to utilize my GPU effectively. My rig is an i7-6700k, GTX 960, and 16GB DDR4 RAM. I've read tons of web resources on this, that, or the other thing related to Vegas' ability to use my GPU, but the reality is that enabling GPU accelerated previews often results in worse performance...and there is no way to get GPU-accelerated rendering, from what I understand. In Vegas 12, there's no support for NVENC, CUDA and OpenCL don't work for whatever reason, the only way to get H264 out of it is to use x264, and it doesn't even support editing with H265 files (yeah yeah I know, not a good editing format). I just want faster renders (and better performance), I feel like my GPU is completely wasted in this regard!

    Does Vegas 14 have any improvements on this front? Is Adobe Premiere any better suited for this? Is hardware-accelerated rendering even feasible for an editing program, in contrast to simple video conversion?

    EDIT: Also, I don't even know if Vegas can take advantage of QuickSync, as it's disabled if you have a graphics card active on your machine.
    Last edited by CursedLemon; 22nd Apr 2017 at 08:56.
2. I don't use Vegas but I believe it used Nvidia's CUDA-based encoder at version 12. Nvidia no longer supports the CUDA encoder, and has replaced it with NVENC. I don't know if Vegas has been updated to use NVENC. I believe it's possible to manually install an old CUDA encoder with recent Nvidia drivers.

    http://forums.guru3d.com/showthread.php?t=391269

    I think there are some threads regarding that here at videohelp too.

    By the way, quality (per bitrate) is lower than x264.
  3. I have used Vegas since version 4. Before I finally gave up on the Sony Forum, I think I posted over 7,000 messages there. I still use Vegas every day. I have versions 7, 8, 10, 11, and 12 installed on this computer. I mostly use Vegas 8 because it was the last really stable version. What happened in Vegas 9 that made it less stable?

    GPU acceleration.

    The short version of a VERY long story is that GPU acceleration in Vegas does not work very well, and is still (in Vegas 14) filled with bugs. There are two types of GPU acceleration, and they are turned on and off in different places. The first is used to accelerate timeline playback so you can get a higher quality spatial and temporal result while playing back (i.e., sharper and smoother). The amount of improvement depends not only on your video card, but also your drivers. Look on the Vegas site for posts by Nick Hope. He's done a good job summarizing the current state of Vegas GPU technology and its numerous pitfalls. Here is his main post on the subject:

    (FAQ) Graphics Cards & GPU-Acceleration for VEGAS Pro

    As you will see nVidia cards are not as well supported as AMD cards.

    The second way the GPU is used is for rendering. This is where the technology (in Vegas) really falls down. GPU-assisted rendering is only available for a handful of the codecs (mostly the Sony MP4 format). What's worse, even when it helps speed up rendering, it also often ends up introducing glitches, or worse, introduces subtle changes from the original that you may not notice right away (the glitches are horrendous and therefore obvious).

I try the GPU rendering every time a new release comes out, and then always turn it off. There is no point to a faster render if the resulting video is unusable.

    My suggestion to you: forget you ever heard about GPU rendering and turn it completely off for rendering. The renders should still be plenty fast, unless you are doing a ton of complex compositing.
4. CursedLemon (Chicken McNewblet)
    That's pretty depressing, not going to lie.

    However, I do happen to have a GTX 460 in my old rig. Would be very curious to see if I could make that little sucker work...
  5. Originally Posted by CursedLemon View Post
    That's pretty depressing, not going to lie.

    However, I do happen to have a GTX 460 in my old rig. Would be very curious to see if I could make that little sucker work...
    Did you read Nick Hope's post that I linked to?? It provides everything you need to know to get the most out of whatever card you have. Also, if you search further in that same Vegas forum, you will find LOTS of discussions about what video card driver to use. For some cards (and I think your GTX460 may be one of them), the latest drivers may actually not be the best for getting the optimum performance out of Vegas. You may be better off with an older driver (the older versions are still available for download).

    So, read Nick's stuff and then let us know how it turns out.

Oh, and BTW, you talk about depressing: many years ago, when Sony first brought out their GPU acceleration, I went ahead and built the "ultimate" editing computer of that era and spent a lot of money getting a video card that was WAY beyond what I needed for anything else, only to find out that when I enabled the GPU, I would get random black frames in my render, and did not get any timeline acceleration. That was several thousand dollars of equipment, many hundreds of which could have been saved if I'd known this in advance.

    So, for me, not only depressing, but costly.
6. CursedLemon (Chicken McNewblet)
Originally Posted by johnmeyer View Post
    Did you read Nick Hope's post that I linked to?? It provides everything you need to know to get the most out of whatever card you have. Also, if you search further in that same Vegas forum, you will find LOTS of discussions about what video card driver to use. For some cards (and I think your GTX460 may be one of them), the latest drivers may actually not be the best for getting the optimum performance out of Vegas. You may be better off with an older driver (the older versions are still available for download).
    Well, yes, I did read it. lol It says plain as day that the 400-500 series GTX cards are a person's best bet for GPU acceleration in Vegas, coupled with the 296.10 driver.

Oh, and BTW, you talk about depressing: many years ago, when Sony first brought out their GPU acceleration, I went ahead and built the "ultimate" editing computer of that era and spent a lot of money getting a video card that was WAY beyond what I needed for anything else, only to find out that when I enabled the GPU, I would get random black frames in my render, and did not get any timeline acceleration. That was several thousand dollars of equipment, many hundreds of which could have been saved if I'd known this in advance.

    So, for me, not only depressing, but costly.
    That's pretty much how I feel about having my i7-6700k and my GTX 960 for video editing/rendering - the processor doesn't seem to outpace my old i7-875k in render speed very significantly (and I can't even use QuickSync), and my 960 is basically a non-factor except in very niche applications like ripping Blu-rays, where programs like MediaCoder can help tremendously.
  7. Can you frameserve from Vegas to QSVENC?
8. CursedLemon (Chicken McNewblet)
    I'm not even sure if I know what QSVENC is, hah. Besides that, I've never really understood how to do frameserving.
  9. Originally Posted by jagabo View Post
    Can you frameserve from Vegas to QSVENC?
    I have no experience with QSVEnc, but I do with MeGUI, VirtualDub and other apps that can read AVS scripts. There is a very nice, free add-on frameserver for Vegas from debugmode.com. For many Vegas users it has become such an essential addition, that many refused to upgrade to Vegas 14 when it broke the interface to the frameserver. The debugmode frameserver can serve out in RGB24, RGB32, or YUY2. I usually frameserve into an AVS script and then read that script into MeGUI or my old MainConcept MPEG-2 encoder. For some things, I just read the AVI frameserved signpost file directly in VirtualDub.

    I use it every single day.
    Last edited by johnmeyer; 23rd Apr 2017 at 20:57. Reason: added sentence about MPEG-2 and VirtualDub
  10. I drag/drop avs scripts onto a batch file like:
    Code:
    "G:\Program Files\QSVEnc\QSVEncC\x86\QSVEncC.exe" -i %1 --codec h264 --quality best --gop-len 250 --b-pyramid --cqp 29:31:33 --ref 4 -o "%~dpnx1.qsvenc.mkv"
  11. Originally Posted by CursedLemon View Post
    I'm not even sure if I know what QSVENC is, hah. Besides that, I've never really understood how to do frameserving.
    QSVEncC is like NVEncC (both from rigaya) , except QSVEncC uses Quicksync (Intel, it would be your 6700k's GPU) , whereas NVEncC uses NVEnc (Nvidia, it would be your 960)

    If you're looking for speed, frameserving out of vegas in many projects will actually be the bottleneck - you won't be able to feed NVenc fast enough on a GTX 960, even on a simple import, export (not even any editing, layers, filters etc...) . Frameserving with debugmode out of vegas is actually very easy to do. There are step by step guides with screenshots posted on various forums . You can actually feed 2 instances with 2 vegas's , 1 card and not even saturate your card on a typical 1080p project. I suppose another benefit if you're GPU encoding is you have more free CPU cycles to do other work simultaneously

    If you're looking for quality and compression ratio , forget about GPU based encoding , at least in consumer software << $5K . It's significantly worse than x264 for AVC, or x265 for HEVC

    GPU is good for accelerating certain tasks like scaling , some filters. Vegas' GPU implementation is very poor, much less mature than Adobe's

    Vegas doesn't have a NVEnc or QSVEnc implementation directly, but Premiere does have a NVEnc encoder as a 3rd party plugin (direct from PP), or you can frameserve with debugmode or advanced frameserver
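
    If you want to try the NVEncC route, a drag-and-drop batch file modeled on jagabo's QSVEncC example above would look roughly like this. The install path, executable name and --cqp values are just placeholders; check the option names against the readme of your NVEncC build:
    Code:
    "C:\Program Files\NVEnc\NVEncC\x64\NVEncC64.exe" -i %1 --codec h264 --cqp 23:25:27 -o "%~dpnx1.nvenc.mp4"
    Drop an .avs script (or another supported input) onto the .bat and the encode lands next to the source file (%~dpnx1 expands to the dropped file's full path and name).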
12. Member (Brazil)
There are plenty of QuickSync/NVENC programs that support AviSynth. A GUI encoder with both QS/NVENC that hasn't been mentioned yet is AS Video Converter, and it supports AviSynth too.

Even using the DebugMode frameserver with an outside encoder like x264 on the veryfast preset is faster than the internal MainConcept/Sony AVC templates.
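
    For reference, that route looks roughly like this: frameserve from Vegas with DebugMode, wrap the signpost AVI in a small .avs script, and feed the script to the x264 command line. The file names and CRF value below are made up, and this assumes a Windows x264 build with AviSynth input support:
    Code:
    x264 --preset veryfast --crf 20 --output "project_veryfast.264" "project.avs"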

Here are some results from a benchmark I did in late 2016 with MainConcept/Sony AVC/frameserver+NVENC. I used multiple GPUs, but just look at the GTX 770 in MCAVC/SonyAVC vs frameserver (NVENC). Using x264 veryfast while frameserving gives almost the same results as NVENC; I didn't include the x264 results there.

    Mainconcept vs frameserver(NVENC) results:
    https://forum.videohelp.com/images/imgfiles/ATzK4ibh.jpg

    SonyAVC vs (NVENC) results:
    https://forum.videohelp.com/images/imgfiles/eyRWxGih.jpg

NVENC, x264 veryfast and both internal Vegas AVC templates give almost the same quality, so you will not lose quality but will gain some performance.
  13. Originally Posted by poisondeathray View Post
    If you're looking for quality and compression ratio , forget about GPU based encoding , at least in consumer software << $5K . It's significantly worse than x264 for AVC, or x265 for HEVC
I beg to differ. I replaced my GTX960 with a GTX1050 back in February and since that time, in preparation for a wide ranging review of NVENC, I have done thousands of test encodes with dozens of different source files (yes, I will be posting that review on this site, with sample encodes, soon) and I can honestly say that NVENC H264 encodes, done via ffmpeg on Linux using the proprietary NVIDIA drivers, match x264+crf18+medium quality-wise, bitrate for bitrate, so long as you configure NVENC properly, including using the proper pixel format, and it's 3 times faster at encoding.

    Quality has been confirmed via minimum PSNR, average PSNR, maximum PSNR, YUV SSIM and my own eyes; in fact NVENC's quality based encoding is superb.

    I wouldn't use Rigaya's encoders as representative of what hardware based encoders are capable of, the guy is obviously using a custom coded encoder, at least for QSV, as evidenced by the fact that his qsv_h265 encoder doesn't have the 1000 frame limit that the official Intel encoder does.

    If you read through Intel's and ffmpeg documentation, you will find that the Intel SDK provides hardware accelerated encoding for MPEG-2, H264, H265, VP8 and VP9 as well as aac, but it also notes that if you want unlimited hardware H265 encoding via QSV you need to buy the $5000 pro version of the SDK.

Since Rigaya's encoder offers unlimited H265 encoding via QS without needing to shell out the 5 grand, it's obvious that he coded a custom application. You can try the official Intel encoder via ffmpeg and avconv; they both support libmfx, so long as both the Intel drivers and SDK are installed and either one is built with --enable-libmfx; this works on Windows and *Nix.
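
    As a rough example of the libmfx route (file names are placeholders, and which options are available depends on your ffmpeg build, driver and Media SDK version), an h264_qsv encode through ffmpeg looks something like:
    Code:
    ffmpeg -i input.mov -c:v h264_qsv -preset slow -global_quality 23 -look_ahead 1 output.mp4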

    But, there is one more cool thing, on *Nixes there is something called vaapi:

    https://en.wikipedia.org/wiki/Video_Acceleration_API

    https://wiki.libav.org/Hardware/vaapi

Which allows one to perform hardware MPEG-2, H264, HEVC, MJPEG, VP8 and VP9 encoding so long as the underlying hardware supports it; thus on AMD graphics cards you can do HEVC, on Skylake you can do hardware VP8 (in addition to MPEG-2/H264/HEVC) and on Kaby Lake you can do hardware VP9, though neither ffmpeg nor avconv currently supports this (I figure soon).
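
    A sketch of a VAAPI encode with ffmpeg, following the commonly documented pattern (the render node path and -qp value are assumptions and may differ on your system):
    Code:
    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mov -vf 'format=nv12,hwupload' -c:v h264_vaapi -qp 23 output.mp4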

With regards to the OP's inquiry, I would export to either lossless or a high quality intermediary format and then encode to a delivery format via ffmpeg, if high quality accelerated GPU encoding is what he's after.
  14. Originally Posted by sophisticles View Post
    Originally Posted by poisondeathray View Post
    If you're looking for quality and compression ratio , forget about GPU based encoding , at least in consumer software << $5K . It's significantly worse than x264 for AVC, or x265 for HEVC
    I beg to differ. I replaced my GTX960 with a GTX1050 back in February and since that time, in preparation for a wide ranging review of NVENC, I have done thousands of test encodes with dozens of different source files (yes i will be posting that review on this site, with sample encodes soon) and I can honestly say that NVENC H264, encodes via ffmpeg on Linux, using the proprietary NVIDIA drivers, match, quality wise, x264+crf18+medium, bitrate for bitrate so long as you configure NVENC properly, including using the proper pixel format, and it's 3 times faster in encoding.

    Quality has been confirmed via minimum PSNR, average PSNR, maximum PSNR, YUV SSIM and my own eyes; in fact NVENC's quality based encoding is superb.
Sure, post the tests.

    NVEnc is much faster for sure, but even at the highest quality 2pass setting , the compression ratio is worse across the board on virtually all sources. Even Nvidia concedes this for AVC.

    Perhaps you were using very high bitrate ranges? Improper testing methods ? Or is this just compared to x264 default "medium" regardless of the situation ?

I've done many tests too... but not with Pascal. Not on Linux. I don't think those 2 factors would make a big difference, but I could be wrong. Posted native Linux benchmarks always seem to be a few % faster, at least for x264 and x265, than Windows counterparts on the same hardware; not sure about GPU encodes, but there shouldn't be any quality differences. Metric wise - perhaps you didn't use correct x264 settings? (e.g. if you are testing for PSNR or SSIM for some runs you should use proper tunings and settings)



    I wouldn't use Rigaya's encoders as representative of what hardware based encoders are capable of, the guy is obviously using a custom coded encoder, at least for QSV, as evidenced by the fact that his qsv_h265 encoder doesn't have the 1000 frame limit that the official Intel encoder does.
    That's right. He's not using the same HEVC SDK encoder as the $5000 software/hardware based Intel one. The one that nobody actually uses because it's not free.

For NVEnc, his defaults are higher quality and slower than the ffmpeg default presets. You can make apples to apples comparisons by adjusting the settings, but the "average" user would come to the conclusion that ffmpeg's NVEnc yields lower quality than NVEncC; it's not necessarily the case when you match the settings.

    Since Rigaya's encoder offers unlimited H265 encoding via QS without needing to shell out the 5 grand, it's obvious that he coded a custom application. You can try the official Intel encoder via ffmpeg and avconv, they both support libmfx, so long as both the Intel drivers and SDK are installed and either one is built with --libmfx; this works on Windows and *Nix.
    It's not the official full featured one used in such benchmarks as MSU. There are features missing from the full SDK. You need to pay.

    On the AVC front, both Nvidia and Intel have conceded to x264 for AVC encoding for highest quality and compression ratio, but maybe you didn't look at that specifically ?
    Last edited by poisondeathray; 26th Apr 2017 at 23:01.
  15. Here's a very quick and dirty test for you, a) because I don't want to highjack the thread, I will be posting a full comparison this weekend and b) because I have to get ready for work.

    The tests were done on Ubuntu 17.04 using the latest ffmpeg build, the source file was from here:

    https://www.harmonicinc.com/4k-demo-footage-download/

    You will need to register a (throw away) email but the samples are of very high quality, they have about 45sec-1min ProRes clips and 5min very high quality avc clips; I have downloaded them all, for this test I used 16_raptors 5994 pr422hq uhd 45sec.mov and I encoded them like this:

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec libx264 -pix_fmt yuv420p -preset medium -crf 18 16_raptors_X264_medium.mp4

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv420p -preset slow -rc constqp -global_quality 25 16_raptors_NVENC_H264.mp4

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv420p -preset slow -rc constqp -global_quality 24 16_raptors_NVENC_H264_2.mp4

    As you will see by the file sizes CRF 18 falls between global_quality 24 and 25 with a bit rate closer to 24.

I'll leave quality assessment up to you, though I would argue that since these are 4k clips, unless you have a 4k monitor the only reliable way to test quality is via PSNR and SSIM. And before I hear the usual BS about testing x264 with either -tune psnr or -tune ssim, bear in mind that nvenc also has AQ, spatial and temporal, and according to NVIDIA that negatively impacts PSNR and SSIM as well, so in my book it's a level playing field.
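
    For anyone who wants to reproduce this kind of check, the usual way (and the form used later in this thread) is ffmpeg's ssim/psnr filters run against the source; the file names here are placeholders:
    Code:
    ffmpeg -i encode.mp4 -i reference.mov -lavfi "ssim;[0:v][1:v]psnr" -f null -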

    More this weekend in a separate dedicated thread.
  16. Originally Posted by sophisticles View Post
    Here's a very quick and dirty test for you, a) because I don't want to highjack the thread, I will be posting a full comparison this weekend and b) because I have to get ready for work.

    The tests were done on Ubuntu 17.04 using the latest ffmpeg build, the source file was from here:

    https://www.harmonicinc.com/4k-demo-footage-download/

    You will need to register a (throw away) email but the samples are of very high quality, they have about 45sec-1min ProRes clips and 5min very high quality avc clips; I have downloaded them all, for this test I used 16_raptors 5994 pr422hq uhd 45sec.mov and I encoded them like this:

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec libx264 -pix_fmt yuv420p -preset medium -crf 18 16_raptors_X264_medium.mp4

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv420p -preset slow -rc constqp -global_quality 25 16_raptors_NVENC_H264.mp4

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv420p -preset slow -rc constqp -global_quality 24 16_raptors_NVENC_H264_2.mp4

    As you will see by the file sizes CRF 18 falls between global_quality 24 and 25 with a bit rate closer to 24.

    I'll leave quality assessment up to you, though I would argue since these are 4k clips, unless you have a 4k monitor the only reliable way to test quality is via PSNR and SSIM and before I hear the usual BS about testing x264 with either -tune psnr or -tune ssim bear in mind that nvenc also has AQ, spatial and temporal and according to NVIDIA that negatively impacts PSNR and SSIM as well, so in my book it's a level playing field.

    More this weekend in a separate dedicated thread.



    Thanks for posting.

    Maybe PM a mod to split this into a separate thread


    1) I would argue that using proper settings is not BS at all .

    Because one encoder "lacks" a certain feature , that should not be basis for penalizing another encoder . At minimum you should be comparing "apples to apples." You haven't used spatial or temporal AQ switches for the NVEnc encode, but you have for the x264 encode.... Thus using metrics is invalidated.

    This doesn't reflect real usage either. For example, if you have a film source you'd use proper settings , at least something like --tune film etc... Simple cartoons would use other appropriate settings and so forth . That's why proper comparisons solicit input from the developers as to what settings they would like to use for that scenario



    2) You're using the wrong settings for ffmpeg nvenc. You need to input NV12 "-pix_fmt NV12", otherwise you get chroma errors. YV12 (or yuv420p, p for planar) and NV12 are functionally 4:2:0, but not all programs handle it the same way. NVEnc doesn't "like" YV12 despite claiming to support it. Rigaya's NVEncC YV12=>NV12 conversion is automatic and has speed optimizations using SSE/AVX/AVX2 etc... I don't think ffmpeg does yet

    Code:
    ffmpeg -h encoder=nvenc
    Supported pixel formats: yuv420p nv12 p010le yuv444p yuv444p16le bgr0 rgb0 cuda
    .
    .
    Do you see the chroma error differences here ? At least in the windows ffmpeg nvenc version, there are problems with feeding yuv420p.

    I always recommend running low level tests, test patterns, etc.. before starting with different sources. (Instead of wasting time doing 1000's of "wrong" encodes )

    ffmpeg nvenc yuv420p
[attached image: 1_ffmpeg_nvenc_yuv420p.png]

    ffmpeg nvenc nv12
[attached image: 2_ffmpeg_nvenc_nv12.png]

    ffmpeg nvenc yuv420p
[attached image: 3_ffmpeg_nvenc_yuv420p_roof.png]

    ffmpeg nvenc nv12
[attached image: 4_ffmpeg_nvenc_nv12_roof.png]


3) You'll get better ffmpeg nvenc encodes if you customize parameters like -refs -bf -g (see the sketch after point 4). It's enormously fast, so the slight speed penalty is worth it IMO to show it in the best light... It's still going to be several times faster than x264. Rigaya's NVEncC uses higher defaults like 4 reference frames instead of 2.

    At minimum you should know what settings are actually being used "underneath the hood" . For example early ffmpeg nvenc h264 implementations didn't even use bframes with default settings (NVEnc HEVC still doesn't even have the ability to use them)



    4) Have you actually tested NVEnc AQ modes ?

NVEnc sAQ is fantastic(!) for certain scenarios. It's reminiscent of x264. It can dramatically improve situations with high quantization such as dark shadowy areas, gradients like blue skies. I can post some examples, but it's really as dramatic as some of the early x264 analysis posts I made maybe 8-10 years ago. Just stellar... huge impact and dramatically improves visual quality when used properly. AFAIK, the early NVEnc implementations were actually CPU, so it dramatically slowed down. I don't know what the current status is, or if part of the calcs are run on the card. Either way it's still going to be much faster than x264

    Comparing "apples to apples" AQ is problematic because the scale is different 1-15 for NVEnc sAQ . So again, you'd have to solicit developer or "expert" input as to what settings to use for a given situation. Or if using metrics, just disable it and likewise for x264 , use --tune ssim or psnr



If you want my serious summary of many proper tests: NVEnc is decent, but like every encoder has pros/cons. Obviously it's very fast. But it's not as mature as x264; there are issues with scene changes and fades, and compression efficiency is lower. Higher residuals and less accurate ME. I can post some examples later. It doesn't have as much control or as many switches over settings (more limits imposed), not as customizable. x264 has problems too obviously, but not as severe, and you can get over problem areas by using advanced settings, or even qpfile or zones.
  17. So you can see the chroma error in your nvenc encode, it affects linux ffmpeg nvenc too

    x264
[attached image: x264.png]


    nvenc-2
[attached image: nvenc_2.png]


So if we were doing metrics (even though it's not appropriate unless you use proper settings), that wouldn't show up on something like PSNR-Y, but aggregate PSNR or PSNR-U or PSNR-V would severely penalize that
  18. So, I was thinking about starting a new thread but the more I think about it the more I think this discussion helps answer the OP's original question and so unless anyone objects I'm inclined to continue here.

    To address some of the points brought up by PDR:

The reason I used yuv420p was because I didn't want anyone accusing me of "cheating" or biasing against or for either encoder. Yes, nv12 offers better quality with nvenc than yuv420p, but in my tests rgb0 offers better quality than nv12, and in all honesty, since the source files are 4:4:4, the best option would be to test both x264 and nvenc with yuv444p, which I have also done and will be posting shortly.

    Regarding the settings I used, I wanted to keep it as simple as possible so as not to confuse people with a convoluted command, which in my experience doesn't really alter the quality all that much. My personal experience has been that the low latency hq preset offers the best perceptual image quality over all when coupled with appropriate rate control settings ( I will be posting samples of this in the morning).

    With respect to the encoder, there are 2 common misconceptions about NVENC, one of which PDR incorrectly parroted in his post, which I would like to clear up right now.

NVENC is not a pure hardware based encoder, i.e. implemented via fixed function units on the video card; it's a hybrid approach. Which brings me to the second misconception, one that PDR seems to believe: that part of the nvenc encoder runs on the cpu. It does not.

    The confusion arises from some early Nvidia documentation where it stated that some features, such as AQ were implemented in "software". Some people, understandably so, took that to mean that the calculations were being handled by the cpu. This is not the case, later documentation, such as the linked to developer's guide clearly states:

    8.4 ENCODER FEATURES USING CUDA

    Although the core video encoder hardware on GPU is completely independent
    of CUDA cores or graphics engine on the GPU, following encoder features internally use CUDA
    for hardware acceleration. Note that the impact of enabling these features on overall
    CUDA or graphics performance is minimal and this list is provided purely for
    information purposes.

    Two pass rate control modes for high quality presets.

    Look-ahead

    All adaptive quantization modes.

    Encoding with inputs in RGB formats

This of course leads to the realization that a higher end video card, with faster memory and a higher clocked gpu, will result in faster encoding.

On the subject of Adaptive Quantization (PDR seems to be particularly enamored with Spatial AQ), under "recommended settings" it states "Temporal AQ in general gives better quality than Spatial AQ but is computationally complex".

    The below command line is a direct result of Nvidia's recommended settings for optimal quality:

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt nv12 -preset hq -rc vbr -bf 3 -g 120 -temporal-aq 1 -b:v 52.5M -bufsize:v 200M 16_raptors_best.mp4

    I calculated the PSNR and SSIM of the x264+medium file I posted earlier and this new one I just encoded and here are the results:

    SSIM Y:0.959238 (13.897472) U:0.975642 (16.133561) V:0.981609 (17.354046) All:0.965701 (14.647154)
    PSNR y:41.928283 u:45.644368 v:47.033381 average:42.957816 min:39.503639 max:48.255363
    ffmpeg -i 16_raptors_X264_medium.mp4 -i "16_raptors 5994 pr422hq uhd 45sec.mov" -lavfi "ssim;[0:v][1:v]psnr" -f null -

    SSIM Y:0.958001 (13.767614) U:0.972802 (15.654552) V:0.980596 (17.120981) All:0.964234 (14.465243)
    PSNR y:41.763287 u:44.819323 v:46.434988 average:42.700097 min:38.632995 max:48.545092
    ffmpeg -i 16_raptors_best.mp4 -i "16_raptors 5994 pr422hq uhd 45sec.mov" -lavfi "ssim;[0:v][1:v]psnr" -f null -

    They are very comparable and to my eyes I think both encodes look for all practical purposes the same.

One last thing: nvenc is evidently "extensible", meaning it seems that since some parts are implemented via CUDA, a talented developer should be able to "hack" in new features. For instance, NVIDIA added Sample Adaptive Offset (SAO) to nvenc with its latest SDK (which is integrated in the latest ffmpeg build I am using); said feature is also implemented via CUDA, and it's always on because NVIDIA considers it such an important quality improvement, but it does require a Pascal card. Because it is implemented in CUDA, I believe it's probably enforced by the drivers to not let it run on non-Pascal cards.

    More tomorrow.
[attached: NVENC_VideoEncoder_API_ProgGuide.pdf]
  19. The NVENC encode has lost a lot of fine, low contrast, detail. This can be seen in the thin individual strands around edges of the birds and is especially obvious as posterization artifacts in the blue sky background in frames ~1250 to ~1750.
  20. Originally Posted by sophisticles View Post

    The reason I used yuv420p was because I didn't want anyone accusing me of "cheating" or biasing against or for either encoder, yes nv12 offers better quality with nvenc than yuv420p but in my tests rgb0 offers better quality than nv12 and in all honesty since the source files are 4:4:4 the best option would be to test both x264 and nvenc with yuv444p, which I have also done and will be posting shortly.
    No, the source is 10bit 422

Yes, it's not a proper comparison when you disable psy options for one (they are off by default for ffmpeg nvenc) but leave them on for another and wish to use SSIM or PSNR





    With respect to the encoder, there are 2 common misconceptions about NVENC, one of which PDR incorrectly parroted in his post, which I would like to clear up right now.

    NVENC is not a pure hardware based encoder, i.e. implemented via fixed function units on the video card, it's a hybrid approach,
    Yes, recall we had this discussion before. You posted that PDF in other threads. I never said it's pure hardware above

    which brings me to the second misconception that PDR seems to think is true, namely there is no part of the nvenc encoder that runs on the cpu.
    This doesn't make any sense either, nor did I say or imply that - can you encode with a GPU only without CPU ? Come on...

    The confusion arises from some early Nvidia documentation where it stated that some features, such as AQ were implemented in "software". Some people, understandably so, took that to mean that the calculations were being handled by the cpu. This is not the case, later documentation, such as the linked to developer's guide clearly states:
    .
    .
    On the subject of Adaptive Quantization, which PDR seems to be particularly enamored with Spatial AQ, under "recommended settings" it states "Temporal AQ in general gives better quality than Spatial AQ but is computationally complex".
    No , that's your misunderstanding

    In actual testing, there is a change in the AQ effectiveness about 1-2 years ago compared to recently . Spatial is better than temporal. You obviously haven't tested it and are just copy/pasting or taking Nvidia for their word. I'm wondering if the actual ffmpeg nvenc implementation for AQ is run on CPU because it looks a hell of a lot like haali's AQ implementation (which is what x264's AQ is based on). I mean I'm not looking at the code, just the results

    I mean dramatic change . As effective as when x264 got AQ implementation. If you recall , early NVEnc testing showed the Nvidia AQ modes were virtually useless. Both of them. In more recent tests (within a few months) it makes unusable encodes, usable when you use the proper settings. And it relates to video editing and the OP . One of the most common complaints on video editing boards is banding in skies. You are seeing that in your nvenc encode - high quantization, blocking. Licensed encoders from typical NLE's don't have control over AQ (you need higher end, enterprise encoder) . x264 was able to fix that for free (I mean without enormous increases in bitrate). Now I'm seeing that ability with NVEnc to an extent. So I'm wondering if they ditched one implementation for another
  21. Do some of your own testing instead of "parroting" some PDF

    e.g. NVEnc, same settings, same bitrate etc... only difference tAQ vs sAQ , both 12 (they go 1-15 scale with ffmpeg nvenc implementation)

    tAQ
[attached image: tAQ.png]

    tAQ enhanced view
[attached image: tAQ_enhanced.png]

    sAQ
[attached image: sAQ.png]

    sAQ enhanced view
[attached image: sAQ_enhanced.png]


    Massive difference. It makes unusable encodes , usable. Almost "x264-like" when using custom AQ settings. This is a major change from before .
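
    The exact command lines aren't given in the post above; as a rough sketch, a tAQ-vs-sAQ pair with ffmpeg's h264_nvenc options might look like the following (file names and bitrate are placeholders, and as far as I know -aq-strength only applies when spatial AQ is enabled in the ffmpeg wrapper):
    Code:
    # temporal AQ
    ffmpeg -i input.mov -c:v h264_nvenc -pix_fmt nv12 -preset slow -rc vbr -b:v 8M -temporal-aq 1 out_taq.mp4
    # spatial AQ at strength 12
    ffmpeg -i input.mov -c:v h264_nvenc -pix_fmt nv12 -preset slow -rc vbr -b:v 8M -spatial-aq 1 -aq-strength 12 out_saq.mp4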
22. Originally Posted by poisondeathray View Post
    No, the source is 10bit 422

    Yes, it's not a proper comparison when you disable psy options for one (they are off by default for ffmpeg nvenc) but leave them on for another and wish to use SSIM or PSNR
    Taken straight from media info:

    Video
    ID : 1
    Format : ProRes
    Format version : Version 1
    Format profile : 4444
    Codec ID : ap4h
    Duration : 45 s 20 ms
    Source duration : 45 s 28 ms
    Bit rate mode : Variable
    Bit rate : 2 570 Mb/s
    Width : 3 840 pixels
    Height : 2 160 pixels
    Display aspect ratio : 16:9
    Frame rate mode : Constant
    Frame rate : 59.940 (60000/1001) FPS
    Chroma subsampling : 4:4:4
    Scan type : Progressive
    Bits/(Pixel*Frame) : 5.170
    Stream size : 13.5 GiB (100%)
    Source stream size : 13.5 GiB (100%)
    Title : Core Media Video
    Writing library : Apple
    Language : English
    Encoded date : UTC 2016-10-21 00:18:08
    Tagged date : UTC 2016-10-21 00:19:04
    Color primaries : BT.709
    Transfer characteristics : BT.709
    Matrix coefficients : BT.709

    BTW, where did you get the idea that psy options are off by default with ffmpeg? Do you know what is and isn't turned on when you select -preset slow?

    This doesn't make any sense either, nor did I say or imply that - can you encode with a GPU only without CPU ? Come on...
What you said is that you seem to think AQ is cpu driven. It is not; there is no part of nvenc that is cpu driven. Portions are powered by the fixed function blocks and portions are powered by the CUDA cores; you can confirm this by checking the video engine and gpu usage during encodes using different settings. The cpu only comes into play in decoding and feeding the data to the gpu, but that's it. Go back and reread what you said and then you'll understand the comment.

    Spatial is better than temporal. You obviously haven't tested it and are just copy/pasting or taking Nvidia for their word.
    You mean listening to the company that actually designed the hardware and encoder and believing the developer's documentation they put out? How foolish of me.

    @Jagabo, unless you have a 4k monitor I don't think you can honestly say that the supposed shortcomings of nvenc you claim to see are the result of the encode and not scaling errors or related issues with the player and/or drivers you're using.

    If you're going to try and nitpick the quality of the encodes, you should have a 4k 10bit monitor that supports 100% of the BT.709 color gamut, with a pro caliber video card (Quadro/FirePro).

This is one of the problems I have with people that dismiss objective metrics such as PSNR and SSIM and instead insist that subjective measurements, such as which one looks better, are somehow superior: unless you have the proper hardware, your visual tests can't be trusted.

Here are 4 tests I did to check the effectiveness of Spatial vs Temporal AQ (plus an HEVC run); here are the command lines:

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv444p -preset hp -rc vbr -bf 3 -g 120 -strict_gop 1 -qmin 0 -qmax 69 -spatial-aq 0 -temporal-aq 0 -qcomp 0.6 -qdiff 4 -qblur 0.5 -b:v 52.5M -bufsize:v 200M 16_raptors_1.mp4

time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv444p -preset hp -rc vbr -bf 3 -g 120 -strict_gop 1 -qmin 0 -qmax 69 -spatial-aq 1 -temporal-aq 0 -qcomp 0.6 -qdiff 4 -qblur 0.5 -b:v 52.5M -bufsize:v 200M 16_raptors_2.mp4 <-- SAQ defaults to a strength of 8 unless manually set to something else.

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv444p -preset hp -rc vbr -bf 3 -g 120 -strict_gop 1 -qmin 0 -qmax 69 -spatial-aq 0 -temporal-aq 1 -qcomp 0.6 -qdiff 4 -qblur 0.5 -b:v 52.5M -bufsize:v 200M 16_raptors_3.mp4

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec h264_nvenc -pix_fmt yuv444p -preset hp -rc vbr -bf 3 -g 120 -strict_gop 1 -qmin 0 -qmax 69 -spatial-aq 1 -temporal-aq 1 -qcomp 0.6 -qdiff 4 -qblur 0.5 -b:v 52.5M -bufsize:v 200M 16_raptors_4.mp4

    time ffmpeg -i "16_raptors 5994 pr422hq uhd 45sec.mov" -vcodec hevc_nvenc -pix_fmt p010le -preset hp -profile main10 -rc vbr -g 120 -qmin 0 -qmax 69 -spatial_aq 1 -aq-strength 15 -qcomp 0.6 -qdiff 4 -qblur 0.5 -b:v 52.5M -bufsize:v 200M 16_raptors_5.mp4

    ffmpeg -i 16_raptors_1.mp4 -i "16_raptors 5994 pr422hq uhd 45sec.mov" -lavfi "ssim;[0:v][1:v]psnr" -f null -
    SSIM Y:0.956881 (13.653266) U:0.960096 (13.989870) V:0.975186 (16.053106) All:0.964054 (14.443549)
    PSNR y:41.566372 u:42.644416 v:44.708516 average:42.786461 min:38.633825 max:48.968479

    ffmpeg -i 16_raptors_2.mp4 -i "16_raptors 5994 pr422hq uhd 45sec.mov" -lavfi "ssim;[0:v][1:v]psnr" -f null -
    SSIM Y:0.955449 (13.511423) U:0.959552 (13.931077) V:0.974741 (15.975854) All:0.963248 (14.347135)
    PSNR y:41.273894 u:42.534675 v:44.595794 average:42.594473 min:38.572117 max:48.840695

    ffmpeg -i 16_raptors_3.mp4 -i "16_raptors 5994 pr422hq uhd 45sec.mov" -lavfi "ssim;[0:v][1:v]psnr" -f null -
    SSIM Y:0.956899 (13.655114) U:0.960112 (13.991609) V:0.975227 (16.060249) All:0.964079 (14.446574)
    PSNR y:41.571317 u:42.647453 v:44.718055 average:42.791732 min:38.628517 max:48.978322

    ffmpeg -i 16_raptors_4.mp4 -i "16_raptors 5994 pr422hq uhd 45sec.mov" -lavfi "ssim;[0:v][1:v]psnr" -f null -
    SSIM Y:0.954624 (13.431724) U:0.958805 (13.851568) V:0.974114 (15.869355) All:0.962514 (14.261351)
    PSNR y:41.091334 u:42.427535 v:44.403277 average:42.435148 min:38.427054 max:48.578024

    ffmpeg -i 16_raptors_5.mp4 -i "16_raptors 5994 pr422hq uhd 45sec.mov" -lavfi "ssim;[0:v][1:v]psnr" -f null -
    SSIM Y:0.960737 (14.060128) U:0.974007 (15.851444) V:0.975386 (16.088178) All:0.965390 (14.607981)
    PSNR y:41.929868 u:45.581750 v:47.156846 average:42.961431 min:38.719837 max:52.425965

    As soon as I decide on the next source to test with I'll post some more test encodes.
  23. Originally Posted by sophisticles View Post

    Taken straight from media info:
    ..
    You're right - I never downloaded the file - it is mislabelled. "pr422hq" is 10bit422

    BTW, where did you get the idea that psy options are off by default with ffmpeg?
    It says so in the help for AQ . Also , there is a difference when using the settings.


    Do you know what is and isn't turned on when you select -preset slow?
I don't have a verbose list if that's what you're asking, and the ffmpeg nvenc.c code doesn't help much besides saying 2pass HQ. But I do know for ffmpeg nvenc it changes bframes from zero to three compared to default settings. There are probably other differences that might be revealed if you dig deeper into a stream analysis. Could you find any docs from nvidia shedding more info on that?




    Spatial is better than temporal. You obviously haven't tested it and are just copy/pasting or taking Nvidia for their word.
    You mean listening to the company that actually designed the hardware and encoder and believing the developer's documentation they put out? How foolish of me.
    Not necessarily foolish. I'm just cautious - I don't readily accept claims at face value. If a x264 or x265 developer claims something, I go and test it to see for myself too. I don't "blindly" accept anything

    @Jagabo, unless you have a 4k monitor I don't think you can honestly say that the supposed shortcomings of nvenc you claim to see are the result of the encode and not scaling errors or related issues with the player and/or drivers you're using.
    If you don't have a UHD/4K display, you can view it 1:1 . Maybe not motion tests but parts of single frame analyses are still valid

    This is one of the problems I have with people that dismiss objective metrics such as PSNR and SSIM and instead insist that subjective measurements, such as which one looks better, is somehow a superior measurement, unless you have the proper hardware then your visual tests can't be trusted.
    There are many problems with PSNR and SSIM . Discussed extensively in other threads . I won't bother going into it here. But you have to understand what they really mean , and how to use them in context

    Here are some test results with x264 tune psnr, vs nvenc slow preset at 1080p . Am I to blindly believe that x264 is >2x better ? The NVEnc encode is 2x the filesize and still "lower" in "quality."

    x264 6.9Mb/s
    [Parsed_psnr_2 @ 000000000239b9c0] PSNR y:44.104502 u:44.788000 v:46.334324 average:44.519808 min:40.863521 max:49.397296

    NVEnc 6.9Mb/s
    [Parsed_psnr_2 @ 00000000055e4620] PSNR y:42.513370 u:44.024464 v:45.255745 average:43.103264 min:37.982297 max:49.226881

    NVEnc 8.6Mb/s
    [Parsed_psnr_2 @ 00000000055b8da0] PSNR y:42.998222 u:44.269159 v:45.505405 average:43.530671 min:39.133275 max:49.333274

    NVEnc 10.35Mb/s
    [Parsed_psnr_2 @ 000000000607a080] PSNR y:43.457583 u:44.514088 v:45.749977 average:43.936171 min:39.636442 max:49.759078

    NVEnc 12.1Mb/s
    [Parsed_psnr_2 @ 00000000060fecc0] PSNR y:43.734361 u:44.659110 v:45.896306 average:44.179028 min:40.058205 max:49.759078

    NVEnc 13.8Mb/s
    [Parsed_psnr_2 @ 0000000001ddee80] PSNR y:44.046695 u:44.818566 v:46.047692 average:44.449938 min:40.586992 max:49.961077


    PS. these are actual results . But I would argue they don't reflect actual usage scenario or "quality" . If you use custom x264 settings, it's more like 2-2.5x "better" but that doesn't reflect reality . Tuning for PSNR gives higher numbers usually but makes it look like crap .

At the end of the day, most people just want something similar in quality to the input (output close to input quality). Is that too much to ask of an NLE? It seems that way. You need a much higher bitrate using a bundled NLE encoder to achieve a certain actual quality level than with x264, especially when using x264 custom settings. That's what is important to me. At what bitrate can you achieve a certain visual quality level? What settings can get rid of that crappy blocking in the skies or horrible looking fades, etc...? Not some useless metric. It's not some far fetched scenario either - this gets discussed all the time on video editing forums.

    And I'm telling you , that sAQ is the 1st step for NVEnc becoming more useful in some scenarios . It's still not production ready. Right now it's an immature encoder with many issues (but much better than earlier "GPU" encoders), but if either NVidia or end user patches can continue to improve on it more, that would be awesome. You certainly can't complain about the speed.
    Last edited by poisondeathray; 30th Apr 2017 at 15:40.
  24. Originally Posted by poisondeathray View Post
    At the end of the day, most people just want something similar in quality to input (output close to input quality). Is that too much to ask for a NLE ?
    The answer is an obvious yes, because a NLE is a Non Linear Editor, not a Non Linear Encoder. The job of an NLE is to allow you to input an acquisition format, edit it as you so desire and then output a mastering format not a delivery format.

    It's the job of an encoder to then take a mastering format and produce a high quality delivery format.

    As for nvenc being an "immature" encoder, I would say that x265 is also an immature encoder (I have seen encodes that are stunning and encodes that are crappy) but so is x264, it still has well known issues with dark areas and the solution is basically to let the bit rate balloon up.

    BTW, if you didn't download the source file I linked to, then how can you make a valid judgment with regards to nvenc quality vs x264 quality?
  25. Originally Posted by sophisticles View Post



    What you said is that you seem to think AQ is cpu driven, it is not, there is no part of nvenc that is cpu driven, portions are powered by the fixed function blocks and portions are powered by the CUDA cores, you can confirm this by checking the video engine and gpu usage during encodes using different settings. The cpu only comes into place in decoding and feeding the data to the gpu, but that's it. Go back and reread what you said and then you'll understand the comment.
    I said I wasn't sure what the current status is .

    Can you confirm what is actually used in the actual ffmpeg implementation of nvenc today ?

    In this thread linked, and in the Japanese thread, selur reported that AQ was a software feature . Mind you that was for NVencC . I don't know the basis or validity for that, you'd have to check with selur . That's probably where I got early implementations being "CPU" .

    https://forum.videohelp.com/threads/370223-NVEncC-by-rigaya-NVIDIA-GPU-encoding/page3#post2420905
  26. Originally Posted by sophisticles View Post
    Originally Posted by poisondeathray View Post
    At the end of the day, most people just want something similar in quality to input (output close to input quality). Is that too much to ask for a NLE ?
    The answer is an obvious yes, because a NLE is a Non Linear Editor, not a Non Linear Encoder. The job of an NLE is to allow you to input an acquisition format, edit it as you so desire and then output a mastering format not a delivery format

    It's the job of an encoder to then take a mastering format and produce a high quality delivery format.
Yes, I agree, but it's nice to be able to have the option to export from the NLE in a specified delivery format of your choosing. Not all projects are massive undertakings.

And not all delivery formats are necessarily "high quality". Sometimes you have other targets like low bandwidth web, or portable devices, etc...



    BTW, if you didn't download the source file I linked to, then how can you make a valid judgment with regards to nvenc quality vs x264 quality?
    I didn't download that big source file until just now.

    In case you didn't know , I've done many tests on my own with multiple sources. That's how I know your tests and settings were wrong before even looking at the video


    As for nvenc being an "immature" encoder, I would say that x265 is also an immature encoder (I have seen encodes that are stunning and encodes that are crappy) but so is x264, it still has well known issues with dark areas and the solution is basically to let the bit rate balloon up.
NVEnc is at a stage like x264 was many years ago. Except many times faster. If you go way back a few years, I was clamoring about x264 AQ too. Go find those posts; there is an eerie similarity

    No, for x264 there are other settings you can use, including AQ modes and zones . You can "fix" almost anything without ballooning everything
  27. Originally Posted by sophisticles View Post
    @Jagabo, unless you have a 4k monitor I don't think you can honestly say that the supposed shortcomings of nvenc you claim to see are the result of the encode and not scaling errors or related issues with the player and/or drivers you're using.
    I'm using AviSynth to open the videos and crop so I can see 1:1 what's in them. I can also interleave them so it's easy to step back and forth between the same frame of each video.
  28. Originally Posted by poisondeathray View Post
    Can you confirm what is actually used in the actual ffmpeg implementation of nvenc today ?

    In this thread linked, and in the Japanese thread, selur reported that AQ was a software feature . Mind you that was for NVencC . I don't know the basis or validity for that, you'd have to check with selur . That's probably where I got early implementations being "CPU" .
You mean other than the official NVIDIA SDKs and programming guides that explicitly state that said features, including SAQ and TAQ, are powered by the CUDA cores? There's also the obvious test: run a test encode sans AQ and monitor cpu use, gpu use, encoding engine use, video ram use and system ram use, take that as a baseline, then run the exact same encode, only this time enable either SAQ or TAQ or both, and again monitor the usage of the various parts of the computer. You can verify this on Windows via GPU-Z; I have verified this on Linux via the NVIDIA control panel.

    The misunderstanding comes from many people assuming that "software" implies running on a cpu, that is not what it means, when we talk about a hardware accelerated feature we are talking about something implemented on fixed function units, like ASICs, when we talk about a "software" implementation we're talking about a feature that is configurable by a programmer that's executed on a general purpose processor.

GPUs have been general purpose processors since the GeForce 3 and its fully programmable vertex shaders via HLSL; today's gpus are fully programmable using a wide variety of languages, and thus it's proper to call any feature that runs on a gpu a "software" feature.

    It should be easy to verify that Rigaya's NVencC operates in a similar fashion, just try an encode with and without SAQ and see if your gpu usage changes or if your cpu usage changes.
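
    One way to run that check on Linux while an encode is going (nvidia-smi ships with the NVIDIA driver; the exact columns and query fields depend on the driver version):
    Code:
    # per-second utilization including the enc/dec engine columns
    nvidia-smi dmon -s u
    # or a simple polling query of GPU and memory utilization
    nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv -l 1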
  29. Good idea

Just because something is outlined in an SDK doesn't mean it's implemented in the same fashion or with all features. I'd trust your actual testing data more



    For ffmpeg I was getting mixed results , almost not statistically significant , but I think it was because I was using NV decode/encode.

So using Intel decode to feed it, the findings are clearer. On the Nvidia GPU, there is definitely a 1-3% higher GPU load (which I think is the "cuda cores"), a 1-2% higher memory load, and at peak ~20-25% more video engine load when using sAQ.

    Haven't tested NVEncC yet, or tAQ, but I'll report back if things are significantly different or if I get chance to test later .
  30. What video card did you say you have again?

    On Ubuntu 17.04, with the latest drivers, a GTX1050 and the latest ffmpeg build, depending on the resolution and what exactly I'm doing I get much higher usage of the video card, I once made a 4k mosaic of 4 distinct 1080p with rc-lookahead 32, TAQ, SAQ 15, 4 b-frames, 4 reference frames, 48 mb/s and the cuda cores and video engine were pegged at nearly 100% each and out of 2gb frame buffer 1.8gb was being used.

    The gpu usage definitely goes up on my system if lookahead, SAQ or TAQ are used as does the buffer used.