VideoHelp Forum
  1. Video Grain (Member since Nov 2020, Rainbow)
    Hello members!

    Why is the NVIDIA GTX 1660 faster at 8K HEVC encoding than the RTX 3070?

    Watch the test here:

    https://www.youtube.com/watch?v=YuV0ujgR5Qo
  2. isapc (Member since Aug 2008, Australia)
    The RTX 3070 is a new hardware platform. Ampere is the codename for the graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to both the Volta and Turing architectures. It will take some time for the drivers and software to catch up and mature.
  3. Video Grain (Member since Nov 2020, Rainbow)
    Originally Posted by isapc
    The RTX 3070 is a new hardware platform. Ampere is the codename for the graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to both the Volta and Turing architectures. It will take some time for the drivers and software to catch up and mature.
    AFAIK, the only difference in Ampere is that it can encode HEVC at 12 bit, and it got an AV1 HW decoder, but it is not faster at either encoding or decoding.
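
    One way to see what a given driver and ffmpeg build actually expose is to list the NVENC HEVC encoder's capabilities (supported profiles and pixel formats); this is a stock ffmpeg command, so take it as a sanity check rather than proof either way:

    Code:
    ffmpeg -hide_banner -h encoder=hevc_nvenc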
  4. LigH.de (Member since Aug 2013, Central Germany)
    Video encoding is done in a separate area of the Nvidia GPU (NVENC), not related to the units responsible for 3D graphics. But as the author explained, utilization in other areas may limit the speed of the NVENC block, so several small factors may add up to a few percent of lower efficiency. Still, nothing to seriously worry about; the order of magnitude is the same. And if you prefer quality, you won't use a GPU encoder anyway...
  5. Video Grain (Member since Nov 2020, Rainbow)
    Originally Posted by LigH.de
    Video encoding is done in a separate area of the Nvidia GPU (NVENC), not related to the units responsible for 3D graphics. But as the author explained, utilization in other areas may limit the speed of the NVENC block, so several small factors may add up to a few percent of lower efficiency. Still, nothing to seriously worry about; the order of magnitude is the same. And if you prefer quality, you won't use a GPU encoder anyway...
    People must use GPU encoding; 12-bit NVENC HEVC is damn good quality. The commercially available CPUs are still too slow for video compression. Don't forget, even 4K@30 is backward, and even 4K@60 is backward too. Nowadays only 8K@60 is COOL, and the retail CPUs are not fast enough for that.
  6. LigH.de (Member since Aug 2013, Central Germany)
    Originally Posted by Video Grain
    Nowadays only 8K@60 is COOL
    I will take that with a Video Grain of salt and enjoy not needing everything that gets hyped. Keeps the blood pressure in a healthy range. Cool things are for rich people; the rest must stay humble.
  7. Video Grain (Member since Nov 2020, Rainbow)
    Originally Posted by LigH.de
    Originally Posted by Video Grain
    Nowadays only 8K@60 is COOL
    I will take that with a Video Grain of salt and enjoy not needing everything that gets hyped. Keeps the blood pressure in a healthy range. Cool things are for rich people; the rest must stay humble.
    It does not change the fact that only 8K60p (and above) is COOL, and CPUs are too slow for that.
  8. Originally Posted by Video Grain
    AFAIK, the only difference in Ampere is that it can encode HEVC at 12 bit
    Where have you read that? Everything I've read says Turing and Ampere use the same encoder.

    https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
  9. Originally Posted by LigH.de
    Video encoding is done in a separate area of the Nvidia GPU (NVENC), not related to the units responsible for 3D graphics.
    This is grossly misleading: portions of the encoding are done on the dedicated ASIC and portions are done on the GPU cores; when the encoding is done in RGB instead of NV12, the whole thing is done on the GPU cores:

    https://docs.nvidia.com/video-technologies/video-codec-sdk/nvenc-video-encoder-api-pro...ide/index.html

    Although the core video encoder hardware on GPU is completely independent of CUDA cores or graphics engine on the GPU, following encoder features internally use CUDA for hardware acceleration.

    Note: The impact of enabling these features on overall CUDA or graphics performance is minimal, and this list is provided purely for information purposes.
    Two-pass rate control modes for high quality presets
    Look-ahead
    All adaptive quantization modes
    Weighted prediction
    Encoding of RGB contents
    I can tell you from personal experience that when simply switching from -pix_fmt nv12 to -pix_fmt rgb with NVENC via ffmpeg, GPU use jumped to nearly 100% and NVENC usage dropped to 0% while encoding a 4K video.

    In theory, with higher resolutions + RGB encoding, the RTX 3070 should be way faster than a GTX 1660.
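
    If anyone wants to reproduce it, the commands look roughly like this (file names are placeholders and these are the stock ffmpeg hevc_nvenc switches, so treat it as a sketch rather than my exact command line):

    Code:
    # NV12 input: the NVENC ASIC does the heavy lifting, CUDA load stays low
    ffmpeg -i input.mp4 -pix_fmt nv12 -c:v hevc_nvenc -preset slow out_nv12.mp4

    # RGB input: ffmpeg substitutes an RGB format the encoder accepts (bgr0);
    # per the doc list above, encoding of RGB contents is one of the
    # CUDA-assisted features, so GPU core utilization climbs
    ffmpeg -i input.mp4 -pix_fmt rgb24 -c:v hevc_nvenc -preset slow out_rgb.mp4

    # look-ahead and adaptive quantization are two more items from that list;
    # enabling them also puts some work on the CUDA cores
    ffmpeg -i input.mp4 -pix_fmt nv12 -c:v hevc_nvenc -rc-lookahead 20 -spatial_aq 1 out_aq.mp4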
  10. Video Grain (Member since Nov 2020, Rainbow)
    Originally Posted by sophisticles
    Originally Posted by LigH.de
    Video encoding is done in a separate area of the Nvidia GPU (NVENC), not related to the units responsible for 3D graphics.
    This is grossly misleading: portions of the encoding are done on the dedicated ASIC and portions are done on the GPU cores; when the encoding is done in RGB instead of NV12, the whole thing is done on the GPU cores:

    https://docs.nvidia.com/video-technologies/video-codec-sdk/nvenc-video-encoder-api-pro...ide/index.html

    Although the core video encoder hardware on GPU is completely independent of CUDA cores or graphics engine on the GPU, following encoder features internally use CUDA for hardware acceleration.

    Note: The impact of enabling these features on overall CUDA or graphics performance is minimal, and this list is provided purely for information purposes.
    Two-pass rate control modes for high quality presets
    Look-ahead
    All adaptive quantization modes
    Weighted prediction
    Encoding of RGB contents
    I can tell you from personal experience that when simply switching from -pix_fmt nv12 to -pix_fmt rgb with NVENC via ffmpeg, GPU use jumped to nearly 100% and NVENC usage dropped to 0% while encoding a 4K video.

    In theory, with higher resolutions + RGB encoding, the RTX 3070 should be way faster than a GTX 1660.

    In practice, the mobile 1660 Ti in my laptop is not really slower (maybe 1-2%) than the 2070 in my PC. I can confirm that, and it holds for ALL available YUV formats, but I have never encoded videos in RGB. Are you sure that RGB encoding is even possible on NVENC chips?

    Why do you use FFmpeg? It is very backward, exposes very few NVENC options, and many basic options are still lacking. Why don't you use Rigaya's more sophisticated and modern NVEnc?
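
    For reference, a basic NVEncC command is something like this (typed from memory, so double-check the exact switches against Rigaya's readme):

    Code:
    NVEncC --avhw -i input.mp4 --codec hevc --output-depth 10 --preset quality -o output.mp4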
  11. ^So basically what you just said is that you enjoy not understanding what you read.
  12. Unless something has changed, NVEnc does not support actual RGB export, only import of an RGB stream.

    "Encoding of RGB contents" is rather ambiguous, but it refers to hardware CUDA acceleration for RGB input, not output. I.e. the RGB=>YUV conversion step is accelerated by the "CUDA cores" (as are those few other operations listed). The YUV encoding of the actual export video is not done by the "CUDA cores" (unless you are using the old deprecated CUDA encoder, not NVEnc).
  13. Originally Posted by poisondeathray
    Unless something has changed, NVEnc does not support actual RGB export, only import of an RGB stream.

    "Encoding of RGB contents" is rather ambiguous, but it refers to hardware CUDA acceleration for RGB input, not output. I.e. the RGB=>YUV conversion step is accelerated by the "CUDA cores" (as are those few other operations listed). The YUV encoding of the actual export video is not done by the "CUDA cores" (unless you are using the old deprecated CUDA encoder, not NVEnc).
    Feel free to try it yourself and monitor GPU and NVENC usage with -pix_fmt nv12 versus -pix_fmt rgb. I've done hundreds of encodes using both settings; I've seen it with my own eyes.
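
    An easy way to watch both at once is nvidia-smi's device monitor; sm is the CUDA/3D cores and enc/dec are the NVENC/NVDEC blocks, all in % utilization:

    Code:
    nvidia-smi dmon -s u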
  14. Originally Posted by sophisticles
    Originally Posted by poisondeathray
    Unless something has changed, NVEnc does not support actual RGB export, only import of an RGB stream.

    "Encoding of RGB contents" is rather ambiguous, but it refers to hardware CUDA acceleration for RGB input, not output. I.e. the RGB=>YUV conversion step is accelerated by the "CUDA cores" (as are those few other operations listed). The YUV encoding of the actual export video is not done by the "CUDA cores" (unless you are using the old deprecated CUDA encoder, not NVEnc).
    Feel free to try it yourself and monitor GPU and NVENC usage with -pix_fmt nv12 versus -pix_fmt rgb. I've done hundreds of encodes using both settings; I've seen it with my own eyes.

    I'm assuming you meant -pix_fmt rgb24, because "rgb" does not exist in ffmpeg parlance. It auto-selects "bgr0", but the actual output is YV12 (YUV 4:2:0), albeit full-range YUV. NVENC does not support RGB output AFAIK; actual RGB encoding, instead of YUV 4:4:4, is what millions of gamers have been clamoring for. Did you actually check the output file?
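
    Checking takes a single ffprobe line (the output file name is whatever you encoded to):

    Code:
    ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of default=nw=1 output.mp4

    For a YV12 / YUV 4:2:0 file it reports pix_fmt=yuv420p.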

    For RGBP input:

    Both the video engine (~40% load) and the GPU (~15% load) were used with -pix_fmt rgb24; the actual output file was YV12. So that supports what the Nvidia documents say: the GPU CUDA cores are used for the RGB=>YV12 conversion (or at least partially, to help accelerate it).

    Video engine load was ~26% and GPU load ~2% with -pix_fmt nv12; the actual output file was again YV12. That also supports what the Nvidia documents say.
  15. Unfortunately I ended up disassembling my desktop months ago to upgrade both my mother's and my brother's computers. I have been using two laptops since then, so I have no way of double-checking.


