VideoHelp Forum
1. As I already have an X570 motherboard and RAM, bought about 4 months ago, I'm planning to get either the 3900X or the 3950X from AMD, both high-core-count CPUs, and wanted to ask: how many cores can x265/HEVC encoding actually use? I know the program I use can max out an 8-core CPU (tested with a friend's Ryzen 7 3700X), which speeds up video encoding. Also, another question: why do people in other topics say "Note that quality diminishes as the number of cores/threads used to encode a video file increases. Once that number reaches the double digits, the quality decrease starts to become noticeable."? Why should quality decrease if the encode finishes twice as fast thanks to more cores, on the same preset with the same settings? Are people suggesting that a high-core-count CPU can't be used with x265 to produce good encoding results? So I'm in a BIG dilemma here. Please simplify the answers if possible, thanks.
2. KarMa
A single-core encode provides the best quality, as that single core has access to all the information it needs to make the best choices. Multi-core encodes don't get access to all the information, so they make less efficient decisions, and it gets worse with every added core.
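If you want to measure that tradeoff yourself, you can pin x265 to a single frame thread and compare against the default (a minimal sketch using the x265 CLI; the file names and CRF value are just placeholders):

x265 --input source.y4m --preset slower --crf 22 --frame-threads 1 --output single.hevc
x265 --input source.y4m --preset slower --crf 22 --output default.hevc

Since CRF targets roughly constant quality, the size difference between the two outputs at the same CRF gives a feel for the compression-efficiency cost of threading.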
  3. https://www.pugetsystems.com/pic_disp.php?id=56252
"Simply put, the main difference between the AMD Ryzen and Intel 9th Gen processors is that Intel is better at processing H.264/H.265 footage, while AMD Ryzen processors are better at processing RED footage."
    https://www.pugetsystems.com/labs/articles/Premiere-Pro-CPU-Roundup-AMD-Ryzen-3rd-Gen-...X-series-1535/

    Sure, amd is coming out with 32+ core CPUs, but $$$$.

    I'd look carefully at the benchmarks above and cpu price to pick one that fits your budget.

That said, Intel Quicksync on the latest CPUs is very competitive for fast encodes (e.g. 200+ fps h.265 encodes for Blu-ray rips).
    https://www.streamingmedia.com/Articles/Editorial/Featured-Articles/Hardware-Based-Tra...aspx?pageNum=2

    And fast nvidia cards easily break 400+ fps on 2k movie encodes.
    https://helgeklein.com/blog/2019/02/hardware-encode-video-in-h-265-with-free-tools-to-...ve-disk-space/

    .....

So yes, you can go buy a 32+ core AMD CPU for several thousand dollars for software x265 encodes,

    or a cheap $200-500 Intel cpu with Quicksync, even add a $500 nvidia 2060 and easily match and beat using hardware encodes.

    Honestly, before buying a $$$$ cpu, look at the nvidia hardware h.265 encoder benchmarks - I bet you can do much better cheaper with hardware encoding.

But if not, the more cores the merrier. No upper limit afaik on cores utilized by software encoders. All the tests I've seen max out every CPU core on multi-core systems when pushed, up to 128 cores.
    https://medium.com/@singhkays/testing-x265-encoder-scaling-on-a-128-core-azure-vm-for-...r-20bba32aadfc
    ....

Quality, at least on Intel CPUs, didn't change with the number of simultaneous encodes - just slower overall to complete.

    No idea what amd is doing.
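(If you want to try the hardware route before spending on cores, a minimal NVENC sketch with ffmpeg - hevc_nvenc is the encoder name, while the bitrate and file names are placeholders to adjust:

ffmpeg -i input.mkv -c:v hevc_nvenc -preset slow -rc vbr -b:v 8M -c:a copy output.mkv)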
  4. Originally Posted by KarMa View Post
A single-core encode provides the best quality, as that single core has access to all the information it needs to make the best choices. Multi-core encodes don't get access to all the information, so they make less efficient decisions, and it gets worse with every added core.
But if it decreases, does it decrease drastically? Or, for example, something like 1% per added core - say ~5% loss for 4 cores and ~10% for 8 (preset is slower)? Because I was about to get at least the 3900X to make my encoding faster and to have spare resources while an encode is running - for example, so I can encode and game at the same time. My 4c/8t CPU lets me encode and use the PC fine (it feels a bit less responsive when encoding with the CPU at 99%), and I can play games like LoL with some ping or FPS drops, but it's not a good experience; AAA games are unplayable while encoding. BIG dilemma - one side says the encode will scale fine, the other says more cores means decreasing quality, and encoding on a single core would take AGES to finish... 4 times or more longer than now; a 25 min video takes around 1 hour 10 min on preset slower.
    Last edited by sonyzz; 26th Jan 2020 at 10:37.
  5. Originally Posted by babygdav View Post
[...] Honestly, before buying a $$$$ cpu, look at the nvidia hardware h.265 encoder benchmarks - I bet you can do much better cheaper with hardware encoding. [...]
I get your points, but what I'm going for isn't four-figure parts, it's triple-figure ones, and they recently got cheaper: £440 for the 12-core and £699 for the 16-core. Also, I already bought the motherboard in August because I was aiming for the Ryzen 3000 series from the start, so Intel isn't really an option. I'm using a 5-year-old Intel quad-core right now; it's still fine for gaming, but I want to encode and game at the same time without buying a new motherboard for every upcoming CPU release.
6. KarMa
    Originally Posted by sonyzz View Post
But if it decreases, does it decrease drastically?
For the 12 cores you are looking at, I still would not worry about the quality hit. It's just that in side-by-side comparisons it becomes noticeable that there is a hit to efficiency. You are still going to be way better off than with any Intel QS, AMD, or Nvidia hardware encoder.


    Originally Posted by babygdav View Post
"Simply put, the main difference between the AMD Ryzen and Intel 9th Gen processors is that Intel is better at processing H.264/H.265 footage, while AMD Ryzen processors are better at processing RED footage."
    https://www.pugetsystems.com/labs/articles/Premiere-Pro-CPU-Roundup-AMD-Ryzen-3rd-Gen-...X-series-1535/ [...]
From your own source: "The Intel advantage for H.264/265 media is pretty easily explained by the fact that Premiere Pro supports hardware accelerated encoding/decoding of H.264/265 media via Intel Quick Sync. AMD does not have this feature (nor does the Intel X-series for that matter), which explains why the Intel 9th Gen CPUs are simply going to be better at processing H.264/265 footage."

This is all fine and well for Adobe PP workflows, but saying Intel is better for H.264/H.265 is a bit misleading, as not all programs utilize Quicksync, and hardware decoding introduces higher chances of decoding errors. Plus, you can just throw in a cheap GPU to decode H.264/265 if your NLE supports the card.
7. I have a 2990WX (32 cores / 64 threads) that I'm very happy with, and the x265 encoding quality is excellent.
  8. Originally Posted by KarMa View Post
For the 12 cores you are looking at, I still would not worry about the quality hit. It's just that in side-by-side comparisons it becomes noticeable that there is a hit to efficiency. You are still going to be way better off than with any Intel QS, AMD, or Nvidia hardware encoder.
I mean, quality loss is always present in any encode, and I'm fine with it up to a point - say around 10-15% loss, mainly something like missing grain that x265 removes, while to the naked eye the video looks pretty much the same (not comparing frame to frame) and the file is around 10 times smaller. A good example: a 7.5-8 GB, 1440x1080, 29 min video compressed to just shy of 350 MB at the same resolution with x265, visually the same to the naked eye except for small grain particles that x265 removed automatically. I'm more than fine with such results; it just took around 1 hour 20 minutes to complete on the slower preset at around 4-4.5 fps, hence the quad core I have now. So I was wondering if I can speed that up to, say, 8 fps with the same or similar quality. But when I started researching, most forums said what you said - "more cores - quality decrease" - so I got scared to the point of thinking a 16-core CPU would butcher the video quality so badly it would look like a crappy x264 encode, hence the confusion.
    Last edited by sonyzz; 26th Jan 2020 at 11:21.
  9. Originally Posted by hdfills View Post
I have a 2990WX (32 cores / 64 threads) that I'm very happy with, and the x265 encoding quality is excellent.
That option is out of my budget, as the motherboard I bought has the AM4 socket with the X570 chipset, and my NH-D15 cooler's base would be too small for a Threadripper anyway - more additional expenses.
  10. Funny that this topic should be brought up because I was just researching the same thing. A few thoughts:

1) I absolutely cannot bring myself to build a system with a current-generation AMD CPU - not because of performance, but because they just do not hold their value well at all. The 12C/24T 1920X was originally an $800 processor, iirc; now you can buy one for $200, though the motherboards are still very expensive. The 2700X was a $300+ processor when it came out; I have seen them now for as low as $130. I fully expect the 3900X to be under $200 in about a year.

2) You need to make sure that the software you use can utilize an extremely high core count, and even if it does seem to load up all the cores, check whether it is actually speeding the process up. Look at this article from the HandBrake people:

    https://handbrake.fr/docs/en/1.3.0/technical/performance.html

Note the scaling on a 22C/44T Xeon. According to the following article, the sweet spot for Premiere Pro is 8-12 cores:

    https://community.adobe.com/t5/premiere-pro/premiere-pro-and-multicore-support/td-p/4788536

    Look at this article:

    https://www.guru3d.com/articles-pages/amd-ryzen-9-3950x-review,18.html

Vegas only supports 16 threads - not that it matters, because a 2-minute test clip takes over 50 minutes to complete on a lowly 200GE and nearly 11 minutes on a 2950X, meaning a 2-hour movie would still take 10+ hours to finish encoding on the fastest CPU tested.

I've been running a bunch of test encodes on an i3 7100 (16 GB DDR4-2666, separate NVMe drives for reading and writing) with a 52-minute test source, 3 filters (saturation, contrast, and sharpening), encoding with x264 very fast. On a 4790-based Xeon with 16 GB DDR3-1600 and SSDs, that same job takes nearly 9 hours to complete. If I add a GTX 1050 to handle the filtering, it drops to the 6-7 hour range, depending on x264 tuning. And ironically, if I use the GPU for both filtering and encoding via NVENC, either AVC or HEVC, by the time I tune NVENC for maximum quality it's actually slower than the x264 encode by a few minutes (I have tested this numerous times, with numerous sources; on that system NVENC is reliably slower than x264 when used from within an NLE).

The i3, meanwhile, using the iGPU for filtering and QSV via VAAPI for encoding, beats that system by hours, taking 5 1/2 hours to do the same job.
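For anyone wanting to replicate that setup, a minimal QSV-via-VAAPI sketch with ffmpeg on Linux (the render node path is the usual default; the QP value and file names are placeholders):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv -vf 'format=nv12,hwupload' -c:v hevc_vaapi -qp 24 -c:a copy output.mkv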

I am expecting a decent tax refund, and I had considered buying a second-hand dual-Xeon workstation (12C/24T total can be had for about $300), but after all the testing I have done with the i3, I think I might just buy an i5 9400 for $130 with the nearly free motherboard Micro Center gives you when you buy a CPU, use the RAM and NVMe drives I have, and call it a day.
  11. Originally Posted by sophisticles View Post
[...] You need to make sure that the software you use can utilize an extremely high core count, and even if it does seem to load up all the cores, check whether it is actually speeding the process up. [...]
I have the budget for the 3950X, but my dilemma is: what's the point of it if it can't max out at least 12 cores, with 4 cores spare for me to game on? And on top of that, what's the point if quality drops a lot with more cores? No such thing is specified in the x265 documentation - how much quality is lost per added core, or where the breaking point is beyond which more cores no longer mean faster encodes - or maybe I'm too dumb to understand it if it's written somewhere in rocket-science language... leaving me and others wandering forums, creating topics, and wasting other people's time. P.S. My friend says: get the CPU from Amazon; if it doesn't do what you expected, return it and get another one.
  12. https://www.streamingmedia.com/Articles/Editorial/Featured-Articles/Hardware-Based-Tra...tors_selection

    https://devblogs.nvidia.com/turing-h264-video-encoding-speed-and-quality/

    https://streaminglearningcenter.com/blogs/ffmpeg-command-threads-how-it-affects-qualit...rformance.html

Notice, importantly, that the test of CPU thread counts resulted in a VMAF score difference of only about 2 points out of 100, i.e. ~2%.

Both the Nvidia and Intel hardware encoders perform similarly to CPU encoding in VMAF, but far faster. Importantly, simply switching the Intel encoder from its speediest to its highest-quality mode improves VMAF by more than 2 points. Likewise, simply giving an encode 500 kbps more bumps VMAF up by more than 2.

In other words: yes, with a 32-core encode you'll lose about 2% of visual quality (at medium-speed encodes), but slightly bumping up the bitrate can easily compensate. Or simply push through multiple single-thread encodes at the same time.
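You can check that kind of tradeoff on your own clips: an ffmpeg build with libvmaf will score an encode against its source (a minimal sketch; the first input is the distorted file, the second the reference, and the file names are placeholders):

ffmpeg -i encoded.mp4 -i reference.mp4 -lavfi libvmaf -f null -

The VMAF score is printed at the end of the log.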

    ...

Realistically, pretty much everyone in broadcast has moved to hardware encodes, due to their ability to match CPU encode quality in general at far faster speeds today.

Importantly, notice the one test on the HP Z480 with 40 cores maxing out at a CPU encode speed of ONLY about 110 fps!! Any hardware encoder like Intel Quicksync on a recent 4-8 core i7 can easily push 300+ fps, far cheaper, as can a good Nvidia card, which does slightly better with h.265 encodes.

    ....

A ton of cores has other benefits for 3D modeling and other non-video-encoding tasks, but for h.264/h.265 you'll get similar quality 3x+ faster with hardware. Any tradeoff with multiple CPU encodes, or hardware vs. software, is easily regained by giving the encode about 500 kbps more bitrate.

I.e., for most home users it's better to spend the money on an Nvidia card (1660 through 2080) than on a higher CPU core count (assuming you've got a decent CPU) - for both video encoding and gaming ;)
13. KarMa
    https://devblogs.nvidia.com/turing-h264-video-encoding-speed-and-quality/

Nvidia is claiming better quality than x264 Medium. Wow, big claim. I don't buy it, considering all of the visual x264-vs-hardware tests posted here over the years have made it fairly clear that hardware encoders are considerably less bitrate-efficient and have worse bitrate allotment management. The most recent comparison was this time sink of a thread: https://forum.videohelp.com/threads/394581-Encoding-test-some-AVC-encoders

Realistically, pretty much everyone in broadcast has moved to hardware encodes, due to their ability to match CPU encode quality in general at far faster speeds today.
The quality doesn't match, but it's more energy-efficient, and these encoders usually have a lot of encoding headroom so they won't ever get slowed down (reliable).
  14. https://www.streamingmedia.com/Articles/Editorial/Featured-Articles/Hardware-Based-Tra...aspx?pageNum=2
    "Note that there was a great deal of variation in the subjective results on a clip-by-clip basis. For example, NVIDIA enjoyed a significant advantage over Intel Quick Sync in the Football and Meridian test clips, which Intel Quick Sync reversed with a substantial lead in the GTAV clip, where both hardware codecs ranked behind the x264 medium clip. "

    "Note that for these H.264 clips, we did not use any tuning mechanisms for the objective benchmarks" not did they use the super slowest software encoding settings."

But in general, the visual scores closely tracked each other, hardware or software.

So yes, I'm sure CPU-based encodes are better if you love sub-30 fps encodes, 10x slower than hardware, at the SAME bitrates - BUT simply giving the hardware encodes ~500 kbps more bitrate at super-fast encode settings easily makes up for the visual difference.

Yes, CPU software encodes are the best if:
    a. you're trying to squeeze the best and most files into a limited space,
    b. and you don't care about time.

    ....

For me, time really was the huge deciding factor.
    No more waiting hours for each DVD/Blu-ray encode on slower encode settings - just a dozen minutes with Quicksync, at slightly larger file sizes, and I'm done, with videos that look fine.

(Which raises the question: if you're simply duplicating an encode the scene has already ripped and posted online, why bother doing all that yourself, burning through a ton of energy and time?)

That other long thread tested consumer-level encoders, but didn't test Adobe Media Encoder, which does better imo on h.264 encodes without much work (other than setting two-pass, highest quality).

Commercially, Ateme etc. are used for major movie disc mastering, so comparing 2nd- and 3rd-tier encoders to each other still doesn't reveal what the best can do AT THE SAME BITRATES.

Keep in mind, even poorer encoders can do better simply by increasing the bitrate. So unless absolute minimum file size is important, simply give it more bits.


    ........

In the end, these are two different questions.

    Asking for faster CPU encodes isn't asking for the best software encoding quality. For the latter, you'd pick the slowest settings, multipass, running on 1 CPU core (water-cooled and overclocked to 5 GHz+), etc.

    Also, at a fixed bitrate, a $1000+ 24+ core CPU will be working flat out to encode a nice h.264 file, which in turn will be killed in quality by simply encoding with Intel Quicksync on a $300 CPU into the better-looking h.265 codec at the same file size/bitrate, with matching or faster encode speeds.

    http://aerobytepc.com/index-html/nvidia-hardware-encoding/

For h.265, Streaming Media's tests show that hardware encoders can do better than software encodes, depending on settings and bitrate.

    https://unrealaussies.com/tech/nvenc-x264-quicksync-qsv-vp9-av1/4/#x265

    "Things to note for x265:

    Slow preset performs very similarly to Turing HEVC/H.265 NVENC of the same bitrate.
    Many presets beat both Pascal and Maxwell HEVC NVENC at 4Mbps and 6Mbps, but at 8Mbps the advantage fades."

    https://software.intel.com/en-us/articles/evolution-of-hardware-hevc-encode-on-tenth-g...ore-processors

At the slowest encode speeds, comparing Quicksync on the latest 10th-generation Intel CPUs against x265, there are essentially no visual quality differences of note between hw and sw encoding today.
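For reference, a minimal Quicksync HEVC sketch with ffmpeg (hevc_qsv is the encoder name; the preset, quality value, and file names are placeholders):

ffmpeg -i input.mkv -c:v hevc_qsv -preset slow -global_quality 24 -c:a copy output.mkv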
    Last edited by babygdav; 26th Jan 2020 at 17:53.
  15. @babygdav
    or a cheap $200-500 Intel cpu with Quicksync, even add a $500 nvidia 2060 and easily match and beat using hardware encodes.
“...and beat using software encodes” was meant here, right?

But if not, the more cores the merrier. No upper limit afaik on cores utilized by software encoders. All the tests I've seen max out every CPU core on multi-core systems when pushed, up to 128 cores.
From this thread:
    [screenshot: threads1_000108.png - 1 thread]
    [screenshot: threads36_000108.png - 36 threads]
Quote from “poisondeathray”:
    “Here is a b-frame comparison at 1 and 36 threads at 1000 kbps for 720x400. Open them in different tabs and flip back and forth.
    [screenshots above]
    But it's unrealistic to run x264 with --threads 1.
    If you ran threads in between, like 18, the compression/quality would be somewhere in between. Higher thread counts than 36, like on more modern workstations, would be even lower quality. You can overcome this by using higher bitrates (higher bitrates "fix" everything).”
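(For anyone wanting to reproduce that comparison, a minimal sketch with the x264 CLI - same bitrate, only the thread count differs; the source name is a placeholder:

x264 --threads 1 --bitrate 1000 -o threads1.264 source_720x400.y4m
x264 --threads 36 --bitrate 1000 -o threads36.264 source_720x400.y4m

Then grab matching frames from each output and flip between them, as in the screenshots above.)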


My 4c/8t CPU lets me encode and use the PC fine... AAA games are unplayable while encoding... BIG dilemma - one side says the encode will scale fine, the other says more cores means decreasing quality [...]
    First World Problems...
16. Hardware encodes. Today's h.265 hardware encoders are good enough at reasonable settings that they match software encodes.

Now, if you go crazy with a 1-thread, 1-core-only software encode at the very slowest/best settings that takes hours and hours, then yes, software is absolutely better.

It just comes down to: are you willing to wait hours rather than minutes? Are you willing to increase the bitrate slightly to gain back the difference when using hardware encoders?

If it's multi-core software vs. hardware at reasonable settings, you've currently got a draw in encode quality.

Every encoder, software or hardware, has an extreme case that makes it look worse in that one case, but overall you've got parity at fast-to-medium-speed encodes for h.264/h.265, such that for basics like general disc ripping and encoding, most people can go with Quicksync or Nvidia encodes and the slight differences won't matter.
17. “But it's unrealistic to run x264 with --threads 1. If you ran threads in between, like 18, the compression/quality would be somewhere in between. [...]”
But frame threads can be set in the x265 encoding settings, e.g. frame-threads=1; on auto it uses WPP.
18. Current settings are more like this for 1080p video on a 4c/8t i7:
    wpp / ctu=32 / min-cu-size=8 / max-tu-size=32 / tu-intra-depth=2 / tu-inter-depth=2 / me=3 / subme=5 / merange=57 / rect / no-amp / max-merge=3 / temporal-mvp / no-early-skip / rskip / rdpenalty=0 / no-tskip / no-tskip-fast / strong-intra-smoothing / no-lossless / no-cu-lossless / no-constrained-intra / no-fast-intra / open-gop / no-temporal-layers / interlace=0 / keyint=250 / min-keyint=23 / scenecut=40 / rc-lookahead=30 / lookahead-slices=4 / bframes=8 / bframe-bias=0 / b-adapt=2 / ref=4 / limit-refs=2 / limit-modes / weightp / weightb / aq-mode=3 / qg-size=32 / aq-strength=0.80 / cbqpoffs=0 / crqpoffs=0 / rd=4 / psy-rd=0.70 / rdoq-level=2 / psy-rdoq=1.00 / log2-max-poc-lsb=8 / limit-tu=0 / no-rd-refine / signhide / deblock=1:1 / no-sao / no-sao-non-deblock / b-pyramid / cutree / no-intra-refresh / rc=crf / crf=22.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ipratio=1.40 / pbratio=1.30
  19. So we all know about quality and have discussed that to death.

    We also know about Hardware vs software regarding speed.

What I'm not seeing discussed so much is CPU efficiency. This doesn't really matter for hardware encodes, but I've recently been told (hence my coming here) that worse than the loss in quality at high CPU thread counts is the loss of efficiency. The person who said this suggested 4 cores / 8 threads as a maximum, for efficiency.

    Where this comes into play is when you have 32 threads to play with and apparently the loss in efficiency (meaning it takes a lot longer) is noticeable. Obviously for one encode you won't really care, but if you have thousands you will.

What I'd like to know is how much efficiency (as in time efficiency) is lost doing, e.g., 1 encode over 32 threads vs. 4 encodes each locked to 8 threads, over say 3 months of encoding.

What would this equal in time, as a percentage? E.g. over one month it will be 10% slower / faster, etc.

I can certainly see on my Threadripper that 1 or 2 encodes on a 32-thread system do not fully utilise all the cores/threads. But those two encodes are spread across all 32 threads.

I'm going to try limiting encodes to specific threads anyway, but thought it would be a good discussion to cap this topic off.
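(One way to run that experiment on Linux is to pin each encode to its own core set with taskset - a minimal sketch; the core ranges and file names are placeholders:

taskset -c 0-7 ffmpeg -nostdin -i a.mkv -c:v libx265 -preset slower -crf 22 a_out.mkv &
taskset -c 8-15 ffmpeg -nostdin -i b.mkv -c:v libx265 -preset slower -crf 22 b_out.mkv &
wait

Then compare total wall-clock time against one encode spanning all 32 threads.)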


