VideoHelp Forum
  1. As it says in the title, since the 1980s I've accumulated about 10 TB of mixed videos, past rips, and home movies on my file server, as well as DVDs. Looking at them now, about 6 TB of them (~30 days actual play time) are large files at ridiculously high bitrates that I just don't need. A quick calculation suggests that if I recode all files over 150 MB with video above 2500 kbps down to 2000 kbps HEVC, and rip all DVDs I haven't yet ripped at the same video bitrate, I'll save about 1.2 TB of file space, which is quite a lot for me and worth it.

    I use a mix of FreeBSD and Windows (including Windows Server), and because HEVC is sloowwwww, I have not one, but 3 decent machines I can split the files across and run 24/7, to cut the time right down. (One of these machines would be a background task on my mostly-idle FreeBSD server, the other 2 are unused machines and can run ffmpeg under FreeBSD or Windows). I've already ripped the physical DVDs to disk as .VOB's and .TS, in preparation. My tool of choice is ffmpeg x64 CLI.

    I'd like it ideally done within about a week, if possible, or 2 at most. But at the moment I'm getting about 0.1x ~ 0.3x, even after moving from "slower" to "slow" - probably because there are no graphics cards or encoder hardware assist in any of these three machines.

    So I'm posting to ask advice on 2 aspects:


    1) What hardware might help to seriously accelerate HEVC encoding?

    Ideally I'd like at least one option that will also work with ffmpeg on FreeBSD, because that way I can also do any future conversions on my file server from the console. But mostly it's a one-off task, so cheapest is best, and if that means Windows for driver availability, that's okay.

    Also, what I've read suggests that hardware HEVC encoding acceleration can be a problem, because the options supported by cards are seriously limited compared to those supported by software. I might need 10 bit rather than 8 bit, or specific colour spaces - I don't know what settings I need. If the card doesn't support them, I don't know how big a deal that is. Insight and card advice needed badly :)

    My available hardware platforms (all use motherboard VGA, none have built-in graphics):
    • File server = Xeon E5-1620 v4 (4 core+HT, 3.5 GHz, Haswell generation), 256 GB 2400 MHz RAM. (FreeBSD only)
    • Spare #1 = Xeon E5-1680 v4 (8 core+HT, 3.4 GHz, Haswell generation), 128 GB 2400 MHz RAM. (FreeBSD or Windows)
    • Spare #2 = Intel i5-2300 (4 core, 2.8 GHz + turbo, Sandy Bridge generation), 8 GB 1600 MHz RAM but can add more or faster. (FreeBSD or Windows)

    2) Are there any specific settings or formats that I am strongly recommended to use, or that the card needs to support?

    The files are a complete mishmash of existing formats and rips over the years, going back to the 1990s, off many devices, codecs, containers and sources, so I'd like to pick one format that will generally deliver well, wherever I go in future. I'm happy to use HEVC for all of it, regardless - no reason not to, and it's easier to write the script. I've historically used mp4 for max portability, but I'm finding of course that most subtitle streams need converting to mov_text, which I know nothing about. Also there are some subtleties with HEVC options, which I don't know much about either.

    My current command I'm trialling is: ffmpeg -i {FILENAME} -map 0 -c:a copy -c:s copy -c:v libx265 -preset slower -b:v 2000k {FILENAME}.RECODED.mp4

    I've run 1 pass simply because 2 pass doubles the time, for uncertain quality benefits. I also now know I need to add -c:s mov_text for mp4. But I've seen stuff about using 10 bit not 8 for HEVC, and stuff about using -pix_fmt yuv422p10 or similar for compatibility and best colour spaces, and I don't have a clue if those are significant benefits, or what exactly is best. In fact I don't even know if mp4 is still the most widely supported format (I'd like to never have to recode again, so I want the widest support, hence I'm unsure whether I want to use mkv for example). So my settings questions are (there's a rough sketch of the command I'm considering after the list below):
    • In 2019 is mp4 still "acceptable" and should I still use it? If not, what is more widely fully supported, and what issues does mp4 have that I could avoid?
    • My subtitles contain formatting and unicode/non-English characters. What args/containers/formats to use?
    • 8 bit or 10 bit? Add -pix_fmt or not, and which one? Any other args that are recommended?
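    For concreteness, here's a rough sketch of the sort of command I think these questions point at - assuming mov_text is the right subtitle codec for an mp4 container, that mkv can simply stream-copy most subtitle formats, and that yuv420p10le is the right pixel format for 10 bit HEVC (none of which I'm sure about, and a 10 bit encode needs an ffmpeg/libx265 build with 10 bit support):

    Code:
    # mp4 output: text subtitles converted to mov_text
    ffmpeg -i {FILENAME} -map 0 -c:a copy -c:s mov_text -c:v libx265 -preset slower -b:v 2000k -pix_fmt yuv420p10le {FILENAME}.RECODED.mp4

    # mkv output: most subtitle formats can just be stream-copied
    ffmpeg -i {FILENAME} -map 0 -c:a copy -c:s copy -c:v libx265 -preset slower -b:v 2000k -pix_fmt yuv420p10le {FILENAME}.RECODED.mkv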

    SUMMARY

    As you can see, it's a bit of a messy question, because I need to consider both hardware acceleration cards and software args, not just one or the other. Encoding really isn't my skill area, so any advice or information from the many gurus and enthusiasts here would be much appreciated! :)
    Last edited by STilez; 4th Oct 2019 at 18:56.
  2. KarMa
    Originally Posted by STilez View Post
    I'll save about 1.2TB of file space, which is quite a lot for me and worth it.
    That's like $20 worth of hard drive space.
  3. johns0
    For the time spent and the amount of money you'd be saving, it's not worth it, as KarMa mentioned. Just buy more HDDs and switch to HEVC as new files come through.
  4. Originally Posted by KarMa View Post
    Originally Posted by STilez View Post
    I'll save about 1.2TB of file space, which is quite a lot for me and worth it.
    That's like $20 worth of hard drive space.
    A bit of a side issue, but as it's file server space, it amplifies a lot. Say 1.2 TB of stored data saved. I'm using ZFS, which starts to slow past about 50% full, so saving 1.2 TB cuts my required pool size down by 2.4 TB. The pool uses redundant storage, so 2.4 TB of usable space requires 3x that in disks, because I'm using heavy-duty redundancy against disk loss. So that's 7.2 TB of raw disk space. Then I also have the same in my backup server. Suddenly 1.2 TB of saved space = almost 15 TB of actual storage. And that's assuming I don't have future DVDs/BDs, rips, home movies or downloaded videos to recode in future - which I will. If 1.2 TB creeps up to 1.5 or 1.8 TB saveable over time... you see the point. The file server has about 120 TB of raw storage and so does the backup server, giving immense peace of mind but also a very high cost per TB of actual storage space.

    But I'd like to ask that we keep this on track. Even if you wouldn't do it this way for your own needs, can you assume I've made the decision that the manner in which these files are stored is best for my other uses (which I'm not going into here: broadly, very high efficiency, hence mirrors + caching layers + 10G LAN rather than RAID and the usual 1G, but that needs far more disks per TB of space), and that therefore recoding is definitely worth it?

    tl;dr, thank you - I think this is generally a valid question, but it's a distraction in my case.
  5. What CPUs do you have? What GPUs? What codecs are your source videos encoded with? What kind of resolutions (SD, HD)?

    By the way, "speed" as reported by ffmpeg is pretty useless without knowing the frame rate. A 25 fps video encoded at 25 frames per second is a speed of 1.0x. A 50 fps video encoded at 25 frames per second is a speed of 0.5x.
    Last edited by jagabo; 4th Oct 2019 at 22:35.
  6. Originally Posted by jagabo View Post
    What CPUs do you have? What GPUs? What codecs are your source videos encoded with? What kind of resolutions (SD, HD)?
    CPUs and systems available - listed in OP

    GPUs - none ATM in any of them. (Basic on-motherboard VGA, they're server boards by and large). Available GPUs are: Quadro 6000, GRID K2 (both from old VM servers), Radeon HD5670 (probably useless). Most of my available stuff is ex-server, and focused on non-graphics performance. I'll probably need to buy something - either GPU/CUDA/encoder card.

    Hopefully something with drivers that'll allow ffmpeg to use hwaccel encoding on FreeBSD, not just Windows. Bonus if it also allows 10 bit HEVC hwaccel encoding. I don't have a particular budget in mind as I don't know what I actually need.

    Source file existing codecs - a complete total mix. Probably everything under the sun that was ever mainstream or widely used, they go back 20+ years. Most of the big files are naturally movies and TV recordings, but even those include a variety of PVR, DVD rips, and whatever codecs were fashionable at the time or used by the recorder, in a mix of avi/mp4/mkv/mpg containers. I'd expect quite a lot of h264/divx/xvid from the last 10 years, and quite a lot also in whatever DVDs are natively encoded in (from unrecoded VOB rips).

    Resolutions - The older ones are again a complete mix of resolutions, but everything from about 2005 was converted to at least HD, and most of the files in the last 5 years in a mix of HD/4k. That said a lot of them are unconverted, so they have original native resolution, which seems to be HD or better a lot of the time.

    Originally Posted by jagabo View Post
    By the way, "speed" as reported by ffmpeg is pretty useless without knowing the frame rate. A 25 fps video encoded at 25 frames per second is a speed of 1.0x. A 50 fps video encoded at 25 frames per second is a speed of 0.5x.
    I agree, useless for most things. It does give me an idea how long a job will take, as the total play length is known and (in this case) the majority of files are probably a consistent 23 ~ 25 fps. More helpfully, ffmpeg says I'm getting 12 ~ 18 fps on a test run, using (I assume??) CPU alone. I'm not sure what command to use to find what method it's using/will use/could use for a specific encoding job. The total playtime is ~30 days, so it's probably got something like 65 million frames to recode, or about 3 weeks 24/7 spread across 3 machines at the above speeds.
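    (Side note: ffmpeg can at least be asked what its build supports - the queries below list the available encoders and hwaccel methods, though they still don't say what a particular job will actually use:)

    Code:
    # list the HEVC encoders this ffmpeg build knows about (e.g. libx265, hevc_qsv, hevc_nvenc, hevc_amf)
    ffmpeg -hide_banner -encoders | grep hevc

    # list the hardware acceleration methods compiled into this build
    ffmpeg -hide_banner -hwaccels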
    Last edited by STilez; 5th Oct 2019 at 00:21.
  7. Originally Posted by STilez View Post
    Originally Posted by jagabo View Post
    What CPUs do you have? What GPUs? What codecs are your source videos encoded with? What kind of resolutions (SD, HD)?
    CPUs and systems available - listed in OP
    The reason I asked was because it's difficult to tell what your reported encoding speed meant without those details -- and whether your expectations were realistic. A 2160p video will take much longer to encode than a similar length 480p video. A 60 fps video will take longer than a 24 fps video. Shrinking an MPEG 2 video by 20 percent with h.265 will give better quality (relative to the source) than shrinking a h.265 video by the same amount. It also doesn't make sense to reduce them by the same percentage. The MPEG 2 video could be shrunk to 1/4 its original size and still give good quality. Shrinking an h.265 video by that same amount will look pretty bad (unless its bitrate was already extravagantly high).

    Haswell, Ivy Bridge, and Sandy Bridge only have GPU encoding for h.264 and MPEG2. So you're only using CPU encoding in the test you gave as an example. In any case, ffmpeg has to be explicitly told to use the GPU, -c:v hevc_qsv, for example (many of the other command line parameters may have to change too).
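    A minimal sketch of what that looks like, assuming a CPU whose Quick Sync supports HEVC (Skylake or later) and an ffmpeg built with QSV support - treat the bitrate and preset as placeholders rather than tuned values:

    Code:
    ffmpeg -i input.mkv -map 0 -c:a copy -c:s copy -c:v hevc_qsv -preset slow -b:v 2000k output.mkv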

    The Xeon E5-1680 with 8 cores, 16 threads should easily be the fastest of your CPUs (the others are only 4 cores, 8 threads). I don't know which CPU you tested with, but I converted a 720x480 Xvid AVI with your command line (preset=slower) with an i5 2500K and got around 0.1x (2.4 frames per second) on a 23.976 fps source. At preset=slow I get around 0.5x (12 fps). Using the GPU of a Celeron J4105 (Gemini Lake) I got around 5x (120 fps) at preset=slow.


    Originally Posted by STilez View Post
    GPUs - none ATM in any of them. (Basic on-motherboard VGA, they're server boards by and large). Available GPUs are: Quadro 6000, GRID K2 (both from old VM servers), Radeon HD5670 (probably useless). Most of my available stuff is ex-server, and focused on non-graphics performance. I'll probably need to buy something - either GPU/CUDA/encoder card.
    Off the top of my head I don't know which of those supports h.265 encoding. You can look it up as well as I, I'm sure. But in general, GPU encoding delivers lower quality at the same bitrate compared to CPU encoding. In general, the quality of GPU encoding has been increasing. Newer GPUs deliver better quality than older GPUs.

    Originally Posted by STilez View Post
    Hopefully something with drivers that'll allow ffmpeg to use hwaccel encoding on FreeBSD
    I don't know anything about which GPUs are supported by ffmpeg under FreeBSD.

    Originally Posted by STilez View Post
    Bonus if it also allows 10 bit HVEC hwaccel encoding. I don't have a particular budget in mind as I don't know what I actually need.
    Within the same generation the video encoding is the same between the low end and high end cards. Ie, a Nvidia 1030 has the same video encoder as the 1080. 10 bit encoding gives smoother gradients than 8 bit encoding, even with 8 bit sources.

    Originally Posted by STilez View Post
    Source file existing codecs - a complete total mix. Probably everything under the sun that was ever mainstream or widely used, they go back 20+ years. Most of the big files are naturally movies and TV recordings, but even those include a variety of PVR, DVD rips, and whatever codecs were fashionable at the time or used by the recorder, in a mix of avi/mp4/mkv/mpg containers. I'd expect quite a lot of h264/divx/xvid from the last 10 years, and quite a lot also in whatever DVDs are natively encoded in (from unrecoded VOB rips).
    In general, the stated goal at each new generation of codecs was to achieve the same quality at half the bitrate of the previous generation. If we consider MPEG 2 as 100 percent bitrate:

    Code:
    MPEG2                          100
    Xvid, Divx, MPEG4 part 2        50
    h.264, AVC, MPEG4 part 10, VP8  25
    h.265, HEVC, VP9                13
    AV1                              6
    In reality, the improvements weren't quite that good. I'd put it closer to 0.7 per generation rather than 0.5. Still, you will find that video encoded with older codecs can be re-encoded to much smaller sizes without losing too much quality. Videos encoded with newer codecs won't recompress as much without losing significant quality. And of course, you're re-encoding with a lossy codec so the quality can only go down.

    In your ffmpeg example you were using a slower preset, presumably for the quality. But then you were using single-pass average-bitrate encoding, which is a lower quality method. With bitrate based encoding you really need 2 passes to get the best quality and bitrate distribution. You can use a fast first pass so it doesn't take twice as long. I don't think any GPU encoder is capable of true 2-pass encoding.
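    For reference, a 2-pass libx265 run in ffmpeg looks something like the sketch below (filenames are placeholders; the stats file defaults to the current directory, and on Windows /dev/null becomes NUL). The slow-firstpass=0 switch asks x265 for the cheaper first-pass analysis, if your build exposes it:

    Code:
    # pass 1: analysis only, audio/subs dropped, output discarded
    ffmpeg -y -i input.mkv -c:v libx265 -preset slow -b:v 2000k -x265-params pass=1:slow-firstpass=0 -an -f null /dev/null

    # pass 2: the real encode, reusing the pass 1 stats
    ffmpeg -i input.mkv -map 0 -c:a copy -c:s copy -c:v libx265 -preset slow -b:v 2000k -x265-params pass=2 output.mkv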
  8. Member (Germany)
    1. The fastest option is the Ryzen 9 3900X; GPUs aren't faster than x264 superfast and their quality is worse.
    2. x264 superfast at constant quality 20 has the best speed/quality ratio, and for x265 I use medium at constant quality 18.5.
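    In ffmpeg terms, that suggestion would look roughly like this ("constant quality" maps to -crf; the values are the ones given above, not mine, and fractional CRF may need rounding to a whole number on some builds):

    Code:
    ffmpeg -i input.mkv -map 0 -c:a copy -c:v libx265 -preset medium -crf 18.5 output.mkv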
  9. Thanks muchly for this, jagabo, and sorry about the reply delay (busy work times). I've grouped points together to avoid repetition in my comments.

    Originally Posted by jagabo View Post
    Shrinking an MPEG 2 video by 20 percent with h.265 will give better quality (relative to the source) than shrinking a h.265 video by the same amount. It also doesn't make sense to reduce them by the same percentage. The MPEG 2 video could be shrunken to 1/4 it's original size and still give good quality. Shrinking an h.265 video by that same amount will look pretty bad (unless its bitrate was already extravagantly high).

    In your ffmpeg example you were using a slower preset, presumably for the quality. But then you were using AVBR encoding which a lower quality method.
    My rationale was something like this (I don't know if it's actually valid or not, though, so criticisms and suggestions are welcome).

    Given my source material is all movies and home videos, unless there's something pathological, I was historically pretty happy with 1300 kbps 2-pass DivX as a rule, last time I did a major batch of encoding - which was probably about 10 years ago :) I agree there's no point taking a 752x384 DivX @ 2500 kbps and recoding to 752x384 HEVC @ 2000 kbps, but I'm at a loss how to best pick encode criteria. Perhaps naively, I figured that current/future screen resolutions are higher than before, but also, current codecs are much more efficient. But at the same time, I don't know what CRF/other settings to use, so I went pragmatic and figured a 2000 kbps average rate would surely be a decent improvement in quality over the past (50% higher bitrate + a more efficient codec anyway). I'm not an HD fanatic, and "good enough quality" is subjective, so it's hard to know how to classify or describe what my quality tastes are. In a simplistic way, my aim is to get video streams at the best quality I can but generally below a certain bitrate, I guess.

    If setting an average bitrate is a poor approach, would CRF be better? If so, how can I estimate a CRF value from my existing encodes? Is there a tool to find a numeric CRF for a video, or do I just cut 5 minute snips from a bunch of media, encode at a bunch of CRF's, and pick what I like? If so how to choose scenes, and what other settings are most worth looking at? Do 2 pass bitrate and CRF deliver comparable quality for a given file size? That's really why I went with a bitrate basis - I have a historical rate that I can compare to, it's not a complete guess. And size reduction is what I'm after, implying a bitrate anyhow.
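    (To make the "snips" idea concrete, this is roughly what I had in mind - cut a sample losslessly, then sweep a few CRF values and compare by eye and by file size. The paths, start point and CRF range are just examples:)

    Code:
    # cut a 5 minute sample starting at the 20 minute mark, without re-encoding
    ffmpeg -ss 00:20:00 -i source.mkv -t 300 -map 0 -c copy sample.mkv

    # encode the sample at several CRF values and compare size/quality
    for crf in 20 22 24 26 28; do
      ffmpeg -i sample.mkv -map 0 -c:a copy -c:v libx265 -preset slow -crf $crf sample_crf$crf.mkv
    done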

    Originally Posted by jagabo View Post
    Haswell, Ivy Bridge, and Sandy Bridge only have GPU encoding for h.264 and MPEG2. So you're only using CPU encoding in the test you gave as an example. In any case, ffmpeg to explicitly be told to use the GPU, -c:v hevc_qsv, for example (many of the other command line parameters may have to change too).
    The Xeon E5-1680 with 8 cores, 16 threads should easily be the fastest of your CPUs (other are only 4 core, 8 threads). I don't know which CPU you tested with but I converted a 720x480 Xvid AVI with your command line (preset=slower) with a i5 2500K and got around 0.1x (2.4 frames per second) on a 23.976 fps source. At preset=slow I get around 0.5x (12 fps). Using the GPU of a Celeron J4105 (Gemini Lake) I got around 5x (120 fps) at preset=slow.
    Yeah, my encoding speeds closely match yours. Encoding 23.98 and 25 fps sources, I get between 0.7x (8 core) and 0.1x (4 core). A quick test on my desktop (Ivy Bridge Extreme hexacore i7-4960X with a modern Radeon R9 370 card) gets about 3.8x - 4x, which makes sense. But I need it for other work, so I can't use it.

    I didn't go this route, because it's a lot cheaper to get a decent video card or HEVC encoder card than to buy/upgrade a whole new CPU/MB/RAM.


    Originally Posted by jagabo View Post
    Off the top of my head I don't know which of those supports h.265 encoding. You can look it up as well as I, I'm sure. But in general, GPU encoding delivers lower quality at the same bitrate compared to CPU encoding. In general, the quality of GPU encoding has been increasing. Newer GPUs deliver better quality than older GPUs.
    Within the same generation the video encoding is the same between the low end and high end cards. Ie, a Nvidia 1030 has the same video encoder as the 1080. 10 bit encoding gives smoother gradients than 8 bit encoding, even with 8 bit sources.
    This is a major area of confusion for me. I take your point that dedicated GPU encoding isn't the greatest quality. As far as I know, there are several ways HEVC *might* get encoded:
    1. Pure CPU software (standard x64 instruction set, no assistance)
    2. CPU assistance via dedicated on-CPU core (Intel Quick Sync, I don't know if Ryzen has an equivalent)
    3. GPU card, using dedicated HEVC encoder hardware
    4. GPU or general compute card, using general GPU compute capability (rather than a dedicated ASIC encoder/core)
    5. Specialist HEVC encoder card (encoder acceleration only, not general GPU)

    Of these 5 options, is it just (1) that's painfully slow, and (3) that's a lower quality? How do they compare for speed / quality / settings flexibility? For example, a parallelised HEVC encoder running on CUDA might be fast, without having the quality issues a GPU dedicated encoder core suffers from. Or might allow 10 bit where others stick at 8 bit. Or a cheap but good quality dedicated HEVC encoder card might be suitable, if I don't need any other GPU capabilities.

    In brief, which of these encoding methods are "out there", and what are their pros and cons? Which ones are fast and support decent options/good quality encoding? Which if any support 10 bit? It would help a lot, simply to be more sure about the differences between these 5 approaches, and how experienced encoders view them.
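    (To make the list concrete: as far as I can tell, options 1-3 map directly onto ffmpeg encoder names, which at least makes them easy to test side by side. My understanding - possibly out of date - is that option 4 doesn't really exist as a production HEVC encoder, and option 5 is usually driven by vendor SDKs rather than stock ffmpeg. Audio/subtitle handling omitted for brevity:)

    Code:
    ffmpeg -i in.mkv -c:v libx265    -preset slow -b:v 2000k out_sw.mkv    # (1) pure CPU software
    ffmpeg -i in.mkv -c:v hevc_qsv   -preset slow -b:v 2000k out_qsv.mkv   # (2) Intel Quick Sync
    ffmpeg -i in.mkv -c:v hevc_nvenc -preset slow -b:v 2000k out_nv.mkv    # (3) Nvidia NVENC ASIC
    ffmpeg -i in.mkv -c:v hevc_amf   -b:v 2000k out_amf.mkv                # (3) AMD VCE/VCN ASIC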
  10. I wouldn't waste my time re-encoding old low-res Xvid/DivX files into HEVC at all. I'd keep them as originals, maybe repack them to mkv, and only re-encode DVDs and HD home movies into HEVC. I don't use x265 much, but in the x264 days the rule of thumb was to use CRF 18-22. I don't think those values scale the same for x265; you can go even higher, maybe 22-26. Choose what you consider good enough quality for you and encode the rest with the same CRF value. CRF will do the rest for you.
  11. All the h.264 GPU encoders (Intel, Nvidia, AMD) deliver lower quality than x264 at the same bitrate. I don't do much h.265 encoding, but from what I've seen the same is true for x265 and the h.265 GPU encoders. The major loss (when using ample bitrates) is in small low contrast detail (film grain, CCD noise, light wood grain, fuzziness in a sweater, etc.). But since you are starting with already (over?) compressed material you may have already lost such details.

    The loss of such small low contrast detail can result in posterization artifacts on shallow smooth gradients. Using 10 bit encoding (and playback chain) reduces those posterization artifacts. Not many devices outside a computer support 10 bit h.264. Support for 10 bit h.265 is more common.
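    (If it helps, ffprobe can confirm what bit depth / pixel format a file or a finished encode actually ended up with - handy for checking that a "10 bit" encode really is 10 bit. The filename is just an example:)

    Code:
    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,pix_fmt -of default=noprint_wrappers=1 input.mkv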

    Some recent threads:

    https://forum.videohelp.com/threads/394569-I-there-any-benefit-to-encoding-to-10-bits
    https://forum.videohelp.com/threads/394581-Encoding-test-some-AVC-encoders
  12. I think the main thing that's confusing me is this:
    1. Encoding HEVC using GPU HW acceleration produces comparatively substandard quality (whether Nvidia or AMD), comparable to the "fast" preset (or at best "slow", if using the latest Turing GPUs). Nvidia is rated better than AMD, but even then people advise avoiding it if quality matters. People seeking decent quality encodes seem to be actively advised not to use GPU encoding for this reason.
    2. Encoding HEVC using a dedicated on-CPU block (Intel QS?) also seems to be actively recommended against. It's scored as less capable, and (QS at its best compared to NVENC at its best) seems to be considered to give poorer quality encodes than NVENC in most discussions.
    3. Encoding HEVC using unaccelerated software + the CPU x64 instruction set is painfully slow. We've kind of reached agreement that "painfully slow" is around 3 - 18 fps (the 0.1x to 0.75x sort of region) for many CPUs, even when using a full 8-core CPU to do it.

    So if HW encoding is substandard and software encoding takes up to 10x play time to encode, what the heck are people using who want decent/good quality HEVC encoding at, say, upwards of 40-50 fps**? Where's the "out"? Or do all HEVC encodes by experienced encoders who want decent quality just get accepted as taking 6 - 12 hours for a 2 hour movie, and that's how it is?

    Putting the same question 2 other ways, in case it helps:

    - What exact HW/SW capabilities do rippers/encoders who want decent quality, check for, in their CPU's/GPU's, to allow an HEVC encode at good quality ("slow" or similar, or other better-than-medium-fast encodes) in less than geological time?
    - What exact stuff needs to show up in the manufacturer's CPU/GPU specs, these days, to ensure reasonably fast AND good quality HEVC encodes?


    This is what's really confusing me, and probably 50% of what underpins the question in my OP.


    (** I'm sidestepping that the exact speed and quality depend on the source, bitrates, target, desired quality, platform, settings, etc. I'm simplifying, and assuming that 40 - 50 fps would be seen as "reasonable" by a user who regularly encodes, that the question is fairly clear, and that experienced encoders who advise against NVENC because it's "only as good as fast/medium" do so because they have a sense of a better quality they aim for/achieve, and yet they don't tend to dedicate their entire 4-8 core main PC for 6 - 12 hours at 100% CPU just to get one movie encoded at that quality.)
    Last edited by STilez; 9th Oct 2019 at 20:28.
  13. I think many encoders don't care that much about quality. Some probably don't really see the difference. Some probably see the noise reduction they're getting with the GPU encoders (and fast settings of CPU encoders) and like it. Personally, I think the encoder's job is to reproduce the source as accurately as possible, film grain, CCD noise, and all. If I want noise reduction I'll do that before encoding.
  14. Originally Posted by jagabo View Post
    I think many encoders don't care that much about quality. Some probably don't really see the difference. Some probably see the noise reduction they're getting with the GPU encoders (and fast settings of CPU encoders) and like it. Personally, I think the encoder's job is to reproduce the source as accurately as possible, film grain, CCD noise, and all. If I want noise reduction I'll do that before encoding.
    Hmm. And those who do care about quality? (Maybe not fanatically but enough to want better than "fast", anyway).

    Do they simply have to accept it'll be a 6-12 hour pure-x64 encode, if the source is BR/HD/2K+ and they want 2 pass at decent quality? ("slow" or CRF 18-22 say)

    Or do they have some way to encode a 2 hour movie in under 2 hours, without buying a dedicated 12 - 16 core CPU system for it? Some hw accel/hw assist/CPU instruction set criteria/GPU capability, that let them get better quality without spending half a day encoding each film?
  15. You can always make up for inferior GPU encoding by using more bitrate.
  16. Originally Posted by jagabo View Post
    You can always make up for inferior GPU encoding by using more bitrate.
    So is a fast modern high core count CPU like an i9 or Zen 2 3900X still the only way to get a reasonable encode rate (>40-90 fps) *and* good quality (CRF ~ 18-20) encodes, as of 2019? Can't be done on a GPU or assisted by one?

    Is that what experienced HEVC encoders use?

    Originally Posted by jagabo View Post
    Using the GPU of a Celeron J4105 (Gemini Lake) I got around 5x (120 fps) at preset=slow.
    Is that purely CPU x64 sw, or with QSV assistance?

    If it did make use of QSV, was it with lower quality, or could you get decent quality out of it?

    As you can see, I'm still unclear on what experienced encoders use to get fast-ish encodes without sacrificing quality (apart from a huge-core-count modern dedicated CPU!).
    Last edited by STilez; 13th Oct 2019 at 05:56.
  17. Originally Posted by STilez View Post
    So is a fast modern high core count CPU like i9 or Zen 2 3900X, still the only way to get reasonable encode rate (>40-90 fps) *and* good quality (CRF ~ 18-20) encodes, as at 2019?
    Yes. If you want the highest quality (per bitrate) your only choice is to use CPU encoding. With CPU encoding the more cores you use the faster the processing.

    Originally Posted by STilez View Post
    Can't be done on GPU or assisted by one?
    How many times does it have to be said? GPU encoding delivers lower quality than CPU encoding (per bitrate).

    Originally Posted by STilez View Post
    Originally Posted by jagabo View Post
    Using the GPU of a Celeron J4105 (Gemini Lake) I got around 5x (120 fps) at preset=slow.
    Is that purely CPU x64 sw, or with QSV assistance?
    QSV (there's no add-in graphics card in that computer, IGP only). The ffmpeg command line was something like:

    Code:
    ffmpeg -y -i %1 -c:v h264_qsv -preset 1 -rdo 1 -trellis 3 -global_quality 30 -look_ahead 1 -look_ahead_depth 40 -bf 4 -refs 4 -g 250 -c:a copy "%~dpn1.qsv_h264.ts"
    I probably could have gotten even faster results if I had used QSV for the decoding of the source too.
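    Roughly, that would just mean putting the hwaccel options in front of the input, something like this (untested, other options trimmed for brevity; adding -hwaccel_output_format qsv should keep the decoded frames in GPU memory):

    Code:
    ffmpeg -y -hwaccel qsv -c:v h264_qsv -i %1 -c:v h264_qsv -preset 1 -global_quality 30 -g 250 -c:a copy "%~dpn1.qsv_h264.ts"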

    Originally Posted by STilez View Post
    If it did make use of QSV, was it with lower quality, or could you get decent quality out of it?
    You can get whatever quality you want out of QSV -- just use more bitrate (higher quality) or less bitrate (lower quality). But at the same bitrate, CPU encoding with a good encoder and proper settings can deliver higher quality.

    But as noted before, peoples' perception of video quality varies. You may be happy with the results of GPU encoding. Maybe not.
  18. I think I'm getting there. Sorry if a lot of this is obvious to experienced encoders/rippers, but the descriptions in the forums often skip details I need to be sure I fully understand what's possible in HEVC encoding.

    So tl;dr, let me know if I've got the nuances now...

    As I understand it, yes, a GPU such as a 1660/1660 Ti, using NVENC, can indeed encode at any quality/CRF, including lossless and 2 pass, and do it fast and well (100+ fps). But if used for HEVC they'll either be lesser quality for a comparable bitrate, or produce larger files for comparable quality to CPU-only. But efficiency aside, the GPU versions are at least able to match CPU-only quality, and do it at a fairly fast rate (100+ fps according to Nvidia's performance data on Turing).

    So it's mainly, at this point, confirming that a GPU *can* meet arbitrary quality and can handle the more commonly needed HEVC parameters without needing an exceptionally powerful CPU - just that it won't do it at the same efficiency/bitrate. There'll in effect be a size vs. quality inefficiency penalty, but not an actual limit on quality per se.
    Last edited by STilez; 13th Oct 2019 at 08:57.
  19. That's right. Any encoder (of the ones we're discussing here) can deliver good quality. It's just a matter of how much bitrate it will need to deliver that quality.

    Note that NVEnc's 2-pass mode isn't true 2-pass encoding. It just looks ahead a bit more than their single pass encoding. A true 2-pass encoder examines the entire video in the first pass and builds a map of how much relative bitrate each portion of the video needs. Then in the second pass it uses that map to apportion bitrate throughout the entire video. NVidia's 2-pass mode works in small sections. It will not apportion bitrate as well as a full 2-pass encode.
  20.
    Originally Posted by jagabo View Post
    That's right. Any encoder (of the ones we're discussing here) can deliver good quality. It's just a matter of how much bitrate it will need to deliver that quality.

    Note that NVEnc's 2-pass mode isn't true 2-pass encoding. It just looks ahead a bit more than their single pass encoding. A true 2-pass encoder examines the entire video in the first pass and builds a map of how much relative bitrate each portion of the video needs. Then in the second pass it uses that map to apportion bitrate throughout the entire video. NVidia's 2-pass mode works in small sections. It will not apportion bitrate as well as a full 2-pass encode.
    Not a problem, but good to know. And thank you for the patience and help. I've learned a lot and feel OK now.

    Out of interest, can a true 2 pass encode be done at all using NVENC, if it's explicitly done as 2 separate calls to ffmpeg (as usual?) to gather stats and then run an encode pass, as opposed to, say, calling NVENC to run a 2 pass encode in one "action"?

    And what sort of level of inefficiency (%) should I expect for NVENC vs CPU, for say a "typical" movie (HD/4k, 2 hours, 23.98/25 fps, 10 bit encode, CRF say 19 ~ 25 ish)? A very rough idea is helpful - 10%/20%/30%/50% larger for comparable quality? I know it'll be rough as heck, and depends on a million other things, but any idea from your experience would be helpful. And thanks once again for your help in this thread.
    Last edited by STilez; 13th Oct 2019 at 10:31.
  21. I'm not an expert on NVENC but I don't think it's possible to perform a true 2-pass encode with it (or ffmpeg using it). I believe it lacks the ability to create a global bitrate map of the video in one pass, and the ability to use a global bitrate map in another pass.

    Regarding how much more bitrate you'll need to match a good software encoding -- that's hard to answer. It varies from generation to generation (the GPU encoders have gotten better over the years) and it depends on the particular video. I haven't seen a lot of examples with the latest Nvidia GPUs (and I don't have one). And usually what I see is someone posting a CPU and a GPU encode at the same bitrate (but no source to compare with). With those it's usually obvious that more of the small, low contrast detail is missing in the GPU encode. But I don't know how much more bitrate it would take for the GPU encode to match the CPU encode. My rough estimate is 25 to 50 percent more bitrate with modern GPUs. So if a CPU encode is able to get good quality at 10 Mb/s, a GPU encode would have to use 12.5 to 15 Mb/s to match it.
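    Applied to the 2000 kbps target discussed earlier, that rule of thumb would mean giving a GPU encode roughly 2500-3000 kbps. As an illustration only (flags not tuned, and hevc_nvenc obviously needs an Nvidia card whose NVENC supports HEVC):

    Code:
    ffmpeg -i input.mkv -map 0 -c:a copy -c:s copy -c:v hevc_nvenc -preset slow -b:v 2500k output.mkv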
  22. That's a lot.
    But yeah. It is what it is, I guess...

    Thank you greatly for all your help
  23. Member (United States)
    Originally Posted by jagabo View Post
    Within the same generation the video encoding is the same between the low end and high end cards. Ie, a Nvidia 1030 has the same video encoder as the 1080. 10 bit encoding gives smoother gradients than 8 bit encoding, even with 8 bit sources.
    The NVIDIA 10xx series cards don't follow the usual rule of thumb. NVIDIA GT 1030 cards don't support video encoding. All they can do is decode. The GTX 1050 cards are the least expensive of that generation supporting NVENC. GTX 1080 cards have 2 NVENC engines and are faster than the other cards in their generation if both NVENC engines are utilized. If only one NVENC engine is used, then it encodes no faster than the others.

    I don't know what is true for the later NVIDIA generations.