VideoHelp Forum
  1. Have been using Freemake for many years, mostly to crop off unwanted beginning junk and ending credits, converting the output to the same format as the input. Have always used 2 pass encoding, assuming it must provide better quality. Today I decided to find info regarding the difference. I see no options anywhere regarding file size or bit rate, yet forum posts have many references to these settings. Overly complicated for my purposes. The gist appears to be that 2 pass will, in most cases, provide better quality. Additional conversion time and resulting file size are of no consequence to me. I just want the best quality possible.
  2. Originally Posted by Jaysonbluejay View Post
    Have always used 2 pass encoding, assuming it must provide better quality.
    Well, yes and no, but mostly no. Assuming that by one pass encoding you mean quality based encoding (and not some kind of useless ABR encoding), one pass gives you the quality you set without regard to final file size. Two pass encoding gives you a set file size (or average bitrate) without regard to quality. A two pass encode of the same final size as a one pass encode will have about the same quality.
  3. My 2 cents - multi-pass encoding is able (or should be able) to allocate a limited number of bits in a more optimal way (disputable - it depends on the decision strategy).
  4. This may be more info than you wanted, but it's a good article:

    https://slhck.info/video/2017/03/01/rate-control.html
  5. x264 and x265 have two basic modes of encoding: bitrate and quality.

    With bitrate based encoding you select the bitrate and the encoder delivers whatever quality it can for that bitrate. With this method you know the final file size (because: file size = bitrate * running time) but you don't know the quality.

    With quality based encoding (CRF) you select the quality and the encoder uses whatever bitrate is necessary to achieve that quality. With this mode you know the final quality but you don't know the file size.

    Within bitrate based encoding there are one pass and two (or multi) pass modes. With a single pass the encoder doesn't know how much bitrate different parts of the video will require so it has to be conservative. It can't use too much bitrate for a particular shot because more bitrate may be needed for other shots. Any shot that uses more than the specified average bitrate takes away bitrate from other shots. With two pass encoding the encoder first examines the entire video to see how much bitrate will be needed for each scene (this is performed by encoding at constant quality) and that information is saved in a file (the stats file). During the second pass it uses that information to best allocate bitrate to each scene, while delivering the final average bitrate.

    2-pass VBR and CRF deliver about the same quality when they deliver the same file size. So basically, use CRF encoding when you want a known quality, use 2-pass VBR encoding when you want a specific file size.

    And keep in mind that you never know what bitrate is necessary for a particular video. Different properties require different bitrates.
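
    For reference, here is roughly how you choose between those modes with ffmpeg's libx264 (a minimal sketch; the file names, the CRF value and the bitrate are placeholders, not recommendations):

    Code:
    # Quality based (CRF): you pick the quality, the file size is unknown in advance
    ffmpeg -i input.mp4 -c:v libx264 -crf 18 -preset slow -c:a copy crf.mp4

    # Bitrate based (2-pass): you pick the average bitrate, the quality is unknown in advance
    ffmpeg -y -i input.mp4 -c:v libx264 -b:v 2000k -pass 1 -an -f null /dev/null
    ffmpeg -i input.mp4 -c:v libx264 -b:v 2000k -pass 2 -c:a copy 2pass.mp4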
  6. Originally Posted by jagabo View Post
    x264 and x265 have two basic modes of encoding: bitrate and quality.

    With bitrate based encoding you select the bitrate and the encoder delivers whatever quality it can for that bitrate. With this method you know the final file size (because: file size = bitrate * running time) but you don't know the quality.

    With quality based encoding (CRF) you select the quality and the encoder uses whatever bitrate is necessary to achieve that quality. With this mode you know the final quality but you don't know the file size.
    Nice explanation

    That's the world if you consider x264 and x265 only.

    But things became a bit more complicated when Google introduced its VP9. And the currently hyped AOM AV1 will probably behave similarly, as it's based on VP9's successor, VP10.

    In those "Google modes" you have three basic choices:
    • Bitrate mode
    • Constant quality mode
    • Constrained quality mode

    Bitrate mode is nearly the same as with x264 and x265. The difference is that for VP9, 2-pass encoding is suggested even for constant quality mode (instead of 1-pass). 1-pass is also possible but tends to produce lower quality.

    Constrained quality is newer and one of the VP9 improvements. For videos with low complexity it behaves very much like a constant quality encode. But as videos get more and more complex, it behaves more and more like a bitrate mode encode. For that purpose you define an upper bitrate limit; encoding of very complex videos will behave as in bitrate mode with that upper limit as the target bitrate. 2-pass is suggested for this type of encoding, too.
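
    Roughly, this is how the three modes map onto ffmpeg's libvpx-vp9 options (a minimal sketch following the ffmpeg VP9 wiki; names and numbers are placeholders):

    Code:
    # Bitrate mode (two passes, as with x264/x265)
    ffmpeg -y -i in.mp4 -c:v libvpx-vp9 -b:v 2M -pass 1 -an -f null /dev/null
    ffmpeg -i in.mp4 -c:v libvpx-vp9 -b:v 2M -pass 2 -c:a libopus out.webm

    # Constant quality mode (-b:v 0 is required, otherwise you get constrained quality)
    ffmpeg -i in.mp4 -c:v libvpx-vp9 -crf 30 -b:v 0 -c:a libopus out.webm

    # Constrained quality mode (CRF plus an upper bitrate limit)
    ffmpeg -i in.mp4 -c:v libvpx-vp9 -crf 30 -b:v 2M -c:a libopus out.webm

    (As noted above, the quality modes can also be run as 2-pass by adding -pass 1 / -pass 2.)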
  7. That constrained quality looks more like CRF (1-pass quality) with the peaks cut off. That is set by limiting buffers.

    And those buffers should always be set, because we do not want the bitrate to skyrocket - especially with 4k encoding nowadays. I was always trying to bring it up as the normal way of CRF encoding, but folks tend to be scared off by the mere mention that bitrate should be limited when using CRF. It always should be, depending on what the max bitrates are (the quality would not be visually much better anyway). Of course you should know what you are doing - whether, say, 10 Mbit is too low when encoding HD video, etc.

    This way you can even encode lower bitrates to simulate CBR, or almost. Another way to see it is that you can proportionally increase the bitrate for those scenes that are not complex but where banding could be introduced. The video needs to be tested with these peaks cut off. Videographers mostly work with the same type of videos, so after some practice one gets the hang of it. See the sketch below.
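
    In x264/ffmpeg terms that is simply CRF plus VBV (a sketch; the numbers are illustrative and depend on your delivery target):

    Code:
    # CRF sets the overall quality, -maxrate/-bufsize cut off the bitrate peaks
    ffmpeg -i in.mp4 -c:v libx264 -crf 18 -maxrate 30M -bufsize 60M -c:a copy out.mp4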
  8. Originally Posted by _Al_ View Post
    That constrained quality looks more like CRF (1-pass quality) with the peaks cut off. That is set by limiting buffers.
    Nope.

    Constrained quality is completely different from the usage of buffers. See e.g.: https://trac.ffmpeg.org/wiki/Encode/VP9#constrainedq

    It's a transition between constant quality and bitrate mode instead.

    Buffers have their advantages but can easily starve short complex scenes. With constrained quality that won't happen.
  9. I can understand that gradual process, even if it does not look important, because those cut-off limits are usually not visible anyway - they are at the max bitrates.


    I do not get that last sentence though. That cut-off happens at a max limit set by someone who knows what he is doing - that bitrate is the highest at which the video should be delivered. If the quality is not enough, the limit is too low and was set badly. That happens when encoding internet streams etc., but that is by design, because the bitrate is strict; it should not happen for quality video delivery. Say you need to get an HD video out there but limit it to 30 Mbit. Those 30 Mbit scenes are not the scenes with the problems, even if such a scene would need 50 Mbit for its particular quality quantizer. Almost every time, what visually strikes you are the scenes with low bandwidth set by the quantizer because of low complexity - dark areas, low-light scenes - where banding is introduced. Everyone would notice banding. But the video looks very much OK at 30 Mbit, even if shooting some tree with a billion flickering leaves where the scene would need 50-60 Mbit. It is just fine at that top limit.
  10. Originally Posted by _Al_ View Post
    I can understand that gradual process, even if it does not look important, because those cut-off limits are usually not visible anyway - they are at the max bitrates.
    Mostly they aren't visible, but sometimes they are. E.g. Youtube does "aggressive" buffering.

    I once saw a very nice example of that and will make the effort to look for it. I.e.: I'll provide an example demonstrating the difference between buffering with complex scenes and the same scenes with constrained quality.
  11. They need to encode fast, pronto - that might mean a different level of encoding, something like the very fast modes of x264, and keeping an unnecessarily high resolution for that bitrate. It is all wrong in the first place.
    Maybe it is not going to take long at all before these values (resolution and some others) can change from frame to frame for some codecs. VapourSynth, I think, is already set up in a way that can produce something like that.
  12. Originally Posted by fornit View Post
    Buffers have their advantages but can easily starve short complex scenes. With constrained quality that won't happen.
    Nope - it is no different from live broadcast encoding with any sane codec from at least the last 20 years - you have a limited buffer and a limited maximum bitrate that can fill the buffer - so for peaks the codec must make sure not to exceed the allowed bitrate, and normally to fill the buffer without over-filling or under-filling it.
    For example, your TV programme is delivered within such constraints - the broadcaster cannot go over the bitrate allowed by the digital modulation (an extreme case is 8-VSB in the United States, where a transponder can carry exactly and only 19.3804 Mbps) - so it can either use a CBR strategy (but this wastes bits on less complex scenes) or go for VBR, encoding no higher than the cap and fitting within the buffer limit. Some encoders (and muxers) refine this to the point called statistical multiplexing, where the actual channel bitrate is controlled dynamically across many services (so, for example, dynamic and complex scenes may "borrow" bitrate from less complex programmes - film or sport channels are commonly placed on the same transponder as almost-static channels like journalist interviews).
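
    As a rough illustration, a broadcast-style CBR encode can be approximated with x264 like this (a sketch based on the ffmpeg H.264 wiki; the bitrate and buffer are illustrative, not real transponder numbers):

    Code:
    # nal-hrd=cbr signals CBR in the stream; -minrate/-maxrate/-bufsize pin the rate to the channel
    ffmpeg -i input.ts -c:v libx264 -x264-params "nal-hrd=cbr" -b:v 4M -minrate 4M -maxrate 4M -bufsize 2M -c:a copy output.ts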
  13. Well.

    We did some research last year because we built our own (small) tube system, which can be found at https://www.team-andro.com/tube

    Video conversion was my part. Of course I looked more deeply at what the big players are doing, and at Youtube I stumbled over some things. One of those was the buffering.

    Have a look at the following video and look a bit closer at the first 17 seconds:
    https://www.youtube.com/watch?v=5idr7K4Wwhg

    This is one frame of the 480p Youtube x264 version:

    [Attachment 45933]


    If you encode the same video at 480p with x264 crf 26 you'll get nearly the same average bitrate. But that specific frame looks very different:

    [Attachment 45934]


    Why does this happen? It's caused by the buffering! You can see what happens if you analyze the Youtube video with the tool "Bitrate Viewer". Here are the findings for the 720p version:

    [Attachment 45935]


    The graphic is GOP based, i.e. you see how much bitrate is spent within intervals of about 5 seconds each. Obviously there's a limit of around 2400 kbit that cannot be exceeded within 5 seconds. But during the first 17 seconds some more bitrate would surely look better.

    Now let's have a look at the bitrate distribution of the x264 crf 26 encode (it's GOP based with an interval of around 5 seconds, too):

    [Attachment 45936]


    You can see that the bitrate distribution is completely different here. There is a very high peak at the beginning, reflecting the complexity of that first scene.

    And that's what I mean when I say that an "aggressive buffering" strategy can waste quality.

    Please don't misunderstand me: I'm far from saying that the guys at Youtube aren't doing a great job. But they have their goals. Quality is important, but they seem to have other goals, too. Obviously a more constant bitrate overrules quality here.


    Originally Posted by pandy View Post
    Nope - it is no different from live broadcast encoding with any sane codec from at least the last 20 years - you have a limited buffer and a limited maximum bitrate that can fill the buffer
    I absolutely agree. But there are 2 points you should consider:
    • It makes a difference whether you deliver "video on demand" or live streaming
    • Many things have changed over the last 20 years

    Nowadays "video on demand" is usually offered in form of a dash stream. What can happen if you encode with bigger buffers and the HTML5 player notices that speed is too low to fill the buffer fast enough? Then it will simply do what it is designed for and will jump one quality step down. But the video won't stop to play.


    Anyway: if you need a high quality encode and you don't need to stream your video with bandwidth restrictions, then it's best to avoid buffer limits altogether. Or does somebody have a different opinion about that?
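
    For anyone who wants to reproduce the comparison: the two encodes above roughly correspond to the following commands (a sketch - YouTube's real settings are unknown, and the VBV numbers are made up to mimic an aggressive cap like the one visible in Bitrate Viewer):

    Code:
    # Unconstrained CRF encode (the second screenshot)
    ffmpeg -i source.mp4 -vf scale=-2:480 -c:v libx264 -crf 26 -an crf26.mp4

    # The same CRF with an aggressive VBV cap, which flattens the peak at the beginning
    ffmpeg -i source.mp4 -vf scale=-2:480 -c:v libx264 -crf 26 -maxrate 600k -bufsize 1200k -an capped.mp4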
  14. Yes, the cut-off of an internet stream could be visible and terrible in extremes like that. But there is no choice: you cannot NOT limit buffers for an internet stream if you want a decent stream. You can skip the limit - nowadays most people would play it without any problem - but if your target is someone older with a very old computer and a very slow connection, the video would break up for them. Maybe it is time to forget about that. Who knows.
    Another issue: if you host those videos, your provider will let you know - they hate video streaming. The terms say unlimited this and that, but they would bring it up anyway and service could be interrupted. Video sizes can balloon if you don't restrain the bitrate.
    Originally Posted by fornit View Post
    Anyway: if you need a high quality encode and you don't need to stream your video with bandwidth restrictions, then it's best to avoid buffer limits altogether. Or does somebody have a different opinion about that?
    I agree, as a matter of fact - I even mentioned that briefly. I did not take this to be a discussion about internet stream delivery only. It depends how you set up that cut-off; for regular video it'd be much higher. As I mentioned, folks are starting to encode 4k videos. It must be limited, otherwise you can get insane bitrates for videos coming from consumer camcorders with huge DOF where everything is sharp.
  15. Originally Posted by fornit View Post
    I absolutely agree. But there are 2 points you should consider:
    • It makes a difference whether you deliver "video on demand" or live streaming
    • Many things have changed over the last 20 years

    Nowadays "video on demand" is usually offered in form of a dash stream. What can happen if you encode with bigger buffers and the HTML5 player notices that speed is too low to fill the buffer fast enough? Then it will simply do what it is designed for and will jump one quality step down. But the video won't stop to play.


    Anyway: if you need a high quality encode and you don't need to stream your video with bandwidth restrictions, then it's best to avoid buffer limits altogether. Or does somebody have a different opinion about that?
    Buffer is unavoidable - normally your goal is to reduce the buffer as much as possible; on the other hand, a large buffer allows you to hide some anomalies (in connectivity and/or source complexity).

    From YT's perspective they surely must limit bitrate, as providing millions of connections leads to an unavoidable clash with technology limitations. Assuming a single feed of around 2 Mbps, then for 1000 connections you must provide 2 Gbps, and YT is probably feeding tens if not hundreds of millions of connections at the same moment. This requires enormous network capabilities even when spreading the load over different geographical locations - at some point your backbone will be saturated. So the YT strategy is a moderately sized buffer with a relatively low bitrate (I mean, 2 Mbps for 1080p H.264 is feasible, but at a cost in encoding time or quality loss).

    A partial solution for this is scalable encoding (video/audio) - the technologies exist but they are not particularly popular. DASH is a rather crude workaround for technology limitations - it works, but... in corporations like YT the leading voice surely does not belong to the engineers.
  16. Originally Posted by pandy View Post
    Buffer is unavoidable
    If we talk about video streaming: YES
    When I downsize a Blu-ray rip to watch it on my PC later: NO

    Hope that's acceptable for you?

    Originally Posted by pandy View Post
    A partial solution for this is scalable encoding (video/audio) - the technologies exist but they are not particularly popular.
    That sounds interesting - what's the technology you refer to?
  17. Originally Posted by fornit View Post
    If we talk about video streaming: YES
    When I downsize a Blu-ray rip to watch it on my PC later: NO

    Hope that's acceptable for you?
    You should understand how a modern video codec works, what the DPB is, and how bitrate is shaped across time - this is not something for me to accept or reject - feel free to raise your ideas with the patent office - it would be a breakthrough for video encoding. You always need a buffer - you can argue about how big a buffer is required, but you need to collect bits before the decoding process can start.

    Originally Posted by fornit View Post
    That sounds interesting - what's the technology you refer to?
    A first example: https://en.wikipedia.org/wiki/Scalable_Video_Coding
  18. Take a look at the Blu-ray of The Martian, for example. I think the way it was encoded might be something like that: it goes to a certain limit and then it is cut off. It must be - it is Blu-ray, it just cannot go higher, or they set a certain ceiling for the encoding. And you cannot see any problems with that video; it is just OK. Everything must have a certain limit, whether for hardware (Blu-ray) or where sane limits are needed (encoding 4k).

    Example: you can encode 1280x720 for yourself at about CRF 18; certainly you set no limits, or very high buffers are OK, so technically it never reaches them. If you encode 4k on the other hand - in theory the same movie distributed as 4k - watch the bitrates. It just might go too far. So there are cases (higher resolutions, hardware limitations, web) where the limit must be set somewhere on "the edge". As soon as you do that, you can afford a bit lower quantizer (higher quality), because the peaks are being cut off anyway. And that is huge at the same time: no banding and no problems with scenes where x264 is not distributing enough bitrate.

    There is no simple "this or that is better".
  19. Originally Posted by pandy View Post
    You should understand how a modern video codec works, what the DPB is, and how bitrate is shaped across time
    To be honest, I'm not sure whether we are talking about the same things. From my simple understanding the Decoded Picture Buffer (DPB) should be considered when setting Re-Frames as there are restrictions with certain hardware playback devices. But usually (with x264) the number of Re-Frames depends on the profile used. And it's common knowledge which devices will support which profiles.

    Please correct me if I'm wrong. I'm always happy if I can learn.

    Originally Posted by pandy View Post
    You always need a buffer - you can argue about how big a buffer is required, but you need to collect bits before the decoding process can start.
    Again I'm not sure whether we are talking about the same things. If you set a 60 second buffer in ffmpeg (e.g. -maxrate 1000k and -bufsize 60000k), this definitely does NOT mean that the HTML5 player will spend 60 seconds buffering before it starts the video. And it also doesn't mean that the video won't start before 60000k have been buffered.

    Instead the video will start immediately, as the amount of buffering doesn't depend on your encoding and probably not even on your HTML5 video player. The browser decides how much to buffer.

    That's why, from my understanding, buffering in the HTML5 world only influences the bitrate distribution of the video and nothing more. If you don't set a buffer at all, your bitrate will simply be less constant. That's all.
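
    To make that concrete, this is what such a constraint looks like on an ffmpeg command line (a sketch using the numbers from the paragraph above):

    Code:
    # -maxrate/-bufsize constrain how the encoder shapes the bitstream (the VBV model);
    # they do not tell a browser or HTML5 player how long to pre-buffer before playback
    ffmpeg -i in.mp4 -c:v libx264 -crf 23 -maxrate 1000k -bufsize 60000k out.mp4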


    You seem to know much about video encoding. So I guess you have something different in mind when you talk about buffering. Maybe you can explain what you mean. That would be helpful for me.
  20. Originally Posted by fornit View Post
    To be honest, I'm not sure whether we are talking about the same things. From my simple understanding the Decoded Picture Buffer (DPB) should be considered when setting Re-Frames as there are restrictions with certain hardware playback devices. But usually (with x264) the number of Re-Frames depends on the profile used. And it's common knowledge which devices will support which profiles.

    Please correct me if I'm wrong. I'm always happy if I can learn.
    Buffers are present at various levels; they buffer data to achieve certain targets. The DPB is one example of why you can't design a modern, efficient codec without buffers. And the DPB is an extreme case - the bitrate buffer sits at a different level and is present because it serves and provides a solution for another area.
    Only a pure intra codec without bitrate limitations could be designed to use no buffer larger than a single frame. However, the bitrate efficiency of intra coding usually makes such codecs infeasible for consumer-class devices and applications; they are focused on quality, not on bitrate - in large scale deployment, bandwidth is more costly than quality degradation.

    Originally Posted by fornit View Post
    Again I'm not sure whether we are talking about the same things. If you set a 60 second buffer in ffmpeg (e.g. -maxrate 1000k and -bufsize 60000k), this definitely does NOT mean that the HTML5 player will spend 60 seconds buffering before it starts the video. And it also doesn't mean that the video won't start before 60000k have been buffered.

    Instead the video will start immediately, as the amount of buffering doesn't depend on your encoding and probably not even on your HTML5 video player. The browser decides how much to buffer.

    That's why, from my understanding, buffering in the HTML5 world only influences the bitrate distribution of the video and nothing more. If you don't set a buffer at all, your bitrate will simply be less constant. That's all.


    You seem to know much about video encoding. So I guess you have something different in mind when you talk about buffering. Maybe you can explain what you mean. That would be helpful for me.
    Do the math: if you have a 60 sec buffer, your target bitrate is 1000 kbps and your link bandwidth is 1100 kbps (10% overhead is quite a fair assumption), then in the worst case, when the first I frame is 60000 kbit large, you need to wait about 60 seconds before the decoder can start decoding. The decoder needs to wait for the end-of-data indicator - some magic signal that tells it "this is all the data for the first frame" - then some blocks in the decoder may perform data integrity checking, and if there are no errors they can pass the information on to another block of the decoder: "all data are present and they are OK" (or not, and then you should be careful during decoding). Of course this is a trivialization of how things are done; usually some steps can be performed in parallel so latency is minimised.
    You keep referring to the (quite idealistic) case where the decoder has unlimited bandwidth available (unlimited when comparing the storage bitrate to the stream bitrate), but in real life even that situation may not occur too frequently on the broadcaster's side. You should analyse the situation for trick modes, where there are plenty of non-reference frames in the stream - how fast the data needs to be acquired and decoded. This pushes your local storage (and decoder) into a situation where your capabilities are usually not enough (unless you have HW capable of performing decoding at a speed of fps*fps, i.e. a 60 fps stream can be decoded 60 times faster than 60 fps - your HW resources must be capable of decoding 3600 frames per second; this is of course an example based on certain assumptions, but it shows a situation where even your local resources can easily reach saturation and expose the issue of limited capabilities).
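
    Spelled out with the numbers from the first paragraph:

    60 s x 1000 kbps = 60000 kbit of buffer
    60000 kbit / 1100 kbps ≈ 55 s before that first frame is fully received and decoding can begin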
  21. Originally Posted by _Al_ View Post
    Everything must have a certain limit, whether for hardware (Blu-ray) or where sane limits are needed (encoding 4k).
    I don't have much experience with the encoding (or downsizing) of UHD Blu-rays so far. I bought one at the beginning of last year (Oblivion - great film, I love it). But at that point in time there was no software available to copy the main film to my hard disk for testing purposes.

    At the beginning of this month I looked at the topic again and could do my first tests. The m2ts file has a size of about 62 GB and a bitrate of 68.3 Mb/s (including 5 audio tracks).

    The main challenges have been HDR related: keep the HDR properties, or convert to SDR without flattening the colours.

    I did one 4k HDR encode at crf 18 HEVC (with 2 language tracks, simple AAC at 192k each). The result was a file with a size of 7.3 GB and an average bitrate of 8,088 kb/s in total. So, much smaller than the original but with very fine quality.

    If you set buffers here at, e.g., a maxrate of 20 Mb/s and a bufsize of 40 Mb, then I don't think it could affect quality in a noticeable way.
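
    For orientation, an encode along those lines might look like this (a sketch - preset, CRF and the VBV numbers are illustrative, and the HDR10 mastering metadata would additionally have to be copied from the source via -x265-params master-display/max-cll):

    Code:
    # Keep 10 bit and the HDR transfer characteristics, cap the peaks with VBV
    ffmpeg -i movie.mkv -map 0:v -map 0:a:0 -map 0:a:1 \
      -c:v libx265 -preset slow -crf 18 -pix_fmt yuv420p10le \
      -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
      -maxrate 20M -bufsize 40M \
      -c:a aac -b:a 192k out.mkv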

    On the other hand it's no problem to play the original m2ts file directly with VLC or the Windows 10 video player. And looking at VLC's bitrate statistics, it seems the original file consistently produces a much higher bitrate than my encode.


    Do you have much experience with UHD Blu-ray encoding? Maybe we can share some hints then.
  22. Originally Posted by fornit View Post
    Do you have much experience with UHD Blu-ray encoding? Maybe we can share some hints then.
    Only Blu-ray. I did not mean UHD releases but rather our own 4k videos, coming from all kinds of sources - 4k camcorders, including drones, etc. The footage is sharp and not steady; it takes much more bitrate as opposed to movie releases. Your example was going in that direction as well.

    If I can generalize: blockbuster releases do not cause the encoder to distribute too much bitrate, except if there is too much noise. The problem is mostly on the other end - not enough bitrate for gradients, low light, dark areas.
  23. ABR encoding sucks compared to CRF or 2 pass, which for x264 encode in much the same way. For ABR the bitrate needs to be high enough that you can't see the encoder adjusting the quality as it encodes to hit a target bitrate. If the bitrate is ridiculously low, 2 pass might be a little better than CRF, because CRF has to guess the I-frame bonus, although CRF doesn't have to correct itself to hit a target bitrate as 2 pass does. Realistically though, 2 pass and CRF are pretty much the same quality at the same bitrate, as long as it's not ridiculously low.

    Of course the x264 VBV settings specified can also put the brakes on the bitrate regardless of the encoding method.
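
    Side by side, the methods discussed here look like this with ffmpeg/x264 (a sketch; the values are illustrative):

    Code:
    # One-pass ABR: the encoder guesses from the first frame and corrects itself as it goes
    ffmpeg -i in.mp4 -c:v libx264 -b:v 2000k abr.mp4

    # CRF with VBV: constant quality, but the brakes go on whenever the cap is hit
    ffmpeg -i in.mp4 -c:v libx264 -crf 20 -maxrate 4M -bufsize 8M crf_vbv.mp4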

    Here's a similar comparison. The bit distribution was similarly not ideal for the ABR encode. https://forum.videohelp.com/threads/371187-Testing-NVENC-with-the-GTX-960#post2384552

    Here's a summary of x264's rate control methods. I don't think what you're seeing with Bitrate Viewer is buffer related. It's just that ABR encoding has to start guessing from the first frame and keep adjusting the quality when it's wrong.
    https://forum.videohelp.com/threads/381668-would-it-make-more-sense-to-use-1-pass-enco...65#post2470457

    None of that may be applicable to YouTube and how they manage to turn everything to mush.