I am trying to learn video encoding with the ffmpeg CLI tool,
and found something that is confusing me.
Will anyone explain these things with a good example?
I want to encode a video whose resolution is:
Width = 480
Height = 240
Video length is 60 seconds (1 min).
I want to encode it at a variable bitrate of 160kbps,
which can fluctuate up/down with scene complexity:
in simple sections it should be low kbps,
and in complex sections it should be high kbps.
And I don't want to do 2-pass encoding.
Maybe this is called constrained encoding?
The main thing that confuses me is how the bitrate options work.
And don't give CRF examples,
just give bitrate examples.
Sorry for my bad English.
-b:v specifies average bitrate encoding. It's a variable bitrate, but it's lower quality than 2 pass or CRF.
For 2 pass encoding you specify the bitrate and the first pass is used to determine how to distribute the bits for a constant quality throughout.
For CRF encoding it's much the same as 2 pass except you specify the quality rather than the bitrate, so a first pass isn't needed.
For average bitrate encoding, the encoder can only guess how to distribute the bits as it encodes. It'll still use more bits in complex sections etc, but the quality is constantly adjusted as the encode progresses in order to achieve the specified bitrate. Sometimes, especially for short encodes, it'll start off with a bad guess and you'll see the quality change.
-minrate, -maxrate & -bufsize limit the bitrate. They can be used with ABR, CRF and 2 pass encoding. They're mainly used for streaming to prevent the bitrate fluctuating too much (in which case they can reduce the quality) or to limit the bitrate to ensure it won't exceed the abilities of a particular decoder. Don't use them unless you need to. Just specify a bitrate.
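To make that concrete, here's a sketch in ffmpeg terms. The file names and the maxrate/bufsize numbers are my own illustrative values, not from the post above:

```shell
# Plain average bitrate (ABR) encode at the 160 kbps from the question.
# input.mp4 / output.mp4 are placeholder names.
ffmpeg -i input.mp4 -c:v libx264 -b:v 160k -c:a copy output.mp4

# Constrained variant -- only if you actually need to cap the rate
# (e.g. for streaming). The 200k/320k values are illustrative, not
# recommendations.
ffmpeg -i input.mp4 -c:v libx264 -b:v 160k -maxrate 200k -bufsize 320k \
    -c:a copy constrained.mp4
```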
Thanks for the reply.
I know how to do 2-pass encoding and constant bitrate encoding,
but I don't know anything about how to do average bitrate encoding with ffmpeg.
Will you give an ffmpeg code example of how I can do average bitrate encoding?
The bitrate unit must be in kbps, not Mbps, because I understand kbps better 😅
How do I choose -bufsize, and what is the role of bufsize?
Will my bitrate increase above maxrate, or drop below minrate, while playing the video?
What will happen if I set bufsize too high or too low?
Is bufsize inside the video player?
And what is the Video Buffering Verifier?
In the screenshot, is that white line the bufsize?
-minrate, -maxrate & -bufsize limit the amount the bitrate would normally vary when using one of the variable bitrate modes. If the bitrate needs to increase for a complex scene but your -maxrate and -bufsize limit that increase, then they're limiting the quality. Don't use them unless you know you need to.
-bufsize is related to a decoder's video frame buffer.
-maxrate should set a hard limit for the bitrate. -bufsize (the way I understand it) limits the total bitrate over a small group of decoded frames so they'll fit in a player's buffer. Don't use them unless you know you need to.
-maxrate & -bufsize are the video buffering verifier settings. Don't use them unless you know you need to, which you never will for an average bitrate of 160kbps.
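In ffmpeg terms, my understanding is that it boils down to something like this (input/output names are placeholders):

```shell
# Average bitrate encoding: just specify -b:v, nothing else.
ffmpeg -i input.mp4 -c:v libx264 -b:v 160k output.mp4
```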
That's all you need to enable average bitrate encoding.
Uhmm, this -bufsize term is too difficult to understand :/
Btw, of CBR, ABR, and 2-pass encoding, which gives the best video quality with the best compression?
And which method is faster to encode?
I read somewhere that 2-pass takes about twice as long as CBR.
I don't know about ABR.
I'm also thinking of encoding some movies.
Which method would be good?
And which video/audio codec gives the best compression?
I think h265 (HEVC) for video,
and HE-AAC v2 for audio,
but some websites say VP9.
For codecs like h264, not all frames are independently encoded. Many frames rely on information from preceding frames, or even from the following frames, in order to be decoded properly. Therefore, to decode a frame, often the decoder also has to decode some surrounding frames in order to obtain the information it needs for the frame you want.
The way I understand it, those frames must be decoded and stored in a player's buffer. As you play a video, frames are decoded and sent to the buffer until they're displayed or no longer needed. Frames are constantly entering and leaving the buffer. -bufsize is used to control the total bitrate of each group of frames stored in the buffer. It's dependent on things like the number of "reference" frames, the level a player supports, and the frame rate and resolution.
For the chart I linked above, the numbers are macroblocks, not bitrate as such, which confuses me, but for a player that fully supports level 4.1, which is very common today, the VBV settings are (x264 command line, kbps):
--level 4.1 --vbv-bufsize 78125 --vbv-maxrate 62500
Often players don't support a level fully though. For example, the official VBV settings for HD bluray are:
--level 4.1 --vbv-bufsize 30000 --vbv-maxrate 40000
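As a rough sanity check on those numbers (my reading of VBV, not an official definition): bufsize divided by maxrate approximates how many seconds of maximum-rate video fit in the decoder's buffer.

```shell
# Blu-ray VBV numbers from above: 30000 kbit buffer, 40000 kbps max rate.
# 30000 / 40000 = 0.75, i.e. the buffer holds roughly 0.75 s of video
# at the maximum rate.
awk 'BEGIN { printf "%.2f\n", 30000 / 40000 }'   # prints 0.75
```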
The other use for maxrate and bufsize is to limit the bitrate for streaming, where you mightn't want the bitrate to get too high.
For the bitrate and resolution you're working with, I can't imagine why you'd ever need them.
My main interest is quality, as I only encode video for myself, so I always use CRF encoding. The resulting bitrate will be higher than average for harder to compress video and lower than average when it's easy to compress, but for a given CRF value with the same encoder settings it's the average bitrate that varies, not the quality.
I still think x264 is the best codec for standard and high definition. For UHD, x265 is no doubt better. x265 is slower than x264 though.
AAC-LC is almost universally supported. It's all I ever use as I'm not concerned about ultra-low audio bitrates. If you are then HE-AAC is probably the way to go.
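Putting those two suggestions together, a sketch of what such an encode might look like (file names, the CRF value, and the audio bitrate are illustrative; swap -crf for -b:v if you'd rather specify a bitrate):

```shell
# x265 for video, AAC-LC for audio (ffmpeg's built-in aac encoder).
ffmpeg -i input.mp4 -c:v libx265 -crf 22 -c:a aac -b:a 128k output.mp4
```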
Here are some screenshots from an old thread comparing x264's ABR and 2 pass encoding for a short, complex video. The same average bitrate was used each time. It illustrates what can happen when ABR has to guess. The first scene was quite complex, so ABR gave the first frame a whole lot of bits, then had to drop the quality so it wouldn't exceed the specified average bitrate, and by frame 50 the whole thing had turned to crap.
[Attachment 31117]
2 pass had the benefit of a first pass.
[Attachment 31118]
By frame 2000, where the picture was easier to encode, ABR was using the bits it saved, which would have been better spent earlier, so the ABR quality was a little higher.
[Attachment 31125]
[Attachment 31126]
How ABR distributed the bits over the 2500 frames encoded (shown as the bitrate for groups of frames).
[Attachment 31174]
How 2 pass spent them.
[Attachment 31176]
And for a CRF value resulting in much the same average bitrate as ABR and 2 pass, CRF distributed the bits much like 2 pass did.
[Attachment 31177]
Here's the old thread if you're interested.
Which app did he use to see the bitrate chart?