Hi all, this is my first post.
I'm using a 2-pass encoding process with x264 for my BD compilations.
My videos are from various sources, and I have full-motion content along with static slide shows on the same BD.
My way of calculating the required average bitrate takes only duration into account: I divide the BD-RE capacity by the duration of the whole compilation. Fine.
Then I encode every video at that same average bitrate, and everything goes well.
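That duration-only budget can be sketched as follows (the 450 minutes and 22 GB figures are taken from later in the thread; they're illustrative, not exact):

```python
# Sketch of the duration-only bitrate budget (illustrative numbers).
capacity_gb = 22          # usable BD-RE capacity (assumed)
total_minutes = 450       # total duration of the compilation (assumed)

capacity_bits = capacity_gb * 8 * 10**9   # GB -> bits (decimal gigabytes)
duration_s = total_minutes * 60

avg_kbps = capacity_bits / duration_s / 1000
print(round(avg_kbps))    # average bitrate budget in kbit/s
```

Every track gets this same average, regardless of how complex its content is, which is exactly the problem described below.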
Now the question: is there a way to distribute the data better with x264, taking into account the complexity of the different kinds of footage?
In other words, a simple slideshow from a conference needs less data to encode than a full-motion video shot with my camera, but with my method they end up with the same average bitrate.
One solution would be to concatenate all the content into a single file *before* the H.264 encoding process, so the 2-pass algorithm can distribute the bitrate efficiently, but then I have problems with audio syncing and BD authoring. Furthermore, the videos sometimes have different frame rates.
Using a -crf setting in 1 pass does just that: it defines a certain level of quality and automatically adapts the bitrate according to the complexity of the transcoded footage.
From what I've read, a 1-pass encoding at a given -crf (Constant Rate Factor) is visually identical to a 2-pass encoding set to the same average bitrate value, meaning that the bitrate distribution algorithm is efficient enough to make 2-pass encoding superfluous, unless one needs to output a particular file size.
Depending on whether your priority is to preserve the source quality or to save storage space (you can't have both), the general consensus recommends using a -crf value between 18 and 23: below 18, the gain in visual quality becomes negligible while the bitrate and file size increase significantly; beyond 23, the quality loss starts to become noticeable.
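A CRF encode along those lines might be invoked like this (a sketch, not run here; the filenames are hypothetical, while `--crf` and `--preset` are standard x264 options):

```python
# Build (not run) an x264 CRF command line; filenames are hypothetical.
crf = 20                       # middle of the suggested 18-23 range
cmd = [
    "x264",
    "--crf", str(crf),         # constant-quality mode: bitrate adapts to complexity
    "--preset", "slow",        # slower preset = better compression at a given quality
    "-o", "output.264",
    "input.avs",
]
print(" ".join(cmd))
```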
Thank you for the answer.
Of course this is a great solution for constant quality, but when authoring a BD I always try to fill the disc capacity in order to maximize quality.
Most of the time I have more than 450 total minutes, so a decent CRF encoding would easily overshoot 22 GB.
What I'm looking for is a way to make x264 do its 2-pass bitrate calculations over the whole compilation, while still encoding each video track separately (if something like that exists).
Last edited by AlessandroM; 30th Oct 2018 at 15:32.
But I could partially use your solution.
I could CRF-encode the mostly static content, get the total size, and dedicate the rest of the BD to the full-motion videos.
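That split could be budgeted like this (a sketch; the slideshow size and motion duration are made-up numbers, not figures from the thread):

```python
# Hypothetical numbers: CRF-encode the slideshows first, then budget the rest.
disc_gb = 22.0
slideshow_gb = 3.0            # measured size of the CRF-encoded static content
motion_minutes = 300          # remaining full-motion footage

remaining_bits = (disc_gb - slideshow_gb) * 8 * 10**9
motion_kbps = remaining_bits / (motion_minutes * 60) / 1000
print(round(motion_kbps))     # 2-pass target bitrate for the full-motion videos
```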
Warning: to make Blu-ray-compliant material, you should use CRF mode for the mainly still material together with the typical VBV restrictions. If you burn it onto a Blu-ray disc, too low a bitrate may cause issues reading it from the optical disc at the correct speed, and playback may fail with a "buffer overflow" error.
Yes, it sounds paradoxical: too low a bitrate causes an overflow, because more than one GOP might be loaded into the decoding buffer.
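For reference, the VBV restrictions mentioned above would look something like this on the x264 command line (a sketch only; the 40000/30000 values are the commonly cited Blu-ray H.264 limits, and the filenames are hypothetical; verify against your authoring tool's requirements):

```python
# Sketch: a CRF encode with the commonly cited Blu-ray VBV restrictions.
cmd = [
    "x264", "--crf", "18",
    "--vbv-maxrate", "40000",   # peak bitrate cap in kbit/s
    "--vbv-bufsize", "30000",   # decoder buffer size in kbit
    "--bluray-compat", "--level", "4.1",
    "-o", "slideshow.264", "slideshow.avs",
]
print(" ".join(cmd))
```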
Then perhaps appending all the sequences in an Avisynth script (temporarily converting the FPS if needed), running a dummy first pass with that script to somehow obtain a CRF value that would result in a global bitrate close to the target average bitrate, then using that value for each individual sequence... I don't have much experience with 2-pass encoding, so I don't know of a tool or method to do that. I've read that when encoding in 2 passes, x264 internally does just that: it runs an analysis pass, then uses the statistics to define the rate factor for the actual encoding (this may be wrong).
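One rough shortcut toward such a CRF value, not mentioned in the thread but often quoted as a rule of thumb, is that x264 output size roughly halves for every +6 CRF. A single trial encode then gives an estimate (approximate only; real content deviates from this):

```python
import math

# Rule of thumb (approximate): x264 file size roughly halves per +6 CRF.
def estimate_crf(trial_crf, trial_gb, target_gb):
    """Estimate the CRF that would hit target_gb, given one trial encode."""
    return trial_crf + 6 * math.log2(trial_gb / target_gb)

# Hypothetical: a trial at CRF 18 came out at 30 GB, target is 22 GB.
print(round(estimate_crf(18, 30.0, 22.0), 1))
```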
For example, say you have two segments. You encode both at CRF 10. One turns out 40 GB, the other 10 GB, for a total of 50 GB. Now you know you have to use half the bitrate on each segment to fit the video in 25 GB.
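The arithmetic in that example is just a per-segment scaling factor:

```python
# Scale each segment's trial bitrate by the same factor to hit the disc size.
target_gb = 25.0
trial_sizes_gb = [40.0, 10.0]         # the two CRF 10 trial encodes

scale = target_gb / sum(trial_sizes_gb)
print(scale)                          # 0.5: use half the bitrate on each segment
```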
Last edited by jagabo; 30th Oct 2018 at 23:15.