VideoHelp Forum

  1. I'm trying to understand LWLibavVideoSource better, using my personal setup to encode video. I'm using MeGUI as my reference point in "how to use" AviSynth and LWLibavVideoSource, simply looking at the support files generated during an encode process.

    I've managed to learn a lot by looking at all these files, and I had to look up "color space" in order to start wrapping my brain around what things like "YUV411P8" mean.

    Is the main point of changing formats to get more color in less space? Or is there something else on a more grand scale I'm missing?

    If space saving is the main point, is there a chart that helps show the different formats, file size, along side video length, or something like that?

    I don't know what I don't know at this point, so any info or links would be appreciated.

    EDIT: Upon more searching, I see how "open ended" this question is. My main context is encoding / conversion / compression. Hope that helps narrow things down. Basically, does the color format and/or bit depth affect the file size or quality in any meaningful way?

    Is it more common to force this kind of format change when ripping from DVD? For quality enhancement?
    Last edited by TheArkyTekt; 27th Nov 2019 at 07:16.
  2. This should get you started thinking about size for video.

    Different YUV formats mean different chroma subsampling (411, 420, 422, 444) and different bit depths for the stored values (P8 is 8-bit, P10 is 10-bit).

    Bit depth has nothing to do with subsampling.

    Bitdepth
    8-bit values run from 0 to 255, 10-bit values from 0 to 1023. An 8-bit value maps to 10-bit by multiplying by 4 (shifting left by two bits), so the 8-bit value 16 becomes 64 in 10-bit. As you can see, 10-bit can distinguish four times as many values, so it is more accurate; 8-bit has a quarter as many possible values.
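    The scaling above can be sketched in a few lines of Python (just illustrative arithmetic, not any particular tool's API):

```python
# Sketch of the 8-bit -> 10-bit scaling described above.
# A 10-bit code is the 8-bit code shifted left by 2 bits (multiplied by 4).

def to_10bit(value8):
    """Scale an 8-bit code value (0-255) to its 10-bit equivalent (0-1020)."""
    return value8 << 2  # same as value8 * 4

print(to_10bit(16))     # 8-bit 16 -> 10-bit 64
print(2 ** 8, 2 ** 10)  # 256 vs 1024 possible codes: 10-bit has 4x as many
```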

    subsampling
    Before you start to "decode" YUV format names and subsampling, the best way is to visualize the uncompressed planes for a particular subsampling. It's surprising how rarely anyone does this.

    YUV is planar video (like planar RGB): it has three planes, Y, U and V. A plane is just an x-by-y array of values.
    To simplify, imagine one VIDEO FRAME of 4x4 pixels rather than a 1920x1080 frame. First with no subsampling, so YUV444P8:
    Y plane (luma, i.e. brightness): the top left renders as dark pixels (whatever color is stored in U and V), getting lighter toward the middle of the plane and darker again toward the bottom right corner:
    16, 32, 64, 128
    32, 64, 128, 235
    64, 128, 235, 64
    128, 235, 64, 32
    U plane: all values sit right in the middle of the 0-255 range, meaning there is no color. (Compare with the old analog convention, where values ran from -1 to 1, so 0 was no color):
    128,128,128,128
    128,128,128,128
    128,128,128,128
    128,128,128,128
    V plane (all values neutral, again meaning there is no color):
    128,128,128,128
    128,128,128,128
    128,128,128,128
    128,128,128,128

    So this 4x4 video frame would be black & white.
    If the video runs at 25 frames per second, 25 such frames are stored one after another for each second.
    If you checked some RGB video, all three planes would likewise be at full resolution, same as this YUV444, but the values would mean something else. For example, the YUV value (16,128,128) for that top-left pixel would have the RGB equivalent (16,16,16), or more precisely (0,0,0), because RGB is usually full range (0 to 255) while YUV is limited range (16 to 235 for Y, 16 to 240 for U and V).

    If that 4x4 pixel video frame was YUV422P8:
    Y plane (luma), same as YUV444P8, and the same for any subsampling: luma resolution is always full:
    16, 32, 64, 128
    32, 64, 128, 235
    64, 128, 235, 64
    128, 235, 64, 32
    U plane: all values neutral; half resolution horizontally, full resolution vertically:
    128,128
    128,128
    128,128
    128,128
    V plane: all values neutral; half resolution horizontally, full resolution vertically:
    128,128
    128,128
    128,128
    128,128

    so each color value in the U plane is stored once and shared by two horizontally neighbouring pixels

    If that 4x4 pixel video frame was YUV420P8:
    Y plane (luma):
    16, 32, 64, 128
    32, 64, 128, 235
    64, 128, 235, 64
    128, 235, 64, 32
    U plane: all values neutral; half resolution both horizontally and vertically:
    128,128
    128,128

    V plane: all values neutral; half resolution both horizontally and vertically:
    128,128
    128,128

    So each color value in the U and V planes is stored once and shared by two neighbouring pixels and the two pixels beneath them.
    This is a simplification: there is also a thing called chroma location (exactly where the shared chroma sample sits relative to the luma samples), but let's leave that aside for simplicity.
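    To see what the different subsamplings mean for storage, here is a small Python sketch that counts bytes per uncompressed 8-bit frame for each scheme (plain arithmetic, assuming even frame dimensions):

```python
# Bytes per uncompressed 8-bit frame for each subsampling scheme.
# Luma is always full resolution; only the chroma planes shrink.

def frame_size(width, height, subsampling):
    luma = width * height
    if subsampling == "444":
        chroma = width * height                 # full-resolution U and V
    elif subsampling == "422":
        chroma = (width // 2) * height          # half horizontal resolution
    elif subsampling == "420":
        chroma = (width // 2) * (height // 2)   # half in both directions
    else:
        raise ValueError(subsampling)
    return luma + 2 * chroma  # one U plane + one V plane

for s in ("444", "422", "420"):
    print(s, frame_size(1920, 1080, s))
# 420 needs half the bytes of 444 for the same frame
```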

    YUV420P8 is the usual delivery format. If the codec is H.264, 8-bit is the norm (that P8 stands for 8-bit); with HEVC, 10-bit is common. Again, 8-bit or 10-bit is the bit depth of the values and has nothing to do with subsampling.

    Filters can work in whatever format. More subsampling means the video is smaller for storage; less subsampling means it is more accurate (if it was captured that way). But beware of changing subsampling: upsampling and then downsampling again is not good either, because the values get changed and you drift further from the original video. Upsampling alone is not that bad, but going back again starts to blur the original.
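    The round-trip loss can be demonstrated on a single chroma row. This is a toy Python sketch using the crudest possible filters (pair averaging down, sample repetition up; real resamplers are smarter, but the information loss is the same in kind):

```python
# Toy demonstration: 4:4:4 -> 4:2:2 -> 4:4:4 round trip on one chroma row.

def down(row):
    """Halve horizontal resolution by averaging each pair of samples."""
    return [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]

def up(row):
    """Restore full width by repeating each sample (nearest neighbour)."""
    return [v for v in row for _ in range(2)]

original = [100, 140, 120, 200]
roundtrip = up(down(original))
print(roundtrip)  # [120, 120, 160, 160] -- the original detail is gone
```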

    More bit depth also means more accuracy while filtering, and dithering can be used while upsampling. Remember that the 8-bit value 16 equals 64 in 10-bit? The plain 10-bit equivalent of 8-bit 17 is 68, so with dithering the output could be chosen randomly from the range between 64 and 68. That gets rid of the steady, staircase pattern and introduces a little noise into the 10-bit video. It increases the bitrate somewhat, because more distinct values are created, but the encode looks smoother and free of artifacts in color gradients: the encoder does not create banding (blobs of flat color within a gradient).
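    A very crude random-dither sketch in Python (illustrative only, and simplified: real dithering algorithms use error diffusion or shaped noise rather than plain uniform noise; here the noise just fills the 4-code gap that plain scaling leaves empty):

```python
import random

# Crude random dither for 8-bit -> 10-bit conversion (toy example).
# Plain scaling maps x to 4*x, leaving three unused 10-bit codes between
# consecutive 8-bit values; random noise spreads samples across that gap.

def to_10bit_dithered(value8):
    return min(value8 * 4 + random.randint(0, 3), 1023)

samples = [to_10bit_dithered(17) for _ in range(1000)]
print(min(samples), max(samples))  # spread over 68..71 rather than all on 68
```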

    Also worth mentioning: apart from whole-number (integer) 8-bit, 10-bit or even 16-bit values, there are floating point (32-bit in memory) and half-precision floating point (16-bit in memory) values. These are used for calculations; some filters process video in floating point. Using precise values avoids rounding again and again, much like high-bit-depth integer values do.
    For 32-bit floating point: the Y value runs from 0.0 to 1.0, and U and V values from -0.5 to 0.5.

    Comparison of the same RGB and YUV pixel value at different bit depths, in integer and floating point form. The RGB 8-bit pixel value is (255,0,0), which is basically a RED pixel:
    Code:
    RGB 8bit value r:255, g:0, b:0 - red color,
    RGB 10bit: r:1023, g:0, b:0
    RGB 16bit: r:65535, g:0, b:0
    RGB32bit  floating point: r:1.0, g:0.0, b:0.0    (all plane values are from 0.0 to 1.0)
    YUV 8bit: y:63, u:102, v:240
   full-range 255 is out of range for limited YUV, so full-to-limited conversion is applied: the max value for chroma is 240 instead of 255, and 235 for luma
   likewise at the other end, the minimum value is 16 for YUV, not 0
    YUV 10bit: y:250, u:409, v:960
    YUV floating point 32bit: y:0.2125999927520752, u:-0.11457210779190063, v:0.5 (y plane 0 to 1.0, U and V values from -0.5 to 0.5)
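    The numbers in that table can be reproduced with a short Python sketch, assuming BT.709 matrix coefficients and limited-range quantisation (a different matrix such as BT.601 would give different numbers):

```python
# Reproduce the table's YUV values from RGB red (1.0, 0.0, 0.0),
# assuming the BT.709 matrix and limited-range quantisation.
r, g, b = 1.0, 0.0, 0.0

y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # luma, 0.0 to 1.0
u = (b - y) / 1.8556                      # Cb, -0.5 to 0.5
v = (r - y) / 1.5748                      # Cr, -0.5 to 0.5

# Limited-range 8-bit: luma spans 16-235, chroma 16-240 centred on 128
y8, u8, v8 = round(16 + 219 * y), round(128 + 224 * u), round(128 + 224 * v)

# Limited-range 10-bit: the same ranges multiplied by 4
y10, u10, v10 = round(64 + 876 * y), round(512 + 896 * u), round(512 + 896 * v)

print(y8, u8, v8)     # 63 102 240
print(y10, u10, v10)  # 250 409 960
```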
    Last edited by _Al_; 27th Nov 2019 at 22:59.
  3. Wow @_Al_, I cannot thank you enough! Didn't expect that much of a response!

    Thanks for all the info. You gave me a lot of good info to look through to understand these types of settings/parameters passed during the encoding process.

    Thanks again
    Last edited by TheArkyTekt; 27th Nov 2019 at 12:31.
  4. There's a list of the Avisynth supported color formats here, and a description of what they mean: http://avisynth.nl/index.php/Convert

    I assume MeGUI is ensuring the video is decoded as 8 bit, while keeping the original color format where possible. DVD video is 8 bit anyway. Was "YUV411P8" an example from indexing a DVD? Because that seems a bit odd. In fact I don't think MeGUI adds the format argument for standard 8 bit sources, but the norm for conversion would be "YUV420P8" (effectively YV12). If you're using the x264 encoder and you want the video to be playable on something that isn't a PC, it needs to be 8 bit. For 10 bit x265 encoding, you can manually remove the format argument from the script, or change it to "YUV420P10" for 10 bit sources; to actually encode as 10 bit you also have to configure the encoder correctly. For x264 there's a checkbox to enable 10 bit encoding; for x265 there's not, so to encode as 10 bit with x265 you need to add the following to the custom command line section in the encoder configuration:

    --profile main10

    Keep in mind that many Avisynth plugins only support 8 bit video, and because player support for 10 bit h264 will probably never happen, I assume MeGUI is "playing it safe" and automatically converting high bit depth sources to 8 bit, or rather it tells Lsmash to decode them as 8 bit.

    You're possibly more likely to see color banding in an 8 bit encode than a 10 bit encode, but there are dithering plugins to help prevent it. I add GradFun3() to the end of most scripts. It's a function of the DitherTools plugin: it converts 8 bit video to 16 bit, then dithers it back to 8 bit. F3KDB() is a similar plugin.


