VideoHelp Forum
  1. Hi there

    I have a 4K webstream that is SDR; it is in 10 bit x265. I want to downscale it to 1080p for my 8 bit SDR display, but I'd prefer to preserve the 10 bit color depth in x264.

    1. How do I know if the 4K SDR file is indeed using 10 bit color or if it is just encoded in 10 bit?

    2. Which x264 settings should I use to preserve the color information so I can get a "technically" better picture than its 1080p BD counterpart, kind of like super sampling?

    I'd appreciate your feedback, thanks!
  2. Originally Posted by johnny27depp View Post
    Hi there

    I have a 4K webstream that is SDR; it is in 10 bit x265. I want to downscale it to 1080p for my 8 bit SDR display, but I'd prefer to preserve the 10 bit color depth in x264.

    1. How do I know if the 4K SDR file is indeed using 10 bit color or if it is just encoded in 10 bit?
    You can't determine this definitively, and there are various ways to conceal the truth if someone wanted to.

    But in general, native 10bit footage should have more unique colors when examined on a 10bit display, or in 10bit RGB.

    A video made by a simple 8bit to 10bit bit shift (the most common way of encoding 8bit content as 10bit) will have the same number of unique colors if you go back down to 8bit YUV, then to 10bit RGB using nearest neighbor for the chroma scaling (so as not to generate aliasing). But if you apply this procedure to native 10bit YUV video, you will see a reduction in the number of unique colors. You can analyze a 10bit RGB DPX image taken from the video using a high bit depth build of ImageMagick, for example, with -identify.

    Another way is to check the diff, using vapoursynth for example. Essentially, "reversing" the bit shift and comparing back at 10bit will give the same result for 8bit footage bitshifted to 10bit, but a different result for native 10bit, because you're reducing the number of actual values.
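
    The "reverse the bit shift" idea can be sketched in plain Python on synthetic code values. This is only an illustration of the principle; a real check would run on actual decoded frames (e.g. through vapoursynth), and the helper names here are made up:

```python
# Sketch of the "reverse the bit shift" test on synthetic pixel values.
# In practice you'd operate on decoded 10bit frames; here we just show
# why the round trip is lossless for bit-shifted 8bit sources but lossy
# for native 10bit sources.

def to_8bit(v10):
    """Reduce a 10bit value to 8 bits by truncating the low 2 bits."""
    return v10 >> 2

def to_10bit(v8):
    """Promote an 8bit value to 10 bits by a plain bit shift (v * 4)."""
    return v8 << 2

# Case 1: footage that was originally 8bit, then bit-shifted to 10bit.
shifted = [to_10bit(v) for v in range(256)]           # only multiples of 4
round_trip = [to_10bit(to_8bit(v)) for v in shifted]
print(shifted == round_trip)                          # True: nothing lost

# Case 2: native 10bit footage uses the in-between code values too.
native = list(range(1024))                            # all 10bit codes
round_trip = [to_10bit(to_8bit(v)) for v in native]
print(native == round_trip)                           # False: values culled
print(len(set(round_trip)))                           # 256 unique values left
```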

    Other signs you can look for are dithering patterns. Some conversions go to a higher bit depth through an intermediate stage first (such as 16bit, perhaps for higher precision filtering or intermediate processing), and then downconvert to 10bit. This sometimes leaves indicative signs, "fingerprints" of sorts, depending on what algorithm is used.

    Another is analysis of 10bit gradients. Native 10bit footage will have incremental gradations, but 8bit-to-10bit conversions will sometimes have "jumps" or missing gaps. It's easier to see on a 10bit display, and you won't necessarily see it on an 8bit display, but you can still analyze the pixel values. It's not always a sensitive test, because rounding errors, lossy encoding, and sometimes dithering and grain can obscure the issue.
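
    One way to look for those "jumps" is to check the spacing between the distinct code values that actually occur along a smooth gradient. A rough plain-Python sketch, on made-up sample values rather than a real decoded frame:

```python
# Rough sketch: detect "jumps" in a gradient by looking at the spacing of
# the distinct 10bit code values that actually occur. The sample values
# here are synthetic; in practice you'd read them from a decoded frame.

def code_value_steps(samples):
    """Return the gaps between consecutive distinct code values used."""
    codes = sorted(set(samples))
    return [b - a for a, b in zip(codes, codes[1:])]

# A native 10bit gradient steps through adjacent code values...
native_gradient = list(range(400, 420))
print(code_value_steps(native_gradient))      # all 1s: incremental gradations

# ...while an 8bit source bit-shifted to 10bit lands only on every fourth
# code value, leaving gaps of 4 (the in-between codes are "missing").
shifted_gradient = [v << 2 for v in range(100, 105)]
print(code_value_steps(shifted_gradient))     # all 4s: jumps of 4
```

    In real footage the gaps won't be this clean, for the reasons above (rounding, lossy encoding, dithering), so treat this as a clue rather than proof.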

    There are some commercial and specialized in-house/proprietary tools that can analyze streams too, but they are usually not available to the average consumer, or are cost prohibitive.


    2. Which x264 settings should I use to preserve the color information so I can get a "technically" better picture than its 1080p BD counterpart, kind of like super sampling?
    x264 is an encoder; it doesn't have many processing tools such as supersampling or resizing (some builds have filtering patches, but those are custom distributions). It depends on what the UHD footage is like and what needs to be done. Normally you'd use processing tools such as avisynth, vapoursynth, or ffmpeg. If it's good quality, then just a downscale is usually enough. You'd choose an algorithm appropriate for the footage. If you go with something too sharp, you will create aliasing and ringing artifacts, but a sharper algorithm might be appropriate for a "soft" UHD/4K source.

    If the UHD was native 10bit, then keep a native 10bit tool chain, otherwise you lose a bunch of information; e.g. if the resize was an 8bit filter, you'd be "culling" a lot of the data. Handbrake used to do this IIRC, not sure if it still does.

    Also, if the UHD version used Rec2020 SDR, you'd have to take the conversion to Rec709 into account as well when going to 1080.

    But for the x264 settings you'd at least want --input-depth 10 --output-depth 10 to keep 10bit.
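
    For reference, a complete command line might look something like this. The two depth flags are the essential part; the profile, preset, CRF, and filenames here are illustrative assumptions, not recommendations:

```shell
# Hypothetical example: feed a 10bit 1080p stream (e.g. a Y4M pipe from an
# avisynth/vapoursynth script) into x264 while keeping 10bit throughout.
# Only --input-depth/--output-depth are the point; --profile high10,
# --preset, --crf, and the filenames are placeholder choices.
x264 --input-depth 10 --output-depth 10 --profile high10 \
     --preset slow --crf 18 \
     -o output_1080p.264 input_1080p.y4m
```

    Note that with Y4M input the bit depth is usually signalled by the stream header anyway; --input-depth matters mainly for raw input.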
  3. Originally Posted by poisondeathray View Post

    Another way is to check the diff, using vapoursynth for example. Essentially, "reversing" the bit shift and comparing back at 10bit will give the same result for 8bit footage bitshifted to 10bit, but a different result for native 10bit, because you're reducing the number of actual values.
    Okay so I tried the ConvertBits() function on a specific frame, exported 24-bit bitmap images and opened them up in FastStone Image Viewer 7.4 to count the unique colors:

    4K - 414548 colors (Original)

    4K.ConvertBits(8) - 108785 colors

    4K.ConvertBits(8).ConvertBits(10) - 317768 colors

    4K.ConvertBits(16) - 414404 colors

    So I can safely assume there is indeed 10bit color information? Or is it not that simple?


    Also, if the UHD version used Rec2020 SDR, you'd have to take the conversion to Rec709 into account as well when going to 1080.

    But for the x264 settings you'd at least want --input-depth 10 --output-depth 10 to keep 10bit
    The 4K WEB-DL is actually Rec709, so I don't need to worry about that conversion. The main thing I want to take advantage of is the color information, so would there be a benefit in using 4:4:4 chroma when encoding, since the source has double the resolution/color information (2160p) required by the output (1080p)?
  4. Originally Posted by johnny27depp View Post
    Originally Posted by poisondeathray View Post

    Another way is to check the diff, using vapoursynth for example. Essentially, "reversing" the bit shift and comparing back at 10bit will give the same result for 8bit footage bitshifted to 10bit, but a different result for native 10bit, because you're reducing the number of actual values.
    Okay so I tried the ConvertBits() function on a specific frame, exported 24-bit bitmap images and opened them up in FastStone Image Viewer 7.4 to count the unique colors:

    4K - 414548 colors (Original)

    4K.ConvertBits(8) - 108785 colors

    4K.ConvertBits(8).ConvertBits(10) - 317768 colors

    4K.ConvertBits(16) - 414404 colors

    So I can safely assume there is indeed 10bit color information? Or is it not that simple?
    Not that simple, because you need to compare at 10bit RGB (0-1023 values per channel); a bitmap is 8bit RGB (0-255 values per channel).

    The chroma resizing step used to generate the RGB image can produce false colors not in the source, due to aliasing. The default is bicubic, and it definitely does this because it samples a 4x4 grid. "Nearest neighbor" or point resize should be used for the chroma upsampling. Also, Avisynth's native scaler has a bug where it shifts the chroma, so it's not a true nearest neighbor resize. You'd have to compensate for that, or use avsresize (z_convertformat) in avisynth instead.
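
    As a plain-Python illustration of why an interpolating upsampler inflates the color count (synthetic 1-D values, with simple linear interpolation standing in for any interpolating kernel such as bicubic):

```python
# Illustration (1-D, synthetic values): why point/nearest-neighbor should
# be used for chroma upsampling when *counting* colors. An interpolating
# resizer (linear here, as a stand-in for bicubic etc.) creates values
# that were never present in the source.

def upsample_nearest(row):
    """2x upsample by repeating each sample: no new values are created."""
    return [v for v in row for _ in (0, 1)]

def upsample_linear(row):
    """2x upsample with linear interpolation: invents in-between values."""
    out = []
    for a, b in zip(row, row[1:]):
        out.extend([a, (a + b) // 2])
    out.append(row[-1])
    return out

chroma_row = [100, 120, 140]
print(set(upsample_nearest(chroma_row)) <= set(chroma_row))  # True: same values
print(sorted(set(upsample_linear(chroma_row))))              # 110, 130 are new
```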

    I'll post an example tonight or this weekend if I have time

    If you do the procedure correctly, a negative result (showing no diff) definitively tells you it's been bit shifted, but a positive result (showing a diff) doesn't necessarily indicate anything - there are many reasons why something could be different (additional steps, filters, or a different method of 8bit to 10bit conversion can all cause differences). So it's not a great test.

    The main thing I want to take advantage of is the color information so would there be benefit in using 4:4:4 color space when encoding since the source has double the resolution/color information (2160p) required by the output (1080p)?
    Yes, you can take advantage of the UHD chroma resolution: at 4:2:0 the CbCr planes are 1920x1080, which gives you 4:4:4 at 1920x1080. But only if your intended target supports it. Most hardware players and devices do not. If you're using computer playback it should be OK.

    Depending on what the film/digital was shot on, the processing, and the lossy "web" compression - it might not actually have high quality 4:2:0 chroma at UHD. You can view the chroma planes individually in avisynth with UtoY or VtoY. But either way, the color resolution should be "better" than the BD if you keep 4:4:4, unless they really messed it up somewhere.
    Last edited by poisondeathray; 20th Sep 2019 at 08:43.
  5. Originally Posted by poisondeathray View Post
    The chroma resizing step used to generate the RGB image can produce false colors not in the source, due to aliasing. The default is bicubic, and it definitely does this because it samples a 4x4 grid. "Nearest neighbor" or point resize should be used for the chroma upsampling. Also, Avisynth's native scaler has a bug where it shifts the chroma, so it's not a true nearest neighbor resize. You'd have to compensate for that, or use avsresize (z_convertformat) in avisynth instead.

    I'll post an example tonight or this weekend if I have time
    Yes please, I'd really appreciate an example. And also any other bad practices I should avoid when processing in 10 bit. I am using the latest version of AviSynth+. So if I do any resizing, I can't just call Spline64Resize()? I have to use the z-prefixed resizers, i.e. z_Spline64Resize()?

    Originally Posted by poisondeathray View Post
    Yes, you can take advantage of the UHD chroma resolution: at 4:2:0 the CbCr planes are 1920x1080, which gives you 4:4:4 at 1920x1080. But only if your intended target supports it. Most hardware players and devices do not. If you're using computer playback it should be OK.

    Depending on what the film/digital was shot on, the processing, and the lossy "web" compression - it might not actually have high quality 4:2:0 chroma at UHD. You can view the chroma planes individually in avisynth with UtoY or VtoY. But either way, the color resolution should be "better" than the BD if you keep 4:4:4, unless they really messed it up somewhere.
    Okay sure, whether I do this or not depends on the results of the test. Thanks for all the explanations, I really appreciate them.
  6. Originally Posted by johnny27depp View Post

    And also any other bad practices I should avoid when processing in 10 bit. I am using the latest version of AviSynth+. So if I do any resizing, I can't just call Spline64Resize()? I have to use the z-prefixed resizers, i.e. z_Spline64Resize()?
    Internal Spline64 works OK at different bit depths, with no known issues (but it's not used for analysis, where you want nearest neighbor / point). However, it might be too "sharp" for downscaling - you can cause ringing and halos if the source was "sharp" to begin with. Also, it might cause a slight discrepancy where the Y' does not match the chroma: if the UHD 4:2:0 chroma was "soft", the colors might not quite match the lines at 1920x1080, because the Y' is scaled down and very sharp. You have to run some tests and look closely.
    Looking at it more closely - I'm wrong; I didn't take into account lossy encoding and the rounding errors introduced. You can't accurately detect the simple bitshift scenario with that simple check. You have to rely on other tests: pixel values / patterns, gradients, dithering clues.

    For 4:4:4 @ 1920x1080 (keeping the UHD 1920x1080 chroma), you have to shift the Y by src_left=-0.50 to match, when using avisynth resizers.
    See this thread, especially Gavino's post #4
    https://forum.doom9.org/showthread.php?t=170029
    Okay, I realize now that whether it is 8 or 10 bit, there is still double the chroma resolution compared to the BD.

    Thanks for that reference! And thank you so much for your time and detailed explanations. I really appreciate it!