Hi there
I have a 4K web stream that is SDR, encoded in 10bit x265. I want to downscale it to 1080p for my 8bit SDR display, but I'd prefer to preserve the 10bit color depth in x264.
1. How do I know if the 4K SDR file is actually using 10bit color, or if it is just 8bit content encoded at 10bit?
2. Which x264 settings should I use to preserve the color information so I can get a "technically" better picture than its 1080p BD counterpart, kind of like super sampling?
I'd appreciate your feedback, thanks!
You can't determine this definitively. And there are various ways to conceal the truth, if someone wanted to.
But in general, native 10bit footage should have more unique colors when measured on a 10bit display, or in 10bit RGB.
A video made by a simple 8bit-to-10bit bit shift (the most common way of encoding 8bit content at 10bit) will have the same number of unique colors if you go back down to 8bit YUV, then to 10bit RGB using nearest neighbor for the chroma scaling (so as not to generate aliasing). But if you run the same procedure on native 10bit YUV video, you will see a reduction in the number of unique colors. You can analyze a 10bit RGB DPX image taken from the video with a high-bit-depth build of ImageMagick, for example, using -identify.
Another way is to check the diff, using vapoursynth for example. Essentially, "reversing" the bit shift and comparing back at 10bit will produce no difference for 8bit footage bit-shifted to 10bit, but a difference for native 10bit footage, because the round trip reduces the number of actual values.
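The "reversing the bitshift" idea can be sketched with plain NumPy on synthetic sample values. This is an idealized illustration, not the vapoursynth procedure itself, and it ignores lossy encoding and rounding, which blur the result on real footage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for native 10bit samples: any of the 1024 code values can occur.
native10 = rng.integers(0, 1024, size=100_000, dtype=np.uint16)

# Stand-in for 8bit samples up-converted by a simple bit shift (x4 scaling):
# only multiples of 4 occur.
shifted10 = rng.integers(0, 256, size=100_000, dtype=np.uint16) << 2

def survives_roundtrip(v):
    """Drop to 8bit by a shift and shift back up; compare with the original."""
    return np.array_equal((v >> 2) << 2, v)

print(survives_roundtrip(shifted10))  # True  -> consistent with a bit shift
print(survives_roundtrip(native10))   # False -> real 10bit precision was lost
```

A no-difference result is consistent with a bit shift; a difference means real 10bit precision was present (or some other processing happened).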
Other signs you can look for are dithering patterns. Some conversions go to a higher bit depth first through an intermediate stage (such as 16bit, perhaps for higher-precision filtering or intermediate processing), and then downconvert to 10bit. This sometimes leaves indicative signs, "fingerprints" of sorts, depending on what algorithm was used.
Another is analysis of 10bit gradients. Native 10bit footage will have incremental gradations, but 8bit converted to 10bit will sometimes have "jumps" or missing gaps. It's easier to see on a 10bit display; you won't necessarily see it on an 8bit display, but you can still analyze pixel values, for example. It's not always a sensitive test, because rounding errors, lossy encoding, and sometimes dithering and grain can obscure the issue.
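The gradient "gaps" point can be shown with a toy NumPy sketch on two synthetic ramps (idealized, no dithering or encoding loss):

```python
import numpy as np

# A clean native 10bit ramp hits every code value in its range...
native_ramp = np.arange(0, 1024, dtype=np.uint16)

# ...while an 8bit ramp bit-shifted to 10bit only lands on multiples of 4.
shifted_ramp = np.arange(0, 256, dtype=np.uint16) << 2

def used_codes(v, lo, hi):
    """Fraction of the code values in [lo, hi) that actually occur in v."""
    return np.isin(np.arange(lo, hi), v).mean()

print(used_codes(native_ramp, 0, 1024))   # 1.0  -> no gaps
print(used_codes(shifted_ramp, 0, 1024))  # 0.25 -> 3 of every 4 codes missing
```

On real footage you'd look at a histogram of a smooth gradient region and check whether whole runs of 10bit codes are systematically absent.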
There are some commercial and specialized in-house/proprietary tools that can analyze streams too, but they are usually not available to the average consumer, or are cost-prohibitive.
2. Which x264 settings should I use to preserve the color information so I can get a "technically" better picture than its 1080p BD counterpart, kind of like super sampling?
If the UHD was native 10bit, then keep a native 10bit tool chain throughout; otherwise you lose a bunch of information. E.g. if the resize was done with an 8bit filter, you'd be "culling" a lot of the data. Handbrake used to do this, IIRC; not sure if it still does.
Also, if the UHD version used Rec2020 SDR, you'd have to take the conversion to Rec709 into account as well when going to 1080p.
But for the x264 settings, you'd at least want --input-depth 10 --output-depth 10 to keep 10bit.
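Put together, a 10bit x264 invocation might look like the sketch below. The filenames, preset, and CRF value are just placeholders; only the depth flags come from this thread:

```shell
# Hypothetical example: a 10bit-capable x264 build encoding 10bit y4m input.
# Adjust preset/CRF to taste; the key part is keeping 10bit in and out.
x264 --input-depth 10 --output-depth 10 --preset slow --crf 18 \
     -o output_1080p_10bit.264 input_1080p_10bit.y4m
```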
Okay so I tried the ConvertBits() function on a specific frame, exported 24-bit bitmap images and opened them up in FastStone Image Viewer 7.4 to count the unique colors:
4K - 414548 colors (Original)
4K.ConvertBits(8) - 108785 colors
4K.ConvertBits(8).ConvertBits(10) - 317768 colors
4K.ConvertBits(16) - 414404 colors
So I can safely assume there is indeed 10bit color information? Or is it not that simple?
Not that simple, because you need to compare at 10bit RGB (0-1023 values per channel). A bitmap is 8bit RGB (0-255 values per channel).
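Why counting at 8bit is misleading can be sketched with NumPy on synthetic values: a 10bit -> 8bit -> 10bit round trip collapses the unique-code count to at most 256, no matter how many distinct 10bit codes the source had (idealized, no dithering):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a native 10bit plane.
v10 = rng.integers(0, 1024, size=50_000, dtype=np.uint16)

# 10bit -> 8bit (with rounding) -> back up to 10bit by bit shift.
v8 = np.clip((v10.astype(np.int32) + 2) >> 2, 0, 255)
round_trip = (v8 << 2).astype(np.uint16)

print(len(np.unique(v10)))         # up to 1024 distinct codes
print(len(np.unique(round_trip)))  # at most 256 -- the round trip collapsed them
```

So any count taken from an 8bit export is already capped per channel, and differences between counts mostly reflect the conversion chain, not the source.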
The chroma upsampling step used to generate the RGB image can also generate false colors not present in the source, due to aliasing. The default is bicubic, which definitely does this because it samples a 4x4 grid. Nearest neighbor (point resize) should be used for the chroma upsampling instead. Also, Avisynth's native scaler has a bug where it shifts the chroma; it's not a true nearest-neighbor resize. You'd have to compensate for that, or use avsresize (z_convertformat) in avisynth instead.
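The aliasing point can be illustrated with a toy 1-D NumPy sketch (not the avisynth procedure itself; linear interpolation stands in for bicubic here, since both invent in-between values that nearest neighbor does not):

```python
import numpy as np

chroma = np.array([100.0, 200.0, 300.0, 400.0])  # toy 1-D chroma row

# Nearest-neighbor (point) upsample: samples are simply duplicated,
# so no values outside the source set can appear.
nn = np.repeat(chroma, 2)

# Interpolating upsample (linear, as a stand-in for bicubic):
# averages neighbors and creates values not present in the source.
pos = np.linspace(0, len(chroma) - 1, 2 * len(chroma))
lin = np.interp(pos, np.arange(len(chroma)), chroma)

print(set(nn) <= set(chroma))   # True  -> only source values appear
print(set(lin) <= set(chroma))  # False -> interpolation created new values
```

That is why point resize is the right choice for the chroma upsampling when you're counting unique colors: it can't inflate the count by inventing values.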
I'll post an example tonight or this weekend if I have time
If you do the procedure correctly, a negative test result (showing no diff) definitively tells you it's been bit shifted, but a positive result (showing a diff) doesn't necessarily indicate anything: additional steps, filters, or a different 8-to-10bit conversion method can all cause differences. So it's not a great test.
The main thing I want to take advantage of is the color information, so would there be a benefit in using 4:4:4 chroma when encoding, since the source (2160p) has double the resolution, and therefore double the chroma resolution, required by the 1080p output?
Depending on what the film/digital was shot on, the processing, and lossy "web" compression, it might not actually have high-quality 4:2:0 chroma at UHD. You can view the chroma planes individually in avisynth with UtoY or VtoY. But either way, the color resolution should be "better" than the BD if you keep 4:4:4, unless they really messed it up somewhere.
Yes please, I'd really appreciate an example, and also any other bad practices I should avoid when processing in 10 bit. I am using the latest version of AviSynth+. So if I do any resizing, I can't just call Spline64Resize()? I have to use the z-prefixed resizers, i.e. z_Spline64Resize()?
Okay sure, whether I do this or not depends on the results of the test. Thanks for all the explanations, I really appreciate them.
Internal Spline64 works OK at different bit depths, with no known issues (but it's not used for analysis when you want nearest neighbor / point). However, it might be too "sharp" for downscaling: it can cause ringing and halos if the source was "sharp" to begin with. Also, it might cause a slight discrepancy where the Y' does not match the chroma. If the UHD 4:2:0 chroma was "soft", the colors might not quite match the edges at 1920x1080, because the Y' is scaled down and very sharp. You have to run some tests and look closely.
Looking at it more closely, I'm wrong; I didn't take into account the rounding errors introduced by lossy encoding. You can't accurately determine the simple bitshift scenario with that simple check. You have to rely on the other tests: pixel values / patterns, gradients, dithering clues.
For the 4:4:4 @ 1920x1080 result, where you keep the UHD's native 1920x1080 chroma planes, you have to shift the Y' by src_left=-0.50 to match when using avisynth resizers.
See this thread, especially Gavino's post #4
https://forum.doom9.org/showthread.php?t=170029
Okay, I realize now that whether it is 8 or 10 bit, there is still double the chroma resolution compared to the BD.
Thanks for that reference! And thank you so much for your time and detailed explanations. I really appreciate it!