Apologies if this question isn't appropriate for this forum, but it seems like the right place to start.
I understand that 10 bit color means that each of the RGB channels has 1024 different possible values, leading to over a billion possible colors.
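For reference, the arithmetic behind those numbers:

```python
# Distinct levels per channel and total RGB colors at each bit depth.
for bits in (8, 10):
    levels = 2 ** bits        # values one R, G or B channel can take
    total = levels ** 3       # all R/G/B combinations
    print(f"{bits}-bit: {levels} levels/channel, {total:,} colors")
# 8-bit: 256 levels/channel, 16,777,216 colors
# 10-bit: 1024 levels/channel, 1,073,741,824 colors (over a billion)
```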
I also understand that in order to achieve true 10 bit color (in the context of playing video files on a computer), you need:
a video file that is encoded in 10 bit color
software that supports this
a video card capable of 10 bit processing (and drivers that support/unlock this)
a video cable that supports this (e.g. DisplayPort)
a 10 bit capable display
I've also read that even if you are missing the hardware requirements for 10-bit color, you can achieve a benefit by playing 10-bit files on an 8 bit display, due to "higher internal precision" which compensates for/reduces the noise involved in decoding. This means that any banding that would have been introduced due to this noise would be reduced. I've also heard that there are file size benefits.
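As an illustration of that "higher internal precision" point, here is a minimal sketch. The `ramp10` gradient and the simple random dither are my own assumptions for demonstration; real decoders use more sophisticated dithering. It shows how a 10-bit intermediate plus dither breaks up the flat steps that plain truncation to 8 bits produces:

```python
import random

random.seed(0)
ramp10 = list(range(1024))            # a smooth 10-bit gradient

# Plain truncation: every 4 adjacent 10-bit levels collapse into one
# flat 8-bit step, which is what shows up as banding.
truncated = [v >> 2 for v in ramp10]
print(truncated[100:108])             # -> [25, 25, 25, 25, 26, 26, 26, 26]

# Dither before rounding: individual pixels jitter by one level, but the
# local average still tracks the original ramp instead of stair-stepping.
dithered = [min(255, max(0, round(v / 4 + random.uniform(-0.5, 0.5))))
            for v in ramp10]
print(sum(dithered[400:416]) / 16)    # close to the true ramp average (101.875)
```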
I have a few questions:
I'm assuming that even if you set everything up correctly and play a Hi10P file, you will still not get true 10-bit color unless you have the proper hardware setup. In other words, while you may reduce banding caused by noise, you will never be able to eliminate the banding that is due to the limitations of 8 bit (e.g. a shallow gradient spanning the entire screen will show banding simply because there are not enough distinct levels of that shade to appear seamless to the eye). Is this correct?
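The shallow-gradient case can be put in rough numbers. The 1920-pixel width and the 10%-of-full-scale brightness range here are made-up illustration values:

```python
# Made-up illustration values: a gradient spanning 10% of full brightness
# stretched across a 1920-pixel-wide screen.
width_px = 1920
range_frac = 0.10

for bits in (8, 10):
    steps = int((2 ** bits) * range_frac)   # distinct levels available
    band_px = width_px / steps              # width of each flat band
    print(f"{bits}-bit: {steps} steps, each band ~{band_px:.0f} px wide")
# 8-bit: 25 steps, each band ~77 px wide -> clearly visible banding
# 10-bit: 102 steps, each band ~19 px wide -> much closer to seamless
```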
I have a Sony GDM-FW900 - it's a high-end Trinitron CRT display. From what I understand, since CRTs are analogue, they are in principle capable of high color bit depths, since all they need to do is change the voltage on the guns by a slight amount. But I've also heard that higher-end CRTs may actually only be capable of 8-bit color, since they employ constraints in the processing pipeline to ensure accurate responses (for example, the voltage regulator may have circuitry that keeps the voltages to 256 distinct levels for each gun). Does anyone know whether Trinitrons can output 10-bit color?
I have an Nvidia GTX 660, and it has DisplayPort on it. I've heard that under Linux at least, one can obtain drivers that unlock the 10-bit capabilities of Nvidia cards (under Windows you apparently need a Quadro to do this). Given that I have a CRT, would I still need to use DisplayPort? Currently, I use DVI-BNC.
thanks for reading if you got here!
Thread: question about 10 bit color
You are mixing up 10-bit vs. 8-bit calculation precision,
and 4:2:0 vs. 4:2:2 vs. 4:4:4 chroma sampling.
a. the calculation precision is what causes banding (too little precision) or helps avoid it (more precision)
b. 10-bit precision is what is not supported by hardware decoders
c. 10-bit precision can be used with each of the YUV chroma samplings 4:2:0/4:2:2/4:4:4
d. 10-bit precision does not need a special display to help
-> only if you want 10-bit color on screen do you need a special display (higher chroma sampling is meant to avoid chroma sub-sampling artifacts)
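To make the independence of the two axes concrete, here is a sketch of the raw size of one uncompressed 1920x1080 YUV frame for each combination, assuming planar storage with 10-bit samples padded to 2 bytes (as most software does):

```python
# Bit depth and chroma sampling are independent axes: 4:2:0 stores
# quarter-size U and V planes, 4:2:2 half-size, 4:4:4 full-size.
W, H = 1920, 1080

def frame_bytes(bits, chroma):
    chroma_frac = {"4:2:0": 0.25, "4:2:2": 0.5, "4:4:4": 1.0}[chroma]
    samples = W * H * (1 + 2 * chroma_frac)   # one Y plane + two chroma planes
    bytes_per_sample = 1 if bits == 8 else 2  # 10-bit padded to 2 bytes
    return int(samples * bytes_per_sample)

for bits in (8, 10):
    for chroma in ("4:2:0", "4:2:2", "4:4:4"):
        print(bits, chroma, frame_bytes(bits, chroma))
```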
so what's the relevant issue when it comes to producing 1024*1024*1024 possible colors: 10 bit, or chroma subsampling?
I'm not really familiar with chroma subsampling, but I always understood 10 bit color as meaning 1024 possible levels per channel.
If your goal is to avoid banding on smooth color gradients, you need higher calculation precision.
If your source is 4:4:4 or 4:2:2 and you want to keep the color representation as correct as possible, keep the higher chroma sampling.
If your source is 4:2:0, which is normally the case unless you do a screen capture or record a video game, up-sampling the chroma (to 4:2:2/4:4:4) won't make the image look better.
-> personally I use 10-bit coding precision whenever I do not care about hardware support (since I encode for software-only playback; the decoder still needs to support 10-bit coding precision)
Personally I never use higher chroma sampling (unless my source is 4:2:2 or 4:4:4 and I create intermediate files), since the decoders I use do a good job avoiding sub-sampling problems. For more info about chroma sub-sampling, see: https://en.wikipedia.org/wiki/Chroma_subsampling
Most common consumer distribution formats will be 8-bit, subsampled 4:2:0 (e.g. Blu-ray, DVD, Flash). You can probably find many anime (e.g. fansubs) encoded with Hi10P, but they are likely derived from 8-bit sources.
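A quick sketch of why a Hi10P encode of an 8-bit source cannot add detail: the usual 8-to-10-bit expansion still produces only 256 distinct codes, so the original quantization steps survive (the shift-based conversion used here is just one common variant):

```python
src8 = list(range(256))             # every level an 8-bit source can hold
padded10 = [v << 2 for v in src8]   # simple shift-based 8->10 bit expansion

print(len(set(padded10)))           # -> 256 distinct codes, not 1024
print(padded10[:4])                 # -> [0, 4, 8, 12]: the gaps remain
```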
So you can look at some Blender projects (e.g. Tears of Steel, Sintel); the full features are available as 16-bit PNG or TIFF sequences. Or look on the RED forums for some REDCODE footage. Many pro cameras acquire in 10 bits or more, usually 4:2:2 or better (e.g. Panasonic's AVC-Intra is 10-bit 4:2:2); in fact 10-bit is pretty much standard for pro acquisition. So if you encode properly from one of those 10-bit-or-higher sources, you can achieve what you want.
thanks a lot for the information. Any insights into the CRT/displayport side of things?
Sorry, no clue. I haven't had a CRT monitor for quite some time...
np, appreciate all the help
It's a high-end Trinitron computer monitor, so it may well do some digital processing. Maybe there's a way to hack it, though, through something like WinDAS.
I doubt a CRT based computer monitor would do any digital processing of the incoming signal. It's likely all analog. A high end TV might be different.
very cool, there may be hope for 10 bit color for me then
What input do you have for which 10-bit color would make sense?
I'd like to experiment with graphics applications that support 10 bit color. I'd just like to be able to visually experience it. Also, if there are 10 bit video files (maybe anime or other), that have a 10 bit color source, then it'd be nice to experience that too.
But I love me some shallow gradients