VideoHelp Forum
Results 1 to 25 of 25
  1. Member
    Join Date: Jul 2013
    Location: Toronto
    Hi all,

    Apologies if this question isn't appropriate for this forum, but it seems like the right place to start.

    I understand that 10-bit color means that each of the RGB channels has 1024 different possible values, leading to over a billion possible colors.


    I also understand that in order to achieve true 10-bit color (in the context of playing video files on a computer), you need:

    a video file that is encoded in 10-bit color
    software that supports this
    a video card capable of 10-bit processing (and drivers that support/unlock this)
    a video cable that supports this (e.g. DisplayPort)
    a 10-bit capable display

    I've also read that even if you are missing the hardware requirements for 10-bit color, you can still benefit from playing 10-bit files on an 8-bit display, due to "higher internal precision", which compensates for/reduces the noise involved in decoding. This means that any banding that would have been introduced by this noise is reduced. I've also heard that there are file size benefits.
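    To put numbers on the bit-depth claims above, here's a quick sanity check (plain Python, no extra libraries; the 1920-pixel ramp is just an illustrative assumption):

```python
# Quick sanity check on the bit-depth arithmetic.
def levels(bits):
    return 2 ** bits

print(levels(8))         # 256 levels per channel
print(levels(10))        # 1024 levels per channel
print(levels(10) ** 3)   # 1073741824 colors, i.e. over a billion

# Why a shallow full-screen gradient bands at 8 bits: across a
# 1920-pixel-wide ramp there are at most 256 distinct shades,
# so each shade forms a band 1920 / 256 = 7.5 pixels wide.
print(1920 / levels(8))    # 7.5 pixels per band
print(1920 / levels(10))   # 1.875 pixels per band, much harder to see
```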

    I have a few questions:

    1:

    I'm assuming that even if you set everything up correctly and play a Hi10P file, you will still not get true 10-bit color unless you have the proper hardware setup. In other words, while you may reduce banding due to noise, you will never be able to eliminate the banding that is due to the limitations of 8 bit (e.g. a shallow gradient spanning the entire screen will show banding simply because there are not enough distinct levels of that shade to appear seamless to the eye). Is this correct?


    2:

    I have a Sony GDM-FW900 - it's a high-end Trinitron CRT display. From what I understand, since CRTs are analogue, they are in principle capable of high color bit depths, since all they need to do is change the voltage on the guns by a slight amount. But I've also heard that higher-end CRTs may actually be capable of only 8-bit color, since they employ constraints in the processing pipeline to ensure accurate responses (for example, the voltage regulator may have circuitry that keeps the voltages to 256 distinct levels for each gun). Does anyone know whether Trinitrons can output 10-bit color?

    3:

    I have an Nvidia GTX 660, and it has DisplayPort on it. I've heard that under Linux at least, one can obtain drivers that unlock the 10-bit capabilities of Nvidia cards (under Windows you apparently need a Quadro to do this). Given that I have a CRT, would I still need to use DisplayPort? Currently, I use DVI-to-BNC.

    Thanks for reading if you got this far!
  2. You are mixing up 10-bit vs. 8-bit calculation precision
    with 4:2:0 vs. 4:2:2 vs. 4:4:4 chroma sampling.

    a. the precision of the calculation is what causes banding (when low) or helps avoid it (at 10 bit)
    b. 10-bit precision is what is not supported by hardware decoders
    c. 10-bit precision can be used with each of the YUV chroma samplings 4:2:0/4:2:2/4:4:4
    d. 10-bit precision does not need a special display to help
    -> you only need special hardware if you want 10-bit color on the display (higher chroma sampling is meant to avoid chroma sub-sampling artifacts)
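    To illustrate that bit depth and chroma sampling are independent choices, a little raw-storage arithmetic (a rough sketch; real codecs compress far below these figures):

```python
# Average bits per pixel for each chroma sampling at 8- and 10-bit.
# Counted per 2x2 pixel block: 4 luma (Y) samples plus Cb/Cr samples.
samples_per_2x2 = {
    "4:2:0": 4 + 1 + 1,   # chroma halved horizontally and vertically
    "4:2:2": 4 + 2 + 2,   # chroma halved horizontally only
    "4:4:4": 4 + 4 + 4,   # no chroma sub-sampling
}

for sampling, n in samples_per_2x2.items():
    for bits in (8, 10):
        print(f"{sampling} at {bits}-bit: {n * bits / 4} bits/pixel")
```

    Any depth pairs with any sampling; going from 8-bit to 10-bit raises raw size by the same 25% in every row.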
  3. Member
    Join Date: Jul 2013
    Location: Toronto
    so what's the relevant issue when it comes to producing 1024*1024*1024 possible colors: 10 bit, or chroma subsampling?

    I'm not really familiar with chroma subsampling, but I always understood 10 bit color as meaning 1024 possible levels per channel.
  4. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by spacediver View Post
    so what's the relevant issue when it comes to producing 1024*1024*1024 possible colors: 10 bit, or chroma subsampling?
    The relevant issue is the 10-bit color depth, i.e. the bit depth.

    But a file encoded with Hi10P that was derived from an 8-bit source will not give you all the possible colors.
  5. If your goal is to avoid banding on smooth color gradients, you need a higher calculation precision.
    If your source is 4:4:4 or 4:2:2 and you want to keep the color representation as correct as possible, use more bits for the chroma sampling.
    If your source is 4:2:0, which is normally the case unless you do a screen capture or record a video game, up-sampling the chroma won't make the image look better.
    -> personally I use 10-bit coding precision whenever I do not care about hardware support (since I encode for software-only playback; the decoder still needs to support 10-bit coding precision)

    Personally I never use higher chroma sampling (unless my source is 4:2:2 or 4:4:4 and I'm creating intermediate files), since the decoders I use do a good job avoiding sub-sampling problems. For more info about chroma sub-sampling, see: https://en.wikipedia.org/wiki/Chroma_subsampling
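    The banding-from-low-precision effect is easy to demonstrate: run an 8-bit gradient through a darken-then-brighten step with 8-bit intermediates versus float intermediates (a toy sketch, not any real decoder's pipeline):

```python
# An 8-bit gradient darkened to 25% and restored to full brightness.
ramp = list(range(256))

# Rounding to 8-bit integers after each step collapses levels:
dark8 = [round(v * 0.25) for v in ramp]
back8 = [min(255, round(v * 4)) for v in dark8]

# The same operations carried in float until the final rounding:
backf = [min(255, round(v * 0.25 * 4)) for v in ramp]

print(len(set(back8)))   # 65 distinct levels survive -> visible banding
print(len(set(backf)))   # all 256 levels survive
```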
  6. Member
    Join Date: Jul 2013
    Location: Toronto
    Originally Posted by poisondeathray View Post
    The relevant issue is 10bit color depth or bit depth

    But a file encoded with Hi10P derived from an 8 bit source will not give you all the possible colors
    Yep, this makes sense. Are there any Hi10P files out there that have a 10-bit source?
  7. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by spacediver View Post
    Are there any HI10p files out there that have a 10 bit source?

    Most common consumer distribution formats are 8-bit and subsampled 4:2:0 (e.g. Blu-ray, DVD, Flash). You can probably find many anime releases (e.g. fansubs) encoded with Hi10P, but they are likely derived from 8-bit sources.

    So you can look at some Blender open-movie projects (e.g. Tears of Steel, Sintel); the full features are available as 16-bit PNG or TIFF sequences. Or look on the RED forums for some REDCODE footage. Many pro cameras acquire in 10-bit or more, usually 4:2:2 or better (e.g. Panasonic's AVC-Intra is 10-bit 4:2:2); in fact 10-bit is pretty much standard for pro-format acquisition. So if you encode properly from one of those 10-bit-or-higher sources, you can achieve what you want.
  8. Member
    Join Date: Jul 2013
    Location: Toronto
    Thanks a lot for the information. Any insights into the CRT/DisplayPort side of things?
  9. Sorry, no clue; I haven't had a CRT monitor for quite some time...
  10. Member
    Join Date: Jul 2013
    Location: Toronto
    np, appreciate all the help
  11. Originally Posted by spacediver View Post
    Any insights into the CRT/displayport side of things?
    I suspect it's a matter of whether or not the display does any digital processing. If so, it's probably 8-bit, so you won't get 10-bit precision. If it's pure analog then you'll get the full 10-bit precision.
  12. Member
    Join Date: Jul 2013
    Location: Toronto
    It's a high-end Trinitron computer monitor, so it may well do some digital processing. Maybe there's a way to hack it, though, through something like WinDAS.
  13. I doubt a CRT-based computer monitor would do any digital processing of the incoming signal. It's likely all analog. A high-end TV might be different.
  14. vanished El Heggunte's Avatar
    Join Date: Jun 2009
    Location: Misplaced Childhood
    Originally Posted by spacediver View Post
    it's a high end trinitron computer monitor, so it may well do some digital processing.
    No way. The "digital" parts in an analog device just replace the olde and goode *manual* analog controls. Even the ancient VCRs were "digitally" controlled, right?
  15. Member
    Join Date: Jul 2013
    Location: Toronto
    Very cool, there may be hope for 10-bit color for me then.
  16. What input material do you have for which 10-bit makes sense?
  17. Member
    Join Date: Jul 2013
    Location: Toronto
    Originally Posted by Selur View Post
    What input material do you have for which 10-bit makes sense?

    I'd like to experiment with graphics applications that support 10-bit color; I'd just like to be able to visually experience it. Also, if there are 10-bit video files out there (anime or otherwise) that have a 10-bit color source, it'd be nice to experience that too.

    But I love me some shallow gradients
  18. Member
    Join Date: Jan 2014
    Location: Kazakhstan
    Hi,
    Is there a formula for encoding and decoding 10-bit color, or does some sort of GUI/calculator exist?
  19. Member Cornucopia's Avatar
    Join Date: Oct 2001
    Location: Deep in the Heart of Texas
    Banding is a form of quantization error, which usually occurs due to (as has already been said) low-precision calculation plus low bit depth. Banding can occur at ANY bit depth, but isn't really noticeable above 8 bits because our eyes are the weak link in the chain. (Notice that in that chain of support you forgot the final link: eyes.)

    Banding can be eliminated by using a combination of higher calculation precision, higher storage (bit depth) precision and/or DITHER. Witness that you can create a GIF or other reduced-palette/LUT image with the equivalent of 4 bits or less and still have no banding IF you use dither. Dither is used for the same purpose in the audio world. However, there is one major difference: dynamic range.

    Digital audio systems can have a stated dynamic range of 24 bits (aka 144 dB) or possibly more, though the real-world necessity for dynamic range is actually limited to ~100 dB. This means you can have EXTRA bits (aka extra dB of SNR) with which to "rob Peter to pay Paul". As we know, dither substitutes quantization error with ~1 LSB of noise. This lowers the overall SNR (usually by ~3-6 dB each time it's used), but if you've got that extra padding available anyway, you'll never miss it. Which is why all pro audio engineers now work in 24-bit until the final stage.

    Digital video systems could benefit in the same way, but their dynamic range is not yet at the point where they have the liberty to continually/regularly reduce their SNR via dither. Most digital video systems are in the 40-60 dB range, with some of the best around 90 dB. So a 3-6 dB loss is much more noticeable here than in audio systems. Video just hasn't yet achieved universal system transparency the way audio has (it's a much more complex beast).

    Yet a digital video system can be built where ALL calculations remain high-precision floating point up until the point of rendering/exporting, where a single dither transform is applied; this maintains smooth, natural levels even with 8-bit final precision. It's done often with photos these days; hopefully it will become the norm for video.
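    The dither-vs-truncation point can be shown in a few lines (a toy 4-bit "GIF-like" quantizer with plain random dither standing in for the fancier shaped dithers; local averaging is a crude stand-in for the eye):

```python
import random
random.seed(0)

N, LEVELS = 4096, 16                       # long ramp, 4-bit "palette"
ramp = [i / (N - 1) for i in range(N)]

def quantize(v, dither):
    # dither replaces the fixed 0.5 rounding offset with random noise
    offset = random.random() if dither else 0.5
    return min(LEVELS - 1, int(v * (LEVELS - 1) + offset)) / (LEVELS - 1)

plain = [quantize(v, False) for v in ramp]
dith  = [quantize(v, True)  for v in ramp]

def blur(xs, w=32):                        # average small neighbourhoods
    return [sum(xs[i:i + w]) / w for i in range(0, len(xs) - w + 1, w)]

def mae(a, b):                             # mean absolute error
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

print(mae(blur(plain), blur(ramp)))   # banding survives local averaging
print(mae(blur(dith),  blur(ramp)))   # dither noise averages toward zero
```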

    Scott
    "When will the rhetorical questions end?!" - George Carlin
  20. Most modern graphics cards are equipped with 10-bit (or more) DACs, so they can natively produce a "10-bit" (or deeper) analog signal. Besides this, since noise is unavoidable in the analog world, it can be seen as natural dither (and at a sufficient level it works quite well). Artificial dither (frequently with noise shaping) can push the perceived bit depth even further; the classic example is 6-bit displays, where even simple dither can simulate more than 8-bit depth. The simulation is far more effective when the resolution is very high: a 6-bit 4K display can in some cases be better than an FHD 8-10-bit display.
    I don't think any CRT suffers from digital processing. The exception is some CRT TVs, but their resolution is limited (SD and some HD-ready); a high-end computer display is usually fully analog (don't expect a high-speed ADC, video processor, memory etc. in a display where they are not necessary).
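    The 6-bit-panel-plus-dither trick is easy to simulate: temporally dither a shade that falls between two 6-bit levels, and the average over many frames lands almost exactly on it (a toy model; real panels use spatio-temporal patterns rather than pure random noise):

```python
import random
random.seed(1)

LEVELS = 64                  # a 6-bit panel
target = 0.505               # a shade between two adjacent 6-bit levels

def frame(v):                # one randomly dithered frame on the panel
    q = min(LEVELS - 1, int(v * (LEVELS - 1) + random.random()))
    return q / (LEVELS - 1)

nearest = round(target * (LEVELS - 1)) / (LEVELS - 1)
shown = sum(frame(target) for _ in range(10000)) / 10000

print(abs(nearest - target))   # static error of the closest 6-bit level
print(abs(shown - target))     # temporal average gets far closer
```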
  21. Member
    Join Date: Jan 2014
    Location: Kazakhstan
    How is RGB32 encoded in 10-bit, and then what formula decodes/restores it back?
  22. Originally Posted by Gravitator View Post
    How encodes RGB32 in 10bit, and then what formula decodes/restores back?
    The same formula as in the 1-, 4- or 16-bit case.
    Decode/restore back? I'm not sure I understand your question correctly.
    Generally it's the same principle as in the other cases - averaging.
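    For the 8-bit <-> 10-bit case specifically, the two mappings you'll usually meet are a straight rescale and bit replication (the sketch below assumes full-range code values; limited-range video instead scales, e.g., luma 16-235 to 64-940):

```python
# Two common ways to expand 8-bit code values to 10-bit,
# and the rounded rescale that maps them back.
def to10_scale(v8):            # exact rescale: 0..255 -> 0..1023
    return round(v8 * 1023 / 255)

def to10_replicate(v8):        # bit replication, cheap in hardware
    return (v8 << 2) | (v8 >> 6)

def to8(v10):                  # back down with rounding
    return round(v10 * 255 / 1023)

print(to10_scale(128), to10_replicate(128))    # both give 514
# The round trip is lossless for every 8-bit value:
assert all(to8(to10_scale(v)) == v for v in range(256))
```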
  23. Keep in mind that 32-bit RGB is really 24-bit RGB plus an 8-bit alpha channel. So it's still 8-bit-per-channel data.
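    A quick illustration of that layout (assuming the common ARGB byte order; the pixel value is made up):

```python
# Unpacking a 32-bit ARGB pixel: four 8-bit fields, so each color
# channel still has only 256 levels no matter what "32 bit" suggests.
pixel = 0x80FF4020           # hypothetical pixel value

a = (pixel >> 24) & 0xFF     # alpha = 0x80
r = (pixel >> 16) & 0xFF     # red   = 0xFF
g = (pixel >> 8) & 0xFF      # green = 0x40
b = pixel & 0xFF             # blue  = 0x20
print(a, r, g, b)
```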
  24. Member
    Join Date: Jan 2014
    Location: Kazakhstan
    How do H.264/H.265 and other encoders compress images in 10-bit?
  25. Calculation precision doesn't have to be the same as the color representation.
    Using 4:4:4 doesn't help with banding; using a higher calculation precision does.
    users currently on my ignore list: deadrats, Stears555


