Hi guys,
Can someone tell me why some uncompressed video formats employ chroma subsampling and are still referred to as uncompressed? I'm thinking of YUV 4:2:2 10-bit uncompressed codecs like v210. I've never heard a satisfactory explanation and I can never find the right Google terms to give an answer!
Many thanks,
Kieran.
-
Last edited by kieranjol; 16th May 2015 at 06:05.
-
Thanks Jagabo. Is it a case of specific definitions of subsampling vs compression? It would seem that subsampling is a form of redundancy removal as well. In the case of 4:2:2, the 50% chroma loss is not easily discernible vs 4:4:4, so it can be removed?
-
Why do you care whether it's called compression or not? It is what it is. And chroma subsampling is easily visible with the right source material.
https://forum.videohelp.com/threads/294144-Viewing-tests-and-sample-files?p=1792760&vie...=1#post1792760
What's gone is gone and cannot be fully restored. Every time you view a 4:2:2 or 4:2:0 video you are upsampling to 4:4:4 RGB.
If I cropped half the frame away, would you want to consider that compression too?
Last edited by jagabo; 16th May 2015 at 08:04.
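jagabo's upsampling point can be sketched in a few lines of Python/NumPy (toy plane sizes and values, purely for illustration, not anything from the thread):

```python
import numpy as np

# Toy 4:2:0 frame: full-resolution luma, chroma at half resolution
# in both dimensions (all values made up for illustration).
y = np.arange(16, dtype=np.uint8).reshape(4, 4)
cb = np.array([[100, 110], [120, 130]], dtype=np.uint8)
cr = np.array([[90, 95], [85, 80]], dtype=np.uint8)

# Nearest-neighbour upsampling to 4:4:4: each chroma sample is simply
# repeated over a 2x2 block of luma samples. The detail discarded when
# the chroma was subsampled never comes back.
cb_full = cb.repeat(2, axis=0).repeat(2, axis=1)
cr_full = cr.repeat(2, axis=0).repeat(2, axis=1)
assert cb_full.shape == y.shape  # one chroma sample per pixel again
```

Real players use smarter interpolation than sample repetition, but the principle is the same: display always happens at 4:4:4, reconstructed from less chroma than the original had.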
-
The reason I care is that I simply want to understand the distinction. I had a naive view that using an uncompressed codec meant that nothing was lost when ingesting, for example, a tape or film. I'm still trying to wrap my brain around what you're saying. In my limited understanding, I would have thought that for a codec to be classed as uncompressed, it would have to be 4:4:4, as reducing the chroma information is compressing the signal.
-
Uncompressed means raw. Compression turns raw information into numbers, coefficients, symbols, words, etc. using various algorithms. Lossless compression allows us to restore the original data. Lossy compression is not precise: during compression we lose some of the original information, so during decompression we cannot restore the original data.
A raw video signal can be in various formats: RGB, YV12 (4:2:0), YV16 (4:2:2). It can be 8-bit, 10-bit, 16-bit, etc.
So, discarding chroma resolution and/or luma resolution has nothing to do with compression, as the data will still be in a raw format.
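To put numbers on this, a rough bytes-per-frame calculator for raw formats (a Python sketch; note real codecs such as v210 pack samples into 32-bit words with padding, so on-disk sizes differ slightly):

```python
# Bytes per frame of raw video, from samples-per-pixel and bit depth.
# Samples per pixel: 4:4:4 -> 3.0, 4:2:2 -> 2.0, 4:2:0 -> 1.5.
def raw_frame_bytes(width, height, samples_per_pixel, bits_per_sample):
    return width * height * samples_per_pixel * bits_per_sample / 8

w, h = 1920, 1080
print(raw_frame_bytes(w, h, 3.0, 8))   # 4:4:4  8-bit: 6220800.0
print(raw_frame_bytes(w, h, 2.0, 10))  # 4:2:2 10-bit: 5184000.0
print(raw_frame_bytes(w, h, 1.5, 8))   # 4:2:0  8-bit: 3110400.0
```

Subsampled raw is smaller, but every number is still a directly addressable sample - no decoding algorithm is needed to read it.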
The reason you don't understand very well is that you don't understand the term "compression". And I cannot explain it better, as my English is a bit limited. Just search for the term "compression" to understand better.
-
Chroma subsampling is obviously a form of compression; in fact, it is lossy compression. It exploits the fact that human vision is less sensitive to color resolution.
Compression techniques in general use various psycho-visual principles, e.g. some things we just notice more than others; that could be color, intensity, movement, or combinations thereof. That color is compressed separately from other compression is understandable for historical reasons. However, in the 21st century it is definitely not the optimum. Color compression should be handled by the codec, not by the chroma subsampling machete, which cuts color resolution regardless of context.
Of course, if some people simply repeat the mantra "it's not compression, it's not compression..." (repeat 100 times), they eventually believe it themselves.
"It's not true, it's not true, newpball is wrong, he is wrong" (repeat 1000 times).
Last edited by newpball; 16th May 2015 at 11:58.
-
My take would be that interlacing and chroma subsampling are both forms of "compression" but not in the same sense as "data compression". And generally it's understood that "uncompressed" refers to the latter sense, so there's no need to throw in more words.
Of course, if you make your definition too broad then downsizing the entire video could also be considered "compression".
Ultimately I suppose the reason for the terminology is that when 4:2:2 was first introduced, there was no need to call it "compressed" to differentiate it from anything else. If you were a producer in 1986 mastering on the bleeding-edge D-1 tape format, you didn't have a 4:4:4 tape somewhere that was considered the unmolested source material. I think you were just happy that you could do nth-generation dubs without losing quality.
-
Thank you Detmek, I do have difficulty understanding compression. I think I've reached the limits of my brainpower! This might be semantics, but I thought "RAW" video was something different from uncompressed. As in, RAW has no processing whatsoever, and may not even be readable without some sort of filtering.
http://en.wikipedia.org/wiki/Raw_image_format
-
In the professional video and photography worlds, "RAW" has a different meaning than "uncompressed". "RAW" usually means undebayered, unfiltered sensor data.
It's not the same thing - these are independent terms. You can have "compressed" raw or "uncompressed" raw. For example, REDCODE RAW is an example of lossy compressed raw.
It should be handled by the person, not the codec. A codec might make the wrong choice for a certain situation.
Yes, in this imaginary world, ideally you should have control over everything. But in the real world, there is cost ($) as part of the consideration.
Even up to a couple of years ago, the lowest cost of entry for full 4:4:4 was about $20,000 for the Sony F3 with the RGB upgrade (it eventually became a free upgrade). And that isn't even full 4:4:4 when measured on chroma zone plates (the sensor could not deliver it). Nowadays, it's a lot cheaper, because UHD 4:2:0 has 1920x1080 CbCr color information. You can just downscale the Y' and you have 4:4:4 1080p. That's one of the huge benefits of oversampling.
-
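That oversampling trick - downscaling only the luma of a UHD 4:2:0 frame - can be sketched with NumPy (random stand-in data, not a real frame):

```python
import numpy as np

# UHD 4:2:0: luma is 3840x2160, Cb/Cr are 1920x1080 (random toy data).
rng = np.random.default_rng(0)
y_uhd = rng.integers(0, 256, (2160, 3840), dtype=np.uint16)
cb = rng.integers(0, 256, (1080, 1920), dtype=np.uint16)
cr = rng.integers(0, 256, (1080, 1920), dtype=np.uint16)

# Box-average the luma 2x in each direction down to 1920x1080.
y_1080 = y_uhd.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

# All three planes are now 1920x1080: effectively 4:4:4 at 1080p.
assert y_1080.shape == cb.shape == cr.shape == (1080, 1920)
```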
Love the Tom Green gif, and thanks for the reply! It's interesting that you say that the codec isn't handling the colour compression. By this do you mean that colour compression could be part of the redundancy removal that is already carried out by a codec? This is painting a clearer picture in my head of how colour compression is handled differently.
-
-
UHD, not 4K.
Mathematically, you do get 10-bit, at least in greyscale. But in practice it's not so simple, and you actually end up with slightly less than 10-bit values (but very close). Part of the problem is that the UHD CbCr channels are 8-bit 1920x1080, so that causes quantization, or discrete steps. The conversion has to be dithered and done properly for you to see a difference.
-
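The "slightly less than 10-bit" point in toy arithmetic (Python, made-up sample values):

```python
# Averaging a 2x2 block of 8-bit luma samples yields quarter-step
# values, i.e. roughly 10-bit precision - but the sum of four 0..255
# samples spans 0..1020, just short of the full 10-bit 0..1023 range.
block = [200, 201, 201, 202]      # four neighbouring 8-bit samples
avg = sum(block) / len(block)     # 201.0 - quarter-step resolution
scaled = sum(block)               # 804, on a 0..1020 scale
print(avg, scaled)
```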
There was a thread over at Doom9 with BenWaggoner & Dark Shikari in it that talked about what newpball was proposing, and after much tangential bantering, the upshot of it all was that, as far as they could discern (and it seemed like opposing camps came to agreement), using the anti-redundancy tools in a codec such as x264 did indeed reduce the bitrate/filesize better for a given quality (or quality for a given bitrate/filesize) than color subsampling.
But, and here is the important part: it was more efficient ONLY for high/very high bitrates.
For medium/low/very low bitrates (of which, 90%+ of the users here qualify), chroma subsampling was more efficient!
Maybe that will put it to rest (but I doubt it).
******************
As far as the OP is concerned with definitions, I think you're going to have to just go along with the professionally accepted categorizations, or you'll always be at odds with the full understanding of what's being discussed.
There are (or can be) various stages video goes through in its journey (from scene to audience):
Optical capture
Analog Electrical transform
Raw Digitizing (Sampling), and optional color subsampling
Convert to Standardized forms (RGB, YUV), and optional color subsampling
Intraframe Transform & Bitrate reduction (aka "compression")
Interframe compression
And then it reverses the process.
Note that with the "standardized form" of RGB or YUV, you could still do a direct "paint by numbers" on a grid and see a visible, understandable image. Once the next transform stage is done, the numbers that describe the image are no longer directly human-eye representational.
I believe this is the demarcation you might be looking for between color-subsampling & compression. In a broad sense there are similarities, but much of what is done with video & audio must dig deeper than broad generalities in order to be useful.
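To illustrate that demarcation, here is a toy DCT in Python/NumPy (an orthonormal DCT-II, the style of transform used at the intraframe stage; the sample values are arbitrary):

```python
import numpy as np

# A row of 8 pixel values: directly "paint by numbers" viewable.
row = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)

# Orthonormal DCT-II basis matrix.
n = 8
k = np.arange(n)
basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
basis[0] *= 1 / np.sqrt(2)
basis *= np.sqrt(2 / n)

coeffs = basis @ row          # frequency coefficients: not pixel values
restored = basis.T @ coeffs   # invertible, so nothing is lost yet
assert np.allclose(restored, row)
```

The coefficients describe the same row exactly, but you can no longer read them as an image; this is where "compression" in the usual codec sense begins, with bitrate reduction coming from quantizing and entropy-coding those coefficients.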
Scott
-
If I take a colour image and remove all the chroma information so what I'm left with is black and white, have I compressed it, or have I created something different by removing the colour information?
If I take a 16-bit wave file and re-sample it as 8-bit, obviously I've "compressed" it in exactly the same way subsampling did - which is to say, not at all. I think you're confused about the difference between the amount of data available for compression and the compression itself.
I saw something similar in another thread recently where people were claiming turning the volume up and down automatically was compression and not normalisation. Like then, what constitutes compression here is probably largely semantics too.
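hello_hello's wave-file example in code form (Python, toy samples): halving the bit depth halves the data, but no compression algorithm is involved - the samples are just coarser:

```python
# Requantize 16-bit PCM samples to 8-bit by keeping the top 8 bits.
samples_16 = [-32768, -1234, 0, 1234, 32767]
samples_8 = [s >> 8 for s in samples_16]
print(samples_8)  # [-128, -5, 0, 4, 127]
```

Like chroma subsampling, this reduces the amount of data before any compressor ever sees it.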
-
A moderator should make that a sticky. Would you mind if I quoted you and used it as a forum signature?
-
Yes, raw is not really the correct term, but I could not find a better one. So, let me try to explain.
In the audio world, PCM is the uncompressed format. It does not need any further processing, so applications can work with it directly. And every lossless (FLAC, ALAC, WavPack) or lossy compressed format (MP3, AAC, Vorbis) has to be decompressed into PCM in order to be played back or edited.
In the video world, video needs to be in an uncompressed format so you can play it back or edit it without any further modification. So, every lossless or lossy compressed video needs to be decompressed in order to be played back or edited.
Chroma subsampling is related to the YUV video format, where luma and chroma are separated. In that way, luma and chroma can have different resolutions. For YUV 4:4:4, luma and chroma have the same resolution. For YUV 4:2:0, they have different resolutions. To convert YUV 4:4:4 to YUV 4:2:0 you need to resize the chroma. And that is not compression. You can resize by discarding pixels or by interpolating pixels, which will create something new, but it is not compression.
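That resize can be sketched in Python/NumPy (made-up chroma values): 4:4:4 to 4:2:0 is just a 2x downscale of each chroma plane, here by averaging 2x2 blocks:

```python
import numpy as np

# One 4x4 chroma plane from a 4:4:4 frame (values made up).
cb_444 = np.array([[100, 102, 110, 112],
                   [104, 106, 114, 116],
                   [120, 122, 130, 132],
                   [124, 126, 134, 136]], dtype=float)

# Average each 2x2 block down to one sample -> the 4:2:0 chroma plane.
h, w = cb_444.shape
cb_420 = cb_444.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
print(cb_420)  # [[103. 113.]
               #  [123. 133.]]
```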
hello_hello's example also works as an explanation.
It's not true, it's not true, newpball is wrong, he is wrong. x1000
-
Yes, it started with / was born of the YUV video format, but it can be used in the 'RGB world' as well.
For example, there are JPG images whose sources were not converted to YCbCr before being compressed;
and in this case, the red and blue channels may be subsampled.
It's not true, it's not true, newpball is wrong, he is wrong. x1000
-
-
I think this has really clarified it. It does appear to be a resizing. To be very crude about it, one could say that with a PAL uncompressed 4:2:2 video, the Y channel is 720x576, but U and V are both effectively stored in the container before decompression as 360x576? I'm probably being too literal here, but that's the gist of it?
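For reference, the plane arithmetic (Python; 4:2:2 halves chroma horizontally only - halving vertically as well would give 4:2:0):

```python
# PAL SD plane sizes: 4:2:2 subsamples chroma horizontally only.
y_w, y_h = 720, 576
u_w, u_h = y_w // 2, y_h   # 360 x 576 for each of U and V
print((y_w, y_h), (u_w, u_h))  # (720, 576) (360, 576)
```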
-
@El Heggunte
I didn't know that RGB can be subsampled. I know it can be full or limited range but not subsampled.
@pandy
Yep, subsampling was the main way to save bandwidth in analog, but it was useful even with digital video in the early days. Now that we deal with 4K resolutions and high-bitrate video streams, it may be time to stop using chroma subsampling. Unfortunately, the new UHD Blu-Ray standard does not support 4:4:4 chroma (unless they changed something in the last few months).
-
@Detmek:
https://forum.videohelp.com/attachments/24800-1398489774/Nene421-rgb.jpg
https://forum.videohelp.com/attachments/24802-1398497209/Nene3xRGB.jpg
One can easily create red+blue_subsampled .JPGs with cjpeg.
-
Well, yes and no - don't forget that video has a 3-dimensional structure - X, Y and time - and there are a few video standards that exploit this. OK, none of them were popular, and now they are outdated, but time and bandwidth can be exchanged (at least under some limited constraints).
Why should BD UHD be better than HDMI 2.0?
HDMI 2.0 is limited to 60 fps, 8-bit and sometimes to 4:2:0 - it looks like the BD UHD spec is beyond HDMI's capabilities anyway.
To be honest, I can accept consumer 4:2:2, but I prefer 4:4:4. However, most people won't see a difference even with 4:1:0 (or perhaps 2:1:0) - the truth is that most people don't see a significant difference between 320x240 and 1920x1080, so... why bother?
-
-
Since the great majority of humans' perception of resolution is based on luma/luminance (aka "Y" in YUV), and since the Y is NOT subsampled, you are quite incorrect in your assessment.
Scott
-
-
Because the eye is far more sensitive to luma than to chroma, right? You can get away with subsampling chroma.
I've seen at AVS (more than once) members say what they want is UHD downconverted to 1080p, for display on a 1080p OLED (to get more chroma information per pixel). Not quite sure what to make of that. If I get UHD Blu-Ray, I want a UHD TV as well.
Pull! Bang! Darn!