As I have been learning more about chroma subsampling, I decided to do a quick test. As most of you likely know, modern HD consumer cameras record in YUV 4:2:0. This is the case for my Canon Vixia HV40. However, I have a Ninja, which records in YUV 4:2:2. So one of the questions on my mind was whether the HDMI-out on my Canon was subsampled to 4:2:0 or 4:2:2. Reviews on the web have been fairly silent on the matter, but I figured out a quick test that should answer the question once and for all.
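For reference, the subsampling schemes mentioned above differ only in the size of the chroma (U/V) planes relative to the luma plane. A quick Python sketch of the arithmetic for a 1920x1080 frame (just the sample counts, nothing camera-specific):

```python
# Chroma plane sizes implied by each subsampling scheme, for a 1920x1080 frame.
# The J:a:b notation is summarized here as simple horizontal/vertical divisors.

def plane_sizes(width, height, h_div, v_div):
    """Return (luma, chroma) plane dimensions given chroma divisors."""
    return (width, height), (width // h_div, height // v_div)

schemes = {
    "4:4:4": (1, 1),  # full chroma resolution
    "4:2:2": (2, 1),  # half horizontal chroma (what the Ninja records)
    "4:2:0": (2, 2),  # half horizontal AND vertical chroma (HDV/AVCHD)
}

for name, (h, v) in schemes.items():
    (lw, lh), (cw, ch) = plane_sizes(1920, 1080, h, v)
    total = lw * lh + 2 * cw * ch  # Y plane + U and V planes
    print(f"{name}: chroma {cw}x{ch}, {total} samples/frame")
```

So 4:2:2 keeps full vertical chroma resolution, which is exactly what the test below is trying to detect.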
Below are cropped, 400%-zoomed VirtualDub screenshots of the exact same frame, recorded simultaneously on the Canon and the Ninja, along with a clearer photo from the web for comparison. The most dramatic difference is that the blue color of the small "Line Out" label shows up in the Ninja image, while on the Canon it is completely gray. Based on this, as well as many other parts of the image (the green and red lights, the two angled reflections on the silver knob, the knob's edge, etc.), I would conclude that the HDMI-out on the camera is not subsampled as heavily as the in-camera recording. Also, please ignore the fact that I can't get VirtualDub to render the aspect ratio of the HDV footage correctly; I don't think it matters here.
However, here is my real question. When I run ConvertToYV16(interlaced=false, matrix="PC.709", chromaresample="Spline36") on the HV40 4:2:0 video, there is no visible difference in the image. Correct me if I am wrong, but shouldn't the chroma-resampled video, using the Spline36 resizer, show at least some interpolation, especially around the lights and the silver knob's reflections and edge?
[Attachment 36135]
Ninja YUV 4:2:2 10-bit
[Attachment 36132]
Canon HV40 YUV 4:2:0 8-bit
[Attachment 36133]
Canon HV40 chroma resampled to YV16
[Attachment 36134]
Last edited by SameSelf; 14th Mar 2016 at 05:16.
You would expect it to show only minor differences. I think what you're missing is that you're "viewing" the image in RGB (the 4:2:0 gets upsampled to full-resolution chroma for the RGB conversion), so there is actually another conversion going on.
Assuming you're converting to RGB the same way for each screenshot, you can think of it as just upscaling the chroma in two steps instead of one.
It's analogous to this for the U and V planes: would you expect there to be much difference between:
A) taking a 1000x1000 photo and scaling it to 2000x2000 in one step, and
B) taking a 1000x1000 photo, scaling it to 1000x2000, then scaling that to 2000x2000 in a second step?
It depends on the scaling algorithm, but if you use the same one for each step, you wouldn't expect a huge difference.
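The two-step analogy can be sketched in plain Python. This is a toy nearest-neighbour ("point") scaler, not AviSynth's actual resamplers; with point scaling by integer factors the two paths are exactly identical, while an interpolating kernel like Spline36 would differ only slightly near edges:

```python
# Toy demonstration: doubling both axes of a chroma plane at once vs.
# doubling one axis at a time (4:2:0 -> 4:2:2 -> 4:4:4) with a
# nearest-neighbour scaler gives identical results.

def scale_rows(img, factor):
    """Repeat each row `factor` times (vertical nearest-neighbour)."""
    return [row[:] for row in img for _ in range(factor)]

def scale_cols(img, factor):
    """Repeat each sample `factor` times (horizontal nearest-neighbour)."""
    return [[px for px in row for _ in range(factor)] for row in img]

def scale_both(img, factor):
    """Scale both axes in a single pass."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

u_plane = [[16, 240], [128, 64]]                  # toy 2x2 chroma plane

one_step = scale_both(u_plane, 2)                 # A) both axes at once
two_step = scale_cols(scale_rows(u_plane, 2), 2)  # B) one axis at a time
print(one_step == two_step)  # True
```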
There is a slight difference. You can see it with:
p1=ImageSource("canon420.png").Crop(5,3,-6,-0)
p2=ImageSource("canon422.png")
Subtract(p1,p2)
Here's another way to view the differences:
p1=ImageSource("canon420.png").Crop(5,3,-6,-0).ConvertToYV24()
p2=ImageSource("canon422.png").ConvertToYV24()
Interleave(StackHorizontal(p1.UtoY(), p1.VtoY()), StackHorizontal(p2.UtoY(), p2.VtoY()))
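For anyone unfamiliar with how Subtract() renders: if I remember the AviSynth documentation correctly, it shows the per-pixel difference offset toward mid-grey (an offset of 126), so two identical frames come out as flat grey and any deviation marks a real difference. A rough Python sketch of the idea:

```python
def subtract_planes(p1, p2, offset=126):
    """Per-pixel difference with a mid-grey offset, clamped to 8-bit range,
    mimicking (as I understand it) what AviSynth's Subtract() displays."""
    return [[max(0, min(255, a - b + offset)) for a, b in zip(r1, r2)]
            for r1, r2 in zip(p1, p2)]

# Identical pixels come out as `offset` (flat grey); anything else marks
# a pixel where the two sources differ.
a = [[100, 120], [140, 160]]
b = [[100, 118], [141, 160]]
print(subtract_planes(a, b))   # [[126, 128], [125, 126]]
```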
Last edited by jagabo; 13th Mar 2016 at 04:22.
Great feedback, guys! Many, many thanks. I ran some more tests based on your feedback, realizing that, yes, I have no idea how VirtualDub converts YUV to RGB. So rather than leave things to chance, I wrote a more explicit script:
LWLibavVideoSource(src, fpsnum = 30000, fpsden = 1001, format = "YUV420P8")
#ConvertToYV16(interlaced=false, matrix="PC.709", chromaresample="Spline36") # Uncomment to test 4:2:2 resample
ConvertToYV24(interlaced=false, matrix="PC.709", chromaresample="Point")
ConvertToRGB(interlaced=false, chromaresample="Point")
YUV420=ImageSource("HV40_420.png").Crop(24,18,0,-3).Trim(0,-1).ConvertToYV24()
YUV422=ImageSource("HV40_422.png").Crop(15,3,-24,0).Trim(0,-1).ConvertToYV24()
Sub=Subtract(YUV420,YUV422).ConvertToYV24()
StackSub = StackHorizontal(Sub.UtoY(), Sub.VtoY())
Stack420 = StackHorizontal(YUV420.UtoY(), YUV420.VtoY())
Stack422 = StackHorizontal(YUV422.UtoY(), YUV422.VtoY())
Interleave(StackSub, Stack420, Stack422)
Using Point resampling avoids whatever chroma upscaling VirtualDub was doing, which was probably bilinear. Point resizing is sharper, but it delivers terrible aliasing artifacts.
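The trade-off can be seen in a toy 1-D sketch (plain Python, not AviSynth's actual resampler implementations): nearest-neighbour "point" upscaling preserves a hard chroma step exactly, while linear interpolation smears it across intermediate values:

```python
# Point vs. linear 2x upscaling of a hard chroma edge.

def upscale_point(row, factor=2):
    """Nearest-neighbour: repeat each sample `factor` times."""
    return [px for px in row for _ in range(factor)]

def upscale_linear(row):
    """Double a row by inserting midpoints between neighbours (2x only)."""
    out = []
    for a, b in zip(row, row[1:]):
        out += [a, (a + b) // 2]
    out += [row[-1], row[-1]]          # pad the final sample
    return out

edge = [16, 16, 240, 240]              # hard chroma step
print(upscale_point(edge))   # [16, 16, 16, 16, 240, 240, 240, 240]
print(upscale_linear(edge))  # [16, 16, 16, 128, 240, 240, 240, 240]
```

The point result keeps the step intact (and will alias on diagonals); the linear result softens it with a 128 in the middle.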
Originally you were doing the equivalent of upscaling the YV12 chroma channels, which are width/2 and height/2 of the luma plane, in two stages rather than one.
Last edited by jagabo; 13th Mar 2016 at 17:04.
I was really close to pulling the trigger on a 4K camcorder this weekend. But after downloading some sample footage, I noticed it was 4:2:0, so I decided to look deeper into this first.
Keep in mind that the Bayer-pattern sensor used in most cameras reduces their chroma resolution to something like 4:2:0 to start with.
And manufacturers overstate the real resolution of their sensors -- i.e., that 8x8 grid on the wiki page would be marketed as a 64-pixel sensor, but it cannot resolve an 8x8 checkerboard pattern. So 4:2:0 is sufficient to encode what they're actually capturing.
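A quick back-of-the-envelope check of that claim, assuming a standard RGGB Bayer layout:

```python
# An NxN RGGB Bayer sensor has one red and one blue photosite per 2x2
# block -- roughly the same chroma sampling density as 4:2:0, which also
# stores one U and one V sample per 2x2 block of luma.

def bayer_counts(n):
    """Photosite counts for an NxN RGGB Bayer grid (n must be even)."""
    blocks = (n // 2) * (n // 2)
    return {"R": blocks, "G": 2 * blocks, "B": blocks}

print(bayer_counts(8))   # {'R': 16, 'G': 32, 'B': 16}

# 4:2:0 stores 4 Y + 1 U + 1 V = 6 samples per 2x2 block, versus 12 for
# 4:4:4 -- half the data, but all the chroma the sensor actually resolved.
```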