As I have been learning more about chroma subsampling, I decided to do a quick test. As most of you likely know, modern HD consumer cameras record in YUV 4:2:0. That is the case for my Canon Vixia HV40. However, I have a Ninja, which records YUV 4:2:2. So one of the questions on my mind was whether the HDMI-out on my Canon is subsampled to 4:2:0 or stays at 4:2:2. Reviews on the web have been fairly silent on the matter, but I figured out a quick test that should answer the question once and for all.
Below are cropped, 400%-zoomed VirtualDub screenshots of the exact same frame recorded simultaneously by the Canon and the Ninja, along with a clearer photo from the web for comparison. I think the most dramatic difference is that the blue of the small "Line Out" label shows up in the Ninja image while in the Canon image it is completely gray. So I would conclude from this, as well as from many other parts of the image (the green and red lights, the two angled reflections on the silver knob, the silver knob's edge, etc.), that the HDMI-out on the camera is not subsampled as heavily as its internal recording. Also, please ignore the fact that I can't get VirtualDub to render the aspect ratio of the HDV footage correctly; I don't think it matters.
However, here is my real question. When I ConvertToYV16(interlaced=false, matrix="PC.709", chromaresample="spline36") the HV40 4:2:0 video, there is no difference in the image. So correct me if I am wrong, but shouldn't the chroma-resampled video, using the Spline36 resizer, show at least some sort of interpolation in the image, especially around the lights and the silver knob's reflections and edge? A minimal sketch of this comparison follows the screenshots below.
Web Photo
[Attachment 36135]
Ninja YUV 4:2:2 10-bit
[Attachment 36132]
Canon HV40 YUV 4:2:0 8-bit
[Attachment 36133]
Canon HV40 chroma resampled to YV16
[Attachment 36134]
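For reference, the comparison in question boils down to something like the following (just a minimal sketch; the file name "hv40.m2t" and the LSMASH source filter are placeholders for however the capture is loaded):
Code:
# Minimal sketch: A/B the native 4:2:0 chroma against a Spline36 4:2:2 upsample.
# "hv40.m2t" and LWLibavVideoSource are placeholders; substitute your own capture and source filter.
src = LWLibavVideoSource("hv40.m2t", format="YUV420P8")
a = src.ConvertToYV24(interlaced=false, chromaresample="point")
b = src.ConvertToYV16(interlaced=false, chromaresample="spline36").ConvertToYV24(interlaced=false, chromaresample="point")
StackHorizontal(a.UToY(), b.UToY())   # left: native 4:2:0 chroma, right: chroma via Spline36 4:2:2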
Last edited by SameSelf; 14th Mar 2016 at 05:16.
-
You would expect only minor differences. I think what you're missing is that you're "viewing" the image in RGB (the 4:2:0 gets upsampled to RGB, full color), so there is actually another conversion going on.
Assuming you're converting to RGB in the same way for each screenshot, you can think of it as just upscaling the chroma in 2 steps instead of 1.
It's analogous to this for the U and V planes: would you expect there to be much difference if:
A) I take a 1000x1000 photo and scale it to 2000x2000 in 1 step, or
B) I take a 1000x1000 photo, scale it to 1000x2000, then scale that to 2000x2000 in a 2nd step?
Well, it depends on the scaling algorithm used, but if you use the same one for each step, you wouldn't expect there to be a huge difference.
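A quick way to convince yourself (just a sketch; "photo.png" is a placeholder for any test image) is to do the one-step and two-step upscales with the same kernel and subtract the results:
Code:
# One-step vs two-step upscale with the same kernel; the difference is nearly zero.
# "photo.png" is a placeholder for any test image.
src = ImageSource("photo.png")
one = src.Spline36Resize(src.Width*2, src.Height*2)
two = src.Spline36Resize(src.Width, src.Height*2).Spline36Resize(src.Width*2, src.Height*2)
Subtract(one, two)   # a mostly flat gray result means almost no difference
-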
There is a slight difference. You can see it with:
Code:
p1=ImageSource("canon420.png").Crop(5,3,-6,-0)
p2=ImageSource("canon422.png")
Subtract(p1,p2)
Keep in mind that when you convert YV12 and YV16 to RGB to make PNG files, both must have their chroma upscaled to 4:4:4. So the only difference will be the explicit Spline36 resample in the vertical dimension vs. whatever VirtualDub (?) used to convert to RGB.
Here's another way to view the differences:
Code:
p1=ImageSource("canon420.png").Crop(5,3,-6,-0).ConvertToYV24()
p2=ImageSource("canon422.png").ConvertToYV24()
Interleave(StackHorizontal(p1.UtoY(), p1.VtoY()), StackHorizontal(p2.UtoY(), p2.VtoY()))
You'll see the spline36-resized chroma is sharper vertically.
Last edited by jagabo; 13th Mar 2016 at 04:22.
-
Great feedback, guys! Many, many thanks. I ran some more tests based on your feedback, realizing that, yes, I have no idea how VirtualDub converts YUV to RGB. So rather than leave things to chance, I made a more explicit script:
Code:
LWLibavVideoSource(src, fpsnum=30000, fpsden=1001, format="YUV420P8")
#ConvertToYV16(interlaced=false, matrix="PC.709", chromaresample="Spline36")  # Uncomment to test the 4:2:2 resample
ConvertToYV24(interlaced=false, matrix="PC.709", chromaresample="Point")
ConvertToRGB(interlaced=false, chromaresample="Point")
Then, after grabbing screenshots from VirtualDub, I processed the PNGs:
Code:
YUV420=ImageSource("HV40_420.png").Crop(24,18,0,-3).Trim(0,-1).ConvertToYV24()
YUV422=ImageSource("HV40_422.png").Crop(15,3,-24,0).Trim(0,-1).ConvertToYV24()
Sub=Subtract(YUV420,YUV422).ConvertToYV24()
StackSub = StackHorizontal(Sub.UtoY(), Sub.VtoY())
Stack420 = StackHorizontal(YUV420.UtoY(), YUV420.VtoY())
Stack422 = StackHorizontal(YUV422.UtoY(), YUV422.VtoY())
Interleave(StackSub, Stack420, Stack422)
Here is a short movie I made, hopefully showing the differences between the HDV footage at its native 4:2:0 versus upsampled to 4:2:2 with Spline36. So, yes, there is a difference, and it seems to lie in the vertical direction. I am just not certain how you concluded that it is sharper.
-
I have to admit, while this is all fairly sophisticated stuff, I am at a loss as to what to conclude from it all!
-
It was sharper than what VirtualDub was doing -- which was probably bilinear. Point resizing is sharper still but delivers terrible aliasing artifacts.
Originally you were doing something like this to the YV12 chroma channels, which were width/2 and height/2 of the luma channel:
Code:
Spline36Resize(width/2, height).BilinearResize(width, height)
Where the first resize was specified by you and the second resize was done by VirtualDub. Then you were comparing to VirtualDub's equivalent of:
Code:
BilinearResize(width, height)
So, of course, the first method has sharper chroma channels vertically. Spline36 is much sharper than bilinear.
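To see that difference directly on a clip (just a sketch; the file name and source filter are placeholders for whatever 4:2:0 clip you have), upsample the chroma to 4:4:4 with each kernel and stack the U planes:
Code:
# Spline36 vs bilinear chroma upsampling to 4:4:4, U planes side by side.
# "capture.m2t" is a placeholder for any 4:2:0 clip.
src = LWLibavVideoSource("capture.m2t", format="YUV420P8")
sharp = src.ConvertToYV24(interlaced=false, chromaresample="spline36")
soft = src.ConvertToYV24(interlaced=false, chromaresample="bilinear")
StackHorizontal(sharp.UToY(), soft.UToY())   # left: spline36, right: bilinear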
Last edited by jagabo; 13th Mar 2016 at 17:04.
-
Thanks as always. I guess I am a little discouraged by all this. I was under the naïve assumption that resizing the chroma with Spline36 (or some other resizer) would get me close, or at least closer, to the Ninja result. But I was wrong. OTOH, the Ninja surprises me because it records an actual 4:2:2 image from the HDMI-out rather than upsampling a 4:2:0 image a la ConvertToYV16. The Canon HV40 may not be the sharpest camera, but with the Ninja I am able to capture broadcast-spec footage fairly cheaply. Can you believe these suckers go for only $300 on eBay now?
I was really close to pulling the trigger on a 4K camcorder this weekend. But after downloading some sample footage, I noticed it was 4:2:0. So I decided to look deeper into this before buying.
Thank you for that clarification. Good stuff as always.
-
Keep in mind that the Bayer-pattern sensor used in most cameras reduces their chroma resolution to something like 4:2:0 to start with.
https://en.wikipedia.org/wiki/Bayer_filter
And manufacturers overstate the real resolution of the sensors -- i.e., the 8x8 grid on that wiki page would be called a 64-pixel sensor, yet only 16 of those photosites are red and 16 are blue, and it cannot resolve an 8x8 checkerboard pattern. So 4:2:0 is sufficient to encode what such sensors actually capture.