The bit depth has very little to do with the accuracy in this context, because it's analogous to downscaling and upscaling in Photoshop, but you're doing it to the Cb/Cr channels. If you downscale a photo, then upscale it, it's going to look like crap, right? Regardless of whether it's 32-bit float or 8-bit.
e.g. 1920x1080 4:2:0 really means Y' is 1920x1080, but Cb and Cr, which contain the color information, are only 960x540. That's what chroma subsampling is. When you convert to RGB, you scale Y, Cb and Cr to RGB with each plane at 1920x1080. (It's not quite that simple, because there isn't a 1:1 mapping between the RGB and YCbCr color models, but it's an easy way to think of it.) Each time you convert back and forth, the losses compound, and colored lines get blurrier and blurrier (analogous to down/upscaling in Photoshop several times).
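To make the scaling analogy concrete, here's a rough Python/numpy sketch (a toy example of my own, not what any real codec does exactly): average a chroma plane down to half resolution, blow it back up, and watch a sharp colored edge turn soft.

```python
import numpy as np

# A sharp one-pixel-wide "colored line" in a chroma (Cb) plane
cb = np.zeros((8, 8), dtype=np.uint8)
cb[:, 4] = 200

def chroma_roundtrip(plane):
    # 4:2:0-style round trip: average 2x2 blocks down to half resolution,
    # then duplicate samples back up (the simplest possible resamplers)
    h, w = plane.shape
    small = plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return small.repeat(2, axis=0).repeat(2, axis=1)

once = chroma_roundtrip(cb)
# The 200-level edge is now a 100-level smear two pixels wide;
# the detail lost here can never be recovered on the next trip.
```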
It's very easy to demonstrate on clean content, like test patterns or clean animation. Not so easy to demonstrate on live action, like a typical film, or it's only visible in certain sequences and content, like a bright red sign. That's why 4:2:0 was chosen in the first place many years ago for delivery: humans have very poor color sensitivity compared to luma (black/white).
Import a YV12 video into AE, export RGB, convert it back to YV12 (or if you want, you can export YV12 directly and let AE do the RGB => YV12 conversion), then view a screenshot (it will be converted back to RGB for display, because the screenshot is in RGB). The lines will become blurrier. If you repeat this, it gets worse and worse. If you need test content, charts or something to prove to yourself that this is really happening, just ask.
Because you're scaling the chroma channels, the algorithm used will have an impact on the end results. e.g. if you used "nearest neighbor", the color channels and pixels will look more blocky and pixelated. e.g. Apple and QuickTime use this. In some circumstances that is desired. If you use bicubic it will be smoother; usually that is more appealing to the human eye on most content. If you use something very sharp, like lanczos4, it will give "chroma aliasing".
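Here's a toy numpy illustration of why the resampler matters (a hypothetical 1-D example, not any particular codec's kernel): nearest neighbor just duplicates samples (blocky), while linear interpolation ramps across the edge (smoother; bicubic is a fancier version of the same idea).

```python
import numpy as np

# Half-resolution chroma samples with a hard color edge
row = np.array([0.0, 0.0, 200.0, 200.0])

# Nearest neighbor: duplicate samples -> blocky, pixelated edges
nearest = row.repeat(2)

# Linear interpolation: ramp across the edge -> smoother result
# (very sharp kernels like lanczos can overshoot instead, which
# reads as chroma aliasing/ringing)
x_small = np.arange(4)
x_big = np.linspace(0, 3, 8)
linear = np.interp(x_big, x_small, row)
```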
Last edited by poisondeathray; 23rd Nov 2014 at 11:41.
Originally Posted by newpball
I said "or if you want you can export YV12 directly and let AE do the RGB=>YV12 conversion"
YUV in, RGB processing, YUV out
It doesn't matter which tool does RGB => YUV, you will incur a loss. That loss will be more visible with 4:2:0 than with 4:2:2 or 4:4:4. It's a universal law, like gravity. It's visible with clean content.
The visible loss boils down to the downscaling and upscaling of the chroma channels (you actually have other losses, but they are more difficult to see). With rasterized content, you will always lose quality.
If you're using the same YUV <=> RGB algorithms, there is no functional difference in terms of YUV/RGB processing between A and B, because there is one (1) YUV to RGB conversion and one (1) RGB to YUV conversion in each case.
For "A", if you input YUV directly, then the application you are importing to does the YUV => RGB conversion. Sometimes there is no conversion (e.g. a Premiere Pro YUV timeline, if you use YUV effects. And even if you use RGB effects, only those sections that you've applied them to will incur the loss; you will see a "red render bar" over those sections).
"Export RGB" means exporting in an RGB format, not YUV, CMYK or something else; I don't know how much clearer that can be. Can you clarify your question?
Isn't the problem that the poster is worried that AE causes quality loss because it internally uses RGB? He does not want to export to RGB, he wants to export to YUV.
Because AE works in RGB, if you import the original YV12 video into AE and do the compositing in AE, the original main video will incur an additional YV12 => RGB conversion, instead of remaining YV12 all the way through to the end.
Instead, if you just render out the effects with a luma matte (as RGB), only those small parts will incur the additional loss (which you normally would have incurred anyway, because the end format is YV12). You're doing the compositing in YUV, not RGB. So the point is the main video doesn't touch RGB in the workflow at all, and that is avoidable quality loss.
In the end, for normal content, it's not a big deal. You really need crisp clear colored lines to see the deterioration. But the point is it's avoidable quality loss, and "best practices"
If this is new to you, read up on "chroma subsampling" and "color models" and do some tests to prove (or disprove) these "alleged" facts to yourself. I can tell you're not going to sleep until you do
I had it explained about a dozen times to me by guys like jagabo, and I needed to prove it to myself with days of testing before I finally understood what was really happening
@newpball, The loss comes from 2 areas: rounding in the color primaries conversion formula, and in the rescaling. You can tell this because even RGB<->YUV4:4:4 shows loss (where there should be no scaling). Oh, and there are some colors in one colorspace that don't have an equivalent in the other, so if they appear (are generated, etc), something needs to be done to map them (which incurs loss/inaccuracy). In practice, EVERY YUV<->RGB conversion adds loss. Using "32bit accuracy" greatly improves the conversion and minimizes the loss, but it's still there.
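The rounding part is easy to demonstrate with a few lines of numpy (a sketch assuming BT.601 limited-range constants; real converters may use slightly different coefficients or precision): run a grid of 8-bit RGB values through a full 4:4:4 round trip, rounding to integers at each step, and count the colors that don't come back.

```python
import numpy as np

# A 16x16x16 grid of 8-bit RGB values (4096 colors)
vals = np.arange(0, 256, 17, dtype=float)
rgb = np.stack(np.meshgrid(vals, vals, vals), axis=-1).reshape(-1, 3)
r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]

# Forward: RGB -> Y'CbCr (BT.601, limited range), rounded to integers
y  = np.round( 16 + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255)
cb = np.round(128 + (-37.797 * r -  74.203 * g + 112.000 * b) / 255)
cr = np.round(128 + (112.000 * r -  93.786 * g -  18.214 * b) / 255)

# Inverse: Y'CbCr -> RGB, rounded and clipped to 8-bit
r2 = np.clip(np.round(1.164 * (y - 16) + 1.596 * (cr - 128)), 0, 255)
g2 = np.clip(np.round(1.164 * (y - 16) - 0.392 * (cb - 128)
                      - 0.813 * (cr - 128)), 0, 255)
b2 = np.clip(np.round(1.164 * (y - 16) + 2.017 * (cb - 128)), 0, 255)

back = np.stack([r2, g2, b2], axis=-1)
# Colors that did not survive the round trip, with NO chroma scaling at all
mismatches = int(np.count_nonzero(np.any(back != rgb, axis=-1)))
```

Even at 4:4:4, with no plane resized anywhere, `mismatches` comes out nonzero; doing the intermediate math in 32-bit float shrinks these errors but the final 8-bit rounding remains.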
In the current version of Premiere Pro (CC 2014) all 32 bit effects are also YUV safe and all YUV safe effects are also 32 bit with one exception, black & white. And that one obviously won't do any good
You do it in AE, and test with 32bit and 8bit modes
Or you can do it in Premiere Pro and export out an RGB format. Beware: when you do it this way, it's usually the codec that does the RGB conversion, not Adobe AME or Premiere. And even if "maximum depth" is checked, it's usually an 8-bit conversion, but you cannot be sure unless you open up PP/AME and look at the code. So do it in AE to be sure you're getting 32-bit.
I could do this, have an uncompressed YUV (4:2:2) source, open that in PP, overlay that with an RGBA source where only a small corner has non transparent information and export that and compare the results with the original (obviously ignoring the corner with the non transparent information).
Does that make sense?
Use AE, like the OP was going to. You clearly have control over 32-bit vs 8-bit there. If you want to "see" the results clearly and prove (or disprove) this, use a 4:2:0 test pattern with thin colored lines.
As mentioned earlier, if your source is "typical" live action, it's difficult to see the degradation unless it has strong colored edges, like a red sign. Things like CG, anime and cel shading have distinct colored edges, so they are more prone to it.
Many people have posted these types of comparisons before, just search.
Okay, it looks like there is definitely a difference when processed in PP, assuming I got everything right.
Here is an uncompressed YUV 4:2:2 video:
01 Source YUV.avi
Here is the lossless RGBA video with the "logo":
02 Overlay RGBA.avi
Here is the combined video exported as uncompressed YUV 4:2:2:
Thanks for sharing the tests.
For lossless screenshots for the forum, you can use PNG (lossless compression). TIFF probably isn't allowed since it's uncompressed. For these quick tests, a single encoded frame or two would probably suffice (150 frames aren't needed since we're not looking at motion or other characteristics) - save on Baldrick's bandwidth bill... or not
One issue is you're scaling your overlay (you're not using "square pixels"), so this introduces other variables. Notice the overlay size and position have shifted (there isn't 1:1 pixel mapping). Also, something else wasn't done correctly, because there is pixel shifting in the main image that is different than the shifting in the overlay (that's the main reason here why you will see a difference if you're doing a subtract or difference operation, not the colorspace conversion).
You're using 4:2:2, so the actual visible loss for humans won't be as bad. Recall for 720x480 4:2:0, the U, V planes would be 360x240, so 1/2 resolution in each direction. For 4:2:2, they would be 360x480, so full resolution height. That full resolution height is one of the reasons why 4:2:2 is standard for professional work and broadcast - the handling of interlaced material is such that chroma samples don't have to be divided between 2 luma lines.
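If you want to double-check the plane-size arithmetic, here's a trivial helper (a hypothetical function, just for illustration):

```python
# Hypothetical helper, just to sanity-check the chroma plane sizes
def chroma_planes(width, height, subsampling):
    """Return (chroma_width, chroma_height) for a subsampling scheme."""
    if subsampling == "4:2:0":
        return width // 2, height // 2   # half resolution both directions
    if subsampling == "4:2:2":
        return width // 2, height        # full-resolution height
    if subsampling == "4:4:4":
        return width, height             # no subsampling
    raise ValueError("unknown subsampling: " + subsampling)

print(chroma_planes(720, 480, "4:2:0"))    # (360, 240)
print(chroma_planes(720, 480, "4:2:2"))    # (360, 480)
print(chroma_planes(1920, 1080, "4:2:0"))  # (960, 540)
```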
Did you send it to AME? Did you have maximum render depth enabled? (MRQ shouldn't affect this if you're doing a 1:1 test, but because there is apparently some scaling going on, this actually does matter)
Last edited by El Heggunte; 23rd Nov 2014 at 23:58.
My logic is flawed anyway, because BMPs are allowed to be uploaded, and I'm pretty sure those don't have other compression options.
This is a "banding" ramp test pattern, chosen because banding is one of the most complained-about issues in anime, and the OP referred to it specifically in the first post. This demonstrates "true" banding, not macroblocks / compression artifacts, since all the tests here use lossless compression.
All videos use the UT Video codec 4:2:0, and were cut to a single frame with VirtualDub in "direct stream copy" mode. (Ahem... notice how small the files are, and they have no audio, because we aren't testing audio...) Direct screenshots were taken with AvsPmod, and the amplified difference screenshots were taken with AE. (This matters when you start to look at pixel shifting, because of the different chroma scaling algorithms used. Some are center aligned, some are left aligned.)
"0_original_utvideo_420.avi" is the "original"
"A_AE_32bpc_utvideo_420.avi" is the operation done in AE in 32bpc, saved out directly from AE as 4:2:0 UT video
"B_overlay_utvideo_420.avi" is the operation done with overlay() in avisynth
"stupidlogo.png" is the stupid logo
You might not be able to "see" the banding introduced, depending on whether you have a 6-, 8- or 10-bit display. So here is the amplified difference (an adjustment layer with levels in AE, with gamma set to 2). This is a common procedure used to examine encodes and differences between videos.
amped diff AE
amped diff Avisynth Overlay()
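For anyone who wants to reproduce the "amplified difference" idea without AE, here's a rough numpy equivalent (my own approximation of the levels/gamma-2 trick, not AE's exact math):

```python
import numpy as np

# Two frames that differ by a mere 2 code values in one pixel
a = np.full((4, 4), 120.0)   # stand-in for the original frame
b = a.copy()
b[1, 1] = 122.0              # "processed" frame

diff = np.abs(a - b) / 255.0     # normalized difference: nearly all zeros
amped = (diff ** 0.5) * 255.0    # gamma 2 (i.e. x^(1/2)) lifts tiny values
# The 2-code error becomes a value around 23 - easily visible on screen
```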
The reason it gets worse when you take a trip back and forth between YUV and RGB land is that YUV and RGB are non-overlapping color models (do you recall the color cube model posted in the other thread?), as not all YUV colors can be represented in RGB. Another reason is AE uses a standard Rec conversion, not full range, so there is more scaling going on. In 32-bit mode, you have access to YUV data before the effect of RGB filters (so nothing is clipped right away), but it's still based on a Rec conversion (Y 16-235 and CbCr 16-240 get "mapped" to 0,0,0-255,255,255). If you map 0-255 to 0,0,0-255,255,255, there is less scaling going on (full range). But that introduces some other issues, especially with end delivery formats. Even if intermediate calcs are done in 32-bit float, when you export to an 8-bit YV12 format you scale RGB 0-1.0 (in 32-bit float values) back to YUV 16-235. Lastly, there are the rounding issues, but those are the least important. The benefit is reduced when going back to 8-bit for the final format. All these put together add up to lower quality. (Now, you can use color management and LUTs in AE, but I won't discuss that here.)
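The limited-range scaling part is easy to verify numerically. A small numpy sketch (assuming plain Rec-style 8-bit scaling, no dithering): squeeze full-range 0-255 into 16-235 and stretch it back, and some neighboring codes collapse together - one source of the banding.

```python
import numpy as np

full_in = np.arange(256)                           # 256 full-range codes
limited = np.round(full_in * 219 / 255 + 16)       # full range -> 16-235
full_back = np.round((limited - 16) * 255 / 219)   # and back to full range

# 256 codes were squeezed into only 220 legal limited-range codes,
# so some adjacent input codes end up identical after the round trip
collapsed = int(np.count_nonzero(full_back != full_in))
```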
Just a clarification about overlay(): in avisynth 2.6.x, it actually does its calculations internally as YUV 4:4:4, so even if you do an RGBA overlay, you don't incur the additional RGB loss on the main video. In 2.5.x I believe it worked in YUY2 (YUV 4:2:2).
Here are the results for YUV 4:2:0 with two frames (PP chokes on single-frame uncompressed files) and no audio (sorry for the large file size and the audio earlier!) and a consistent PAR.
01 Source YUV.avi
Overlay (note: not exactly placed the same as the prior one):
02 Overlay RGBA.avi
03 Result YUV.avi
Difference (jagabo's method)
Last edited by newpball; 24th Nov 2014 at 11:38.
But in avisynth 2.6, this is what Overlay() is doing - the RGBA overlay is internally converted to YUV444