VideoHelp Forum
  1. Originally Posted by newpball View Post
    Originally Posted by poisondeathray View Post
    The YUV method is higher quality (less quality loss, because you avoid up- and downsampling the chroma), but if you *need* AE, then the RGB trip is unavoidable. The quality loss is visible only with clean colored lines, like clean animation and clean titles, and only to some people (the average joe won't be able to tell). Your audience is anime fans, so they might be able to tell; they tend to be very picky about lines. With live action content it's usually difficult to see the degradation unless you zoom in.
    poisondeathray do you have any test results actually showing that YUV -> RGB -> YUV shows quality loss in Premiere Pro/AE?

    May I ask what you base your statement on?

    Nobody disagrees that if you convert a video from YUV to RGB and then convert it back from RGB to YUV you get quality loss, but I was under the impression that Premiere Pro/AE keeps RGB values at 32 bits per channel, so converting back to YUV would not degrade the quality at all.
    Yes, of course. They have been posted all over, by myself and others, across dozens of forums. These are established, verifiable facts, not some "personal opinion".

    The bit depth has very little to do with the accuracy in this context, because it's analogous to downscaling and upscaling in Photoshop, but applied to the Cb and Cr channels. If you downscale a photo, then upscale it, it's going to look like crap, right? Regardless of 32-bit float or 8-bit.

    e.g. 1920x1080 4:2:0 really means Y' is 1920x1080, but Cb and Cr, which contain the color information, are only 960x540. That's what chroma subsampling is. When you convert to RGB, you scale Cb and Cr up so each RGB plane is 1920x1080. (It's not quite that simple, because there isn't a 1:1 mapping between the RGB and YCbCr color models, but it's an easy way to think of it.) Each time you convert back and forth, the losses compound, and colored lines get blurrier and blurrier (analogous to down/upscaling in Photoshop several times).
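    To picture what that subsampling does, here's a minimal numpy sketch: a toy 1-D chroma row with a hard color edge, box-averaged down and scaled back up with nearest neighbor. Real resizers differ, but the principle (and the loss) is the same.

```python
import numpy as np

# For 1920x1080 4:2:0, the luma plane is full size but each chroma plane
# is half-size in both directions
y_w, y_h = 1920, 1080
cb_w, cb_h = y_w // 2, y_h // 2               # 960 x 540

# One row of a chroma (Cb/Cr) plane with a sharp color edge at an odd offset
chroma = np.array([0., 0., 0., 255., 255., 255., 255., 255.])

# Subsample: average horizontal pairs (downscale by 2)...
down = chroma.reshape(-1, 2).mean(axis=1)     # [0, 127.5, 255, 255]
# ...then scale back up for RGB conversion (nearest neighbor for simplicity)
up = np.repeat(down, 2)

# The sharp edge has become a soft step: the round trip is not lossless
loss = float(np.max(np.abs(up - chroma)))     # 127.5
```

    Run the round trip again on `up` and the edge softens further, which is the compounding described above.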

    It's very easy to demonstrate on clean content, like test patterns and clean animation. Not so easy to demonstrate on live action, like a typical film, where it's only visible in certain sequences and content, like a bright red sign. That's why 4:2:0 was chosen for delivery in the first place, many years ago - humans have very poor color perception compared to luma (black/white).

    Import a YV12 video into AE, export RGB, convert it back to YV12 (or, if you want, export YV12 directly and let AE do the RGB=>YV12 conversion), then view a screenshot (it will be converted back to RGB for display, because the screenshot is in RGB). The lines will become blurrier. If you repeat this, it gets worse and worse. If you need test content or charts to prove to yourself that this is really happening, just ask.

    Because you're scaling the chroma channels, the algorithm used will have an impact on the end result. E.g. with "nearest neighbor" the color channels will look more blocky and pixellated - Apple and QuickTime use this, and in some circumstances that is desired. If you use bicubic it will be smoother, which is usually more appealing to the human eye on most content. If you use something very sharp, like lanczos4, it will produce "chroma aliasing".
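    A toy numpy sketch of that difference (illustrative arrays only; real resizers are more sophisticated): nearest neighbor just duplicates samples, producing hard steps, while linear interpolation ramps between them.

```python
import numpy as np

# A half-resolution chroma row, as stored in a subsampled format
low = np.array([0., 100., 200.])

# Nearest neighbor upscale: each sample is duplicated -> blocky steps
nearest = np.repeat(low, 2)                        # [0, 0, 100, 100, 200, 200]

# Linear upscale: interpolate between samples -> smoother transitions
linear = np.interp(np.arange(6), np.arange(0, 6, 2), low)

step_nearest = float(np.max(np.abs(np.diff(nearest))))   # 100: hard jumps
step_linear = float(np.max(np.abs(np.diff(linear))))     # 50: gentler ramp
```

    The biggest jump between adjacent pixels is twice as large with nearest neighbor, which is exactly the blocky look described above.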
    Last edited by poisondeathray; 23rd Nov 2014 at 11:41.
  2. Banned | Join Date: Oct 2014 | Location: Northern California
    Originally Posted by poisondeathray View Post
    Import a YV12 video into AE, export RGB, convert it back to YV12
    No, no, no, I wrote I already agreed that that would give quality loss, please read what I wrote.

    Originally Posted by newpball
    Nobody disagrees that if you convert a video from YUV to RGB and then take that converted video and convert it back from RGB to YUV you get quality loss
    I am not talking about exporting the video in RGB, I am talking about YUV in, processing internally in Premiere Pro/AE in RGB and then export in YUV.
  3. Originally Posted by newpball View Post
    Originally Posted by poisondeathray View Post
    Import a YV12 video into AE, export RGB, convert it back to YV12
    No, no, no, I wrote I already agreed that that would give quality loss, please read what I wrote.

    I am talking about YUV in, processing internally in Premiere Pro/AE in RGB and then export in YUV.
    Yes, that's what it says, please read what I wrote

    I said "or if you want you can export YV12 directly and let AE do the RGB=>YV12 conversion"

    YUV in, RGB processing, YUV out

    It doesn't matter which tool does the RGB => YUV conversion; you will incur a loss. That loss is more visible with 4:2:0 than with 4:2:2 or 4:4:4. It's a universal law, like gravity. It's visible with clean content.

    The visible loss boils down to the downscaling and upscaling of the chroma channels (there are other losses too, but they are more difficult to see). With rasterised content, you will always lose quality.
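    A minimal numpy sketch of that 4:2:0 vs 4:2:2 vs 4:4:4 ordering, isolating just the chroma rescaling component (box downscale + nearest-neighbor upscale on a toy plane; the exact numbers depend on the resizer, but 4:4:4 involves no rescaling at all):

```python
import numpy as np

def down_up(plane, fy, fx):
    """Box-average downscale by (fy, fx), then nearest-neighbor upscale back."""
    h, w = plane.shape
    d = plane.reshape(h // fy, fy, w // fx, fx).mean(axis=(1, 3))
    return np.repeat(np.repeat(d, fy, axis=0), fx, axis=1)

# 8x8 toy chroma plane with a horizontal and a vertical color edge at odd offsets
edge = np.array([0., 0., 0., 100., 100., 100., 100., 100.])
plane = np.add.outer(edge, edge)     # values 0..200

e444 = float(np.max(np.abs(down_up(plane, 1, 1) - plane)))   # no subsampling: 0
e422 = float(np.max(np.abs(down_up(plane, 1, 2) - plane)))   # half width:    50
e420 = float(np.max(np.abs(down_up(plane, 2, 2) - plane)))   # half both:    100
```

    The worst-case chroma error doubles going from 4:2:2 to 4:2:0 on this pattern, and is zero at 4:4:4 (rounding and gamut losses, discussed later in the thread, are separate from this).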
  4. Banned | Join Date: Oct 2014 | Location: Northern California
    Originally Posted by poisondeathray View Post
    Originally Posted by newpball View Post
    Originally Posted by poisondeathray View Post
    Import a YV12 video into AE, export RGB, convert it back to YV12
    No, no, no, I wrote I already agreed that that would give quality loss, please read what I wrote.

    I am talking about YUV in, processing internally in Premiere Pro/AE in RGB and then export in YUV.
    Yes, that's what it says, please read what I wrote
    So what are you saying?

    Let me ask you this: is there a difference between

    A) Input YUV, process internally RGB, export YUV
    B) Input YUV, export RGB, input RGB export YUV


    So when you write "export RGB", what do you actually mean?
  5. Originally Posted by newpball View Post
    Originally Posted by poisondeathray View Post
    Originally Posted by newpball View Post
    Originally Posted by poisondeathray View Post
    Import a YV12 video into AE, export RGB, convert it back to YV12
    No, no, no, I wrote I already agreed that that would give quality loss, please read what I wrote.

    I am talking about YUV in, processing internally in Premiere Pro/AE in RGB and then export in YUV.
    Yes, that's what it says, please read what I wrote
    So what are you saying?

    Let me ask you this: is there a difference between

    A) Input YUV, process internally RGB, export YUV
    B) Input YUV, export RGB, input RGB export YUV


    So when you write "export RGB", what do you actually mean?

    If you're using the same YUV<=>RGB algorithms, there is no functional difference between A and B, because each involves one YUV to RGB conversion and one RGB to YUV conversion.

    For "A", if you input YUV directly, the application you are importing into does the YUV => RGB conversion. Sometimes there is no conversion (e.g. a Premiere Pro YUV timeline, if you use YUV effects; and even if you use RGB effects, only the sections you've applied them to will incur the loss - you will see a "red render bar" over those sections).

    "export RGB" means exporting in an RGB format - not YUV, CMYK or something else. I don't know how much clearer that can be. Can you clarify your question?
  6. Banned | Join Date: Oct 2014 | Location: Northern California
    Originally Posted by poisondeathray View Post
    For "A" if you input YUV directly, then that application you are importing to does the YUV => RGB conversion. Sometimes there is no conversion (e.g. Premiere pro YUV timeline, if you use YUV effects. And even if you use RGB effects, only those sections that you've applied it to will incur the loss - you will see "red render bar" over those sections)
    The red bar has nothing to do with the alleged loss; it simply indicates the processing cannot be done at normal video speed.

    Originally Posted by poisondeathray View Post
    "export RGB" means exporting in an RGB format, not YUV, CMYK or something else - don't know how much clearer that can be ? Can you clarify your question ?
    But exporting to RGB is not relevant to this topic, is it?

    Isn't the problem that the poster is worried that AE causes quality loss because it internally uses RGB? He does not want to export to RGB, he wants to export to YUV.
  7. Originally Posted by newpball View Post
    Originally Posted by poisondeathray View Post
    For "A" if you input YUV directly, then that application you are importing to does the YUV => RGB conversion. Sometimes there is no conversion (e.g. Premiere pro YUV timeline, if you use YUV effects. And even if you use RGB effects, only those sections that you've applied it to will incur the loss - you will see "red render bar" over those sections)
    The red bar has nothing to do with the loss, it simply indicates the processing cannot be done at normal video speed.
    I'm just letting you know that you don't necessarily lose quality on the ENTIRE video, because only the sections that go through the RGB conversion incur the additional loss.



    Originally Posted by poisondeathray View Post
    "export RGB" means exporting in an RGB format, not YUV, CMYK or something else - don't know how much clearer that can be ? Can you clarify your question ?
    But exporting to RGB is not relevant to this topic, is it?

    Isn't the problem that the poster is worried that AE causes quality loss because it internally uses RGB? He does not want to export to RGB, he wants to export to YUV.
    Exactly!

    Because AE works in RGB, if you import the original YV12 video into AE and do the compositing there, the original main video incurs an additional YV12 => RGB conversion instead of remaining YV12 all the way through to the end.

    Instead, if you just render out the effects with a luma matte (as RGB), only those small parts incur the additional loss (beyond what you would have incurred anyway, because the end format is YV12). You're doing the compositing in YUV, not RGB. The point is the main video doesn't touch RGB in the workflow at all - that is avoidable quality loss.

    In the end, for normal content, it's not a big deal. You really need crisp, clean colored lines to see the deterioration. But the point is it's avoidable quality loss, and "best practices".
  8. Originally Posted by newpball View Post
    Originally Posted by poisondeathray View Post
    For "A" if you input YUV directly, then that application you are importing to does the YUV => RGB conversion. Sometimes there is no conversion (e.g. Premiere pro YUV timeline, if you use YUV effects. And even if you use RGB effects, only those sections that you've applied it to will incur the loss - you will see "red render bar" over those sections)
    The red bar has nothing to do with the alleged loss, it simply indicates the processing cannot be done at normal video speed.
    I love how you added "alleged loss". In the video world, that's like telling a physicist that gravity "allegedly exists".

    If this is new to you, read up on "chroma subsampling" and "color models", and do some tests to prove (or disprove) these "alleged" facts to yourself. I can tell you're not going to sleep until you do.

    I had it explained about a dozen times to me by guys like jagabo, and I needed days of testing to prove it to myself before I finally understood what was really happening.
  9. Member Cornucopia | Join Date: Oct 2001 | Location: Deep in the Heart of Texas
    @newpball, the loss comes from two areas: rounding in the color model conversion formula, and the rescaling. You can tell this because even RGB<->YUV 4:4:4 shows loss (where there should be no scaling). Oh, and there are some colors in one colorspace that don't have an equivalent in the other, so if they appear (are generated, etc.), something needs to be done to map them, which incurs loss/inaccuracy. In practice, EVERY YUV<->RGB conversion adds loss. Using "32bit accuracy" greatly improves the conversion and minimizes the loss, but it's still there.
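    Both loss sources can be sketched in a few lines of Python, using the common 8-bit BT.601 limited-range integer approximation (textbook coefficients, not necessarily what any particular app uses): a perfectly legal Y'CbCr triplet can land outside the RGB cube, get clipped, and then fail to round-trip.

```python
def yuv_to_rgb(y, cb, cr):
    # 8-bit BT.601 limited-range Y'CbCr -> R'G'B' (textbook approximation)
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    # clipping here is the "unmappable color" loss; rounding is the other loss
    return tuple(int(round(min(max(v, 0.0), 255.0))) for v in (r, g, b))

def rgb_to_yuv(r, g, b):
    y = 16 + 0.257 * r + 0.504 * g + 0.098 * b
    cb = 128 - 0.148 * r - 0.291 * g + 0.439 * b
    cr = 128 + 0.439 * r - 0.368 * g - 0.071 * b
    return tuple(int(round(v)) for v in (y, cb, cr))

# A legal Y'CbCr triplet with no RGB equivalent: G goes negative and gets
# clipped, so the round trip cannot recover the original values
src = (16, 240, 240)
rt = rgb_to_yuv(*yuv_to_rgb(*src))
lossless = (rt == src)      # False: the conversion is not reversible
```

    Higher-precision intermediates shrink the rounding part of the error, but the gamut clipping remains, which is Scott's point.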

    Scott
  10. Banned | Join Date: Oct 2014 | Location: Northern California
    Originally Posted by Cornucopia View Post
    Using "32bit accuracy" greatly improves the conversion and minimizes the loss, but it's still there.
    How do we actually test and verify this?

    In the current version of Premiere Pro (CC 2014), all 32-bit effects are also YUV-safe and all YUV-safe effects are also 32-bit, with one exception: Black & White. And that one obviously won't do any good.
  11. You do it in AE, and test with 32bit and 8bit modes

    Or you can do it in Premiere Pro and export an RGB format. Beware: when you do it this way, it's usually the codec that does the RGB conversion, not Adobe AME or Premiere - and even if "maximum depth" is checked it's usually an 8-bit conversion, and you can't be sure unless you open up PP/AME and look at the code - so do it in AE to be sure you're getting 32-bit.
  12. Banned | Join Date: Oct 2014 | Location: Northern California
    I could do this: take an uncompressed YUV (4:2:2) source, open it in PP, overlay it with an RGBA source where only a small corner has non-transparent information, export that, and compare the result with the original (obviously ignoring the corner with the non-transparent information).

    Does that make sense?
    Use AE, like the OP was going to. You clearly have control over 32-bit vs 8-bit there. If you want to "see" the results clearly and prove (or disprove) this, use a 4:2:0 test pattern with thin colored lines.

    As mentioned earlier, if your source is "typical" live action, it's difficult to see the degradation unless it has strong colored edges, like a red sign. Things like CG, anime, and cel shading have distinct colored edges, so they are more prone.

    Many people have posted these types of comparisons before, just search.
  14. Banned | Join Date: Oct 2014 | Location: Northern California
    Okay, it looks like there is definitely a difference when processed in PP, assuming I got everything right.

    Here is an uncompressed YUV 4:2:2 video:
    01 Source YUV.avi

    Here is the lossless RGBA video with the "logo":
    02 Overlay RGBA.avi

    Here is the combined video exported as uncompressed YUV 4:2:2:
    03 Combined.avi


  15. Banned | Join Date: Oct 2014 | Location: Northern California
    Here is the difference between the source and combined:

    Oops, can't upload a tiff file.

    Oh well, here is a jpeg:

    [Image: Difference.jpg]

  16. Thanks for sharing the tests.

    For lossless screenshots for the forum, you can use PNG (lossless compression). TIFF probably isn't allowed since it's uncompressed. For these quick tests, a single encoded frame or two would probably suffice (150 frames aren't needed since we're not looking at motion or other characteristics) - save on Baldrick's bandwidth bill... or not.

    One issue is that you're scaling your overlay (you're not using "square pixels"), so this introduces other variables. Notice the overlay size and position have shifted (there isn't 1:1 pixel mapping). Also, something else wasn't done correctly, because there is pixel shifting in the main image that is different from the shifting in the overlay (that, not a colorspace conversion, is the main reason you see a difference here when doing a subtract or difference operation).

    You're using 4:2:2, so the actual visible loss for humans won't be as bad. Recall that for 720x480 4:2:0, the U and V planes would be 360x240, i.e. half resolution in each direction. For 4:2:2, they would be 360x480, so full resolution in height. That full-resolution height is one of the reasons 4:2:2 is standard for professional work and broadcast - with interlaced material, chroma samples don't have to be divided between 2 luma lines.
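    That plane arithmetic in a few lines (just the subsampling divisions, nothing app-specific; the helper name is made up for illustration):

```python
# Chroma plane dimensions implied by each subsampling mode (pure arithmetic)
def chroma_plane_size(width, height, mode):
    if mode == "4:2:0":
        return (width // 2, height // 2)
    if mode == "4:2:2":
        return (width // 2, height)   # full height: kinder to interlacing
    return (width, height)            # 4:4:4

sd420 = chroma_plane_size(720, 480, "4:2:0")   # (360, 240)
sd422 = chroma_plane_size(720, 480, "4:2:2")   # (360, 480)
```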

    Did you send it to AME? Did you have maximum render depth enabled? (MRQ shouldn't affect this in a 1:1 test, but since there is apparently some scaling going on, it actually does matter.)
  17. DECEASED | Join Date: Jun 2009 | Location: Heaven
    Originally Posted by poisondeathray View Post
    TIFF isn't allowed probably since it's uncompressed.
    Actually, that's incorrect. The TIFF container supports LZW, Deflate, and even JPG.
    Not every software supports all these possibilities though.
    Last edited by El Heggunte; 23rd Nov 2014 at 23:58.
  18. Originally Posted by El Heggunte View Post
    Originally Posted by poisondeathray View Post
    TIFF isn't allowed probably since it's uncompressed.
    Actually, that's incorrect. The TIFF container supports LZW, Deflate, and even JPG.
    Not every software supports all those possibilities at the same time, though.
    You're right! I can see LZW as an option in most software, but not the others. You learn something new every day. JPG in a TIFF? Wow, never had the slightest clue.

    My logic was flawed anyway, because BMPs are allowed to be uploaded, and I'm pretty sure those don't have other compression options.
    This is a "banding" ramp test pattern, chosen because banding is one of the most complained-about issues in anime, and the OP specifically referred to it in the first post. This demonstrates "true" banding, not macroblocks / compression artifacts, since all the tests here use lossless compression.

    All videos use the UT Video codec 4:2:0, cut to a single frame with VirtualDub in "direct stream copy" mode. (Ahem... notice how small the files are, and they have no audio, because we aren't testing audio...) Direct screenshots were taken with AvsPmod, and the amplified difference screenshots were taken with AE. (This matters when you start looking at pixel shifting, because of the different chroma scaling algorithms used: some are center aligned, some are left aligned.)

    "0_original_utvideo_420.avi" is the "original"
    "A_AE_32bpc_utvideo_420.avi" is the operation done in AE in 32bpc, saved out directly from AE as 4:2:0 UT video
    "B_overlay_utvideo_420.avi" is the operation done with overlay() in avisynth
    "stupidlogo.png" is the stupid logo

    AE 32bpc
    [Image: A_AE_32bpc.png]

    Avisynth Overlay()
    [Image: B_avisynth_overlay.png]

    You might not be able to "see" the introduced banding, depending on whether you have a 6-, 8- or 10-bit display. So here is the amplified difference (an adjustment layer with Levels in AE, gamma set to 2). This is a common procedure used to examine encodes and differences between videos.

    amped diff AE
    [Image: amp_diff_AE_32bpc.png]

    amped diff Avisynth Overlay()
    [Image: amp_diff_avisynth_overlay.png]

    The reason it gets worse when you take a trip back and forth between YUV and RGB land is that YUV and RGB are non-overlapping color models (recall the color cube model posted in the other thread?) - not all YUV colors can be represented in RGB. Another reason is that AE uses a standard Rec conversion, not full range, so there is more scaling going on. In 32-bit mode you have access to the YUV data before the effect of RGB filters (so nothing is clipped right away), but it's still based on a Rec conversion (Y 16-235 and CbCr 16-240 get "mapped" to 0,0,0-255,255,255). If you map 0-255 to 0,0,0-255,255,255 instead (full range), there is less scaling going on, but that introduces other issues, especially with end delivery formats. Even if intermediate calculations are done in 32-bit float, when you export an 8-bit YV12 format you scale RGB 0-1.0 (in 32-bit float values) back to YUV 16-235, so the benefit is reduced when going back to 8-bit for the final format. Lastly there are the rounding issues, but those are the least important. All of these put together add up to lower quality. (Now, you can use color management and LUTs in AE, but I won't discuss that here.)
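    The range-mapping part of that loss can be sketched with the standard 16-235 scaling formulas (a simple rounding model; real converters may dither, which hides this): 256 full-range codes must squeeze into 220 limited-range codes, so distinct levels collapse together, which is one source of banding on smooth gradients.

```python
# Full-range (0-255) levels squeezed into limited range (16-235) and back,
# with 8-bit rounding at each step
levels = list(range(256))
limited = [round(16 + v * 219 / 255) for v in levels]
restored = [round((y - 16) * 255 / 219) for y in limited]

distinct_limited = len(set(limited))   # only 220 distinct codes survive
changed = sum(1 for v, r in zip(levels, restored) if v != r)   # > 0
```

    Note the loss is in the compressing direction; expanding 16-235 out to 0-255 and back is recoverable, which is why the damage shows up at the 8-bit export step rather than inside the 32-bit float comp.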

    Just a clarification about overlay(): in avisynth 2.6.x it actually does its calculations internally in YUV 4:4:4, so even if you do an RGBA overlay, you don't incur the additional RGB loss on the main video. In 2.5.x I believe it worked in YUY2 (YUV 4:2:2).
    [Attached image: stupidlogo.png]
  20. Originally Posted by newpball View Post
    Here is the difference between the source and combined:
    Another method of comparing: v1-v2+128, and amplified 16x:

    v1=AviSource("01 Source YUV.avi")
    v2=AviSource("03 Combined.avi")
    Subtract(v1,v2) # v1 - v2 + 128
    StackHorizontal(last, Levels(120,1,136,0,255))
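    For anyone without AviSynth, here's a rough numpy equivalent of jagabo's subtract-and-amplify idea (toy 2x2 frames as stand-ins for v1/v2; Levels(120,1,136,0,255) stretches the 16 levels around 128 to full range, roughly 16x gain):

```python
import numpy as np

# Two hypothetical 8-bit frames (tiny 2x2 luma planes, stand-ins for v1/v2)
v1 = np.array([[100, 101], [102, 103]], dtype=np.int16)
v2 = np.array([[100, 102], [101, 103]], dtype=np.int16)

# Subtract(v1, v2): v1 - v2 + 128, clipped to 8 bits (identical pixels -> 128)
diff = np.clip(v1 - v2 + 128, 0, 255)

# Levels(120, 1, 136, 0, 255): stretch input 120..136 to output 0..255 (~16x)
amped = np.clip((diff - 120) * (255.0 / 16.0), 0, 255).astype(np.uint8)
```

    A flat mid-gray `amped` image means the two clips match; any structure in it is amplified error.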

    [Image: diff.png]
  21. Banned | Join Date: Oct 2014 | Location: Northern California
    Here are the results for YUV 4:2:0, with two frames (PP chokes on uncompressed single-frame files), no audio (sorry for the large file size and the audio earlier!), and a consistent PAR.

    Source:
    01 Source YUV.avi

    Overlay (note: not exactly placed the same as the prior one):
    02 Overlay RGBA.avi

    Result:
    03 Result YUV.avi

    Difference (jagabo's method)
    [Image: 04 Difference (Avisynth).png]
    Last edited by newpball; 24th Nov 2014 at 11:38.
  22. Originally Posted by newpball View Post
    Question, would not the best approach be to convert the overlay to YUV and export it to a lossless format and then merge the videos?
    In PP (and AE), all layer blending operations work in RGB, so it wouldn't help there. In fact it would be a bit worse, because you'd incur RGBA=>YUVA=>RGBA instead of just RGBA.

    But in avisynth 2.6, this is what Overlay() is doing - the RGBA overlay is internally converted to YUV444
  23. Banned | Join Date: Oct 2014 | Location: Northern California
    Originally Posted by poisondeathray View Post
    Originally Posted by newpball View Post
    Question, would not the best approach be to convert the overlay to YUV and export it to a lossless format and then merge the videos?
    In PP (and AE), all layer blending operations work in RGB, so it wouldn't help there

    But in avisynth 2.6, this is what Overlay() is doing - the overlay is internally converted to YUV444
    Indeed, I see what you mean.
    (I took the question out, but you caught me in the act.)