VideoHelp Forum

  1. Member
    Hi All,

    I'd appreciate some pointers for processing Rec.601 YV12 footage for Rec.709 MPEG-2 encoding. Much of it is shot outdoors with the background over-exposed to compensate for a darker foreground. According to Histogram() most frames have invalidly high luma values. Here's a 2-frame sample clip.

    Whenever RGB conversion becomes necessary in the process, for this kind of luma range which would be the best matrix to convert back and forth with - Rec601 or PC.601? Although I've been able to generate terrible results, I'm not sure how to gauge the better-looking ones, being displayed on a PC monitor at PC range I presume.

    In this case which matrix would be best for converting Rec.601 YV12 to Rec.709 for final output - Rec709 or PC.709?

    How does one actually convert Rec.601 YV12 to Rec.709 YV12? With Rec.601 YV12 input, ConvertToYV12(matrix="Rec709") fails with
    ConvertToYV12: invalid "matrix" parameter (RGB data only)
    Does that mean converting Rec.601 YV12 -> Rec.709 YV12 can't be done without an intermediate conversion to say RGB?

    Many thanks in advance,
    Francois
  2. You can't go directly from rec.601 YV12 to rec.709 YV12 with ConvertToYV12(). You could ConvertToRGB(matrix="rec601") followed by ConvertToYV12(matrix="rec709"), but doing that directly on your overblown brights will kill the detail in the sky, so you want to tame those brights first. Alternatively, ColorMatrix(mode="Rec.601->Rec.709") avoids the RGB round trip and its clipping. But you should really tame those brights before encoding anyway, with something like ColorYUV(gain_y=-20). So

    Code:
    ColorYUV(gain_y=-20)
    ConvertToRGB(matrix="rec601")
    ConvertToYV12(matrix="rec709")
    or

    Code:
    ColorYUV(gain_y=-20)
    ColorMatrix(mode="Rec.601->Rec.709")
    should both give the same result (aside from rounding errors).

    Look for darker shots and make sure ColorYUV(gain_y=-20) doesn't cause the darks to get too dark. If so, you'll have to reduce the contrast and change the offset too.
    Last edited by jagabo; 11th Mar 2013 at 07:56.
  3. Member
    This has been a most... illuminating discussion!

    Originally Posted by jagabo View Post
    You could ConvertToRGB(matrix="rec601") followed by ConvertToYV12(matrix="rec709"). But doing that directly on your overblown brights will kill the detail in the sky.
    Do I understand correctly that YUV->RGB conversion involves amongst others scaling luma values to the range [0,255] from the range [16,235], i.e. any values outside that range in the original video are lost (similarly for U & V from range [16,240] etc.)?

    Originally Posted by 2Bdecided View Post
    Why do you need to convert it? My recollection is that you can encode and flag Rec.601 colour to MPEG-2 just fine. IIRC if you don't specify what the colour is, SD video defaults to Rec.601. I could be wrong.
    Good question. The answer is statements like this, but I'm not sure how authoritative that is either.

    Originally Posted by 2Bdecided View Post
    ...it is far better to ensure the video is legal and valid (and preferably NOT just by hard-clipping it to the limits) - i.e. luma sits between 16-235 except for sharpening overshoots, and luma+chroma point to real RGB values.
    Originally Posted by jagabo View Post
    In any case, the OP should fix the levels in his video before encoding
    Is there a better way of checking for out-of-range YUV values than scrubbing through the clip with Histogram(mode="Levels")? Even if it's overkill at this point, just to help my understanding: is it possible for individually valid Y, U and V values to represent invalid RGB values in combination, which is the impression I formed from David's discussion? [Edit: actually from this specific post: "Even using PC-range, not all YUV combinations are 'valid'"] And how would one check for such illegal combinations?

    Thanks very much for the suggestions (and another helpful illustration from jagabo), I'm going to try them out.
    Last edited by fvisagie; 11th Mar 2013 at 14:10.
  4. Originally Posted by fvisagie View Post
    This has been a most... illuminating discussion!

    Originally Posted by jagabo View Post
    You could ConvertToRGB(matrix="rec601") followed by ConvertToYV12(matrix="rec709"). But doing that directly on your overblown brights will kill the detail in the sky.
    Do I understand correctly that YUV->RGB conversion involves amongst others scaling luma values to the range [0,255] from the range [16,235], i.e. any values outside that range in the original video are lost (similarly for U & V from range [16,240] etc.)?
    Yes. Alternatively you could use the PC matrices which avoid the scaling:

    Code:
    ConvertToRGB(matrix="PC.601").ConvertToYV12(matrix="PC.709")
    Originally Posted by fvisagie View Post
    Is there a better way of checking for out-of-range YUV values than scrubbing through the clip with Histogram(mode="Levels")?
    Not that I know of.

    Originally Posted by fvisagie View Post
    is it possible for individually valid Y, U and V values to represent invalid RGB values in combination
    Yes. I wrote a script a few months ago that highlights exactly such pixels by converting YUV to RGB and back, subtracting the result from the original, and amplifying the result.
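    Roughly, the idea in script form (a sketch of the approach, not the actual script - the rec.601 matrix and the amplification range are assumptions):

    ```
    # Round-trip YUV -> RGB -> YUV; only pixels outside the RGB gamut change
    src  = last
    trip = src.ConvertToRGB(matrix="rec601").ConvertToYV12(matrix="rec601")
    # Subtract returns a flat mid-grey where the two clips match
    diff = Subtract(src, trip)
    # Stretch a narrow band around that grey so the affected pixels stand out
    diff.Levels(118, 1.0, 134, 0, 255, coring=false)
    ```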
  5. Member 2Bdecided
    Originally Posted by fvisagie View Post
    Originally Posted by 2Bdecided View Post
    Why do you need to convert it? My recollection is that you can encode and flag Rec.601 colour to MPEG-2 just fine. IIRC if you don't specify what the colour is, SD video defaults to Rec.601. I could be wrong.
    Good question. The answer is statements like this, but I'm not sure how authorative that is either.
    It is not even discussing the same issue. The poster also made a fairly monumental typo. What he's trying to say is correct though: when decoding DV for DVD authoring, either use YUY2, or YV12 with the MPEG-2 chroma sample placement, not YV12 with the native DV sample placement (which is different). This has nothing to do with 601 or 709. DV is always SD, and always 601.

    is it possible for individually valid Y, U and V values to represent invalid RGB values in combination
    Yes.

    You are over complicating this. Your camcorder is just like 99% of recent consumer camcorders: it records luma 16-255. Just use the levels line I provided above (or the equivalent smoothlevels line if you want) to map it to 16-235, and leave it at that. No need to worry about illegal colours. No need to worry about 601/709.
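    In script form, that mapping might look like this (a sketch; the SmoothLevels argument order is assumed to mirror Levels):

    ```
    # map 16-255 camcorder luma into 16-235 without crushing black
    Levels(0, 1.0, 255, 2, 235, coring=false)
    # or the SmoothAdjust equivalent, which dithers as it goes:
    # SmoothLevels(0, 1.0, 255, 2, 235)
    ```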

    If you wish to do manual per-shot colour correction in your NLE, then go for it, but that's a separate issue.

    Cheers,
    David.
  6. Member
    Originally Posted by 2Bdecided View Post
    What he's trying to say is correct though: when decoding DV for DVD authoring, either use YUY2, or YV12 with the MPEG-2 chroma sample placement, not YV12 with the native DV sample placement (which is different).
    OK, having introduced that term here, you'll now have to tell me where I can learn more about it.

    Originally Posted by 2Bdecided View Post
    You are over complicating this. Your camcorder is just like 99% of recent consumer camcorders: it records luma 16-255. Just use the levels line I provided above (or the equivalent smoothlevels line if you want) to map it to 16-235, and leave it at that. No need to worry about illegal colours. No need to worry about 601/709.
    That's a relief! Many thanks for your advice, and that of the others.

    Kind regards,
    Francois
  7. Originally Posted by fvisagie View Post
    Originally Posted by 2Bdecided View Post
    What he's trying to say is correct though: when decoding DV for DVD authoring, either use YUY2, or YV12 with the MPEG-2 chroma sample placement, not YV12 with the native DV sample placement (which is different).
    OK, having introduced that term here, you'll now have to tell me where I can learn more about it.
    http://avisynth.org/mediawiki/Sampling
    http://avisynth.org/mediawiki/ConvertToYUY2
    http://www.mir.com/DMG/chroma.html
  8. Member
    Originally Posted by 2Bdecided View Post
    Your camcorder is just like 99% of recent consumer camcorders: it records luma 16-255.
    Looking into luma correction a little more closely, it seems my camera's luma occupies the whole [0,255] range

    [Attachment: original analysed.png]

    So I experimented with mapping [0,255] to [16,235] in Levels(), and for comparison also included the corresponding ColorYUV() statement
    Code:
    levels      = "Levels(0,1.0,255,16,235,coring=false)"
    coloryuv    = "ColorYUV(off_y=16, gain_y=-36)"
    StackHorizontal(ColorYUV(analyze=true).Subtitle("original", align=1), \
                    Eval(levels).ColorYUV(analyze=true).Subtitle(levels, align=1), \
                    Eval(coloryuv).ColorYUV(analyze=true).Subtitle(coloryuv, align=1))
    I'd previously imagined that Levels() compresses chroma also, but now mapping to [16,235] makes that fairly clear

    [Attachment: original vs levels vs coloryuv.png]

    The AVS 2.58 documentation for Levels() seems to confirm that
    Code:
    For adjusting brightness or contrast it is better to use Tweak or ColorYUV, because Levels also changes the chroma of the clip.
    Mapping [0,255]->[16,235] instead of [16,255]->[~16,235] as before certainly compresses luma more, but I haven't been able to spot any visible banding with this footage. ColorYUV() does a nice job of leaving chroma alone and the results are quite pleasing visually under the circumstances, so I think I'll go with that.
  9. Originally Posted by fvisagie View Post
    Looking into luma correction a little more closely, it seems my camera's luma occupies the whole [0,255] range
    The few pixels down below Y=16 are only noise and overshoots. That's the reason why footroom and headroom are part of the spec. (Add Blur(1.0) before the levels check and you'll see all the pixels below 16 disappear.) Look at a mix of light and dark shots and see if there is anything significant down there.
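    For example, as a quick check (inspection only - leave the Blur() out of the final script):

    ```
    # soften single-pixel noise/overshoots, then re-check the levels
    Blur(1.0)
    Histogram(mode="levels")
    ```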

    Originally Posted by fvisagie View Post
    Mapping [0,255]->[16,235] instead of [16,255]->[~16,235] as before certainly compresses luma more, but I haven't been able to spot any visible banding with this footage.
    Because of the noise.
    Last edited by jagabo; 14th Mar 2013 at 15:50.
  10. Banned
    Originally Posted by fvisagie View Post
    This has been a most... illuminating discussion!
    Indeed. Following on from the mention of TVs that crush invalid colors, those that don't, and setting displays properly...

    I note that a YUV histogram is often used to check these values for correction. Well and good, but IMO Avisynth's Levels histogram is OK for luma levels but rather useless for color ranges -- and I read in Doom9 somewhere that the U and V panels in that histogram are inaccurate.

    I use that histogram for a preliminary luma check, but I convert the video to RGB (using 601 or 709, whichever applies) and then use the ColorTools histogram in VirtualDub to check the color range, or use the RGB histograms in Photoshop Pro or AfterEffects. I do this because I feel that YUV tells me how the data is stored, but RGB tells me how it will be interpreted and displayed. I've seen a great many YUV histograms with perfectly fine 16-235 luma levels, but colors smashed against all sides of the RGB histograms. Often my eyes can see these effects without resorting to 'grams and 'scopes. I make preliminary corrections for legal values in both luma and chroma using Avisynth tools (ColorYUV, etc.).

    A contrast/brightness technique mentioned is the use of pluges and contrast bars, in versions for both PC and TV. These are OK for general quickie setup, but there are more precise methods. Because of the way PC monitor brightness and contrast controls work (or don't work), a pluge is often the only tool available other than colorimeter/software kits designed for PC hardware. The kits are far more accurate than either the controls or the eyeball. But for TV I have EyeOne colorimeters and HCFR software to set black and white levels, which at the same time serves as a gamma setup.

    For TV calibration, I use IRE-0 to IRE-100 gray test patches from the GetGray calibration disc. GetGray itself is an SD tool, but the gray patches are the same for RGB setup of SD and HD alike. For color, you'd use the GetGray color patches for SD but the AVSHD709 disc for HD color tests. For grayscale-only work, either set of gray patches will do for SD or HD.

    Setting TV white level: Mount the colorimeter on the TV, plug its USB cable to a computer, and start the HCFR software. Begin by displaying a 100% (IRE-100 bright white) gray panel. Adjust TV contrast until the IRE-100 panel gives a "reasonable" luminance reading in the range of 30 to 40 cd/m2 for LCD/plasma/CRT. This cd/m2 figure is not arbitrary: it corresponds to a universally accepted brightness level for viewing in average indoor lighting without smashing brights all over the place. I like a fairly punchy image, so I set for a white level of 40 cd/m2. You can go higher or lower, but doing so will affect the target gamma of 2.0 or 2.2 (I shoot for 2.0).

    Setting TV black level: Next, display an IRE-10 dark gray panel (not IRE-0). Adjust the Brightness control until you get an IRE-10 luminance reading that is 0.65% of the IRE-100 reading. If the IRE-100 was 40, then the IRE-10 reading should be in the neighborhood of 0.26 cd/m2 (40 x 0.0065 = 0.26). Because Contrast and Brightness tend to interact, you should repeat the readings and readjust until you get the 40 and 0.26 readings. You have now set a gamma in the area of 2.0 or so. Later, after grayscale calibration, the overall gamma will have been modified, so it's a good idea to check those black and white levels again. Some TVs are not capable of a 2.0-2.2 gamma; in that case, the pluge pattern will verify the set's capabilities and allow tweaking.

    A saturation setting should follow the levels and grayscale calibration. Some experts contend that Saturation is best set using a 100% or 75% Red color patch (use Rec601 color patches for SD, Rec709 color patches for HD -- although, in fact, there's not that much difference between them). Display a 100% or 75% light gray patch, take a reading with HCFR, and note the Y-level reading. HCFR gives x,y, and Y' (the same as YUV "Y") for that patch. You want to note the Y color luminance reading from that patch. Next, display a matching 100% or 75% Red patch, careful to match 100% or 75%, whichever you're using. Set the saturation control until the Red Y-level reads 21% of the white patch.

    Some experts suggest that primary Green is a better base (especially if you're viewing a Panasonic TV with its volcano-level Reds). For a Green patch, set saturation to 70% of the white Y-level. Or if you want to get picky, SD Green should be 70% of the white patch, HD should be 71%. If you can see the difference, you're pretty damn good. If you use other colors, the percentage of white-to-color Y readings goes along these lines:

    Red 21%
    Green 70-71%
    Blue 8-7%
    Yellow 9%
    Cyan 78%
    Magenta 29%
    White 100%

    Setting Tint is far more complicated. To do it precisely, you need HCFR and its Rec601 or Rec709 CIE test matrix to adjust Blue and Green CMS matrix levels -- not possible if you don't have individual CMS controls on your TV, and most TV's don't have them. In that case, the old blue viewing filter and the Tint test screen from the GetGray or AVSHD disc will have to suffice.

    These methods can't be used on a PC. For one thing, many PC monitors don't have accurate RGB controls; their effects vary widely across the RGB range, and it's not possible to adjust Y-saturation levels on anything except pricey pro monitors. For a PC, the XRite calibration kits are quite accurate and thorough. One begins by following the software prompts, which usually involve setting brightness to 120 cd/m2 or so. Most monitors have settings far beyond this value, which means they are bright as hell. The software gives you a grid in which to align a cursor when brightness/contrast are properly set. The rest is automated, acting mostly on the graphic card's LUT to make 38 RGB grayscale x, y, Y', and gamma corrections.

    Not long ago I posted some HCFR images showing how close these PC calibration kits can get: https://forum.videohelp.com/threads/335402-VHS-capture-critique?p=2083260&viewfull=1#post2083260. This post used HCFR's sRGB panels.

    Here are two articles that describe the TV and projector calibration process:
    http://www.curtpalme.com/forum/viewtopic.php?t=10457&start=0&postdays=0&postorder=asc&highlight=
    http://www.avsforum.com/t/852536/basic-guide-to-color-calibration-using-a-cms-updated-and-enhanced
    Here is a link to sample PC calibration:
    http://www.tftcentral.co.uk/reviews/eye_one_display2.htm

    Following these guides for PC/TV, bad video processing becomes immediately obvious, and proper video looks....well, the way it's supposed to.
    Last edited by sanlyn; 25th Mar 2014 at 20:02.
  11. Member 2Bdecided
    Why do you need to convert it? My recollection is that you can encode and flag Rec.601 colour to MPEG-2 just fine. IIRC if you don't specify what the colour is, SD video defaults to Rec.601. I could be wrong.


    btw, I used to think using matrix="PC.601" etc instead of "Rec601" was the way to preserve superwhites/blacks, but it's not quite correct...
    http://forum.doom9.org/showthread.php?t=164981
    ...which would be unfortunate when doing a colour conversion.


    This will bring over-bright luma in range quite simply, without affecting black...
    levels(0,1.0,255,2,235,coring=false)
    ...but there are other ways.

    Cheers,
    David.
  12. Member
    Looks like I've opened myself a can of worms...

    If I leave the levels alone, is there anybody downstream who's expected to clip them? Or does it depend on the implementation of the particular encoder, renderer etc?

    edit: Is there some tool that can tell which matrix a particular clip has been encoded with?
  13. Originally Posted by fvisagie View Post
    If I leave the levels alone, is there anybody downstream who's expected to clip them? Or does it depend on the implementation of the particular encoder, renderer etc?
    Encoders usually pass them along. But when converted to RGB for display (all displays are RGB) the out-of-spec darks and brights will be crushed. Your original sample on the left, after ColorYUV(gain_y=-20) on the right:

    [Attachment: samp.jpg]
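    A comparison like that can be stacked in one script - a sketch, with the clip variable names invented:

    ```
    a = last                           # original levels
    b = a.ColorYUV(gain_y=-20)         # brights brought into range
    # label the two halves so they're easy to tell apart
    StackHorizontal(a.Subtitle("original"), b.Subtitle("gain_y=-20"))
    ```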

    Originally Posted by fvisagie View Post
    edit: Is there some tool that can tell which matrix a particular clip has been encoded with?
    It depends on the container and codec. MediaInfo shows some. DgIndex for MPEG 2 video. Sometimes it isn't flagged and you have to guess. The general rule is rec.601 for SD, rec.709 for HD.
  14. Member 2Bdecided
    Originally Posted by jagabo View Post
    Originally Posted by fvisagie View Post
    If I leave the levels alone, is there anybody downstream who's expected to clip them? Or does it depend on the implementation of the particular encoder, renderer etc?
    Encoders usually pass them along. But when converted to RGB for display (all displays are RGB) the out-of-spec darks and brights will be crushed.
    ...but not all displays will crush them.

    Also, non-PC-centric RGB video over HDMI is always black=16 white=235, not the highlight-clipping black=0 white=255 that is most commonly found on PCs. (By preference, non-PC-centric video is YUV over HDMI if both devices support it - but if one doesn't, you get 16/235 RGB instead).


    That said, it is far better to ensure the video is legal and valid (and preferably NOT just by hard-clipping it to the limits) - i.e. luma sits between 16-235 except for sharpening overshoots, and luma+chroma point to real RGB values.

    Cheers,
    David.
  15. Originally Posted by 2Bdecided View Post
    Originally Posted by jagabo View Post
    Originally Posted by fvisagie View Post
    If I leave the levels alone, is there anybody downstream who's expected to clip them? Or does it depend on the implementation of the particular encoder, renderer etc?
    Encoders usually pass them along. But when converted to RGB for display (all displays are RGB) the out-of-spec darks and brights will be crushed.
    ...but not all displays will crush them.
    Of course. It depends on the display and its settings. A properly calibrated display will crush them. Y=16 should be as dark as the device can display. Y=235 should be as bright as the device can display. You can usually fiddle with the brightness and contrast settings to bring out details outside those regions (when YUV is transmitted). But who wants to fiddle with the settings for every video they play? Encode to the standard and you won't have to.

    Originally Posted by 2Bdecided View Post
    Also, non-PC-centric RGB video over HDMI is always black=16 white=235, not the highlight-clipping black=0 white=255 that is most commonly found on PCs.
    Most devices I have include settings for YUV output or RGB output, and the RGB output has settings for how the YUV is converted to RGB - usually "high" and "low" settings for YUV 16-235 -> RGB 0-255 or YUV 0-255 -> RGB 0-255. The standard is the former.
  16. Member
    Originally Posted by 2Bdecided View Post
    This will bring over-bright luma in range quite simply, without affecting black...
    levels(0,1.0,255,2,235,coring=false)
    ...but there are other ways.
    Originally Posted by 2Bdecided View Post
    Your camcorder is just like 99% of recent consumer camcorders: it records luma 16-255. Just use the levels line I provided above (or the equivalent smoothlevels line if you want) to map it to 16-235, and leave it at that.
    Thanks for that example - I'd have handled the 16 wrong on my own! Mapping 0->2 leaves some leeway one imagines, but for what?
  17. Member 2Bdecided
    Originally Posted by fvisagie View Post
    Originally Posted by 2Bdecided View Post
    This will bring over-bright luma in range quite simply, without affecting black...
    levels(0,1.0,255,2,235,coring=false)
    ...but there are other ways.
    Thanks for that example - I'd have handled the 16 wrong on my own! Mapping 0->2 leaves some leeway one imagines, but for what?
    You want 16 mapped to 16, and 255 mapped to 235, with a linear range in between (and below, to avoid clipping any blacker-than-black overshoots). If you do 0-255>0-235, then 16 gets mapped to 14.7. If you use 0-255>1-235 or 2-235 you avoid that minuscule bit of black crushing, with 16 mapped to 15.7 or 16.6 instead. I wasn't sure which way it was rounded, so used 2. It's just being picky
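    Writing the arithmetic out (the same linear mapping Levels applies; the per-value figures are my own working):

    ```
    # Levels maps: out = (in - in_lo) * (out_hi - out_lo) / (in_hi - in_lo) + out_lo
    # 0-255 -> 0-235:  16 * 235/255     = 14.75   (slight black crush)
    # 0-255 -> 1-235:  16 * 234/255 + 1 = 15.68
    # 0-255 -> 2-235:  16 * 233/255 + 2 = 16.62   (black safe either way it rounds)
    Levels(0, 1.0, 255, 2, 235, coring=false)
    ```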


    Originally Posted by poisondeathray View Post
    But I seriously doubt you need to be worried about dithering on this type of content - you probably care more about the image itself than a waveform tracing
    I have never had a clean enough signal from one of my SD camcorders to make this an issue - the noise in the original image self-dithers it and prevents banding. With VHS captures I sometimes do some extreme level changes, giving a terrible looking histogram - but after subsequent denoising you'd never know. Denoising first and then applying extreme level changes gives easily visible banding, because there's no noise to dither the 8-bit level change.

    I think I can see banding in blue skies from my HD camcorder if I use levels.


    Originally Posted by jagabo View Post
    I stumbled across this PDF with a good explanation of the differences in chroma sub-sampling:
    http://www.compression.ru/download/articles/color_space/ch03.pdf
    I'm sure Keith Jack will be delighted that a Russian website has ripped an entire chapter from his Video Demystified book. (some of it is freely and legally available from Google Books though).

    Cheers,
    David.
  18. Banned
    Originally Posted by 2Bdecided View Post
    With VHS captures I sometimes do some extreme level changes, giving a terrible looking histogram - but after subsequent denoising you'd never know. Denoising first and then applying extreme level changes gives easily visible banding, because there's no noise to dither the 8-bit level change.
    Banding, yes, and a few other noisy problems. So that's why I've insisted that I fix basic levels/colors before other cleaning. Thanks for explaining my own methods to me.
    Last edited by sanlyn; 25th Mar 2014 at 20:04.
  19. Member
    Originally Posted by 2Bdecided View Post
    You want 16 mapped to 16, and 255 mapped to 235, with a linear range in between (and below, to avoid clipping any blacker-than-black overshoots). If you do 0-255>0-235, then 16 gets mapped to 14.7. If you use 0-255>1-235 or 2-235 you avoid that minuscule bit of black crushing, with 16 mapped to 15.7 or 16.6 instead. I wasn't sure which way it was rounded, so used 2. It's just being picky
    Just the way I like it!

    Originally Posted by 2Bdecided View Post
    Originally Posted by poisondeathray View Post
    But I seriously doubt you need to be worried about dithering on this type of content - you probably care more about the image itself than a waveform tracing
    I have never had a clean enough signal from one of my SD camcorders to make this an issue - the noise in the original image self-dithers it and prevents banding. With VHS captures I sometimes do some extreme level changes, giving a terrible looking histogram - but after subsequent denoising you'd never know. Denoising first and then applying extreme level changes gives easily visible banding, because there's no noise to dither the 8-bit level change.
    You guys have convinced me. Now I'm not scared of banding with this footage. In passing you guys implied the existence of potential "debanding or denoising first?" issues and I've seen some anxious debates on that, so I'm actually quite relieved I don't have to concern myself with that.
    Last edited by fvisagie; 13th Mar 2013 at 05:43. Reason: Missing [QUOTE]
  20. David is correct regarding consumer electronics RGB over HDMI; the exception is 640x480, which is always full range according to the spec.

    But CE gear is almost never set to RGB, since all consumer formats are YCbCr.

    The bottom line, as everyone agrees, is just to use legal-range YCbCr.
    Last edited by poisondeathray; 11th Mar 2013 at 13:38.
  21. Member 2Bdecided
    Yes, that last link has the pretty pictures I recall.

    You don't need to learn about it (except for curiosity) - just use the Cedocida DV codec, and either force YUY2 (using the codec's control panel, or pixel_type="YUY2" in the AVISource command in AviSynth), or force YV12 and select "MPEG 2 interlaced" as the decoder option in the Cedocida control panel.
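    In script form (the filename is a placeholder):

    ```
    # force the DV codec to decode to YUY2, sidestepping the YV12 chroma-placement issue
    AVISource("tape.avi", pixel_type="YUY2")
    ```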

    If you do it incorrectly, it's quite hard to see any difference, and very hard to say for certain that it's "wrong" just by looking at the result.

    Cheers,
    David.
  22. Yes, not worth worrying about. Just keep track of interlaced vs. progressive issues. E.g., ConvertToYV12(interlaced=true) for interlaced video, ConvertToYV12() for progressive.
  23. Member
    Thanks, guys.

    Back to 601 vs. 709 one last time, if I mix this DVCAM with HD content, is it advisable to convert one's format to that of the other, say the DVCAM to Rec.709?

    edit: I notice that when normalising levels, bright horizontal lines appear in the histogram, also visible in jagabo's picture above. They're present with both ColorYUV() and Levels(); their number and position just vary in each case. What do those lines represent?
    Last edited by fvisagie; 12th Mar 2013 at 09:39.
  24. Originally Posted by fvisagie View Post

    edit: I notice that when normalising levels, bright horizontal lines appear in the histogram, also visible in jagabo's picture above. They're present with both ColorYUV() and Levels(); their number and position just vary in each case. What do those lines represent?
    Those lines represent quantization. You will see banding along gradients, because your manipulations were done in 8-bit, and this is an 8-bit format. You can dither (essentially add noise to hide it)

    e.g. either use smoothlevels() instead of levels, or use levels() with dither=true (only available in avisynth 2.6.x) , or you can add dithering separately with the dither tools (e.g. gradfun3 or related filters)
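    Sketches of those three options (use one, not all; the SmoothLevels argument order is assumed to mirror Levels, and GradFun3 comes from the Dither package):

    ```
    # 1) SmoothAdjust's SmoothLevels, which dithers internally
    SmoothLevels(0, 1.0, 255, 16, 235)
    # 2) AviSynth 2.6.x only: Levels with built-in dithering
    # Levels(0, 1.0, 255, 16, 235, coring=false, dither=true)
    # 3) plain Levels followed by a separate debanding/dither filter
    # Levels(0, 1.0, 255, 16, 235, coring=false).GradFun3()
    ```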
  25. Member
    Originally Posted by poisondeathray View Post
    You can dither (essentially add noise to hide it)

    e.g. either use smoothlevels() instead of levels, or use levels() with dither=true (only available in avisynth 2.6.x) , or you can add dithering separately with the dither tools (e.g. gradfun3 or related filters)
    I take it you and David are referring to SmoothAdjust's Smoothlevels() function, not the earlier SmoothLevels script?

    For me Avisynth 2.6.x won't be an option yet for a while. How would you chiefly contrast smoothlevels() vs. separate dither tools like gradfun3, e.g. when would one generally prefer the one over the other?
  26. Originally Posted by fvisagie View Post
    Originally Posted by poisondeathray View Post
    You can dither (essentially add noise to hide it)

    e.g. either use smoothlevels() instead of levels, or use levels() with dither=true (only available in avisynth 2.6.x) , or you can add dithering separately with the dither tools (e.g. gradfun3 or related filters)
    I take it you and David are referring to SmoothAdjust's Smoothlevels() function, not the earlier SmoothLevels script?
    Yes, LaTo has re-written it; SmoothAdjust now comes as a .dll, not an avs(i) script. The AviSynth call is still SmoothLevels() as before


    How would you chiefly contrast smoothlevels() vs. separate dither tools like gradfun3, e.g. when would one generally prefer the one over the other?
    On content like this you won't notice much of a difference. I doubt you would be able to see the "banding" changes reflected in the waveform (Histogram() is really a Y' waveform) manifest in the actual image. But with smooth gradients in content like CGI, anime, or shots of a clear blue sky, you can notice a difference in the actual image, and the type of dithering used makes a difference. There is more control over the type of dithering with separate plugins like flash3kyuu_deband and the dither package. The dither package also has uses for other related operations, such as dithering for colourspace conversions and (fake, stacked) higher bit depth support.

    You originally mentioned converting to RGB (presumably for use in other programs). One of the problems with RGB plugins in AviSynth/VirtualDub is that they work at a low 8-bit depth and in non-linear sRGB, so the conversion math isn't as accurate and you get more banding and gamma error (1+1 doesn't equal 2). The "correct" way would be to linearize everything, i.e. use a linear workflow when doing colour corrections in RGB.
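    The "1+1 doesn't equal 2" point is easy to show numerically. A rough sketch, using a plain 2.2 power curve as a stand-in for the true piecewise sRGB transfer function (an approximation, not the exact standard):

```python
# Averaging two pixels directly on gamma-encoded 8-bit values vs. doing
# the same average in linear light. Gamma 2.2 is used here as a simple
# stand-in for the piecewise sRGB curve.

def to_linear(c8):
    """8-bit gamma-encoded value -> linear light in 0..1."""
    return (c8 / 255.0) ** 2.2

def to_gamma(lin):
    """Linear light in 0..1 -> 8-bit gamma-encoded value."""
    return round((lin ** (1 / 2.2)) * 255.0)

black, white = 0, 255

# Filter math done directly on the gamma-encoded values:
naive = (black + white) // 2

# Linear workflow: linearize, do the math, re-encode:
linear_avg = (to_linear(black) + to_linear(white)) / 2
correct = to_gamma(linear_avg)

print(naive, correct)   # the naive average comes out far too dark
```

    Any filter that mixes pixel values (resizing, blurring, blending) makes this same error when it operates on gamma-encoded data, which is why a linear workflow matters for RGB corrections.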
  27. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Search Comp PM
    Thanks for the rundown on dithering approaches, much appreciated.

    Originally Posted by poisondeathray View Post
    You originally mentioned converting for RGB (presumably for use in other programs).
    That's right.

    Originally Posted by poisondeathray View Post
    Some of the problems with RGB plugins in avisynth /vdub is that they work at lower 8 bit depth an in non linear sRGB - so the conversion math isn't as accurate, and you get more banding with gamma error (1+1 doesn't equal 2). The "correct" way would be to linearize everything using a linear workflow when doing color corrections in RGB
    What do you mean by 'linear workflow' and 'linearize everything'? I suspect, though, that this might not apply here: you mention these in regard to corrections in RGB, while in this case I'll likely do colourspace corrections in the original YV12. Still, if you could spare an answer I'd be grateful.
  28. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Search Comp PM
    Originally Posted by fvisagie View Post
    Back to 601 vs. 709 one last time, if I mix this DVCAM with HD content, is it advisable to convert one's format to that of the other, say the DVCAM to Rec.709?
    Yes, as part of the conversion from SD to HD. Your NLE will do it for you (but probably badly); AviSynth can do it very well.

    Several NLEs say they let you mix formats within one project, but the ones I've tried do the conversion pretty poorly.

    When working in HD, I use AviSynth to convert SD to HD before rendering a project. If I know I'll use most of the SD clips, I might convert them all before I start. If I'll only use a small fraction, I'll start with the SD versions in the NLE and create HD versions in AviSynth only of the clips that survive to the final edit. Some NLEs make it easy to replace a clip with a better-quality version without disturbing the edit.

    Cheers,
    David.
    With mixed content you usually convert to whatever the final format is going to be, e.g. if it's an HD timeline intended for HD export, you convert the SD material to Rec.709 using ColorMatrix().

    Everything else still applies - you need to either "legalize" values in YCbCr before converting to RGB, or use a full-range matrix before converting to RGB; otherwise the "Rec" matrices will clip your overbrights.
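    For a grey pixel (Cb = Cr = 128) the matrix maths reduces to a plain luma rescale, which makes the clipping easy to see in a hypothetical Python sketch (the chroma terms are deliberately ignored here):

```python
# Why a limited-range ("Rec") matrix clips overbrights on the way to RGB
# while a full-range ("PC") matrix preserves them, shown for a grey pixel
# where the conversion is just a luma rescale.

def rec_to_rgb(y):
    """Limited-range: luma 16..235 maps to RGB 0..255, then clips."""
    return min(255, max(0, round((y - 16) * 255 / 219)))

def rec_from_rgb(r):
    """Back to limited-range luma."""
    return round(r * 219 / 255) + 16

def pc_to_rgb(y):
    """Full-range: luma passes through unscaled."""
    return y

def pc_from_rgb(r):
    return r

overbright = 250    # an illegal, blown-out value, like an over-exposed sky

rec_trip = rec_from_rgb(rec_to_rgb(overbright))   # lands on legal white
pc_trip = pc_from_rgb(pc_to_rgb(overbright))      # survives intact

print(rec_trip, pc_trip)
```

    This is why the overbrights must either be legalized first (so there is nothing above white to clip) or carried through a full-range conversion.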
  30. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Search Comp PM
    Originally Posted by poisondeathray View Post
    With mixed content you usually convert to whatever the final format is going to be. e.g. if it's an HD timeline intended for HD export, you convert the SD material to Rec709 using colormatrix

    Everything else still applies - you need to either "legalize" values in YCbCr before converting to RGB, or use a full range matrix before converting to RGB, otherwise the "Rec" matrices will clip your overbrights
    I'm a little confused here. Earlier David mentioned (and referred to a discussion on the topic)

    Originally Posted by 2Bdecided View Post
    btw, I used to think using matrix="PC.601" etc instead of "Rec601" was the way to preserve superwhites/blacks, but it's not quite correct...
    http://forum.doom9.org/showthread.php?t=164981
    ...which would be unfortunate when doing a colour conversion.

    In terms of avoiding clipping, which conversions or round trips do then actually work, which don't, and why?


