VideoHelp Forum
Page 1 of 2
Results 1 to 30 of 46
Thread
  1. Member
    Join Date: Dec 2005
    Location: United States
    I'm going a bit crazy with capturing the correct chroma and hue levels from VHS. My goal is to ensure that I capture all the chroma and hue information I can from my VHS tape. I've gone through a lot of threads and find A LOT of discussion on luma (the Y component) and getting the luma levels correctly captured to the capture card, but not much on chroma (the UV components). I thought I would give chroma and hue some attention.

    For what I say below, let's assume the HUE levels are correctly adjusted prior to the capture to digital.

    However, it seems to me (please correct me if I am wrong) that the chroma sort of piggybacks off the luma levels. In other words, as long as I ensure my luma levels are correct, then my chroma levels would also be correct, as long as I have captured 'some' chroma. So all I need is a little chroma to determine the color.

    To me, chroma reflects the saturation of color. If there is no chroma, then there is no color (it's a black and white picture).

    However, having captured 'some' chroma, I then have enough information encoded to digital, I would believe. This leads me to think that trying to capture correct chroma levels is irrelevant. Why? Because it seems I can simply correct the chroma levels in POST production (i.e. correcting the chroma in the digitally captured video). Yes? No?

    Chroma consists of U and V, where U and V are the blue-luminance and red-luminance differences from Y (luma), correct? So as long as I have recorded some blue-luminance (the U component) difference from Y and some red-luminance (the V component) difference from Y, then the COLOR is essentially captured. Is the chroma level (i.e. the saturation) correct upon capture from VHS? I would think: who cares! Because in post production (on the digitally captured video) I can adjust the chroma level (i.e. saturation), which would be equivalent to adjusting the chroma levels (say, through a hardware proc amp) prior to the capture to digital video. Right? Is that the correct understanding? Or am I missing something?

    So this assumes the HUE is correct upon capture.

    Now let's look at HUE:

    To me (though I could be wrong), correct chroma and correct HUE are two separate things. But assuming I had the chroma levels correct prior to capturing video, wouldn't I also be able to correct the HUE in post production as well?

    My point is this: the most important part of capturing VHS to digital (assuming I have the best VCR, best TBC, etc.) is making sure the luma levels are correct when capturing to a capture card (assuming a properly calibrated capture card, too), and that I have captured some of the UV component of the VHS to know what the color is for the video. Capturing the correct luma and some of the UV component, I've essentially sucked everything off the VHS tape. Is this correct? Or where am I going wrong?

    I kept reading everywhere: capture the right chroma levels. But the question I kept asking is, what does it even mean to capture the correct chroma levels?

    In summary:

    • Luma (Y component) - very important to get levels correct when capturing to your video capture card
    • Chroma (UV component) - as long as you have captured some of the color information, the correct Chroma and Hue adjustments can be performed in post production (i.e. after the analog VHS video has already been captured to digital)
    Correct? Or where am I not understanding things clearly?
  2. Originally Posted by JasonCA View Post
    However, it seems to me (please correct me if I am wrong) that the chroma sort of piggybacks off the luma levels. In other words, as long as I ensure my luma levels are correct, then my chroma levels would also be correct
    No. The Chroma signals are separate signals, even though they are carried on the same wire (in the case of composite video).

    Originally Posted by JasonCA View Post
    To me, chroma reflects the saturation of color. If there is no chroma, then there is no color (it's a black and white picture).
    Yes.

    Originally Posted by JasonCA View Post
    However, having captured 'some' chroma, I then have enough information encoded to digital, I would believe. This leads me to think that trying to capture correct chroma levels is irrelevant. Why? Because it seems I can simply correct the chroma levels in POST production (i.e. correcting the chroma in the digitally captured video). Yes? No?
    Yes. Unless they are too far off. You're best off getting them close to start with.

    Originally Posted by JasonCA View Post
    To me, which I could be wrong, correct Chroma and HUE are two separate things.
    No. Hue is a rotation of the U/V plane. See attached video. U is graphed on the right, V below, UV plane in the lower right.
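    jagabo's point can be sketched numerically (Python here rather than AviSynth, and the function name `adjust_uv` is made up for the sketch): saturation scales the (U,V) vector around the 8-bit neutral point 128, while hue rotates it, so both adjustments operate on the same U/V plane, much like AviSynth's Tweak():

```python
import math

def adjust_uv(u, v, sat=1.0, hue_deg=0.0):
    # Treat (U, V) as a vector around the 8-bit neutral point 128:
    # saturation scales its length, hue rotates its angle.
    du, dv = u - 128.0, v - 128.0
    th = math.radians(hue_deg)
    du, dv = (du * math.cos(th) - dv * math.sin(th),
              du * math.sin(th) + dv * math.cos(th))
    return 128.0 + sat * du, 128.0 + sat * dv

# Halving saturation pulls U toward neutral; a 180 degree hue
# rotation flips the color to its opposite.
u_half, _ = adjust_uv(200, 128, sat=0.5)        # u_half is 164.0
u_flip, _ = adjust_uv(200, 128, hue_deg=180.0)  # u_flip is ~56
```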

    Originally Posted by JasonCA View Post
    But, assuming I had the chroma levels correct prior to capturing video, wouldn't I also be able to correct the HUE in post production as well?
    Yes. Again, as long as they are not too far off.

    Originally Posted by JasonCA View Post
    what does that even mean to capture the correct chroma levels?
    Chroma should be between 16 and 240 (with 8 bit sampling). But even within that range, not all combinations of Y, U, and V are valid.

    Beyond simple chroma levels, with VHS you have all kinds of non-linearity problems. Luma can "influence" the chroma (i.e. something that was the same hue and saturation in bright and dark areas may appear with different hues and saturations on VHS tape). Etc.
    Attached Files
    Last edited by jagabo; 14th Dec 2013 at 19:34.
  3. My limited understanding:

    Originally Posted by JasonCA View Post
    Because, it seems I can simply correct the chroma levels in POST production (i.e. correcting the chroma in the digitally captured video)?
    If you think of saturation as chroma's equivalent to luma's contrast range, the same issues apply. If you compress (desaturate) the range too much during capture, then attempt to correct this later by expanding it, you will end up with banding.
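    A rough numeric sketch of that banding mechanism (Python, with made-up gain values): compress an 8-bit chroma ramp to half its range with integer rounding, expand it back, and roughly half of the distinct levels are gone, so the re-expanded signal jumps in steps of two:

```python
# An undersaturated capture squeezes chroma into half its range;
# rounding to 8-bit integers then merges neighbouring levels, and
# expanding afterwards cannot recover them.
orig = list(range(16, 241))                        # 225 legal chroma codes
half = [round(128 + (x - 128) * 0.5) for x in orig]
back = [round(128 + (x - 128) * 2.0) for x in half]

levels_before = len(set(orig))   # 225
levels_after = len(set(back))    # 113: the missing codes show up as banding
```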
    Conversely, if you capture with too much saturation you will lose saturation peaks in the original video.
  5. Banned
    Join Date: Oct 2004
    Location: New York, US
    -30-
    Last edited by sanlyn; 19th Mar 2014 at 09:47.
  6. Member
    Join Date: Dec 2005
    Location: United States
    Since receiving your input, jagabo, I have been playing around with the UV in VideoScope. I must admit, I had overlooked the chroma and didn't 'see' how it was shown through the scopes. Now I see that it can be done with VideoScope. Excellent!

    The video you linked in your post is excellent: very helpful and insightful.

    One thing I am wondering, though: how did you get the crosshairs in the UV vectorscope? Mine is blank (or I should say bare).

    With VideoScope, I wish the waveform had tick marks at the [16,235] or [0,255] positions for the luma. But that's another story.

    Originally Posted by jagabo View Post
    Chroma should be between 16 and 240 (with 8 bit sampling). But even within that range, not all combinations of Y, U, and V are valid.
    Could you please expand on this a bit more? Looking at the V component on the scope in the video you sent and running through it, you'll see all colors seem to move from top to bottom (roughly 16 to 240, I would believe). I'm not sure what you mean by not all combinations being valid. How so? I'll have to think more about this myself too.

    But this brings me to a question: how does one detect or determine whether the chroma combinations are valid? The VideoScope shows the UV components. On it, you can look at the combinations of "U", "V" or "YUV". But what would I be looking for to ensure that I've not under-saturated (which, as vaporeon800 pointed out, would cause banding when the range is expanded later), or over-saturated (which, as you pointed out, would lose the saturation peaks)?

    If I were to use VideoScope for this, what would I essentially be looking for as I went through, for example, 1 minute of video? It's not clear to me. I don't mind viewing and watching the scopes, but I am not sure what I am trying to look for, or even which scope would be best, if any at all. Or is there an AviSynth or VDub tool for this?

    For example:

    Final Cut Pro - Displaying Excess Luma and Chroma Levels in the Viewer and Canvas:

    http://documentation.apple.com/en/finalcutpro/usermanual/index.html#chapter=78%26secti...5%26tasks=true

    Looks like they have a tool that seems interesting. I've never used it. But what could I do similarly with AviSynth or VDub?

    [Attachment: FinalCutProZebraStripe.png]

    Originally Posted by sanlyn View Post
    What the O.P. is attempting to do during capture has been attempted by many over the years, including pro shops with equipment so expensive and sophisticated that it defies description. And they still end up working scene by scene, and often frame by frame. Wish you luck there, but the best anyone can do is to capture with luma and chroma values within reasonable limits to avoid even tougher work in intermediate processing -- processing that will be needed anyway. The color and luma response of VHS is a nonlinear, constantly fluctuating nightmare. Retail tapes are no exception; most of them were produced with autogain and autocolor that make a train wreck out of any pretensions to color "accuracy".
    Thank you for your response and input, sanlyn. For me, this is for home videos from one VHS home video camera. The reality is that I must at some point transfer and digitize my videos before the tapes become damaged (they are already getting old as it is). That's why I'm trying to better understand the chroma aspect of things and how best to capture it off the tape. So I'm trying to keep the practical side in perspective too.

    Correct me if I am wrong, but for home videos that were all captured with ONE identical video camera, it seems I could capture the entire tape to digital in one pass, and then do scene-to-scene correction in post at a later time. The goal is to get as much as possible off the tape, as correctly as possible (good luma levels and good chroma levels), to digital. In other words, I want to suck everything off the tape to digital as BEST as I can, then do the rest in digital whenever I want, without fear of my tapes aging. Once it's digitized, I can post-edit and fix things at my leisure.

    Retail movies, on the other hand, have different cuts, perhaps different cameras, commercials, etc. As you said, "most of them were produced with autogain and autocolor that make a train wreck out of any pretensions to color 'accuracy'". So I can see why scene-to-scene correction may be better in those cases. Going from scene to scene on a retail tape, maybe the hue (for example) is changing and throwing all the colors off from scene to scene, which is why you do scene-to-scene correction on true video production captures. Or am I wayyyyyyy off?

    For home movies, should I be capturing off the tape scene by scene? Stop the tape, re-adjust my chroma, hue and luma levels in a proc amp, and then capture that scene? Remember, for me, all the VHS home movies were made with one video camera. I can't imagine the hue was rotating or otherwise shifting every time I used my camera in the past. I would think that once I got the colors correct, they would probably be pretty good from scene to scene. But I could be wrong. As long as we are all learning, I don't mind!

    In summary: how does one detect invalid or out-of-range chroma levels? What would be a good scope for this? Or maybe there just isn't any good way? If there is a good scope for it, what would I be looking for in it? I'm trying to figure out how I should be looking at this.

    Again, I really appreciate everyone's input.
  7. Originally Posted by JasonCA View Post
    One thing I am wondering, though: how did you get the crosshairs in the UV vectorscope? Mine is blank (or I should say bare).
    It's a PNG image that I overlaid onto the VideoScope() image:

    [Attachment: vectorscope overlay.png]

    The script:

    Code:
    ovr=ImageSource("vectorscope overlay.png").ConvertToYV12()
    Colorbars(width=720,height=480, pixel_type="YV12")
    Trim(0,360) # we will rotate hue 360 degrees through 360 frames
    Crop(0,0,-0,320) # just the color bars
    Animate(0, 360, "Tweak", 0.0,1.0,0,1.0,false,false,  360.0,1.0,0,1.0,false,false)
    BilinearResize(80,360).BilinearResize(720,360) # blur the bars so the vector scope will be more than just seven dots
    ConverttoYUY2() # for VideoScope()
    VideoScope("both", true, "U", "V", "UV")
    Overlay(last, ovr, width-256, height-256, ovr) # overlay crosshairs
    Originally Posted by JasonCA View Post
    Originally Posted by jagabo View Post
    Chroma should be between 16 and 240 (with 8 bit sampling). But even within that range, not all combinations of Y, U, and V are valid.
    Could you please expand on this a bit more?
    Some combinations of YUV, even when individual components are all within their valid ranges, result in illegal RGB colors (outside the 0-255 range).
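    A small Python check of this claim (the normalisation follows the Rec.601 ValidRGB() script posted later in the thread: Y maps to [0,1] and U, V to [-1,+1]; `rgb_of_yuv` is a name made up for the sketch):

```python
# Rec.601 YUV -> RGB. Legal RGB lies inside [0, 1] after this
# normalisation; some in-range YUV triples land outside it.
Kr, Kg, Kb = 0.299, 0.587, 0.114

def rgb_of_yuv(Y, U, V):
    y = (Y - 16) / 219.0          # 16..235 -> 0..1
    u = (U - 16) / 112.0 - 1.0    # 16..240 -> -1..+1
    v = (V - 16) / 112.0 - 1.0
    r = y + v * (1 - Kr)
    g = y - u * (1 - Kb) * Kb / Kg - v * (1 - Kr) * Kr / Kg
    b = y + u * (1 - Kb)
    return r, g, b

legal = rgb_of_yuv(126, 128, 128)    # mid grey: all channels in range
illegal = rgb_of_yuv(180, 128, 240)  # Y, U, V all in range, but R > 1.0
```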

    Originally Posted by JasonCA View Post
    how does one detect or determine whether the chroma combinations are valid?
    You can convert the image to RGB and back to YUV, then subtract from the original. That will leave you with a mostly grey image except where there are illegal combinations of YUV. You can increase the contrast to better visualize the errors. See this post:

    http://forum.videohelp.com/threads/354292-Preparing-this-Rec-601-YV12-clip-for-Rec-709...=1#post2226483

    That's similar to what FCP does for illegal chroma detection, though not as pretty. You could come up with a more complex script to come closer to what FCP does.
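    For what it's worth, the round-trip idea can also be sketched outside AviSynth. This Python fragment (assumed Rec.601 constants, with a clamp standing in for 8-bit RGB clipping) shows that only values that clip in RGB come back changed, which is exactly what the Subtract() reveals:

```python
Kr, Kg, Kb = 0.299, 0.587, 0.114

def to_rgb(y, u, v):
    # y in [0,1], u and v in [-1,+1], Rec.601
    r = y + v * (1 - Kr)
    g = y - u * (1 - Kb) * Kb / Kg - v * (1 - Kr) * Kr / Kg
    b = y + u * (1 - Kb)
    return r, g, b

def to_yuv(r, g, b):
    y = Kr * r + Kg * g + Kb * b
    return y, (b - y) / (1 - Kb), (r - y) / (1 - Kr)

def roundtrip_error(y, u, v):
    clamp = lambda x: min(1.0, max(0.0, x))   # what RGB storage does
    r, g, b = (clamp(c) for c in to_rgb(y, u, v))
    y2, u2, v2 = to_yuv(r, g, b)
    return abs(y - y2) + abs(u - u2) + abs(v - v2)

err_legal = roundtrip_error(0.5, 0.0, 0.2)     # stays inside [0,1]: ~0
err_illegal = roundtrip_error(0.75, 0.0, 1.0)  # red clips: error is large
```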
  8. Here is another interesting visualization that shows the valid YUV space: http://forum.doom9.org/showthread.php?t=154731
  9. Member
    Join Date: Dec 2005
    Location: United States
    Very interesting thread, jagabo! Thank you, thank you, for giving me direction on this!

    It looks like fvisagie was asking a similar question to the one I am asking:

    http://forum.videohelp.com/threads/354292-Preparing-this-Rec-601-YV12-clip-for-Rec-709...=1#post2226313


    Test #1:

    Code:
    ColorBars()
    ConvertToYUY2()
    Subtract(last, ConvertToRGB(last, interlaced=true, matrix="rec601").ConvertToYUY2(interlaced=true, matrix="rec601"))
    Levels(112,1,144,0,255)
    [Attachment: UsingYUY2.png]

    Test #2:

    Code:
    ColorBars()
    ConvertToYV12()
    Subtract(last, ConvertToRGB(last, interlaced=true, matrix="rec601").ConvertToYV12(interlaced=true, matrix="rec601"))
    Levels(112,1,144,0,255)
    [Attachment: UsingYV12.png]

    Based on post #12:
    http://forum.videohelp.com/threads/354292-Preparing-this-Rec-601-YV12-clip-for-Rec-709...=1#post2226324

    Test #3:

    Code:
    ColorBars()
    ConvertToYV12()
    Subtract(last, ConvertToRGB(last, interlaced=true, matrix="rec601").ConvertToYV12(interlaced=true, matrix="PC.709"))
    Levels(112,1,144,0,255)
    [Attachment: BasedOnPost#12.png]

    This one doesn't look helpful. It seems like it's the negative of the video. I probably didn't understand the purpose in post #12.

    So, based on my 3 tests above, is the bleeding through of the colors between the bars what you refer to as 'illegal' combinations? Obviously, it's grey in the color areas that are flat (i.e. not transitioning between colors).


    In post #19 found here:

    http://forum.videohelp.com/threads/354292-Preparing-this-Rec-601-YV12-clip-for-Rec-709...=1#post2226467

    poisondeathray writes:

    Originally Posted by poisondeathray View Post
    With mixed content you usually convert to whatever the final format is going to be. e.g. if it's an HD timeline intended for HD export, you convert the SD material to Rec709 using colormatrix

    Everything else still applies - you need to either "legalize" values in YCbCr before converting to RGB, or use a full range matrix before converting to RGB, otherwise the "Rec" matrices will clip your overbrights
    This is interesting, because checking for illegal chroma sort of alludes to the destination colorspace. For SD this is Rec601. For HD, it's Rec709.

    This means what to look for in the results of the AviSynth chroma test script depends on your destination colorspace? For SD (Rec601), the conversion to RGB gives my Test #1 above. For HD (Rec709), the conversion to RGB gives the results of Test #2 above.

    A big 'but' here! I captured from VHS to my capture card and stored it in YUV instead of RGB (which seems to be the typical storage workflow for VHS tapes). So although I am looking for illegal chroma ranges in the conversion to RGB, does that necessarily imply that my chroma ranges are illegal in the YUV space stored in my captured video file? Or does it?

    I'm more concerned about having captured VHS to a digital file with legal UV chroma values. Instead, it sort of seems like I'm verifying the legality of what's captured based on a conversion to an RGB colorspace, which could vary.

    I guess what I am saying is: does this AviSynth test for illegal chroma levels help confirm that I probably have to go back and re-capture the VHS tape because my chroma levels were either too high or too low? I'm more concerned about the data lost in the conversion from VHS -> capture card -> stored YUV digitized file. Why? Because that means in the future I would go, "darn, I captured an illegal range of chroma, now I have to re-set up all the hardware and capture from the VHS tape again". Obviously nothing is perfect, but I want to get as close to understanding as much as I can, and to doing all my captures as correctly as possible, too.
  10. Originally Posted by JasonCA View Post
    So, based on my 3 tests above, is the bleeding through of the colors between the bars what you refer to as 'illegal' combinations?
    No; you're just showing the effects of 4:2:2 and 4:2:0 chroma subsampling on hard-edged color transitions.
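    A toy Python model of that subsampling effect (the numbers are made up, and real 4:2:0 filtering is more elaborate than this nearest-neighbour version): averaging chroma across a hard edge and then upsampling leaves differences exactly at the edge, which is what the Subtract() tests were picking up:

```python
# One chroma sample per pair of pixels: a hard edge that falls inside
# a pair gets averaged, so the subsample/upsample round trip no longer
# matches the original row.
row = [16] * 7 + [240] * 9                                # hard chroma edge
sub = [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]
up = [s for s in sub for _ in (0, 1)]                     # nearest-neighbour upsample
err = [abs(a - b) for a, b in zip(row, up)]               # nonzero only at the edge
```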

    And I'm not sure why you're leading yourself astray with all of this Rec709 stuff.
  11. The way the technique works is by converting the video to RGB, back to YUV, then seeing what pixels changed relative to the original. So anything that causes changes will show up in the subtracted image.

    ConvertToRGB(matrix="rec601").ConvertToYV12(matrix="PC.709") causes all the colors to change because of the different matrices. So it's not a fair test of whether your source had illegal colors. The other colorbar tests showed changes after the twin conversion because of errors with chroma subsampling. Going from YUY2 or YV12 to RGB and back will always give you small errors like that. You can verify this by using YUV 4:4:4 chroma subsampling:


    Code:
    ColorBars()
    ConvertToYV24(interlaced=true, matrix="rec601") # YUV 4:4:4 to avoid chroma subsampling errors
    Subtract(last, ConvertToRGB(last, interlaced=true, matrix="rec601").ConvertToYV24(interlaced=true, matrix="rec601"))
    Levels(112,1,144,0,255)
    Real world video generally does not have such sharp edges so it will exhibit the problem less frequently.

    Lastly, there will always be small errors with RGB/YUV conversions because, with 8 bit samples, not every RGB value has a unique YUV value, and vice versa.
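    The counting argument behind that last point can be written out (Python, simple arithmetic): 8-bit full-range RGB has more distinct triples than 8-bit studio-range YUV, so the mapping from RGB into studio-range YUV cannot be one-to-one; some RGB values must share a YUV value:

```python
# Pigeonhole argument: more RGB codes than studio-range YUV codes.
rgb_triples = 256 ** 3           # full-range RGB: 16,777,216 triples
yuv_triples = 220 * 225 * 225    # Y in 16-235, U and V in 16-240
```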

    As long as your chroma doesn't get clipped during capture, or get badly quantized by using too small a range (almost greyscale), you can reasonably fix it in post.
  12. The Doom9 thread I linked to includes a script specifically for checking "bad" values in any given YV12 source. But even Blu-ray Discs are mastered with "invalid" YUV (and I'm not talking about crappy discs from small companies).
  13. Originally Posted by vaporeon800 View Post
    The Doom9 thread I linked to includes a script specifically for checking "bad" values in any given YV12 source.
    Originally Posted by gavino
    Script loading is very slow because of the mt_lutxyz function (which has to create a lookup table with 2^24 entries).
    He wasn't kidding. It took a minute or so on my computer!
  14. Member 2Bdecided's Avatar
    Join Date: Nov 2007
    Location: United Kingdom
    As long as you haven't clipped (=too high saturation) or quantised to oblivion (=too low saturation) U or V during capture, you can fix it however you like in the digital domain.

    Unless you have a very small amount of home movies, or a very large amount of free time, or you are looking for a new time-consuming hobby, or someone is paying you to do it, it's unlikely that you will find the time or patience to apply individual adjustments to every single scene.


    Unless you are looking to wear your tapes out, it's unwise to keep playing and re-winding sections of your tapes until you find the "perfect" levels for capture. Just capture the whole lot at a level that doesn't clip. The default settings for hue and saturation are normally fine.




    If you fix the levels digitally before denoising, you won't see banding. VHS is almost always too noisy to get banding, even if you only use half of the usable capture range. Whereas if you aggressively denoise and then try to fix the levels on a clip with very wrong levels, you will often get banding.


    Cheers,
    David.
  15. Member
    Join Date: Dec 2005
    Location: United States
    Originally Posted by vaporeon800 View Post
    The Doom9 thread I linked to includes a script specifically for checking "bad" values in any given YV12 source. But even Blu-ray Discs are mastered with "invalid" YUV (and I'm not talking about crappy discs from small companies).

    Thanks for pointing that out!

    Here's the code that does this, well, sort of:

    Code:
    # Demonstration of valid YUV values, showing (for both TV ranges and PC ranges):
    # - ranges of U and V for each possible value of Y;
    # - ranges of Y and V for each possible value of U;
    # - ranges of Y and U for each possible value of V.
    # Requires MaskTools v2
    
    # Support function.
    # Given MaskTools expressions for Y, U, and V,
    # returns expression which yields 255 for valid RGB and 0 for invalid.
    function ValidRGB(string y, string u, string v, bool "pcRange") {
      pcRange = Default(pcRange, false)
    
      # Normalise Y to [0,1] and U, V to [-1,+1]:
      y = pcRange ? y+" 255 /" : y+" 16 - 219 /"
      u = pcRange ? u+" 128 / 1 -" : u+" 16 - 112 / 1 -"
      v = pcRange ? v+" 128 / 1 -" : v+" 16 - 112 / 1 -"
    
      # Rec 601 coefficients:
      Kr = " 0.299"  Kg = " 0.587"  Kb = " 0.114"
      Kr1 = " 1"+Kr+" -"  Kb1 = " 1"+Kb+" -"
      # From http://avisynth.org/mediawiki/Color_conversions:
      # R = Y + V*(1-Kr)
      # G = Y - U*(1-Kb)*Kb/Kg - V*(1-Kr)*Kr/Kg
      # B = Y + U*(1-Kb)
      r = y + v + Kr1 + " * +"
      g = y + u + Kb1 + Kb + Kg + " / * * -" + v + Kr1 + Kr + Kg + " / * * -"
      b = y + u + Kb1 + " * +"
      lwb = " 0 >="
      upb = " 1 <="
      and = " &"
      good = r+lwb+r+upb+and+g+lwb+and+g+upb+and+b+lwb+and+b+upb+and
      return good+" 255 0 ?"
    }
    
    
    # Returns a mask with 255 where YV12 clip has valid RGB and 0 elsewhere
    function RGBMask(clip c, bool "pcRange") {
      c2 = c.BilinearResize(2*c.width, 2*c.height)
      return c.mt_lutxyz(c2.UToY(), c2.VToY(), ValidRGB(" x", " y", " z", pcRange))
    }
    
    # Shows the areas of a YV12 clip which contain 'invalid RGB';
    # good pixels are replaced by 'color', default black.
    function ShowBadRGB(clip c, bool "pcRange", int "color") {
      mask = c.RGBMask(pcRange)
      c.mt_merge(BlankClip(c, color=color), mask, U=3, V=3, luma=true)
    }
    And here's my script to show 'BAD' RGB:

    Code:
    ColorBars() # 75% colorbars from 16 to 235
    ConvertToYV12()
    ShowBadRGB(pcRange=false,color=256)
    The result is this:

    Name:  BadRGB0000.png
Views: 120
Size:  2.1 KB

    This is sort of the opposite of what I want. Here, the BAD colors show through, and anything GOOD is blacked out. The function is called ShowBadRGB().

    What I am looking for is showing all GOOD RGB values, and when there is RGB outside the right levels, for THAT to be blacked out. In other words, I would have expected everything in black to be shown in the image above, and everything shown in color (the blue patch on the lower left) to be blacked out. To me, that is better behavior for the call ShowBadRGB(). As it stands, ShowBadRGB() acts like what I would call HideGoodRGB().

    Actually, what would be nice is if I could even choose the color used to represent what is bad. That way, everything within valid ranges simply shows through as normal, and anything BAD gets colored with the color I chose to represent the bad values.

    Something like:

    Code:
    ShowBadRGB(color=$00FF00) # show all RGB colors that are out of range in GREEN
    
    #or
    
    ShowBadRGB(color=$000000) # show all RGB colors that are out of range in BLACK
    
    #or
    
    ShowBadRGB(color=$FFFFFF) # show all RGB colors that are out of range in WHITE
    Any way to do that? If I had that, then I'd pretty much be doing what Final Cut Pro does with the Zebra pattern thing.
  16. Member
    Join Date: Dec 2005
    Location: United States
    Originally Posted by jagabo View Post
    Originally Posted by vaporeon800 View Post
    The Doom9 thread I linked to includes a script specifically for checking "bad" values in any given YV12 source.
    Originally Posted by gavino
    Script loading is very slow because of the mt_lutxyz function (which has to create a lookup table with 2^24 entries).
    He wasn't kidding. It took a minute or so on my computer!
    Yes, I noticed the same thing. ShowBadRGB() seems to take about 2 minutes before it starts to give feedback to the user! So yes, one has to be patient when using it.
  17. Member
    Join Date: Dec 2005
    Location: United States
    Originally Posted by 2Bdecided View Post
    As long as you haven't clipped (=too high saturation) or quantised to oblivion (=too low saturation) U or V during capture, you can fix it however you like in the digital domain.
    And that's what I am still working to understand: what tools I can use to know whether U or V (or the combination of them) is clipping or looks a bit under-saturated. If we can modify this ShowBadRGB() to mark out-of-range U or V (mapped to RGB) with a specific color (as I mentioned in a prior post), then I would essentially have what Final Cut Pro does with the Zebra pattern thing.

    Originally Posted by 2Bdecided View Post
    Unless you have a very small amount of home movies, or a very large amount of free time, or you are looking for a new time-consuming hobby, or someone is paying you to do it, it's unlikely that you will find the time or patience to apply individual adjustments to every single scene.
    I have one VHS camera that was used to record all the home movies over many, many years. So I would think that if I got the setup correct (i.e. capturing correct LUMA and ensuring that my chroma is not, as you put it, "clipped (=too high saturation)" or "quantised to oblivion (=too low saturation)"), then I should be able to capture all the tapes at roughly the same levels. The colors should for the most part have been written to the VHS tapes the same way, since it was all one VHS camera for all the tapes. That's sort of my thinking on it. I never got any further feedback from sanlyn saying otherwise since my last response.

    And I am trying to stay practical too. I would rather get all the tapes copied and transferred (as best as possible) than have them rot away while I assume I need to do heavy scene-to-scene cleanup up front, when (because I used one camera for all my VHS tapes) I could simply do scene-to-scene correction in post, years down the road or at my leisure. Once I have all the tapes transferred, I can take all the time I like to play with the colors and whatnot.

    Originally Posted by 2Bdecided View Post
    Unless you are looking to wear your tapes out, it's unwise to keep playing and re-winding sections of your tapes until you find the "perfect" levels for capture. Just capture the whole lot at a level that doesn't clip. The default settings for hue and saturation are normally fine.
    Yes, that's what I am thinking! I will also hold on to the VHS tapes as long as possible. But at least having them transferred NOW ensures I have a copy. If I did make a mistake, I would still have the possibility of going back to the VHS tapes in the future to re-capture them (if the tapes are still in good condition). So right now, if I am going to spend the time copying the LOT of them, I want to make sure I have everything set up as correctly as possible, or at least be aware enough to know where I am losing data in the transfer process from VHS analog to digital capture.

    Originally Posted by 2Bdecided View Post
    If you fix the levels digitally before denoising, you won't see banding. VHS is almost always too noisy to get banding, even if you only use half of the usably capture range. Whereas if you aggressively denoise, and then try and fix the levels on a clip with very wrong levels, you will often get banding.
    So you are saying that if I correct the levels digitally before running the digital video through a denoising filter, I won't see banding? Whereas if I denoise digitally first and then try to correct the levels, I would get banding? This concern, though, seems to be an issue for post-capture. However, it also sort of implies that I want to get the right levels during capture. And when you say levels, I am thinking of ensuring I have the right luma levels and chroma levels prior to capture. And that's what I am focused on right now.
  18. Banned
    Join Date: Oct 2004
    Location: New York, US
    -30-
    Last edited by sanlyn; 19th Mar 2014 at 09:48.
  19. Originally Posted by JasonCA View Post
    what tools I can use to know if U or V (or the combination of them) is clipping or looks to be a bit under-saturated.
    VideoScope("both", true, "U", "V", "UV")

    U and V have to be very, very undersaturated before it becomes a problem with VHS, to the point that the picture will look nearly greyscale.
  20. Originally Posted by JasonCA View Post
    This is sort of opposite of what I want. Here, the BAD colors are shown through, and anything that is GOOD is blacked out... what would be nice, is if I could even choose the color to be shown to represent what is bad.
    Something like:

    Code:
    function HighlightBadRGB(clip vid, int "color")
    {
      color = default(color, $ff0000)
    
      badcolor = BlankClip(vid, color=color)
      Subtract(ConvertToYV24(vid), ConvertToYV24(vid).ConvertToRGB().ConvertToYV24())
      Overlay(ColorYUV(off_y=-126), Invert().ColorYUV(off_y=-130), mode="add") # Y = abs(Y-126)
      # maybe add a threshold function here
      ColorYUV(gain_y=65000) # all non-zero values --> 255
      Overlay(vid, badcolor, 0, 0, last)
    }
    That's not perfect but should be adequate for most situations.
  21. Banned
    Join Date: Oct 2004
    Location: New York, US
    Search Comp PM
    -30-
    Last edited by sanlyn; 19th Mar 2014 at 09:48.
  22. These "invalid rgb" scripts only tell you if your YUV image contains pixels which will convert to invalid RGB colors -- ie, colors which cannot be represented with RGB values between 0 and 255. They aren't meant to tell you how to adjust the video to get "correct" colors in an image. And, as you know, even if the YUV contains invalid combinations, you can still correct them -- unless peaks are crushed at 0 and 255.
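    To make "converts to invalid RGB" concrete, here is a minimal sketch in Python (not part of the thread's AviSynth workflow; the coefficients are the standard Rec.601 studio-range conversion, and `is_valid_rgb` is a hypothetical helper name):

```python
# Rec.601 studio-range YUV -> full-range RGB, left unclamped so we can
# see when a pixel would fall outside the representable 0-255 RGB cube.
def yuv_to_rgb_601(y, u, v):
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    return r, g, b

def is_valid_rgb(y, u, v):
    # "Valid" = every channel lands inside [0, 255] after conversion.
    return all(0.0 <= c <= 255.0 for c in yuv_to_rgb_601(y, u, v))

# Mid grey converts cleanly; an extreme chroma excursion does not:
# is_valid_rgb(128, 128, 128) -> True
# is_valid_rgb(235, 16, 16)   -> False (green overshoots 255)
```

    Note that a pixel can fail this test and still be correctable while you stay in YUV, which is the point above: only crushing at the hard 0/255 boundaries destroys information.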
  23. Banned
    Join Date: Oct 2004
    Location: New York, US
    Search Comp PM
    -30-
    Last edited by sanlyn; 19th Mar 2014 at 09:48.
  24. Member
    Join Date: Dec 2005
    Location: United States
    Search Comp PM
    Originally Posted by jagabo View Post
    Originally Posted by JasonCA View Post
    This is sort of opposite of what I want. Here, the BAD colors are shown through, and anything that is GOOD is blacked out... what would be nice, is if I could even choose the color to be shown to represent what is bad.
    Something like:

    Code:
    function HighlightBadRGB(clip vid, int "color")
    {
      color = default(color, $ff0000)
    
      badcolor = BlankClip(vid, color=color)
      Subtract(ConvertToYV24(vid), ConvertToYV24(vid).ConvertToRGB().ConvertToYV24())
      Overlay(ColorYUV(off_y=-126), Invert().ColorYUV(off_y=-130), mode="add") # Y = abs(Y-126)
      # maybe add a threshold function here
      ColorYUV(gain_y=65000) # all non-zero values --> 255
      Overlay(vid, badcolor, 0, 0, last)
    }
    That's not perfect but should be adequate for most situations.
    Excellent! Thanks so much jagabo! You've been a great help! It's exactly what I am looking for. However, I made the following slight change. For whatever reason, my ConvertToYV24() doesn't seem to work; AviSynth reports an error not knowing what ConvertToYV24() is. But to be clear about what I'm using, here's the call:
    Code:
    function HighlightBadRGB(clip vid, int "color")
    {
      color = default(color, $ff0000)
    
      badcolor = BlankClip(vid, color=color)
      Subtract(ConvertToYV12(vid), ConvertToYV12(vid).ConvertToRGB().ConvertToYV12())
      Overlay(ColorYUV(off_y=-126), Invert().ColorYUV(off_y=-130), mode="add") # Y = abs(Y-126)
      # maybe add a threshold function here
      ColorYUV(gain_y=65000) # all non-zero values --> 255
      Overlay(vid, badcolor, 0, 0, last)
    }
    Now the question is whether I understand its use and what it's doing.

    Here's NTSC ColorBars with 'normal' saturation and utilizing your HighlightBadRGB() function:

    Code:
    ColorBars() #75% colorbars [16->235]
    ConvertToYV12()
    Crop(last,0,0,640,300)
    BilinearResize(640,480)
    ConvertToYV12()
    Tweak(sat=1.0) #normal saturation (unchanged)
    finalClip = ConvertToYUY2()
    videoScopeClip = VideoScope(finalClip,"both", true, "U", "V", "UV")
    highlightedBadRGBClip = HighlightBadRGB(finalClip, color=$FFFFFF) #we will use WHITE to represent BAD RGB
    Overlay(videoScopeClip, highlightedBadRGBClip)
    This above script results in the following:

    [Image: NTSCColorBarsNormalSaturationU_v_VPlot0000.png]

    Do you see any white? I don't. It looks pretty good! So, to me, the Chroma levels are within limits.

    Now, I'm going to bump up the saturation very slightly.

    Here's the change to the script from above:

    Code:
    ColorBars() #75% colorbars [16->235]
    ConvertToYV12()
    Crop(last,0,0,640,300)
    BilinearResize(640,480)
    ConvertToYV12()
    Tweak(sat=1.205) #saturation bumped up slightly
    finalClip = ConvertToYUY2()
    videoScopeClip = VideoScope(finalClip,"both", true, "U", "V", "UV")
    highlightedBadRGBClip = HighlightBadRGB(finalClip, color=$FFFFFF) #we will use WHITE to represent BAD RGB
    Overlay(videoScopeClip, highlightedBadRGBClip)
    Here's the result of the above script:

    [Image: NTSCColorBarsRaisedSaturationU_v_VPlot0000.png]

    You can see the Cyan and the Green are clipped since WHITE represents the Bad RGB.

    If you open up both images in your browser using tabs, you can flip between the two.

    Notice the UV scope (aka VectorScope) in the lower right corner? You'll see that when the saturation is raised, all the colors expand outward on the VectorScope.

    Now look at the U and V scopes. Just based on the U and V scopes (not the UV vectorscope), would you have been able to determine that the Chroma levels are within proper limits? I would say no. Which is why this HighlightBadRGB(), at least to me, seems helpful.

    When you look at the Luma waveform, it's easy to detect when the Luma levels are illegal (being clipped). For Rec 601, if they are lower than 16 or higher than 235, then your Luma is in illegal (clipped) territory. On the other hand, you can't just look at the U and V scopes themselves and determine that your Chroma levels are illegal. So it goes back to my original question: how do you detect when the Chroma levels are invalid? To me, it would seem that this HighlightBadRGB() helps to do exactly that. Correct? Now I am also seeing that the VectorScope helps to show when your Chroma is illegal.
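    One simple numeric check for the hard-clipping case (as opposed to merely "illegal" values) is to look for a pile-up of samples exactly at the extremes. A rough Python sketch -- `clip_fraction` is a hypothetical helper, not an AviSynth function:

```python
def clip_fraction(samples, lo=0, hi=255):
    # Fraction of samples pinned exactly at the hard 8-bit boundaries.
    # A spike here means the capture hardware crushed the signal and the
    # information is gone; values merely outside 16-235 are still recoverable.
    pinned = sum(1 for s in samples if s <= lo or s >= hi)
    return pinned / len(samples)

# clip_fraction([16, 128, 235])   -> 0.0  (legal range, nothing crushed)
# clip_fraction([0, 0, 128, 255]) -> 0.75 (three of four samples pinned)
```

    The same check applies to the Y, U, and V planes alike: a boundary spike on any of them means re-capture; anything short of that can be adjusted later.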

    Where I'm still not clear is whether the Chroma levels are valid for different color spaces. In other words, when HighlightBadRGB() shows BAD RGB colors (in this case I was using WHITE to represent Chroma clipping), are the UVs failing to map into RGB space because of the particular RGB space being used? My thoughts on this:

    VHS should be captured under the setup of Rec 601. The Luma levels must be from 16 to 235. Because of this, my VectorScope must therefore also be set up for 75% saturation. If, on the VectorScope, any colors (or combination of them) exceed the VectorScope safe area, then my Chroma levels are too high. I think that's what it comes down to. So when people say "Check to see if your Chroma is within limits", the answer is that I could use the VectorScope to see if my Chroma levels are clipping.

    Alternatively, I now can also use this HighlightBadRGB() to help highlight where in the video my Chroma levels are clipping. However, using the vectorscope, it's hard to tell when a FRAME is clipping Chroma. That's where HighlightBadRGB() helps!

    So how does this apply to my PRE VHS capture?

    For VHS, the levels for capture should be mapped from 16 to 235. In other words, on capture white is 235 and black is 16. Any extreme white highlights on the VHS capture can fall between 236 and 255 and that should be fine. And any extreme darks (darker than black), can fall from 0 to 15. In this way, I've essentially correctly captured my Luma levels for capture....right?

    For VHS and capturing correct Chroma levels, I need to make sure my Chroma stays within 75% saturation on the VectorScope. This means that if I see my Chroma has exceeded the safe area of the 75% saturation on the VectorScope, I need to re-capture the VHS video with LOWER saturation (by the use of a hardware proc amp, etc). Ideally, as people would emphasize, it's better to get this right in hardware on capture. Right? Alternatively, I could now also use this HighlightBadRGB() to point out where Chroma levels are bad. If, when looking at the video after I captured it and using HighlightBadRGB() in conjunction with the VectorScope, I see any point where the video is highlighted with bad RGB values, then I would also need to go back and re-capture my VHS tape.

    In response to sanlyn:

    Let me be clear, my goal here is ****NOT**** to obtain a PERFECT picture on capture. This was NEVER my assumption from the beginning, if you go back and re-read my earlier posts. My goal here is to (1) ensure my Luma levels on capture are as CORRECT as possible and (2) ensure my Chroma levels are within safe, proper limits too. Before I started this thread I wondered: what does it mean to have proper Chroma levels? How do I even measure that I have good Chroma levels? With what scope, if one exists, do I measure good Chroma levels? It was not clear to me, even after searching through the forums, what that all meant. But now I see the VectorScope and HighlightBadRGB() are part of my answer.

    However, as 2BDecided pointed out about the right level of Chroma for capture:

    Originally Posted by 2Bdecided View Post
    "As long as you haven't clipped (=too high saturation) or quantised to oblivion (=too low saturation) U or V during capture, you can fix it however you like in the digital domain."
    This is my goal! It's more like: what can I do right now, and what can I save for later in the digital domain? It's not to have a perfect picture on capture, but to capture the video with levels as correct as possible. My goal is to transfer as much info from the VHS tape, with correct levels, to the capture card and save it. Then, in POST capture, I can at my leisure and in my own time do as much scene-to-scene correction as I like in the digital domain. If my luma, chroma, and hue were pretty good when I captured the VHS tapes, then it should only require small tweaks on a scene-to-scene basis to get them OK. At least that still seems to be my understanding; unless anyone can help make it clear where I've gone really wrong, I'll stick with that as my explanation for now.

    Do you see? I am trying to separate two things here (while realizing they are still interrelated): pre-capture and post-capture. They are separate steps in the whole process. I am right now focused ONLY on pre-capture. As 2Bdecided also said:

    Originally Posted by 2Bdecided View Post
    "Unless you are looking to wear your tapes out, it's unwise to keep playing and re-winding sections of your tapes until you find the "perfect" levels for capture. Just capture the whole lot at a level that doesn't clip. The default settings for hue and saturation are normally fine."
    So that's what I am doing. I plan on capturing my VHS tapes NOW, and then using the scopes and even HighlightBadRGB() to see if my levels are out of range. If they are, what does that tell me? It tells me it's BETTER to re-capture the VHS tape 'again' NOW with better levels than years down the road. Yes, I could attempt to fix them months or years later in the digital domain, but that would be trying to band-aid the real problem. So you see, I am using a script like HighlightBadRGB(), as well as all the other scopes and tools, to help indicate WHEN I may need to re-capture a VHS tape NOW as opposed to later.

    As 2BDecided also said:

    Originally Posted by 2Bdecided View Post
    Unless you have a very small amount of home movies, or a very large amount of free time, or you are looking for a new time-consuming hobby, or someone is paying you to do it, it's unlikely that you will find the time or patience to apply individual adjustments to every single scene.
    To be practical, I'd rather have a digital copy of my VHS tapes NOW than years later. So the question I am asking myself and others is: what can I do now while being practical, and what can I do later? Right now, I want digital copies of my VHS tapes (just to be on the safe side, since the tapes are getting old). So I'd rather spend the time and capture them all at once with levels as proper as I can manage (utilizing the tools as I said), and then years down the road, as I review my captures, do scene-to-scene correction at my leisure. Furthermore, if I get my levels correct, then there probably isn't much left to pull off the tapes thereafter either. Despite this, I'll keep the VHS tapes just to be sure (as long as nothing happens to them), but most likely the remaining issues can be fixed in the digital domain after they are all captured.

    Make sense? Again, just trying to help you understand what I am trying to accomplish here.
    Last edited by JasonCA; 20th Dec 2013 at 00:08.
  25. Member
    Join Date: Dec 2005
    Location: United States
    Search Comp PM
    Originally Posted by sanlyn View Post
    Your conclusions aren't valid because they're based on unsupported premises. The premises are that your exposure calculations were correct for each scene, that your camera didn't use autogain or autocolor over which you have no control, that your playback hardware will play and transmit the originally photographed material exactly as it was originally recorded, and that VHS recording and playback are consistent and predictable from moment to moment.
    Perhaps, but is that a problem for pre-capture or post-capture? You raise a few things I'd like to note:

    (1) "The premises are that your exposure calculations were correct for each scene". Where did I ever say this? Making sure my levels are correct to my capture card from VHS is separate from the expectation that my scene is correct. I don't expect the scenes to be correct. But I expect the VHS signals that contain the scenes to be correctly captured. So, I believe the scene-to-scene correction can be done in the digital domain. I could be wrong! But that's why we are discussing things.

    (2) "that your camera didn't use autogain or autocolor over which you have no control". I don't disagree that my colors and gain can be off due to how my camera decided to capture the scene or adjust the gain/colors. Most likely, autogain and autocolor did indeed alter the scene that was captured to VHS. I won't disagree with this. But the point is, how does that affect the VHS signal to the capture card? The incorrect autogain and colors would have been recorded to tape that way, and reading them back from the VHS tape, they would STILL be that way REGARDLESS. In such a case, what is the hardware (like a proc amp) going to do that I couldn't do in software?

    (3) "that your playback hardware will play and transmit the originally photographed material exactly as it was originally recorded". In my opinion, and again I could be wrong, I am using only ONE camera. Most likely, the camera is writing the signals out to VHS in the same way from tape to tape. The way I think about it, since it's one camera that captured all the VHS tapes, the signals are all being stored in pretty much the same way. So, in some ways, I do believe that if I set up the hardware to capture the levels for one tape correctly, then for the most part I should be able to capture all the other tapes at the same levels. Might it vary a bit from tape to tape? Perhaps slightly. But in terms of playing back the signals on the VHS tape to my capture card, I would imagine it would be pretty much the same for all tapes, since they were recorded by the same VHS camcorder.

    (4) "VHS recording and playback are consistent and predictable from moment to moment". For me it is the same VHS camcorder. Using the same camcorder, if I eject the tape and put another tape into the camcorder, are the signals somehow magically written drastically differently to that VHS tape using the exact same camcorder? I would think not. Now, if I used 5 different VHS camcorders and wrote to the same VHS tape, then yes, I would agree. However, using the same camcorder, the VHS tapes are for the most part written in the same manner. It is the same head, same transport, same camera logic. Someone can clue me in if they think they know better and my thinking on this is WAY off.

    There are two separate chroma issues. The first is the way the tape is played back by the VCR: the VCR itself can certainly throw the hue off or boost the luma, or otherwise vary from VCR to VCR for the same tape. The second is the chroma issues with how the camcorder originally rendered color and recorded it to the VHS tape. Right now, I am just trying to get my colors correct in the way the VHS tape is played back to the capture card (the levels). The ways the colors are off due to how the camera wrote or adjusted colors to the VHS tape, I can deal with in post-capture, I would imagine.

    Originally Posted by sanlyn View Post
    All that aside, how much have you actually captured since you first began 2 years ago to look for ways to precisely measure what your imprecise VHS is doing during capture, and what did you learn from analyzing these captures? Did you use your JVC player and ATI 600 device to capture to lossless YUY2 media? Did you make comparisons with and without the unnecessary TBC-1000 in your capture circuit (where likely you would notice that the main effect of the frame-level TBC was that it softened images visibly without making further improvements over your JVC's built-in line TBC).
    I've been collecting the hardware to do it. In the two years I've been in and out on this: reading the threads and trying to learn as much as I can. Until I understand enough, I have put off touching my video tapes at all. I've only played around with a few VHS tapes that I recorded through my VHS camcorder and don't care about, so I use those to experiment with now and then. Otherwise, I'm just enjoying trying to learn and understand how to approach my situation. Yes, I have an ATI 600, TBC-1000, JVC VCRs now, etc. I'm getting there.

    Originally Posted by sanlyn View Post
    By now, or at least very soon from now, you should have several hours of lossless YUY2 captures that you can learn from. You would also have a PC monitor properly calibrated with a colorimeter-and-software calibration kit, learned to use it in subdued lighting conditions, and hopefully have learned how limiting it is to use lesser displays such as laptop screens. You will quickly collect at least 3 or 4 dozen new Avisynth plugins and some RGB plugins for precise color and level correction in VirtualDub (such as Trevlac's ColorTools histograms and 'scopes, along with ColorMill, gradation curves and similar RGB color filters, and a pixel value sampler such as csamp.exe). Those will set you on your way to making initial color and level corrections and other cleanup in the original YUV colorspace, and learning how to correctly move into RGB and to correctly return to the final YV12 colorspace that you will need for encoding, using tools designed to do so with a minimum of disruptive effects. You repair or at least mitigate the usual VHS defects such as chroma shift and bleed, halos, ringing, dropouts, comets, ripples, tears, tape noise, rainbows, etc. In RGB you'll more precisely set white balance, gray balance, and black balance, work with gamma, middle point and level controls rather than brightness and contrast controls, and work with color in the more precise luma and chroma ranges that are required to correct the kind of ugly and corrupt color that comes from VHS.
    Though important, I'm not entirely focused on post-capture digital work right now, although I have obviously played around with that too. Obviously you can see I am interested in making sure I'm capturing the right colors and levels from the VHS tapes. So, I'm about getting as good a capture of the signal off the VHS as I can, and then, like I said, worrying about scene-to-scene correction later. I have to start somewhere. Otherwise, I'll keep putting this off forever haha

    Originally Posted by sanlyn View Post
    Unless you have some exceptionally exotic and expensive pro gear that require an engineering degree and Hollywood budget, you'll likely learn to use VDub's histogram and/or a GraphEdit tool to make basic capture settings for brightness and contrast (which is what most of us mere mortals use), to make brief tests to check the effectiveness of your settings, to adjust them as needed and to make more checks, and likely make two captures of each source tape rather than one. You would not add denoisers or sharpeners during capture (unless you want to sharpen tape noise and add more problems to existing analog defects for whatever purpose you might have in mind). If a completely whacko scene should pop up that is a really weird maverick outside the range of other scenes on the tape, it might be necessary to make a separate capture of that section of the tape, using appropriate settings.
    Yes, I'm sure to make a few captures of the source tape. Yes, I am using VDub and AviSynth...etc. But, it's just right now to determine if my VHS captures are decent...not perfect....but good enough for it to be used in post digital processing.

    Originally Posted by sanlyn View Post
    If you are looking for a way to get perfection from VHS in a single pass and to go straight to encoding and burning 60-plus hours of old, noisy, and in some cases duplicated VHS tapes in a single week, then IMO you are wasting your time with capture cards, PC's, lossless media, Avisynth, etc. Seriously. You can minimize damage and levels/color errors during capture, but you won't get a perfect product from that first step.
    Haha....not at all! I expect to do scene-to-scene correction and fix things. But, I believe a lot of that sort of stuff can be done later as long as my capture was decent to begin with.

    Great hearing from you sanlyn. I appreciate your input on things!
  26. Member 2Bdecided's Avatar
    Join Date: Nov 2007
    Location: United Kingdom
    Search Comp PM
    You're over thinking this.

    Sanlyn suggested that you should go and capture some VHS. I agree.


    Forget about illegal RGB values. You can worry about that after you've captured.


    I've never captured anything where U or V clipped, and it's YUV that you're capturing, not RGB. The reason there are lots of threads about getting luma levels right, and no threads about getting U and V levels right when capturing is because it's almost never an issue.




    Scene by scene adjustments are a different thing entirely. Worry about that after you've captured too. You will almost certainly need different adjustments for indoor / poor artificial light, and outdoor / good daylight. But capture it the same.


    Worry about the things that could ruin your results, not the things that won't. Decent VCR? No nasty analogue sharpening and denoising? line-TBC? Non-clipping luma levels? Non-clipping audio levels? No mains interference? Audio in sync? No dropped frames? backup HDD ready to store your raw captures in duplicate so you don't lose the lot? Something decent to watch your captures on? Time to edit the whole lot into a short sequence that people might actually want to watch? Time to do half an hours worth all the way from initial capture to final DVD before you do everything all the way through, to check you're not wasting your time because there's something catastrophically wrong somewhere?


    That kind of thing.


    Cheers,
    David.
  27. Originally Posted by JasonCA View Post
    For whatever reason, my ConvertToYV24() doesn't seem to work.
    You need version 2.6 of AviSynth to use ConvertToYV24() -- YUV 4:4:4.

    Originally Posted by JasonCA View Post
    On the other hand, just by the U and V scopes themselves, you can't just look at the U and V scopes and determine your Chroma levels are illegal. However, with Chroma, it goes back to my original question: how do you detect when the Chroma levels are invalid? To me, it would seem, that this HighlightBadRGB() now helps to do this. Correct?
    Yes. But only "illegal" after conversion to RGB. If you're planning on adjusting colors later, and will be working in YUV, you don't need to worry about that during capture. As long as the colors aren't clipping at the hard boundaries (0, 255) you can still correct them. If you don't plan on adjusting colors later, or are using software that only works in RGB, you want them to be legal during your capture.

    In your color bars example you use Tweak(sat=1.205) to make two of the bars "illegal". You could restore them by using Tweak(sat=0.8) or thereabouts. So even though they were illegal, they can be made legal again, without any loss (other than rounding errors).
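    That reversibility can be checked numerically. Below is a Python approximation of what Tweak(sat=...) does to a chroma sample (scaling around the neutral 128, then rounding and clamping to 8 bits -- my sketch of the behavior, not Tweak's actual source):

```python
def tweak_sat(uv_samples, sat):
    # Scale chroma around the neutral point 128, round, and clamp to the
    # 8-bit range -- roughly what Tweak(sat=...) does to U and V samples.
    return [max(0, min(255, round(128 + (s - 128) * sat))) for s in uv_samples]

u = [44, 100, 212]                       # in-range chroma samples
restored = tweak_sat(tweak_sat(u, 1.205), 1 / 1.205)
# restored == [44, 100, 212]: boosting then reducing loses nothing
# beyond rounding, because no sample hit the hard boundaries

clipped = tweak_sat([250], 1.5)          # 311 before clamping -> pinned at 255
# tweak_sat(clipped, 1 / 1.5) -> [213], not [250]: hard clipping loses data
```

    So "illegal after RGB conversion" is recoverable, while a sample actually pinned at 0 or 255 is not.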

    Originally Posted by JasonCA View Post
    Now I am seeing too that it seems the VectorScope also helps to show when your Chroma is illegal.
    Not entirely. Because where the YUV->RGB clipping occurs varies with Y too.
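    To see why the vectorscope alone can't settle this, note that the same (U, V) pair can be legal at one luma and illegal at another. A small self-contained Python check using the standard Rec.601 studio-range conversion (illustrative only, not part of any AviSynth script):

```python
def legal_rgb_601(y, u, v):
    # Standard Rec.601 studio-range YUV -> RGB; "legal" means every
    # channel lands inside [0, 255] without clamping.
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    return all(0.0 <= c <= 255.0 for c in (r, g, b))

# Identical chroma (U=90, V=90), different luma:
# legal_rgb_601(126, 90, 90) -> True   (mid grey carries this chroma fine)
# legal_rgb_601(40, 90, 90)  -> False  (red goes negative at low luma)
```

    A 2D vectorscope position that looks safe can therefore still produce invalid RGB once Y is taken into account.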

    Originally Posted by JasonCA View Post
    Where I'm still not clear is if the Chroma levels are valid for different color-spaces? In other words, when the HighlightBadRGB() shows BAD RGB colors (in this case I was using WHITE to represent Chroma clipping), are the UV's not mapping to RGB space due to the RGB space being used?
    Yes. If your YUV is rec.709 (high definition) you would change the ConvertTo's in HighlightBadRGB() to include matrix="rec709". It's the same for PC.601 and PC.709.

    Originally Posted by JasonCA View Post
    I now can also use this HighlightBadRGB() to help highlight where in the video my Chroma levels are clipping. However, using the vectorscope, it's hard to tell when a FRAME is clipping Chroma. That's where HighlightBadRGB() helps!
    Again, if you're going to adjust colors, valid RGB matters only for your final output, not during capture. You would use HighlightBadRGB() to check the output of your adjustments.


    Originally Posted by JasonCA View Post
    For VHS, the levels for capture should be mapped from 16 to 235. In other words, on capture white is 235 and black is 16. Any extreme white highlights on the VHS capture can fall between 236 and 255 and that should be fine. And any extreme darks (darker than black), can fall from 0 to 15. In this way, I've essentially correctly captured my Luma levels for capture....right?
    Yes.

    Originally Posted by JasonCA View Post
    For VHS and capturing correct Chroma levels, I need to make sure my Chroma stays within 75% saturation of the VectorScope. This means that if I see that my Chroma has exceeded the safe area of the 75% saturation on the VectorScope, I need to re-capture the VHS video with LOWER saturation
    No. As long as there is no hard clipping of chroma you can fix it in post.
    Last edited by jagabo; 20th Dec 2013 at 07:46.
  28. Member
    Join Date: Dec 2005
    Location: United States
    Search Comp PM
    Originally Posted by 2Bdecided View Post
    You're over thinking this.
    Perhaps, but understanding how to measure the signals, if I want to do that, is a bit helpful. Or just being aware of them helps. For example, if I didn't know what a waveform was, I'd have no idea how to tell when Luma was clipping.

    Originally Posted by 2Bdecided View Post
    I've never captured anything where U or V clipped, and it's YUV that you're capturing, not RGB. The reason there are lots of threads about getting luma levels right, and no threads about getting U and V levels right when capturing is because it's almost never an issue.
    This is what I kept reading everywhere: "I've never captured anything where U or V clipped." Or others saying, just make sure your Chroma doesn't clip. But how? That was the question. How do you know that your U or V is not clipping on capture to your capture card? You said, "I've never captured anything where U or V clipped." But what did you look at to determine this? How did you measure it, or by what means can one measure such a thing? How do YOU know when the U or V signal is clipping? If you said, "Using my vectorscope, I was able to see that my U and V are not being clipped," I would at least know what you are using to measure. This is important, because I want to know if I'm losing information in my capture. For Luma, this is easy to see, because you could use the waveform and see it slammed up past the points where the capture card can capture the signal. In your captured video file, you would have lost highlights and whatnot. That's pretty clear.

    However, for Chroma...it's not so simple...or maybe it is. For example if I were to use AviSynth's VideoScope I could use

    Code:
    VideoScope(finalClip,"both", true, "U", "V", "UV")
    But looking at the U and V scopes separately (mind you, not the UV vectorscope area), I would NOT be able to tell when U and V are clipping, from what I've come to understand. So to me, seeing the U and V signals on a waveform-like monitor is of limited use. Why? Because it's the combination of U and V that together makes up the color. That's why the VectorScope seems more helpful in measuring the Chroma levels. For NTSC, which VHS falls into, my Chroma levels on the VectorScope should be at no more than 75% saturation. Typically, the VectorScope has graticules for 100% saturated color bars or 75% saturated color bars. VHS falls under Rec 601, so my VectorScope should be set to 75%. Correct? That's how I see it.
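    For what it's worth, the quantity a vectorscope plots for each pixel is just the distance of its (U, V) pair from the neutral center. A hypothetical Python helper to make that concrete (`chroma_radius` is my name, not a scope term):

```python
import math

def chroma_radius(u, v):
    # Distance of a chroma sample from the neutral point (128, 128) --
    # the radius at which that sample lands on a vectorscope plot.
    return math.hypot(u - 128, v - 128)

# A neutral (grey) pixel sits at the center; raising saturation scales
# every sample's radius outward by the same factor, which is why boosted
# chroma "expands" on the vectorscope.
# chroma_radius(128, 128) -> 0.0
# chroma_radius(212, 128) -> 84.0
```

    This is why the U and V traces viewed separately hide the problem: neither axis alone tells you how far the combined vector has pushed toward the edge of the graticule.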

    So is this how you measure when your U and V are clipping, by using the VectorScope? Or?

    Originally Posted by 2Bdecided View Post
    Scene by scene adjustments are a different thing entirely. Worry about that after you've captured too. You will almost certainly need different adjustments for indoor / poor artificial light, and outdoor / good daylight. But capture it the same.
    Yes, I agree, I will need to do scene-by-scene adjustments for poor lighting, whether the video was shot indoors or outdoors. Completely agree! Scene-by-scene correction is therefore something that can be done in post-capture with good results, as long as my Luma and Chroma levels were captured properly (meaning the signals to the capture card were not clipped). As you said, "But capture it the same". In other words, regardless of the scene-to-scene lighting issues, capture the video signals properly and deal with the scene-to-scene issues in post-capture. That seems to be the approach and path I am taking. If someone said otherwise, like, "You must do scene-to-scene correction in PRE capture", then I would ask why. Again, I'm just trying to separate what I should be concerned with on pre-capture and what to be concerned about on post-capture. Doesn't that sound reasonable, if not practical too?

    Originally Posted by 2Bdecided View Post
    Worry about the things that could ruin your results, not the things that won't. Decent VCR? No nasty analogue sharpening and denoising? line-TBC? Non-clipping luma levels? Non-clipping audio levels? No mains interference? Audio in sync? No dropped frames? backup HDD ready to store your raw captures in duplicate so you don't lose the lot? Something decent to watch your captures on? Time to edit the whole lot into a short sequence that people might actually want to watch? Time to do half an hours worth all the way from initial capture to final DVD before you do everything all the way through, to check you're not wasting your time because there's something catastrophically wrong somewhere?
    Pre-Capture concerns (VHS to Capture Card):

    1. "Decent VCR?"
    2. "No nasty analogue sharpening and denoising?"
    3. "line-TBC?" or even Full Frame TBC too
    4. "Non-clipping luma levels? " Exactly, and it's why I'm concerned with luma levels on pre-capture. So part of Pre-capture is checking your luma levels are being output correctly to your capture card.
    5. "No mains interference?" I guess you mean electrical interference of some sort?
    6. "Audio in sync?" 1st verification pre-capture.
    7. "Something decent to watch your captures on?" Pre & Post.
    8. "Time to do half an hours worth all the way from initial capture to final DVD before you do everything all the way through, to check you're not wasting your time because there's something catastrophically wrong somewhere?" Pre & post.
    9. And right now, I'm trying to determine how to verify chroma levels here. So the additional question is, "Non-clipping chroma levels?" — but how do I verify whether my chroma levels are clipping? What do I look at? How do I measure it?

    Post-Capture concerns:

    1. "No dropped frames?"
    2. "Audio in sync?" 2nd verification on post-capture
    3. "backup HDD ready to store your raw captures in duplicate so you don't lose the lot?"
    4. "Something decent to watch your captures on?" Pre & Post.
    5. "Time to edit the whole lot into a short sequence that people might actually want to watch?"
    6. "Time to do half an hours worth all the way from initial capture to final DVD before you do everything all the way through, to check you're not wasting your time because there's something catastrophically wrong somewhere?" Pre & post.
    7. I'd add, verifying your Luma levels again...make sure your Luma signal is not clipped
    8. I'd also add verifying your chroma levels here (which is what I'm doing with AviSynth after I've captured my video). But to do this part, I first have to understand how, and what I'm looking for, in order to verify post-capture that my chroma levels weren't clipped.
    So see, I'm trying to separate out what I should be concerned about NOW on [VHS->capture card] and what I should be concerned about down the road... like scene-to-scene correction, which seems to be something I can worry about LATER.
  29. Member
    Join Date: Sep 2007
    Location: Canada
    Search Comp PM
    Typically what is used is a YCbCr parade. It's a waveform tracing of Y, Cb, Cr.

    In AviSynth you can use histogram("levels") to see Y, Cb, Cr displayed. It goes from 0-255. If the values are "bunched up" at the ends, then you have clipping.
    http://avisynth.nl/index.php/Histogram#Levels_mode
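The same idea the levels histogram shows visually can be expressed numerically: a clipped channel piles a disproportionate number of samples at the extremes. Here's a minimal Python sketch of that check; the `looks_clipped` name and the 1% threshold are my own illustrative choices, not from AviSynth or any standard.

```python
# Sketch: detect "bunched up" ends in an 8-bit channel.
# A clipped capture piles many samples exactly at the limits.
# The 1% threshold is illustrative, not from any standard.

def looks_clipped(samples, lo=0, hi=255, frac=0.01):
    """Return True if more than `frac` of samples sit at lo or hi."""
    at_ends = sum(1 for s in samples if s <= lo or s >= hi)
    return at_ends / len(samples) > frac

# A healthy channel tapers off before the extremes:
ok = [100, 120, 140, 160, 180] * 20
# A clipped channel has a spike at the limit:
bad = [255] * 10 + [200, 210, 220] * 30

print(looks_clipped(ok))   # False
print(looks_clipped(bad))  # True
```

In practice you'd run this per-plane (Y, Cb, Cr separately), which is exactly what looking at each trace of the parade does by eye.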



    To visualize the content and detect "hot" areas in a frame, along the same lines as FCP and various other NLE plugins, you can use AviSynth's limiter() with the show parameter

    The default values are set to Y' 16-235 , CbCr 16-240 (but you can set different limits)

    http://avisynth.nl/index.php/Limiter

    show="luma_grey" will make the frame greyscale, with values above max_luma and below min_luma colored

    show="chroma_grey" will do it similarly for chroma
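Conceptually, that show mode is just a per-sample range test against the limits. This little Python sketch mimics the idea for the luma plane (this is my assumed model of the behavior, not AviSynth's actual code; `flag_luma` and the "FLAG" marker are hypothetical names):

```python
# Conceptual sketch of limiter(show="luma_grey"): legal samples pass
# through as grey, out-of-range samples get flagged so "hot" areas
# stand out. Assumed behavior, not AviSynth's actual implementation.

MIN_LUMA, MAX_LUMA = 16, 235   # limiter() defaults for Y'

def flag_luma(plane):
    """Keep legal Y samples, mark out-of-range ones with 'FLAG'."""
    return [y if MIN_LUMA <= y <= MAX_LUMA else "FLAG" for y in plane]

print(flag_luma([16, 128, 235, 240, 5]))
# [16, 128, 235, 'FLAG', 'FLAG']
```

The chroma version is the same test with the 16-240 limits against the Cb and Cr planes.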

    As long as you have values within YCbCr 0-255, you can salvage those values. As soon as you touch RGB, all bets are off
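To make that concrete: a perfectly legal studio-range YCbCr triple can still convert to RGB values far outside 0-255. A quick sketch using the standard (approximate) Rec.601 studio-range coefficients:

```python
# Rec.601 studio-range YCbCr -> RGB, standard approximate coefficients.
# A "legal" YCbCr triple can still land outside 0-255 RGB, which is
# the point: the clipping happens at the RGB conversion, not before.

def ycbcr_to_rgb(y, cb, cr):
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    return r, g, b

# Legal studio-range values: Y=235, Cb=16, Cr=240
r, g, b = ycbcr_to_rgb(235, 16, 240)
print(round(r), round(g), round(b))  # R overshoots well past 255
```

So as long as you stay in YCbCr, that triple is intact and correctable; convert it to 8-bit RGB and the red channel is irretrievably clipped.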



    All this "bad RGB" discussion is more academic than anything else. It's not used in practice. In practice, "bad RGB" exists everywhere, even in broadcast-safe values. Even "full range" PC matrix 601/709 RGB conversion is full of out-of-gamut values. The reason is that the 8-bit RGB color cube is tiny compared to the 8-bit Y'CbCr color model. Graphically, you can look at the color cube model: all values of 8-bit RGB fit within the 8-bit YCbCr model, but the reverse isn't true.

    http://software.intel.com/sites/products/documentation/hpc/ipp/ippi/ippi_ch6/ch6_color...ls.html#fig6-4
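You can put a rough number on how much smaller the RGB cube is by sampling the 8-bit YCbCr space on a coarse grid and counting how many triples convert to in-range RGB. This sketch assumes the full-range Rec.601 matrix; the exact percentage depends on the matrix chosen:

```python
# Rough numeric check of "the RGB cube is tiny inside the YCbCr cube":
# sample the 8-bit YCbCr space on a coarse grid and count the triples
# that convert to legal RGB. Full-range Rec.601 matrix assumed.

def to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344 * (cb - 128) - 0.714 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

total = inside = 0
for y in range(0, 256, 8):
    for cb in range(0, 256, 8):
        for cr in range(0, 256, 8):
            total += 1
            if all(0 <= c <= 255 for c in to_rgb(y, cb, cr)):
                inside += 1

print(f"{inside / total:.0%} of sampled YCbCr triples give legal RGB")
```

Well under half of the YCbCr space maps into the RGB cube, which is the graphical point the color-cube figure makes.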

    In order to express some of those negative and out-of-gamut values, different matrices and different RGB models are used. For example, ITU Rec. 1361 is used for wide-gamut displays: some of those previously "out of gamut" negative values can then be expressed and seen. In the future, Rec. 2020, which covers an even larger space, will be the new standard for 4K displays.
  30. See Figure D-1:

    http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi/0650/bks/SGI_Developer/books/DI..._html/apd.html

    That's the RGB cube inside the YUV cube.

    You don't need to worry about YUV values outside the RGB cube while capturing. While capturing you only need to make sure that YUV values are inside the YUV cube. Only on conversion to RGB do you need to be sure the YUV values are inside the RGB cube. If you do all your processing in YUV, then you only need to ensure that your final output is within the RGB cube so that it won't be clipped on playback.
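That two-stage rule can be sketched as a pair of checks; the names `capture_ok` and `final_rgb` and the full-range Rec.601 matrix are my own assumptions for illustration:

```python
# Sketch of the workflow described above: while capturing, only
# range-check the YCbCr components; defer any RGB-cube check (or
# clamp) to the final conversion. Full-range Rec.601 matrix assumed.

def capture_ok(y, cb, cr):
    # Capture stage: just keep every component inside 0-255.
    return all(0 <= c <= 255 for c in (y, cb, cr))

def final_rgb(y, cb, cr):
    # Output stage: convert, then clamp into the RGB cube for playback.
    r = y + 1.402 * (cr - 128)
    g = y - 0.344 * (cb - 128) - 0.714 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return tuple(clamp(v) for v in (r, g, b))

print(capture_ok(235, 16, 240))   # True: fine to keep while capturing
print(final_rgb(235, 16, 240))    # red channel clamped to 255 at output
```

The value that would have been "bad RGB" is preserved intact through capture and YUV processing, and only gets constrained at the very last step.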


