VideoHelp Forum
  1. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
I've been to a lot of places the last 2 months and have learned a great deal - but still have a long way to go. Wasn't really trying to solicit training, just explaining that I'd had absolutely no photographic/videographic background & that's a huge hurdle. The histograms & explanations had to wait until I had a chance of understanding them. Hopefully I'm close now, but the colorspace-conversion elaboration and the GreyRamp animation were not intuitively obvious the last time I looked at them. Again, I'm not asking for detailed help there yet; I just need time to study them & that's what I'm doing.

Back to the first histogram. I just didn't understand why I saw invalid values in the avisynth histogram/waveform. "Danger" zone values in both the vdub capture histogram and the avisynth histogram/waveform are blacks and whites that will be lost when converted to RGB, right? The vdub histogram shows luma values on the X-axis and point totals on the Y-axis. The "native" avisynth histogram/waveform shows luma values on the X-axis and horizontal position on the Y-axis. jagabo's HistogramOnBottom shows luma on the Y-axis and horizontal position on the X-axis. Hopefully I got that right. So I shouldn't see any "danger" zone values in avisynth if there's no red in the vdub capture histogram. But I did. After fussing with this for several hours, I found this AM that I'd been looking at the orig Tap 550 capture instead of the latest one with the proc amp adjustments. Duh. Feel really dumb posting about this earlier. Water under the bridge. Will try to tackle another histogram today.

    sanlyn, still trying to acquire the filters you used for the guitar fix. The masktools at warpenterprises is MT-masktools; there's a million versions of masktools in the avisynth external filter downloads. Which?
    Last edited by dianedebuda; 12th Feb 2014 at 10:53.
  2. Originally Posted by dianedebuda View Post
    The vdub histogram shows luma values on the X-axis and point totals on the Y-axis. The "native" avisynth histogram/waveform shows luma values on the X-axis and horizontal position on the Y-axis. jagabo's HistogramOnBottom shows luma on the Y-axis and horizontal position on the Y-axis. Hopefully I got that right.
    Yes, that's right.

    The traditional waveform monitor is based on how a video signal looks on an oscilloscope display. The analog TV signal scans, one scanline at a time, from left to right on the CRT screen. When viewed on an oscilloscope, the left-to-right scanning is retained but the brightness of the video is translated to height on the display.
  3. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
My exposure to oscilloscopes is pretty much limited to what I did in college, back in the dark ages. Only remember sine waves. But your explanation of the crt/scope translation makes perfect sense to me. Thanks. FWIW, I did correct my post which had HistogramOnBottom description with 2 Y-axis instead of XY.
  4. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
    Still looking at avisynth histograms. Think I'm ok with the default, classic one. Levels looks pretty much like the vdub capture one for the luma plus there's a U & V one. The use for these is looking for invalid values in the borders, right?

I'm having trouble with the Color one though. The X-axis is U and the Y-axis is V. The green/blue/purple/red layout makes sense to me. Intuitively I see the UV coordinate as the hue. Fully saturated is supposed to be at the center and radiate out as saturation decreases. But the way I'm visualizing it, moving away from center changes hue. Obviously I'm missing something here. My concept of saturation is dimming a light bulb of a particular hue. How is this histogram used? That is, what would you see that tells you your image is "good" or "needs adjustment"?
  5. Originally Posted by dianedebuda View Post
    Still looking at avisynth histograms. Think I'm ok with the default, classic one. Levels looks pretty much like the vdub capture one for the luma plus there's a U & V one. The use for these is looking for invalid values in the borders, right?
    You mean Histogram(mode="levels")? Yes, you want to keep Y, U, and V within their respective borders. But not all combinations of Y, U, and V within those limits are valid RGB colors.
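As a minimal sketch of that check (the filename is a placeholder; ConvertToYV12 is there because the levels mode needs planar YUV in Avisynth 2.5):

    Code:
    # Placeholder source clip
    AviSource("capture.avi")
    ConvertToYV12(interlaced=true)
    Histogram(mode="levels")  # Y band on top, U and V bands below

Reload the script and check that the Y band stays inside 16-235 and the U/V bands inside 16-240.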


    Originally Posted by dianedebuda View Post
    I'm having trouble with the Color one though. The X-axis is U and the Y-axis is V. The green/blue/purple/red layout makes sense to me. Intuitively I see the UV coordinate as the hue. Fully saturated is supposed to be at the center and radiate out as saturation decreases. But the way I'm visualizing it, moving away from center changes hue.
    The center of the box is grey. Saturation is the distance from the center to a U,V coordinate. Hue is the angle of the vector from the center to a selected U,V coordinate.
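A minimal script sketch of that layout (the filename is a placeholder): grey lands at the center of the box, distance from center is saturation, and the angle of the vector is hue.

    Code:
    AviSource("capture.avi")
    ConvertToYV12(interlaced=true)  # the color modes need planar YUV
    Histogram(mode="color")         # or mode="color2", which draws a hue circle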

    View the attached AviSynth animation.
    Image Attached Files
    Last edited by jagabo; 14th Feb 2014 at 17:45.
  6. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
    That animation showed exactly what was wrong with my visualization. Hue is not a specific UV coordinate - it's the ratio of U to V. With that in mind, it all works. Thank you!

    So, how do you use this Color histogram to evaluate video? Do certain shapes or cluster locations w.r.t. center suggest problems/solutions?

    Any other histogram tools that I should learn to use besides the native avisynth & VideoScope (AlignExplode)? Seems to me I saw an RGB one frequently in the threads that used a common vdub plugin, but didn't bookmark.
  7. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Originally Posted by dianedebuda View Post
    sanlyn, still trying to acquire the filters you used for the guitar fix. The masktools at warpenterprises is MT-masktools; there's a million versions of masktools in the avisynth external filter downloads. Which?
    Sorry I missed this one.

MaskTools is so widely used that it has built quite a revision history for itself, to the point where there are revisions of revisions. Over the years some very popular plugins have been designed around old and new masktools versions. Many of those plugins refuse to go away despite changes in masktools itself.

The way people use old and new versions is to create different plugin folders for different versions of popular tools. In Avisynth you can create multiple plugin folders and name them something like plugins2, plugins3, etc. The "regular" plugin folder is where you keep new stuff that you use all the time; any dll or avsi script in there is loaded automatically when needed. To load something from the "other" plugin folders, you do it explicitly in a script with the LoadPlugin statement. For example, if I have an older masktools in a folder called plugins2, I load it this way:

    Code:
    LoadPlugin("D:\AVisynth 2.5\plugins2\masktools.dll")
    There are four versions of masktools that should be all you'll ever need:

1.5.1.0 = oldest surviving build of the original masktools version 1: MaskTools.dll (aka masktools-v1.5.8.zip) (the html doc says "v1.5.6")

2.0.30.0 = mt_masktools.dll (Jan 2006), aka v2.0a30. Early version of masktools 2 for Avisynth 2.0. Can still be used with Avisynth v2.5.x.

2.0.45.0 = mt_masktools-25.dll, for Avisynth 2.5.x. Can be used in Avisynth 2.6x but can give errors with some plugins when used that way. v2.0.45.0 is often kept around because it allows the use of QTGMC deinterlacing with YUY2 video; with all other masktools versions QTGMC works only in YV12.

v Aug-2006 = mt_masktools_26.dll. MaskTools v2 for Avisynth version 2.6x. Can also be used (most of the time) with Avisynth 2.5x. There is yet another, separate modified 2.6 version for 16-bit processing that comes with the dither() plugin package.

    The existing versions of MaskTools and MVTools are here: http://manao4.free.fr/
    Last edited by sanlyn; 19th Mar 2014 at 03:12.
  8. Originally Posted by dianedebuda View Post
    So, how do you use this Color histogram to evaluate video? Do certain shapes or cluster locations w.r.t. center suggest problems/solutions?
    I don't find the UV vectorscope to be very useful unless you have SMPTE color bars to calibrate to:

[Image: smpte.jpg]

    That's a VideoScope() UV graph in the lower right corner. The crosshairs and small white boxes over the UV graph are an overlay indicating the standard SMPTE colorbar locations. At the top right is the U waveform graph, the bottom left the V waveform graph. The color bars are blurred so you can see dotted lines between the points in the UV graph. If the bars were perfectly sharp you would see only a dot in each box. Hue rotation animation attached.

    Another place this can be useful is when checking for white balance (how pure the greys are). You can crop away parts of the frame leaving only something you know should be grey then view the UV plot to verify the picture is clustered around the center. Or you can view the U or V waveforms and verify they are at 128:

[Image: wbal.jpg]

    This was a noisy analog capture of a b/w video. You can see that there's chroma noise in the signal because the U, V, waveforms aren't perfect thin lines, and rather than a single dot at the center of the UV plot there's a fuzzy ball.
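The white-balance check described above can be sketched as a script (the crop numbers are placeholders for whatever region you know should be grey):

    Code:
    AviSource("capture.avi")
    ConvertToYV12(interlaced=true)
    Crop(200, 100, 320, 240)   # keep only the known-grey region
    Histogram(mode="levels")   # U and V should sit in a tight band at 128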
    Image Attached Files
    Last edited by jagabo; 15th Feb 2014 at 09:17.
  9. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    You can see the problem in the gray image using a pixel sampler. All areas are a bit yellow. Readings off most areas show a slight deficit of blue. Of course, it also depends on how the jpg image was made.
    Last edited by sanlyn; 19th Mar 2014 at 03:13.
  10. Originally Posted by sanlyn View Post
    You can see the problem in the gray image using a pixel sampler. All areas are a bit yellow. Readings off most areas show a slight deficit of blue.
    Yes, you can also see that the spot in the UV graph is slightly off center.
  11. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Originally Posted by dianedebuda View Post
    So, how do you use this Color histogram to evaluate video? Do certain shapes or cluster locations w.r.t. center suggest problems/solutions?

    Any other histogram tools that I should learn to use besides the native avisynth & VideoScope (AlignExplode)? Seems to me I saw an RGB one frequently in the threads that used a common vdub plugin, but didn't bookmark.
    Studying YUV scopes does reveal some specifics -- not specific objects, perhaps, but some problems become evident, such as color casts. The "c" tap practise AVI is slightly too red (but not by much). An RGB histogram and pixel sample could tell you a little more: for instance, the same AVI is slightly too red in the brights, slightly too purple in the blacks.

You can get information like this with a pixel sampler that can read pixel values off videos from VirtualDub. Most advanced video apps with a graphical interface have pixel readers of some kind. Why doesn't VirtualDub have this? Beats me. There are a couple of popular desktop add-ons that furnish the capability. They're a little clunky to use, because their small readout panels tend to get covered when you click on other windows on the desktop. But when running they place a small icon in the taskbar tray; click the tray icon, and the readout panels come to the foreground.

    One such tool is Csamp, a tiny executable that you place on your desktop where it becomes just an icon. Click it, and the readout panel pops up, with a smaller icon in the system tray while it's turned "on". Post #119 in an old forum thread shows CSamp.exe in actual use (OMG!): https://forum.videohelp.com/threads/338999-Calibrating-luminance-levels-color-for-captu...=1#post2109690. It also shows the ColorTools "parade" RGB histogram in use.

    CSamp.exe is here: http://www-personal.engin.umd.umich.edu/~jwvm/vision/projects/csamp.zip

    ColorTools is here: http://trevlac.us/colorCorrection/colorTools.html. The actual download link really is on that page, but it's hard to see. Download: http://trevlac.us/colorCorrection/clrtools.vdf.

    The next post in the thread mentioned above (#120) shows CSamp, ColorTools, and the gradation curves VirtualDub color filter in actual use, all at once (OMG!).
    Gradation curves home page: http://members.chello.at/nagiller/vdub/index.html.

You know those multi-range color wheels you see in Vegas Pro, Adobe Pro, etc.? VDub doesn't have the wheels, but it has ColorMill, a multi-function color fixer that uses sliders instead of wheels (I like the sliders better). The ColorMill page (http://fdump.narod.ru/rgb.htm) shows some graphic examples of what the filter can do if you scroll down the page. The download is here: http://fdump.narod.ru/Downloads/ColorMill2.1.zip.

There's another free pixel sampler around that's fancier than CSamp, but still free. Below is an image of ColorPic v4 in use (there's a newer version 4.1 that has a skin tone palette. Very useful). I closed a few of ColorPic's extra panels and left the RGB readout and magnifier turned on. I was using ColorPic to read the nose highlights of the boy in tap practice who is wearing the yellow shirt. According to ColorPic, the RGB values say the kid's nose is pretty bright, especially for a nose.
[Attachment 23620 - Click to enlarge]


    YUV scopes can tell you how YUV data is saved. RGB scopes tell you how it's displayed. Pixel samplers let you pinpoint exactly what you're looking for. The samplers tell you if white isn't white, black isn't black, or gray isn't gray. And they tell you why.

You might ask why photogs and cameramen use little gray or white patches to check color balance. They use those tones because whites, grays, and blacks require Red, Green, and Blue to be in proper balance for those shades to look "correct".

    Super white: RGB 255 255 255
"TV" white: RGB 235 235 235
    Light Gray: RGB 180 180 180
    Medium Gray: RGB 128 128 128
    Dark Gray: RGB 64 64 64
    "TV" black: RGB 16 16 16
    Real Black: RGB 0 0 0

    You can also check out the pixel values of skin colors. Typical RGB values for fair-skinned brighter portions of a face would be in the neighborhood of Red 180, Green 160, Blue 140. Shadow areas would be somewhere around Red 80, Green 55, Blue 40. If someone in your video looks too orange, you can suspect a deficit of blue in the image and too much red. If black shoes have a blue tint, you need to reduce very dark blue or add more dark red and dark green in the same low-RGB value range. If a gray suit or dress looks too red, then either reduce red or add cyan (green + blue).
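As a hedged illustration of that kind of RGB-side trim (the gain values below are made-up starting points, not a recipe), AviSynth's RGBAdjust can nudge one channel without touching the others once the clip is in RGB:

    Code:
    AviSource("capture.avi")
    ConvertToRGB32(matrix="Rec601", interlaced=true)
    # Faces look too orange: ease red down a touch and lift blue a little
    RGBAdjust(r=0.95, b=1.05)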

    Once you get your hands on tools like these and start using them, it gets less like witchcraft.
    Last edited by sanlyn; 19th Mar 2014 at 03:13.
  12. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
My take on what I've read: it seems the (avisynth) Color histogram is valuable if a standard color test pattern is part of the video, or for b/w - otherwise there are easier/better tools for finding "off colors". True?

Went back over this thread looking at histograms, hoping to now understand a bit of why/how they were used.
    vaporeon800 from Post 13
    I attempted to match the DV clip to the 550
    I think I understand the layout of each of the parts.
Top, L to R: image, avisynth (Y) histogram, U converted to b/w, V converted to b/w.
Bottom, L to R: videoscope (Y) histogram, videoscope Color (like the avisynth Color histogram), videoscope histogram for U, videoscope histogram for V.

I'm not totally at ease on how to interpret the last 3 graphs (Color, U, V). I think the U & V can be used to see if there's a + or - blue or red cast to the image, depending on whether there's a concentration of pixels consistently above or below the "grey" line. Is that even close?
    Last edited by dianedebuda; 17th Feb 2014 at 15:53.
  13. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Originally Posted by dianedebuda View Post
    I'm not totally at ease on how to interprete the last 3 graphs (Color, U, V). I think the U & V can be used to see if there's a + or - blue or red cast to the image depending on if there's a concentration of pixels consistantly above or below the "grey" line. Is that even close?
At the expense of stating the obvious: you can best identify a color cast by (a) looking at the image, (b) looking at histograms and scopes of the image, and (c) looking at both together.

    If you have a scene that shows a lot of blue sky, guess what "color cast" you're likely to think is there if you see a lot of blue in the 'scopes? It is highly possible (even likely) that the color cast, if it's there, isn't blue. Your "c" 550 tap-practise capture has no strong color cast, but it does have some problems: there are no clean whites, blacks are purply, and pixel readers are telling me that there's a very slight red cast in the video, most of it in the brights. But I've already posted some filter recommendations so no need to go into that again.

    You're correct: the color test pattern doesn't tell you much about your captures.
    Last edited by sanlyn; 19th Mar 2014 at 03:13.
  14. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
you can best identify a color cast by (a) looking at the image, (b) looking at histograms and scopes of the image, and (c) looking at both together.
    Reading the histogram correctly is exactly what I'm trying to accomplish. I'm using the TapPractice just as an example. Kind of like learning a foreign language - the letters look familiar and you can look up words in the dictionary, but the context...
  15. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Histograms and 'scopes are graphical representations and mappings of data from an image. They are no substitute for the image itself. They explain what the data is doing under certain data storage or display conditions, but they're not the image itself.

If you are using that recent tap practice avi as an example, what specifically are you looking for in the YUV and RGB scopes, other than out-of-spec levels or crushed/clipped elements? By looking only at scopes and graphs, there is no way to identify a color cast without comparing the scope with the image itself. A shot with a lot of green grass and trees will obviously have a lot of green in the scopes. Throw in some overcast or rain and someone holding a bright yellow umbrella that takes up 30% of the image, and you'll get yet another "color cast" out of the 'scope -- in fact, all things being equal in that particular image, and assuming the camera had one of those godawful autowhite circuits turned on, there's a good chance the autowhite over-compensated with red, so the color cast will likely be purple. Without an image to explain it, the 'scope itself won't tell you that much. The scope is just data. It is not organized into an image.

    If you are working with grayscale images, you could identify even a slight color cast from the material in jagabo's post #158. That's a special case. However, if your experience hasn't told you that true grayscale graphics must have equal portions of all colors in the U and V bands that center on a value comparable to RGB 128 in order to be truly "gray", and if you had no image with which to compare the graphs, then you would be left guessing about what the scope is telling you. Your best guess would be that you are looking at 'scopes of images that consist mostly of slightly off-color grays ranging from fairly bright whites to fairly dark blacks, or you are looking at a color image from which most of the color has been removed, or you are looking at a color image of entirely gray objects.
    Last edited by sanlyn; 19th Mar 2014 at 03:14.
  16. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
I'll try to word this another way. I'm not looking for something specific to fix here. I am trying to acquire tools & skills so that if I do see something later, I have a starting place. So for the histograms, it's "what kind of problems/solutions can they help with" and "if I look at a histogram and see xxx, it probably means this or that type of problem". So for example, with the vdub capture histogram, I can use it to prevent crushed blacks and clipped whites. I can now look at an avisynth classic/default histogram and see the same thing. vaporeon800 uses the AlignExplode a lot, so I'm trying to find out why - what to look for & when to use it. What can it tell me (the context)? I was speculating that the U and V -> b/w histograms might help for general color correction, but I could be totally wrong about that. Hopefully that's a little clearer on what I'm trying to accomplish right now.

My current goal is still to select a capture method for the hi8. I don't think my captures are too bad, but the color bleed bothers me. Maybe it can be fixed somewhat, maybe not. Want to tackle that to see if I even need to hunt for a hi8 deck. But have a few outstanding items on the ToDo list from earlier in this thread that I'm trying to clean up. Histograms were one. Collecting and learning how to use the common filters/conversions that I see in a lot of vhs/hi8 forum fixes is another. Sometimes, as a newbie, I have to "ask for fish", but I really need "to learn to fish" and you folks are great mentors.
    Last edited by dianedebuda; 18th Feb 2014 at 14:07.
  17. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Well, you're getting there. At least you are beginning to understand that histograms of images and the images themselves don't exist apart from each other.

The first thing one usually corrects in YUV is the levels. You can start elsewhere if you wish, but most people start with levels because they are quickly seen as the most obvious and, if they're out of spec, they make the easiest target. To most people that means "contrast". That's an elusive term, often used too loosely, and the way "contrast" controls work depends on whose contrast control you're using. The statement "ColorYUV(cont_y=-20)" uses a negative value that reduces contrast in the Y channel. The statement "ColorYUV(cont_y=20)" does the opposite: it uses a positive contrast value to increase contrast in the Y channel. Reducing Y contrast usually "shrinks" the white band in a levels histogram, so that values are drawn inward toward the middle of the bar. Increasing Y contrast usually "expands" the white band in a levels histogram, so that values are expanded outward from middle in both directions. Load any video with Avisynth and use ColorYUV to raise/lower contrast for the Y channel, and you will see the Y channel band change shape in the histogram when you reload the script*.
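That experiment can be set up as a side-by-side sketch (the filename is a placeholder):

    Code:
    src = AviSource("capture.avi").ConvertToYV12(interlaced=true)
    before = src.Histogram(mode="levels")
    after  = src.ColorYUV(cont_y=-20).Histogram(mode="levels")
    StackVertical(before, after)   # reopen in VirtualDub to compare the Y bands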

(* = footnote. Ah, those handy footnotes.) Quick way to reload a script in VirtualDub: don't use "Open video". Use "Reopen". The script will reload and put you right back on the frame you were looking at. If you have a slow script with a ton of other plugins, it will take some seconds for the VirtualDub window to refresh. After you do this "reopen" bit a few dozen times, your system will run short of memory trying to figure out where you are and things might freeze up, so now and then just use "Open" to set things straight again.

    Another thing you will notice when you raise/lower contrast is that the U and V channels will change shape accordingly, in ways that match changes in the Y channel. In other words, raising/lowering contrast affects Y, U and V together, but mostly Y -- because when you do this with YUV, you are changing the overall illumination of everything in the image.

You can also apply a contrast statement to the U or V channel. Raising/lowering U contrast or V contrast has the same effect as on Y -- raising contrast will expand that channel, lowering it will shrink the channel. The parameters for ColorYUV are cont_v for the V channel and cont_u for the U channel. You will also find an oddity concerning green chroma: in YUV there is no adjustment specifically for green. Why? Because with YUV, green data is not stored separately. You get green out of YUV by subtracting U and V from Y. Why did YUV engineers do this? Well, besides being perverse and frequently on strong medication (that's my guess, for starters), it's cheaper to store two color values rather than three.

    Actually you can affect green by raising and lowering U and V. That's because, in effect, the U channel isn't entirely blue data and the V channel isn't entirely red data: U and V both creep into green from different ends of the spectrum. Rather than think of these YUV histograms as representing a linear band that starts at red at one end and ends with blue at the other, think of YUV as a circle instead of separate straight lines. As you know, a circle doesn't have a beginning and an end: the end of a circle is simply the beginning of the rest of the circle. The circle always remains the same "size"; if you think you're "removing" something from the circle, you're not -- you're actually moving around it.

RGB? Well, that's different. You don't really have a separate luma channel and 3 color channels. You have pixels that contain red, green, and blue values, and the "brightness" for each color is embedded in each pixel. True, RGB histograms usually show a white "luma" channel, but that white band is really a derived average luma for the pixels involved. One advantage of working with RGB is that you do have direct control over green, which is stored separately, and you can adjust one color without affecting the other two. You can adjust darks without affecting brights, adjust the range RGB 32 to RGB 64 without affecting other RGB values. And so forth. So, in that sense, working with RGB is somewhat more intuitive: "Give me more red but not less green or blue" is easy in RGB, but not in YUV.

There are other ways of adjusting luma and chroma in YUV. The statement ColorYUV(off_y) will add or subtract a value (offset) in the Y channel, pushing it left or right in the histogram, but also pushing U and V (and therefore green as well) left or right at the same time. You can use ColorYUV(off_u) to move blue left or right, but you will note that red and green will also be affected. Load an image in Avisynth and show the histogram, then use ColorYUV(off_u=10), which will shove the U channel to the right. Watch what happens to the other bands. True, you are watching these effects in RGB (you can't see them using any other colorspace), so what you see is really the effect that those changes have in the YUV->RGB conversion.
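A minimal sketch of that offset experiment (filename is a placeholder):

    Code:
    AviSource("capture.avi")
    ConvertToYV12(interlaced=true)
    ColorYUV(off_u=10)          # shove the U channel to the right
    Histogram(mode="levels")    # watch the other bands react on reload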

How do you get accustomed to this? Well, you mount up some images in an Avisynth script and start playing with ColorYUV or some other filter such as Tweak or Levels or YLevels. It's like learning to ride ye olde bicycle: you can watch someone do it forever, but you don't really know it or feel it until you hop onto the seat yourself.

Color bleed: sometimes you can reduce it by lowering saturation during capture. Sometimes you can relieve it a bit by lowering saturation or lowering the color's contrast in post processing. You can relieve it with various filters in Avisynth, especially before you get the video into RGB. Some popular function calls such as "MergeChroma(aWarpSharp2(depth=30))" will help smooth things, and ChromaShift helps too. Some bleed reduction filters like FixChromaBleed or FFT3D or FixVHSOversharp do it by reducing saturation anyway and/or using clever masking techniques. You can always tweak saturation levels later if those plugins get too aggressive. Sometimes a line tbc will reduce it, and sometimes a tbc pass-thru with a good y/c comb filter will help. Often the bleed was actually recorded in the original, usually due to low voltage while you were recording (that's what lordsmurf told me a long time ago, anyway). A super-duper high end VCR often reduces it, along with a bunch of other problems, but high end gear isn't always available. Bleed accompanied by the usual halos and ghosting and DCT ringing makes it more complicated.
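Spelled out as a script, the MergeChroma idea above looks like this (assuming the aWarpSharp2 plugin is installed; the source name is a placeholder):

    Code:
    AviSource("capture.avi")
    ConvertToYV12(interlaced=true)         # aWarpSharp2 wants planar YUV
    MergeChroma(aWarpSharp2(depth=30))     # warp-sharpen chroma only, keep luma
    # ChromaShift could follow here if the color is displaced sideways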
    Last edited by sanlyn; 19th Mar 2014 at 03:14.
  18. Color bleed is a fact of life with VHS. VHS only has about 40 lines of chroma resolution across the width of the frame, (compared to about 350 lines of luma resolution) so colors are always smeared. The problem isn't in how you're capturing, it's what's on the tape.

    https://forum.videohelp.com/threads/319420-Who-uses-a-DVD-recorder-as-a-line-TBC-and-wh...=1#post1980652
    https://forum.videohelp.com/threads/319420-Who-uses-a-DVD-recorder-as-a-line-TBC-and-wh...=1#post1981589

    I often crop the frame down to an object of interest when looking at waveform monitors and UV plots, especially the latter. That way you know for sure that what you see in the plot is from the object you're interested in.
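That crop-first habit as a script sketch (the crop numbers are placeholders): once the frame is cropped down to the object, everything in the UV plot belongs to that object.

    Code:
    AviSource("capture.avi")
    ConvertToYV12(interlaced=true)
    Crop(128, 96, 192, 160)    # crop down to the object of interest
    Histogram(mode="color")    # the UV plot now shows only that object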
  19. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
    Sorry, jagabo, but don't understand those links. Read that thread a couple times over the last couple months & most is still over my head. I would repeat LS by saying "translate into English", but I think it's just there's not much color info to deal with for vhs. This is hi8, and from what I understand, the Y is better, but the UV is perhaps only a tiny bit better than vhs. I suspect the color bleed is actually on the tape, but if there's a way to minimize...
  20. Banned
    Join Date
    Oct 2004
    Location
    New York, US
Well...the idea behind the pass-thru line tbc is that better timing corrections give a cleaner image overall. But it won't stop chroma bleed from tape, whether it's Hi8 or not. You might have a few advantages with Hi8 over VHS, but they are still analog tape.

    The attached image is an "after" version of interlaced frame 62 from c hi8 TapPractice trv480 -tbc-dnr, composite 550 vdub B109-C148-Sh0 clip2.avi . A few things were fixed with a collection of light-duty tricks: reducing aliasing and edge noise, oversaturation, ringing, halos, edge ghosts, oversharpen effects, and most (but by no means all) chroma shift, bleeding, and edge ghosts.....not to mention a pesky case of dot crawl, which looked more obvious in the DV versions. One thing about tape: it will always look like tape. It won't look like DVD. But it can look a lot better than "just tape".

[Attachment 23669 - Click to enlarge]
    Last edited by sanlyn; 19th Mar 2014 at 03:14.
  21. Originally Posted by dianedebuda View Post
    Sorry, jagabo, but don't understand those links.
    Then suffice it to say the chroma resolution is far lower than the luma resolution on the horizontal axis:

    Y:
[Image: Y.jpg]

    U bicubic scaled to the same dimensions as the Y channel, contrast enhanced to increase visibility:
    [Attachment 23674 - U.jpg]

    V:
    [Attachment 23675 - V.jpg]
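    For anyone wanting to reproduce channel views like these, here is a hedged sketch (the clip name and the contrast amount are assumptions, and ConvertToY8 assumes AviSynth 2.6):

```avisynth
v = AviSource("capture.avi")                      # placeholder filename
y = v.ConvertToY8()                               # luma channel as a greyscale clip
u = v.UToY().BicubicResize(v.Width, v.Height)     # U plane, bicubic-scaled to frame size
u = u.ColorYUV(cont_y=100)                        # boost contrast so the faint detail is visible
v2 = v.VToY().BicubicResize(v.Width, v.Height)    # V plane, same treatment
return u                                          # return whichever channel you want to view
```

    UToY()/VToY() promote a chroma plane to the luma of a new clip, which is why the result appears as a greyscale image at chroma resolution before the resize.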
  22. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
    Search Comp PM
    chroma resolution is far lower than the luma resolution
    That part I have understood. And I'm not under the illusion that I'm going to magically make these HD or DVD quality. Somehow I don't think I would have missed seeing a magic cure for bleed proclaimed at least once in all the posts I've read. And even if I had, I'm confident that one of you would have brought it to my attention. Truthfully, I'm not unhappy with the capture. The defect that struck me as most noticeable at this point is the bleed, so I mentioned it. If I can do anything about it without messing up something else, great. If not, it is what it is. I'm afraid you think I'm agonizing over this much more than I am. This capture stuff is an adventure, and sometimes it's just interesting to look into blind alleys.

    Back to the books
  23. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    The image I posted in #170 should offer some anti-bleed encouragement. Not perfect, but it sure beats the original.
    Last edited by sanlyn; 19th Mar 2014 at 03:15.
  24. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
    Search Comp PM
    sanlyn, is the script available? Yesterday I "discovered" the Restoration forum & found a super interesting wedding thread you've been working on lately. Every time I'm about to get back to experimenting with avisynth beyond the histograms, I just check the couple of forums I'm "following" & find interesting stuff that sucks me in. I'm becoming a forum junkie & not getting anything done on my own stuff.
  25. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    Originally Posted by dianedebuda View Post
    sanlyn, is the script available? Yesterday I "discovered" the Restoration forum & found a super interesting wedding thread you've been working on lately.
    That was a badly aged and damaged video, but the owner did a lot of hard hands-on work and learned to use the scripts he found in that and earlier threads. Gotta hand it to him. I thought he'd lose patience at any moment when he saw those complicated scripts, but he kept right on.

    I used pretty much the same denoisers on that "c" video as on the other captures, but different color settings. I've already archived it to an external drive, but I'll look tonight and fish it out.

    Originally Posted by dianedebuda View Post
    Every time I'm about to get back to experimenting with avisynth beyond the histograms, I just check the couple of forums I'm "following" & find interesting stuff that sucks me in. I'm becoming a forum junkie & not getting anything done on my own stuff.
    Join the club. I know exactly how you feel. Every time I wrest myself away for a while, jagabo shows up with something new.
    Last edited by sanlyn; 19th Mar 2014 at 03:15.
  26. Member
    Join Date
    Jan 2014
    Location
    Austin, TX
    Search Comp PM
    No hurry. Deep in the Olympics right now. So maybe "thread browsing" isn't my only time sink.
    Quote Quote  
    Sanlyn pointed out the use of MergeChroma(aWarpSharp2(depth=30)) to sharpen chroma and ChromaShift() to better align it. aWarpSharp2() is a special kind of sharpening filter; the depth argument controls how much sharpening is applied. MergeChroma() then merges that sharpened chroma with the original luma. Sometimes you may want to sharpen the luma too, but depth=30 would be far too strong for luma. Compare this sharpened U channel to the earlier example:

    [Attachment 23686 - warpsharpu.jpg]
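    Put together, the routine described above might look like the following sketch. The plugin filenames and the shift amount are illustrative assumptions, not values from this thread; aWarpSharp2 and ChromaShift are external plugins that must be loaded first.

```avisynth
LoadPlugin("aWarpSharp2.dll")        # external sharpening plugin (path is a placeholder)
LoadPlugin("ChromaShift.dll")        # external chroma-alignment plugin (path is a placeholder)
AviSource("capture.avi")             # placeholder filename
ChromaShift(C=-4)                    # example horizontal chroma shift; tune per clip
MergeChroma(aWarpSharp2(depth=30))   # sharpened chroma merged onto the original luma
```

    The order matters: align the chroma first, then sharpen it, so the warp-sharpening snaps the chroma edges onto the already-correct luma edges.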
  28. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    Thanks. I've seen other explanations of the MergeChroma_awarpsharp2 routine, but yours is easier to visualize.
    Last edited by sanlyn; 19th Mar 2014 at 03:15.
  29. Formerly 'vaporeon800' Brad's Avatar
    Join Date
    Apr 2001
    Location
    Vancouver, Canada
    Search PM
    Originally Posted by dianedebuda View Post
    vaporeon800 uses the AlignExplode a lot, so I'm trying to find out why - what to look for & when to use it.
    I was matching up captures made by different devices, so after the horizontal and vertical shifts, the scopes were super useful to objectively see how much the levels differed. The isolated chroma images are also good to see when chroma noise reduction is or isn't present.

    Originally Posted by sanlyn View Post
    Another thing you will notice when you raise/lower contrast is that the U and V channels will change shape accordingly, in ways that match changes in the Y channel.
    Each channel should be independently adjustable with ColorYUV.
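    For example, ColorYUV exposes separate gain/offset/gamma/contrast arguments for each channel, so one call can touch Y, U and V independently (the filename and values below are illustrative only):

```avisynth
AviSource("capture.avi")                    # placeholder filename
ColorYUV(cont_y=20, off_u=-5, gain_v=10)    # raise luma contrast, nudge U down, scale V up
```

    Because each argument is suffixed _y, _u or _v, a change to one channel leaves the other two untouched.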
  30. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    That's what I mean. Y, U and V can each be adjusted.
    Last edited by sanlyn; 19th Mar 2014 at 03:15.


