VideoHelp Forum
  1. As many here already know, video from VHS sources often has serious color bleeding. Sometimes that is caused by misaligned chroma channels, and it can be fixed by simply shifting the channel over a few pixels with something like the flaXen VHS filter.

    However, there is much more to the color bleed story. A 720x480 4:2:2 video has 360 chroma samples in each line, and even a 4:1:1 DV video has 180. However, I've read here that VHS sources have terrible chroma resolution, something like 40 samples per line. This means VHS sources show several times more color bleed than any modern subsampling format makes necessary. Sometimes the bleed goes to the left, sometimes to the right, and both can occur within the same image or even on the same line, because the direction depends on how close a sharp change in luma falls to the center of an original VHS chroma sample.
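    To put rough numbers on it (the VHS figure is an estimate based on its limited chroma bandwidth, not an exact spec), here's how wide one chroma sample ends up in each format:

```python
# Rough chroma-resolution comparison. The sample counts are illustrative;
# the VHS number in particular is an approximation, not a standard.
LINE_WIDTH = 720  # luma samples per line in a typical capture

formats = {
    "4:2:2 capture": 360,  # chroma subsampled 2:1 horizontally
    "4:1:1 DV":      180,  # chroma subsampled 4:1 horizontally
    "VHS source":     40,  # approximate effective chroma samples
}

for name, samples in formats.items():
    width = LINE_WIDTH / samples  # luma pixels spanned by one chroma sample
    print(f"{name}: {samples} samples/line, ~{width:.0f} px per chroma sample")
```

    So a single VHS chroma sample can smear across roughly 18 luma pixels, versus 2 for a 4:2:2 capture, which is why the bleed is so much worse than the working format alone would cause.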

    Obviously, something like that cannot be fixed with a filter that simply displaces a channel to the left or the right. That said...I do believe that it can be fixed, at least within the limitations of our current working format (4:1:1, 4:2:2, etc.)! My idea is relatively simple. We humans see color bleeding when the same color sample is used (either in whole or in part via blending) on both sides of a clear object boundary, as defined by sharp changes in the luma channel...and those same edges can be detected with image processing and realigned with smart resampling. To fix this, I would suggest the following algorithm for an AVISynth filter in any YUV colorspace:

    1.) Run an edge-detection filter on the luma channel of the image (or each field of the interlaced image), using whatever kind of convolution, denoising, and sharpening necessary to get decent edge detection. Save this information in a temporary buffer.
    2.) Run an edge-detection filter on the coarser chroma channels of the image (or each field of the interlaced image), using whatever kind of convolution, denoising, and sharpening necessary to get decent edge detection. Save this information in a temporary buffer. Note that the chroma channel will have much softer transitions, since our capture devices probably captured some blended values as the analog signal shifted from one value to another.
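    For steps 1 and 2, a minimal sketch of per-plane edge detection (Python/NumPy just for illustration; the blur radius and threshold are placeholder values, and a real AviSynth filter would use a proper convolution kernel and denoiser as described above):

```python
import numpy as np

def edge_map(plane, threshold=24.0, blur=1):
    """Mark horizontal edges in one plane (luma or one chroma plane).

    A hypothetical sketch: box-blur each line to suppress noise, then
    flag positions where the horizontal gradient exceeds a threshold.
    The result is the temporary edge buffer described in steps 1-2.
    """
    p = plane.astype(np.float32)
    if blur > 0:
        kernel = np.ones(2 * blur + 1, dtype=np.float32) / (2 * blur + 1)
        p = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), 1, p)
    grad = np.abs(np.diff(p, axis=1))   # horizontal gradient per line
    edges = np.zeros(plane.shape, dtype=bool)
    edges[:, 1:] = grad > threshold     # edge buffer, same size as plane
    return edges
```

    The chroma planes would be run through the same function, likely with a lower threshold since their transitions are softer.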
    3.) The edges will be roughly aligned with each other, but they will often be separated by some distance due to the coarser chroma sampling in the VHS signal. Our goal is to align the edges in the chroma channels more closely with the edges in the luma channel. Working in a 4:2:2 or even 4:1:1 format, we now have a much finer grid to shift chroma values around on than the original VHS signal had. We will work with seven field-sized buffers per field:
    • The source luma buffer. (Actually, we have no further use for this, but we needed it for edge detection earlier.)
    • Two source chroma buffers.
    • The luma edge detection buffer.
    • Two chroma edge detection buffers.
    • Two output buffers for the fixed chroma values.
    4.) Everything we do from this point on will be in one dimension, on a line-by-line basis. First, trace the edge detection buffers in the current line from left to right, recording matched sets of luma edges and chroma edges to a list. This way, we know exactly which edges best line up with/correspond to one another.
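    Step 4's matching pass could look something like this sketch, operating on one line's edge positions (the `max_dist` tolerance is my own assumption, roughly one VHS-era chroma sample wide; a real filter would tune it):

```python
def match_edges(luma_edges, chroma_edges, max_dist=8):
    """Pair each chroma edge with the nearest luma edge on the same line.

    Hypothetical step-4 sketch: both inputs are lists of column indices
    for ONE line. A chroma edge with no luma edge within `max_dist`
    pixels is left unmatched, so step 5 can ignore it.
    """
    pairs = []
    for ce in chroma_edges:
        if not luma_edges:
            break
        nearest = min(luma_edges, key=lambda le: abs(le - ce))
        if abs(nearest - ce) <= max_dist:
            pairs.append((nearest, ce))  # (luma position, chroma position)
    return pairs
```

    This would be run once per line per chroma channel, giving the list of matched edge sets the trace in step 5 walks through.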
    5.) Now, starting at the beginning of the line again, start tracing from left to right, copying chroma values from the source buffers to the destination buffers, until coming across an edge in any of the three buffers. If this edge is alone without any counterparts in other channels, ignore it and move on. If there are two edges, but they're both in chroma, ignore them and move on. Otherwise, if there is a luma edge and at least one corresponding chroma edge, we must operate differently until we're past all corresponding edges:
    6.) When tracing between edge boundaries, use the edge in the luma channel as a guide. The current pixel is either to the left of the luma edge, on it, or to the right of it. If the pixel is left of the edge, copy chroma values from the left of the edge in the source chroma buffers to the final buffers; if it is on the edge, copy (or blend) chroma values from the edge itself; if it is right of the edge, copy chroma values from the right of the edge. If one of the chroma buffers has no matching edge, copy that buffer normally, pixel for pixel. Once we're past this set of edges, continue the trace as normal until hitting another edge. Lather, rinse, repeat. (Note that we may sometimes be sampling chroma from the source that was blended from nearby peak values. If we want, we can resample in a way that sharpens these edges, for even crisper alignment of chroma and luma.)
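    Here's a sketch of the step 5/6 trace for one line, assuming the chroma plane has already been upsampled to full line width and `pairs` holds (luma edge, matched chroma edge) positions from step 4. It simply snaps each chroma transition onto its luma edge by filling the gap between them from the far side of the chroma edge; the handling of on-edge blending and overlapping pairs is left out:

```python
def realign_line(chroma_src, pairs):
    """Snap chroma transitions to their matched luma edges on one line.

    Hypothetical sketch: pixels between a matched (luma_pos, chroma_pos)
    pair take the chroma value from beyond the chroma edge, so the color
    change ends up aligned with the luma change.
    """
    out = list(chroma_src)
    for luma_pos, chroma_pos in pairs:
        lo, hi = sorted((luma_pos, chroma_pos))
        if chroma_pos > luma_pos:
            # Chroma edge lags the luma edge: pull the right-side color left.
            fill = chroma_src[min(hi, len(chroma_src) - 1)]
        else:
            # Chroma edge leads the luma edge: push the left-side color right.
            fill = chroma_src[max(lo - 1, 0)]
        for i in range(lo, hi):
            out[i] = fill
    return out
```

    For example, if the luma edge sits at column 100 but the chroma transition doesn't happen until column 104, the four pixels in between get the right-side color, eliminating that stretch of bleed.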

    Note: The reason full-blown edge detection of the whole field is necessary up front is because a simple one-dimensional scan would be too easily influenced by noise.
    Improvements: Halo artifacts from prior sharpening are likely to confuse the above version of the filter, since certain edges will be duplicated. I haven't formalized how to deal with them, but I have a rough idea.

    So...has someone else already created a filter like this, or is this a novel concept?
    (Honestly, looking at my own captures, I wonder if my VCR or capture card already does something like this in hardware, since I see so little color bleeding. If not though, I think it would be a pretty nice filter...)
    Last edited by Mini-Me; 24th Nov 2010 at 14:36.