VideoHelp Forum

  1. Member
    Join Date: May 2016
    Location: India
    Hey there, folks of VideoHelp.
    I'm looking for a high-quality AviSynth filter to remove the noise from the video sample attached here, without much noticeable loss of detail. The noise is in the person's dark hair. I don't mind a slow filter.
    Image Attached Files
  2. I don't see any noise; perhaps you're referring to the scratches and spots typical of worn cinema film. Search for film (movie) scratch removal / restoration. Normally such restoration involves manual work, so be prepared to process the footage frame by frame. Good luck...

    Be prepared for some expenditure: http://algosoft-tech.com/downloads/
    Last edited by pandy; 9th Apr 2018 at 05:02.
  3. Member
    The hair colour fluctuates from black to brown. Isn't that supposed to be some kind of colour noise?
  4. The fluctuating hair color is difficult to get rid of. You can use a temporal denoiser on the chroma to reduce it in the short term (a few frames), but it will still fluctuate over the longer term (quarter second+). If you use stronger settings you may reduce it more but you will start seeing smearing or ghosting artifacts when things are in motion.
  5. Originally Posted by yukukuhi View Post
    The hair colour fluctuates from black to brown. Isn't that supposed to be some kind of colour noise?
    You can simulate chroma coring (apply a non-linear gain to the chroma): small chroma values (close to 0, which is stored as 128) are set to 128, and only chroma above a particular threshold is passed unchanged. You could also build a motion map and base the non-linear chroma gain on it. Wavelet decomposition may also help isolate such low-frequency changes.
  6. Member
    I can only understand your explanation to some extent. Could you explain it in simpler terms so that I can fully understand?
  7. Originally Posted by yukukuhi View Post
    I can only understand your explanation to some extent. Could you explain it in simpler terms so that I can fully understand?
    For chrominance, zero (meaning no colour, only luminance, i.e. greyscale) is stored as 128 in 8-bit samples. You can use non-linear level processing where values close to that zero point (such as 126 or 130) are set to exactly 128, practically nullifying the chrominance signal; in your case, that would make the hair on the head black.
    This may need some experimenting but should be quite easy to perform. A simple trick, but it may work for you. Setting very low signal values to zero like this is called coring.
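    In pixel terms, the coring trick above is just a per-sample threshold. A minimal Python sketch (the threshold of 3 is an arbitrary example value, not anything from the thread):

    ```python
    def core_chroma(sample, threshold=3):
        """Chroma coring for 8-bit samples: values near the neutral
        point 128 (i.e. almost no colour) are snapped to exactly 128;
        clearly coloured values pass through unchanged."""
        if abs(sample - 128) <= threshold:
            return 128
        return sample

    # The faint brown tint in black hair sits close to neutral, so it is nullified:
    print(core_chroma(126))  # within threshold -> 128
    print(core_chroma(140))  # clearly coloured -> 140, unchanged
    ```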

    For something more complex, you could apply a strong spatial lowpass filter and average the lowpass-filtered data over time to stabilize the colour fluctuations. Adding some motion detection/adaptivity would further reduce the artifacts introduced by temporal averaging. Wavelet decomposition lets you quickly extract the low-frequency (i.e. large-area) signal, so you avoid averaging small details over time.
    Decomposition may look like this:
    [Attached image: wd_scales_0.png, an example wavelet decomposition into scales plus a residual.]
    The residual part, in your case, is what should be treated as noise (it fluctuates over time); with temporal averaging you can stabilize and reduce those fluctuations (I think the change is periodic, so windowed averaging should help isolate the correct signal).
    More or less, this is statistical processing, and you may need special software (AviSynth with plugins may be helpful, but it will probably be a lot of work, as it is a multi-stage operation).
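    The windowed temporal averaging described above amounts to a moving average over the chroma of neighbouring frames. A rough Python sketch (the window radius and sample values are illustrative assumptions, not anything tuned for this clip):

    ```python
    def smooth_chroma(frames, radius=2):
        """Windowed temporal average: each output sample is the mean of the
        chroma samples within +/- radius frames (window clamped at the ends)."""
        out = []
        for i in range(len(frames)):
            lo = max(0, i - radius)
            hi = min(len(frames), i + radius + 1)
            window = frames[lo:hi]
            out.append(round(sum(window) / len(window)))
        return out

    # A chroma sample that flickers around neutral 128 over five frames
    # is pulled toward a steadier value:
    print(smooth_chroma([128, 140, 120, 136, 124]))
    ```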
  8. Member
    OK, could you suggest some filters to work with?
  9. I'm not aware of any ready-to-use filter for ffmpeg or AviSynth (I don't usually use anything beyond those two). You could try a filter like VagueDenoiser under AviSynth, extract the residual layer, and process that layer in the temporal domain to create a stabilized chroma plane.
    Wavelet decomposition is available in GIMP and Krita, but those two are designed to work mostly with still pictures (though both surely have extensions for working on sequences of pictures).
    In your case, the best option would be dedicated software for statistical video processing, but I'm not aware of anything open source/freeware; perhaps someone else can recommend video restoration software.
  10. Member
    That's no problem; I can buy it if it's not freeware.
  11. Are you really sure the hair isn't supposed to be changing color? Especially in the first half of the clip the lighting is flickering and a lot of the changes in hair color are related to that. Shadowed dark areas aren't changing color. Do other shots in the movie have the same problem?

    In any case, I used a combination of brightness and saturation to specify what areas of the picture should be de-saturated:


    Code:
    ################################################################
    #
    #  Build an image based only on the saturation of pixels
    #
    ################################################################
    
    function GetSaturation(clip c)
    {
      U = UtoY(c)
      U = Overlay(U.ColorYUV(off_y=-128), U.Invert.ColorYUV(off_y=-128), mode="add")
    
      V = VtoY(c)
      V = Overlay(V.ColorYUV(off_y=-128), V.Invert.ColorYUV(off_y=-128), mode="add")
    
      Overlay(U, V, mode="add")
      BilinearResize(c.width, c.height)
    }
    
    ################################################################
    
    
    Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3) 
    
    # a little cleanup first
    deblock_qed(quant1=28, quant2=32)
    vInverse()
    src = last
    
    smask = GetSaturation().ColorYUV(gain_y=5000, off_y=-350).Invert() # a mask based on saturation, inverted
    bmask = ColorYUV(gain_y=2000, off_y=-400).Invert().GreyScale() # a mask based on brightness, inverted
    sbmask = Overlay(bmask, smask, mode="multiply") # a mask based on both
    Overlay(last, GreyScale(), mask=sbmask) # overlay a greyscale image only in areas of low saturation and low brightness
    
    StackVertical( StackHorizontal(src, last), StackHorizontal(bmask, smask)) # stack them 
    BilinearResize(width/2, height/2) # half size
    At the top left is the original image, top right, the processed image. Below them are the brightness mask and saturation mask for reference.
    Image Attached Files
  12. Member
    Great job, jagabo! I really appreciate your help. OK, now how should I go about it?

    And could you explain in detail what you did there?

    Below I'm attaching the full video clip. The entire video suffers from the same problem of changing hair colour.
    Image Attached Files
    Last edited by yukukuhi; 12th Apr 2018 at 00:27.
  13. The earlier script was tuned for the short clip where the hair was very dark. The full video has portions with brighter hair where the script doesn't work. If you adjusted the script to desaturate the hair in those shots, other parts of the picture would be inappropriately desaturated, like the dark skin of some of the actors. A better approach here is to use temporal filters, as suggested earlier. This combination smooths the colors temporally but runs very slowly:

    Code:
    Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3) 
    
    # a little cleanup first
    deblock_qed(quant1=28, quant2=32) # smooth block artifacts from over-compression
    vInverse() # blur away residual combing from horizontal time base errors
    
    # stabilize colors
    MergeChroma(TemporalDegrain()) # works best for short duration changes
    MergeChroma(TTempSmooth(strength=8, maxr=7, lthresh=8, cthresh=10, lmdiff=8, cmdiff=12)) # smooths longer duration changes
    Image Attached Files
    Last edited by jagabo; 12th Apr 2018 at 15:58.
  14. Member
    Originally Posted by jagabo View Post

    Code:
    ################################################################
    #
    #  Build an image based only on the saturation of pixels
    #
    ################################################################
    
    function GetSaturation(clip c)
    {
      U = UtoY(c)
      U = Overlay(U.ColorYUV(off_y=-128), U.Invert.ColorYUV(off_y=-128), mode="add")
    
      V = VtoY(c)
      V = Overlay(V.ColorYUV(off_y=-128), V.Invert.ColorYUV(off_y=-128), mode="add")
    
      Overlay(U, V, mode="add")
      BilinearResize(c.width, c.height)
    }
    
    ################################################################
    
    
    Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3) 
    
    # a little cleanup first
    deblock_qed(quant1=28, quant2=32)
    vInverse()
    src = last
    
    smask = GetSaturation().ColorYUV(gain_y=5000, off_y=-350).Invert() # a mask based on saturation, inverted
    bmask = ColorYUV(gain_y=2000, off_y=-400).Invert().GreyScale() # a mask based on brightness, inverted
    sbmask = Overlay(bmask, smask, mode="multiply") # a mask based on both
    Overlay(last, GreyScale(), mask=sbmask) # overlay a greyscale image only in areas of low saturation and low brightness
    
    StackVertical( StackHorizontal(src, last), StackHorizontal(bmask, smask)) # stack them 
    BilinearResize(width/2, height/2) # half size
    At the top left is the original image, top right, the processed image. Below them are the brightness mask and saturation mask for reference.
    jagabo, could you explain your script step by step? I'm quite curious to learn it, especially the first half. Is it an AviSynth script? I've never come across one before.

    I may be a bit slow, but what is the purpose of the brightness and saturation masks shown for reference?

    Lastly, thanks once again for your timely help.
  15. The basic idea of the script was to desaturate pixels that were dark and of low saturation. To do that, I blended a greyscale version of the video with the normal color video using an alpha mask that indicated which parts of the picture were dark and of low saturation. If you don't know what alpha masking is, read up on it. Basically, when you overlay one image onto another, the alpha mask indicates which pixels of the overlay are opaque (255), which are transparent (0), and which are partially transparent (values between 1 and 254).
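    Per pixel, that alpha-mask blend is just a weighted average of the two images. A small Python sketch of the arithmetic (8-bit values assumed; the sample values are arbitrary):

    ```python
    def alpha_blend(overlay, background, mask):
        """Blend one pixel pair: mask 255 means the overlay is fully
        opaque, mask 0 fully transparent (background shows through),
        and in-between values mix the two proportionally."""
        return (overlay * mask + background * (255 - mask)) // 255

    print(alpha_blend(200, 50, 255))  # opaque mask -> overlay value, 200
    print(alpha_blend(200, 50, 0))    # transparent mask -> background, 50
    print(alpha_blend(200, 50, 128))  # half mask -> roughly midway
    ```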

    In the first part of the script I create a function called GetSaturation. It converts a color image to a greyscale image that indicates the saturation of each pixel -- the brighter a pixel the more saturated was the color of that pixel. Technically it's not exactly saturation (from the HSV color model) but the sum of rectified chroma values.

    I don't know if you're familiar with the YUV color scheme. Basically, Y (greyscale intensity) is the brightness of a pixel, U and V (chroma) represent colors that are added or subtracted from the greyscale value to create the color of the pixel. Y ranges from 0 (full black) to 255 (full white). U and V have the value 128 when a pixel is a shade of grey. The more the U and V values deviate from 128 the more colorful the pixel is. Essentially, the final color is Y + (U-128) + (V-128). More info on YUV: https://en.wikipedia.org/wiki/YUV

    Code:
      U = UtoY(c)
    This converts the U channel of the source video to a greyscale image.

    Code:
      U = Overlay(U.ColorYUV(off_y=-128), U.Invert.ColorYUV(off_y=-128), mode="add")
    This rectifies U, i.e., computes the absolute value of (U - 128): the positive values of (U - 128) and the negative values (inverted, making them positive) are added together. See the AviSynth docs for ColorYUV and Overlay. The same is done for the V channel:

    Code:
      V = VtoY(c)
      V = Overlay(V.ColorYUV(off_y=-128), V.Invert.ColorYUV(off_y=-128), mode="add")
    Finally, the rectified U and V are added together with Overlay and the result (how colorful each pixel is) is returned to the caller.
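    So the measure GetSaturation builds is effectively |U - 128| + |V - 128| per pixel. The same arithmetic in a few lines of Python (clipping to the 8-bit range is my assumption about how Overlay's "add" mode saturates):

    ```python
    def saturation(u, v):
        """Sum of rectified chroma: how far U and V deviate from the
        neutral value 128, clipped to the 8-bit range [0, 255]."""
        return min(abs(u - 128) + abs(v - 128), 255)

    print(saturation(128, 128))  # neutral grey pixel -> 0
    print(saturation(110, 150))  # 18 + 22 -> 40, mildly colourful
    ```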

    The main portion of the script first gets the source video:

    Code:
    Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3)
    Note that when you don't specify a name for a stream AviSynth defaults to the name "last". So that line is equivalent to

    Code:
    last = Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3)
    This is important because "last" is used later in the script.

    The script then runs a deblocking filter (smooths out blocky artifacts caused by too much compression) and a filter that blends together residual comb-like artifacts where scan lines of the two fields don't line up perfectly (horizontal time base errors of the analog tape system):

    Code:
    deblock_qed(quant1=28, quant2=32)
    vInverse()
    It then builds a mask of the saturation of each pixel:

    Code:
    smask = GetSaturation().ColorYUV(gain_y=5000, off_y=-350).Invert() # a mask based on saturation, inverted
    And adjusts it so that very low saturation areas are black, and slightly higher saturation areas are white.

    Similarly, a brightness mask is built and adjusted so that very dark pixels are black and lighter pixels are white:

    Code:
    bmask = ColorYUV(gain_y=2000, off_y=-400).Invert().GreyScale() # a mask based on brightness, inverted
    Those two masks are then merged together with Overlay in Multiply mode:

    Code:
    sbmask = Overlay(bmask, smask, mode="multiply") # a mask based on both
    This has an effect something like a logical AND of the two masks: only areas where both masks are bright are bright in the final mask.

    The original video and a greyscale version of it are blended together using that mask:

    Code:
    Overlay(last, GreyScale(), mask=sbmask)
    Only areas where the mask is white are overlaid with the greyscale picture. Areas where the mask is black are not affected (they remain in color).

    Finally the original image, the filtered image, the brightness mask, and the saturation mask are stacked in a 2x2 array and downscaled by half (each axis):

    Code:
    StackVertical( StackHorizontal(src, last), StackHorizontal(bmask, smask)) # stack them
    BilinearResize(width/2, height/2) # half size
    The supplied video shows both masks (but not the final multiplied mask) as a reference. If you want to see the final mask you can return(sbmask) any time after it's created.
  16. Member
    Nicely explained, very enlightening.

    By the way, could you be kind enough to explain this script as well? For example, how does MergeChroma work?

    Originally Posted by jagabo View Post

    Code:
    Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3) 
    
    # a little cleanup first
    deblock_qed(quant1=28, quant2=32) # smooth block artifacts from over-compression
    vInverse() # blur away residual combing from horizontal time base errors
    
    # stabilize colors
    MergeChroma(TemporalDegrain()) # works best for short duration changes
    MergeChroma(TTempSmooth(strength=8, maxr=7, lthresh=8, cthresh=10, lmdiff=8, cmdiff=12)) # smooths longer duration changes
  17. You could just read the docs for each of the filters.

    MergeChroma() combines the luma (Y) from the first input with the chroma (UV) of the second. Since only one input is specified it uses the luma from "last" -- the output of the previous filter.
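    Conceptually, MergeChroma just recombines planes from two sources. A toy Python sketch, with frames represented as (Y, U, V) plane tuples (that representation is purely for illustration, not how AviSynth stores frames):

    ```python
    def merge_chroma(luma_source, chroma_source):
        """Keep the Y plane from the first frame and take the U and V
        planes from the second, like AviSynth's MergeChroma(clip1, clip2)."""
        y, _, _ = luma_source
        _, u, v = chroma_source
        return (y, u, v)

    sharp = ([10, 20], [140], [116])     # detailed luma, flickering chroma
    smoothed = ([11, 19], [130], [126])  # denoised chroma, softened luma
    print(merge_chroma(sharp, smoothed)) # sharp Y kept, smoothed U/V taken
    ```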
  18. Member
    OK, could you explain this in detail: CPU2="ooooxx", Info=3?

    Also, seeing that you're knowledgeable, I would like to familiarize myself with some of the artifacts out there, like banding, haloing, field blending and frame blending, along with some sample pics. Sorry if you think I'm asking too much.
    Last edited by yukukuhi; 14th Apr 2018 at 10:32.
  19. Again, you could read the docs for Mpeg2Source().
  20. Originally Posted by yukukuhi View Post
    Sorry if you think I'm asking too much.
    I don't think you're asking too much; I know you're asking too much. No one minds helping, but only when it looks like the asker has read and studied the information out there, and you haven't. In this forum and in the AviSynth forum at Doom9 there are many, many threads about what you want to know. The AviSynth website has plenty of information about the filters and links to the discussions about them on Doom9. Read there, and then read some more. Perhaps the most valuable collection of docs about DVDs, and especially about how to use AviSynth to work with them, are the docs included in the DGMPGDec package. There are three you should read over and over again until you've virtually memorized them. The answers to your questions about "CPU2="ooooxx", Info=3" are easily found with a quick look inside the DGDecode User Manual.

    And, if you're in India, then you should very quickly familiarize yourself with unblending (SRestore) if you work with NTSC DVDs.
  21. Member
    Cool.
    I'll begin my quest. Thanks a lot for your time.