Hey there folks of Videohelp
I'm looking for the best, highest-quality AviSynth filter for removing noise from this video sample I'm attaching, without much noticeable loss of detail. The noise is in the person's dark hair. I don't care if it's a slow filter.
-
-
I don't see any noise - perhaps you're referring to scratches and spots from worn cinema film. Search for film (movie) scratch removal / restoration. Such restoration normally involves manual work, so be prepared to process frame by frame manually. Good luck...
Be prepared for some expenditure: http://algosoft-tech.com/downloads/
Last edited by pandy; 9th Apr 2018 at 05:02.
-
The hair colour fluctuates from black to brown. Isn't that supposed to be some kind of colour noise?
-
The fluctuating hair color is difficult to get rid of. You can use a temporal denoiser on the chroma to reduce it in the short term (a few frames), but it will still fluctuate over the longer term (quarter second+). If you use stronger settings you may reduce it more but you will start seeing smearing or ghosting artifacts when things are in motion.
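To illustrate what a temporal chroma denoiser does in principle (a toy Python sketch, not the internals of any particular AviSynth filter), here is a per-pixel sliding-window average; the window radius plays the same role as a filter's temporal radius setting:

```python
# Toy illustration of temporal chroma smoothing (not a real AviSynth filter):
# average each pixel's chroma value over a sliding window of frames.
# A larger radius suppresses fluctuation more, but smears motion.

def temporal_smooth(frames, radius):
    """frames: list of per-frame chroma values for one pixel (0-255).
    Each value is replaced by the mean over [i - radius, i + radius]."""
    out = []
    for i in range(len(frames)):
        lo = max(0, i - radius)
        hi = min(len(frames), i + radius + 1)
        window = frames[lo:hi]
        out.append(round(sum(window) / len(window)))
    return out

# A chroma value flickering around neutral grey (128):
flicker = [120, 136, 122, 134, 121, 135, 123, 133]
print(temporal_smooth(flicker, 2))
```

The original values swing by 16 levels; the smoothed ones stay within a few levels of 128. This is also why stronger settings ghost: pixels that genuinely changed get averaged with their past and future values.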
-
You can simulate chroma coring: apply a non-linear gain to the chroma so that small chroma values (close to 0, i.e. 128 in 8-bit) are set to exactly 128, while chroma above a particular threshold passes unchanged. You could also build a motion map and make the non-linear chroma gain depend on it. Wavelet decomposition may also help to isolate such low-frequency changes.
-
I can only understand your words to some extent. Could you explain it in simpler terms so that I can properly and fully understand?
-
Chrominance 0 (meaning no colour, only luminance, i.e. greyscale) is stored as 128 in 8-bit samples. You can apply non-linear level processing where values close to 0 (i.e. 128), such as 130 or 126, are set to exactly 128, which practically nullifies the chrominance signal; in your case it would make the hair on the head pure black.
This may need some experimenting but should be quite easy to perform. A simple trick, but it may work for you. Setting very low-level signal to zero like this is called coring.
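As a numeric sketch of the coring idea (toy Python for illustration only; the threshold value here is arbitrary):

```python
# Toy chroma coring: chroma samples near the neutral value (128 in 8-bit)
# are snapped to exactly 128; values past the threshold pass unchanged.
# The threshold of 3 is an arbitrary example value.

def core_chroma(value, threshold=3):
    """value: 8-bit chroma sample (0-255). Neutral (no colour) is 128."""
    if abs(value - 128) <= threshold:
        return 128          # kill near-neutral chroma -> pure grey/black
    return value            # leave stronger colours alone

print([core_chroma(v) for v in [126, 128, 130, 140, 100]])
```

Only the weak chroma values (126, 130) get snapped to 128; the strong ones (140, 100) survive, so saturated areas keep their colour.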
For something more complex, you may consider a very strong lowpass spatial filter and averaging the lowpass-filtered data over time to stabilize the colour fluctuations. With some motion detection/adaptivity you can further reduce the artifacts introduced by temporal averaging. Wavelet decomposition lets you quickly extract such low-frequency (i.e. large-surface) signal, so you avoid averaging small details over time.
Decomposition may look like this:
The residual part, in your case, is the one that behaves as noise (it fluctuates over time); with temporal averaging you can stabilize and reduce those fluctuations (I think this is a periodic change, so windowed averaging should help to isolate the correct signal).
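A toy 1-D sketch of the decomposition idea (Python, purely illustrative; a box blur stands in for a real lowpass or wavelet stage): split the signal into a lowpass part and a residual. The two sum back exactly to the original, so the residual can be processed on its own and recombined.

```python
# Split a signal into lowpass + residual. Reconstruction is exact by
# construction (residual = signal - lowpass), so you can denoise the
# residual temporally and add it back without touching the lowpass part.

def lowpass(signal, radius=2):
    """Simple box blur standing in for a real lowpass/wavelet stage."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split(signal, radius=2):
    low = lowpass(signal, radius)
    residual = [s - l for s, l in zip(signal, low)]
    return low, residual

signal = [10, 12, 50, 52, 12, 10]
low, res = split(signal)
print([round(l + r) for l, r in zip(low, res)])  # reconstructs the original
```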
More or less, this is statistical processing, and you may need special software (AviSynth with plugins may be helpful, but it will probably be a lot of work, as it is a multi-stage operation).
-
I'm not aware of any ready-to-use filter for ffmpeg or AviSynth (I don't usually use anything beyond those two). You can try filters like VagueDenoiser under AviSynth, extract the residual layer, and process that layer in the temporal domain to create a stabilized chroma plane.
Wavelet decomposition is available in Gimp and Krita, but those two are designed to work mostly with static pictures (though both have extensions for working on sequences of pictures).
In your case the best option would be dedicated software for statistical video processing, but I'm not aware of anything open source/freeware - perhaps someone can suggest some video restoration software.
-
Are you really sure the hair isn't supposed to be changing color? Especially in the first half of the clip the lighting is flickering and a lot of the changes in hair color are related to that. Shadowed dark areas aren't changing color. Do other shots in the movie have the same problem?
In any case, I used a combination of brightness and saturation to specify what areas of the picture should be de-saturated:
Code:
################################################################
#
# Build an image based only on the saturation of pixels
#
################################################################
function GetSaturation(clip c)
{
    U = UtoY(c)
    U = Overlay(U.ColorYUV(off_y=-128), U.Invert.ColorYUV(off_y=-128), mode="add")
    V = VtoY(c)
    V = Overlay(V.ColorYUV(off_y=-128), V.Invert.ColorYUV(off_y=-128), mode="add")
    Overlay(U, V, mode="add")
    BilinearResize(c.width, c.height)
}
################################################################

Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3)

# a little cleanup first
deblock_qed(quant1=28, quant2=32)
vInverse()
src = last

smask = GetSaturation().ColorYUV(gain_y=5000, off_y=-350).Invert() # a mask based on saturation, inverted
bmask = ColorYUV(gain_y=2000, off_y=-400).Invert().GreyScale()     # a mask based on brightness, inverted
sbmask = Overlay(bmask, smask, mode="multiply")                    # a mask based on both
Overlay(last, GreyScale(), mask=sbmask) # overlay a greyscale image only in areas of low saturation and low brightness

StackVertical(StackHorizontal(src, last), StackHorizontal(bmask, smask)) # stack them
BilinearResize(width/2, height/2) # half size
-
Great job, jagabo! I really appreciate your help. OK, now how should I go about it?
And could you explain in detail what you just did there?
Below I'm attaching the full video clip. The entire video suffers from the same problem of changing hair colour.
Last edited by yukukuhi; 12th Apr 2018 at 00:27.
-
The earlier script was tuned for the short clip, where the hair was very dark. The full video has portions with brighter hair where the script doesn't work. If you adjusted the script to desaturate the hair in those shots, other parts of the picture would be inappropriately desaturated -- like the dark skin of some of the actors. A better approach here would be to use temporal filters, as suggested earlier. This combination smooths the colors temporally but runs very slowly:
Code:
Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3)

# a little cleanup first
deblock_qed(quant1=28, quant2=32) # smooth block artifacts from over-compression
vInverse() # blur away residual combing from horizontal time base errors

# stabilize colors
MergeChroma(TemporalDegrain()) # works best for short duration changes
MergeChroma(TTempSmooth(strength=8, maxr=7, lthresh=8, cthresh=10, lmdiff=8, cmdiff=12)) # smooths longer duration changes
Last edited by jagabo; 12th Apr 2018 at 15:58.
-
Jagabo, could you explain your script's functions step by step? I'm quite curious to learn, especially the first half of your script. Is it an AviSynth script? I've never come across one before.
I may be a bit dumb, but what are the brightness and saturation masks for, exactly?
Lastly, thanks once again for your timely help.
-
The basic idea of the script was to desaturate pixels that were dark and of low saturation. To do that I blended a greyscale version of the video with the normal color video using an alpha mask that indicated which parts of the picture were dark and low saturation. If you don't know what alpha masking is read up on it. Basically, when you overlay one image onto another, the alpha mask indicates which pixels of the overlay are opaque (255), which are transparent (0), and which pixels are partially transparent (values between 1 and 254).
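The per-pixel arithmetic behind alpha masking can be sketched like this (toy Python; the exact rounding a real filter uses may differ slightly):

```python
# How an alpha mask blends an overlay onto a base image, per pixel:
# mask 255 -> fully overlay, mask 0 -> fully base, in between -> a mix.

def blend(base, overlay, mask):
    """All arguments are 8-bit values (0-255) for one pixel."""
    return (overlay * mask + base * (255 - mask)) // 255

print(blend(200, 50, 255))  # opaque overlay -> 50
print(blend(200, 50, 0))    # transparent overlay -> 200
print(blend(200, 50, 128))  # roughly half-and-half
```

In the script, `base` is the colour video, `overlay` is the greyscale version, and `mask` is the combined darkness/low-saturation mask, so only dark, weakly coloured pixels get desaturated.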
In the first part of the script I create a function called GetSaturation. It converts a color image to a greyscale image that indicates the saturation of each pixel -- the brighter a pixel the more saturated was the color of that pixel. Technically it's not exactly saturation (from the HSV color model) but the sum of rectified chroma values.
I don't know if you're familiar with the YUV color scheme. Basically, Y (greyscale intensity) is the brightness of a pixel, U and V (chroma) represent colors that are added or subtracted from the greyscale value to create the color of the pixel. Y ranges from 0 (full black) to 255 (full white). U and V have the value 128 when a pixel is a shade of grey. The more the U and V values deviate from 128 the more colorful the pixel is. Essentially, the final color is Y + (U-128) + (V-128). More info on YUV: https://en.wikipedia.org/wiki/YUV
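The "sum of rectified chroma values" measure can be sketched numerically (toy Python illustration of the idea, not the actual filter code):

```python
# GetSaturation's measure is (roughly) how far U and V stray from the
# neutral value 128, added together: the rectified chroma sum.

def saturation(u, v):
    """u, v: 8-bit chroma samples. Returns 0 for a pure grey pixel,
    larger values for more colourful pixels, clipped to 255."""
    return min(255, abs(u - 128) + abs(v - 128))

print(saturation(128, 128))  # grey pixel -> 0
print(saturation(90, 200))   # colourful pixel -> 110
```

This is why the resulting greyscale image is dark where the picture is nearly grey and bright where it is strongly coloured.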
Code:U = UtoY(c)
Code:U = Overlay(U.ColorYUV(off_y=-128), U.Invert.ColorYUV(off_y=-128), mode="add")
Code:
V = VtoY(c)
V = Overlay(V.ColorYUV(off_y=-128), V.Invert.ColorYUV(off_y=-128), mode="add")
The main portion of the script first gets the source video:
Code:Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3)
Code:last = Mpeg2Source("Chinnvar Movie Comedy.d2v", CPU2="ooooxx", Info=3)
The script then runs a deblocking filter (smooths out blocky artifacts caused by too much compression) and a filter that blends together residual comb-like artifacts where scan lines of the two fields don't line up perfectly (horizontal time base errors of the analog tape system):
Code:
deblock_qed(quant1=28, quant2=32)
vInverse()
Code:smask = GetSaturation().ColorYUV(gain_y=5000, off_y=-350).Invert() # a mask based on saturation, inverted
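The huge gain plus negative offset is essentially a thresholding trick. A toy Python sketch of the idea (my reading of ColorYUV's gain semantics -- gain scales by (gain + 256) / 256 -- check the AviSynth docs for the exact formula and processing order):

```python
# Toy version of a huge-gain-plus-offset threshold: multiply, subtract,
# then clamp to 0-255. This pushes mid values toward pure black/white,
# turning a soft greyscale map into a near-binary mask.

def gain_offset(value, gain, offset):
    """value: 8-bit sample. Gain scales by (gain + 256) / 256, ColorYUV-style."""
    scaled = value * (gain + 256) / 256 + offset
    return max(0, min(255, round(scaled)))

# With gain=5000, offset=-350, near-zero saturation stays black while
# even modest saturation clips to full white:
print(gain_offset(0, 5000, -350))   # -> 0
print(gain_offset(30, 5000, -350))  # -> 255
```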
Similarly, a brightness mask is built and adjusted so that very dark pixels are black and lighter pixels are white:
Code:bmask = ColorYUV(gain_y=2000, off_y=-400).Invert().GreyScale() # a mask based on brightness, inverted
Code:sbmask = Overlay(bmask, smask, mode="multiply") # a mask based on both
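Overlay's "multiply" mode, per pixel, scales one mask by the other, so the combined mask is bright only where both inputs are bright (a toy Python sketch of the arithmetic):

```python
# Overlay(..., mode="multiply") on two 8-bit masks: the result passes a
# pixel only where BOTH masks are bright.

def multiply_masks(a, b):
    """a, b: 8-bit mask values (0-255) for one pixel."""
    return (a * b) // 255

print(multiply_masks(255, 255))  # both masks on  -> 255
print(multiply_masks(255, 0))    # either mask off -> 0
print(multiply_masks(128, 128))  # partial in both -> 64
```

So `sbmask` selects only pixels that are dark AND of low saturation; either condition failing drives the mask toward zero.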
The original video and a greyscale version of it are blended together using that mask:
Code:Overlay(last, GreyScale(), mask=sbmask)
Finally the original image, the filtered image, the brightness mask, and the saturation mask are stacked in a 2x2 array and downscaled by half (each axis):
Code:
StackVertical(StackHorizontal(src, last), StackHorizontal(bmask, smask)) # stack them
BilinearResize(width/2, height/2) # half size
-
-
You could just read the docs for each of the filters.
MergeChroma() combines the luma (Y) from the first input with the chroma (UV) of the second. Since only one input is specified, it uses the luma from "last" -- the output of the previous filter. -
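The plane swap MergeChroma performs can be sketched like this (toy Python, representing one pixel of planar YUV as a tuple -- not how AviSynth stores frames):

```python
# MergeChroma keeps luma (Y) from one source and takes chroma (U, V)
# from another. One pixel is modeled here as a (Y, U, V) tuple.

def merge_chroma(luma_src, chroma_src):
    y, _, _ = luma_src      # keep detail/brightness from the first clip
    _, u, v = chroma_src    # take colour from the second clip
    return (y, u, v)

original = (200, 90, 160)   # sharp luma, noisy chroma
smoothed = (180, 128, 130)  # blurred luma, stabilized chroma
print(merge_chroma(original, smoothed))  # -> (200, 128, 130)
```

This is why the temporal filters can be run aggressively on the chroma without softening the picture: the original luma is put back afterwards.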
Ok, could you explain this in detail: CPU2="ooooxx", Info=3?
Also, seeing that you're quite knowledgeable, I would like to familiarize myself with some of the noise and artifact types out there, like banding, haloing, field blending & frame blending, etc., along with some sample pics. Sorry if you think I'm asking too much.
Last edited by yukukuhi; 14th Apr 2018 at 10:32.
-
I don't think you're asking too much; I know you're asking too much. No one minds helping, but only when it looks like the asker has read and studied the information out there. And you haven't. In this forum and in the AviSynth forum of Doom9 there are many, many threads about what you want to know. The AviSynth website has much information about the filters and links to the discussions about them on Doom9. Read there, and then read some more. Perhaps the most valuable collection of docs about DVDs, and especially about how to use AviSynth to work with them, are the docs included in the DGMPGDec package. There are three you should read over and over again until you've virtually memorized them. The answers to your questions about CPU2="ooooxx", Info=3 are easily found with a quick look inside the DGDecode User Manual.
And, if you're in India, then you should very quickly familiarize yourself with unblending (SRestore) if you work with NTSC DVDs.