I want to denoise a video, but only the parts with plain colors (no gradients, and not the blacks). It's a cartoon.
How could I make a mask with masktools2 that targets those colors specifically and ignores gradients?
Thanks.
-
Using MergeChroma it's easy to denoise the video any way you like and then use just the chroma portion of the denoising in the final result.
I'm not exactly sure if that's what you had in mind, though, as I have no idea what you mean by 'plain colors'.
-
Here's a simple mask to separate detail from flat areas:
Code:
Org=Last

## blur detail to taste
Blur(1.0).Blur(1.0).Blur(1.0)
Subtract(Org)
mt_lut("x 128 - abs 8 * 32 - ", chroma="-128")
##......................^offset
##..................^multiplier

## expand mask to taste
mt_expand(chroma="-128")
mt_expand(chroma="-128")
[Attachment 45216]
[Attachment 45217]
-
It doesn't? (The mask should actually be inverted, to pass gradients and block detail, but after that it should work, given a little tuning as noted in the script.)
-
mt_lut (lut = LookUp Table) uses reverse Polish notation:
https://en.wikipedia.org/wiki/Reverse_Polish_notation
x is the luma value of a pixel. abs means the absolute value. So with algebraic notation his equation is:
abs(x -128) * 8 - 32
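To make the RPN reading concrete, here's a tiny Python evaluator, a toy sketch for illustration only (not part of masktools), covering just the tokens that expression uses:

```python
def eval_rpn(expr, x):
    """Evaluate a masktools-style RPN expression with a stack.
    Supports numeric literals, the pixel value 'x', 'abs', and + - *."""
    stack = []
    for tok in expr.split():
        if tok == "x":
            stack.append(x)           # current pixel's luma value
        elif tok == "abs":
            stack.append(abs(stack.pop()))
        elif tok in ("+", "-", "*"):
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a - b if tok == "-" else a * b)
        else:
            stack.append(float(tok))  # a numeric literal
    return stack[0]
```

For example, eval_rpn("x 128 - abs 8 * 32 -", 100) returns 192, the same as abs(100 - 128) * 8 - 32.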
But the image he's starting with is the result of subtracting the blurred image from the original image. If you subtract a positive value from a smaller positive value you get a negative value. Pixels can't have negative values so Subtract() adds 126 (halfway between rec.601 full black, Y=16, and full white, Y=235) to the result when working with YUV video (so the 128 in his code should be 126). A better way to handle this would be to use mt_lutxy instead of Subtract() and mt_lut():
Code:
Org=Last

## blur detail to taste
Blurry = Blur(1.0).Blur(1.0).Blur(1.0)
mt_lutxy(Org, Blurry, "x y - abs 8 * 32 - ", chroma="-128") # abs(x-y) * 8 - 32
##...................................^offset
##...............................^multiplier

## expand mask to taste
mt_expand(chroma="-128")
mt_expand(chroma="-128")
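If it helps to check the arithmetic outside AviSynth, here is a per-pixel Python sketch (a hypothetical helper, not masktools code) of what that mt_lutxy expression computes, including the clipping to the 8-bit range that masktools applies to the result:

```python
def diff_mask(orig, blurred, mult=8, offset=-32):
    """Per-pixel equivalent of mt_lutxy("x y - abs 8 * 32 -"):
    scale the absolute original-vs-blurred difference, then clip to 0..255."""
    out = []
    for x, y in zip(orig, blurred):
        v = abs(x - y) * mult + offset
        out.append(max(0, min(255, v)))  # clip to the 8-bit range
    return out
```

For example, diff_mask([100, 50], [100, 60]) returns [0, 48]: an unchanged pixel falls below the offset and clips to black, while a difference of 10 scales to 48.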
-
By the way, the way I interpreted your original post was that you wanted to reduce noise in flat shaded areas but not in blacks or gradients. So in a frame like:
You want to reduce noise in the character but not the gradient in the background. raffriff42's algorithm produces:
As you can see it protects the sharp edges but does not differentiate between flat shaded areas and shallow gradients. That's what I was pointing out in post #5. That mask is not very different from a simple edge mask:
Code:
mt_edge("cartoon")
mt_expand()
mt_expand()
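For intuition, here is a 1-D Python sketch of the idea behind an edge mask (purely illustrative; mt_edge uses real 2-D convolution kernels like "cartoon"): mark a pixel white where it differs from its left neighbour by more than a threshold.

```python
def edge_mask(row, thresh=8):
    """Toy 1-D edge mask: white where the jump from the previous pixel
    exceeds the threshold, black elsewhere."""
    return [255 if i > 0 and abs(row[i] - row[i - 1]) > thresh else 0
            for i in range(len(row))]
```

So edge_mask([50, 50, 120, 120]) marks only the boundary between the two flat runs.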
I tried to find a method that would differentiate the shallow gradient from the flat shaded areas (I thought it was an interesting problem), but it turns out to be pretty hard to do, because the flat areas aren't really flat (even after blurring away the noise), so they're hard to differentiate from the gradient.
Last edited by jagabo; 18th Apr 2018 at 18:46.
-
That explanation is too technical for my current knowledge; I have to read more on the subject. Yes, it's correct that I want to denoise flat surfaces while ignoring gradients and blacks. To be honest, the color black is giving me a headache, because AviSynth doesn't have something as simple as Photoshop's Levels slider, where you could just move the slider and all the noise in black areas would be gone. So I've had to get creative with the saturation and the contrast, to the point where I managed to remove all the noise in black areas.
-
A black mask is trivial:
Code:
mt_binarize(30) # or whatever threshold you want; invert/expand/inpand as necessary
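Per pixel, that binarize step is just a hard threshold. A Python sketch for illustration (assuming the default mode, where luma at or below the threshold maps to black and everything brighter to white):

```python
def binarize(pixels, threshold=30):
    """Toy per-pixel version of mt_binarize(30): hard threshold on luma."""
    return [0 if p <= threshold else 255 for p in pixels]
```

You would then invert it if you want the dark areas to be the white (included) part of the mask.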
Code:
function show_black(clip v, int level)
{
    mt_binarize(v, level, chroma="-128")
    Subtitle(String(level))
}

ImageSource("source.jpg")
ConvertToYV12()
Trim(0,255)
src = last
Animate(0,255, "show_black", last,0, last,255)
StackHorizontal(src,last)
Last edited by jagabo; 18th Apr 2018 at 19:31.
-
Yes, I managed to binarize it and then blur the edges by resizing the video down and up, so the change from black to grey isn't too drastic. But then whenever I tried filtering with it I saw no change; I think the piece I was missing was the expand step. Otherwise I don't understand why, after getting my black mask right, it still didn't work. I think I might have better luck overlaying a black blank clip and being done with it.
Do you think it's possible that, by resizing down and then up, the white in the mask isn't really white and has some transparency? That could explain why nothing happens when I apply the filter. It could also be that I created a small color cast in order to fix a banding problem that even Flash3kyuu deband couldn't fix. And as I read in another thread (from Doom9, I think), YUV doesn't work like RGB, meaning I could try to make the blacks blacker but the red would still be there.
As you can see I have a lot to learn, but I want to get it right; that's why I'm asking all this, since I consider masktools2 the key to fixing this problem.
Last edited by nMaib0; 18th Apr 2018 at 19:54.
-
Or maybe user error, if you saw absolutely no change.
It might help to post the script.
-
jagabo, how would you go about making a mask with mt_lut that selects these flat colors (no gradients) I marked with a red dot, but with a bit of inpand so it doesn't screw up the edges when I apply the blurring?
https://forum.videohelp.com/images/imgfiles/jRHF7qZ.jpg
-
Are you sure you're using the masks correctly? If you're using Overlay or mt_merge, 100% white is the area affected or included; black is the area not affected, or completely excluded. In those cartoon posts above, they are basically line masks, so unless you inverted the mask you would be filtering the lines, not the "plain colors". Unless you were filtering and then overlaying back the original lines with a line mask? I'm guessing the former if you saw "no change".
If in doubt, post your script and a sample of your source.
Are those red dots accurate? For example, is the brown shirt excluded? The anterior deltoid is the same "shade" of blue/purple as the lower eye rims, so I'm assuming it should be included. Presumably you want the BG excluded, the dark lines excluded, and the bright areas like the front of the eyes and the front of the face excluded?
In that example you can combine masks (but it won't necessarily apply to your case; you might need to do some things differently).
In general, you can create masks defined by various characteristics and combine them with masktools operators. The common approaches in AviSynth are:
1) Brightness range, i.e. a luma mask. For example, you might want to include/exclude certain ranges, say Y = 0-45, or dark areas. The "lumamask" function in Dogway's masks pack is a fantastic helper function, because you can specify b (black) and w (white) points.
2) Hue range, for specific colors, e.g. a certain shade of blue. For example, MaskHS() in AviSynth can be used with a start/end hue and combined with min/max saturation. It's not as accurate as dedicated effects programs used for keying and secondary color correction; unfortunately the results are pretty "rough" in AviSynth.
3) Edge masks / line masks. There are many different ways to make these in AviSynth. This is probably the most used for anime, because denoisers tend to destroy lines and dark areas. You can "fix" areas like lines by overlaying back the original lines (or a differently, usually lightly, filtered version) using a line mask.
In this example, a hue/saturation mask was used to isolate the background. That was combined with a bright mask (a luma mask isolating the bright face elements). Then a third mask consisting of dark areas (such as the dark lines) was added. Those are EXclusion areas, so they need to be inverted (they are currently white and need to be black). For visualization, a green overlay was applied (you would apply whatever filters, denoisers, etc. instead).
Code:
ImageSource("source.jpg")
converttoyv12()
a=last

# hue and saturation mask, to isolate BG
a.converttoyv24()
maskhs(minsat=25, starthue=0, endhue=270, coring=false)
converttoyv12()
mt_expand().mt_expand(u=-128,v=-128)
bg=last

# bright mask to isolate front face, front eyes, shoulder highlights
lumamask(a,a,b=215,w=230, show=true)
levels(0,1,50,0,255,false)
removegrain(2)
mt_expand(u=-128,v=-128)
bright=last

# dark mask to isolate dark areas
lumamask(a,a,b=0,w=115, show=true, upper=true)
mt_binarize(40)
dark=last

# combine masks and invert to make them "white" for inclusion masks
overlay(bg, bright, mode="add")
overlay(last, dark, mode="add")
invert()
incl=last

# create a green clip for testing
g=blankclip(a, color=color_green)
overlay(a,g,mask=incl)
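The combine-and-invert step at the end can be sketched per-pixel in Python (illustrative only; Overlay's mode="add" is a saturating add, and Invert() flips the mask so the exclusion areas become black):

```python
def combine_exclusion_masks(*masks):
    """Add the exclusion masks with clipping (like overlay mode="add"),
    then invert so excluded areas end up black and included areas white."""
    out = []
    for px in zip(*masks):
        out.append(255 - min(255, sum(px)))
    return out
```

So combine_exclusion_masks([255, 0], [0, 0], [0, 0]) returns [0, 255]: a pixel any mask excludes goes black, the rest stay white.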
Last edited by poisondeathray; 20th Apr 2018 at 21:44.
-
I tried to work out a way of masking shallow gradients but not flat shaded areas. I wasn't successful but I'll describe what I did and why it didn't work.
Within gradients, Y, U, and V change across the gradient. In perfectly flat areas there are no Y, U, or V transitions. So if one built a map of all the transitions (i.e., where a pixel differs from one of its neighbors) one might be able to fill the area between the transitions with mt_expand(). For example, a smooth greyscale gradient in the background with a flat shaded box in the middle:
The box in the middle looks like it's a gradient, but that's an optical illusion; if you check it in an editor you'll see that it is indeed flat. The gradient changes RGB values every eighth pixel horizontally. After mapping the transitions:
You can see the vertical bars marking the transitions every eight pixels, and the flat shaded box shows no transitions, as expected. Now we call mt_expand() four times:
All the areas between the transitions have filled in, and the box in the middle has shrunk. So we call mt_inpand() four times:
The box in the middle is restored to its ~original size and the gradient in the background has remained filled. It's not restricted to just horizontal or vertical gradients. Gradients at any angle work. The same source image, rotated, and with the same filtering:
So we have a mask that differentiates between shallow gradients and large flat shaded areas. Shallower gradients require more calls to mt_expand() to fill them in, and small flat shaded areas also get filled in (not shown here, but it's obvious why this happens).
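The fill-then-shrink trick can be demonstrated in one dimension with plain Python (a toy sketch; the real mt_expand/mt_inpand are 3x3 neighbourhood max/min operations on the full frame):

```python
def expand(mask, passes=1):
    """1-D mt_expand sketch: each pass takes the max over a pixel
    and its immediate neighbours (dilation)."""
    for _ in range(passes):
        mask = [max(mask[max(i - 1, 0):i + 2]) for i in range(len(mask))]
    return mask

def inpand(mask, passes=1):
    """1-D mt_inpand sketch: neighbourhood minimum (erosion)."""
    for _ in range(passes):
        mask = [min(mask[max(i - 1, 0):i + 2]) for i in range(len(mask))]
    return mask

# Transition map: a shallow gradient that steps every 8 pixels
# (marks at 0, 8, 16, 24), followed by a perfectly flat run.
transitions = [255 if i in (0, 8, 16, 24) else 0 for i in range(40)]
filled = inpand(expand(transitions, 4), 4)
```

After four expands the gaps between the gradient's transition marks close up; four inpands then shrink everything back, leaving the gradient half solid white and the flat half black.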
So much for the theory. The problem I was running into is that real world cartoons had significant amounts of brightness/color variation in the flat areas -- so much of them filled in too. Even with some very heavy filtering I couldn't get them cleared out. TNLMeans() was best at flattening out the flat shaded areas but still left some transitions. Maybe someone else has a better noise reduction filter for that?