How can I add, for instance, something like TTempSmooth to the gray midtones in this clip (the window frame and the gray curtain), and mask out the rest of the dark areas?
Basically you need to build a mask using the absolute values of (the chroma channels - 128). I'm pretty sure masktools can do this, but I don't have time to look into it now.
It's making sense. So am I masking the gray area or the dark areas surrounding it? Also, what does 128 stand for here?
There are no "gray midtones" in the sample clip. The window frame is purple. The curtains are blue. The same noise exists everywhere in the frame; it's just easier to see in the darker areas. The clip also has some rapid flicker. Even if you apply a denoiser to only certain parts of the image, the remaining parts would still flicker but the masked parts wouldn't. That would look weird, at best.

Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end. -- Henry David Thoreau
Oops, I remembered your original post incorrectly and wrote this reply thinking you wanted to filter the grayscale areas but not the colorful areas... You should be able to extrapolate from this to get what you want...
The actual value you want to subtract is 130*, not 128. I'm using mt_masktools to build the mask. I don't know if this is exactly what you want. It filters areas of low color saturation but not areas of high color saturation.
Mpeg2Source("sample1.demuxed.d2v").TFM().TDecimate()
umask = UtoY().BicubicResize(width, height).mt_lut("x 130 - abs").mt_lut("x 12 - 16 *")
vmask = VtoY().BicubicResize(width, height).mt_lut("x 130 - abs").mt_lut("x 12 - 16 *")
mask = Merge(umask, vmask)
mt_merge(McTemporalDenoise(settings="high"), last, mask)
You'll have to play around with the mt_lut() values to get exactly the effect you want.
UtoY() and VtoY() convert the U and V channels to Y. Since this is YV12 they have to be scaled back to the main clip's width and height.

mt_lut("x 130 - abs") generates an image where anything that is gray gets the value 0 and anything that has color is >0 (i.e., abs(x - 130)). mt_lut("x 12 - 16 *") first subtracts 12 (basically a threshold amount of color) then multiplies by 16 to accentuate the mask (i.e., (x - 12) * 16).

Merge() blends the two masks together into one mask. Now anything that is (nearly) grayscale is black in the mask and anything that is colorful is white. Finally, we use mt_merge() to merge a highly filtered version of the video with an unfiltered version of the video based on the mask. Here's what the mask looks like:
If you want only the gray window frame and wooden planks filtered, but not the black parts of the picture, you'll have to add (and massage) the luma channel to the mask.
* When the chroma channels of a pixel are 130, the pixel has no color, i.e., it's a shade of gray. Any deviation from 130 in either of the chroma channels adds color to the pixel. The greater the deviation, the greater the color saturation.
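The two mt_lut() calls above amount to one small per-pixel function. Here is a sketch of it in plain Python (the function name and parameters are mine, not part of masktools; the clamp to 0..255 matches how mt_lut stores 8-bit results):

```python
def saturation_mask(c, neutral=130, threshold=12, gain=16):
    """Per-pixel equivalent of mt_lut("x 130 - abs") followed by
    mt_lut("x 12 - 16 *"): distance from neutral chroma, minus a
    threshold, amplified, then clamped to the legal 8-bit range."""
    deviation = abs(c - neutral)            # 0 for pure gray
    value = (deviation - threshold) * gain  # threshold, then accentuate
    return min(255, max(0, value))          # clamp to 0..255

# Gray pixels (and nearly gray ones, within the threshold) map to 0;
# saturated pixels shoot up toward 255.
```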
Last edited by jagabo; 12th May 2012 at 19:04.
These are the kinds of explanations about the workings of masktools I keep looking for. Been thru almost 2 years of doom9's thread on this plugin and have yet to figure out what masktools and its functions are actually doing (well, some of them are relatively obvious, but mt_lut has me puzzled). Any ideas on a source where I can start studying the innards of these masking techniques?
Last edited by sanlyn; 13th May 2012 at 10:17.
mt_lut() basically performs mathematical operations on video data. Since all the sources are 8 bits per channel, it uses a lookup table (hence the "lut" in the name) to optimize for speed. I.e., there are only 256 possible inputs, so there are only 256 possible outputs. The program pre-calculates the outputs for all 256 possible inputs (0 to 255) and puts them in a table. Then when it operates on the video data it performs a quick table lookup rather than the slower calculations.
Suppose for example you want to perform a 2x luma gain, Y' = 2 * Y, or mt_lut("2 x *"). You first build a table of results for all possible values of Y:
You don't need to store both Y and Y'; Y is implied by the position in the table, Y' = table[Y]. So the table only includes the output values. Accessing a table like this is faster than performing the calculations each time, especially in an interpreted system (i.e., the text string you supply has to be parsed into mathematical operations).
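The table-building idea can be sketched in a few lines of Python (my own illustration, not masktools code): precompute all 256 outputs once, then filtering a plane is just lookups.

```python
GAIN = 2

# Precompute the lookup table once: table[y] holds the result of
# 2 * y for every possible 8-bit input, clamped to 255.
table = [min(255, GAIN * y) for y in range(256)]

def apply_lut(plane):
    """Filter a plane (here a flat list of 8-bit samples) by table
    lookup instead of re-doing the arithmetic per pixel."""
    return [table[y] for y in plane]
```

The expensive part (evaluating the expression) happens 256 times total, no matter how many millions of pixels are in the clip.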
You specify the operations you want to perform in reverse polish notation:
If you've ever used an HP calculator you are familiar with this.
Instead of mt_lut("x 130 - abs 12 - 16 *"), you could use
mt_lut(mt_polish("(abs(x-130) - 12) * 16"))
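To make the reverse-Polish idea concrete, here is a toy evaluator in Python (my own sketch, supporting only the handful of operators used in this thread, not the full masktools expression language):

```python
def mt_lut_eval(expr, x):
    """Evaluate a masktools-style RPN expression for one pixel value x.
    Tokens are pushed on a stack; operators pop their arguments."""
    stack = []
    for tok in expr.split():
        if tok == "x":
            stack.append(x)
        elif tok == "abs":
            stack.append(abs(stack.pop()))
        elif tok in ("+", "-", "*", "/"):
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[tok])
        else:
            stack.append(float(tok))  # a numeric literal
    # mt_lut stores results as 8-bit samples, so clamp to 0..255
    return min(255, max(0, round(stack[0])))
```

For example, "x 130 - abs 12 - 16 *" with x = 150 pushes 150 and 130, subtracts (20), takes abs (20), subtracts 12 (8), and multiplies by 16 (128).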
@jagabo: I'm still puzzled why you used 130 instead of 128 in the earlier reply.
The values I read on screen in VirtualDub were 130. I forgot that VirtualDub was using a rec.601 matrix to display the video, so I was seeing a contrast-stretched result. If you use ConvertToRGB(matrix="PC.601") you'll see the proper value is 128, not 130.
I'm going to review the MT_Mask documents and I bet 47 consecutive paychecks that it is not written in layman's terms.
In any event jag, thanks for the script. I'm toying around with it and am going to keep reading up to get a better understanding of how to do this.
Polish notations. Pheh!
How about ENGLISH explanations with these things?
Thanks for that look into mt_lut, jagabo. It's making sense now, and I found some Wikipedia material that I can get into for more detail. These posts will be very handy on my latest torture trip -- er, I mean my latest restoration project.
Several of unclescoob's posts here and earlier seem to have a lot of obvious flicker. Are these sources second or third generation copies of other recordings?
It's not exactly light reading; it's the sort of thing that's a useful reference if you already understand it, but lacking in introduction and explanation.
It's a pity jagabo didn't write the manual - he has a flair for explaining things and his description of mt_lut is the clearest I've seen.
Jagabo - I am trying to filter the gray frames and pineboards in the clip, and leave the dark scene unfiltered because I am trying to avoid blurring the black details. However, I think I can accomplish this with TemporalDegrain for this particular scene. So I might not have to mess with masking after all. We'll see.
Check out some work I've done with just MCTemporal and TTempSmooth (note: I did not use TTempSmooth on the entire episode, just this trimmed clip; we've reviewed this clip in my previous thread). Why do I feel like I'm setting myself up here? Ok, in any event, here goes...
The first one is the clip prior to denoising, and the second is a denoised clip...
Note that the filtered clip takes longer to download, as I encoded it at a higher bitrate than the original. Sorry about that for those who wish to watch.
So I'm getting the silent treatment, honey? Awwww, come on, what did I do?? Was it the comment about how big your head looks in that turtleneck?? Honeeyyy????
Since you're doing all that filtering you should sharpen the chroma channels and shift them left by a few pixels. Otherwise the filtered video looked pretty good.
No. That sounds silly and makes no sense.
You really are an idiot and an *******.
Unsharpened chroma on the left, sharpened on the right (U on top, V on bottom, from your "after" video):
Alternating original, sharpened chroma sample:
Last edited by jagabo; 14th May 2012 at 22:17.
Well, since you asked...
You need to do some more line-edge cleanup, your bright levels are out of spec, red is oversaturated... Well, but the clips look pretty good. I don't think one or two plugins are going to fill all your needs. Throw in a line cleaner or two (DeHalo_alpha, FastLineDarken), and you need something for some of the dot crawl (check the right border on a couple of the after shots) and occasional anti-aliasing (the latter you can activate in MCTD, but it will sure slow it down a lot). Don't forget RemoveSpots(); there are several of 'em in the after clip. LSFMod would also come in handy. The flicker you complained about in one of the shots with the redhead is still there. But all these involve is a little tweaking.
I was busy torturing myself with a troublesome stage in a restoration, so I didn't get back to this subject, but... I thought you might be busy with the clip and scene you opened this thread with. What happened to it?
Last edited by sanlyn; 14th May 2012 at 22:24.
Oowwwwwwww!!!!!! I LOVE it when you talk dirty to me, Jag!!
First question...what filter do you use to preview the clip in that manner?
Secondly, you don't even explain how you sharpened the chroma, other than "U on top, V on bottom". Ok, the luma sample is on the top, and the chroma is on the bottom. I see that. What's the script? As usual, you explain things to people as if they already KNOW the answer, hence defeating the purpose of calling this site VideoHELP.
Thirdly, your attached AVI is nothing but one frame with the word 'original' on the top left. Is this my ORIGINAL clip, or your tweaked version of my original?
Last edited by unclescoob; 15th May 2012 at 07:58.
You can sharpen chroma and/or luma separately. Here's one way of doing it with LSFMod as a sharpener:
Read about it here: http://avisynth.org/mediawiki/MergeChroma
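The MergeChroma idea can be sketched outside AviSynth too. Here is a minimal Python illustration (my own toy code, with a simple 3-tap unsharp mask standing in for LSFMod): sharpen only the chroma planes, then recombine them with the untouched luma.

```python
def sharpen_row(row, amount=1.0):
    """Toy unsharp mask on one row of 8-bit samples:
    out = pixel + amount * (pixel - 3-tap blur), clamped to 0..255."""
    out = []
    for i, p in enumerate(row):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        blur = (left + p + right) / 3.0
        out.append(min(255, max(0, round(p + amount * (p - blur)))))
    return out

def merge_chroma(y, u, v, amount=1.0):
    """MergeChroma-style combine: keep the luma plane untouched,
    take only the sharpened chroma planes from the filtered clip."""
    return y, sharpen_row(u, amount), sharpen_row(v, amount)
```

Notice that an edge in the chroma row gets steeper (values overshoot on both sides of the step) while luma passes through unchanged, which is exactly what sharpening the chroma of a blurry VHS-style source does.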
backatthefirehouseafter_flickerandhop.m2v. Easy to see in the background. Also white spots in the attached clip at frames 0-1, 72-73, 111-112, and 127-128. And projector hop. And a broken line in the desk lamp at frame 120.
You have crushed colors crashing against the top and bottom borders of the waveform below. Elevated Red crashes at its top border; luma, green and blue too close to the bottoms. Black borders have been cropped before capping this waveform:
Or if you prefer, here it is as a traditional RGB histogram:
No problem. All of this can be tweaked. But you'll need more than just two plugins to do it.
Last edited by sanlyn; 15th May 2012 at 08:54.
Try this: Instead of oversaturating colors, try adjusting levels in YUV first. Results will be within spec and will likely look better without having to go crazy over a single color:
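A YUV levels adjustment of the kind suggested above works roughly like this per sample (my own simplified sketch of an AviSynth Levels()-style remap; the real filter also handles gamma per plane and optional coring):

```python
def adjust_levels(y, in_low=16, in_high=235, out_low=16, out_high=235,
                  gamma=1.0):
    """Map the input range [in_low, in_high] onto the output range
    [out_low, out_high] with optional gamma, clamping out-of-range
    input instead of letting it crash into the borders."""
    t = (y - in_low) / (in_high - in_low)
    t = min(1.0, max(0.0, t)) ** (1.0 / gamma)
    return round(out_low + t * (out_high - out_low))
```

Fixing the range in YUV like this keeps luma and chroma within spec before any RGB conversion, instead of boosting saturation and letting channels clip.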
LOL that almost looks greyscale. But I'll give it a shot.
If it looks grayscale, your monitor is uncalibrated. The average consumer monitor is so far outta whack in color, gamma, color depth, oversaturation, luma errors, and wildly off RGB scaling that no photographer or camera buff would use one uncalibrated. Anyway, you don't need calibration to see the problem: just look at the histograms.
And here's how it's properly done to common standards: http://www.tftcentral.co.uk/reviews/eye_one_display2.htm
What I meant was that your sample almost looks too desaturated. Lifeless. There's color, yes, but very little. Why does everyone here have to take things to outer space? I make a comment about the results, and my monitor's calibration comes into question.
I've been reading some tutorials from Scintilla at AnimeMusicVideos.org and let me be quite frank...they're LEGIBLE. His guides help you understand the filters without the aggravating jargon that everyone seems to use here all the time. He makes it fun to learn this.
When a question is posted on VideoHelp.com and it's answered in vague, unnecessarily complicated jargon, others jump in to further confuse you with "or you can also do it this way... jargon jargon jargon". When the newbie tries it and posts his results, they're "ok". When a regular "expert" posts his or her results, you all compliment each other (rightfully so), but it gets to the point of bordering on a textual circle jerk (i.e., "Ooh, good idea jagabo, you know this better than I do!"). I'm done with VH.com. I'm taking the rest of this road alone and will learn through trial and error. I'm also going to continue reading Scintilla's guides. Have I learned here? Sure, a lot actually, no question about it. But at the cost of unnecessary aggravation. Working with video is aggravating (but rewarding) enough without the added fanboy crap. "Elevated red crashes" and "blue too close to the bottom", Sanlyn??? Really???? I just wanted to tweak and clean my vid a bit, I didn't need a damn catscan performed on it!
Last edited by unclescoob; 15th May 2012 at 10:42.
Jagabo, I love you baby!