I have a source that seems to have some artifacts on it. They are already in the source and they become much more visible when I adjust the levels.
Is there any decent way to clean these up, or at least reduce their visibility? I really like the look with the levels adjustment, but I don't like that the artifacts become so much more visible.
Here is a zoomed-in example comparison for easy visibility (flip through with arrow keys or mouse clicks).
TNLMeans seems to get rid of them, but it is such a destructive DNR filter that it smears out most of the detail. I do not wish to cover them up with dithering either. Perhaps there is a better alternative I am not thinking of at the moment?
Here is the levels adjustment I am making, along with before and after screenshots.
Code:
ColorYUV(levels="TV->PC", opt="coring")
ColorYUV(off_y=6, opt="coring")
SmoothTweak(saturation=1.00, brightness=-1, contrast=1.00, dither=-1, interp=0, limiter=false)
Tweak(cont=0.99)
Last edited by killerteengohan; 18th Jun 2020 at 16:29.
The levels adjustment by ColorYUV makes the existing artifacts more visible because of localized contrast differences. If you look at Histogram(), you will see "gaps" in the waveform.
1) So one option is to use dithering during the levels adjustment. Dithering sort of "hides" or covers up the artifacts.
Use SmoothLevels, or Levels(dither=true), for your adjustments; or do the filtering at a higher bit depth and downconvert back to 8-bit using some dithering.
But sometimes dithering can obscure fine details and lines; it's a trade-off.
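To see the mechanism concretely, here is a small numeric sketch (illustrative Python/NumPy, not AviSynth; the exact rounding ColorYUV does may differ): a plain 8-bit TV->PC stretch maps 220 input codes onto 256 output codes, so some output codes are never produced (the histogram gaps), while adding a little noise before requantizing fills them in.

```python
import numpy as np

rng = np.random.default_rng(0)

# An 8-bit TV-range luma ramp (16-235), 100 pixels per code value
src = np.repeat(np.arange(16, 236, dtype=np.float64), 100)

# Plain TV->PC stretch, rounded straight back to 8 bits:
# 220 input codes spread over 256 output codes, so some outputs never occur
stretched = np.clip(np.round((src - 16) * 255.0 / 219.0), 0, 255)
gaps = 256 - np.unique(stretched).size

# Same stretch, but with small triangular-PDF noise added before rounding,
# which spreads each input code across neighbouring output codes
noise = rng.uniform(-0.5, 0.5, src.size) + rng.uniform(-0.5, 0.5, src.size)
dithered = np.clip(np.round((src - 16) * 255.0 / 219.0 + noise), 0, 255)
gaps_dithered = 256 - np.unique(dithered).size

print(gaps, gaps_dithered)  # the dithered version leaves far fewer empty codes
```

The empty codes are exactly what shows up as gaps in the Histogram() waveform; the noise trades them for a little grain.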
This is a rough approximation using SmoothLevels:
Code:
ImageSource("unknown_10870 Before.png")
ConvertToYV12()
SmoothLevels(18, 1.07, 235, 0, 255, Lmode=3, brightSTR=10, darkSTR=25, chroma=100)
Histogram()
Code:
ImageSource("unknown_10870 After.png")
ConvertToYV12()
Histogram()
e.g. a higher bit depth conversion, with dithered downconversion:
Code:
ImageSource("unknown_10870 After.png")
ConvertToYV12()
ConvertBits(16)
ColorYUV(levels="TV->PC", opt="coring")
ColorYUV(off_y=6, opt="coring")
SmoothTweak(saturation=1.00, brightness=-1, contrast=1.00, dither=-1, interp=0, limiter=false)
Tweak(cont=0.99)
ConvertBits(8, dither=1) # error diffusion; use dither=0 for ordered
Histogram()
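For intuition about what the dithered downconversion step buys you, here is a hedged Python/NumPy sketch (not the actual ConvertBits implementation) of simple one-dimensional error diffusion: plain rounding of a shallow 16-bit ramp produces long flat bands in 8-bit, while carrying the rounding error forward breaks them up.

```python
import numpy as np

def max_run(a):
    """Length of the longest run of identical consecutive values."""
    idx = np.flatnonzero(np.diff(a.astype(np.int64)) != 0)
    return int(np.diff(np.concatenate(([0], idx + 1, [a.size]))).max())

def diffuse_to_8bit(x16):
    """Reduce 16-bit values to 8 bits with simple 1-D error diffusion:
    each pixel's rounding error is pushed onto the next pixel, so the
    average level is preserved instead of snapping to flat bands."""
    out = np.empty(x16.size, dtype=np.uint8)
    err = 0.0
    for i, v in enumerate(x16):
        want = (v + err) / 257.0          # 16-bit -> 8-bit (65535 / 255 = 257)
        q = int(np.clip(round(want), 0, 255))
        out[i] = q
        err = (want - q) * 257.0          # carry the residual forward
    return out

# A shallow 16-bit ramp spanning only ~2.3 8-bit levels over 2000 pixels
ramp = np.linspace(30000.0, 30600.0, 2000)

flat = (ramp / 257.0).round().astype(np.uint8)  # plain rounding -> banding
dith = diffuse_to_8bit(ramp)                    # error diffusion -> dithered

print(max_run(flat), max_run(dith))  # flat has runs of hundreds of pixels
```

Real 2-D error diffusion (Floyd-Steinberg and friends) spreads the residual over several neighbours, but the principle is the same: the banding becomes fine noise instead of visible steps.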
2) Another option is to apply some filtering with masks, but with line protection, +/- other masks to preserve other details
3) Or use something like waifu2x or Anime4K, even if you're not upscaling; they can clean up areas and protect lines. (But don't use the AviSynth version of waifu2x; it's way too slow. Use the VapourSynth caffe version (GPU).)
Last edited by poisondeathray; 17th Jun 2020 at 12:47.
Almost everything you're doing with ColorYUV(), Tweak() and SmoothTweak() can be done in a single SmoothTweak(). That will reduce the posterization, and if you enable dithering it will become invisible -- at the cost of some noise, of course.
SmoothTweak(contrast=1.153, brightness=-11, saturation=1.140, dither=-1, interp=0, limiter=true)
Code:
ColorYUV(off_y=-21).ColorYUV(off_y=21) # crush blacks at 21
ColorYUV(off_y=23).ColorYUV(off_y=-23) # crush whites at 232
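The paired offsets work because each ColorYUV step clamps at the 0/255 ends of the range before the second offset moves everything back. A minimal NumPy sketch of the idea (assuming plain full-range clamping; the real filter's coring/limiting options may behave slightly differently):

```python
import numpy as np

def offset_luma(y, off):
    """Mimic ColorYUV(off_y=off) on 8-bit luma: add the offset, clamp to 0-255."""
    return np.clip(y.astype(np.int16) + off, 0, 255).astype(np.uint8)

y = np.arange(256, dtype=np.uint8)  # every possible 8-bit luma value

blacks = offset_luma(offset_luma(y, -21), +21)  # values below 21 collapse to 21
whites = offset_luma(offset_luma(y, +23), -23)  # values above 232 collapse to 232

print(blacks.min(), whites.max())  # 21 232
```

Everything between the two clip points passes through unchanged, so only the extremes are crushed.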
You know Gohan, I really don't mean this in a bad way. And believe me, I gave it some thought before deciding to go ahead with this response but....
...for a guy who gave me a major chastising about me "not knowing what I'm doing" as opposed to you, who should be trusted because you've been "working with anime for 14 years", you sure ask a lot of newbie questions around here. This ain't the only one; I've seen the other questions you've been asking.
And to add to that, instead of providing a 60-second, unprocessed clip of your source (as you accused me of providing a "measly" 11-second clip of mine), you just provide 2 images!
Amazing how the rules don't apply to some of us?
Now I've been lurking around this website for years and know for a fact that this sort of thing is generally not tolerated around here. People will tell you that images of your source do not help at all, and to provide an unprocessed sample. In fact, at this point, people on this site downright ignore that sort of thing because they've grown tired of repeating it.
The fact that jagabo, of all people, did not demand this from you is very surprising! Either the chap is getting up there in age or he really has taken a liking to you!
Either way, why not try the following:
MCTemporalDenoise("medium", deblock=true, twopass=true, sharp=false, ec=true)
If I DO need to process lighter pixels further, then I run it through my dark lines mask and postprocess with TNLMEANS() or DEEN().
But TNLMeans bands like a mo'fo. So MCTemporalDenoise(settings="medium") or MCTemporalDenoise(settings="low") may help a bit.
Last edited by Betelman; 17th Jun 2020 at 21:47.
You know, betelman; if I wanted to hear from an @**hole, I'd fart!
There is no rule about samples; this particular thing does not require a sample video the way the things you were asking about did. I provide video samples when they are needed or asked for. This can be answered quickly with a simple image alone; no downloading or video needed. I'm not asking about things like interlacing, or anything that needs more than one frame to be looked at. Usually a video sample is wanted, but it's not needed for everything and every question.
Jagabo isn't a dumbass, and does not need a video for this particular thing.
Your suggestions are crap and damaging, which is exactly what I wanted to avoid. The higher bit depth suggestion by poisondeathray, however, is a good one.