VideoHelp Forum

  1. I have a source that has some artifacts in it. They are already present in the source, but they become much more visible when I adjust the levels.

    Is there any decent way to clean these up, or at least reduce their visibility? I really like the look better with the levels adjustment, but I don't like that the artifacts become so much more visible.

    Here is a zoomed-in example comparison for easy visibility (flip through with the arrow keys or a mouse click):
    https://slow.pics/c/oWP5mi6d

    TNLMeans seems to get rid of them, but it is such a destructive DNR filter that it smears out most of the detail. I don't want to cover them up with dithering either. Is there a better alternative I'm not thinking of at the moment?


    Here is the levels adjustment I am making, along with before and after screenshots:

    https://slow.pics/c/CFQvCovK

    Code:
    ColorYUV(levels="TV->PC", opt="coring")
    ColorYUV(off_y=6, opt="coring")
    smoothtweak(saturation=1.00, brightness=-1, contrast=1.00, dither=-1, interp=0, limiter=false)
    tweak(cont=0.99)
    Last edited by killerteengohan; 18th Jun 2020 at 15:29.
  2. The levels adjustments made by ColorYUV make the existing artifacts more visible because of localized contrast differences. If you look at Histogram(), you will see "gaps" in the waveform.

    1) So one option is to use dithering during the levels adjustment. Dithering sort of "hides" or covers up the artifacts.

    Use SmoothLevels, or Levels(dither=true), for your adjustments; or do the filtering at a higher bit depth and downconvert back to 8-bit using some dithering.

    But sometimes dithering can obscure fine details and lines; it's a trade-off.

    This is a rough approximation using SmoothLevels:
    Code:
    ImageSource("unknown_10870 Before.png")
    ConvertToYV12()
    SmoothLevels(18,1.07,235,0,255, Lmode=3, brightSTR=10, darkSTR=25, chroma=100)
    Histogram()
    You can see the banding in the waveform introduced by ColorYUV:
    Code:
    ImageSource("unknown_10870 After.png")
    ConvertToYV12()
    Histogram()
    EDIT - ColorYUV can work at higher bit depths and has autoscaling arguments.

    e.g. a higher-bit-depth conversion with a dithered downconversion:
    Code:
    ImageSource("unknown_10870 After.png")
    ConvertToYV12()
    convertbits(16)
    ColorYUV(levels="TV->PC", opt="coring")
    ColorYUV(off_y=6, opt="coring")
    smoothtweak(saturation=1.00, brightness=-1, contrast=1.00, dither=-1, interp=0, limiter=false)
    tweak(cont=0.99)
    convertbits(8, dither=1) #error diffusion; use "0" for ordered
    histogram()
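    If you want to see why the dither helps, the effect is easy to reproduce outside AviSynth. Here's a small NumPy sketch (my own illustration of the idea, not the actual ColorYUV/ConvertBits math): a levels stretch done directly in 8-bit integer math skips output codes, which are the "gaps" you see in the waveform, while doing the stretch at high precision and adding dither before the 8-bit rounding trades those hard steps for fine, unbiased noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth 8-bit TV-range ramp (16..235), like a shaded gradient in animation
src = np.linspace(16, 235, 256).astype(np.uint8)

# TV->PC stretch done directly in 8 bits: integer math skips output codes,
# which shows up as the "gaps" in Histogram()'s waveform (banding)
stretch8 = ((src.astype(np.int32) - 16) * 255 // (235 - 16)).astype(np.uint8)
gaps = 256 - len(np.unique(stretch8))
print("output codes never used by the 8-bit stretch:", gaps)

# The same stretch at high precision (stand-in for a 16-bit pipeline), then
# downconverted to 8 bits with dither added before rounding, instead of
# plain truncation
stretch_hi = (src.astype(np.float64) - 16) * 255 / (235 - 16)
dithered = np.clip(np.floor(stretch_hi + rng.random(src.shape)), 0, 255).astype(np.uint8)

# Dithering trades the hard quantization steps for fine noise: on average
# the dithered 8-bit output tracks the high-precision values with no bias
print("mean error after dither:", abs(dithered.mean() - stretch_hi.mean()))
```

    Error diffusion (what convertbits(8, dither=1) does) is smarter than the random dither above - it pushes each pixel's rounding error onto its neighbors - but the principle is the same.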


    2) Another option is to apply some filtering with masks, but with line protection, and optionally other masks, to preserve other details.


    3) Or use something like waifu2x or Anime4K - even if you're not upscaling - they can clean up areas and protect lines. (But don't use the AviSynth version of waifu2x; it's way too slow. Use the VapourSynth caffe version, which runs on the GPU.)
    Last edited by poisondeathray; 17th Jun 2020 at 11:47.
  3. Almost everything you're doing with ColorYUV(), Tweak() and SmoothTweak() can be done in a single SmoothTweak(). That will reduce the posterization, and if you enable dithering it will become invisible - at the cost of some noise, of course.

    Code:
    SmoothTweak(contrast=1.153, brightness=-11, saturation=1.140, dither=-1, interp=0, limiter=true)
    If you really want blacks crushed at 21 and whites crushed at 232 (no true blacks or whites), follow that with:

    Code:
    ColorYUV(off_y=-21).ColorYUV(off_y=21) # crush blacks at 21
    ColorYUV(off_y=23).ColorYUV(off_y=-23) # crush whites at 232
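    In case it's not obvious why the double offset works: 8-bit luma saturates at 0 and 255, so the first shift clamps everything past the limit, and the second shift moves the clamped range back, leaving it flat at the crush point. A quick NumPy sketch of the same idea (my illustration of the clamping arithmetic, not ColorYUV itself):

```python
import numpy as np

y = np.arange(256, dtype=np.int32)  # every possible 8-bit luma value

# off_y=-21 then off_y=+21: the downward shift clamps everything below 21
# to 0, and shifting back up leaves all of those pixels sitting at 21
crushed_blacks = np.clip(y - 21, 0, 255) + 21

# off_y=+23 then off_y=-23: the upward shift clamps everything above 232
# (255 - 23) to 255, and shifting back down leaves them flat at 232
crushed_whites = np.clip(y + 23, 0, 255) - 23

print(crushed_blacks.min())  # -> 21 (nothing survives below the floor)
print(crushed_whites.max())  # -> 232 (nothing survives above the ceiling)
```

    Values inside the kept range pass through unchanged; only the tails get flattened.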
    I recommend you enable dithering and interpolation.
  4. You know Gohan, I really don't mean this in a bad way. And believe me, I gave it some thought before deciding to go ahead with this response but....

    ...for a guy who gave me a major chastising about me "not knowing what I'm doing" as opposed to you, who should be trusted because you've been "working with anime for 14 years", you sure ask a lot of newbie questions around here. This ain't the only one. I've seen other questions you've been asking.

    And to add to that, instead of providing a 60-second, unprocessed clip of your source (as you accused me of providing a "measly" 11-second clip of mine), you just provide 2 images!

    Amazing how the rules don't apply to some of us?

    Now I've been lurking around this website for years and know for a fact that this sort of thing is generally not tolerated around here. People will tell you that images of your source do not help at all, and to provide an unprocessed sample. In fact, at this point people on this site downright ignore that sort of thing because they've grown tired of repeating it.

    The fact that jagabo, out of all people, did not demand this from you, is very surprising! Either the chap is getting up there in age or he really has taken a liking to you!

    Either way, why not try the following:

    Code:
    MCTemporalDenoise(settings="medium", deblock=true, twopass=true, sharp=false, ec=true)
    Then top it off with the following to dither out the artifacts, particularly in your dark areas.

    Code:
    GradFun2DBmod(str=3.0)
    With animation, I usually use SMDegrain() with prefilter=2 for the medium Gaussian blur. That generally takes care of business with no further need for additional filtering. SMDegrain() is a wonderful little filter that takes care of a lot of junk with almost NO detail loss whatsoever. I've tested this thing over time with a lot of sources and have seen some wonders.

    If I DO need to process lighter pixels further, then I run it through my dark-lines mask and postprocess with TNLMeans() or Deen().

    But TNLMeans bands like a mo'fo. So MCTemporalDenoise(settings="medium") or MCTemporalDenoise(settings="low") may help a bit.
    Last edited by Betelman; 17th Jun 2020 at 20:47.
  5. Originally Posted by Betelman View Post
    You know Gohan, I really don't mean this in a bad way. And believe me, I gave it some thought before deciding to go ahead with this response but....

    ...for a guy who gave me a major chastising about me "not knowing what I'm doing" as opposed to you, who should be trusted because you've been "working with anime for 14 years", you sure ask a lot of newbie questions around here. This ain't the only one. I've seen other questions you've been asking.

    And to add to that, instead of providing a 60-second, unprocessed clip of your source (as you accused me of providing a "measly" 11-second clip of mine), you just provide 2 images!

    Amazing how the rules don't apply to some of us?

    Now I've been lurking around this website for years and know for a fact that this sort of thing is generally not tolerated around here. People will tell you that images of your source do not help at all, and to provide an unprocessed sample. In fact, at this point people on this site downright ignore that sort of thing because they've grown tired of repeating it.

    The fact that jagabo, out of all people, did not demand this from you, is very surprising! Either the chap is getting up there in age or he really has taken a liking to you!

    Either way, why not try the following:

    Code:
    MCTemporalDenoise(settings="medium", deblock=true, twopass=true, sharp=false, ec=true)
    Then top it off with the following to dither out the artifacts, particularly in your dark areas.

    Code:
    GradFun2DBmod(str=3.0)
    With animation, I usually use SMDegrain() with prefilter=2 for the medium Gaussian blur. That generally takes care of business with no further need for additional filtering. SMDegrain() is a wonderful little filter that takes care of a lot of junk with almost NO detail loss whatsoever. I've tested this thing over time with a lot of sources and have seen some wonders.

    If I DO need to process lighter pixels further, then I run it through my dark-lines mask and postprocess with TNLMeans() or Deen().

    But TNLMeans bands like a mo'fo. So MCTemporalDenoise(settings="medium") or MCTemporalDenoise(settings="low") may help a bit.

    You know, Betelman, if I wanted to hear from an @**hole, I'd fart!

    There is no rule about samples; this particular thing does not require a sample video the way the things you were asking about did. I provide video samples when they are needed or asked for. This can be answered quickly with a simple image - no downloading or video needed. I'm not asking about things like interlacing, or anything that needs more than one frame to be looked at. Usually a video sample is wanted, but it's not needed for everything and every question.

    Jagabo isn't a dumbass, and does not need a video for this particular thing.

    Your suggestions are crap and damaging, which is exactly what I wanted to avoid. The higher bit depth suggestion by poisondeathray, however, is a good one.
  6. Originally Posted by poisondeathray View Post
    The levels adjustments made by ColorYUV make the existing artifacts more visible because of localized contrast differences. If you look at Histogram(), you will see "gaps" in the waveform.

    1) So one option is to use dithering during the levels adjustment. Dithering sort of "hides" or covers up the artifacts.

    Use SmoothLevels, or Levels(dither=true), for your adjustments; or do the filtering at a higher bit depth and downconvert back to 8-bit using some dithering.

    But sometimes dithering can obscure fine details and lines; it's a trade-off.

    This is a rough approximation using SmoothLevels:
    Code:
    ImageSource("unknown_10870 Before.png")
    ConvertToYV12()
    SmoothLevels(18,1.07,235,0,255, Lmode=3, brightSTR=10, darkSTR=25, chroma=100)
    Histogram()
    You can see the banding in the waveform introduced by ColorYUV:
    Code:
    ImageSource("unknown_10870 After.png")
    ConvertToYV12()
    Histogram()
    EDIT - ColorYUV can work at higher bit depths and has autoscaling arguments.

    e.g. a higher-bit-depth conversion with a dithered downconversion:
    Code:
    ImageSource("unknown_10870 After.png")
    ConvertToYV12()
    convertbits(16)
    ColorYUV(levels="TV->PC", opt="coring")
    ColorYUV(off_y=6, opt="coring")
    smoothtweak(saturation=1.00, brightness=-1, contrast=1.00, dither=-1, interp=0, limiter=false)
    tweak(cont=0.99)
    convertbits(8, dither=1) #error diffusion; use "0" for ordered
    histogram()


    2) Another option is to apply some filtering with masks, but with line protection, and optionally other masks, to preserve other details.


    3) Or use something like waifu2x or Anime4K - even if you're not upscaling - they can clean up areas and protect lines. (But don't use the AviSynth version of waifu2x; it's way too slow. Use the VapourSynth caffe version, which runs on the GPU.)
    The higher bit depth approach will probably work great. I just don't have AviSynth+ to be able to use ConvertBits right now, so I'll try that at a later time. Thanks!


