VideoHelp Forum
  1. For animated sources, is it possible to sharpen the entire video, yet leave black lines looking a bit softer?

    Every time I try to sharpen a video, by the time I get the sharpening high enough to look nice and crisp, it appears oversharpened, or the black lines become very intense looking.

    I love the really sharp image, but I hate how it appears oversharpened with strong and sometimes aliased black lines.
  2. You could create an edge mask, then overlay the sharpened image with the original image using that mask. The trick is getting the mask just right. An edge mask will include edges as well as black lines. You might be able to further limit that with a brightness mask. An example using your recent transformers clip with just an edge mask (no cleanup):


    Code:
    LWLibavVideoSource("transformers.mkv") 
    
    sharp = Sharpen(1.0) # just something sharper
    emask = mt_edge().mt_expand().Blur(1.5).GreyScale().Invert() # mask of edges
    sharper = Overlay(last, sharp, mask=emask)
    
    Interleave(last, sharper, emask)
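    If you want to try the brightness-mask idea, here is a minimal sketch (the luma threshold and the extra MaskTools2 calls are just an illustration, adjust to taste) that restricts the mask to dark edges so that bright edges still get sharpened:

    Code:
    LWLibavVideoSource("transformers.mkv")
    
    sharp = Sharpen(1.0)
    edges = mt_edge().mt_expand()            # all edges
    darks = mt_lut("x 80 < 255 0 ?")         # white where luma < 80 (dark areas)
    lmask = mt_logic(edges, darks, mode="min").Blur(1.5).GreyScale().Invert()
    Overlay(last, sharp, mask=lmask)         # sharpen everything except the dark lines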
    Last edited by jagabo; 21st Oct 2017 at 11:07.
  3. Originally Posted by jagabo View Post
    You could create an edge mask, then overlay the sharpened image with the original image using that mask. The trick is getting the mask just right. An edge mask will include edges as well as black lines. You might be able to further limit that with a brightness mask. An example using your recent transformers clip with just an edge mask (no cleanup):


    Code:
    LWLibavVideoSource("transformers.mkv") 
    
    sharp = Sharpen(1.0) # just something sharper
    emask = mt_edge().mt_expand().Blur(1.5).GreyScale().Invert() # mask of edges
    sharper = Overlay(last, sharp, mask=emask)
    
    Interleave(last, sharper, emask)

    Is it possible to apply the debanding filter only to dark or black areas, where it is most noticeable?

    Thanks.
    Use a brightness mask to separate dark areas from bright areas. Something along the lines of

    Code:
    Overlay(deband_filter(), last, mask=Tweak(bright=-16, coring=false).Tweak(cont=4.0, coring=false))
    deband_filter() is whatever filter you use for debanding. Adjust the bright and cont values to get just the parts of the image you want.

    Here you can see the effect:

    Code:
    function GreyRamp()
    {
       black = BlankClip(color=$000000, width=1, height=256)
       white = BlankClip(color=$ffffff, width=1, height=256)
       StackHorizontal(black,white)
       BilinearResize(512, 256)
       Crop(128,0,-128,-0)
    }
    
    GreyRamp().ConvertToYV12(matrix="PC.601")
    PointResize(width*4, height)
    Overlay(AddGrain(50), last, mask=Tweak(bright=-16, coring=false).Tweak(cont=4.0, coring=false))
    TurnRight().Histogram().TurnLeft()
    [Attachment 43518 - sample.jpg]
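    For a concrete version of the first line, assuming you use f3kdb (flash3kyuu_deband) as the debanding filter (any debander will do, the settings are only an illustration):

    Code:
    WhateverSource()                        # placeholder source
    debanded = f3kdb()                      # your debander of choice
    bmask = Tweak(bright=-16, coring=false).Tweak(cont=4.0, coring=false)
    Overlay(debanded, last, mask=bmask)     # bright areas (white in the mask) keep the original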
  5. Originally Posted by jagabo View Post
    Use a brightness mask to separate dark areas from bright areas. Something along the lines of

    Code:
    Overlay(deband_filter(), last, mask=Tweak(bright=-16, coring=false).Tweak(cont=4.0, coring=false))
    deband_filter() is whatever filter you use for debanding. Adjust the bright and cont values to get just the parts of the image you want.

    Here you can see the effect:

    Code:
    function GreyRamp()
    {
       black = BlankClip(color=$000000, width=1, height=256)
       white = BlankClip(color=$ffffff, width=1, height=256)
       StackHorizontal(black,white)
       BilinearResize(512, 256)
       Crop(128,0,-128,-0)
    }
    
    GreyRamp().ConvertToYV12(matrix="PC.601")
    PointResize(width*4, height)
    Overlay(AddGrain(50), last, mask=Tweak(bright=-16, coring=false).Tweak(cont=4.0, coring=false))
    TurnRight().Histogram().TurnLeft()
    [Attachment 43518 - sample.jpg]
    Would it also be possible to just apply the denoiser to everything but the black lines?
  6. Originally Posted by frank_zappa View Post
    Would it also be possible to just apply the denoiser to everything but the black lines?
    Like in the first post (sharpening everything except dark lines), use a brightness/edge mask to overlay the denoised image with the non-denoised image. Something along the lines of:

    Code:
    WhateverSource()
    
    denoised = WhateverDenoiser()
    edgemask = mt_edge().mt_expand().Blur(1.5).GreyScale().Invert() # mask of edges
    Overlay(last, denoised, mask=edgemask)
  7. Originally Posted by jagabo View Post
    Originally Posted by frank_zappa View Post
    Would it also be possible to just apply the denoiser to everything but the black lines?
    Like in the first post (sharpening everything except dark lines), use a brightness/edge mask to overlay the denoised image with the non-denoised image. Something along the lines of:

    Code:
    WhateverSource()
    
    denoised = WhateverDenoiser()
    edgemask = mt_edge().mt_expand().Blur(1.5).GreyScale().Invert() # mask of edges
    Overlay(last, denoised, mask=edgemask)
    Would it be possible to port this script to AviSynth, since it is written in VapourSynth? Or maybe you could study the function and create a similar one in AviSynth. It is very interesting if you can read it, friend. A lot of people are looking for a mask like this, and for good debanding applied only to black (dark) areas, where it is most noticeable.

    https://kageru.moe/article.php?p=edgemasks

    https://kageru.moe/article.php?p=adaptivegrain
    Here's his Kirsch function (I had to find and fix a bug -- he had a couple of numbers transposed in one of his matrices) using AviSynth GeneralConvolution() and Overlay():

    Code:
    #######################################################################
    #
    # Kirsch edge detection algorithm.  Requires RGB32 input, outputs RGB32
    #
    #######################################################################
    
    function Kirsch(clip c)
    {
        c
    
        e0 = GeneralConvolution(matrix=" 5,  5,  5, -3,  0, -3, -3, -3, -3")
        e1 = GeneralConvolution(matrix="-3,  5,  5,  5,  0, -3, -3, -3, -3")
        e2 = GeneralConvolution(matrix="-3, -3,  5,  5,  0,  5, -3, -3, -3")
        e3 = GeneralConvolution(matrix="-3, -3, -3,  5,  0,  5,  5, -3, -3")
        e4 = GeneralConvolution(matrix="-3, -3, -3, -3,  0,  5,  5,  5, -3")
        e5 = GeneralConvolution(matrix="-3, -3, -3, -3,  0, -3,  5,  5,  5")
        e6 = GeneralConvolution(matrix=" 5, -3, -3, -3,  0, -3, -3,  5,  5")
        e7 = GeneralConvolution(matrix=" 5,  5, -3, -3,  0, -3, -3, -3,  5")
    
        e01 = Overlay(e0, e1, mode="lighten")
        e23 = Overlay(e2, e3, mode="lighten")
        e45 = Overlay(e4, e5, mode="lighten")
        e67 = Overlay(e6, e7, mode="lighten")
    
        e0123 = Overlay(e01, e23, mode="lighten")
        e4567 = Overlay(e45, e67, mode="lighten")
    
        Overlay(e0123, e4567, mode="lighten")
    }
    
    #######################################################################
    It's pretty slow though. It looks like he darkened his sample mask image a bit. You may need to do so too.
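    If you want to darken the mask the same way, one simple option (purely a sketch, the gamma value is arbitrary) is a Levels() gamma adjustment after converting back to YUV:

    Code:
    kmask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="PC.601")
    kmask = kmask.Levels(0, 0.7, 255, 0, 255, coring=false)   # gamma < 1.0 pulls the mid-tones down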
  9. Originally Posted by jagabo View Post
    Here's his Kirsch function (I had to find and fix a bug -- he had a couple of numbers transposed in one of his matrices) using AviSynth GeneralConvolution() and Overlay():

    Code:
    #######################################################################
    #
    # Kirsch edge detection algorithm.  Requires RGB32 input, outputs RGB32
    #
    #######################################################################
    
    function Kirsch(clip c)
    {
        c
    
        e0 = GeneralConvolution(matrix=" 5,  5,  5, -3,  0, -3, -3, -3, -3")
        e1 = GeneralConvolution(matrix="-3,  5,  5,  5,  0, -3, -3, -3, -3")
        e2 = GeneralConvolution(matrix="-3, -3,  5,  5,  0,  5, -3, -3, -3")
        e3 = GeneralConvolution(matrix="-3, -3, -3,  5,  0,  5,  5, -3, -3")
        e4 = GeneralConvolution(matrix="-3, -3, -3, -3,  0,  5,  5,  5, -3")
        e5 = GeneralConvolution(matrix="-3, -3, -3, -3,  0, -3,  5,  5,  5")
        e6 = GeneralConvolution(matrix=" 5, -3, -3, -3,  0, -3, -3,  5,  5")
        e7 = GeneralConvolution(matrix=" 5,  5, -3, -3,  0, -3, -3, -3,  5")
    
        e01 = Overlay(e0, e1, mode="lighten")
        e23 = Overlay(e2, e3, mode="lighten")
        e45 = Overlay(e4, e5, mode="lighten")
        e67 = Overlay(e6, e7, mode="lighten")
    
        e0123 = Overlay(e01, e23, mode="lighten")
        e4567 = Overlay(e45, e67, mode="lighten")
    
        Overlay(e0123, e4567, mode="lighten")
    }
    
    #######################################################################
    It's pretty slow though. It looks like he darkened his sample mask image a bit. You may need to do so too.
    Thanks so much. Is it possible with YUV colors?

    How can you use this more detailed mask to clean the banding from the image and also apply a cleaning filter such as dfttest, etc.?

    Code:
    # Quick overview of all scripts described in this article:
    ################################################################
    
    # Use retinex to greatly improve the accuracy of the edge detection in dark scenes.
    # draft=True is a lot faster, albeit less accurate
    def retinex_edgemask(src: vs.VideoNode, sigma=1, draft=False) -> vs.VideoNode:
        core = vs.get_core()
        src = mvf.Depth(src, 16)
        luma = mvf.GetPlane(src, 0)
        if draft:
            ret = core.std.Expr(luma, 'x 65535 / sqrt 65535 *')
        else:
            ret = core.retinex.MSRCP(luma, sigma=[50, 200, 350], upper_thr=0.005)
        mask = core.std.Expr([kirsch(luma), ret.tcanny.TCanny(mode=1, sigma=sigma).std.Minimum(
            coordinates=[1, 0, 1, 0, 0, 1, 0, 1])], 'x y +')
        return mask
    
    
    # Kirsch edge detection. This uses 8 directions, so it's slower but better than Sobel (4 directions).
    # more information: https://ddl.kageru.moe/konOJ.pdf
    def kirsch(src: vs.VideoNode) -> vs.VideoNode:
        core = vs.get_core()
        w = [5]*3 + [-3]*5
        weights = [w[-i:] + w[:-i] for i in range(4)]
        c = [core.std.Convolution(src, (w[:4]+[0]+w[4:]), saturate=False) for w in weights]
        return core.std.Expr(c, 'x y max z max a max')
    
    
    # should behave similar to std.Sobel() but faster since it has no additional high-/lowpass or gain.
    # the internal filter is also a little brighter
    def fast_sobel(src: vs.VideoNode) -> vs.VideoNode:
        core = vs.get_core()
        sx = src.std.Convolution([-1, -2, -1, 0, 0, 0, 1, 2, 1], saturate=False)
        sy = src.std.Convolution([-1, 0, 1, -2, 0, 2, -1, 0, 1], saturate=False)
        return core.std.Expr([sx, sy], 'x y max')
    
    
    # a weird kind of edgemask that draws around the edges. probably needs more tweaking/testing
    # maybe useful for edge cleaning?
    def bloated_edgemask(src: vs.VideoNode) -> vs.VideoNode:
        return src.std.Convolution(matrix=[1,  2,  4,  2, 1,
                                           2, -3, -6, -3, 2,
                                           4, -6,  0, -6, 4,
                                           2, -3, -6, -3, 2,
                                           1,  2,  4,  2, 1], saturate=False)
    Last edited by frank_zappa; 29th Oct 2017 at 16:14.
  10. Originally Posted by frank_zappa View Post
    Is it possible with YUV colors?
    GeneralConvolution() only works in RGB32. Just convert your video to RGB32 before calling Kirsch(). And convert the resulting mask to YUV if necessary.

    Originally Posted by frank_zappa View Post
    How can you use this more detailed mask to clean the image of the banding and also apply some filter cleaning such as Dfttest, etc?
    The same way you use any other edge mask.
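    In script form, that round trip is just (sketch):

    Code:
    src = last
    kmask = src.ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")   # mask back in YUV for Overlay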
  11. Originally Posted by jagabo View Post
    Originally Posted by frank_zappa View Post
    Is it possible with YUV colors?
    GeneralConvolution() only works in RGB32. Just convert your video to RGB32 before calling Kirsch(). And convert the resulting mask to YUV if necessary.

    Originally Posted by frank_zappa View Post
    How can you use this more detailed mask to clean the banding from the image and also apply a cleaning filter such as dfttest, etc.?
    The same way you use any other edge mask.
    Like this?
    Would you please give us an example?

    Code:
    denoised = WhateverDenoiser()
    edgemask = mt_edge().mt_expand().Blur(1.5).GreyScale().Invert() # mask of edges
    Overlay(last, denoised, mask=edgemask)
  12. Originally Posted by frank_zappa View Post
    Would you please give us an example?
    Code:
    WhateverSource()
    denoised = WhateverDenoiser()
    edgemask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")
    Overlay(denoised, last, mask=edgemask)
    In practice you may want to manipulate the edgemask a little to fine-tune it.
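    Since dfttest was mentioned, a concrete (hypothetical) version might look like this; the sigma value is arbitrary:

    Code:
    LWLibavVideoSource("transformers.mkv")     # placeholder source
    denoised = dfttest(sigma=8.0)              # or any other denoiser
    edgemask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")
    Overlay(denoised, last, mask=edgemask)     # the lines (white in the mask) keep the original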
  13. Originally Posted by jagabo View Post
    Originally Posted by frank_zappa View Post
    Would you please give us an example?
    Code:
    WhateverSource()
    denoised = WhateverDenoiser()
    edgemask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")
    Overlay(denoised, last, mask=edgemask)
    In practice you may want to manipulate the edgemask a little to fine-tune it.
    Is an example possible, please? Thanks so much.
    For example, you might want to eliminate noise and low-contrast edges:

    Code:
    WhateverSource()
    denoised = WhateverDenoiser()
    edgemask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")
    edgemask = edgemask.mt_binarize(120).Blur(1.0).Blur(1.0)
    Overlay(denoised, last, mask=edgemask)

    Original image:
    [Attachment 43538 - source.jpg]

    Result from just Kirsch:
    [Attachment 43539 - kirsch.jpg]

    Result after adding mt_binarize and blur:
    [Attachment 43540 - kirschplus.jpg]
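    Another knob worth trying (just a sketch, values made up): widen the protected area around the lines with mt_expand() before the blur, so the denoiser never bites into them:

    Code:
    edgemask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")
    edgemask = edgemask.mt_binarize(120).mt_expand().mt_expand().Blur(1.0).Blur(1.0)
    Overlay(denoised, last, mask=edgemask)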
  15. Originally Posted by jagabo View Post
    For example, you might want to eliminate noise and low-contrast edges:

    Code:
    WhateverSource()
    denoised = WhateverDenoiser()
    edgemask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")
    edgemask = edgemask.mt_binarize(120).Blur(1.0).Blur(1.0)
    Overlay(denoised, last, mask=edgemask)

    Original image:
    [Attachment 43538 - source.jpg]


    Result from just Kirsch:
    [Attachment 43539 - kirsch.jpg]


    Result after adding mt_binarize and blur:
    [Attachment 43540 - kirschplus.jpg]
    Is it possible to use only the Kirsch() mask, or is it not recommended?

    Thanks so much.
    Anything's possible. Just find what works for the video.
    Thanks so much!

    Is it possible to get a little help with "Macross", the other video I sent you?


    https://mega.nz/#!VGYgyBDA!gS6xU7ybR7-qOyv33-eMaj88gTaFwr6yJfqBIL1HYh4

    It is very complicated :(
    Last edited by frank_zappa; 30th Oct 2017 at 00:38.
  18. Originally Posted by jagabo View Post
    Anything's possible. Just find what works for the video.
    Code:
    denoised = MCTD(settings="HIGH", AA=false, protect=true, edgeclean=true, stabilize=true, enhance=true) 
    edgemask = ConvertToRGB32().Kirsch().ConvertToYV12(matrix="pc.601")
    edgemask = edgemask.mt_binarize(120).Blur(1.0).Blur(1.0)
    Overlay(denoised, last, mask=edgemask)
    I can't activate MT mode to increase the speed. I'm trying even with AviSynth+ and it can't be done.

    Any help, please?
    Error message?
  20. Originally Posted by jagabo View Post
    Error message?
    It doesn't say anything, it just doesn't load in x264. It takes too long and then an error comes out, a deadlock or something.
  21. Originally Posted by frank_zappa View Post
    Originally Posted by jagabo View Post
    Error message?
    It doesn't say anything, it just doesn't load in x264. It takes too long and then an error comes out, a deadlock or something.
    Code:
     
    Warning: Input process did not respond for 60 seconds, potential deadlock...
    [2017-10-30][10:58:37] 
    [2017-10-30][10:58:37] PROCESS EXITED WITH ERROR CODE: 1
    Use standard debugging procedure to determine which filter is causing the problem -- replace or eliminate filters one at a time. But I suspect you are running out of memory with MCTD.
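    For what it's worth, a bare-bones AviSynth+ MT skeleton looks roughly like this (a sketch only -- MT mode 2 is not safe for every filter, and a heavy script like MCTD may still exhaust memory):

    Code:
    SetFilterMTMode("DEFAULT_MT_MODE", 2)   # request multithreading for the filters that follow
    SetMemoryMax(2048)                      # cap the AviSynth cache in MB; adjust as needed
    LWLibavVideoSource("macross.mkv")       # placeholder source name
    # ... your filtering here ...
    Prefetch(4)                             # thread count; must be the last line of the script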
  23. Originally Posted by jagabo View Post
    Use standard debugging procedure to determine which filter is causing the problem -- replace or eliminate filters one at a time. But I suspect you are running out of memory with MCTD.
    I have 16 GB (2667 MHz).
    If you are running 32-bit VirtualDub it can only use 2 GB.
  25. Originally Posted by jagabo View Post
    If you are running 32-bit VirtualDub it can only use 2 GB.
    It's x64 (VirtualDub_FilterMod_40463).

  26. Originally Posted by jagabo View Post
    If you are running 32-bit VirtualDub it can only use 2 GB.
    My friend, a huge favor: this script applies its filtering only to black (dark) areas. Would it be possible to migrate it to AviSynth, please?

    https://kageru.moe/article.php?p=adaptivegrain

    Vapoursynth script

    https://kageru.moe/blog/adaptivegrain.py

    A lot of people fight against banding. I hope you can help us. Thanks so much!
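    Not a port of adaptivegrain, but a rough AviSynth approximation of the same idea (grain only where the picture is dark) can be built from the masks already shown in this thread; AddGrainC and the values below are just placeholders:

    Code:
    WhateverSource()
    grained  = AddGrainC(var=12.0)          # grain generator (AddGrainC plugin assumed)
    darkmask = Tweak(bright=-16, coring=false).Tweak(cont=4.0, coring=false).Invert().GreyScale()
    Overlay(last, grained, mask=darkmask)   # grain shows only in the dark areas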