VideoHelp Forum
  1. I was wondering why TIVTCIsCombed is unable to pick up on combing artifacts during fades to black. Please take a look at the picture for an example of what I'm referring to. I've tried playing with cthresh, but to no avail. How would you go about detecting fades to black? (combed or not is fine)

    [Attachment 54246]
  2. You can use something like:

    Code:
    testclip = Sharpen(0.0, 1.0)  # accentuate combing
    altclip = Blur(0.0, 1.0).Sharpen(0.0, 0.7).Subtitle("decombed") # a quick decombed alternate clip
    
    ConditionalFilter(testclip, altclip, last, "IsCombedTIVTC") # replace combed frames with decombed frames
    But with your sample image TFM(cthresh=2) removed the combing. You may have problems with other shots that get deinterlaced inappropriately by the post processor.
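
    If the low cthresh turns out to be good enough everywhere, the whole chain is just the usual TFM/TDecimate pair, something like this (the filename is only a placeholder):

    Code:
    Mpeg2Source("source.d2v")   # placeholder for whatever your source is
    TFM(cthresh=2)              # field match; low cthresh so the fade combing gets post processed
    TDecimate()                 # drop the duplicate frames, back to 23.976 fps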
    Last edited by jagabo; 25th Jul 2020 at 08:38.
  3. Indeed, with a very low threshold it is detected naturally, but so is most of the clip.

    Sharpening seems interesting, but this particular footage has a few artifacts that, when sharpened, get flagged as combing. I'll have to test your solution on a different project. Meanwhile, isn't it possible to detect fades to black naturally by looking at the chroma value of each frame?
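
    Something along these lines is what I have in mind (completely untested; I'm using the average luma rather than chroma here, and the threshold of 32 is a pure guess):

    Code:
    # mark frames whose average luma is very low as fade-to-black candidates
    marked = Subtitle("possible fade to black")
    ConditionalFilter(last, marked, last, "AverageLuma()", "lessthan", "32")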
  4. Do you have a clip one can test with (with a sample fade and sections where the usual filters mess up the picture)? I don't have one handy (though I've seen it many times in the past).
    Last edited by jagabo; 25th Jul 2020 at 11:10.
  5. Sorry for the late reply.

    I've tried a cleaner section of the video, but the sharpening unfortunately does not help. Doing so is, in the end, pretty much equivalent to lowering the threshold in TIVTC: an increase in false positives. And what's worse, not all fades are detected either. I've realized that the combing appears on any kind of fade (to/from black/white, crossfades, etc.) and that this kind of combing, even when present across the whole picture, is much harder for TIVTC to detect. Probably due to the difference in chroma or luma between each field?

    In any case, I can't seem to automate the process without throwing in a lot of false positives, and manually filtering each fade can be painstakingly long. I remembered that in an older thread you suggested doing this:

    Code:
    NextChroma = MergeChroma(Trim(1,0)) # replace chroma with the next frame's chroma
    TestClip = StackHorizontal(UtoY(), VtoY())
    ConditionalFilter(TestClip, NextChroma, last, "IsCombedTIVTC")
    I've yet to try it for this particular case. In theory I imagine it could work for fades to and from solid colors, but not really for crossfades.

    Sadly, I cannot provide a sample due to the nature of the content. I'll check if a different source can be used for this purpose.
  6. That other script fragment was for detecting comb artifacts in the chroma, then replacing the combed chroma with chroma from the next frame. I wouldn't really expect it to help here.

    I usually just use vinverse() to remove that type of residual combing. It can potentially damage some good frames, though.
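
    Typically that just means dropping it in after the field matching, something like this (a sketch; the filename is a placeholder):

    Code:
    Mpeg2Source("source.d2v")
    TFM()          # field match
    vInverse()     # clean up whatever residual combing is left
    TDecimate()    # back to 23.976 fps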
    Last edited by jagabo; 30th Jul 2020 at 16:47.
  7. I came up with this.

    Code:
    ###################################################
    #
    # Build a mask of areas where a pixel has a
    # darker pixel above and below it.
    #
    ##################################################
    
    function FindCombing(clip v, int "threshold")
    {
        threshold = default(threshold, 10)
    
        # find vertical low to high transitions
        Subtract(v, v.blur(0.0, 1.0))
        GeneralConvolution(0, "
            0 -8  0
            0  8  0
            0  0  0", chroma=false, alpha=false)
        lth = mt_lut("x 126 -")
    
        # find vertical high to low transitions
        Subtract(v, v.blur(0.0, 1.0))
        GeneralConvolution(0, "
            0  0  0
            0  8  0
            0 -8  0", chroma=false, alpha=false)
        htl = mt_lut("x 126 -")
    
        # Logical AND(ish)
        Overlay(lth, htl, mode="multiply")
        mt_binarize(threshold)
        mt_expand()
        mt_inpand()
        mt_inpand(chroma="-128")
    }
    
    
    ##################################################
    
    Mpeg2Source("Invader Zim E1.d2v", Info=3) 
    TFM(pp=0)  # field match, no post processing (leave residual combing)
    
    testclip = FindCombing(30)
    #return(StackHorizontal(last, testclip.ColorYUV(analyze=true))) # for analysis
    vi = vInverse().Subtitle("vInverse") # subtitle just for analysis
    
    # replace frames with residual comb artifacts with vInverse() frames (or whatever you want)
    ConditionalFilter(testclip, last, vi, "AverageLuma()", "lessthan", "2")
    
    # TDecimate() here, but we want to see every frame
    I only have one test clip (a cartoon with fades added after a hard telecine) to try it with. I don't know how well it will work with other video. You'll probably have to play around with the thresholds.
  8. Hmm, even with a very low threshold I'm afraid this is not working. I have a hard time understanding your code precisely, but checking pixels above and below seemed like a good idea.

    I've managed to crop a sample from the source. https://mega.nz/file/O8IDVQzL#l2YrJgXkOZhJz1AZiP0CAzHyDTqIf95IzSPnao5WHGA

    "I usually just use vinverse() to remove that type of residual combing. It can potentially damage some good frames though."
    I'm curious, do you mean on the whole clip, or do you manually target the problematic sections?
  9. Vinverse deinterlaces only when it 'sees' interlacing. Since it's not perfect, jagabo is correct in saying good frames could be damaged. However, it can easily be set just to remove residual interlacing in specific sequences of frames.

    Or you could bob it with QTGMC or another bobber and decimate from there. No more interlacing.
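
    For instance, something like this keeps vinverse away from everything except the fade (the frame numbers are made up; adjust them to the actual range):

    Code:
    src   = last
    fixed = src.vInverse()
    # splice the processed range back in between untouched sections
    Trim(src, 0, 1199) ++ Trim(fixed, 1200, 1260) ++ Trim(src, 1261, 0)
    And if you have the RemapFrames plugin, ReplaceFramesSimple(src, fixed, mappings="[1200 1260]") does the same thing without the splicing.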
  10. There's so little motion in that clip... is it 30p except for the fades at 30i? I wasn't sure so I didn't try to IVTC or decimate. In any case, the clip I was testing with had a much faster fade. I changed the two thresholds in the script and got this:

    Code:
    ###################################################
    #
    # Build a mask of areas where a pixel has a
    # darker pixel above AND below it.
    #
    ##################################################
    
    function FindCombing(clip v, int "threshold")
    {
        threshold = default(threshold, 10)
    
        # find vertical low to high transitions
        Subtract(v, v.blur(1.0, 1.0))
        GeneralConvolution(0, "
            0 -8  0
            0  8  0
            0  0  0", chroma=false, alpha=false)
        lth = mt_lut("x 126 -")
    
        # find vertical high to low transitions
        Subtract(v, v.blur(1.0, 1.0))
        GeneralConvolution(0, "
            0  0  0
            0  8  0
            0 -8  0", chroma=false, alpha=false)
        htl = mt_lut("x 126 -")
    
        # Logical AND(ish)
        Overlay(lth, htl, mode="multiply")
        mt_binarize(threshold)
        mt_expand()
        mt_inpand()
        mt_inpand(chroma="-128")
    }
    
    
    ##################################################
    
    Mpeg2Source("2020-07-31.d2v", CPU2="ooooxx", Info=3) 
    
    testclip = FindCombing(0)
    #return(StackHorizontal(last, testclip.ColorYUV(analyze=true)))
    vi = vInverse().Subtitle("vInverse")
    
    # replace frames with residual comb artifacts with vInverse()
    ConditionalFilter(testclip, last, vi, "AverageLuma()", "lessthan", "10")
    I enabled the return(StackHorizontal) line and tried different thresholds for FindCombing until I found a value that showed a lot of white when there are comb artifacts, and a lot less when there aren't any. Then I looked at the average luma to determine what would be a good value to use later in the ConditionalFilter().
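
    If you'd rather read the numbers from a file than off the overlay, something like this should also work (untested on this clip; the filename is a placeholder). It logs the frame number and the mask's average luma for every frame that gets rendered:

    Code:
    # scan the whole clip (or feed it to an encoder) so every frame is
    # actually rendered and its value written to the file
    return WriteFile(testclip, "mask_luma.txt", "current_frame", """ ": " """, "AverageLuma")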

    Another variation I used:

    Code:
    Mpeg2Source("2020-07-31.d2v", CPU2="ooooxx", Info=3) 
    
    testclip = FindCombing(last.Sharpen(0.0, 1.0), 10) # vertical sharpen to accentuate combing
    #return(StackHorizontal(last, testclip.ColorYUV(analyze=true)))
    vi = vInverse().Subtitle("vInverse")
    
    # replace frames with residual comb artifacts with vInverse()
    ConditionalFilter(testclip, last, vi, "AverageLuma()", "lessthan", "10")
    Again, these scripts may hit false positives on other shots, so you may have to tune it further. Or maybe you won't be able to find a compromise that has few enough false positives and false negatives for you.
  11. "There's so little motion in that clip... is it 30p except for the fades at 30i?"
    Correct. Forgive me, I was not able to provide much more than 3 seconds.

    "I enabled the return(StackHorizontal) line and tried different thresholds for FindCombing until I found a value that showed a lot of white when there are comb artifacts, and a lot less when there aren't any. Then I looked at the average luma to determine what would be a good value to use later in the ConditionalFilter()."
    I see now. This iteration is actually quite effective! A couple of missed frames, but nothing too problematic. My script will easily be able to interpret a small gap as insignificant. Thank you very much, jagabo.

    EDIT: On second thought, I'll have to test if it can work with crossfades too...


