VideoHelp Forum
  1. I have a problem with this footage: after deinterlacing it keeps flickering because the luma (and chroma) of every two adjacent frames differ significantly. Any advice on how to handle this issue is welcome.
  2. How did you deinterlace it? I see no flickering myself.

    Can't you bob it with a decent double-rate deinterlacer?
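    A minimal double-rate (bob) sketch with QTGMC, assuming the clip is top field first, would be:
    Code:
    AssumeTFF()             # change to AssumeBFF() if the source is bottom field first
    QTGMC(Preset="Slower")  # double rate by default: one output frame per input field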
  3. The bottom field seems to be slightly darker (or more saturated, or higher in contrast) than the top field. Try tweaking one of the fields so that it visually matches the other.
    Perhaps something like
    Code:
    AssumeTFF()
    QTGMC()
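    # double-rate QTGMC with TFF: even output frames derive from the top fields, odd frames from the bottom fields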
    e=selecteven().tweak(bright=1.0,sat=1.015).SMDegrain()
    o=selectodd().SMDegrain()
    interleave(e,o)
    vinverse()
    There might be more appropriate QTGMC settings to cure the issue, though, or dedicated deflicker algorithms.
  4. Thank you Sharc for your effort. I just thought perhaps this issue is known and there is some filter to deal with it. Also, in static scenes it's more noticeable that the picture in the bottom field is slightly skewed, which introduces some slight shaking that is not visible in dynamic scenes.

    I would also like to read some explanation of why these issues are present, because I like to learn and understand.
  5. The deflicker filters I know of adjust by the average brightness of the frames. But in this video the difference in brightness varies along the vertical axis, so adjusting the average will still leave some flickering. I averaged the first 1024 frames together (blurred), subtracted the two fields to get a spatial map of the average difference in brightness, then used that to adjust one of the fields, making it match the other. That worked pretty well:

    Code:
    #######################################################################
    #
    # Average 1024 blurred frames (really fields as used here) together
    # to get a single frame with the average brightness.
    #
    #######################################################################
    
    function AverageLuma1024(clip v)
    {
        v.BilinearResize(16,72)
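        # each Merge(SelectEven(), SelectOdd()) averages frame pairs and halves the count, so ten passes average 2^10 = 1024 frames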
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        BilinearResize(v.width, v.height)
        Trim(0, length=1)
    }
    
    #######################################################################
    
    
    Mpeg2Source("sample.d2v", CPU2="ooooxx", Info=3) 
    
    sep = SeparateFields()
    adjust = Subtract(sep.SelectEven().AverageLuma1024(), sep.SelectOdd().AverageLuma1024()).ColorYUV(off_y=-121).GreyScale()
    
    SeparateFields()
    e = SelectEven()
    e = Overlay(e, adjust, mode="Subtract").ColorYUV(off_y=5)
    o = SelectOdd()
    
    Interleave(e,o)
    Weave()
    QTGMC()
    
    #######################################################################
  6. Thanks a lot jagabo for this elegant solution. Are these luma offsets (-121 and 5) manually adjusted? Also, this solves the brightness issue, but I still find it problematic that every second field is skewed (or whatever it is) compared with the first one. In static scenes it's more noticeable that the picture shakes slightly. Is there a way to fix this problem?
  7. Originally Posted by Santuzzu
    Thanks a lot jagabo for this elegant solution. Are these luma offsets (-121 and 5) manually adjusted?
    Yes. If you examine a Histogram() of the mask just after Subtract() you'll see most of the values around 128:

    [Attachment 59230: Histogram() of the mask just after Subtract(), with most values clustered around 128]


    If you added that with Overlay(), many pixels would end up way over 255. If you subtracted, many pixels would be less than 0. So I pulled the brightness down to bring the darkest pixels to zero (ignoring the crap at the top and bottom of the frame):

    [Attachment 59231: the same histogram after pulling the brightness down so the darkest pixels sit at zero]


    After subtracting that from the even fields, they were overall a little darker than the odd fields. So I added 5 units back to Y.
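    To reproduce that inspection, a minimal sketch (reusing the source and AverageLuma1024() from the script above) would be:
    Code:
    Mpeg2Source("sample.d2v", CPU2="ooooxx", Info=3)
    sep = SeparateFields()
    mask = Subtract(sep.SelectEven().AverageLuma1024(), sep.SelectOdd().AverageLuma1024())
    # inspect the luma distribution; choose off_y so the darkest useful pixels land at zero
    return mask.Histogram(mode="levels")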

    Originally Posted by Santuzzu
    Also, this solves the brightness issue, but I still find it problematic that every second field is skewed (or whatever it is) compared with the first one. In static scenes it's more noticeable that the picture shakes slightly. Is there a way to fix this problem?
    The best way to fix that is with a line time base corrector in your capture chain. But this comes pretty close:

    Code:
    SeparateFields()
    e = SelectEven()
    o = SelectOdd().HShear(0.4)
    Interleave(e,o)
    Weave()
    I added it right after Mpeg2Source() in the earlier script. HShear comes with the Rotate package: http://avisynth.nl/index.php/Rotate
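    In other words, the top of the combined script would look roughly like this (a sketch; the field-brightness correction and QTGMC() from the earlier script then follow unchanged):
    Code:
    Mpeg2Source("sample.d2v", CPU2="ooooxx", Info=3)
    
    # line-skew fix: shear the odd (bottom) fields slightly, then re-weave
    # (HShear() requires the Rotate package)
    SeparateFields()
    e = SelectEven()
    o = SelectOdd().HShear(0.4)
    Interleave(e,o)
    Weave()
    
    # ...the field-brightness correction and QTGMC() go here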
  8. One more question... Why did you downscale the input to (16, 72) in the averaging function? Is it due to computational efficiency or something else?
  9. Originally Posted by Santuzzu
    One more question... Why did you downscale the input to (16, 72) in the averaging function? Is it due to computational efficiency or something else?
    Partially for efficiency, but mostly to reduce spatial noise and detail; I was afraid it would introduce noise in the final result. Playing around with the intermediate size doesn't seem to make much of a difference in the output, though.
  10. Thank you jagabo once again for your valuable help!
  11. Originally Posted by Santuzzu
    I want to run a simple script and save the mask stored in the variable adjust to an external file, so that I don't need to execute this time-consuming part again and again. I tried a simple script that saves the mask losslessly (uncompressed), but, whether it is due to colorspace conversions or something else, I just get different results when I load the file. Can you help, please?
    Un-comment the two lines after "sep = SeparateFields()" to save just the adjust video (a single frame):

    Code:
    #######################################################################
    #
    # Average 1024 blurred frames (really fields as used here) together
    # to get a single frame with the average brightness.
    #
    #######################################################################
    
    function AverageLuma1024(clip v)
    {
        v.BilinearResize(16,72)
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        Merge(SelectEven(), SelectOdd())
        BilinearResize(v.width, v.height)
        Trim(0, length=1)
    }
    
    #######################################################################
    
    
    Mpeg2Source("sample.d2v", CPU2="ooooxx", Info=3) 
    
    sep = SeparateFields()
    
    #adjust = Subtract(sep.SelectEven().AverageLuma1024(), sep.SelectOdd().AverageLuma1024()).ColorYUV(off_y=-121).GreyScale()
    #return(adjust)
    adjust = AviSource("sample.adjust.avi")
    
    SeparateFields()
    e = SelectEven()
    e = Overlay(e, adjust, mode="Subtract").ColorYUV(off_y=5)
    o = SelectOdd()
    
    Interleave(e,o)
    Weave()
    QTGMC()
    I suspect your editor converted the YV12 video to RGB with a rec.601 conversion. That would have crushed all the superblacks below Y=16. Since all the adjustment values are below Y=16, the video would be completely changed. You can save it as an uncompressed YV12 video in VirtualDub by using Video -> Direct Stream Copy mode. Then when you load the adjustment video it will be identical to the generated video. sample.adjust.avi is attached.
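    If you want to verify that the saved file round-trips losslessly, a quick sketch (appended to the script above, reusing sep and AverageLuma1024()) is to subtract the regenerated mask from the loaded one and read the statistics:
    Code:
    generated = Subtract(sep.SelectEven().AverageLuma1024(), sep.SelectOdd().AverageLuma1024()).ColorYUV(off_y=-121).GreyScale()
    loaded = AviSource("sample.adjust.avi")
    # identical clips give a completely flat difference (minimum equals maximum in the printed analysis)
    return Subtract(generated, loaded).ColorYUV(analyze=true)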
  12. Thank you as always, Direct Stream Copy solved the problem.


