VideoHelp Forum




  1. Hello!

    I have this mkv source where movement seems to be well deinterlaced, but some still frames (like the one attached below) are combed.

    Is there any way to decomb in such a case? I mean, the combing is very sudden and easy to notice, especially because the frame right before the artifact is decombed and looks fine.
    Image Attached Files
  2. but some still frames (like the one attached below)
    you attached an interlaced clip,...
    you did not show the still frame where you see the combing
    you did not share how you deinterlace
    -> In case you use AviSynth and/or VapourSynth, residual combing removal filters like vinverse or vinverse2 might help (see the sketch below).
    using:
    Code:
    clip = havsfunc.QTGMC(Input=clip, Preset="Fast", TFF=True, opencl=True)
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    clip = clip[::2]
    and a quick scroll through the clip didn't show any combing that I noticed directly,...
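
    If residual combing remains after that, something along these lines might work (a minimal sketch - it assumes your havsfunc build ships the Vinverse/Vinverse2 ports; a standalone vinverse plugin exists as well):
    Code:
    clip = havsfunc.QTGMC(Input=clip, Preset="Fast", TFF=True, opencl=True)
    clip = core.std.SetFieldBased(clip, 0)  # make sure content is perceived as frame based
    clip = clip[::2]
    clip = havsfunc.Vinverse(clip)  # residual comb removal; Vinverse2 is the alternative variant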

    Cu Selur

    PS: additionally, QTGMC(InputType=2) might also help,..
  3. Originally Posted by Selur View Post
    you attached an interlaced clip,...
    Well, the DVD Video standard is indeed interlaced. However, I don't see much combing, except in still frames.

    Originally Posted by Selur View Post
    you did not show the still frame where you see the combing
    I'll attach below both the combed frame and the decombed frame, the combed one being a couple of frames after the normal one. The combing is very noticeable on Wolf Bachofner's neckband and glasses.

    Originally Posted by Selur View Post
    you did not share how you deinterlace
    I actually didn't deinterlace anything. What you see is an MKV copy of a DVD video, so no processing was done on that video, at least not on my side. I tried HandBrake's Default Decomb preset. That didn't help the combing on the still frames, but it definitely created some combing where there wasn't any before, and it made movement somewhat choppier in some parts of the video (specifically the end credits). I also tried using Decomb EEDI2 Bob, but that was abominable - it made movement very choppy just about everywhere.

    I tried using StaxRip's QTGMC, but it cannot load that plugin: «Cannot load file 'Plugins_JPSDR.dll'. Platform returned code 1114.». If you say the QTGMC filter is worth trying to get running, I'll look into it.
    Image Attached Thumbnails: non-combed.png, combing neckband glasses.png
  4. Here's an AviSynth script that uses mt_motion() and ConditionalFilter() to QTGMC() only still frames:

    Code:
    v1 = LWLibavVideoSource("title_t00_edit.mkv", cache=false, prefer_hw=2) 
    v2 = v1.QTGMC(FPSDivisor=2).Subtitle("qtgmc")
    motion = mt_motion(v1, thT=255).GreyScale()
    
    ConditionalFilter(motion, v1, v2, "AverageLuma()", "greaterthan", "4.0")
    The QTGMC'd frames are subtitled to make them obvious. Remove the Subtitle() once you're satisfied it's working correctly. You may have to adjust the "4.0" threshold to catch more or fewer still frames. Also, this code has a false positive at the very first frame of the video and may miss the last duplicate of a series.
  5. Another approach is to manually specify blocks of frames to replace with the first frame of that block:

    Code:
    ##########################################################################
    #
    #  Replace a block of frames with copies of the first frame of the block
    #
    # frame_first is the first frame of the block (which will be copied to the rest of the block)
    # frame_last is the last frame of the block to be replaced
    #
    ##########################################################################
    
    function LockFrame(clip v, int frame_first, int frame_last)
    {
        Trim(v, 0, frame_first) + Trim(v, frame_last+1, 0)
        Loop(frame_last-frame_first+1, frame_first, frame_first)
    }
    
    ##########################################################################
    
    
    v1 = LWLibavVideoSource("title_t00_edit.mkv", cache=false, prefer_hw=2).ShowFrameNumber()
    
    LockFrame(v1, 58, 98) # repeat frame 58 over frames 59 to 98
    Note that the very last frame of the video can't be replaced with LockFrame(). You have to manually identify which blocks you want to lock, but since this is probably only needed for a short portion of the video, it's not much trouble.
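
    If you go the VapourSynth route instead, a rough equivalent sketch (assuming your clip is already loaded as "clip"; std.FreezeFrames is built in, and the frame numbers just mirror the example above):
    Code:
    # repeat frame 58 over frames 59 to 98, like LockFrame(v1, 58, 98) above
    clip = core.std.FreezeFrames(clip, first=[59], last=[98], replacement=[58])
    # or with plain slicing: frames 0-58, then frame 58 repeated 40 times, then frame 99 onward
    # clip = clip[:59] + clip[58] * 40 + clip[99:]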
  6. Originally Posted by Selur View Post
    Code:
    clip = havsfunc.QTGMC(Input=clip, Preset="Fast", TFF=True, opencl=True)
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    clip = clip[::2]
    This worked, but made the credits choppy. I'll attach both an original and an edited part of the credits.

    Code:
    v1 = LWLibavVideoSource("title_t00_edit.mkv", cache=false, prefer_hw=2) 
    v2 = v1.QTGMC(FPSDivisor=2).Subtitle("qtgmc")
    motion = mt_motion(v1, thT=255).GreyScale()
    
    ConditionalFilter(motion, v1, v2, "AverageLuma()", "greaterthan", "4.0")
    I cannot get this working. Here's the error I'm getting:

    Code:
    Traceback (most recent call last):
    File "src\cython\vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
    File "src\cython\vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
    File "C:/Users/----/Desktop/Untitled.vpy", line 4, in 
    clip = core.ffms2.Source("C:\\Users\\----\\Desktop\\credits part.mkv", threads=6)
    NameError: name 'LWLibavVideoSource' is not defined
    I've settled on VapourSynth as an interpreter for these scripts. I haven't used code to process video until now, so AviSynth and VapourSynth are totally new to me. I am (somewhat) familiar with C++, though, so I'm not completely new to programming.

    Another thing that bothers me is that the output has a huge bitrate (~80 Mbps). Is there any way to limit that in the script, or can that be done with ffmpeg?
    Image Attached Files
    Last edited by antoniu200; 30th Jan 2021 at 13:36.
  7. This worked, but made the credits choppy. I'll attach both an original and an edited part of the credits.
    you could try to bob it (remove the 'clip = clip[::2]' part from the script)
  8. Yep, that worked well. Thank you for your help!

    Is there any way I can sharpen the source using VapourSynth and QTGMC? Currently, I seem to need fft3dfilter, which is incompatible with VapourSynth. There are some ports of fft3dfilter on GitHub, but I have no idea how to build those sources. Sure, I could learn, but that would be too much of a hassle over 5 frames of combing. So, does anybody have a DLL of fft3dfilter for VapourSynth?

    Anyway, the process was pretty slow for a 45-minute video (1.1x speed). Wouldn't jagabo's idea make the processing faster, if it worked for me? I also have a feeling ffmpeg's average bitrate setting (libx264 -b:v 4M) was hard on resources. Is there no way of copying the source where no QTGMC is applied and re-encoding strictly what is needed?
  9. https://vsdb.top/ should have links to all the GitHub repos & co., which should have precompiled DLLs.
    QTGMC itself isn't a sharpening filter, but there are tons of sharpening filters for VapourSynth,..
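    One minimal sketch using only the built-in std functions (an unsharp-mask style sharpen on the luma; the 0.5 strength is just an illustration value, and dedicated plugins like CAS or aWarpSharp2 may do better):
    Code:
    # blur the luma slightly, then add back half of the lost detail
    blur = core.std.Convolution(clip, matrix=[1, 2, 1, 2, 4, 2, 1, 2, 1], planes=[0])
    clip = core.std.Expr([clip, blur], expr=["x x y - 0.5 * +", "", ""])  # luma only, chroma copied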
  10. Originally Posted by antoniu200 View Post
    Code:
    v1 = LWLibavVideoSource("title_t00_edit.mkv", cache=false, prefer_hw=2) 
    v2 = v1.QTGMC(FPSDivisor=2).Subtitle("qtgmc")
    motion = mt_motion(v1, thT=255).GreyScale()
    
    ConditionalFilter(motion, v1, v2, "AverageLuma()", "greaterthan", "4.0")
    I cannot get this working. Here's the error I'm getting:

    Code:
    Traceback (most recent call last):
    File "src\cython\vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
    File "src\cython\vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
    File "C:/Users/----/Desktop/Untitled.vpy", line 4, in 
    clip = core.ffms2.Source("C:\\Users\\----\\Desktop\\credits part.mkv", threads=6)
    NameError: name 'LWLibavVideoSource' is not defined
    I've settled for VapourSynth as an interpreter for these scripts.
    The script I gave was for AviSynth. Maybe someone can translate it to VapourSynth for you. Sample encoding attached.
    Image Attached Files
  11. Also, have you processed this video (in post #1) already? It's already been badly deinterlaced with a "discard field and duplicate line" function -- a poor choice. In light of that you will need some different processing. But I don't want to spend time on it until I know whether that's really your starting point.
    Last edited by jagabo; 30th Jan 2021 at 17:15.
  12. OK, here's a script for the original video, assuming it's all you have:

    Code:
    LWLibavVideoSource("title_t00_edit.mkv")
    mot = mt_motion( thT=255).GreyScale()
    ConditionalFilter(mot, QTGMC(InputType=2).Subtitle("qtgmc(InputType=2)"), QTGMC(FPSDivisor=2).Subtitle("qtgmc(fpsdivisor=2)"), "AverageLuma()", "greaterthan", "4.0")
    Image Attached Files
  13. Originally Posted by antoniu200 View Post
    I actually didn't deinterlace anything. What you see is an MKV copy of a DVD video, so no processing was done on that video, at least not from my side.
    This should answer your question.

    a "discard field and duplicate line" function
    So what you're saying is that they discarded one of the fields that made up each frame, and then the discarded lines were replaced with duplicates of the remaining lines?

    Code:
    LWLibavVideoSource("title_t00_edit.mkv")
    mot = mt_motion( thT=255).GreyScale()
    ConditionalFilter(mot, QTGMC(InputType=2).Subtitle("qtgmc(InputType=2)"), QTGMC(FPSDivisor=2).Subtitle("qtgmc(fpsdivisor=2)"), "AverageLuma()", "greaterthan", "4.0")
    I'm not sure how your script works; would it be OK if I asked you to explain it a bit? Does it get applied all through the video?

    EDIT: I just watched the processed MKV you sent me. It does get applied throughout the video. I'd have to view it on my PC screen, though, since right now I'm on my phone.

    EDIT 2: I'll also give AviSynth another try, maybe I can get it working on my PC.
    Last edited by antoniu200; 31st Jan 2021 at 05:34.
  14. Before I get to my computer, I read through this PDF file from Intel's website regarding deinterlacing: https://www.intel.com/content/dam/altera-www/global/en_US/pdfs/literature/wp/wp-01117-...nterlacing.pdf
    My conclusion, comparing how that video looks to other, more recent DVDs, is that they used Bob with Interpolation, since everything seems blurred but not necessarily lacking in detail.

    What I think the solution would be is to apply QTGMC (Bob with Interpolation) on those still images and, after that, denoise and sharpen the video file.

    If I am wrong, please say so and explain as well as you can, so I can understand what's going on.
  15. Originally Posted by antoniu200 View Post
    Originally Posted by antoniu200 View Post
    I actually didn't deinterlace anything. What you see is an MKV copy of a DVD video, so no processing was done on that video, at least not from my side.
    This should answer your question.
    I just wanted to be sure.

    Originally Posted by antoniu200 View Post
    a "discard field and duplicate line" function
    So what you're saying is they discarded one of the fields that made an entire frame and then, the lines that weren't new were replaced with duplicates of the new lines?
    Yes, one field was discarded and replaced with a copy of the remaining field. That is, each line of the remaining field is duplicated. You can see the aliasing on diagonal edges that results from that. In the MPEG-2 compressed source video it's not perfect because of the errors caused by the lossy compression. Here's an 8x point enlargement of a small part of one frame:

    [Attachment 57075]
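
    To make that concrete, this kind of "deinterlace" amounts to roughly the following (a hypothetical VapourSynth sketch of the damage, not the studio's actual processing chain):
    Code:
    fields = core.std.SeparateFields(clip, tff=True)        # 50 half-height fields per second
    top = core.std.SetFieldBased(fields[::2], 0)            # throw away every bottom field
    bad = core.resize.Point(top, top.width, top.height * 2) # duplicate each remaining line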


    Originally Posted by antoniu200 View Post
    Code:
    LWLibavVideoSource("title_t00_edit.mkv")
    mot = mt_motion( thT=255).GreyScale()
    ConditionalFilter(mot, QTGMC(InputType=2).Subtitle("qtgmc(InputType=2)"), QTGMC(FPSDivisor=2).Subtitle("qtgmc(fpsdivisor=2)"), "AverageLuma()", "greaterthan", "4.0")
    Not sure how your script works, would it be ok if I asked you to explain it a bit? Does it get applied all through the video?
    After loading the video with LWLibavVideoSource(), it builds a motion map with mt_motion(). In parts of the video that are still, the motion map is almost totally black (again, lossy compression may create a few non-black pixels). In parts of the video where there is motion, there are many white pixels. Here's the motion map showing the differences between frames 0 and 1:

    [Attachment 57076]


    ConditionalFilter() takes three video clips as inputs. The first clip is used for analysis. Depending on that analysis it returns a frame from one of the other two clips. That is, it selects a frame from one clip or the other depending on the analysis of the test clip. In this case the test clip is the motion map -- where there is motion we take the frame from the second clip, where there is no motion we take the frame from the third clip. The second clip is a copy of the source video that has been cleaned up by QTGMC's facility for fixing the result of bad deinterlacing. It smooths out all the aliasing created by the bad deinterlacing. The third clip is the source video deinterlaced by QTGMC to remove the comb artifacts. The decision of which frame to pick is based on whether the average brightness of the test (motion map) clip exceeds a threshold of 4.0.

    If you don't want to filter the moving frames you can replace the second clip with "last" (without the quotes). AviSynth uses the clip named last whenever you don't specify a name. So a simple script like:

    Code:
    LWlibavVideoSource("filename.ext")
    QTGMC()
    really means

    Code:
    last = LWlibavVideoSource("filename.ext")
    last = QTGMC(last)
    return(last)
    I believe VapourSynth doesn't allow that shorthand. It requires all streams to be explicitly named.
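
    For example, the VapourSynth version of that two-line script has to name everything explicitly (a rough sketch, assuming the lsmas source plugin and the havsfunc module are installed):
    Code:
    import vapoursynth as vs
    import havsfunc as haf
    core = vs.core
    
    clip = core.lsmas.LWLibavSource("filename.ext")
    clip = haf.QTGMC(clip, TFF=True)
    clip.set_output()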

    Originally Posted by antoniu200 View Post
    EDIT: I just watched the processed mkv you sent me. It does get applied all throughout the video. I'd have to view it on my PC screen, since right now I'm on my phone.
    Yes, all frames are processed. The motion frames have their aliasing artifacts smoothed by QTGMC, and the still frames are deinterlaced by QTGMC.

    Originally Posted by antoniu200 View Post
    EDIT 2: I'll also give AviSynth another try, maybe I can get it working on my PC.
    Unfortunately, QTGMC is one of the hardest third-party filters to get working in AviSynth because it requires several other third-party filters. Since they are all created by different people, version differences can lead to problems. I used 64-bit AviSynth+ for all the processing of your video. The processing is rather slow.
  16. By the way, the difference between the above script and simply using QTGMC(FPSDivisor=2) on the entire video isn't that big. You should consider just using that in VapourSynth.
    Last edited by jagabo; 31st Jan 2021 at 09:18.
  17. Originally Posted by antoniu200 View Post
    Originally Posted by Selur View Post
    Code:
    clip = havsfunc.QTGMC(Input=clip, Preset="Fast", TFF=True, opencl=True)
    # make sure content is preceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    clip = clip[::2]
    This worked, but made the credits choppy. I'll attach both an original and an edited part of the credits.

    If you use any method of conditional deinterlacing/processing, the credits will be half-sampled at 25p, not 50p, so they will be less smooth. The credits are true interlaced content (50 samples / second). The rest of the content is 25.

    The only ways to keep fluidity are to double-rate deinterlace everything (e.g. QTGMC on everything, but that leaves duplicate frames on 25p sections), or VFR (mixed 25p, 50p), which is more difficult to do and less compatible in some scenarios (e.g. NLEs), or to process it at 50p and then re-interlace it. Interlaced (PAFF) or MBAFF material also has problems in some scenarios (e.g. web, some devices).



    Another thing that bothers me is that the output has a huge bitrate (~80 Mbps). Is there any way one could limit that in the script or can that be done with ffmpeg?
    VapourSynth / AviSynth frameserve uncompressed video - you cannot limit the bitrate in the script; it has to be done with encoding settings. You can change the encoding settings with any encoder. FFmpeg can be compiled with AviSynth and VapourSynth support (so it has a native demuxer and can read .vpy or .avs scripts directly).
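
    For example, a typical pipe looks something like this (exact flags depend on your builds - newer vspipe uses "-c y4m" instead of "--y4m", and you can swap -crf 18 for something like -b:v 4M if you specifically want an average-bitrate target):
    Code:
    vspipe --y4m script.vpy - | ffmpeg -i - -c:v libx264 -crf 18 -preset medium output.mkv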



    This is a similar conditional processing approach in VapourSynth. I use a gamma boost with Levels to debug (frames which get affected will look brighter) - this way you can adjust settings to make sure you cover all the frames you need and ignore the frames you don't want affected. It would be much faster than QTGMC'ing everything. You can also use faster QTGMC settings; there are only minor quality differences between the faster and slower presets, but the speed difference between them can be up to roughly 10x.

    Code:
    import vapoursynth as vs
    import functools
    import havsfunc as haf
    core = vs.get_core()
    
    clip = core.lsmas.LWLibavSource(r'PATH\title_t00_edit.mkv')
    #clip = core.lsmas.LWLibavSource(r'PATH\credits part.mkv')
    
    def conditionalDeint(n, f, orig, deint):
        if f.props['_Combed']:
            return deint
        else:
            return orig
    
    #deint = core.std.Levels(clip, min_in=0, max_in=255, gamma=4, min_out=0, max_out=255, planes=[0])
    deint = haf.QTGMC(clip, Preset='faster', TFF=True)
    deint = deint[::2]
    
    
    combProps = core.tdm.IsCombed(clip, metric=1, cthresh=6)
    clip = core.std.FrameEval(clip, functools.partial(conditionalDeint, orig=clip, deint=deint), combProps)
    
    clip.set_output()
    Last edited by poisondeathray; 31st Jan 2021 at 09:23.
  18. Originally Posted by jagabo View Post
    Also, have you processed this video (in post #1) already? It's already been badly deinterlaced with a "discard field and duplicate line" function -- a poor choice. In light of that you will need some different processing. But I don't want to spend time on it until I know whether that's really your starting point.
    Thank you very much for your in-depth explanation, I really needed it. I ended up using Hybrid anyway, since I didn't find the energy to mess with AviSynth.

    The end result was well deinterlaced, and the detail (and lack thereof) was very noticeable. So noticeable it made me think this was most probably some TV rip turned into a DVD by some cheapo studio. I used QTGMC with Bob, the Very Fast preset and the Lossless 1 config. I also used some MAA2 Anti-Aliasing at 6x. The whole process took around 40 minutes to complete.

    Anyway, I'm giving up on trying to improve this 720x288 "DVD".

    Thank you to everyone who helped and taught me things about deinterlacing!

    If you use any method of conditional deinterlacing/processing, the credits will be half sampled at 25p, not 50p, so it will be less smooth. The credits are true interlaced content (50 samples / second). The rest of the content is 25
    Well, how did they manage to make the credits 50i, if the entire video is marked as 25i? Or is the end marking wrong?
  19. Originally Posted by antoniu200 View Post

    If you use any method of conditional deinterlacing/processing, the credits will be half sampled at 25p, not 50p, so it will be less smooth. The credits are true interlaced content (50 samples / second). The rest of the content is 25
    Well, how did they manage to make the credits 50i, if the entire video is marked as 25i? Or is the end marking wrong?
    "25i" and "50i" mean the same thing , just different naming conventions. Both are 25 frames per second interlaced, or 50 fields per second

    A "PAL" DVD is always "25i" signal . But 25p sections are essentially 2:2 pulldown (both fields belong to the same time) .

    When you double rate deinterlace, 50 fields per second become 50 frames per second - that's what the animated credits are . But 25p sections become 25p with duplicates (50p with each frame repeated)
  20. Understood, thank you!


