VideoHelp Forum
  1. Originally Posted by Selur View Post
    Is there an Avisynth equivalent of vsdpir? Or is it/can it be included in Hybrid?
    Totally missed that. As for vsdpir (https://github.com/HolyWu/vs-dpir): I doubt there will be an Avisynth port, since it uses PyTorch.
    Got an addon for Hybrid (atm. ~9 GB download; mainly VSGAN models and the PyTorch dependency) which adds VSGAN, DPIR (https://github.com/HolyWu/vs-dpir), FFDNet (https://github.com/HolyWu/vs-ffdnet/tree/master/vsffdnet) and vs-RIFE (https://github.com/HolyWu/vs-rife). Send me a PM about it and I can send you a link.
    Thank you. Interesting. Is all this worth the effort for "natural" movie content or captured VHS video as well, or is the benefit mostly for anime or CGI content? Are there typical scenarios when to consider it?
  2. Not totally sure atm. (haven't had time to do enough testing)
    users currently on my ignore list: deadrats, Stears555
    Another new scenario for me. This one is progressive, but suffers from frame blending. So I assume the only thing needed here is SRestore, since I don't need to deinterlace or TFM it?
    SRestore 25 or 23.976 didn't yield good enough results. SRestore 23.976 fixes some problems in other sections of the video, but not here. Thanks!
    Image Attached Files
  4. That video has too much blending for SRestore to work well. If you use it to reduce the frame rate to 12 or 12.5 fps you may get rid of most of the blending in the character animation but the panning shots will be noticeably jerky/flickery.
    I'm pretty sure I prefer the blending over jerky/flickery video. I'll stick with SRestore 23.976; it fixed a lot of the other problems, and the rest is what it is. Thank you.
    Does anyone know what exactly in QTGMC causes the jerky movement (QTGMC.mp4)? As this is progressive, I'm using QTGMC for cleaning up the bad capture and de-noising. I found it more effective than calling santiag(), and QTGMC() + SMDegrain() has better results than just SMDegrain():

    SRestore(25)
    QTGMC(Preset="Faster", Inputtype=2)
    But no matter which preset I use, the end result is jerky compared to the exact same script without QTGMC (without_qtgmc.mp4). Also attaching the source.

    Thanks as always!
    Image Attached Files
  7. Your source was originally progressive but has been through an NTSC to PAL conversion with field blending. You can't call SRestore() while the frames are still interlaced. And since the video has field blending you need to double frame rate deinterlace before calling SRestore(). And the correct frame rate is 24 or 23.976, not 25.

    Code:
    AviSource("source.avi")
    AssumeTFF()
    QTGMC(preset="faster")
    SRestore(23.976)
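As a sanity check on those numbers, the frame-rate bookkeeping behind that chain can be sketched in Python (a toy illustration only; the conversion chain is the one described above):

```python
from fractions import Fraction

# Film is 24000/1001 fps (23.976); 3:2 pulldown turns 4 film frames into
# 10 fields, i.e. multiplies the rate by 5/2 to reach NTSC's 59.94 fields/s.
film = Fraction(24000, 1001)
ntsc_fields = film * Fraction(5, 2)
print(float(ntsc_fields))        # ~59.94

# The standards converter blends those down to PAL's 50 fields/s, captured
# as 25 fps interlaced. QTGMC double-rate deinterlacing restores 50 fps,
# from which SRestore recovers the original 23.976 fps frames.
capture_fps = 25
bobbed_fps = capture_fps * 2
print(bobbed_fps)                # 50
print(round(float(film), 3))     # 23.976
```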
    Thanks. That indeed solved the problem. But why is the correct frame rate 23.976? I'm playing the same part side by side, and the 23.976 version is more jerky compared to the 25.
  9. I'm seeing the opposite. At 25 fps the output contains a duplicate every 25 frames -- creating an obvious jerk every second. Note that SRestore() can take a while to lock into a pattern. So the first second or so of a clip may not be as smooth as the rest.
    It's possible I have something else in the script that causes that. I will try your minimal example above when I'm home.

    There's something I've been thinking about since yesterday. When I ripped and post-processed DVDs back in the day (they were 30 fps NTSC DVDs), the recommendation was to use TFM(). That would mostly leave a 3:2 pulldown duplicate pattern, and TDecimate() would clean it up.

    On all the VHS material (the cartoons, really) I worked with, I never used TFM(), and I never saw any pulldown. The result after bobbing was either one movement per frame, in which case I would just use QTGMC(FPSDivisor=2), or two movements per frame, meaning the content is progressive and there's nothing I need to do with it. In some cases, deinterlacing would leave blended frames, so I would use SRestore.

    Is TFM just a bobber? When should it be used on VHS-captured content (if it should be used at all)?
    Cartoons are never animated at 50 fps or 60 fps. On film they almost always have character animation at 12 fps and panning shots at 24 fps. When converted to NTSC video they usually go through 3:2 pulldown. When film is converted to PAL video it is often sped up to 25 fps. Cheap PAL productions often start with analog NTSC video tape that goes through a standards converter that resizes the frame and adds field blending to convert 59.94 fields per second to 50 fields per second. That's much cheaper than going back to the original film (which often doesn't exist anymore anyway) and digitizing it.
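
The 3:2 pulldown mentioned above can be illustrated with a minimal Python sketch (the helper name is made up for illustration): four film frames become ten fields, paired into five video frames, three clean and two combed.

```python
# Minimal illustration of 3:2 pulldown: each film frame contributes
# alternately 3 fields and 2 fields, so 4 frames -> 10 fields -> 5 video
# frames (and 23.976 fps film becomes 29.97 fps NTSC video).
def pulldown_32(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    # pair consecutive fields into interlaced video frames
    return list(zip(fields[0::2], fields[1::2]))

print(pulldown_32(["A", "B", "C", "D"]))
# [('A', 'A'), ('A', 'B'), ('B', 'C'), ('C', 'C'), ('D', 'D')]
# The 2nd and 3rd frames mix two film frames: those are the combed
# frames that a field matcher like TFM undoes.
```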

    TFM is a field matcher. It starts with one field of a frame, then looks at the previous field and next field to see which leads to the least combing. With film sources one of those fields will match with no comb artifacts. Sometimes there is still some combing because a field is orphaned (or a bad capture, or some other problem). If it finds residual combing it deinterlaces the areas with comb artifacts (you can disable this if you want with the pp option).

    You can interleave two calls to TFM() to bob a video. In its simplest form it looks like this:

    Code:
    Interleave(TFM(field=0), TFM(field=1))
    Depending on whether the frames are TFF or BFF you may need to reverse the field values.

    But this is only useful in some special circumstances.
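What that Interleave() call does can be pictured with a minimal Python sketch (lists as stand-ins for clips; not real AviSynth semantics):

```python
# Interleave(TFM(field=0), TFM(field=1)) alternates frames from the two
# field-matched streams, doubling the frame count (e.g. 25 fps -> 50 fps).
def interleave(a, b):
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

bottom_matched = ["A0", "B0", "C0"]  # stand-in for TFM(field=0) frames
top_matched    = ["A1", "B1", "C1"]  # stand-in for TFM(field=1) frames
print(interleave(bottom_matched, top_matched))
# ['A0', 'A1', 'B0', 'B1', 'C0', 'C1']
```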
    Thank you for the professional answer, as always.

    So I'm trying to connect the dots here. The cartoons on the DVDs were indeed non-interlaced 30 fps, and you can easily see the 3:2 pulldown on the source file. The way we handled it back then was:

    TFM().TDecimate()
    The TDecimate part is clear: it takes the 3:2 pulldown and decimates the duplicate frames, so we end up with one movement per frame on panning shots.
    But if the original video was already 3:2, why do we call TFM() and then TDecimate(), and not TDecimate() directly?

    Now, the VHS captures are interlaced cartoons at 25 fps. After QTGMC deinterlacing they are at 50 fps, which, as you mention, is the wrong rate. The way I "solved" it is by using FPSDivisor=2 (or SRestore for blended frames). This again ends up with one movement per frame on panning shots.

    So am I right to assume I will not be using TFM() on interlaced content, and that TFM() is only used for non-interlaced content where there's pulldown?
  13. Source.avi in post #96 is interlaced PAL. After QTGMC().SRestore(25) it still has a duplicate frame every 25 frames. So you need to SRestore to 24 fps, not 25 fps.
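The arithmetic behind that: a 24 fps progressive source carried in a 25 fps stream has exactly one duplicate frame per second, which a decimation pass can drop. A minimal Python sketch with toy frame IDs (hypothetical helper name):

```python
# A 24 fps source stored at 25 fps carries one duplicate frame per second;
# dropping consecutive duplicates restores the 24 unique frames.
def drop_consecutive_duplicates(frames):
    out = [frames[0]]
    for f in frames[1:]:
        if f != out[-1]:
            out.append(f)
    return out

second = list(range(24))        # 24 unique source frames...
second.insert(12, second[12])   # ...plus one duplicate = 25 frames/second
print(len(second))                               # 25
print(len(drop_consecutive_duplicates(second)))  # 24
```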
    So what would TFM().TDecimate(cycle=25, cycler=1) give?
    So would this:

    QTGMC(preset="faster")
    SRestore(23.976)
    be exactly like this:

    QTGMC(preset="faster")
    TFM().TDecimate(cycle=25, cycler=1)
    ?
  15. QTGMC + Denoise + sRestore to 12 + frame interpolation to 24 -> not smooth

    Cu Selur
    Image Attached Files
    Which denoiser are you using here (and with which settings), and which frame interpolation did you use? (i.e., can you please post your script?)
    I had a second read through this whole thread (now with multiple videos under my belt) and tried to summarize everything, mostly for my future self and other readers, and to confirm with you that I understand everything correctly. Note that everything below is specifically for cartoons.

    1. Find a panning shot and walk through it frame by frame. If every 2nd frame has combing, it's interlaced video. Another way to verify is to run Yadif(1) (adjust TFF/BFF accordingly); the result should be one movement per frame. If you see blended frames after Yadif, the video is frame blended.
    2. If every frame has combing, it's phase-shifted.
    3. If you see no combing at all, it's progressive.

    Now, how to handle each scenario:
    1. If the video is interlaced, deinterlace it with QTGMC and bring it back to either 25 or 24 fps using SRestore. Check which one doesn't leave any jagged panning.
    2. If the video is interlaced and frame blended, do the same: use SRestore 25 or 23.976.
    3. If it's phase-shifted, TFM() it.
    4. If the video is progressive, check that there is movement on every frame during a panning shot. If that's not the case, check the pattern. 3:2, for example, means you need to run TDecimate(); if every 25th frame is duplicated, use TDecimate(cycle=25, cycler=1), etc. If you see blended frames, use SRestore as you would with interlaced content.
    I think that's about it.
    The only questions I'm still trying to find answers to are:
    1. Why, on the DVD cartoons we worked on, did we use TFM().TDecimate() instead of just TDecimate()? (Those cartoons were progressive NTSC, and I see no difference between TDecimate() and TFM().TDecimate().)
    2. When a cartoon is interlaced and has no blended frames, is SRestore the right way to bring it back down to 25/24, or should I use something else like AssumeFPS?

    Thanks!
    Which denoiser are you using here (and with which settings), and which frame interpolation did you use? (i.e., can you please post your script?)
    Didn't keep the script. I used QTGMC preset=Fast, DPIR as deblocker/denoiser (iirc with strength 30), sRestore defaults aside from frate, and RIFE with the anime model for the frame interpolation.

    So the script probably looked like:
    Code:
    # Imports
    import os
    import sys
    import ctypes
    # Loading Support Files
    Dllref = ctypes.windll.LoadLibrary("I:/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Import scripts folder
    scriptPath = 'I:/Hybrid/64bit/vsscripts'
    sys.path.insert(0, os.path.abspath(scriptPath))
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/FrameFilter/RIFE/RIFE.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/libtemporalmedian.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DenoiseFilter/NEO_FFT3DFilter/neo-fft3d.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/EEDI3.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/NNEDI3CL.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/temporalsoften.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/scenechange.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
    # Import scripts
    import G41Fun
    import adjust
    import havsfunc
    # source: 'C:\Users\Selur\Desktop\Source.avi'
    # current color space: YUV422P8, bit depth: 8, resolution: 720x576, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: top field first
    # Loading source using FFMS2
    clip = core.ffms2.Source(source="C:/Users/Selur/Desktop/Source.avi",cachefile="E:/Temp/avi_57f7b5c99d1dc57d9f89d294923ed1fa_853323747.ffindex",fpsnum=25,format=vs.YUV422P8,alpha=False)
    # making sure input color matrix is set as 470bg
    clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 25
    clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # setting field order to what QTGMC should assume (top field first)
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=2)
    # Deinterlacing using QTGMC
    clip = havsfunc.QTGMC(Input=clip, Preset="Fast", TFF=True, opencl=True) # new fps: 50
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    from vsdpir import DPIR
    # adjusting color space from YUV422P8 to RGBS for vsDPIRDeblock
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # deblocking using DPIRDeblock
    clip = DPIR(clip=clip, strength=30.000, device_index=0)
    # adjusting color space from RGBS to YUV444P16 for vssRestore
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P16, matrix_s="470bg", range_s="limited")
    # adjusting frame count and rate with sRestore
    clip = havsfunc.srestore(source=clip, frate=12.000, omode=6, speed=9, thresh=16, mode=2)
    # Color Adjustment
    clip = adjust.Tweak(clip=clip, hue=0.00, sat=1.30, cont=1.00, coring=True)
    clip = G41Fun.SpotLess(clip=clip)
    # cropping the video to 688x564
    clip = core.std.CropRel(clip=clip, left=16, right=16, top=6, bottom=6)
    # adjusting color space from YUV444P16 to RGBS for vsRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # adjusting frame count&rate with RIFE
    clip = core.rife.RIFE(clip, model=2) # new fps: 24
    # adjusting output color from: RGBS to YUV420P10 for x265Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg", range_s="limited")
    # set output frame rate to 24.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24, fpsden=1)
    # Output
    clip.set_output()
    This was just a quick test to see how much frame interpolation would help or not; I did not fine-tune any of the filters.

    Cu Selur
  19. Originally Posted by Okiba View Post
    So what would TFM().TDecimate(cycle=25, cycler=1) give?
    So would this:

    QTGMC(preset="faster")
    SRestore(23.976)
    be exactly like this:

    QTGMC(preset="faster")
    TFM().TDecimate(cycle=25, cycler=1)
    ?
    Definitely not the same, as you can easily verify with your "source.avi" from post #96.
    The first gives 23.976 fps, the second gives 48 fps, which doesn't make sense.
    Thanks Selur! Might be a good introduction to VapourSynth.

    Definitely not the same, as you can easily verify with your "source.avi" from post #96
    Yeah, sorry. I forgot to mention that post #107 came after some research I did, including on the original source.avi file.
  21. Originally Posted by Okiba View Post
    So what would TFM().TDecimate(cycle=25, cycler=1) give?
    So would this:

    QTGMC(preset="faster")
    SRestore(23.976)
    be exactly like this:

    QTGMC(preset="faster")
    TFM().TDecimate(cycle=25, cycler=1)
    ?
    No. First of all, TDecimate(Cycle=25, CycleR=1) would reduce the frame rate from 50 fps to 48 fps. You would need TDecimate(Cycle=25, CycleR=13) to reduce it to 24 fps. Secondly, SRestore() preferentially removes blended frames, whereas TDecimate() preferentially removes duplicate frames. The latter will retain many more blended frames.
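The Cycle/CycleR arithmetic can be checked with a one-liner: TDecimate(Cycle=N, CycleR=R) keeps N - R frames out of every N, so the output rate is input fps * (N - R) / N. A quick Python check of the numbers above:

```python
# TDecimate(Cycle=N, CycleR=R) keeps N - R frames out of every N,
# so output fps = input fps * (N - R) / N.
def tdecimate_fps(input_fps, cycle, cycle_r):
    return input_fps * (cycle - cycle_r) / cycle

print(tdecimate_fps(50, 25, 1))   # 48.0, not a useful rate here
print(tdecimate_fps(50, 25, 13))  # 24.0, what you actually want
```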
    Thanks again everyone! So I assume that if no one corrects the summary I posted above, it's correct.

    Secondly, SRestore() preferentially removes blended frames, whereas TDecimate() preferentially removes duplicate frames. The latter will retain many more blended frames.
    So if there are no blended frames after deinterlacing, is there no difference between using SRestore(25) and AssumeFPS(25)?
  23. Originally Posted by Okiba View Post
    Thanks again everyone! So I assume that if no one corrects the summary I posted above, it's correct.
    There was so much wrong with it that I didn't have the energy to correct it all.

    Originally Posted by Okiba View Post
    Secondly, SRestore() preferentially removes blended frames, whereas TDecimate() preferentially removes duplicate frames. The latter will retain many more blended frames.
    So if there are no blended frames after deinterlacing, is there no difference between using SRestore(25) and AssumeFPS(25)?
    That's not correct.
  24. There was so much wrong with it that I didn't have the energy to correct it all.
    Hehe, OK. Fair enough

    Thanks everyone!
  25. Originally Posted by jagabo View Post
    Originally Posted by Okiba View Post
    Thanks again everyone! So I assume that if no one corrects the summary I posted above, it's correct.
    There was so much wrong with it that I didn't have the energy to correct it all.
    LOL, same here.
    @OP, never mind. It's better trying to understand the concepts rather than trying to establish sets of rules which may fail with the next video source.
    Last edited by Sharc; 15th Aug 2021 at 11:26.
  26. Originally Posted by Sharc View Post
    It's better trying to understand the concepts rather than trying to establish sets of rules which may fail with the next video source
    I agree. The OP doesn't seem to have grasped the fundamentals.
  27. I'll start with point #1

    Originally Posted by Okiba View Post
    1. Find a panning shot and walk through it frame by frame. If every 2nd frame has combing, it's interlaced video.
    If you see progressive frames like that during motion, the video likely isn't fully interlaced; it's more likely telecined from a progressive source. But sometimes with analog caps a field is missed during capture and the previous field is used again. Those frames will appear progressive because both fields are the same. But that wouldn't normally be in a regular pattern like you describe here.

    Originally Posted by Okiba View Post
    Another way to verify is to run Yadif(1) (adjust TFF/BFF accordingly); the result should be one movement per frame.
    Yes. That indicates interlaced video.

    Originally Posted by Okiba View Post
    If you see blended frames after Yadif, the video is frame blended.
    It could be frame blended or field blended. If you see pairs of identical (deinterlacing artifacts aside) blended frames, it's frame blended. If you see some frames with blending and others without, it's field blended.
    Last edited by jagabo; 17th Aug 2021 at 18:22.
    As a side note: frames might also just appear progressive if there is no motion, or mainly vertical motion. So make sure to look at horizontal movement.
    I'll start with point #1
    As a side note: frames might also just appear progressive if there is no motion, or mainly vertical motion. So make sure to look at horizontal movement.
    Thank you for clearing that up for me.

    It's better trying to understand the concepts rather than trying to establish sets of rules which may fail with the next video source
    I agree. The OP doesn't seem to have grasped the fundamentals.
    Video is hard.

    I was but 8 years old when people moved from CRTs to LCDs, and until just a year ago, when my father asked me to digitize the family camcorder footage, I knew nothing of video. I had to learn which VHS hardware I should and shouldn't use, set up VirtualDub with the proper settings, figure out which video format to use and how to make files smaller, how to properly archive videos, what AviSynth is, what QTGMC means, and the list just goes on and on. Every step I take forward, I also have to dig downwards as more information unfolds. For non-professionals, reading the wiki page on Telecine is complex. There's a lot the page expects you to know, and you need to keep reading other sources. On top of that, there isn't much information about these things online. You won't find a YouTube channel, or even a blog, explaining when to use TFM(). I assume that's because analog technology is disappearing from the world. I also suspect there are not a lot of people left who really know this technology these days. So in this sense, this forum is a gold mine.

    So it's left to the casual user to learn by reverse engineering, and that's what I did up until now: posting a video, getting answers, drilling into those answers, asking more questions, trying things. And that sometimes leads to errors. You're right to mention I'm not familiar with the concepts. Being familiar with the concepts requires a LOT of investment. I am getting there, but as a hobbyist it's happening slowly. So the easiest way, at least for me, to keep pushing forward was to extract the logic from examples.

    That being said, I completely understand the lack of energy to walk me through everything and correct my errors, especially as I'm asking a lot of questions. And I understand that if I want to get better at this, I have to invest more time in the fundamentals. Anyhow, I appreciate any person helping another person over the internet, and I thank you nevertheless.


