VideoHelp Forum
  1. Member (Malaysia, joined Apr 2018)
    Hi there, recently I started digitizing my parents' marriage tapes, which were recorded on something else before being transferred to VHS. I paid a local store to do the job, but I think they did a poor job, though admittedly the source was in poor condition as well. At least they were kind enough to let me keep the original interlaced MPEG-2.
    I've uploaded their footage to YouTube for reference: https://www.youtube.com/watch?v=K7qzI5QGwWo

    I was thinking of using QTGMC to deinterlace, but then thought I might as well touch up the video where I can. To start off, does anyone know what kind of artifacts are present in this sample? There are lots of jagged lines and color glitches. I hope someone can point me in the right direction.
    [Attached images and files]
  2. 1. Deinterlacing is just going to add to all your problems. Forget about that for now. It really is not needed for anything, and it certainly is not going to do anything to fix all the problems you have. In fact, because of #2 below, deinterlacing will just make things worse.

    2. The Sample.mp4 file is full of duplicate frames. It also has aspect ratio problems (the black bars at the top and bottom).

    3. Do you know anything at all about what equipment was used to originally take the video? Was it some sort of professional tape, like Beta SP (not to be confused with the consumer Sony Beta format, which was a rival to VHS)? You say it was "recorded on something else before being transferred to VHS." What was it recorded on?

    4. Do you know anything about the VHS tapes that you had the local store transfer? Were they, in fact, PAL format, or were they something else? I am trying to figure out why you are getting so many duplicate frames.

    5. Is the Sample.MP4 file something that you cut directly from what the transfer store gave you, or did YOU re-encode their video? If you did, then that will further degrade it and make it difficult for people in this forum to provide correct answers. You always need to do a lossless cut from the original video so that we know what you're working with.

    Neglecting all the problems already mentioned, the video is filled with what look like tracking and time base errors. Both can only be corrected by doing a better transfer. See if you can find another place to have the tape re-transferred.
  3. Member (Malaysia)
    1. I only want to deinterlace so that I can upload it to YouTube for my overseas relatives. I would also upscale it so it won't be totally butchered by YouTube's low-bitrate compression.

    3. Sadly no, my parents are tech-illiterate, and the person filming was my uncle, who died long ago from cancer. Though I presume it was probably something cheap, as my mom said it looked pretty bad even back then.

    4. It has to be PAL; I live in South-East Asia.
    The shop owner told me the duplicate frames were due to the condition of my tapes. They were very moldy, and I remember playing with the tapes when I was still a kid, so they were in very bad condition.

    5. I exported the sample from Sony Vegas, my bad. Would Mpg2Cut2 be suitable for lossless cuts?

    I'm afraid VHS recovery is not easy to find here. This is the only store I can find that still does it, most others have already closed down. I would have to travel to another state for another store.
  4. Member (Malaysia)
    Here's a sample I cut using Mpg2Cut2.
    [Attached file]
  5. Member (San Francisco, California)
    You may get a better transfer if you send it away to an expert. But maybe not, as the tape could be creased or otherwise damaged physically.
  6. Member (Malaysia)
    Working with what I have, what would I need to do to restore it to a better state?
  7. I use Vegas every day, and have for over fifteen years. You need to match your rendering settings to the original clip size and aspect ratio. That is why you had those black bars. When you see black bars that are not in the original, you have done something wrong, and you are throwing away precious resolution, something you don't have to spare when dealing with VHS video.

    This sample looks much, much better, so quite a few of the problems in that original sample came from what you did in Vegas.

    Except for the duplicate frames and the obvious tracking problem, this looks like pretty decent VHS footage. You can adjust the gamma (contrast and histogram end points) in Vegas using Levels and/or Color Curves. This will help "punch it up" a little by providing a little more contrast. Don't do too much, or you'll lose detail. Use the Vegas Videoscope, set to the Waveform display, to help you get the correct contrast levels.
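    If you end up doing the levels work in AviSynth rather than Vegas, a rough equivalent looks like the following. This is only a sketch; the actual end-point numbers depend entirely on your clip and should be tuned against a waveform monitor:

    Code:
    # hedged sketch: expand a washed-out VHS capture toward normal video range
    # 32/235 input points here are placeholders -- read them off a waveform display
    Levels(32, 1.0, 235, 16, 235, coring=false)
    # or nudge gamma and contrast directly:
    # ColorYUV(gamma_y=50, cont_y=20)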

    You can get rid of the single-frame duplicates by using the FilldropsI() AVISynth function I created years ago. You'll have to use AVISynth, and if you don't yet know how to do that, it is beyond what I can do in a single forum post to teach you. Here is that function, however, in case you do know AVISynth or want to try:

    Code:
    function filldropsI (clip c)
    {
      even = c.SeparateFields().SelectEven()
      super_even=MSuper(even,pel=2)
      vfe=manalyse(super_even,truemotion=true,isb=false,delta=1)
      vbe=manalyse(super_even,truemotion=true,isb=true,delta=1)
      filldrops_e = mflowinter(even,super_even,vbe,vfe,time=50)
    
      odd  = c.SeparateFields().SelectOdd()
      super_odd=MSuper(odd,pel=2)
      vfo=manalyse(super_odd,truemotion=true,isb=false,delta=1)
      vbo=manalyse(super_odd,truemotion=true,isb=true,delta=1)
      filldrops_o = mflowinter(odd,super_odd,vbo,vfo,time=50)
    
      evenfixed = ConditionalFilter(even, filldrops_e, even, "YDifferenceFromPrevious()", "lessthan", "0.1")
      oddfixed  = ConditionalFilter(odd,  filldrops_o, odd,  "YDifferenceFromPrevious()", "lessthan", "0.1")
    
      Interleave(evenfixed,oddfixed)
      Weave()
    }
    This works extremely well and, in most cases, you won't be able to detect the synthesized frame that is inserted for the second duplicate.
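    For reference, a minimal way to apply the function above (assuming MVTools2 is loaded and your source is a PAL MPEG-2 capture indexed with DGIndex) would be:

    Code:
    Mpeg2Source("Clip.d2v")
    AssumeTFF()        # VHS captures are normally top field first
    filldropsI(last)   # replace single duplicate frames with motion-interpolated ones

    Note that filldropsI() calls SeparateFields() internally, so feed it the still-interlaced clip, not separated fields.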

    There is, of course, absolutely nothing you can do about the tearing along the top, unless the camera is on a tripod and not moving. In that case you can create a duplicate track in Vegas; on the lower of the two tracks find a frame where the top of the frame is OK (no tearing), and repeat that frame over and over again; and then use the mask function (in the pan/crop box in Vegas) on the video in the top track to let the bottom track show through. As long as there is not too much motion happening at the top of the frame (which is pretty typical of a lot of video) this trick will work.
    Here is a link to a version of your clip with the single-frame dups removed (nothing can be done about the 1/2 second freeze). I also changed the gamma to make it a little punchier and did very light denoising with MDegrain2. A little sharpening might bring out a little more apparent (not real) detail.

    Slightly Improved Version

    I did look at the video by separating it out into individual fields, and some of the glitches are only in one field. It might be possible to bob the video; detect glitches that only happen in one field, but not the other; use filldrops (the progressive version) to create a new field from the surrounding ones; and then put everything back to interlaced. That would not be easy, however, and won't help with the tracking noise at the top of the frame.
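    The bob/repair/re-interlace idea described above might be sketched like this. It is untested, and "filldrops" here stands for the progressive version of the function, which you would have to supply along with the per-field glitch detection:

    Code:
    # untested sketch of the bob -> repair -> re-interlace round trip
    AssumeTFF()
    Bob()                     # each field becomes a full progressive frame
    filldrops()               # hypothetical progressive repair of glitched frames
    AssumeTFF()
    SeparateFields()
    SelectEvery(4, 0, 3)      # keep the field pairs that rebuild the original frames
    Weave()                   # back to interlaced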

    The way to deal with the tracking is to get your own VHS desk and switch it to manual tracking. Take the sections which have bad glitches and then "tune" the tracking control until the top of the frame (where the problems happen) looks OK. Ignore any new noise that starts to show up at the bottom of the frame. Capture that, even if the rest of the frame looks bad. Then, line up the original capture and the new capture and use the masking feature in Vegas to combine the top part of your new capture with the bottom part of the old capture.
  9. Some sharpening too:

    Code:
    Mpeg2Source("Clip.d2v", CPU2="ooooxx", Info=3) 
    
    ColorYUV(gain_y=20, off_y=-4, gamma_y=50, cont_u=20, cont_v=20)
    
    SeparateFields()
    even = SelectEven().RemoveDirtMC(80)
    odd = SelectOdd().RemoveDirtMC(80)
    Interleave(even, odd)
    Weave()
    QTGMC(preset="fast")
    Stab()
    
    ReplaceFramesMC(308,2)
    ReplaceFramesMC(502,4)
    ReplaceFramesMC(520,4)
    ReplaceFramesMC(526,2)
    ReplaceFramesMC(544,15)
    
    Spline36Resize(432, 320)
    TemporalDegrain(SAD1=200, SAD2=150, sigma=8)
    MergeChroma(aWarpSharp(depth=20))
    ChromaShift(l=-4)
    Sharpen(0.3)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=768, fheight=576)
    aWarpSharp(depth=5)
    Sharpen(0.3)
    [Attached file]
    Nice job on the color, jagabo. I did mine on my uncalibrated laptop and, looking at it now on my main, calibrated computer, I see I got the colors too saturated. I also like the slight sharpening.
  11. Member (Malaysia)
    I knew that eventually I'd have to deal with AviSynth; honestly, I'm a little afraid. How would I use filldrops with QTGMC? And forgive my lack of knowledge, but in the script above by jagabo, why do I need to downscale the video only to upscale it again? If possible, I'd also like to upscale to 1080p just for uploading to YouTube. I'd make another version without upscaling for personal use.
  12. Member (Malaysia)
    Also, if I'm correct, QTGMC uses bobbing to interpolate frames. Wouldn't that conflict with filldrops?
  13. Originally Posted by HansLau View Post
    How would i use filldrops with qtgmc?
    filldropsI() starts with interlaced frames and outputs interlaced frames. So you just call QTGMC() after filldropsI(). I called QTGMC() first then manually specified which frames to replace with ReplaceFramesMC(). That lets you replace more than one missing frame (2 frames after QTGMC) but it doesn't work well on the 1/2 second freeze. And it's tedious if you have lots of video to work with.
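    So the order, as a sketch, would be:

    Code:
    Mpeg2Source("Clip.d2v")
    AssumeTFF()
    filldropsI(last)       # fix single-frame duplicates while still interlaced
    QTGMC(preset="fast")   # then deinterlace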

    Originally Posted by HansLau View Post
    why do I need to downscale it only to upscale it again?
    With blurry sources Sharpen() works better if you downscale the video first, then upscale with nnedi3, aWarpSharp, and Sharpen().


    Originally Posted by HansLau View Post
    If possible I'd also like to upscale it to 1080p just for uploading to YouTube. I'd make another one without upscaling for personal use.
    nnedi3 can upscale to larger sizes. You'll also want to convert to rec.709 colors if you go with HD resolutions.

    Code:
    ColorMatrix(mode="rec.601->rec.709")
    Sharpen(0.3)
    nnedi3_rpow2(4, cshift="Spline36Resize", fwidth=1440, fheight=1080)
    You'll probably want to adjust the Sharpen() and aWarpSharp() calls too.

    Sometimes a stepwise upscale will work better:

    Code:
    Sharpen(0.3)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=768, fheight=576)
    aWarpSharp(depth=5)
    Sharpen(0.4)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)
    aWarpSharp(depth=5)
    Sharpen(0.5)
    It's easy to overdo the sharpening -- creating halos and increasing noise.
    Image Attached Files
    Last edited by jagabo; 6th Apr 2018 at 18:44.
  14. Member (Malaysia)
    I'm still learning the basics of AviSynth scripting, so I haven't tried running the scripts above. The entire footage is handheld, so I'm not sure I can find a stable section to overlay in Vegas. Re-recording it is something I might do in the future. But in the later sections of the video there are many duplicate frames that sometimes last for more than a second. Is there anything I can do to alleviate this?
    [Attached file]
  15. Magic does not exist in the real world.


    Motion estimation, which is the technology at the heart of "filldropsI" and the other methods presented here, tracks groups of pixels from one frame to the next and, when you want to create a frame in between existing frames, either for repair, slow motion, or frame rate conversion, it estimates where each group of pixels would be located at some intermediate point in time. The shorter the duration of time between the two frames, the better this estimation works. It is usually almost perfect when there is only 1/60 of a second between frames, but by the time you get to 1/24 of a second, the resulting frames can start to show artifacts.
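    In MVTools2 terms, the estimation described above is what MAnalyse and MFlowInter/MFlowFps do. A bare-bones sketch of motion-interpolated frame rate doubling (assuming a 25 fps progressive clip) looks like:

    Code:
    # sketch only: double 25 fps to 50 fps by synthesizing in-between frames
    super = MSuper(pel=2)
    bv = MAnalyse(super, isb=true)    # backward motion vectors
    fv = MAnalyse(super, isb=false)   # forward motion vectors
    MFlowFps(super, bv, fv, num=50, den=1)

    The artifacts johnmeyer describes show up exactly where these estimated vectors are wrong, such as on limbs moving against the main motion.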

    Some types of scenes can work almost perfectly: the movement from panning a camera horizontally across a static (non-moving) landscape usually produces perfect results. On the other end of the spectrum, a person walking in front of a camera often produces horrendous artifacts because arms and legs move in opposition to the main motion.

    Repairing a dozen or more consecutive frames cannot be done -- even if you used motion tracking software, which is similar but requires that YOU define the objects to be tracked -- because there is no way to accurately predict where everything is going to go.

    There are some tricks that can work, however, if you are willing to take the time. If you use Vegas, it has "velocity envelopes." You can use these to subtly and gradually slow down the motion for 1-3 seconds prior to the extended freeze. Then, after the freeze, you do the reverse, speeding the motion back up again. You do this to create enough additional frames so that you can cut out all the freeze frames and replace them with the slow motion. You also add enough additional frames beyond what is needed to replace the frozen frames so you have enough to create a cross fade. This avoids the "jump cut" that your freeze-frame problem has created.

    This will work, and it may look better than what you have now. I have written dozens and dozens of scripts within Vegas (it has its own scripting language), so it might be possible to automate this process.
  16. I just looked at the sample you uploaded. Unlike the previous sample, this one has massive numbers of drops so that you don't have even 1/2 second of continuous video before another extended drop happens. I have no idea how to fix this, and I suspect it cannot be fixed.

    If you still have the tapes, the ONLY solution is to have them re-transferred. Whoever did it used a technology that was unable to encode frames as they were losing sync. I do not have knowledge of all the capture cards and technologies that are used, but if you transfer using old-fashioned DV technology, it can capture pretty much anything. Using a VCR with a time base corrector (TBC) might help stabilize some of the timing signals that are causing the capture card to lose sync.

    Finally, some VCRs have a menu option to put out a blue screen whenever the video is beginning to tear or corrupt or go to snow. You want to turn that off. As bad as video glitches can be, with AVISynth, you can often do something with corrupted video frames, as long as there is something there. Once it is 100% gone, you end up with what you have.

    So, the "solution" is to get a new transfer.
    Last edited by johnmeyer; 7th Apr 2018 at 06:51. Reason: typo
    Yes, what you really need to do is capture the video again. Those duplicate frames are not on the tape; they happen during capture. When the capture device loses sync it repeats the last frame -- preferable to just returning garbage. Your best bet is to use a different VCR and an old Panasonic ES10 or ES15 DVD recorder as a passthrough device. Those have a built-in line time base corrector that may be able to clean up the sync.
  18. Member (Malaysia)
    I don't have funds to recapture now, so I'll have to live with it.
    So I've been reading up on AviSynth and my head's still spinning. I cobbled together a working script, but it's really slow and looks... not horrible? I think I denoised too much, though; I removed a lot of apparent detail, and faces look flat.
    I was able to get StackHorizontal working but then it stopped (edit: got it working again).
    I couldn't get sound working at first (edit: mixed it in MeGUI).
    Tbh I don't think some of the filters are even applying...

    Can you please help point out where I went wrong?
    Code:
    function filldropsI (clip c)
    {
    even = c.SelectEven().DeVCR(25).RemoveDirtMC(70)
    super_even=MSuper(even,pel=2)
    vfe=manalyse(super_even,truemotion=true,isb=false, delta=1)
    vbe=manalyse(super_even,truemotion=true,isb=true,delta=1)
    filldrops_e = mflowinter(even,super_even,vbe,vfe,time=50)

    odd = c.SelectOdd().DeVCR(25).RemoveDirtMC(70)
    super_odd=MSuper(odd,pel=2)
    vfo=manalyse(super_odd,truemotion=true,isb=false,delta=1)
    vbo=manalyse(super_odd,truemotion=true,isb=true,delta=1)
    filldrops_o = mflowinter(odd,super_odd,vbo,vfo,time=50)

    evenfixed = ConditionalFilter(even, filldrops_e, even, "YDifferenceFromPrevious()", "lessthan", "0.2")
    oddfixed = ConditionalFilter(odd, filldrops_o, odd, "YDifferenceFromPrevious()", "lessthan", "0.2")

    Interleave(evenfixed,oddfixed)
    Weave()
    }

    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\fft3dfilter.dll")
    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\agc.dll")
    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\colormatrix.dll")
    Import("C:\Program Files (x86)\AviSynth\plugins\RemoveDirtMC.avsi")

    SetMemoryMax(1024)
    SetMTMode(5)
    Mpeg2Source(d2v="clip.d2v", cpu2="ooooxx", info=3)
    audio=NicMPG123Source("Clip Tc0 L2 2ch 48 256 DELAY 144ms.mp2")
    AssumeFPS(50)
    SetMTMode(2,4)
    a = last

    ColorYUV(gain_y=20, off_y=-4, gamma_y=50, cont_u=20, cont_v=20)
    ChromaShift(C=-2,L=-4) # (RGB32, YUY2, YV12) align chroma over luma

    AssumeTFF().SeparateFields()
    filldropsI(last)
    setmtmode(5)
    HDRagc(coef_gain=1.3,corrector=0.9,protect=1) # (YUY2/YV12 progressive) High Dynamic Range Automatic Gain Control
    #HDRAGC(coef_gain=0,8 ,coef_sat=1 )
    SetMTMode(2,4)
    Cnr2("oxx",8,16,191,100,255,32,255,false)
    FixChromaBleedingMod()
    SmoothUV()
    Levels(33,1.0,255,0,240,coring=false,dither=true)

    QTGMC(preset="medium",border=true)
    StabMod()

    ColorMatrix(mode="rec.601->rec.709")
    TemporalDegrain(SAD1=200, SAD2=150, sigma=8)
    MergeChroma(aWarpSharp(depth=20))
    Sharpen(0.3)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=768, fheight=576)
    aWarpSharp(depth=5)
    Sharpen(0.4)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)
    aWarpSharp(depth=5)
    Sharpen(0.5) #LSFmod(edgemode=2)
    #AddGrainc(1.0,1.0)

    #StackHorizontal(a , b)
    SetMTMode(1)
    GetMTMode(false) > 0 ? distributor() : last
    I'm not seeking perfection, just anything to improve what I already have
    Last edited by HansLau; 11th Apr 2018 at 09:36.
  19. Member (Malaysia)
    It doesn't look much better; in fact, in some ways it's worse. What are the artifacts in the hands? Back to square one for me.
  20. Member (Malaysia)
    Upload
    [Attached image: Sampling2nd000506.bmp]
    [Attached file]
    You've glued a lot of different scripts together without considering overlapping functions. For example, ColorYUV() is adjusting levels and saturation, and so are HDRAGC() and Levels(). Simplify your script and deal with issues one at a time, adding to the script as you go, until you get what you want.
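    A minimal starting point in that spirit (a suggestion, not a prescription) would be just source, duplicate repair, and deinterlace, adding one levels or denoise filter at a time and comparing the result at each step:

    Code:
    Mpeg2Source("clip.d2v")
    AssumeTFF()
    filldropsI(last)        # johnmeyer's duplicate-frame repair
    QTGMC(preset="medium")
    # add ONE of ColorYUV / Levels / HDRAGC here, compare, then continue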
  22. Member (Malaysia)
    Got it, I'll get to adjusting that. Btw, would it be better to resize to 1350x1080 instead of 1440x1080? That would be upscaling x1.875 on both axes instead of x1.875 vertically and x2 horizontally.
    Last edited by HansLau; 12th Apr 2018 at 04:28.
  23. Member (Malaysia)
    Due to the duplicate frames, deinterlacing with QTGMC causes the video to hitch and move back and forth. I can't remove the duplicate frames, but can't I at least remove the stutter from deinterlacing?
    Your source has a display aspect ratio of 4:3. So it's upscaled to a 4:3 frame size, 1440x1080. To be more accurate, since you have an ITU 720x576 cap, the 4:3 image is contained in a ~704x576 portion of the frame (the capture device captures a little extra width in case the picture isn't exactly centered). So it would be more technically correct to crop those extra pixels away before upscaling.

    Code:
    Crop(8,0,-8,-0)
    nnedi3_rpow2(....)
    Using QTGMC on any of those captures is going to create a huge mess. QTGMC is a front end for various motion estimation filters (MVTools2), and when you feed it video that contains duplicates and also has big gaps, it will produce horrible artifacts.

    Your video has lots of problems, and deinterlacing is not the solution to any of them.
    Last edited by johnmeyer; 12th Apr 2018 at 11:19.
  26. Originally Posted by HansLau View Post
    Due to the duplicate frames, deinterlacing with QTGMC causes the video to hitch, move back and forth. I can't remove the duplicate frames but can't I remove the stutter from deinterlacing at least?
    I'm not seeing any back-and-forth movement with QTGMC().

    Code:
    Mpeg2Source("Clip.d2v", Info=3)
    QTGMC()
    Back-and-forth motion indicates the wrong field order. Your video is top field first (TFF). Mpeg2Source() should set the field order flag. If it doesn't, specify it yourself with AssumeTFF() right before calling QTGMC(). Some filters can change the field order (cropping an odd number of lines off the top of the frame, for example), so watch for that.
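    In other words, if the field order flag is missing, a sketch of the fix is:

    Code:
    Mpeg2Source("Clip.d2v", Info=3)
    AssumeTFF()   # force top field first right before deinterlacing
    QTGMC()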
  27. Member (Malaysia)
    @johnmeyer
    I suppose you're right; motion-interpolating it would only make it worse. I guess I'll stick to a regular deinterlace for YouTube. I'm keeping it interlaced for my own storage. I thought the only major problem with my source was the dropped frames?

    @jagabo
    It's pretty obvious in this clip. And I can't believe I didn't notice the border; even my mom noticed it and I didn't.
    Last edited by HansLau; 12th Apr 2018 at 11:57.
  28. Member (Malaysia)
    Stutter
    [Attached file]
  29. Add SelectEven() after QTGMC. That will reduce the frame rate to 25p, and you'll still have duplicate frames, but the back-and-forth movement will be gone. I'll write more later, when I have time.
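    That is, as a sketch:

    Code:
    Mpeg2Source("Clip.d2v")
    QTGMC(preset="fast")
    SelectEven()   # 50p -> 25p; removes the back-and-forth jitter, duplicates remain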
  30. Regarding the back-and-forth motion after QTGMC:

    Each frame of interlaced video contains two separate half-pictures (called fields) taken at two different times. The two images are intended to be seen separately and sequentially. So your 25 fps interlaced video is supposed to be displayed as 50 fields per second. So, with each frame indicated by two numbered fields in parentheses, a video like:

    Code:
    (0,1) (2,3) (4,5) (6,7)...
    the fields are displayed one at a time as:

    Code:
    0, 1, 2, 3, 4, 5, 6, 7...
    But your video has many duplicate frames from a problem during capture:

    Code:
    (0,1) (2,3) (2,3) (2,3) (2,3) (2,3) (4,5) (4,5) (4,5) (4,5) (4,5) (4,5) (6,7)...
    Frame (2,3) appears five times, frame (4,5) appears six times. So what you see at playback (or after QTGMC) is:

    Code:
    0, 1, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 6, 7
    All that alternating back and forth between 2 and 3, and 4 and 5, is responsible for the back-and-forth movement. At 50 frames per second it looks very jittery. And, of course, all those repeat frames appear instead of the frames that should have been there, making motion very jerky overall.

    Using SelectEven() after QTGMC() discards every other frame. So you're left with:

    Code:
    0, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 4, 6...
    That gets rid of the back-and-forth movement, but at the cost of less smooth motion in parts of the video that don't have duplicate frames. And, of course, all the duplicate frames are there instead of the frames that were lost during capture, so motion is still jerky.

    The only practical solution for this video is to re-capture the tape with better equipment, thereby avoiding the missing and duplicate frames.