VideoHelp Forum
  1. Hello everyone.

    I'm trying to recover, to the best of my ability, a video that comes from a very old and damaged VHS.

    I invested a lot in the capture hardware (with ordinary hardware I could barely make anything out in the video), and this is a piece of the video in very bad condition.
    I attach a sample clip of the video with my issue (CLIP2.avi). As you can see, one field in particular is messed up.

    My idea, then, was to deinterlace with QTGMC, doubling the frame rate, and to replace the frames that come from the bad field with motion-estimated interpolations of the good field. Then I re-interlace.
    This is the script I'm using:

    Code:
    LoadPlugin("cnr2.dll")
    
    avisource("CLIP2.avi")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    QTGMC(Preset="Placebo")
    
    even = SelectEven()
    odd = SelectOdd()
    
    evenr = even.DoubleFPS2().SelectOdd()   # keep only the motion-interpolated frames
    oddr = odd.DoubleFPS2().SelectOdd()
    
    function DoubleFPS2(clip source){
    super = MSuper(source, pel=2, hpad=0, vpad=0, rfilter=4)
    # chroma noise reduction (Cnr2) applied to the super clip before analysis
    backward_1 = MAnalyse(Cnr2(Cnr2(super)), chroma=true, isb=true, blksize=16, searchparam=3, plevel=0, search=3, badrange=(-24))
    forward_1 = MAnalyse(Cnr2(Cnr2(super)), chroma=true, isb=false, blksize=16, searchparam=3, plevel=0, search=3, badrange=(-24))
    # refine the vectors at progressively smaller block sizes
    # (the vectors clip must be passed positionally, before any named arguments)
    backward_2 = MRecalculate(super, backward_1, chroma=true, blksize=8, searchparam=1, search=3)
    forward_2 = MRecalculate(super, forward_1, chroma=true, blksize=8, searchparam=1, search=3)
    backward_3 = MRecalculate(super, backward_2, chroma=true, blksize=4, searchparam=0, search=3)
    forward_3 = MRecalculate(super, forward_2, chroma=true, blksize=4, searchparam=0, search=3)
    MBlockFPS(source, super, backward_3, forward_3, num=2*FramerateNumerator(source), den=FramerateDenominator(source), mode=0, blend=false)
    }
    
    
    
    # when a frame differs too much from its predecessor (average luma difference > 20),
    # substitute the motion-interpolated frame derived from the other stream
    evenmod = ConditionalFilter(even, oddr, even, "YDifferenceFromPrevious()", "greaterthan", "20")
    oddmod = ConditionalFilter(odd, evenr, odd, "YDifferenceFromPrevious()", "greaterthan", "20")
    
    a = Interleave(evenmod, oddmod)
    a
    Cnr2()
    Cnr2()
    ConvertToYUY2()
    
    # re-interlace: fold the 59.94p clip back into 29.97i
    SeparateFields()
    
    even = SelectEven().SelectEven()   # top fields of even frames
    odd = SelectOdd().SelectOdd()      # bottom fields of odd frames
    
    Interleave(even, odd)
    Weave()
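
    (If I'm not mistaken, those last re-interlacing lines are equivalent to the standard SelectEvery idiom, which may be easier to read. A sketch, assuming the top-field-first, double-rate progressive clip produced above:)

    Code:
    # Re-interlace a 59.94p TFF clip back to 29.97i.
    # SelectEvery(4, 0, 3) keeps the top field of each even frame and the
    # bottom field of each odd frame -- the same fields the
    # SelectEven/SelectOdd pairs pick out.
    AssumeTFF()
    SeparateFields()
    SelectEvery(4, 0, 3)
    Weave()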

    The result is quite decent (CLIP3.avi). The proof that there is an improvement is that the deinterlaced CLIP3.avi looks much better in every frame than the deinterlaced CLIP2.avi, in my subjective opinion.

    My problem is: if I re-interlace at the end, as I do (following the principle that if the source is interlaced then the final restored output should be interlaced too), I end up with an interlaced video that came from a deinterlaced one. And that's not good, right?

    Is there a better way to do this restoration?
    Is there perhaps a way to make the bad field a motion-estimated interpolation of the good fields, but without deinterlacing?
    Wouldn't that solution cause strange flickering, since the top field sits one pixel higher than the bottom field?

    Thank you for the suggestions!
    Image Attached Files
  2. Originally Posted by benzio View Post
    following the principle that if the source is interlaced then the final restored output video should be interlaced
    No. If you need interlaced video, re-interlace it. If not, don't.
  3. I think you are doing everything just fine, and if it looks good, that is the proof.

    I've done this exact same thing. Here is the script I created. I am NOT saying that this is better than your script. Instead I am merely offering it so you can see if you find anything that you can use.

    Code:
    #Replace Bad Field With Motion Estimated Field from Good Field.avs
    #John H. Meyer
    #
    
    source = AVISource("e:\fs.avi").assumetff()
    fields = SeparateFields(source)             
    
    even = SelectEven(fields) 
    odd  = SelectOdd(fields)  
    
    #Change the following to point to the good field
    good = odd
    #good = even
    
    # Double the height to create spatial interpolation
    double_height = Spline36Resize(good,good.width,good.height*2)  
    
    #Double the frame rate to create temporal interpolation
    estimated_clip = DoubleFPS2(double_height) 
    
    #Keep only the motion-interpolated fields
    replacement_fields = estimated_clip.selectodd().assumeframebased().separatefields().selectodd() 
    
    #Interleave the synthesized field that will replace the bad field with the good field
    Interleave(good,replacement_fields)
    final=Weave()
    
    #Return the fixed video
    return final
    
    
    #-------------------------------
    #Debugging functions
    
    #Before you start, enable this next line temporarily so you can see which field is the good field
    #stackvertical(even,odd)
    
    #When you are almost finished, enable this next line temporarily to check your work
    #return final.SeparateFields
    
    #------------------------------------
    function DoubleFPS2(clip source) {
       super = MSuper(source, pel=2)
       back = MAnalyse(super, chroma=false, isb=true,   blksize=16, overlap=4, searchparam=3, plevel=0, search=4)
       forw = MAnalyse(super, chroma=false, isb=false,  blksize=16, overlap=4, searchparam=3, plevel=0, search=4)
    
    #  MBlockFPS(source, super, back, forw, num=2*FramerateNumerator(source), den=FramerateDenominator(source), mode=0,thSCD2=130,thSCD1=400)
    
    #  Alternate motion estimation. 
      MFlowFPS(source, super, back, forw, num=2*FramerateNumerator(source), den=FramerateDenominator(source),thSCD2=130,thSCD1=400)
    
    }
    A much simpler method, if the bad field is damaged through most of the video:

    Code:
    AviSource("CLIP2.avi") 
    AssumeTFF()
    SeparateFields()
    SelectEven()
    nnedi3(dh=true)
    DoubleFPS2()
    Adding Stab() would reduce a lot of the vertical jitter.
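
    Something like this (a sketch -- it assumes the commonly shared Stab() helper function and its DePan/DePanEstimate plugin dependencies are installed):

    Code:
    AviSource("CLIP2.avi")
    AssumeTFF()
    SeparateFields()
    SelectEven()       # keep only the good field (half height, 29.97 fps)
    nnedi3(dh=true)    # interpolate back to full frame height
    Stab()             # damp the residual vertical jitter
    DoubleFPS2()       # motion-interpolate back to 59.94 fps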
  5. lordsmurf (Video Restorer):
    Lots of work to do, lots of ways to approach it.
    - halos
    - chroma offset
    - chroma noise

    Quickly playing around with it.
    Image Attached Files
  6. @lordsmurf

    I've read in a lot of threads that you assert that if the source is interlaced, you must respect the source, because deinterlacing algorithms get better year by year; if you deinterlace, you lock your video into a particular deinterlacing technique with no possibility of future improvement. And this makes sense.

    This is the reason why I would like to find a solution that does not involve deinterlacing.

    The clip you posted is deinterlaced, double frame rate.
    If you re-interlace a deinterlaced, double-frame-rate video made with QTGMC, can you say that the final interlaced video is better than the original interlaced one?
    Are you really discarding all the information coming from the deinterlacing interpolation and keeping only the information coming from QTGMC's noise-reduction filters?


    @jagabo
    Yes, one field is particularly messed up, but it also has good frames.
    I don't want to throw away all that precious information by discarding the whole bad field, as this does:
    Code:
    AviSource("CLIP2.avi") 
    AssumeTFF()
    SeparateFields()
    SelectEven()
    nnedi3(dh=true)
    DoubleFPS2()
  7. lordsmurf (Video Restorer):
    Sometimes deinterlace is a forced action of restoration.
    In general, yes, you should try to keep interlaced as interlaced. But sometimes you have no choice.

    Like I said, this project has some options in how the restoration is performed.
    I can see it going both ways.
    In my quick example, I chose deinterlaced.
  8. If you do choose deinterlaced, make sure to choose "60p" and not "30p". Otherwise you lose half your temporal resolution and the video will look quite different, lacking video's usual fluidity. You will even see some judder on horizontal camera pans.

    I think your decision to leave it interlaced is a good one.
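
    In QTGMC terms, that choice is just the FPSDivisor parameter (a quick sketch; the preset is arbitrary):

    Code:
    AviSource("CLIP2.avi")
    AssumeTFF()
    QTGMC(Preset="Slower", FPSDivisor=1)    # "60p": keeps all ~60 motion samples per second
    #QTGMC(Preset="Slower", FPSDivisor=2)   # "30p": halves temporal resolution; pans will judder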
  9. lordsmurf (Video Restorer):
    Originally Posted by johnmeyer View Post
    If you do choose deinterlaced, make sure to choose "60p" and not "30p". Otherwise you lose half your temporal resolution and the video will look quite different, lacking video's usual fluidity. You will even see some judder on horizontal camera pans.
    I think your decision to leave it interlaced is a good one.
    59.94i > 59.94p is interpolated, and the output is also non-compliant to most formats.
    Interpolation = fabricating what did not exist.
    The 59.94i was not a complete image, and does not necessarily contain 59.94 moments in time.
    The idea that you're "losing" temporals is not really accurate. You're losing half AFTER having made it doubled.
    Most of those newly created frames are just dupe frames.

    There's also a lot of oddities in motion going on.

    You're not going to see judder. That's a 23.976>29.97 concept, not 29.97~59.94.

    29.97p is arguably the same as 59.94i in terms of motion. Indeed, it's 59.94i that gives the "soap opera look" (a stupid nonsense name).

    As I said elsewhere, 59.94p is great for atypical uses (forensic recovery, archiving non-playable material, etc.), but terrible for sitting on the couch with a bowl of popcorn. The exception is some streaming platforms.
  10. Originally Posted by lordsmurf View Post
    59.94i > 59.94p is interpolated, and the output is also non-compliant to most formats. [...] The 59.94i was not a complete image, and does not necessarily contain 59.94 moments in time. [...] 29.97p is arguably the same as 59.94i in terms of motion. [...]
    First of all, you are correct that "60p" is usually 59.94 fps. However, "60p" is common nomenclature, so unless the 1000/1001 correction actually matters to the discussion, I just use the nominal numbers.

    However, my main reason for posting is that you are making a common mistake that will mislead the OP when you say "The 59.94i was not a complete image, and does not necessarily contain 59.94 moments in time." Actually, it most definitely DOES contain 59.94 moments in time, and that fact is incredibly important.

    The mistake many people make is to think of interlaced video as a frame when, in fact, when the video is played, there is never a complete frame on the screen at one time. Until we had the ability to freeze a frame with a VCR, and then later, in our NLEs, no one knew or cared about this distinction.

    Instead, as you obviously know, one field is displayed and then 1/60 of a second later (I'm rounding) the next field is displayed, and so on. As far as our brain is concerned, we see roughly 60 separate events every second. As a result -- and this is important -- 29.97p is absolutely, positively NOT the same as 59.94i in terms of motion. I don't want to be a jerk by being too strong with that statement, but I don't want the OP to be misled into doing the wrong thing.

    You can prove to yourself how different 29.97p (a.k.a. 30p) is from 59.94i (a.k.a. 30i) simply by taking some 29.97 interlaced video where you rapidly pan the camera horizontally. Change that to 29.97 progressive and you will find that you get judder, because you are only getting 30 events per second and not 60. The "persistence of vision" that makes your mind see a series of individual discrete images as motion doesn't fully kick in until well above 30 events per second. That is why the 60 events (fields) per second that you get with interlaced video works so incredibly well. It is one of the more spectacularly clever tricks ever invented, because it gives us the super-smooth "soap opera look" that we associate with video, but within a bandwidth that is economical.
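
    A minimal sketch of that test in AviSynth (QTGMC assumed installed; the filename is a placeholder):

    Code:
    src = AviSource("pan.avi").AssumeTFF()        # any clip with a fast horizontal pan
    p30 = src.QTGMC(Preset="Fast", FPSDivisor=2)  # 30 discrete events per second
    p60 = src.QTGMC(Preset="Fast", FPSDivisor=1)  # 60 discrete events per second
    return p30  # then switch to p60: the judder on the pan disappears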

    I do these frame rate conversions all the time, and I can tell you from experience that 30 frame per second progressive video behaves and looks a LOT like 24p film, and nothing like interlaced 30 fps video. That is why I made my post above: the OP would significantly degrade the look of his video if he deinterlaced to 30p instead of 60p.

    Finally, you are definitely correct that many devices will not play 60p material, although that is becoming more rare.
  11. johnmeyer is correct, of course. 30i video (I refuse to call it 60i -- oops I just did!) definitely has 60 different motions per second. You can see it in any medium speed panning shot. And surely benzio is aware of this since he's motion interpolating the good field to replace the bad field.
  12. aedipuss (aBigMeanie):
    my 2 cents.

    johnmeyer is only correct in that there are 60-ish motion events a second. but they are 60 motions that occur in only every other line of the video. only 240 lines of a 480-line-high video move every 60th of a second. and that's also why converting 30i to 60p is dumb. if you convert every field to a frame you are "making up" half the vertical resolution of every frame with garbage that doesn't exist in the original video.
  13. Originally Posted by aedipuss View Post
    but they are 60 motions that occurs in only every other line in the video. only 240 lines of a 480 height video move every 60th of a second. and that's also why converting 30i to 60p is dumb.
    Converting to 60p may or may not be dumb. Since I do it all the time, I don't agree. For unblending field-blended garbage it's an absolute necessity.

    But to say that only 240 of the 480 lines of video are moving at a time is misleading, at best. VHS tapes store video by fields. The CRT televisions for which VCRs were created only played fields. Although it's rare, even DVDs can store video as discrete fields. Therefore we can say that all frames of a tape come from different points in time, that all lines in those frames are from the same point in time, and that there are 60 of those frames every second. It's only when captured that those fields are combined into interlaced frames.
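
    You can see those 60 discrete pictures per second for yourself with AviSynth's built-in bobber (a sketch; it assumes the clip is top field first):

    Code:
    AviSource("CLIP2.avi")
    AssumeTFF()
    Bob()   # each field becomes a full-height frame: 59.94 fps, each from a distinct instant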

    Originally Posted by aedipuss View Post
    if you convert every field to a frame you are "making up" half the vertical resolution of every frame with garbage that doesn't exist in the original video.
    That's also misleading at best, plain wrong at worst.
  14. It is all about what your brain "sees," and that in turn is all about perception and persistence of vision. The key thing is that at any viewing distance, when simply watching TV, your eyes and brain most definitely do not perceive the temporal difference between adjacent fields (except, perhaps, in pathological video not normally seen in real life).

    By contrast, the difference between having 30 things happening per second and 60 things happening per second (whether those "things" are fields or frames) is easily observed and stunningly obvious. Not subtle.

    While I certainly wouldn't argue that 60p (59.94 progressive frames per second) and 60i (29.97 interlaced frames per second) look totally identical, I'll bet that if you showed 100 people side-by-side video that was shot with two identical cameras mounted next to each other, one set to take 60i and the other 60p, many would not notice the difference if shown on a modern LCD TV with decent deinterlacing (which most now have).

    60i is a pretty good fake.

    But, by contrast, I'd also argue that if you did the same test with one camera taking 24p, and the identical camera set to take 60p which was later decimated down to 30p (since I don't think there are any cameras that natively shoot 30 fps progressive), most people would easily detect the same artifacts on fast horizontal pans on BOTH videos. Put another way: 24p and 30p look very much the same, and if you don't like the artifacts created by that ancient film cadence, then for goodness sake, don't set QTGMC (or other deinterlacer) to give you 30 fps progressive.

    When you start with 30 (29.97) fps interlaced and want to end up with progressive video, you will have to create new stuff that didn't exist before, whether you end up with 30 fps progressive or 60 fps progressive. The latter gives you something with pretty much the same "feel" as the original interlaced video, whereas the former becomes something quite different. That is why I see no reason to ever create 30 fps progressive, except for the point Lordsmurf made when he noted that some playback equipment (like my 5-year-old Samsung LCD TV) won't play 60p.

    And, that brings up my final point, about what to deliver to clients (including yourself). When I deliver video on thumbdrives to clients, I always want to retain the original video feel and the only way to do that and still be certain that it plays on everyone's sets (I hate getting returns) is to deliver 29.97 interlaced (60i). When you think about it (and I've thought about it a lot), there really is no other alternative: if I deliver 60p, some people will return the thumbdrive because it won't play; if I deliver 30p, it will look like film, not video; but if I deliver 29.97 interlaced, it will play on everything and look like normal video.

    But interlaced video, you say, yucchhh! Fine and dandy, but everyone watches interlaced video all the time (so-called 1080i) because most broadcasters in North America send out either 720p or 1080i. No one, outside of forums like this, ever complains.

    Therefore what I deliver is exactly what they are already watching.

    This is one more reason to leave it interlaced (29.97 interlaced, or 60i -- or, to make jagabo happy -- 30i).
  15. lordsmurf (Video Restorer):
    Originally Posted by manono View Post
    It's only when captured that those fields are combined into interlaced frames.
    That's my point. We don't have a choice in how the data is acquired. It's interlaced, the end. We get a woven mess of half-height frames. Detail is missing in every field, because it's not a frame. And sometimes that missing detail obscures motion.

    @all /end direct response to manono
    Analogy time...

    Imagine you're standing before a grand vista. Beautiful view. Grand Tetons, Grand Canyon, somewhere in Yellowstone, etc.
    Now close one eye.
    That's 59.94p raw half-height frames from fields of VHS. You're only getting part of the sequential image. Faster, but only partial.
    Interlace the 59.94 (59.94i/29.97fps) to get the full picture/frame .... aka open both eyes.
    To keep both eyes open at 59.94p requires fabricated/interpolated data be created, every frame altered to restore viewability.

    Once upon a time, we just did a mix of discarding, bob'ing, antialiasing at best, and hoped for the best. These days, we have complex scripting that mimics the kind of hardware that Faroudja had (arguably even better). But it's not a mere act of separating fields. 59.94 progressive frames are not hiding behind 29.97 interlaced frames. We have to make that happen with interpolation.

    I'm all for keeping the 59.94p -- I sometimes do it too -- but I know it takes 200% space, is not compliant with most formats, and may not work with many devices.

    The idea that you're discarding half the data is not entirely correct, because it's only being discarded AFTER the interpolation. You made new data, then tossed half of it. So it's not really the same. Drop-frame tosses data; these complex algorithms do not. I'd argue that the extra frames are a byproduct, not necessarily the product itself. It's sort of like the concepts of bit depth and resolution, where it's best to have more to discard, and the discard is a known event at the end/output of the overall workflow. That we can choose to keep this byproduct of QTGMC-style methods is a nice added feature, as it's not available with most other deinterlacers.

    Anyway, as also stated here -- leave it interlaced when you can. Best advice.

    Remember that I said sometimes restoring video forces your hand on interlace. Part of that overall process can result in some lost motion via filtering (again, a known end result), so the notion of 59.94 is reduced yet again. Something else to consider.
    The relationship between frame rate and file size is not linear when encoding to the same visual quality with a delivery codec. Specifically, 60 fps does NOT take twice the file size of 30 fps unless you are encoding each frame independently, i.e., you are not using a codec which encodes differences between frames. Since all delivery codecs (i.e., codecs used to create video distributed on disc, over the air, satellite, or cable) use inter-frame coding with groups of pictures (GOPs), those codecs don't increase file sizes linearly.

    The reason is that when you double the number of frames per second, you also cut in half the difference between those frames. So, yes there are more frames to encode, but since for fourteen out of every fifteen frames (if the GOP is 15) you are only encoding the differences, and since with twice the frame rate, those differences are half as large, the codec doesn't have to use as many bits to represent those smaller differences. There is an increase in file size, but when going from 30 to 60 fps, the increase is surprisingly small.

    This becomes more true as the frame rate gets faster. For those who studied limit theory in math class, it is one of those deals where, as the number of frames per second approaches infinity, the difference between frames approaches zero. While at first that seems like an unsolvable puzzle, as you know if you studied limit theory, in most cases it is actually quite easy to come up with a very specific, deterministic answer to what happens. I have not seen this analysis for video encoding using inter-frame codecs, but it would be an interesting read.
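
    As a back-of-the-envelope illustration (assuming, simplistically, that a predicted frame's cost scales linearly with the difference it encodes): with a GOP of 15, half a second of 30 fps video costs roughly I + 14*D (one intra frame plus 14 difference frames), while the same half second at 60 fps costs roughly 2*I + 28*(D/2) = 2*I + 14*D, because there are twice as many difference frames but each difference is half as large. The increase is about one extra intra frame per GOP interval, not a doubling. Real codecs are far more complicated, but that is the shape of the argument.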
  17. lordsmurf (Video Restorer):
    Yes, if you include GOP encoding, not 200%.
    Even lossless may not be 200%.

    I should have said "up to 200%" to be more accurate.
  18. Originally Posted by lordsmurf View Post
    Yes, if you include GOP encoding, not 200%.
    Even lossless may not be 200%.

    I should have said "up to 200%" to be more accurate.
    Actually, 200% is totally accurate for lossless or for intraframe encoding (like DV AVI). So with the qualifications you just posted, I think you are completely correct.
  19. Originally Posted by johnmeyer View Post
    I think you are doing everything just fine, and if it looks good, that is the proof. [... full script snipped; see post 3 above ...]

    Out of interest:
    What is the reason for restoring the original height before interpolation (and the subsequent field separation)? Better interpolation quality, or to prevent some vertical field offset?
    Could the same be done at field height, i.e. without resizing the field to full frame height?

    Edit:
    All clear now; one has to synthesize the replacement_field temporally and spatially from the good field. Never mind....
  20. Hi lordsmurf, could you please share your script?


