VideoHelp Forum
  1. Hello, I'm working with some incorrectly scanned 8mm film. Some of it was scanned straight to DVD, and some was scanned in 2K.
    All of the scans appear to have been done at 30fps, so some of the files have clean duplicated frames, and some have duplicated frames with a ghosting effect, but no interlacing lines. I've attached a frame that shows the ghosting.

    [Attachment 67903]


    Unfortunately, I don't know what the original frame rate of the film was. I'm assuming something like 16fps?
    I've run all the films through After Effects and tried all of the different pulldown options, but none of them give the desired result.

    The original films are either lost or destroyed, so these digital copies are all that is left, and it's critical that I get them back into clean video.

    Would anyone know how these extra frames could be removed? Any help would be greatly appreciated.

    Thank you.
  2. Avisynth has a bunch of field-blending and frame-blending removal filters (http://avisynth.nl/index.php/External_filters#Fieldblending_and_Frameblending_removal) that may or may not help.
    I can't tell from a single image.
    You'll need to share video clips if you are looking for more specific help.
  3. Here is a sample clip from the SD source, the VOB files straight off the DVDs.
    In Premiere it shows as 29.97i LFF, but I'm not seeing any interlacing lines, just the frame blur.

    [Attachment 67907]


    Vimeo Link, download enabled:
    https://vimeo.com/776812080/6d5e77209e
  4. In the sample clip, within each group of three frames the 2nd and 3rd are duplicates of each other. (Unfortunately there is no other information contained in the fields that could help, i.e. it's a progressive-content clip.)

    So for that 29.97 clip, in Avisynth you could remove them with SelectEvery(3,0,1) if the pattern repeats (it does in that sample), and that would give you 19.98fps of unique frames (but many are blended).
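The decimation pattern can be sketched in plain Python (select_every below is a hypothetical stand-in for Avisynth's SelectEvery, just to show which frame indices survive):

```python
# Hypothetical sketch of SelectEvery(3, 0, 1): keep frames at offsets 0 and 1
# of every cycle of 3, dropping the duplicated third frame.
def select_every(frames, cycle, offsets):
    return [f for i, f in enumerate(frames) if i % cycle in offsets]

frames = list(range(9))  # frame indices 0..8
kept = select_every(frames, 3, {0, 1})
print(kept)  # [0, 1, 3, 4, 6, 7]

# Dropping 1 of every 3 frames scales the frame rate by 2/3:
print(round(29.97 * 2 / 3, 2))  # 19.98
```

The key point is that decimation only discards frames; the blended frames that survive are untouched.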

    A safer, adaptive way is to use 1-in-3 decimation (in case the video was edited or the cadence breaks), e.g. TDecimate(cycle=3, cycleR=1)

    8mm should be either 16fps or 18fps. It's possible it was sped up in the projector to 20fps and recorded at 29.97, resulting in that duplicate pattern.

    You can slow it down after decimation with AssumeFPS(16) or AssumeFPS(18). AssumeFPS is analogous to "interpreting the footage" in Adobe (i.e. the frame count stays the same, but the assigned FPS is higher or lower).
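The "interpret footage" behaviour can be illustrated with a little arithmetic (assume_fps_duration is a made-up helper, not a real Avisynth function): the frame count never changes, only the clock, so the running time stretches.

```python
# Reassigning the FPS leaves every frame in place; only the playback
# clock changes, so duration = frame_count / fps.
def assume_fps_duration(frame_count, new_fps):
    return frame_count / new_fps  # seconds

n = 1998  # e.g. 100 s worth of unique frames at 19.98 fps after decimation
print(round(assume_fps_duration(n, 19.98), 3))  # 100.0
print(assume_fps_duration(n, 16))               # 124.875 -- same frames, slower clock
```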


    The blended frames look to be at least partially motion blur and shutter related. I would probably leave it at that.

    ... but there are ways to interpolate/replace bad frames with good ones using motion interpolation/optical flow or machine-learning algorithms. There is no automatic way to do this accurately; you'd have to specify the frames. (Basically, you synthesize in-between frames from adjacent "good", non-motion-blurred frames.) These methods don't always work well, sometimes producing bad artifacts, but sometimes they do work well. Hit and miss.
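As a toy illustration of the bookkeeping only (RIFE and optical flow motion-compensate rather than blend), a bad frame is rebuilt from the good frames on either side; replace_bad is a hypothetical helper and the plain average is deliberately naive:

```python
# Toy stand-in for frame replacement: rebuild a bad frame from its good
# neighbours. Real interpolators synthesize motion-compensated frames;
# here a frame is just a brightness number and we average the neighbours.
def replace_bad(frames, bad):
    out = list(frames)
    for i in sorted(bad):
        out[i] = (out[i - 1] + out[i + 1]) / 2  # naive neighbour average
    return out

# Frame 2 is "bad" (e.g. badly blended):
print(replace_bad([10, 20, 999, 40, 50], {2}))  # [10, 20, 30.0, 40, 50]
```

This only works if the neighbours really are clean, which is exactly why the reference frames need cleanup first.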

    Other experimental methods try to remove the motion blur itself (again, many "AI"/machine-learning algorithms have been proposed for this; Adobe has one in Photoshop too).
    Last edited by poisondeathray; 30th Nov 2022 at 19:23.
  5. The problem is how it was captured. The capture device blended adjacent frames. SRestore might be able to help, but I don't have any good ideas.
  6. Originally Posted by johnmeyer View Post
    SRestore might be able to help, but I don't have any good ideas.
    SRestore will not help in this type of situation; SRestore can generally help when there is information in other, clean fields, since it can selectively choose the clean fields over the blended ones.
  7. Using tdecimate: (5,2)
    Image Attached Files
  8. Left side is decimation using SelectEvery(3,0,1) to get rid of the duplicates; right is the same, but with some of the more severely blended frames replaced using RIFE interpolation. Some of the slightly blended frames are left in to keep the natural motion (peaks/valleys); otherwise the motion starts looking too smooth/synthetic. The 60fps example is interpolated 4x to 64fps, then slowed down to 60fps (duplicates were used for the left, RIFE again on the right). These are "square pixel"/non-AR-corrected.

    You could improve any type of interpolation by cleaning up the "reference" frames beforehand (if you interpolate from dirty/scratched frames or frames with errors, those errors propagate into the synthesized frames). Also, if you reduced the motion blur first, the synthesized frames would be clearer.
  9. Originally Posted by poisondeathray View Post
    Left side is decimation using SelectEvery(3,0,1) to get rid of the duplicates; right is the same, but with some of the more severely blended frames replaced using RIFE interpolation. Some of the slightly blended frames are left in to keep the natural motion (peaks/valleys); otherwise the motion starts looking too smooth/synthetic. The 60fps example is interpolated 4x to 64fps, then slowed down to 60fps (duplicates were used for the left, RIFE again on the right). These are "square pixel"/non-AR-corrected.

    You could improve any type of interpolation by cleaning up the "reference" frames beforehand (if you interpolate from dirty/scratched frames or frames with errors, those errors propagate into the synthesized frames). Also, if you reduced the motion blur first, the synthesized frames would be clearer.
    Is there any way to use this method to restore its original frame rate (assuming 16 or 18)? Then I can do restoration work from that and do FPS conversions after.

    Regarding the blended frames: his shutter speed would have to be quite low to get that motion blur, and all of the films from this individual have this frame blur, so I'm almost certain it's a result of the poor capture. Can these be removed entirely?
  10. Originally Posted by Eventide View Post
    Is there any way to use this method to restore its original frame rate (assuming 16 or 18)? Then I can do restoration work from that and do FPS conversions after.
    I don't understand what you mean by "original frame rate".

    My earlier post explained how to get 16 or 18fps of unique (but blended) frames. It's just decimation of the duplicate frames (dropping 1 duplicate out of every 3 frames from 29.97 results in 19.98), then a slowdown from 19.98 to 16fps (or any frame rate you want).

    Regarding the blended frames: his shutter speed would have to be quite low to get that motion blur, and all of the films from this individual have this frame blur, so I'm almost certain it's a result of the poor capture. Can these be removed entirely?
    Yes it's definitely a bad capture.

    But there is a component of motion blur, even on some of the clean frames. The reason I mention it is that reversing the motion blur helps the interpolation: a cleaner, sharper frame as a reference point helps generate a sharper interpolated frame.

    There is no way to restore the original unblended film frames, because they are not present in your sample (sometimes clean film frames can be partially present in fields; that's where SRestore can help).

    You can try to "fake" it with various interpolation methods to replace the "bad" frames; one is demonstrated in the example above, but that method requires at least some good reference frames to interpolate from. You can actually do that in Adobe too, using optical flow. Various algorithms might work on some frames but fail on others (resulting in artifacts, sometimes bad ones).

    Unless you want to get more involved with manual identification/cleanup, I would probably leave it at the decimation step, blends and all.
    Last edited by poisondeathray; 1st Dec 2022 at 00:46.
  11. Once you have the cleanest frames you can get, and once you eliminate all duplicate frames, you can simply set the header in the video to change the playback speed. It's like changing the speed on a projector: the same frames, but played at a faster or slower rate.
  12. PDR, your last example is pretty darn good!
  13. Originally Posted by poisondeathray View Post
    Left side is decimation using SelectEvery(3,0,1) to get rid of the duplicates; right is the same, but with some of the more severely blended frames replaced using RIFE interpolation. Some of the slightly blended frames are left in to keep the natural motion (peaks/valleys); otherwise the motion starts looking too smooth/synthetic. The 60fps example is interpolated 4x to 64fps, then slowed down to 60fps (duplicates were used for the left, RIFE again on the right). These are "square pixel"/non-AR-corrected.

    You could improve any type of interpolation by cleaning up the "reference" frames beforehand (if you interpolate from dirty/scratched frames or frames with errors, those errors propagate into the synthesized frames). Also, if you reduced the motion blur first, the synthesized frames would be clearer.
    Is RIFE interpolation something that creates artificial frames, similar to Twixtor? Your last example looks nice, but I can tell there is some AI smoothing going on. Is there a way to do something similar to what you have done, but at 24fps instead of 60? I have zero experience with Avisynth, so if there is a specific line of code I can feed into the command line, that would be much appreciated!

    Thank you
  14. Originally Posted by Eventide View Post

    Is RIFE interpolation something that creates artificial frames, similar to Twixtor? Your last example looks nice, but I can tell there is some AI smoothing going on. Is there a way to do something similar to what you have done, but at 24fps instead of 60? I have zero experience with Avisynth, so if there is a specific line of code I can feed into the command line, that would be much appreciated!


    Yes, it's a similar concept to Twixtor. Similar artifacts too, even in the 16fps version if you look closely.

    Sometimes you might get slightly better or worse results with Twixtor, or with different algorithms such as MVTools2 in Avisynth, Kronos in AE/Nuke, or Resolve. If you can't "solve" a set of bad frames, swapping to another algorithm might work. There are other "AI" algorithms as well, such as DAIN, CAIN, and a few others. DAIN is very, very slow, but there are some cases where it can produce better results than RIFE. A workable solve is a much better starting point than manually fixing in Photoshop or similar. You still might have to do a bit of cleanup before and after the interpolation, but the interpolation - if it works ok - is a huge time saver compared to full manual fixing and compositing.

    Within RIFE, you can swap between models (right now there are 26 variants), and one might work for a given frame. It can be a lot of trial and error, but in my experience RIFE is the best overall for general usage compared to Resolve, Adobe, Twixtor, MVTools2, SVPflow, Kronos, and a few others. The older block-based, motion-vector-based algorithms tend to have more failings. The RIFE v2.3 and v2.4 models are the "best" for general use. The v4.x models can achieve non-integer frame rates and are faster to process, but the quality is noticeably worse. There are certain scenarios where all interpolation methods almost always fail, e.g. repeating patterns such as "picket fences", and these might require manual guidance (e.g. mattes, track points in Twixtor, compositing).

    If you prefer Adobe and its GUIs, you can do it with Morph Cut, optical flow, or in AE with the pt_FrameRestorer script; there are demo videos that go over them. The original free v1 of pt_FrameRestorer used AE's Pixel Motion, and v2 is not free but can use the Twixtor engine. It's the same idea: you mark bad frames, and the one good frame before and one good frame after are used as reference points.

    I actually used VapourSynth for that example, but RIFE was recently made available in Avisynth too, and it's the NCNN version, which can run on most GPUs, not just NVIDIA. I wrote some helper functions to assist with interpolating over frames a while back, and I'm in the process of translating them to Avisynth. I'll post them when I've finished; it should be today or tomorrow.

    If I were starting out new, I would say Avisynth is easier to use by quite a bit. It's been around longer, and there are more users to help out, which is very useful. But VapourSynth runs on Python and is more amenable to many "machine learning" projects and filters. Both have learning curves, and both require time gathering prerequisites.

    To be clear: you have to manually identify which frames are good or bad, and you specify the frame ranges to interpolate over. Sometimes you want to include some minor defective frames as references, because that keeps some of the natural motion intact when you have many consecutive frames to interpolate over (some sequences were 6 in a row in your example); otherwise it can look "robotic". It helps if you use an editor like vsedit with output nodes (swapped with the number keys), or AvsPmod in Avisynth. It's time consuming because it involves manual (human-eye) identification and typing. (On the other hand, it's way less time than pure manual cleanup/paint/compositing.) In AE at least you can use markers, and the GUI is a bit easier for most people to use. For me, AvsPmod or vsedit is actually faster in some ways because of the number hotkeys for nodes or tabs: you can compare multiple versions in different tabs and hot-swap them with the number keys, which makes it easier/faster to determine which settings or filters work best.

    This is what my .vpy script looked like; I only did ~200 frames for the demo. I cloned out dirt on frame 190, because it was used as a reference. There is a generic RX function that can interpolate over any number of frames, e.g. RX(r, x), but it is based on the RIFE v4 model, and I don't like the v4+ models' quality. So RX1 means 1 frame, RX2 2 frames, and so forth. The number is the 1st frame, inclusive. For example, RX5(r,3) means 5 frames starting at frame 3, so it would replace frames 3,4,5,6,7 using reference frames 2 and 8. There is also a model option, e.g. RX5(r,3,2) would use model 2. I posted the VapourSynth versions on the Doom9 forum. I changed the syntax around a bit for the Avisynth version; I'm just debugging it now and will post it soon.

    Code:
    clip = core.lsmas.LibavSMASHSource(r'8mm-sample-prores (Original).mov')
    clip = core.std.SetFrameProp(clip, prop="_FieldBased", intval=0) # progressive
    sel = core.std.SelectEvery(clip, cycle=3, offsets=[0,1])
    sel = core.std.AssumeFPS(sel, fpsnum=16, fpsden=1)
    r=sel
    
    r = RX5(r,3)
    r = RX1(r,11)
    r = RX2(r,13)
    r = RX5(r,17)
    r = RX5(r,24)
    r = RX4(r,31)
    r = RX6(r,37) 
    r = RX6(r,44) 
    r = RX3(r,54)
    r = RX6(r,58)
    r = RX5(r,66)
    r = RX6(r,72)
    r = RX4(r,81)
    r = RX3(r,86)
    r = RX2(r,90)
    r = RX1(r,94)
    r = RX2(r,96)
    r = RX3(r,101)
    r = RX2(r,105)
    r = RX2(r,108)
    r = RX2(r,111)
    r = RX1(r,114)
    r = RX4(r,116)
    r = RX6(r,121)
    r = RX6(r,128)
    r = RX3(r,135)
    r = RX2(r,140)
    r = RX2(r,143)
    r = RX3(r,146)
    r = RX5(r,150)
    r = RX6(r,156)
    r = RX6(r,163)
    r = RX6(r,170)
    r = RX2(r,180)
    r = RX5(r,185) #dirt190
    r = RX6(r,191)
    r = RX5(r,199)
    
    r.set_output()

    Yes, you can retime to any FPS, if you have an algorithm that works for that source sequence and provided the source frames are clean; otherwise you propagate the problems, e.g. if you have dirt or a scratch on a frame, the new frames will have similar dirt and scratches. Here is that example retimed from 16.0fps to 24.0fps.

    If you're using RIFE models older than v4.0, you have to upsample by a power-of-2 multiple (2x, 4x, 8x, etc.), then down-convert. I find the down-conversion can generally use a faster algorithm such as MVTools2/FrameRateConverter, because the "heavy lifting" is done in the upsampling. Mathematically, you can think of it as interpolating to the LCM (lowest common multiple) FPS and taking evenly spaced frames in time. I used RIFE twice to get 16=>64fps, then FrameRateConverterMix (using MVTools2 as the engine) for the downsample to 24fps.
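The LCM view of retiming can be checked with a few lines of Python (illustrative arithmetic only; the actual workflow above upsampled to 64fps rather than 48 because pre-v4 RIFE is limited to power-of-2 factors):

```python
from math import lcm

# Retiming 16 fps -> 24 fps viewed as "interpolate to the LCM, then take
# evenly spaced frames in time".
src_fps, dst_fps = 16, 24
common = lcm(src_fps, dst_fps)        # common grid that holds both rates
up_factor = common // src_fps         # interpolation factor: 16 -> 48
keep_every = common // dst_fps        # keep every 2nd frame: 48 -> 24
print(common, up_factor, keep_every)  # 48 3 2

# Indices on the 48 fps grid that land exactly on the 24 fps output grid:
print([i * keep_every for i in range(dst_fps)][:6])  # [0, 2, 4, 6, 8, 10]
```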
    Image Attached Files
  15. Avisynth RIFE interpolation frame-replacement helper functions.
    Requirements (I might have missed some):

    AVS+ RIFE
    https://github.com/Asd-g/AviSynthPlus-RIFE

    MVTools2 (pinterf branch, for down conversions)
    https://github.com/pinterf/mvtools

    FramerateConverterMIX (dogway version, for high bit depth support)
    https://github.com/Dogway/Avisynth-Scripts
    https://github.com/Dogway/Avisynth-Scripts/blob/master/MIX%20mods/FrameRateConverterMIX.avsi

    AVSResize (for pixel format conversions)
    http://avisynth.nl/index.php/Avsresize


    Code:
    function RXr(clip Source, int N, int X)
    {
    #RIFE interpolation frame replacement
    # N is the number of the 1st frame in Source that needs replacing.
    # X is the total number of frames to replace
    #e.g. RXr(101, 5) would replace frames 101,102,103,104,105, using 100 and 106 as reference points for RIFE interpolation
    
    start=Source.assumeframebased().trim(N-1,-1) #one good frame before, used for interpolation reference point
    end=Source.assumeframebased().trim(N+X,-1) #one good frame after, used for interpolation reference point
    
    start+end
    AssumeFPS(1) #temporarily set FPS=1 for interpolation
    
    z_ConvertFormat(pixel_type="RGBPS", resample_filter="bicubic", colorspace_op="709:709:709:l=>rgb:709:709:f")
    RIFE(model=9, factor_num=X+1, factor_den=1)
    
    z_ConvertFormat(pixel_type=Source.PixelType, resample_filter="bicubic", colorspace_op="rgb:709:709:f=>709:709:709:l")
    AssumeFPS(FrameRate(Source)) #return back to normal source framerate for joining
    Trim(1, X+1) #trim ends, leaving replacement frames
    
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
    }
    
    
    
    	
    function RXr1(clip Source, int "FirstFrame", int "Model")
    {
    #RIFE interpolation
    #assumes YUV input
    Model = Default(Model, 5) #2.3
    clip1 = Source.AssumeFPS(1) #temporarily set FPS=1 for interpolation
    start=Clip1.Trim(FirstFrame-1,-1)
    end=Clip1.Trim(FirstFrame+1,-1)
    startend = start + end
    r = startend.z_ConvertFormat(pixel_type="RGBPS", resample_filter="bicubic", colorspace_op="709:709:709:l=>rgb:709:709:f")
    r = r.RIFE(model=Model)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType, resample_filter="bicubic", colorspace_op="rgb:709:709:f=>709:709:709:l")
    r = r.Trim(1,1).AssumeFPS(1)
    #FrameRateConverter(output="Flow")
    a = clip1.Trim(0, FirstFrame-1)
    b = clip1.Trim(FirstFrame+1,0)
    join = a+r+b
    join = join.AssumeFPS(FrameRate(Source))
    return join
    }
    	
    function RXr2(clip Source, int "FirstFrame", int "Model")
    {
    #RIFE interpolation
    #assumes YUV input
    Model = Default(Model, 5) #2.3
    clip1 = Source.AssumeFPS(1) #temporarily set FPS=1 for interpolation
    start=Clip1.Trim(FirstFrame-1,-1)
    end=Clip1.Trim(FirstFrame+2,-1)
    startend = start + end
    r = startend.z_ConvertFormat(pixel_type="RGBPS", resample_filter="bicubic", colorspace_op="709:709:709:l=>rgb:709:709:f")
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType, resample_filter="bicubic", colorspace_op="rgb:709:709:f=>709:709:709:l")
    r = r.FrameRateConverterMix(NewNum=3, NewDen=1, output="Flow")
    r = r.Trim(1,2).AssumeFPS(1)
    a = clip1.Trim(0, FirstFrame-1)
    b = clip1.Trim(FirstFrame+2,0)
    join = a+r+b
    join = join.AssumeFPS(FrameRate(Source))
    return join
    }
    	
    function RXr3(clip Source, int "FirstFrame", int "Model")
    {
    #RIFE interpolation
    #assumes YUV input
    Model = Default(Model, 5) #2.3
    clip1 = Source.AssumeFPS(1) #temporarily set FPS=1 for interpolation
    start=Clip1.Trim(FirstFrame-1,-1)
    end=Clip1.Trim(FirstFrame+3,-1)
    startend = start + end
    r = startend.z_ConvertFormat(pixel_type="RGBPS", resample_filter="bicubic", colorspace_op="709:709:709:l=>rgb:709:709:f")
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType, resample_filter="bicubic", colorspace_op="rgb:709:709:f=>709:709:709:l")
    r = r.FrameRateConverterMix(NewNum=4, NewDen=1, output="Flow")
    r = r.Trim(1,3).AssumeFPS(1)
    a = clip1.Trim(0, FirstFrame-1)
    b = clip1.Trim(FirstFrame+3,0)
    join = a+r+b
    join = join.AssumeFPS(FrameRate(Source))
    return join
    }	
    
    function RXr4(clip Source, int "FirstFrame", int "Model")
    {
    #RIFE interpolation
    #assumes YUV input
    Model = Default(Model, 5) #2.3
    clip1 = Source.AssumeFPS(1) #temporarily set FPS=1 for interpolation
    start=Clip1.Trim(FirstFrame-1,-1)
    end=Clip1.Trim(FirstFrame+4,-1)
    startend = start + end
    r = startend.z_ConvertFormat(pixel_type="RGBPS", resample_filter="bicubic", colorspace_op="709:709:709:l=>rgb:709:709:f")
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType, resample_filter="bicubic", colorspace_op="rgb:709:709:f=>709:709:709:l")
    r = r.FrameRateConverterMix(NewNum=5, NewDen=1, output="Flow")
    r = r.Trim(1,4).AssumeFPS(1)
    a = clip1.Trim(0, FirstFrame-1)
    b = clip1.Trim(FirstFrame+4,0)
    join = a+r+b
    join = join.AssumeFPS(FrameRate(Source))
    return join
    }		
    
    
    function RXr5(clip Source, int "FirstFrame", int "Model")
    {
    #RIFE interpolation
    #assumes YUV input
    Model = Default(Model, 5) #2.3
    clip1 = Source.AssumeFPS(1) #temporarily set FPS=1 for interpolation
    start=Clip1.Trim(FirstFrame-1,-1)
    end=Clip1.Trim(FirstFrame+5,-1)
    startend = start + end
    r = startend.z_ConvertFormat(pixel_type="RGBPS", resample_filter="bicubic", colorspace_op="709:709:709:l=>rgb:709:709:f")
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType, resample_filter="bicubic", colorspace_op="rgb:709:709:f=>709:709:709:l")
    r = r.FrameRateConverterMix(NewNum=6, NewDen=1, output="Flow")
    r = r.Trim(1,5).AssumeFPS(1)
    a = clip1.Trim(0, FirstFrame-1)
    b = clip1.Trim(FirstFrame+5,0)
    join = a+r+b
    join = join.AssumeFPS(FrameRate(Source))
    return join
    }		
    
    function RXr6(clip Source, int "FirstFrame", int "Model")
    {
    #RIFE interpolation
    #assumes YUV input
    Model = Default(Model, 5) #2.3
    clip1 = Source.AssumeFPS(1) #temporarily set FPS=1 for interpolation
    start=Clip1.Trim(FirstFrame-1,-1)
    end=Clip1.Trim(FirstFrame+6,-1)
    startend = start + end
    r = startend.z_ConvertFormat(pixel_type="RGBPS", resample_filter="bicubic", colorspace_op="709:709:709:l=>rgb:709:709:f")
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.RIFE(model=Model)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType, resample_filter="bicubic", colorspace_op="rgb:709:709:f=>709:709:709:l")
    r = r.FrameRateConverterMix(NewNum=7, NewDen=1, output="Flow")
    r = r.Trim(1,6).AssumeFPS(1)
    a = clip1.Trim(0, FirstFrame-1)
    b = clip1.Trim(FirstFrame+6,0)
    join = a+r+b
    join = join.AssumeFPS(FrameRate(Source))
    return join
    }
    The functions could probably be cleaned up a bit. I hardcoded them to expect YUV input; there is probably a more elegant way to handle input pixel types, but they will return the input pixel type at high bit depths as well, e.g. 10-bit 4:2:2 ProRes would output 10-bit 4:2:2.

    I called them RXr (for the generic any-frame-count version), RXr1, RXr2, etc.; the "r" stands for the RIFE version. The original RX functions use MVTools2, so you can easily swap between them in a script.
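    The frame-range bookkeeping behind the RX/RXr helpers can be sketched in a few lines of Python (rx_span is a hypothetical helper that just reports which frames get replaced and which are used as references):

```python
# Given N = first replaced frame and X = number of frames to replace,
# the helpers rebuild frames N..N+X-1 using the frames immediately
# before and after the span as interpolation references.
def rx_span(first_frame, count):
    replaced = list(range(first_frame, first_frame + count))
    refs = (first_frame - 1, first_frame + count)
    return replaced, refs

print(rx_span(3, 5))    # ([3, 4, 5, 6, 7], (2, 8))  -- matches RX5(r,3)
print(rx_span(101, 5))  # ([101, 102, 103, 104, 105], (100, 106))
```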


