VideoHelp Forum

  1. You should post your final script, or at least parts of it to show what you did. It will help others in the future.
  2. Originally Posted by jagabo View Post
    You should post your final script, or at least parts of it to show what you did. It will help others in the future.
    Unfortunately, I don't have the original script anymore, but I can rewrite it.

    I created two different videos, one with combing interpolated, and one with combing interpolated and also using the half-height method.

    I interpolated combed frames, but I also interpolated the occasional dupe frame.

    Code:
    FFmpegSource2("LikeToySoldiers.mkv", atrack=1) #import video file with audio
    tdecimate()
    function ReplaceFramesSVPFlow(clip Source, int N, int X)
    {
    # N is number of the 1st frame in Source that needs replacing.
    # X is total number of frames to replace
    #e.g. ReplaceFramesSVPFlow(101, 5) would replace frames 101-105, using 100 and 106 as reference points for SVPFlow interpolation
    
    start=Source.trim(N-1,-1) #one good frame before, used for interpolation reference point
    end=Source.trim(N+X,-1) #one good frame after, used for interpolation reference point
    
    start+end
    AssumeFPS(1) #temporarily set FPS=1 for the SVSmoothFps interpolation below
    
    super=SVSuper("{gpu:1}")
    vectors=SVAnalyse(super, "{}")
    SVSmoothFps(super, vectors, "{rate:{num:"+String(X+1)+", den:1}}", url="www.svp-team.com", mt=1)
    
    AssumeFPS(FrameRate(Source)) #return back to normal source framerate for joining
    Trim(1, framecount-1) #trim ends, leaving replacement frames
    
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
    }
    
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    # and repeat

    For the second version, I did the same thing, but resized to half height. This was used only for several shots where interpolation did not suffice.


    Code:
    FFmpegSource2("LikeToySoldiers.mkv", atrack=1) #import video file with audio
    tdecimate()
    function ReplaceFramesSVPFlow(clip Source, int N, int X)
    {
    # N is number of the 1st frame in Source that needs replacing.
    # X is total number of frames to replace
    #e.g. ReplaceFramesSVPFlow(101, 5) would replace frames 101-105, using 100 and 106 as reference points for SVPFlow interpolation
    
    start=Source.trim(N-1,-1) #one good frame before, used for interpolation reference point
    end=Source.trim(N+X,-1) #one good frame after, used for interpolation reference point
    
    start+end
    AssumeFPS(1) #temporarily set FPS=1 for the SVSmoothFps interpolation below
    
    super=SVSuper("{gpu:1}")
    vectors=SVAnalyse(super, "{}")
    SVSmoothFps(super, vectors, "{rate:{num:"+String(X+1)+", den:1}}", url="www.svp-team.com", mt=1)
    
    AssumeFPS(FrameRate(Source)) #return back to normal source framerate for joining
    Trim(1, framecount-1) #trim ends, leaving replacement frames
    
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
    }
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    ReplaceFramesSVPFlow(????,?) #interpolate frame
    # and repeat
    
    Lanczos4Resize(864,240) #resize to half height
    nnedi3_rpow2(2) #double both width and height
    Lanczos4Resize(854,480) #stretch back to original AR
    I then manually edited together the best parts of each version using Premiere Pro. I exported that file, and ran it through Topaz Video Enhance AI, which helped a lot with the banding issues and detail.

    The machine learning algorithms add a bunch of noise, so I imported it back into AviSynth and ran a simple TemporalDegrain2() on it. I find de-graining after the upscale preserves much more detail.
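    A minimal sketch of that degrain pass (the input filename here is only a placeholder for the Topaz export; the real script was not preserved):

    ```avisynth
    # Hypothetical post-upscale degrain pass; "LikeToySoldiers_upscaled.mkv"
    # is a placeholder name for the Topaz Video Enhance AI export.
    FFmpegSource2("LikeToySoldiers_upscaled.mkv", atrack=1)
    TemporalDegrain2()  # degrain after the upscale to preserve detail
    ```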

    The biggest problem left is the aliasing, which can be fixed using santiag after I resize to half height; it doesn't seem to work until I resize to half height, though. It handles most (but not all) of the aliasing issues; however, the detail loss is way too much, and I'd rather live with minor aliasing.
    Last edited by embis2003; 6th Sep 2020 at 09:42. Reason: Edited mistake in the script, got confused with another project that was PAL.
  3. Originally Posted by lordsmurf View Post
    Originally Posted by embis2003 View Post
    First of all, I'd like to apologize for how hostile this thread got. I think it's fair to say I got frustrated.
    Very respectful comment.

    nearly perfect repair of this video.
    For those curious, here is my final result:
    It's nowhere near perfect, and I see lots of remnant issues and artifacts, but it seems fine for what it is. I could put it on the HDTV, sit back, and enjoy it for its few minutes of length. The source was overly distracting, whereas the final file's distractions become less obvious at normal viewing distance. I've seen better, I've seen worse. Until a better source is located, you can call this a win.
    I appreciate that! I find it's still the best "quality" version of the video around, and that allows me to look past some of those issues. However, if anybody in the future can propose a fix to said problems, I would not be opposed to starting the project over again. Now that I know my way around a little more, it probably wouldn't take as long as 9 months.
  4. Originally Posted by embis2003 View Post

    The biggest problem left is the aliasing, which can be fixed using santiag after I resize to half height; it doesn't seem to work until I resize to half height, though. It handles most (but not all) of the aliasing issues; however, the detail loss is way too much, and I'd rather live with minor aliasing.


    1) There are sections with temporal aliasing that remain in this last version that you could probably improve with temporal AA filters, such as QTGMC in progressive mode (just splice in with trim() to mix/match sections): the CG graffiti ~00:01:01, a few other sections like the couch ~00:01:11, and a few others. They can be globally filtered, or limited to parts within frames with masking/roto. Then you plug the clean frames into the neural net, +/- additional filtering.

    I used the mkv version. The avs version has a slight red shift, I think from awarpsharp2, but the vapoursynth version does not. grafitti_compare.mp4 below (this is globally filtered, i.e. entire frames).
    Code:
    LWLibavVideoSource("1.mkv")
    tdecimate()
    assumefps(24000,1001)
    trim(1484,1522) #cg grafitti section
    awarpsharp2(depth=4)
    santiag(3,3)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    awarpsharp2(depth=4)
    santiag(2,2)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
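    The trim() splicing mentioned in 1) could be sketched like this (the variable names are illustrative, and the frame range is taken from the trim above):

    ```avisynth
    # Sketch: filter only the CG graffiti section, then splice it back
    # into the untouched remainder with ++ (AlignedSplice).
    LWLibavVideoSource("1.mkv")
    tdecimate()
    assumefps(24000,1001)
    source = last
    graffiti = source.trim(1484,1522).awarpsharp2(depth=4).santiag(3,3)
    graffiti = graffiti.qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    source.trim(0,1483) ++ graffiti ++ source.trim(1523,0)
    ```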

    2) But other sections would take some semi-manual work: motion tracking (a tracking repair), some compositing and masking (roto) - such as the TV text ~00:01:13, ~00:02:23. Basically you redo the text and the zoom.

    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm. Ideally you'd redo the book text, but that is not an easy image or font to find. You need 1 clean frame (ideally the largest size, zoomed in with everything visible), and you do the zoom backwards with the motion-tracked data.
  5. 1) There are sections with temporal aliasing that remain in this last version that you could probably improve with temporal AA filters, such as QTGMC in progressive mode (just splice in with trim() to mix/match sections): the CG graffiti ~00:01:01, a few other sections like the couch ~00:01:11, and a few others. They can be globally filtered, or limited to parts within frames with masking/roto. Then you plug the clean frames into the neural net, +/- additional filtering.

    I used the mkv version. The avs version has a slight red shift, I think from awarpsharp2, but the vapoursynth version does not. grafitti_compare.mp4 below (this is globally filtered, i.e. entire frames).
    Code:
    LWLibavVideoSource("1.mkv")
    tdecimate()
    assumefps(24000,1001)
    trim(1484,1522) #cg grafitti section
    awarpsharp2(depth=4)
    santiag(3,3)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    awarpsharp2(depth=4)
    santiag(2,2)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    That improvement is very impressive, and I do find that little detail loss less distracting than the aliasing. I never thought to use QTGMC in progressive mode; man, I forgot that it's a pretty badass de-aliaser when used correctly.

    But other sections would take some semi-manual work: motion tracking (a tracking repair), some compositing and masking (roto) - such as the TV text ~00:01:13, ~00:02:23. Basically you redo the text and the zoom.
    This seems more plausible, I would just have to identify that font.

    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm.
    Yeah, it seems there is a threshold to how small text can be before Topaz messes it up. I might elect to have the book sequence scaled up using nnedi if I decide to redo it.
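    That alternative could look something like this (a hedged sketch; the trim range for the book shot is a placeholder, not the real frame numbers):

    ```avisynth
    # Hypothetical sketch: upscale only the book-text shot with nnedi3
    # instead of Topaz. fwidth/fheight resample the doubled frame back
    # down to the project's 854x480.
    FFmpegSource2("LikeToySoldiers.mkv", atrack=1)
    tdecimate()
    trim(380,430)  # placeholder range for the book sequence
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=854, fheight=480)
    ```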

    You need 1 clean frame (ideally the largest size, zoomed in with everything visible, and you do the zoom backwards with the motion tracked data)
    Wow, funnily enough, I actually thought of this exact idea; however, I declared it impossible, since even in the most zoomed-in part where all the text is visible, it still doesn't look very sharp. It only gets sharp enough after some parts are cut off of the frame. Also, the kid's head moves and blocks some of the text, so I couldn't get a full capture.

    Ideally you'd redo the book text, but that is not an easy image or font to find.
    If I could identify that font, I suppose I could add in and motion-track the text, roto the kid's head, and add some blur to match, because the text is simply lyrics from the song. But that sounds unlikely.
  6. lordsmurf (Video Restorer, joined Jun 2003):
    Originally Posted by embis2003 View Post
    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm.
    Yeah, it seems there is a threshold to how small text can be before Topaz messes it up.
    Topaz is not, and never has been, known for quality filters. This "AI" video upscaler is no different. Newbies are bamboozled by some YouTube videos, and because the software has a dummy-friendly GUI, but it's really pretty lousy software.

    It reminds me of NeatVideo, vReveal, Super Resolution, and some others. Avisynth makes those look quaint.
  7. Originally Posted by lordsmurf View Post
    Originally Posted by embis2003 View Post
    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm.
    Yeah, it seems there is a threshold to how small text can be before Topaz messes it up.
    Topaz is not, and never has been, known for quality filters. This "AI" video upscaler is no different. Newbies are bamboozled by some YouTube videos, and because the software has a dummy-friendly GUI, but it's really pretty lousy software.

    It reminds me of NeatVideo, vReveal, Super Resolution, and some others. Avisynth makes those look quaint.
    For the sake of not having this thread turn hostile again, I will just say: I disagree. In my experience, the results the software spits out can be downright amazing; it just depends on the source material. The tech is still in its infancy, and I don't consider it "AI" either; however, its noise reduction and detail-recreation capabilities are very impressive to me.


