VideoHelp Forum
Page 3 of 3
Results 61 to 81 of 81
  1. You should post your final script, or at least parts of it to show what you did. It will help others in the future.
  2. Originally Posted by jagabo View Post
    You should post your final script, or at least parts of it to show what you did. It will help others in the future.
    Unfortunately, I don't have the original script anymore, but I can rewrite it.

    I created two different videos: one with the combing interpolated, and one with the combing interpolated plus the half-height method.

    I interpolated the combed frames, but I also interpolated the occasional duplicate frame.

    Code:
    FFmpegSource2("LikeToySoldiers.mkv", atrack=1) #import video file with audio
    tdecimate()
    function ReplaceFramesSVPFlow(clip Source, int N, int X)
    {
    # N is the number of the 1st frame in Source that needs replacing.
    # X is the total number of frames to replace.
    # e.g. ReplaceFramesSVPFlow(101, 5) replaces frames 101-105, using 100 and 106 as reference points for SVPFlow interpolation.
    
    start=Source.trim(N-1,-1) #one good frame before, used for interpolation reference point
    end=Source.trim(N+X,-1) #one good frame after, used for interpolation reference point
    
    start+end
    AssumeFPS(1) #temporarily set FPS=1 so the SVSmoothFps rate maps directly to frame count
    
    super=SVSuper("{gpu:1}")
    vectors=SVAnalyse(super, "{}")
    SVSmoothFps(super, vectors, "{rate:{num:"+String(X+1)+", den:1}}", url="www.svp-team.com", mt=1)
    
    AssumeFPS(FrameRate(Source)) #restore the source frame rate for joining
    Trim(1, framecount-1) #trim ends, leaving replacement frames
    
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
    }
    
    ReplaceFramesSVPFlow(????,?) #interpolate combed or duplicate frame
    ReplaceFramesSVPFlow(????,?)
    # ...one call per frame (or run of frames) that needs replacing

    For the second version, I did the same thing but also resized to half height. This was used only for the several shots where interpolation did not suffice.


    Code:
    FFmpegSource2("LikeToySoldiers.mkv", atrack=1) #import video file with audio
    tdecimate()
    function ReplaceFramesSVPFlow(clip Source, int N, int X)
    {
    # N is the number of the 1st frame in Source that needs replacing.
    # X is the total number of frames to replace.
    # e.g. ReplaceFramesSVPFlow(101, 5) replaces frames 101-105, using 100 and 106 as reference points for SVPFlow interpolation.
    
    start=Source.trim(N-1,-1) #one good frame before, used for interpolation reference point
    end=Source.trim(N+X,-1) #one good frame after, used for interpolation reference point
    
    start+end
    AssumeFPS(1) #temporarily set FPS=1 so the SVSmoothFps rate maps directly to frame count
    
    super=SVSuper("{gpu:1}")
    vectors=SVAnalyse(super, "{}")
    SVSmoothFps(super, vectors, "{rate:{num:"+String(X+1)+", den:1}}", url="www.svp-team.com", mt=1)
    
    AssumeFPS(FrameRate(Source)) #restore the source frame rate for joining
    Trim(1, framecount-1) #trim ends, leaving replacement frames
    
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
    }
    ReplaceFramesSVPFlow(????,?) #interpolate combed or duplicate frame
    ReplaceFramesSVPFlow(????,?)
    # ...one call per frame (or run of frames) that needs replacing
    
    Lanczos4Resize(864,240) #resize to half height
    nnedi3_rpow2(2) #double both width and height
    Lanczos4Resize(854,480) #stretch back to the original aspect ratio
    I then manually edited together the best parts of each version using Premiere Pro. I exported that file, and ran it through Topaz Video Enhance AI, which helped a lot with the banding issues and detail.

    The machine-learning algorithms add a bunch of noise, so I imported it back into AviSynth and ran a simple TemporalDegrain2() on it. I found that de-graining after the upscale preserved much more detail.

    The biggest problem left is the aliasing, which can be fixed using santiag, though it doesn't seem to work until I resize to half height. It works on most (but not all) of the aliasing issues; however, the detail loss is far too great, and I'd rather have minor aliasing.
    Last edited by embis2003; 6th Sep 2020 at 09:42. Reason: Edited mistake in the script, got confused with another project that was PAL.
  3. Originally Posted by lordsmurf View Post
    Originally Posted by embis2003 View Post
    First of all, I'd like to apologize for how hostile this thread got. I think it's fair to say I got frustrated.
    Very respectful comment.

    nearly perfect repair of this video.
    For those curious, here is my final result:
    It's nowhere near perfect, and I see lots of remnant issues and artifacts, but it seems fine for what it is. I could put on the HDTV, sit back, and enjoy it for the few minutes of length. The source was overly distracting, whereas the final file distractions become less obvious with normal viewing distance. I've seen better, I've seen worse. Until better source is located, you can call this a win.
    I appreciate that! I find it's still the best "quality" version of the video around, and that allows me to look past some of those issues. However, if anybody in the future can propose a fix for said problems, I would not be opposed to starting the project over again. Now that I know my way around a little more, it probably wouldn't take as long as nine months.
  4. Originally Posted by embis2003 View Post

    The biggest problem left is the aliasing, which can be fixed using santiag, though it doesn't seem to work until I resize to half height. It works on most (but not all) of the aliasing issues; however, the detail loss is far too great, and I'd rather have minor aliasing.


    1) There are sections with temporal aliasing remaining in this last version that you could probably improve with temporal AA filters, such as QTGMC in progressive mode (just splice in with trim() to mix/match sections): the CG graffiti ~00:01:01, the couch ~00:01:11, and a few others. They can be filtered globally or limited to parts within frames with masking/roto. Then you plug the clean frames into the neural net, with or without additional filtering.

    I used the mkv version. The avs version has a slight red shift, I think from awarpsharp2, but the vapoursynth version does not. grafitti_compare.mp4 below (this is globally filtered, i.e. entire frames).
    Code:
    LWLibavVideoSource("1.mkv")
    tdecimate()
    assumefps(24000,1001)
    trim(1484,1522) #cg grafitti section
    awarpsharp2(depth=4)
    santiag(3,3)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    awarpsharp2(depth=4)
    santiag(2,2)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)

    2) Other sections would take some semi-manual work: motion tracking (a tracking repair), some compositing and masking (roto), such as the TV text ~00:01:13 and ~00:02:23. Basically you redo the text and the zoom.

    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm. Ideally you'd redo the book text, but that is not an easy image or font to find. You need 1 clean frame (ideally the largest size, zoomed in with everything visible), and you do the zoom backwards with the motion-tracked data.
  5. 1) There are sections with temporal aliasing remaining in this last version that you could probably improve with temporal AA filters, such as QTGMC in progressive mode (just splice in with trim() to mix/match sections): the CG graffiti ~00:01:01, the couch ~00:01:11, and a few others. They can be filtered globally or limited to parts within frames with masking/roto. Then you plug the clean frames into the neural net, with or without additional filtering.

    I used the mkv version. The avs version has a slight red shift, I think from awarpsharp2, but the vapoursynth version does not. grafitti_compare.mp4 below (this is globally filtered, i.e. entire frames).
    Code:
    LWLibavVideoSource("1.mkv")
    tdecimate()
    assumefps(24000,1001)
    trim(1484,1522) #cg grafitti section
    awarpsharp2(depth=4)
    santiag(3,3)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    awarpsharp2(depth=4)
    santiag(2,2)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    That improvement is very impressive, and I do find that little detail loss less distracting than the aliasing. I never thought to use QTGMC in progressive mode; man, I forgot that it's a pretty badass de-aliaser if used correctly.

    Other sections would take some semi-manual work: motion tracking (a tracking repair), some compositing and masking (roto), such as the TV text ~00:01:13 and ~00:02:23. Basically you redo the text and the zoom.
    This seems more plausible, I would just have to identify that font.

    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm.
    Yeah, it seems there is a threshold to how small text can be before Topaz messes it up. I might elect to have the book sequence scaled up using nnedi if I decide to redo it.

    You need 1 clean frame (ideally the largest size, zoomed in with everything visible), and you do the zoom backwards with the motion-tracked data.
    Wow, funny enough, I actually thought of this exact idea; however, I declared it impossible, since even in the most zoomed-in part where all the text is visible, it still doesn't look very sharp. It only gets sharp enough after some parts are cut off the frame. Also, the kid's head moves and blocks some of the text, so I couldn't get a full capture.

    Ideally you'd redo the book text, but that is not an easy image or font to find.
    If I could identify that font, I suppose I could add in and motion-track the text, roto the kid's head, and add some blur to match, because the text is simply just lyrics from the song. But that sounds unlikely.
  6. lordsmurf (Video Restorer)
    Originally Posted by embis2003 View Post
    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm.
    Yeah, it seems there is a threshold to how small text can be before topaz messes it up.
    Topaz is not, and never has been, known for quality filters. This "AI" video upscaler is no different. Newbies are bamboozled by some YouTube videos, and because the software has a dummy-friendly GUI, but it's really pretty lousy software.

    It reminds me of NeatVideo, vReveal, Super Resolution, and some others. Avisynth makes those look quaint.
  7. Originally Posted by lordsmurf View Post
    Originally Posted by embis2003 View Post
    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm.
    Yeah, it seems there is a threshold to how small text can be before topaz messes it up.
    Topaz is not, and never has been, known for quality filters. This "AI" video upscaler is no different. Newbies are bamboozled by some YouTube videos, and because the software has a dummy-friendly GUI, but it's really pretty lousy software.

    It reminds me of NeatVideo, vReveal, Super Resolution, and some others. Avisynth makes those look quaint.
    For the sake of not having this thread turn hostile again, I will just say I disagree. In my experience, the results the software spits out can be downright amazing; it just depends on the source material. The tech is still in its infancy, and I don't consider it "AI" either; however, its noise reduction and the detail it seems to create are very impressive to me.
    Last edited by embis2003; 27th Mar 2021 at 07:21.
  8. Originally Posted by poisondeathray View Post
    Originally Posted by embis2003 View Post

    The biggest problem left is the aliasing, which can be fixed using santiag, though it doesn't seem to work until I resize to half height. It works on most (but not all) of the aliasing issues; however, the detail loss is far too great, and I'd rather have minor aliasing.


    1) There are sections with temporal aliasing remaining in this last version that you could probably improve with temporal AA filters, such as QTGMC in progressive mode (just splice in with trim() to mix/match sections): the CG graffiti ~00:01:01, the couch ~00:01:11, and a few others. They can be filtered globally or limited to parts within frames with masking/roto. Then you plug the clean frames into the neural net, with or without additional filtering.

    I used the mkv version. The avs version has a slight red shift, I think from awarpsharp2, but the vapoursynth version does not. grafitti_compare.mp4 below (this is globally filtered, i.e. entire frames).
    Code:
    LWLibavVideoSource("1.mkv")
    tdecimate()
    assumefps(24000,1001)
    trim(1484,1522) #cg grafitti section
    awarpsharp2(depth=4)
    santiag(3,3)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
    awarpsharp2(depth=4)
    santiag(2,2)
    qtgmc(preset="very slow", inputtype=2, sharpness=0.1)

    2) Other sections would take some semi-manual work: motion tracking (a tracking repair), some compositing and masking (roto), such as the TV text ~00:01:13 and ~00:02:23. Basically you redo the text and the zoom.

    Just an observation, but many generic neural net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm. Ideally you'd redo the book text, but that is not an easy image or font to find. You need 1 clean frame (ideally the largest size, zoomed in with everything visible), and you do the zoom backwards with the motion-tracked data.

    There are progressive video clips with limited "short combing" that QTGMC works well on!

    Regarding the OP's first clip:
    I have seen the exact same combing in progressive mp4 clips.
    The example shows the original and a 500% enlargement for easy viewing.
    [Attachment 76975]

    This suggests this exact combing could be the result of a particular editing practice.

    I have used the VirtualDub internal filter called Field Bob, loaded twice through VirtualDub's add-filter function.

    For example:
    Set first field bob to smooth/down.
    Set the second to smooth/smooth.
    Depends on clip.

    QTGMC works better as a final process after Field Bob, but this might be a result of my QTGMC settings?

    ------------
    What was decided about the best processing for the OP's first video, "combing only"?
    I am not interested in manual editing, just full-clip processing.

    The thread was hard to follow; could you compile a basic script based on repairing the OP's first sample?
    It could be very handy for processing the OP's type of video.

    Thanks.
    Last edited by Charles-Roberts; 13th Feb 2024 at 13:15.
  9. Originally Posted by Charles-Roberts View Post
    Regarding the OP's first clip.
    I have seen the exact same combing in progressive mp4 porn clips?
    There are different types of "combing" with different causes , and therefore different solutions.

    There were old threads dealing with similar issues on porn videos; not all the treatments are the same.


    Depends on clip.

    QTGMC works better in a final process after field bob, but this might be a result of my QTGMC settings?
    It depends on the specific video


    What was decided about the best processing for the OP's first video "combing only"?
    That Eminem video in the 1st post is a special case, as many of the frames are actually OK. The entire video wasn't done the same way, and it wouldn't be wise to process it using one method - or you risk creating new artifacts and problems on the "good" frames.

    Since that old post, there are better interpolation methods using RIFE: cleaner results and fewer artifacts for frame interpolation than mvtools2 or svpflow. It requires one good frame before and after, and it interpolates the frame(s) in between. There are RIFE interpolation functions posted in threads here and on doom9. In general, RIFE produces better results for frame interpolation; MVTools2 and SVPflow have a higher chance of edge occlusions and "blobby" edge artifacts, as well as other problems.
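    For illustration, here is a rough AviSynth sketch of such a RIFE-based replacement, modeled on the ReplaceFramesSVPFlow function earlier in the thread. It assumes the Asd-g AviSynthPlus-RIFE plugin; the factor_num/factor_den parameter names and the 32-bit planar RGB input requirement are taken from that plugin's documentation, so verify them against your installed version:
    Code:
    # Sketch only: replace X bad frames starting at frame N, using RIFE instead of SVPFlow
    function ReplaceFramesRIFE(clip Source, int N, int X)
    {
    start = Source.Trim(N-1, -1)  # last good frame before the gap
    end = Source.Trim(N+X, -1)    # first good frame after the gap
    start + end
    AssumeFPS(1)                  # FPS=1 so the rate factor maps directly to frame count
    ConvertBits(32)
    ConvertToPlanarRGB()          # RIFE expects 32-bit planar RGB
    RIFE(factor_num=X+1, factor_den=1)  # synthesize X frames between the two good ones
    ConvertToYUV420()
    ConvertBits(8)                # convert back so the splice formats match
    AssumeFPS(FrameRate(Source))
    Trim(1, FrameCount-1)         # keep only the replacement frames
    Source.Trim(0, -N) ++ last ++ Source.Trim(N+X+1, 0)
    }

    As with the SVPFlow version, the clips on both sides of each ++ splice must share the same format and frame rate, hence the conversions back to YUV before joining.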



    I am not interested in manual editing, just full clip processing.
    Then you would get poor results if your case were like his. You would damage "good" frames. You would need to apply different filters to different frames.


    The thread was hard to follow; could you compile a basic script based on repairing the OP's first sample?
    It could be very handy for processing the OP's type of video.

    Thanks.
    It depends on the specific problem and specific video.

    If it's been upscaled from the original resolution using a progressive algorithm while still interlaced, with interleaved fields, then you might be able to "undo" it using a reverse kernel (such as Debicubic or Debilinear) to reconstruct the original fields. There were examples of code used on porn videos in another thread. But if it's been downscaled, then you cannot really fix it properly, because more information is lost.
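    As a rough sketch of that reverse-kernel idea (the bilinear kernel and the 720x576 original size here are assumptions you would have to verify for your own clip; Debilinear comes from a separate plugin):
    Code:
    # Sketch only: undo an assumed bilinear progressive upscale applied to interleaved fields
    Debilinear(720, 576)    # invert the upscale to recover clean field boundaries
    AssumeTFF()             # field order is an assumption; your clip may be BFF
    QTGMC(Preset="Slower")  # then deinterlace the reconstructed interlaced frames properly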

    Start a new thread if you want advice on specific video
  10. The clip I referred to is exactly the same as the OP's video in the areas, frequency, placement, and appearance of the combing.
    This is why I suspect the two clips received the same treatment; they are that similar, as best I can tell.
    Whatever was done is very likely identical for both videos.

    The material is not suitable for forum, and in this case the treatment required may well be exactly like OP,
    only using the new approaches you talk of.

    Not worth starting a new thread without being able to supply material.
    I would just like more info about RIFE interpolation functions.

    I will need to study up on this from scratch.
    Perhaps a short sample of code to get started?
  11. Originally Posted by Charles-Roberts View Post
    The clip I referred to is exactly the same as the OP's video in the areas, frequency, placement, and appearance of the combing.
    This is why I suspect the two clips received the same treatment; they are that similar, as best I can tell.
    Whatever was done is very likely identical for both videos.

    Frequency - are you describing the pattern in a particular frame (a spatial description), or across a range of frames (a temporal description)?

    i.e. Do you have clean reference frames to interpolate "from"? Otherwise that method won't work for you.

    What is the pattern of clean vs. combed frames ?

    A more typical case of porn-video mishandling is simple progressive scaling with interleaved fields. That's not quite what the OP has, because there are many good frames during motion.


    Not worth starting a new thread without being able to supply material.
    I would just like more info about RIFE interpolation functions.

    I will need to study up on this from scratch.
    Perhaps a short sample of code to get started?
    There are several variations, but a versatile RIFE based function that can interpolate multiple consecutive "bad" frames is in post #16
    https://forum.videohelp.com/threads/407293-ReplaceFrameX-InsertFrameX
  12. Originally Posted by poisondeathray View Post

    A more typical case for a porn video mishandling is simple progressive scaling, with interleaved fields.

    There are several variations, but a versatile RIFE based function that can interpolate multiple consecutive "bad" frames is in post #16
    https://forum.videohelp.com/threads/407293-ReplaceFrameX-InsertFrameX
    The form of combing is the same, but there are more areas where good reference frames are not available.
    What you have said is correct.
    I will need the versatile RIFE based function to interpolate multiple consecutive "bad" frames!
    I have assumed the fps can remain the same with this process?

    I have AVISynth installed, do I need any other bits to make the function work?

    Should I convert to original dimensions and aspect before, during or after this?
    I have not looked into it, but might be able to find the original SAR/DAR before what was likely down-scaled video.

    Thanks.
    Last edited by Charles-Roberts; 12th Feb 2024 at 13:24. Reason: Moved unread forward
  13. Originally Posted by Charles-Roberts View Post
    The form of combing is the same, but there are more areas where good reference frames are not available.
    What you have said is correct.
    I will need the versatile RIFE based function to interpolate multiple consecutive "bad" frames!

    I have assumed the fps can remain the same with this process?

    Yes, FPS remains the same .

    But once you have more than a few consecutive "bad" frames, interpolation becomes less useful as a technique, because it cannot recreate the actual missing data from the missing time samples. The "tweening" motion will look very robotic and fake. RIFE (and related methods) use linear interpolation between 2 good frames, and real-life motion is usually not linear at all.

    The more typical porn case is that it started as interlaced video. The badly resized / badly deinterlaced version is 1/2 the field rate. So for PAL areas it would be 25 fps when it should have been double-rate deinterlaced to 50 fps (for NTSC, the analogous rates are 29.97 fps and 59.94 fps). Motion is smoother at 50 or 59.94 fps; that's what the video should have been, because it's usually "video", not film.

    If you can get a clean 25p or 29.97p single rate version from processing, then you can try RIFE on the whole thing to synthesize 50p or 59.94p - to emulate what it should have been
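    Sketched for a PAL-rate clip (again assuming the AviSynthPlus-RIFE plugin and its factor_num/factor_den parameters; NTSC rates would use 30000/1001 in AssumeFPS):
    Code:
    # Sketch only: synthesize 50p from a clean 25p clip with RIFE
    AssumeFPS(25)
    ConvertBits(32)
    ConvertToPlanarRGB()              # RIFE expects 32-bit planar RGB
    RIFE(factor_num=2, factor_den=1)  # double the frame rate: 25p -> 50p
    ConvertToYUV420()
    ConvertBits(8)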

    If the version you have is upscaled (larger than 720x480 "NTSC" or 720x576 "PAL"), then I would try the inverse-kernel methods, such as Debicubic or Debilinear.


    I have AVISynth installed, do I need any other bits to make the function work?
    The dependencies are listed in the requirements list

    https://github.com/Asd-g/AviSynthPlus-RIFE
    https://github.com/Asd-g/AviSynthPlus-RIFE/releases


    Should I convert to original dimensions and aspect before, during or after this?
    I have not looked into it, but might be able to find the original SAR/DAR before what was likely down-scaled video.

    What are the current dimensions ? And what should they have been ? e.g. a PAL DVD original source would have been 720x576

    For frame-interpolation cases you'd usually convert after; but in the upscaled case, the inverse kernel tries to reverse the scaling method back to the original size and hopefully "fixes" the fields.
  14. Originally Posted by poisondeathray View Post
    What are the current dimensions ? And what should they have been ? e.g. a PAL DVD original source would have been 720x576
    I will have to look into it when on desktop.

    Thank you for all the information, I have copied it all to study.
    Have plenty of info to get started.


    Below is the QTGMC code I use for remnant interlacing on progressive video.
    It works well for most material, but not on this combing.
    PHP Code:
    Import("C:\Program Files (x86)\AviSynth+\plugins+\QTGMC.avsi")
    DirectShowSource("C:\video.avi")
    #ConvertToYV12()
    t = QTGMC(Preset="Placebo", InputType=2, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true)
    b = QTGMC(Preset="Placebo", InputType=3, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true)
    Repair(t, b)
    #PrevGlobals="Reuse" 
    Are there improvements that can be made to the code without over-blurring the source?
    This type of combing, in the OP's and my material, is not what I normally encounter!
  15. Originally Posted by Charles-Roberts View Post

    Below is the QTGMC code I use for remnant interlacing on progressive video.
    It works well for most material, but not on this combing.
    PHP Code:
    Import("C:\Program Files (x86)\AviSynth+\plugins+\QTGMC.avsi")
    DirectShowSource("C:\video.avi")
    #ConvertToYV12()
    t = QTGMC(Preset="Placebo", InputType=2, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true)
    b = QTGMC(Preset="Placebo", InputType=3, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true)
    Repair(t, b)
    #PrevGlobals="Reuse" 
    Are there improvements that can be made to the code without over-blurring the source?
    This type of combing, in the OP's and my material, is not what I normally encounter!
    It would depend on the specific video characteristics
  16. ok
  17. Originally Posted by poisondeathray View Post

    What are the current dimensions ? And what should they have been ? e.g. a PAL DVD original source would have been 720x576

    For frame interpolation cases you'd usually convert after, but in the upscaled case the inverse kernel tries to reverse the scaling method used back to the original and hopefully "fixes" the fields
    The RIFE models are a big download; I might get back to this another time.
    This might go in the too-hard basket.

    No access to an original, but I have discovered the video was down-scaled from 1920x1080 59.94.
    Sorry, I don't have the details of the downloaded version with me, but it is roughly 1400x770, 30 fps, progressive.

    I intend to try up-scaling to 1920x1080 29.97 and retry my QTGMC progressive code.
    Another experiment could be re-interlacing the up-scaled 29.97 version and re-deinterlacing.

    What is used to re-interlace?
  18. Originally Posted by Charles-Roberts View Post
    No access to an original, but I have discovered the video was down-scaled from 1920x1080 59.94.
    Sorry, I don't have the details of the downloaded version with me, but it is roughly 1400x770, 30 fps, progressive.
    It probably was HD interlaced 1920x1080i29.97. Since there was downscaling, it's unlikely you'd get any benefit from the reverse-kernel methods; they can be helpful in the opposite situation, when video is poorly upscaled.


    Another experiment could be re-interlacing the up-scaled 29.97 version and re-deinterlace.

    What is used to re-interlace?
    A proper re-interlace requires you to start with clean 59.94p - re-interlacing probably won't help you

    Code:
    #Start with 59.94p or 50p (for PAL areas)
    AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
  19. Originally Posted by poisondeathray View Post

    A proper re-interlace requires you to start with clean 59.94p - re-interlacing probably won't help you

    Code:
    #Start with 59.94p or 50p (for PAL areas)
    AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
    Thanks, the interlacing code will be interesting.

    Not sure if I should start a fresh thread?
    I have been going through forum threads on related subjects and wondering if jagabo's script would be worth trying.

    I am trying to work it out, and wondering if it's worth posting the question on the old thread or making a new one:
    https://forum.videohelp.com/threads/404277-Bad-interlace-lines-on-progressive-video#post2643462

    The johnmeyer script to repair bad deinterlacing makes my head hurt just to look at: http://forum.doom9.org/showthread.php?p=1686309#post1686309

    Hypothetical videos with no samples can create issues.
    Call it vid.mp4: 1400x770, 30 fps, progressive, from HD interlaced 1920x1080i 29.97.
    Last edited by Charles-Roberts; 13th Feb 2024 at 12:13.
  20. I think it's always worth trying all of them.

    Even if they don't improve the issue for your specific video, maybe they are beneficial for some slightly different video with similar problems, and you will have more "tools in the toolbelt".
  21. Honestly, if I knew how to recreate these artifacts exactly, I could probably train a neural net to specifically tackle the issue. But the best solution I've found is interpolation of the frames (if there are enough clean frames). If there aren't, then resize to half height (for example, a 720x540 video would be resized to 720x270), which blurs the combing away, and then use some upscaling algorithm to double the height, such as one of the many ESRGAN models or derivatives, or even Topaz (sorry lordsmurf). Then resize back to the proper aspect ratio using the algorithm of your choice. Though, if you can handle the blur, you can skip the neural net.
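    The workflow above can be sketched as an AviSynth chain for a hypothetical 720x540 clip (the sizes are examples; substitute your own, and your preferred upscaler in place of nnedi3_rpow2):
    Code:
    # Sketch only: blur the combing away at half height, then rebuild the resolution
    Lanczos4Resize(720, 270)  # halve the height; the combing averages away
    nnedi3_rpow2(2)           # neural doubling of width and height -> 1440x540
    Lanczos4Resize(720, 540)  # back to the original size and aspect ratio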


