VideoHelp Forum
  1. Member (joined Mar 2021, Spain)
    Hi, it's my first time posting here because I couldn't find a solution to my problem.

    A few months ago I took my old VHS tapes to a "professional" video company here where I live; I thought they could do a better job of digitizing them than I could. When I received the files I noticed they had a lot of interlacing artifacts and combing, so I searched the forums and read up on what de-interlacing can do.
    The problem is that de-interlacing is for interlaced video (like VHS), and what I received was:

    Progressive video (I checked it with several programs and all of them classify the files as progressive)
    Video resolution: 1280x720 (the 720x576 VHS picture with black bars all around) (we use PAL)

    Even though it's progressive I tried to de-interlace it with different programs, but (I think) I got what you'd expect from de-interlacing a progressive video.
    Here is a capture from the original video:
    [Attachment 57864]


    Here is a capture from the "de-interlaced" video:
    [Attachment 57867]


    I got the same results in different programs using different methods (Yadif, bob, etc.).

    I know there are programs that can blur the jagged lines, but I'd still have to deal with the "ghosting" of the image, and even if that can be removed I don't think it's going to look good enough. I can't see any better solution than buying a capture device and re-capturing the tapes myself in interlaced format. What are your thoughts on this?
    Is there any good way to fix it? If there isn't, what should I do to avoid the same mistakes when re-capturing by myself?

    PS: I checked whether the original tape has these problems and it doesn't, which means the problem is in the capturing process, not on the tape.
    Last edited by Strift; 19th Mar 2021 at 09:19.
  2. Cut a 30-second part with visible interlaced movement from your original video (NOT the deinterlaced one) with Avidemux and post it here.
  3. Member (joined Dec 2020, Germany)
    Cropping like this:
    [Attachment 57872]


    and then resizing that to 4:3:
    [Attachment 57873]


    and then to 768x576
    [Attachment 57874]


    should do the trick, along with deinterlacing it properly:
    [Attachment 57875]



    using VirtualDub2
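
    For anyone who prefers scripting, roughly the same steps in AviSynth might look like this (an untested sketch; the crop values are a guess from the 720x576 picture centered in the 1280x720 frame, and Yadif just stands in for your preferred deinterlacer):
    Code:
    LWLibavVideoSource("VHS-7.mkv")   # or your preferred source filter
    Crop(280, 72, -280, -72)          # remove the black bars: (1280-720)/2=280, (720-576)/2=72
    Yadif(mode=0, order=1)            # deinterlace, assuming top field first; try order=0 if motion stutters
    Spline36Resize(768, 576)          # 4:3 square-pixel PAL frame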
  4. Member (joined Mar 2021, Spain)
    Originally Posted by ProWo View Post
    Cut a 30 sec part with visible interlaced movements from your original video (NOT the deinterlaced one) with Avidemux and post it here.
    Here it is:
    VHS-7.mkv
  5. You can further blur the two fields together with a vertical blur followed by a vertical sharpen. Something like this in AviSynth:
    Code:
    Blur(0.0, 1.0)
    Sharpen(0.0, 0.7)
    This will make it look much like a full blend deinterlace of the original interlaced video.

    Here's something like that (downscaling instead of blurring) with some more cleanup:
    Code:
    LWLibavVideoSource("VHS-7.mkv", cache=false, prefer_hw=2) 
    src = last
    Spline36Resize((width/12)*4, height/2)  # downscale; halving the height blends the two fields together
    # A good denoise filter here would help
    aWarpSharp2(depth=5)
    Sharpen(0.5)
    nnedi3_rpow2(4, cshift="Spline36Resize", fwidth=src.width, fheight=src.height)  # upscale back to the original frame size
    aWarpSharp2(depth=5)
    Sharpen(0.5)
    You might want to crop away those black borders too. A sample of the second script is attached.

    Oh, I just noticed the video was originally PAL. You want to remove the duplicate frames with TDecimate(Cycle=6, CycleR=1).
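    In practice that just means appending it to the script above (TDecimate comes from the TIVTC plugin), for example:
    Code:
    TDecimate(Cycle=6, CycleR=1)  # drop 1 duplicate frame in every 6 (e.g. 30 fps back down to 25 fps)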
    [Attached files]
  6. Used my clever FFmpeg-GUI.

    First resized with crop detect:
    [Attachment 57879]


    then set the encoder like this
    [Attachment 57880]


    checked Avisynth, create script, edit script, added jagabo's script (lightly modified for the PAL SD setting):
    [Attachment 57882]


    tested the script with Test Script, clicked on Convert, done (encoding).
    Clicked on Multiplex (the encoded video stream was already loaded), clicked on Audiostream and selected your sample mkv, clicked on Target file and accepted the proposed filename, set the DAR to 4:3 and mkv as the container, clicked on Multiplex, ALL DONE.
    [Attached files]
  7. Demand your money back plus some more for wasting your time and for them being incompetent fools. Then either take the tapes somewhere else or do it yourself.
  8. I agree with manono. But here's johnmeyer's method of dealing with this type of problem:
    https://forum.doom9.org/showthread.php?p=1685187#post1685187
    I have a bunch of similar videos. Here is one. I would appreciate not just an answer but an explanation of what the suggested script is doing, so I can adapt it to other videos. I just want to deal with the interlacing; I don't care about noise or sharpness or anything else. It would be nice if this could be done in VirtualDub without AviSynth, but if needed I can download additional DLLs for my existing AviSynth install.

    I don't think I want adaptive motion detection; instead I want a rather rigid script that would separate frames into fields based on the thickness of a "line". I think the lines in this video are 8 pixels high. Ideally, I want to get 60p out of this bad 30p.
    [Attached files]
    The original field structure has been messed up, I think.
    A first quick attempt to fix it by blurring and synthesizing one field to get 60i (60 interlaced fields per second), using AviSynth:
    Code:
    DGSource("nomination.dgi") # or your source filter
    f=4
    spline36resize(width/2,height/f)  # downscale to wash out the combing
    spline36resize(width*2,height*f)  # scale back up to the original size
    source=sharpen(0.4,1.0)
    
    # synthesize the in-between frames with mvtools2:
    blocksize=16 # or try blocksize=8
    super=source.MSuper(pel=2)
    bvec=super.MAnalyse(isb=true,blksize=blocksize)
    fvec=super.MAnalyse(isb=false,blksize=blocksize)
    out=source.MFlowFps(super, bvec, fvec, num=60, den=1) # 60p
    out_i=out.separatefields().selectevery(4,0,3).weave() # re-interlace for 30i
    
    return out # or out_i for 30i interlaced
    60p:
    [Attached files]
    Last edited by Sharc; 9th May 2022 at 07:54.
  11. @Sharc: Why are you halving the width? Wouldn't it be better to keep the full width?
  12. Originally Posted by Selur View Post
    @Sharc: Why are you halving the width? Wouldn't it be better to keep the full width?
    For no really good reason, I think. An experimental leftover, probably. Little harm, no real benefit. It may be skipped, I agree.
    @Sharc, thanks! It would not work with my original file: DGIndex would not load the video for indexing, and trying to open the original file with DirectShowSource() produced a gray rectangle for the video. So I had to re-encode it to Cineform, and it worked from there.

    Now I need to figure out exactly how this script works, and maybe tweak it a little, because all the resolution has been lost (snap-bad.png at the bottom). I resized the bad one to the same size as the good one (it was half the size in each direction).

    I changed the beginning of the script to:

    Code:
    f=2
    spline36resize(width,height/f)
    spline36resize(width,height*f)
    This helped (snap-average.png in the middle).

    Still, I wonder whether more resolution could be preserved. I think I understand the basic approach: halve the height, then restore it, which should get rid of one field. But then where does the other field come from, where does 60 fps come from? I don't get it. Reading up on MVTools, are those 60 fps frames fake, interpolated, not the original ones? I thought I could separate the fields baked into the frame based on the regularity of the combing.
    [Attached thumbnails: snap-good.png, snap-average.png, snap-bad.png]
  14. Originally Posted by ConsumerDV View Post
    Still, I wonder whether more resolution could be preserved. I think I understand the basic approach: halve the height, then restore it, which should get rid of one field. But then where does the other field come from, where does 60 fps come from? I don't get it. Reading up on MVTools, are those 60 fps frames fake, interpolated, not the original ones? I thought I could separate the fields baked into the frame based on the regularity of the combing.
    Yes, the approach was:
    Step 1: Somehow get rid of the "combing artifacts" by applying some filtering (like a vertical blur, vertical subsampling, trial-and-error vertical down-/upscaling ...) to eventually obtain an "acceptable" progressive sequence.
    Step 2: Interpolate the progressive sequence to the desired frame rate (doubling to 60p in your case) using mvtools.
    (Step 3: Re-interlace 60p -> 30i if interlaced output is required.)
    https://forum.doom9.org/showpost.php?p=1450981&postcount=3

    So no attempt was made to recover the original fields (an often futile exercise IMO, even more so for color video once the original field structure has been garbled by vertical resizing). The "missing" frames for 60p are synthesized purely from adjacent 30p frames. Interpolation may work well or it may fail badly (keywords: broken arms and legs). One has to try.
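
    In AviSynth terms the skeleton is essentially the script from post #10, i.e. something like this (mvtools2 and a suitable source filter assumed; the values are only starting points):
    Code:
    LWLibavVideoSource("nomination.mp4")   # or your source filter
    
    # Step 1: wash out the combing with a vertical down-/upscale
    source = Spline36Resize(width, height/2).Spline36Resize(width, height)
    
    # Step 2: interpolate to the target frame rate with mvtools2
    super = source.MSuper(pel=2)
    bvec  = super.MAnalyse(isb=true,  blksize=16)
    fvec  = super.MAnalyse(isb=false, blksize=16)
    out   = source.MFlowFps(super, bvec, fvec, num=60, den=1)
    
    # Step 3 (only if interlaced output is needed): 60p -> 30i
    # out = out.SeparateFields().SelectEvery(4,0,3).Weave()
    
    return out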

    Maybe someone has a better approach.
    Last edited by Sharc; 11th May 2022 at 03:42.
  15. The biggest part of the problem is that the video was slightly resized vertically while it was interlaced. This has caused the two fields to contaminate each other and they can no longer be cleanly separated. On top of that the interlacing structure has been further damaged leaving scanline pairs interleaved rather than single scan lines. I don't see any good way of fixing that.

    I blurred the fields together (differently), denoised, and sharpened:

    Code:
    function UnSharpMask(clip v, float radius, float strength)
    {
      blurry = v.BinomialBlur(VarY=radius, VarC=radius, Y=3, U=3, V=3) # or GaussianBlur
      edges = Subtract(v, blurry).ColorYUV(off_y=2).ColorYUV(cont_y=(int(strength*radius*256.0)-256.0))
      Overlay(v, edges.ColorYUV(off_y=-128), mode="Add")
      Overlay(last, edges.Invert().ColorYUV(off_y=-128), mode="Subtract")
      ColorYUV(off_u=-1, off_v=-1) #overlay is causing U and V to increase by 1
    }
    
    LWLibavVideoSource("nomination.mp4", cache=false, prefer_hw=2) 
    AssumeTFF()
    SeparateFields()
    Blur(1.0, 1.0).Sharpen(0.7, 0.7)  # smooth within each field
    Weave()
    Blur(1.0, 1.0).Sharpen(0.7, 0.7)  # blend the two fields together
    SMDegrain(tr=3, thSAD=500, refinemotion=true, contrasharp=false, PreFilter=4, mode=0, truemotion=true, plane=0, chroma=false)  # temporal denoising
    UnSharpMask(3.0, 0.4)
    UnSharpMask(1.5, 0.3)
    GreyScale()
    I concentrated more on sharpening and probably went a little too far. Everyone looks a little plastic-y.

    You could double the frame rate after that but I don't see much point.

    Oh, by the way, Sharc's video in post #10 has 60p frames but it's encoded interlaced. It should be encoded progressive.
    [Attached files]
    Last edited by jagabo; 11th May 2022 at 09:44.
  16. How about something like this:

    Code:
    // I think this is the correct number, can be adjusted to taste
    line_width = 8 
    
    for_each_frame(source_frame) {
    
      // This is what Sharc's script does: get rid of combing by scaling down and then back up
      frame1 = double_frame(halve_frame(source_frame))
    
      // Shift the picture up
      modified_source_frame = add_pixels_on_bottom(crop_pixels_on_top(source_frame, line_width), line_width)
    
      // Same scaling down and up to obtain a frame from the second field
      frame2 = double_frame(halve_frame(modified_source_frame))
    
      // Return two frames instead of one, doubling the frame rate
      return (frame1, frame2)
    }
    Now if someone could make a working script off of this
  17. Now if someone could make a working script off of this
    In AviSynth, this should do what you suggested:
    Code:
    ClearAutoloadDirs()
    SetFilterMTMode("DEFAULT_MT_MODE", MT_MULTI_INSTANCE)
    LoadPlugin("I:\Hybrid\64bit\Avisynth\AVISYN~1\LSMASHSource.dll")
    # loading source: C:\Users\Selur\Desktop\nomination.mp4
    # color sampling YV12@8, matrix: bt470, scantyp: progressive, luminance scale: limited
    LWLibavVideoSource("C:\Users\Selur\Desktop\NOMINA~1.MP4",cache=false,format="YUV420P8", prefer_hw=0,repeat=true)
    # current resolution: 480x320
    
    # WHAT YOU SUGGESTED -  START
    
    # constants
    h = height
    w = width
    line_width=8
    
    frame1 = last
    modified_source_frame = last
    
    frame1 = frame1.Spline36Resize(w,h/2).Spline36Resize(w,h) # down and up
    
    modified_source_frame = modified_source_frame.Crop(0,line_width,0,0).AddBorders(0,0,0,line_width) # crop lines at the top, add lines at the bottom 
    frame2 = modified_source_frame.Spline36Resize(w,h/2).Spline36Resize(w,h) # down and upscale
    
    Interleave(frame1, frame2) # output two frames
    AssumeFPS(60) # adjust fps
    
    # WHAT YOU SUGGESTED - END
    
    #  output: color sampling YV12@8, matrix: bt470, scantyp: progressive, luminance scale: limited
    return last
    I guess you wanted to split the source into fields that way... but that will not work.
    Last edited by Selur; 11th May 2022 at 14:46.
  18. You really should ask for a refund


