Hi, it's my first time posting here because I couldn't find a solution to my problem.
A few months ago I brought my old VHS tapes to a "professional" video company where I live; I thought they could do a better job than me at digitizing them. When I received the files I checked them and realized they had a lot of interlacing artifacts and combing. I searched on forums and found out what de-interlacing can do.
The problem is that de-interlacing is for interlaced videos (like VHS) and what I received was:
Progressive video (I checked it with different programs and all of them classified the video files as progressive)
Video resolution: 1280x720 (720x576 VHS video with black bars all around; we use PAL)
Even though it's flagged progressive I tried to de-interlace it with different programs, but (I think) I got what you can expect from de-interlacing a progressive video.
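For anyone who wants to verify this themselves: assuming ffmpeg is installed, something like the following shows the flagged field order and lets the idet filter estimate how many frames are actually combed (the filename is a placeholder):
Code:
ffprobe -v error -select_streams v:0 -show_entries stream=field_order -of default=nw=1 capture.mkv
ffmpeg -i capture.mkv -vf idet -frames:v 500 -an -f null -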
Here is a capture from the original video:
[Attachment 57864]
Here is a capture from the "de-interlaced" video:
[Attachment 57867]
I got the same results in different programs using different methods (Yadif, Bob, etc.).
I know there are programs that can apply a blur effect to the jagged lines, but I would still have to deal with the "ghosting" of the image, and even if that can be removed I don't think it's going to look good enough. I don't see any better solution than buying a capture device and re-capturing the tapes myself in an interlaced format. What are your thoughts on this?
Is there any good way to fix it? If there isn't, what should I do to avoid the same mistakes when re-capturing myself?
PS: I checked whether the original tape has these problems and it doesn't, which means the problem is in the capturing process, not on the tape.
-
Cropping like this
[Attachment 57872]
and then resizing that to 4:3
[Attachment 57873]
and then to 768x576
[Attachment 57874]
should do the trick, deinterlacing it properly
[Attachment 57875]
using VirtualDub2.
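For anyone who prefers a script, a rough AviSynth sketch of the same idea: the crop values assume the 720x576 picture sits centered in the 1280x720 frame, the filename and the field order are guesses, and QTGMC is just one possible deinterlacer (Yadif or Bob would work too). I deinterlace right after the crop, while the scan lines are still untouched:
Code:
LWLibavVideoSource("capture.mkv")   # placeholder source filename
Crop(280, 72, -280, -72)            # cut the black borders -> 720x576 (values are a guess)
AssumeTFF()                         # field order is a guess; try AssumeBFF() if motion stutters
QTGMC(Preset="Slow")                # deinterlace (needs the QTGMC plugin package)
Spline36Resize(768, 576)            # 4:3 square-pixel PAL size
-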
You can further blur the two fields together with a vertical blur followed by a vertical sharpen. Something like this in AviSynth:
Code:
Blur(0.0, 1.0)
Sharpen(0.0, 0.7)
Here's something like that (downscaling instead of blurring) with some more cleanup:
Code:
LWLibavVideoSource("VHS-7.mkv", cache=false, prefer_hw=2)
src = last
Spline36Resize((width/12)*4, height/2)
# A good denoise filter here would help
aWarpSharp2(depth=5)
Sharpen(0.5)
nnedi3_rpow2(4, cshift="Spline36Resize", fwidth=src.width, fheight=src.height)
aWarpSharp2(depth=5)
Sharpen(0.5)
Oh, I just noticed the video was originally PAL. You want to remove the duplicate frames with TDecimate(Cycle=6, CycleR=1).
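A sketch of where that could go — the placement (right after the source filter, before any resizing) is my guess, and TDecimate needs the TIVTC plugin:
Code:
LWLibavVideoSource("VHS-7.mkv", cache=false, prefer_hw=2)
TDecimate(Cycle=6, CycleR=1)   # drops 1 duplicate frame per 6: ~30fps back to 25fps
# ... rest of the script above unchanged ...
-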
Used my clever FFmpeg-GUI.
First resized with crop detect:
[Attachment 57879]
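For reference, the same crop detection can also be done with plain ffmpeg; this just prints suggested crop=w:h:x:y values in the log (the filename is a placeholder):
Code:
ffmpeg -i capture.mkv -vf cropdetect -frames:v 300 -an -f null -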
then set the encoder like this
[Attachment 57880]
checked Avisynth, clicked create script, then edit script, and added jagabo's script (lightly modified for the PAL SD setting)
[Attachment 57882]
Tested the script with Test Script, clicked on Convert, done (encoding).
Clicked on Multiplex; the encoded video stream was already loaded. Clicked on Audio stream, selected your sample mkv, clicked on Target file, accepted the proposed filename, set the DAR to 4:3 and mkv as the container, clicked on Multiplex. ALL DONE.
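Roughly the same multiplex step as a plain ffmpeg call would be (filenames are placeholders; with stream copy, -aspect only changes the DAR stored at container level):
Code:
ffmpeg -i encoded_video.mkv -i sample.mkv -map 0:v:0 -map 1:a:0 -c copy -aspect 4:3 output.mkv
-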
Demand your money back plus some more for wasting your time and for them being incompetent fools. Then either take the tapes somewhere else or do it yourself.
-
I agree with manono. But here's johnmeyer's method of dealing with this type of problem:
https://forum.doom9.org/showthread.php?p=1685187#post1685187 -
I have a bunch of similar videos. Here is one. I would appreciate not just an answer but an explanation of what the suggested script is doing, so I could adapt it to other videos. I just want to deal with the interlacing; I don't care about noise or sharpness or whatever else. It would be nice if this could be done in VirtualDub without Avisynth, but if needed I can download additional DLLs to my existing Avisynth install.
I think I don't want adaptive motion detection; instead I want a rather rigid script that would separate frames into fields based on the thickness of a "line". I think in this video the lines are 8 pixels high. Ideally, I want to get 60p out of this bad 30p. -
The original field structure has been messed up, I think.
A first quick attempt to fix it by blurring and synthesizing one field to get 60i (60 interlaced fields per second), using AviSynth:
Code:
DGSource("nomination.dgi")   # or your source filter
f=4
spline36resize(width/2, height/f)
spline36resize(width*2, height*f)
source = sharpen(0.4, 1.0)

# synthesizing one field:
blocksize = 16   # or try blocksize=8
super = source.MSuper(pel=2)
bvec = super.MAnalyse(isb=true, blksize=blocksize)
fvec = super.MAnalyse(isb=false, blksize=blocksize)
out = source.MFlowFps(super, bvec, fvec, num=60, den=1)     # 60p
out_i = out.separatefields().selectevery(4,0,3).weave()     # re-interlace for 30i
return out   # or out_i for 30i interlaced
-
@Sharc: Why are you halving the width? Wouldn't it be better to keep the full width?
-
@Sharc, thanks! It would not work with my original file: DGIndex would not load the video for indexing, and trying to open the original file with DirectShowSource() produced a gray rectangle for video. So I had to re-encode it to Cineform, and it worked from there.
Now I need to figure out how exactly this script works, and maybe tweak it a little, because all resolution has been lost (snap-bad.png at the bottom). I resized the bad one to the same size as the good one (it was half the size in either direction).
I changed the beginning of the script to:
Code:
f=2
spline36resize(width, height/f)
spline36resize(width, height*f)
Still, I wonder whether more resolution could be preserved. I think I understand the basic approach: halve the height, then restore it; this should get rid of one field. But then where does that other field come from, where does the 60 fps come from? I don't get it. Reading up on MVTools, are these 60 fps frames fake, interpolated, not the original ones? I thought I could separate the fields baked into the frame based on the regularity of the combing. -
Yes, the approach was:
Step 1: Somehow get rid of the "combing artifacts" by applying some filtering (like a vertical blur, vertical subsampling, trial-and-error vertical down-/upscaling ...) to eventually obtain an "acceptable" progressive sequence.
Step 2: Interpolate the progressive sequence to the desired framerate (doubling to 60p in your case) using mvtools.
(Step 3: Re-interlace 60p->30i if an interlaced output is required.)
https://forum.doom9.org/showpost.php?p=1450981&postcount=3
So no attempt was made to recover the original fields (an often futile exercise IMO, even more so for color video once the original field structure has been garbled by some vertical resizing). The "missing" frames for 60p are purely synthesized from adjacent 30p frames. Interpolation may work well or it may fail badly (keywords: broken arms and legs). One has to try.
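A minimal sketch of step 2 on its own, assuming the clip coming in is already an acceptable progressive 30p (the blocksize and the 60/1 output rate are just the values used above):
Code:
source = last                                      # the cleaned-up progressive 30p clip
super = source.MSuper(pel=2)
bvec = super.MAnalyse(isb=true, blksize=16)
fvec = super.MAnalyse(isb=false, blksize=16)
source.MFlowFps(super, bvec, fvec, num=60, den=1)  # every second frame is motion-interpolated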
Maybe someone has a better approach.
-
The biggest part of the problem is that the video was slightly resized vertically while it was interlaced. This has caused the two fields to contaminate each other, and they can no longer be cleanly separated. On top of that, the interlacing structure has been further damaged, leaving scan line pairs interleaved rather than single scan lines. I don't see any good way of fixing that.
I blurred the fields together (differently), denoised, and sharpened:
Code:
function UnSharpMask(clip v, float radius, float strength)
{
    blurry = v.BinomialBlur(VarY=radius, VarC=radius, Y=3, U=3, V=3) # or GaussianBlur
    edges = Subtract(v, blurry).ColorYUV(off_y=2).ColorYUV(cont_y=(int(strength*radius*256.0)-256.0))
    Overlay(v, edges.ColorYUV(off_y=-128), mode="Add")
    Overlay(last, edges.Invert().ColorYUV(off_y=-128), mode="Subtract")
    ColorYUV(off_u=-1, off_v=-1) # Overlay is causing U and V to increase by 1
}

LWLibavVideoSource("nomination.mp4", cache=false, prefer_hw=2)
AssumeTFF()
SeparateFields()
Blur(1.0, 1.0).Sharpen(0.7, 0.7)
Weave()
Blur(1.0, 1.0).Sharpen(0.7, 0.7)
SMDegrain(tr=3, thSAD=500, refinemotion=true, contrasharp=false, PreFilter=4, mode=0, truemotion=true, plane=0, chroma=false)
UnSharpMask(3.0, 0.4)
UnSharpMask(1.5, 0.3)
GreyScale()
You could double the frame rate after that but I don't see much point.
Oh, by the way, Sharc's video in post #10 has 60p frames but it's encoded interlaced. It should be encoded progressive.
-
How about something like this:
Code:
// I think this is the correct number, can be adjusted to taste
line_width = 8

for_each_frame(source_frame) {
    // This is what Sharc's script does: get rid of combing by scaling down and then back up
    frame1 = double_frame(halve_frame(source_frame))

    // Shift the picture up
    modified_source_frame = add_pixels_on_bottom(crop_pixels_on_top(source_frame, line_width), line_width)

    // Same scaling down and up to obtain a frame from the second field
    frame2 = double_frame(halve_frame(modified_source_frame))

    // Return two frames instead of one, doubling the frame rate
    return (frame1, frame2)
}
-
Now if someone could make a working script off of this
Code:
ClearAutoloadDirs()
SetFilterMTMode("DEFAULT_MT_MODE", MT_MULTI_INSTANCE)
LoadPlugin("I:\Hybrid\64bit\Avisynth\AVISYN~1\LSMASHSource.dll")
# loading source: C:\Users\Selur\Desktop\nomination.mp4
# color sampling YV12@8, matrix: bt470, scantyp: progressive, luminance scale: limited
LWLibavVideoSource("C:\Users\Selur\Desktop\NOMINA~1.MP4", cache=false, format="YUV420P8", prefer_hw=0, repeat=true)
# current resolution: 480x320

# WHAT YOU SUGGESTED - START
# constants
h = height
w = width
line_width = 8
frame1 = last
modified_source_frame = last
frame1 = frame1.Spline36Resize(w,h/2).Spline36Resize(w,h) # down and up
modified_source_frame = modified_source_frame.Crop(0,line_width,0,0).AddBorders(0,0,0,line_width) # crop lines at the top, add lines at the bottom
frame2 = modified_source_frame.Spline36Resize(w,h/2).Spline36Resize(w,h) # down and upscale
Interleave(frame1, frame2) # output two frames
AssumeFPS(60) # adjust fps
# WHAT YOU SUGGESTED - END

# output: color sampling YV12@8, matrix: bt470, scantyp: progressive, luminance scale: limited
return last
-
Hello folks. I understand it's an oooooooold thread but I am dealing with exactly the same situation as the OP. So, as soon as I saw the magnificent job that Sharc has done, I said: this is the man!!!
So I drop my AVI video into VirtualDub2 and try to run the above script, changing the 60 frames to 50 (the original video is 25 fps):
DGSource("TEST.dgi")
f=4
spline36resize(width/2,height/f)
spline36resize(width*2,height*f)
source=sharpen(0.4,1.0)
#synthesizing one field:
blocksize=16 #or try blocksize=8
super=source.MSuper(pel=2)
bvec=super.MAnalyse(isb=true,blksize=blocksize)
fvec=super.MAnalyse(isb=false,blksize=blocksize)
out=source.MFlowFps(super, bvec, fvec, num=50, den=1) #50p
out_i=out.separatefields().selectevery(4,0,3).weave() #re-interlace for 50i
return out #or out_i for 50i interlaced
but I always get this type of error:
Can someone please explain to me what I am doing wrong here? Thank you in advance. -
Can someone please explain to me what I am doing wrong here?
That said: DGSource is not suitable for AVI input. Better use AviSource.
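In other words, for an .avi input the first line of Sharc's script would become something like this (with your own filename):
Code:
AviSource("TEST.avi")   # instead of DGSource("TEST.dgi"), which needs a .dgi index made with DGIndexNV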
Thank you very much for the response Selur!! So, if I understand correctly, I need to install DGDecNV and copy the DLLs into the VirtualDub plugin folder?
I have no idea what DGSource or AviSource are. I just copied Sharc's script, only changing the name of the file and the frame rate, since my video runs at 25 fps (the usual stuff... interlaced video with an awful deinterlace, tagged as progressive). -
No.
The script that he used is an Avisynth script, so it requires Avisynth and at least a basic understanding of how Avisynth works.
If you understand the Avisynth basics (how filtering with it works, etc.), the rest should become clear.