I seem to have run into another issue during my homemade tape capturing process...
Basically, in several of my tapes I have sporadic issues with either dropped or duplicated frames - I cannot be sure which yet.
Using bob deinterlacing (25 to 50 fps) causes a strange issue in which the field order "swaps" at different points throughout the video.
Here is a brief example from one of my attempts. In this case, I forced Assume TFF in VapourSynth QTGMC, but the deinterlaced video starts with the classic stuttering issue, looking like an apparent BFF video for the first second or so (the first few dozen frames). Then, out of nowhere, the video becomes smooth.
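For context, my bobbing step looks roughly like this (a simplified sketch rather than my exact script; the file name and preset are placeholders, and QTGMC is loaded through havsfunc):
Code:
# Simplified sketch of my bobbing step (placeholder path/preset)
import vapoursynth as vs
import havsfunc as haf   # provides QTGMC in my setup

core = vs.core

clip = core.lsmas.LWLibavSource(r"capture.avi")    # 25i PAL capture
clip = core.std.SetFieldBased(clip, 2)             # 2 = treat as TFF
clip = haf.QTGMC(clip, Preset="Slower", TFF=True)  # bob to 50 fps
clip.set_output()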
In this particular snippet, the "inversion" occurs at the same time/frame where VirtualDub, when skimming through the video, reports a "Dropped Frame".
Such inversions happen at other points of the 3-hour capture as well. I captured using a JVC S-VHS HR-S7722 with its built-in line TBC (no external TBC), an I/O Data GV-USB2 and AmaRecTV. Not the best chain, but surely not the worst.
Am I "doomed" to encode my deinterlaced raw files without bobbing? Discarding half of the fields does not produce the problem, but I will loose the chance to encode to smooth deinterlaced 50fps...
Thanks in advance!
-
In lieu of a better capture or an automatic method being found, and provided the number of field order changes is not too high, a short Avisynth script can help - perhaps you can do something similar in Vapoursynth:
Code:
LWLibavVideoSource("C:\Users\Public\Documents\frame_issue_interlaced.avi")
v1=trim(0,20)                                      # frames 0-20, left alone
v2=trim(21,0).crop(0,1,-0,-0).addborders(0,0,0,1)  # frames 21+, shifted up one line to flip their field parity
v1 ++ v2
ConvertToYV12(interlaced=true)
qtgmc(preset="medium")
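I don't use Vapoursynth myself, but an untested sketch of the same trick (assuming a 4:2:2 capture so a one-line crop is legal, and QTGMC via havsfunc) might look something like this:
Code:
# Untested sketch: shift the second segment down by one line so its field
# parity flips, splice, then deinterlace the whole thing as TFF.
import vapoursynth as vs
import havsfunc as haf

core = vs.core

clip = core.lsmas.LWLibavSource(r"frame_issue_interlaced.avi")

v1 = clip[:21]                             # frames 0-20, untouched
v2 = core.std.CropRel(clip[21:], top=1)    # drop one line at the top...
v2 = core.std.AddBorders(v2, bottom=1)     # ...and pad one at the bottom

spliced = core.std.SetFieldBased(v1 + v2, 2)   # 2 = TFF
haf.QTGMC(spliced, Preset="Medium", TFF=True).set_output()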
-
Hello davexnet,
what a clever idea! I tried it out on the snippet and, sure enough, it worked!
So I assume I have to hunt down every field order change along the video, frame-accurately, and insert an additional function for each one to introduce another correction, right?
How did you figure out that the inversion occurred at frame 21? Did you check the deinterlaced file, see that the inversion occurs at frame 42, and then simply divide by 2?
Sorry if I am making a mess out of this.
-
Yes, that's pretty much the calculation. I also added a short text overlay to each segment so I could see that it changed where I expected.
Code:
v1=trim(0,20).subtitle("v1")
v2=trim(21,0).crop(0,1,-0,-0).addborders(0,0,0,1).subtitle("v2")
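In other words, since bobbing doubles the frame rate, interlaced frame = bobbed frame / 2: frame 42 in the 50 fps output maps back to frame 21 in the 25 fps source.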
-
A field duplicate happens at frame 21, so assuming that's always the case, you can try this simple script to log all such frames into a txt file.
Just load it into VirtualDub and press F5 (or File -> Run video analysis pass...).
Code:
global path= "G:\_320GB\500gb\out\test y\"
FFVideoSource(path+"drop fields issue_interlaced.avi", cache=false)
c1=separatefields.selectevery(2,0)
c2=separatefields.selectevery(2,1)
logdups (c1)
logdups (c2)

function logdups (clip c) {
    c
    frameevaluate ("""
        diff = RT_FrameDifference (last, last, current_frame, current_frame-1)
        (diff<0.001 && current_frame!=0) ? RT_TxtWriteFile (string(current_frame), path+"dups.txt", append=true) : nop
    """)
}
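If you prefer to stay in Vapoursynth, an untested sketch of the same idea, run as a plain Python script, could look like this (the 0.001 threshold is on PlaneStatsDiff's 0-1 scale, so it may need adjusting; file names are placeholders):
Code:
# Untested sketch: separate the fields and log any field that is (nearly)
# identical to the previous field of the same parity.
import vapoursynth as vs

core = vs.core

clip = core.lsmas.LWLibavSource(r"drop fields issue_interlaced.avi")
clip = core.std.SetFieldBased(clip, 2)      # assume TFF
fields = core.std.SeparateFields(clip)

def log_dups(c, outfile):
    prev = c[0] + c[:-1]                    # same clip delayed by one frame
    stats = core.std.PlaneStats(c, prev)    # adds PlaneStatsDiff per frame
    with open(outfile, "w") as f:
        for n in range(1, stats.num_frames):
            if stats.get_frame(n).props["PlaneStatsDiff"] < 0.001:
                f.write(str(n) + "\n")

log_dups(fields[::2],  "dups_parity0.txt")  # one field parity
log_dups(fields[1::2], "dups_parity1.txt")  # the other parity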
-
Hello again,
buzz1891, thank you for your suggestion. I ran the code on my file and it detected several duplicates in the interlaced capture.
When cross-checking the affected frames against their positions in the deinterlaced file, I realized that not all of the detected duplicates led to a "field order swap", i.e. to a point where the deinterlacing produces jittery movement.
Meaning, I am unsure I can blindly take the frame numbers from dups.txt, insert them into davexnet's "reverse the field order" script and deinterlace the AVI.
After several hours of testing this, an idea came to my mind. It may be unfeasible or just "too dumb", but I would like to run it by you:
As an alternative to catching each field order swap visually (or with your script) and inserting a reversal before the deinterlacing step, why don't I simply run the deinterlacing and encoding on the original file twice: once with AssumeTFF() and once with AssumeBFF()? Then I could keep only the "properly deinterlaced" segments from each version and "glue" them together, especially if I find a way to do this trimming and joining without re-encoding.
I have some experience with music production and editing, so splitting takes like this and joining them back together is fairly commonplace for me. For the final product, the seams between the segments do not need to be flawless: I can live with small "jumps" at the cuts, as long as 99% of the video is smoothly deinterlaced and there is no audio delay.
Below is a visual representation of my idea. Green represents "properly deinterlaced" segments, red the opposite:
[Attachment 81710]
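For clarity, producing the two candidate versions would just be two near-identical deinterlacing outputs, along these lines (a sketch only; source name, preset and output handling are placeholders):
Code:
# Sketch of the two deinterlacing passes for the splice-and-glue idea.
import vapoursynth as vs
import havsfunc as haf

core = vs.core

clip = core.lsmas.LWLibavSource(r"capture.avi")

# Pass A: assume the whole tape is TFF
tff = haf.QTGMC(core.std.SetFieldBased(clip, 2), Preset="Slower", TFF=True)

# Pass B: assume the whole tape is BFF
bff = haf.QTGMC(core.std.SetFieldBased(clip, 1), Preset="Slower", TFF=False)

# Encode each output separately (e.g. vspipe -o 0 / -o 1 into the encoder),
# then pick the good segments from each file in LosslessCut.
tff.set_output(0)
bff.set_output(1)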
I already dread the amount of time I will spend on this, and getting a capture chain with 0 dropped or duplicated frames would of course still be better than all of this.
Still, what do you think?
-
Just to let everyone in the future know: I tried out the method described above using the software LosslessCut on the two generated .mp4 files. It's dead easy:
1 - Take the first deinterlaced encode (AssumeTFF()) and cut out the segments that show no jerky motion.
2 - Export the segments, keeping the auto-generated segment names that contain just the timestamps.
3 - Invert the segment selection in LosslessCut and export the .CSV file with frame numbers.
4 - Open the second deinterlaced encode (AssumeBFF()) and import the .CSV file.
5 - Export the segments the same way as in step 2.
6 - Start LosslessCut a third and final time, drag and drop all the segments (which should now be listed by name and in the proper order), and let the software merge the clips together (a scripted alternative for this merge step is sketched right after this list).
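As a scripted alternative to the merge in step 6, ffmpeg's concat demuxer should do the same lossless join, provided all segments come from encodes with identical settings (mine do, since only the field order flag differs). A rough, untested sketch:
Code:
# Untested sketch: merge already-cut segments without re-encoding,
# using ffmpeg's concat demuxer (file names are placeholders).
import subprocess

segments = ["seg01_tff.mp4", "seg02_bff.mp4", "seg03_tff.mp4"]

with open("concat_list.txt", "w") as f:
    for seg in segments:
        f.write(f"file '{seg}'\n")

subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", "concat_list.txt", "-c", "copy", "merged.mp4"],
    check=True,
)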
Not automatic, but about as fast as I could have hoped. I am rather satisfied with the end result: since no re-encoding happens during the export, the whole process is quick.
-
Nice idea. It seems much easier to do this in an NLE, though: load a lossless or ProRes intermediate (or similar), edit it right there and export an MP4. Magix Vegas allows x264 export via the Voukoder plugin. Maybe you went with lossless cuts and edits because you do not like your NLE's H.264 encoding.
-
After some trial & error (working with runtime filters is a real PITA), I think I've come up with code that implements your idea. Basically, it takes each odd frame of the bobbed clip and compares it against a motion-compensated version. If the discrepancy is big enough, it switches to the alternate field order version. It's not perfect though, as the detection threshold (thr=0.93) may need some adjustment per source type, especially if there is only very subtle movement. Then again, the naked eye can miss those cases too, so...
Update: added global motion estimation (the `melog` file, generated by MAnalyse + MDepan).
Code:
global melist=RT_ReadTxtFromFile(melog, Lines=0, Start=0)
global thr=0.93   # <= the only parameter to adjust, slightly!
global bff=last.assumebff.qtgmc("fast")
global tff=last.assumetff.qtgmc("fast")

c0 = bff.converttoyuv420
frc0 = c0.trim(0,-1) + c0.selectodd.FRC2()   # 2x fps
dc1= c0.selecteven
dc2= frc0.selecteven
global dcbob= interleave(dc1,dc1)
global dcfrc= interleave(dc2,dc2)

bff
scriptclip ("""
    n= current_frame
    current_frame= n+2
    ssim2f = SSIM_FRAME (dcfrc, dcbob)
    current_frame= n-2
    ssim2b = SSIM_FRAME (dcfrc, dcbob)
    current_frame = n
    ssim0 = SSIM_FRAME (dcfrc, dcbob)
    ssim= (ssim2f+ssim0+ssim2b)/3
    trigger = n>0 && n<framecount-1 ? chkgm(n, melist) || ssim<thr : ssim<thr
    ConditionalFilter(bff, tff, bff, "trigger", "==", "true", false)
    subtitle(y=20, string(ssim))   # <= turn off subtitles after adjusting `global thr`!
    subtitle(y=40, string(trigger))
""")
prefetch(6)
return last

function chkgm (int fno, string melist) {   # checks for back&forth global motion
    me_curr = RT_TxtGetLine(melist,line=fno).leftstr(23)
    me_curr_y= eval(me_curr.rightstr(8).trimall)
    me_curr_x= eval(me_curr.leftstr(15).rightstr(8).trimall)
    me_next = RT_TxtGetLine(melist,line=fno+1).leftstr(23)
    me_next_y= eval(me_next.rightstr(8).trimall)
    me_next_x= eval(me_next.leftstr(15).rightstr(8).trimall)
    me_prev = RT_TxtGetLine(melist,line=fno-1).leftstr(23)
    me_prev_y= eval(me_prev.rightstr(8).trimall)
    me_prev_x= eval(me_prev.leftstr(15).rightstr(8).trimall)
    x = me_curr_x * me_prev_x <=0 && me_curr_x * me_next_x <=0
    y = me_curr_y * me_prev_y <=0 && me_curr_y * me_next_y <=0
    #x=false y=true
    return x||y
}

function FRC2 (clip c, int "knum", int "kden", int "dct", \
               int "thSCD1", int "thSCD2", int "bs", int "ol", bool "mt") {
    mt = default (mt, false)
    knum = default (knum, 2)
    kden = default (kden, 1)
    dct = default (dct, 5)
    blksize = default (bs, 8)
    overlap = default (ol, 0)
    thSCD1 = default (thSCD1, 400)
    thSCD2 = default (thSCD2, 130)
    search=4
    scaleCSAD=0
    border=8

    c
    AddBorders(border, border, border, border)
    sc_full = MSuper(hpad=0,vpad=0,mt=mt)
    pre = BicubicResize (width/2,height/2)
    sc_ds = pre.MSuper (hpad=0,vpad=0,mt=mt)
    bv_ds = sc_ds.MAnalyse (isb=true, blksize=blksize, overlap=overlap, search=search, dct=dct, scaleCSAD=scaleCSAD, mt=mt)
    fv_ds = sc_ds.MAnalyse (isb=false, blksize=blksize, overlap=overlap, search=search, dct=dct, scaleCSAD=scaleCSAD, mt=mt)
    bv = MScaleVect (bv_ds, 2)
    fv = MScaleVect (fv_ds, 2)
    MFlowFps (sc_full,bv,fv, thSCD1=thSCD1, thSCD2=thSCD2, num=knum*FrameRateNumerator, den=kden*FrameRateDenominator)
    if (border > 0) { crop (border,border,-border,-border) }
}