The simplest explanation of "interpolation" is an algorithm that creates new pixels. e.g. if I upscale an image, that is an example of spatial interpolation.
Yadif creates missing pixels from the dropped scanlines (all deinterlacers do), but in a spatial manner, not a temporal one. There is a field check for spatial and temporal motion and differences. i.e. it's not temporal interpolation, it's temporal analysis.
Mode=1 is not temporal interpolation, because on interlaced content 50 samples/s are already represented. No new samples in time are being synthesized. Interlaced content is a spatial deficiency (a single field). Mode=0 is not temporal either - each field becomes a frame. It's a spatial interpolation. On 25p content, no new samples in time are generated either. If you apply mode=1 you get 25p in 50p - it's still 25p, just with duplicates. The temporal sampling resolution is not changed by yadif.
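As a concrete sketch of the two modes (file names are hypothetical; option syntax per the ffmpeg yadif filter documentation):

```shell
# mode=0 ("send_frame"): one output frame per input frame -> 50i in, 25p out
ffmpeg -i interlaced-50i.mp4 -vf yadif=mode=0 -c:v libx264 out-25p.mp4

# mode=1 ("send_field"): one output frame per field -> 50i in, 50p out
# (on 25p-in-50i content this just yields duplicated frames, as noted above)
ffmpeg -i interlaced-50i.mp4 -vf yadif=mode=1 -c:v libx264 out-50p.mp4
```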
Optical flow is an example of temporal interpolation, e.g. 25p to 50p. New in-between samples in time are generated - you actually have double the motion samples.
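For reference, ffmpeg's minterpolate filter is one implementation of this kind of motion-compensated temporal interpolation (input name hypothetical; options per the ffmpeg filter docs):

```shell
# mi_mode=mci = motion-compensated interpolation; synthesizes new
# in-between frames to go from 25 fps to 50 fps
ffmpeg -i in-25p.mp4 -vf minterpolate=fps=50:mi_mode=mci -c:v libx264 out-50p.mp4
```

Expect this to be far slower than yadif - motion estimation per output frame is expensive.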
For some reason I cannot see the test video (I'll try another browser maybe).
The one thing that's still not "solved" here is the performance issue that I mentioned early on. Yes, you can use many filters (algorithms) that are guaranteed to produce the correct result. However, it's possible doing that would be very slow and it is generally not critically important that the result would be guaranteed to be 'correct'. You mentioned that whether or not I can "see" the difference depends on this and that - you are exactly right!
Yes, field matching is slower, but it's still several times realtime for me (filter only). I get about 1/2 the speed compared to yadif. Bwdif is about 1.4-1.5x faster than yadif for me.
If it helps to understand, this is not my job and no one is paying me a dime to do this stuff. I have a lot of videos here that I want to process fairly quickly. Just simply put, I do NOT have time to closely analyse each of them and spend time making a tailored AviSynth filter chain for each of them. But of course in what I KNOW to be telecined progressive content, I'd like to avoid losing detail unnecessarily.
What's interesting to me is that the fieldmatch filter seems to be quite slow for some reason. Like I mentioned, simply merging lines from adjacent fields with a known, constant pattern should be faster than ANY alternative here, especially something like yadif that performs complex image analysis.
So I guess stupid question time: If yadif can process this stuff at 10-20x realtime, why does fieldmatch not process it at a HUNDRED times realtime?
Your case is not a fixed, known pattern. Otherwise the entire thing would be clean (no combing, and you'd apply no filter), or the entire thing would be combed with no clean sections (then you might be able to use the select filter with interleave in a pattern, which might be faster) -
I'm seeing about 3x realtime at best. For example, using the video in post #26, on an 8-core/16-thread CPU (i9 9900K):
Using fieldmatch:
Code:
ffmpeg -y -benchmark -threads N -i "24p to 25 euro-pulldown-tff.mp4" -vf fieldmatch=order=tff:mode=pc -f null -

threads   seconds
-------   -------
   1      25.630
   2      14.766
   4      14.207
   8      14.127
Code:
ffmpeg -y -benchmark -threads N -i "24p to 25 euro-pulldown-tff.mp4" -vf yadif=mode=0:parity=0 -f null -

threads   seconds
-------   -------
   1      13.554
   2       7.445
   4       4.296
   8       2.690
  16       2.140
Of course, if one is using slow CPU encoding the low speed of fieldmatch doesn't matter much. -
Yes, it seems poorly optimized. Avisynth's TFM (pp=0) seems not very well optimized either (playing with prefetch values didn't help much).
Vapoursynth's TFM (PP=0) seems to utilize resources properly. Vapoursynth threading is generally better than Avisynth's or ffmpeg's -
Do the frame flags (interlaced, progressive) possibly impact the speed of the fieldmatching?
The file "24p to 25 euro-pulldown-tff.mp4" of post #26 was euro-telecined by means of a script and then x264 encoded as interlaced TFF.
The file here uses x264's --euro-pulldown which flags the frames differently. Last edited by Sharc; 24th Jan 2022 at 03:18.
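For anyone reproducing this: in current x264 CLI builds that option is spelled as a value of --pulldown, if I'm reading the help output right. A sketch (input name hypothetical):

```shell
# Soft euro pulldown: 24p source is flagged via pic_struct so a player
# performs the 24->25 pulldown; no fields are actually resampled
x264 --pulldown euro --fps 24 -o flagged.264 source-24p.y4m
```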
-
Field matching speeds are about the same (per frame) with that progressive video. That video has fewer frames and proportionately shorter times.
-