Yadif creates missing pixels from dropped scanlines (all deinterlacers do), but in a spatial manner, not temporal. There is a field check for spatial and temporal motion and differences, i.e. it's temporal analysis, not temporal interpolation.
Mode=1 is not temporal interpolation, because on interlaced content 50 samples/s are already represented. No new samples in time are being synthesized. Interlaced content is a spatial deficiency (a single field). Mode=0 is not temporal either: each field becomes a frame, which is spatial interpolation. On 25p content, no new samples in time are generated either. If you apply mode=1 you get 25p in a 50p stream; it's still 25p, just duplicates. The temporal sampling resolution is not changed by yadif.
Optical flow is an example of temporal interpolation, e.g. converting 25p to 50p content. New in-between samples in time are generated, so you actually have double the motion samples.
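As a concrete illustration (a sketch, not something from the thread): ffmpeg's minterpolate filter does motion-compensated frame interpolation, which is this kind of temporal interpolation. Filenames here are placeholders.

```sh
# Synthesize new in-between frames in time: 25p -> 50p using
# motion-compensated interpolation (mi_mode=mci), rather than
# just duplicating frames. Input/output names are placeholders.
ffmpeg -i input_25p.mp4 \
       -vf "minterpolate=fps=50:mi_mode=mci:mc_mode=aobmc:me_mode=bidir" \
       -c:v libx264 output_50p.mp4
```

It's very slow compared to yadif, but it genuinely doubles the number of motion samples instead of leaving the cadence at 25p.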
For some reason I cannot see the test video (I'll try another browser maybe)
The one thing that's still not "solved" here is the performance issue I mentioned early on. Yes, you can use many filters (algorithms) that are guaranteed to produce the correct result. However, doing that could be very slow, and it is generally not critically important that the result be guaranteed 'correct'. You mentioned that whether or not I can "see" the difference depends on this and that - you are exactly right!
Yes, field matching is slower, but it's still several times realtime for me (filter only). I get about 1/2 the speed compared to yadif. Bwdif is about 1.4-1.5x faster than yadif for me.
If it helps to understand: this is not my job, and no one is paying me a dime to do this stuff. I have a lot of videos here that I want to process fairly quickly. Simply put, I do NOT have time to closely analyse each of them and spend time making a tailored AviSynth filter chain for each one. But of course, in what I KNOW to be telecined progressive content, I'd like to avoid losing detail unnecessarily.
What's interesting to me is that the fieldmatch filter seems to be quite slow for some reason. Like I mentioned, simply merging lines from adjacent fields with a known, constant pattern should be faster than ANY alternative here, especially something like yadif that performs complex image analysis.
So I guess stupid question time: If yadif can process this stuff at 10-20x realtime, why does fieldmatch not process it at a HUNDRED times realtime?
Your case is not a fixed known pattern. Otherwise the entire thing would be clean (no combing, and you'd apply no filter), or the entire thing would be combed with no clean sections (then you might be able to use the select filter with interleave in a pattern, which might be faster).
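For what that select/interleave idea might look like, here is a hedged sketch. It assumes a fixed repeating cadence where only every 5th frame is combed (N=5 is a made-up example, not the thread's actual pattern): deinterlace just those frames and interleave the untouched ones back by timestamp.

```sh
# Sketch only, assuming a hypothetical fixed cadence (every 5th frame combed).
# select keeps original timestamps, so interleave restores frame order by PTS.
ffmpeg -i input.mp4 -vf "\
split[a][b];\
[a]select='eq(mod(n,5),4)',yadif=mode=0[comb];\
[b]select='not(eq(mod(n,5),4))'[clean];\
[comb][clean]interleave" out.mp4
```

Whether this beats fieldmatch in speed would need benchmarking; the point is just that a truly fixed pattern lets you skip the per-frame analysis entirely.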
ffmpeg -y -benchmark -threads N -i "24p to 25 euro-pulldown-tff.mp4" -vf fieldmatch=order=tff:mode=pc -f null -

threads  seconds
-------  -------
      1   25.630
      2   14.766
      4   14.207
      8   14.127
ffmpeg -y -benchmark -threads N -i "24p to 25 euro-pulldown-tff.mp4" -vf yadif=mode=0:parity=0 -f null -

threads  seconds
-------  -------
      1   13.554
      2    7.445
      4    4.296
      8    2.690
     16    2.140
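For anyone who wants to reproduce timings like the above, a simple loop over thread counts works; this is just a sketch (the grep on "bench:" assumes ffmpeg's usual -benchmark log line on stderr).

```sh
# Run the same filter benchmark at several thread counts and
# print only ffmpeg's benchmark summary line for each run.
for t in 1 2 4 8 16; do
  echo "threads=$t"
  ffmpeg -y -benchmark -threads "$t" -i "24p to 25 euro-pulldown-tff.mp4" \
         -vf yadif=mode=0:parity=0 -f null - 2>&1 | grep "bench:"
done
```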
Of course, if one is using slow CPU encoding the low speed of fieldmatch doesn't matter much.
The file "24p to 25 euro-pulldown-tff.mp4" of post #26 was euro-telecined by means of a script and then x264 encoded as interlaced TFF.
The file here uses x264's --pulldown euro option, which flags the frames differently (soft pulldown flags rather than baked-in interlaced encoding).
Last edited by Sharc; 24th Jan 2022 at 03:18.
Field matching speeds are about the same (per frame) with that progressive video. That video has fewer frames and proportionately shorter times.