VideoHelp Forum

  1. Originally Posted by non-vol View Post
    What's interesting to me is that the fieldmatch filter seems to be quite slow for some reason.
    Given ffmpeg's low CPU usage when using it, I suspect it's single-threaded. If you're concerned about speed, just run several instances at the same time.
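    Purely as an illustration (my sketch, not from the post, with placeholder filenames): several single-threaded jobs can simply be started side by side from a shell and left to run concurrently.
    Code:
    # Hedged sketch: one background ffmpeg job per file, then wait for all of them.
    for f in clip1.mp4 clip2.mp4 clip3.mp4; do
      ffmpeg -i "$f" -vf fieldmatch=order=tff:mode=pc -c:v libx264 -crf 18 "matched_$f" &
    done
    wait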
  2. Originally Posted by non-vol View Post
    I'm quoting the specific part above because it shows that we do basically see eye to eye here. The term "interpolation" is also a very, very general term and easily misunderstood; I would guess not many actually WOULD understand it to mean anything even close to as complex as whatever yadif does. If you specifically mention "temporal interpolation", then - sure, I guess.
    The simplest explanation of "interpolation" is an algorithm that creates pixels, e.g. if I upscale an image, that is an example of spatial interpolation.
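    As a throwaway illustration (my command, placeholder filenames): a plain bicubic upscale is exactly this kind of spatial interpolation - new pixels are created between the existing ones.
    Code:
    # Hedged sketch: 2x bicubic upscale = spatial interpolation (new pixels, same moments in time).
    ffmpeg -i "input.png" -vf scale=iw*2:ih*2:flags=bicubic "upscaled.png"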

    Yadif creates missing pixels from dropped scanlines (all deinterlacers do), but in a spatial manner, not temporal. There is a field check for spatial and temporal motion and differences, i.e. it's not temporal interpolation, it's temporal analysis.

    Mode=1 is not temporal interpolation, because on interlaced content 50 samples/s are already represented; no new samples in time are being synthesized. Interlaced content is a spatial deficiency (a single field). Mode=0 is not temporal either - each field becomes a frame, which is spatial interpolation. On 25p content, no new samples in time are generated either: if you apply mode=1 you have 25p in 50p - still 25p, just duplicates. The temporal sampling resolution is not changed by yadif.

    Optical flow is an example of temporal interpolation, e.g. 25p to 50p conversion: new in-between samples in time are generated, so you actually have double the motion samples.
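    As an illustration only (my example, not something discussed in the thread): ffmpeg's minterpolate filter does motion-compensated frame interpolation, i.e. temporal interpolation in the sense above. Filenames and the CRF value are placeholders.
    Code:
    # Hedged sketch: synthesize new in-between frames, 25p -> 50p (true temporal interpolation).
    ffmpeg -i "input_25p.mp4" -vf minterpolate=fps=50:mi_mode=mci -c:v libx264 -crf 18 "output_50p.mp4"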


    For some reason I cannot see the test video (I'll try another browser maybe)
    Some embedding issue; it should be ok now.

    The one thing that's still not "solved" here is the performance issue that I mentioned early on. Yes, you can use many filters (algorithms) that are guaranteed to produce the correct result. However, doing that could be very slow, and it is generally not critically important that the result be guaranteed to be 'correct'. You mentioned that whether or not I can "see" the difference depends on this and that - you are exactly right!

    Yes, field matching is slower, but it's still several times realtime for me (filter only). I get about 1/2 the speed compared to yadif. Bwdif is about 1.4-1.5x faster than yadif for me.
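    For reference, a filter-only speed check like the benchmarks later in the thread can be run the same way (my illustrative command, placeholder filename):
    Code:
    # Hedged sketch: time the filter alone by decoding, filtering and discarding the output.
    ffmpeg -benchmark -i "input.mp4" -vf bwdif=mode=0 -f null -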


    If it helps to understand, this is not my job and no one is paying me a dime to do this stuff. I have a lot of videos here that I want to process fairly quickly. Simply put, I do NOT have time to closely analyse each of them and spend time making a tailored AviSynth filter chain for each of them. But of course in what I KNOW to be telecined progressive content, I'd like to avoid losing detail unnecessarily.
    The bottleneck is encoding; even field matching gives you many times realtime. Use something like NVEnc HEVC. You need a bit more bitrate to match libx265, but it's much faster.
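    A minimal sketch of that combination (my example; the preset and -cq value are placeholders to tune, not recommendations):
    Code:
    # Hedged sketch: field matching on the CPU, HEVC encoding on the GPU via NVENC.
    ffmpeg -i "input.mp4" -vf fieldmatch=order=tff:mode=pc -c:v hevc_nvenc -preset p5 -rc vbr -cq 24 -c:a copy "output.mp4"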



    What's interesting to me is that the fieldmatch filter seems to be quite slow for some reason. Like I mentioned, simply merging lines from adjacent fields with a known, constant pattern should be faster than ANY alternative here, especially something like yadif that performs complex image analysis.

    So I guess stupid question time: If yadif can process this stuff at 10-20x realtime, why does fieldmatch not process it at a HUNDRED times realtime?
    Fieldmatch is in many ways more complex than yadif.

    Your case is not a fixed, known pattern. Otherwise the entire thing would be clean (no combing, and you'd apply no filter), or the entire thing would be combed with no clean sections (then you might be able to use the select filter with interleave in a fixed pattern, which might be faster).
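    Purely to illustrate that idea (my sketch, not a recipe from the thread): with a hypothetical fixed cadence where, say, only every 12th frame is combed, the combed and clean frames could be routed separately and merged back in timestamp order. The mod(n,12)==11 pattern below is made up for the example, not this clip's cadence.
    Code:
    # Hedged sketch: deinterlace only the (assumed) combed frames, pass the rest through,
    # and let interleave merge the two branches back by timestamp.
    ffmpeg -i "input.mp4" -filter_complex "[0:v]split[a][b];[a]select='not(eq(mod(n\,12)\,11))'[clean];[b]select='eq(mod(n\,12)\,11)',yadif=mode=0[fixed];[clean][fixed]interleave" -f null -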
  3. Originally Posted by poisondeathray View Post
    Even fieldmatching gives you many times realtime.
    I'm seeing about 3x realtime at best. For example, using the video in post #26, on an 8-core 16-thread CPU (i9-9900K):

    Using fieldmatch:

    Code:
    ffmpeg -y -benchmark -threads N -i "24p to 25 euro-pulldown-tff.mp4" -vf fieldmatch=order=tff:mode=pc -f null -
    
    threads	seconds
    ---------------------
    1	25.630
    2	14.766	
    4	14.207
    8	14.127
    You can see that speed hardly increases beyond threads=2 (and CPU usage never gets above 15 percent). Using combmatch=full is even slower. But using yadif:

    Code:
    ffmpeg -y -benchmark -threads N -i "24p to 25 euro-pulldown-tff.mp4" -vf yadif=mode=0:parity=0 -f null -
    
    threads	seconds
    ------------------
    1	13.554	
    2	 7.445
    4	 4.296
    8	 2.690
    16	 2.140
    Speed is still increasing at threads = 16 and CPU usage has maxed out. All are far faster than fieldmatch.

    Of course, if one is using slow CPU encoding, the low speed of fieldmatch doesn't matter much.
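    An aside from me (not part of jagabo's post): ffmpeg also has a separate -filter_threads option for the filtergraph's own threading. It only matters for filters that implement slice threading, and the flat scaling above suggests fieldmatch doesn't, but it may be worth checking on a given build:
    Code:
    # Hedged sketch: set filter threading explicitly; any benefit depends on the filter
    # actually supporting slice threading.
    ffmpeg -y -benchmark -filter_threads 8 -i "24p to 25 euro-pulldown-tff.mp4" -vf fieldmatch=order=tff:mode=pc -f null -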
  4. Originally Posted by jagabo View Post

    You can see that speed hardly increases beyond threads=2 (and CPU usage never gets above 15 percent).
    Yes, it seems poorly optimized. Avisynth's TFM (pp=0) doesn't seem very well optimized either (playing with prefetch values didn't help much).

    Vapoursynth's TFM (PP=0) seems to utilize resources properly. Vapoursynth threading is generally better than Avisynth's or ffmpeg's.
  5. Originally Posted by jagabo View Post
    I'm seeing about 3x realtime at best. [...] You can see that speed hardly increases beyond threads=2 (and CPU usage never gets above 15 percent). [...] All are far faster than fieldmatch.
    Do the frame flags (interlaced, progressive) possibly impact the speed of the fieldmatching?
    The file "24p to 25 euro-pulldown-tff.mp4" of post #26 was euro-telecined by means of a script and then x264 encoded as interlaced TFF.
    The file attached here uses x264's --euro-pulldown, which flags the frames differently.
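    To check how a given file is actually flagged (my illustrative command, placeholder filename), the per-frame flags can be dumped with ffprobe:
    Code:
    # Hedged sketch: list interlacing/pulldown flags for the first frames of the video stream.
    ffprobe -v error -select_streams v:0 -show_entries frame=interlaced_frame,top_field_first,repeat_pict -of csv "input.mp4" | head -n 30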
    [Attached file]
    Last edited by Sharc; 24th Jan 2022 at 03:18.
  6. Field matching speeds are about the same (per frame) with that progressive video. That video has fewer frames and proportionately shorter times.
  7. Originally Posted by jagabo View Post
    Field matching speeds are about the same (per frame) with that progressive video. That video has fewer frames and proportionately shorter times.
    Fewer frames, yes. It is soft-telecined with x264's --euro-pulldown option, I think. The frame count is the same as the original 24fps source, hence it doesn't even require field matching. Just ignore the pulldown flags.


