VideoHelp Forum

  1. Member
    Join Date
    Nov 2019
    Location
    Europe
    Hi, quick question. I have to ask because so far no search result has touched on this; almost everything related is relevant only to NTSC, unfortunately.

    I, however, am dealing with some source video that has been telecined for PAL broadcast. At least I strongly suspect as much, the telecine part. The sources clearly have both progressive and true interlaced parts within them. Viewing unfiltered, this is visible as combing in some parts, but not others. Sort of like watching a movie where the movie itself is progressive (BR or DVD rips for all I know), but some stuff in commercial breaks is not, and sometimes - subtitles are interlaced and present as combed.

    Now, actually I know this should be a very straightforward procedure.

    ...But I have read so much stuff and the different filters and their documentation don't really make it clear how anything actually happens. Often it is recommended to use, for example, the filters phase, fieldmatch, decimate - and very often yadif or another de-interlacer in addition to the actual de-telecine procedure. What would be the purpose of doing all that stuff if in the end the source would go through yadif anyway? That's what I don't understand.

    One reason I'm asking this actually is because using only yadif, or bwdif, or even pp=lb can work, but I'm not sure if pp=lb causes additional blur to the progressive portions, and bwdif while producing good quality, is very slow.

    ...

    So, in a nutshell. Converting interlaced (or telecined) PAL 50i back to 25p. Interlaced to progressive, nothing more, only consideration is to properly de-telecine the (very certainly) telecined parts.

    How to do this ?
  2. The purpose is to not degrade the full progressive frames. When you deinterlace a progressive-content frame, you essentially cut the vertical resolution in half.
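As a toy illustration of that point (plain Python, not ffmpeg internals; the numbers are made-up scan lines): weaving the two fields of a progressive frame back together is lossless, while discarding one field and line-doubling, like a crude deinterlacer, halves the vertical detail.

```python
# Toy model: a "frame" is a list of scan lines.
frame = [10, 20, 30, 40, 50, 60]

top_field = frame[0::2]       # lines 0, 2, 4
bottom_field = frame[1::2]    # lines 1, 3, 5

# Weave: interleave the fields -> the exact original frame, nothing lost
weaved = [0] * len(frame)
weaved[0::2] = top_field
weaved[1::2] = bottom_field
assert weaved == frame

# Crude "deinterlace": keep only the top field and double its lines
doubled = [line for line in top_field for _ in (0, 1)]
assert doubled != frame       # half the vertical detail is gone
```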


    In theory, it would be

    Code:
    -vf fieldmatch=combmatch=full,bwdif=deint=interlaced
    https://ffmpeg.org/ffmpeg-filters.html#fieldmatch

    This should field match the 25p sections, returning the full-quality 25p frames; sections that are interlaced will be flagged, and bwdif will deinterlace only those flagged sections, so as not to degrade the progressive sections.

    Sometimes you have to adjust thresholds and settings, preview, and readjust. For example, it might miss a small interlaced overlay, or it might be too aggressive and degrade a progressive section.

    It's less ideal with ffmpeg. It's easier with avisynth/avspmod or vapoursynth/vsedit, where you can preview more easily and adjust to find the proper settings. Yes, you can use ffplay or mpv to preview an ffmpeg pipe, but it's not as easy: scrubbing is less straightforward, and it's more difficult to make adjustments and get feedback.
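One hedged way to preview in ffmpeg-land before committing to a full encode (filenames are placeholders; adjust the filter options to your source):

```shell
# Preview the exact filter chain with ffplay before encoding.
ffplay -vf "fieldmatch=combmatch=full,bwdif=mode=0:deint=interlaced" input.ts

# Or pipe ffmpeg into mpv:
ffmpeg -i input.ts -vf "fieldmatch=combmatch=full,bwdif=mode=0:deint=interlaced" \
       -f nut - | mpv -
```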


    Also, for those interlaced-content parts, going to 25p discards half the information (single-rate deinterlace). Interlaced content would properly be displayed at 50p, because interlaced content represents 50 moments in time per second.


    EDIT: the default mode in ffmpeg for bwdif is send_field, but for yadif it is send_frame. So set bwdif to mode=0 for 25p:

    Code:
    -vf fieldmatch=combmatch=full,bwdif=mode=0:deint=interlaced
    Last edited by poisondeathray; 19th Jan 2022 at 08:47.
  3. Member
    Join Date
    Apr 2018
    Location
    Croatia
    Again spreading misinformation about mpv.
  4. Why not use QTGMC instead? It's way better than the bloody bwdif
  5. Originally Posted by s-mp View Post
    Why not use QTGMC instead? It's way better than the bloody bwdif
    You can't with ffmpeg alone; you'd need avisynth or vapoursynth

    QTGMC (alone) also degrades progressive content, albeit less than bwdif. You can use QTGMC with TFM's clip2 (so it is applied only where combing is detected).
  6. I bet that the "interlaced" sections are field-delayed. So the proposed field matcher is doubly right: it will leave the progressive sections alone and correct the field delay in the other sections, without any deinterlacing. Maybe credits or subtitles are really interlaced, so the command poisondeathray gave you is triply right.
  7. Member
    Join Date
    Nov 2019
    Location
    Europe
    A very quick reply in between here, where I unfortunately need to quote myself:

    ...
    using only yadif, or bwdif, or even pp=lb can work, but I'm not sure if pp=lb causes additional blur to the progressive portions, and bwdif while producing good quality, is very slow.
    I emphasized the relevant part. Regardless of what is 'correct'... just "-vf bwdif" alone makes for around a 60% speed penalty, and that's pretty bad seeing as the encoding rate here (turbo boost off for thermal reasons) is usually slightly below 1x anyway. You know, x265 encoding itself is slow.

    So unless QTGMC is many times faster than bwdif... I don't see why it's even part of this discussion.
  8. Originally Posted by non-vol View Post
    A very quick reply in between here, where I unfortunately need to quote myself:

    ...
    using only yadif, or bwdif, or even pp=lb can work, but I'm not sure if pp=lb causes additional blur to the progressive portions, and bwdif while producing good quality, is very slow.
    I emphasized the relevant part. Regardless of what is 'correct'... just "-vf bwdif" alone makes for around a 60% speed penalty, and that's pretty bad seeing as the encoding rate here (turbo boost off for thermal reasons) is usually slightly below 1x anyway. You know, x265 encoding itself is slow.

    So unless QTGMC is many times faster than bwdif... I don't see why it's even part of this discussion.


    For me, bwdif is faster than even yadif (yadif was known as one of the fastest deinterlacers). bwdif is about 10-20x faster than QTGMC's default preset ("Slower"). QTGMC is much, much slower than bwdif or yadif, even at the faster settings. But the quality is better. You decide what tradeoffs you want to make.

    It might be that bwdif relies on some SIMD instructions your CPU does not have, which yadif does not take advantage of. bwdif seems to be faster for most people. Maybe you have an older CPU?

    You can change the fieldmatch comb fallback to yadif instead of bwdif if you want. For me, yadif is slower (testing both the filter alone and a complete encode), and slightly lower quality than bwdif:

    Code:
    -vf fieldmatch=combmatch=full,yadif=deint=interlaced
    You might get faster results using GPU deinterlacing, such as yadif_cuda. On many systems it's actually significantly slower because of the memory transfers, but it could free up a few CPU cycles, and since x265 is your bottleneck it might end up being faster overall.
    Last edited by poisondeathray; 17th Jan 2022 at 23:42.
  9. Member
    Join Date
    Nov 2019
    Location
    Europe
    Originally Posted by poisondeathray View Post
    The purpose is to not degrade the full progressive frames. When you deinterlace a progressive-content frame, you essentially cut the vertical resolution in half.
    That is if you do field discarding/doubling... right? Or if you do something really wrong. Other methods of "de-interlacing" (a catch-all term afaik) that I know of include simple bobbing, blending, and things such as yadif, bwdif and even more advanced techniques that use some form of very compute-intensive data analysis to identify which parts of frames exhibit combing.

    In this case that's "wasted work". I already know the source is progressive. Take the odd lines from one field, the even ones from the second, interleave them and that's it, that is the full resolution frame. There's no need to analyse the picture at all, since the pattern is exactly known beforehand.

    But like I said, practically all of the documentation, examples, tutorials, all of it... it's always something to do with 3:2 patterns etc., aka NTSC conversions, which makes things more difficult to understand. If I've ever seen an article, forum post, etc. explaining how to do just simple PAL-to-PAL "conversions"... I don't know when or where, and I can't find it.

    In theory, it would be

    Code:
    -vf fieldmatch=combmatch=full,bwdif=deint=interlaced
    https://ffmpeg.org/ffmpeg-filters.html#fieldmatch

    This should field match the 25p sections, returning the full-quality 25p frames; sections that are interlaced will be flagged, and bwdif will deinterlace only those flagged sections, so as not to degrade the progressive sections.

    Sometimes you have to adjust thresholds and settings, preview, and readjust. For example, it might miss a small interlaced overlay, or it might be too aggressive and degrade a progressive section.
    I'll take a look at that again after this post, I have definitely read that doc for fieldmatch. There's also definitely some reason why I haven't used it.

    It's less ideal with ffmpeg. It's easier with avisynth/avspmod or vapoursynth/vsedit, where you can preview more easily and adjust to find the proper settings. Yes, you can use ffplay or mpv to preview an ffmpeg pipe, but it's not as easy: scrubbing is less straightforward, and it's more difficult to make adjustments and get feedback.
    I don't know... it works ok. And it should work ok for things like this, for example if the field order was specified wrong, obviously you would immediately see that. I have used AviSynth in the past (many, many years ago) but truth be told that's really needlessly complicated for what I'm doing. When I have 20, 30 videos that I simply need to re-encode, I don't have time to figure out and write what is essentially a small computer program, meaning the script - and I don't have time to wait for everything to process as I know it will be slow. (does anyone else still remember the times when you were warned that you're going to need a FAST computer to be able to view videos with yadif in real-time? (Sub-)Standard definition videos? And if you wanted to do ANY more complex filtering in Avisynth or whatever - you would "preview" it at maybe 3 frames per second? Yeah.)

    Also, for those interlaced-content parts, going to 25p discards half the information (single-rate deinterlace). Interlaced content would properly be displayed at 50p, because interlaced content represents 50 moments in time per second.
    That is correct. In this particular use case, that is irrelevant - I simply do not care about preserving those parts or at least preserving them that WELL (linear blend or some such would be fine for those parts). I'm looking for something that would restore the full progressive frames from the telecined parts - something that would do it quickly.

    Like I said, just bwdif alone DOES do a good job basically performing just "ivtc", but it is PAINFULLY slow.

    cheers
  10. Member
    Join Date
    Nov 2019
    Location
    Europe
    Originally Posted by poisondeathray View Post
    For me, bwdif is faster than even yadif (yadif was known as one of the fastest deinterlacers). bwdif is about 10-20x faster than QTGMC's default preset ("Slower"). QTGMC is much, much slower than bwdif or yadif, even at the faster settings. But the quality is better. You decide what tradeoffs you want to make.

    It might be that bwdif relies on some SIMD instructions your CPU does not have, which yadif does not take advantage of. bwdif seems to be faster for most people. Maybe you have an older CPU?
    Hmm... that is very interesting. I actually don't know WHY it is so slow; as a matter of fact, it only seems to be slow, or slowER, when it's in the chain in the ffmpeg command line with all the other stuff as well. I don't recall if yadif is any faster either; I'll have to check that again. Mostly it's just that ANY filter causes the encode speed to dive.

    I have a 10th gen intel, so I'm very sure bwdif as it is could process HD video at well over 10x realtime. But within ffmpeg it causes the encode speed to dive. If the bottleneck was something else like the encoder itself, there wouldn't be such a large difference in just having "-vf ..." in the command.

    You might get faster results using GPU deinterlacing, such as yadif_cuda. On many systems it's actually significantly slower because of the memory transfers, but it could free up a few CPU cycles, and since x265 is your bottleneck it might end up being faster overall.
    (off-topic)
    For my NVENC experiments, I'd like to try this... unfortunately although GPU decode and encode actually indeed DO work completely in graphics memory (see https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/ ) ... the filters do not work. Everything else cuda/cuvid/nvenc related works, but the deint or scale filters unfortunately do not.
  11. Originally Posted by non-vol View Post
    Originally Posted by poisondeathray View Post
    The purpose is to not degrade the full progressive frames. When you deinterlace a progressive-content frame, you essentially cut the vertical resolution in half.
    That is if you do field discarding/doubling... right? Or if you do something really wrong. Other methods of "de-interlacing" (a catch-all term afaik) that I know of include simple bobbing, blending, and things such as yadif, bwdif and even more advanced techniques that use some form of very compute-intensive data analysis to identify which parts of frames exhibit combing.
    Yes - the way "deinterlacer" is being used in this context - is interpolation of missing scan lines using some algorithm.

    A deinterlacer applied to progressive content basically throws away one of the fields of the pair and tries to reconstruct the missing field. Essentially you're cutting the spatial resolution in half. For interlaced content you start with missing fields in the first place, so applying a "deinterlacer" is no problem there. For progressive content, deinterlacing degrades the image. The higher the quality of the starting content, the more visible the degradation. Also, the finer the lines, and with certain patterns and angles (straight lines), the more deinterlacing artifacts (jaggy lines, aliasing) are produced by typical algorithms (you discarded half the information; something has to give).


    In this case that's "wasted work". I already know the source is progressive. Take the odd lines from one field, the even ones from the second, interleave them and that's it, that is the full resolution frame. There's no need to analyse the picture at all, since the pattern is exactly known beforehand.
    Sure, if you have 100% progressive content, then you didn't need to start the thread. Do nothing. You're done.

    But it can be field-shifted (misaligned fields); content can be broadcast like that too. If you just do what you said, it would look combed. You have to re-align, and that's part of what field matching can do.

    Also, the analysis is to see which parts have interlaced content, which is why I presume you started this thread. So progressive parts are weaved, interlaced parts are filtered with bwdif (or whatever)

    The sources clearly have both progressive and true interlaced parts within them. Viewing unfiltered, this is visible as combing in some parts, but not others. Sort of like watching a movie where the movie itself is progressive (BR or DVD rips for all I know), but some stuff in commercial breaks is not, and sometimes - subtitles are interlaced and present as combed.
    ^ so you don't have 100% progressive content, and/or it could be field shifted in some parts

    Ideally, progressive content gets weaved with aligned fields (you're calling it "interleaved"), interlaced content gets deinterlaced double rate, so that becomes VFR. Everything plays how it was intended. If you watch a UK drama, it gets displayed at 25p, but the commercial of football (soccer) match gets double rate deinterlaced to 50p by your TV. Motion is smooth for that commercial.


    I'm looking for something that would restore the full progressive frames from the telecined parts - something that would do it quickly.

    Like I said, just bwdif alone DOES do a good job basically performing just "ivtc", but it is PAINFULLY slow.
    There are many who would say you are butchering the progressive parts with bwdif... just saying. Maybe you don't care or don't notice the problems. It does partly depend on what you started with, and the type and quality of the content.

    You decide where you want to make tradeoffs: speed, quality, ease of use, etc.

    I've never heard anybody say bwdif is slow. You're the 1st.




    Hmm... that is very interesting. I actually don't know WHY it is so slow; as a matter of fact, it only seems to be slow, or slowER, when it's in the chain in the ffmpeg command line with all the other stuff as well. I don't recall if yadif is any faster either; I'll have to check that again. Mostly it's just that ANY filter causes the encode speed to dive.

    I have a 10th gen intel, so I'm very sure bwdif as it is could process HD video at well over 10x realtime. But within ffmpeg it causes the encode speed to dive. If the bottleneck was something else like the encoder itself, there wouldn't be such a large difference in just having "-vf ..." in the command.

    "Dive" - Do you mean immediately dive, or gradually over time ... like a memory leak ?

    What are your approx speeds with/without -vf bwdif ? or yadif? Just ballpark. ie. encode no filter vs. encode + bwdif (or yadif)

    You don't get much faster deinterlacing than those. Yes, many times realtime speed. Just "bob" is slightly faster in terms of deinterlacer speed.
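If it helps, a hedged way to isolate the filter cost from the x265 cost ("input.ts" is a placeholder): time a decode+filter pass with the null muxer, with and without the filter, and compare.

```shell
# Decode + filter only, no real encode; -benchmark prints timing stats.
ffmpeg -benchmark -i input.ts -an -vf bwdif=mode=0 -f null -

# Baseline: same command without -vf, then compare the wall-clock times.
ffmpeg -benchmark -i input.ts -an -f null -
```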


    You might get faster results using GPU deinterlacing, such as yadif_cuda. On many systems it's actually significantly slower because of the memory transfers, but it could free up a few CPU cycles, and since x265 is your bottleneck it might end up being faster overall.
    (off-topic)
    For my NVENC experiments, I'd like to try this... unfortunately although GPU decode and encode actually indeed DO work completely in graphics memory (see https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/ ) ... the filters do not work. Everything else cuda/cuvid/nvenc related works, but the deint or scale filters unfortunately do not.

    No doubt a full GPU decode/filter/encode pipeline will be the fastest; NVENC HEVC encoding alone is so much faster than x265. I was suggesting some GPU filters combined with "CPU" encoding, to perhaps speed up parts of the pipeline and utilize slack resources more efficiently.

    You can do combinations if you didn't like "GPU encoding" and wanted CPU encoding. E.g. DGDecNV/DGSource for avs/vpy has the PureVideo deinterlacer option built in; the quality is about the same as yadif. So GPU decode, GPU deinterlace => avs or vpy. That frees up a few CPU% and gives faster x265 encoding. Even GPU decoding alone frees up a few % that would have been wasted on CPU decoding for HD, UHD etc., and it's very robust, frame-accurate and indexed (and free now, too). Or there's DGBob, which is exactly yadif_cuda in avs/vpy. Either way you are mixing GPU/CPU transfers (which will be slower compared to full GPU).
  12. Member
    Join Date
    Nov 2019
    Location
    Europe
    again, one very quick correction: this is telecined content I'm talking about. Progressive content, possibly from a progressive digital intermediate made from old Betacam tapes or some such, but in the case of films: most probably ripped from DVD's or blu-rays. There is no "missing" information in the source. It is progressive content broadcast as an interlaced stream.

    Also again, the way you use the term "deinterlace" is absolutely foreign to me. As I understand it, the most common "deinterlacers" are very well aware that the FRAME information in interlaced streams is simply distributed across the two half-resolution fields. Even linear blend works like this afaik - it blends odd rows from one frame with even rows from another. It's not AT ALL the same as discarding and line doubling.

    Simply DISCARDING odd or even fields is one very specific way of getting rid of interlace combing. By no means does "de-interlacing" automatically mean "throwing information away". Yes this is semantics - extremely important semantics. You are very free to correct me in case I'm wrong, this is simply the impression I've always had. Inverse telecine in my mind IS one method of "de-interlacing" - the end result is progressive video with no combing visible.

    Something wrong with this impression of mine? It could be simply a different interpretation of "deinterlace" that we have, but I'm not sure anymore. One thing I can assure you of is that I know what interlaced video means, I know how stuff used to be broadcast for viewing on 50 Hz analog CRT tv's, how the interlacing works because of that precise 50 Hz flickering, etc. In PAL world, films have always been telecined "by default" - assuming 25 FRAMES per second, the display simply flickered between showing "odd" rows from one field and "even" rows from the other field. (*)

    Please don't take me wrong. I'm aware you know quite a lot about video, I would assume much more so than me. Where I'm lost right now is that we seem to be talking about different things!

    (*) and yes, I know that to get films to 25 fps, or to "50 Hz", the content is sped up accordingly because that was by far the easiest way and (unfortunately) it has stuck all the way to the present day.

    cheers
  13. Member
    Join Date
    Nov 2019
    Location
    Europe
    Originally Posted by poisondeathray View Post
    In this case that's "wasted work". I already know the source is progressive. Take the odd lines from one field, the even ones from the second, interleave them and that's it, that is the full resolution frame. There's no need to analyse the picture at all, since the pattern is exactly known beforehand.
    Sure, if you have 100% progressive content, then you didn't need to start the thread. Do nothing. You're done.
    No. This is obviously not the case, and this is not what I meant. See the following.

    ... the analysis is to see which parts have interlaced content, which is why I presume you started this thread.
    ...
    No. No. All the while I was referring to the "progressive source content", I meant the content that has been telecined. As I'm sure you agree, in telecined content the original progressive source content is present and intact (in principle).

    I started this thread with the intention of finding out how to de-telecine PAL material. We don't really need to discuss "deinterlacing" that much at all if it creates confusion instead of clearing it.
  14. Captures & Restoration lollo's Avatar
    Join Date
    Jul 2018
    Location
    Italy
    I started this thread with the intention of finding out how to de-telecine PAL material
    The attached document may help on what to do.

    When I meet real interlaced sequences (fields are from different moments in time, i.e. live studio programs) mixed with "progressive" sequences (fields are from the same moment in time, i.e. telecine with PAL speed-up), I do:
    1- losslessly deinterlace the first, then process; do nothing to the second, then process; interlace the first back and merge
    or
    2- deinterlace the first, then process; do nothing to the second, then process; double the frame rate of the second, then merge
    Attached: Exotic Interlacing (English).pdf

  15. Member
    Join Date
    Nov 2019
    Location
    Europe
    That's interesting! Ist dat Maschine-Translated from se originalen German Dokument?

    I'm not sure Avisynth (or similar) is necessary or, indeed, useful, because performance IS a consideration. Looking at the documentation, the FFmpeg filters fieldmatch and decimate can theoretically perform the needed procedure. However, even fieldmatch seems to be too complicated for this, since based on what the docs say, it necessarily has additional logic to "mark" this and that as interlaced or progressive, and none of that would be needed here. From the beginning, one concern has been that fieldmatch+decimate would in fact not be any faster than, for example, yadif or bwdif (whether they reduce quality or not). What I want is an extremely simple filter that would simply merge (weave, interleave, whatever term you like to use) adjacent fields together without any additional considerations for anything.

    I guess I'll have to try and see.

    P. S. Regarding bwdif degrading quality... wouldn't it pretty much be assumed that, in the case of a source that happens to actually be progressive in the first place, bwdif would "see" exactly that, and would detect and reconstruct the progressive frames? I can imagine it could falsely detect the wrong fields as belonging to the same frame or something... but I don't know. In any case, a true de-telecine or IVTC filter would obviously be the best choice, instead of some other, more computationally intensive deinterlacer.

    P. P. S. Also, pp=lb is in fact very fast (no unnecessary logic...) and in this case works reasonably well, as it should in theory; an unfortunate side effect is the small amount of blur it causes.
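For what it's worth, a minimal sketch of the "pure field match" idea for 2:2 PAL material (no decimation needed, since the frame count doesn't change; the filenames and the TFF assumption are placeholders to verify against the actual source):

```shell
# Field matching only: re-pairs misaligned fields, leaves clean
# progressive frames untouched, no deinterlacing fallback at all.
ffmpeg -i input.ts -vf "fieldmatch=order=tff:combmatch=full" \
       -c:v libx265 -c:a copy output.mkv
```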
  16. Member
    Join Date
    Nov 2019
    Location
    Europe
    P. P. P. S. BTW, this is all pretty heavy for a "newbie" discussion. If some guy walked up to me on the street and asked if I knew how to encode videos, I probably wouldn't say he needs to learn how to read/write AviSynth scripts. Most newbies, most people, don't know a fraction of a bit about how to do these sorts of things on computers.
  17. Again (one trial): see 2.1.2 in the translated German document. That's what your source is: a mix of field-shifted and field-correct frames, but all progressive.
    Phase-shifted aka field-delayed PAL is 100% (99.9%) progressive if you just re-shift the fields. Every field matcher will do this almost perfectly, and you will have NO loss, as opposed to any deinterlacer. It will also leave each progressive part alone. So the first proposal of poisondeathray is 100% correct.
  18. Member
    Join Date
    Nov 2019
    Location
    Europe
    Originally Posted by Quint View Post
    Again (one trial): See 2.1.2 in the translated German document, that's what your source is, a mix between field-shifted and field-correct frames, but all progressive.
    I have many sources like this, but yes, this is the phenomenon we are concerned with.

    Phase-shifted aka field-delayed PAL is 100% (99,9%) progressive, if you just re-shift the fields. Every field-matcher will do this almost perfectly, and you will have NO loss in opposite to any de-interlacer. Also it will leave each progressive part alone. So the first proposal of poisondeathray is 100% correct.
    As far as I'm aware, there has been no dispute with poisondeathray regarding whether something is "correct" or not. I have not finished running all the tests I'd like yet, but it should be pointed out that pdr's first proposal 1. uses bwdif in addition to fieldmatch and 2. outputs 50p, not 25p.

    However, I actually came back here to hopefully clear some confusion for anyone possibly reading this. I quote from the AviSynth wiki (http://avisynth.nl/index.php/Yadif) :

    ... check pixels of previous, current and next frames to re-create the missed field ...
    Based on that, "de-interlacing" cannot be said to equal "reducing resolution in half". I believe bwdif is a variant of yadif and works similarly. In practice, this type of de-interlacer actually performs "field matching".

    Much of the discussion has been about the performance aspect. I think I mentioned that I believe those filters are more complex than needed for this task. Indeed, based on that description, at least yadif performs image analysis and thus very likely "sees", i.e. detects, which parts of which fields belong to which frame. This is VERY different from ONLY taking odd and even rows of pixels serially in exactly the same pattern and merging them together to form frames, which I believe would be significantly faster.

    It's also very difficult to tell a visual difference between bwdif only (25p) and the fieldmatch+bwdif (50p) combination.

    You could also say there is not only one "correct". It's really: whatever works.
  19. Originally Posted by non-vol View Post
    Inverse telecine in my mind IS one method of "de-interlacing" - the end result is progressive video with no combing visible.
    The way it's used here, deinterlacing and IVTC are 2 different things.

    Deinterlacing was described earlier

    IVTC (inverse telecine) is field matching +/- decimation +/- comb fallback. Completely different from "deinterlacing", the way "deinterlacing" is used here or in ffmpeg.

    Fieldmatching is the 1st step for all IVTC cases, NTSC or PAL. It's the 1st step in reversing the telecine to recover the original progressive frames.





    No. No. All the while I was referring to the "progressive source content", I meant the content that has been telecined. As I'm sure you agree, in telecined content the original progressive source content is present and intact (in principle).

    I started this thread with the intention of finding out how to de-telecine PAL material. We don't really need to discuss "deinterlacing" that much at all if it creates confusion instead of clearing it.
    As explained earlier, that is what field matching does. Note the distinction from deinterlacing.

    Field matching is rearranging the fields. Think of it as putting them back in their proper places, if they were rearranged in the telecine process. If they were already aligned, then that's fine too.

    Comb fallback is for after field matching, when there is still combing left over. This occurs when fields are still misaligned temporally (field matching is not always 100% accurate and can make mistakes on tricky sources; some sources are "noisy" or otherwise problematic, interfering with the matching), or spatially (sometimes there is "wiggle" in old telecine), or orphaned (e.g. a broadcast edit that cuts a field off, resulting in field shifting), or when there is interlaced content (like some commercials). All these situations result in combing after field matching. The fallback uses "deinterlacing" to address those problems, and only where combing is detected, so as not to degrade the "good" frames.

    Interlaced content is things like some types of graphical overlays, the lower thirds you see on news tickers, many commercials, sports. They can be interspersed in broadcasts. If you examine individual fields, there are 50 moments in time represented per second, not 25.



    The sources clearly have both progressive and true interlaced parts within them. Viewing unfiltered, this is visible as combing in some parts, but not others. Sort of like watching a movie where the movie itself is progressive (BR or DVD rips for all I know), but some stuff in commercial breaks is not, and sometimes - subtitles are interlaced and present as combed
    "true interlaced parts" suggests interlaced content to me so maybe this is the disconnect in language . Regardless, it's still the same procedure

    So what is the cause of your combing? Is it 100% only field shifting? Is it 100% progressive content, just with some fields misaligned? If so, field matching alone is your answer. If you have x% that is interlaced content, after field matching you will get x% residual combing; that is what the comb fallback and "deinterlacing" are for, to address the residual combing caused by the mechanisms listed earlier.



    However, even fieldmatch seems to be too complicated for this, since based on what the docs say, it necessarily has additional logic to "mark" this and that as interlaced or progressive, and none of that would be needed here. From the beginning, one concern has been that fieldmatch+decimate would in fact not be any faster than for example yadif or bwdif (may they reduce quality or not).
    Fieldmatch is what you need, unless you see 100% no combing. Field matching is the first step of IVTC in all cases.

    Otherwise - if the source is 100% progressive, 100% aligned, and shows no combing to start with - you would see no combing and you do nothing.



    An extremely simple filter that would simply merge (weave, interleave, whatever term you like to use) from adjacent fields together without any additional considerations for anything..
    This is field matching

    normal progressive
    Aa Bb Cc

    field shifted. It looks "combed". It's still progressive content, but the fields are misaligned
    aB bC cD
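
    The letter notation above can be turned into a toy Python sketch - purely an illustration of the re-pairing idea, not a real video filter. Capital letters are top fields, lowercase letters are the bottom fields of the same moment in time:

```python
# Toy model: a progressive frame is a (top_field, bottom_field) pair,
# written "Aa", "Bb", ... as in the notation above.
progressive = [("A", "a"), ("B", "b"), ("C", "c"), ("D", "d")]

# Field-shifted stream "aB bC cD": each stored frame weaves the bottom
# field of one moment with the top field of the next, so it looks combed
# even though every field is untouched progressive material.
shifted = [(progressive[i + 1][0], progressive[i][1])
           for i in range(len(progressive) - 1)]

# Field matching re-pairs each top field with the bottom field from the
# same moment in time - no interpolation, no resolution loss.
same_moment_bottom = {top: bottom for top, bottom in progressive}
matched = [(top, same_moment_bottom[top]) for top, _ in shifted]

assert matched == [("B", "b"), ("C", "c"), ("D", "d")]   # Bb Cc Dd recovered
```

    The point of the sketch: nothing is interpolated or thrown away, the fields are only put back next to their original partners.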




    P. S. Regarding bwdif degrading quality... would it not pretty much be assumed that, in the case of a source that is actually progressive in the first place, bwdif would "see" exactly that - it would detect and reconstruct the progressive frames? I can imagine it could falsely detect the wrong fields as belonging to the same frame or something... but I don't know. In any case, a true de-telecine or IVTC filter would obviously be the best choice, instead of some other, more computationally intensive deinterlacer.
    No, bwdif is a "deinterlacer".

    You're describing field matching with combfallback using bwdif (or xyz deinterlacer)

    Field matching +/- decimation +/- comb fallback is IVTC. It's reversing the telecine and addressing any residual comb problems

    Full deinterlacing blindly applied everywhere is not IVTC. It degrades everything. Notice the deint option in yadif and bwdif:
    deint
    Specify which frames to deinterlace. Accepts one of the following values:

    0, all
    Deinterlace all frames.

    1, interlaced
    Only deinterlace frames marked as interlaced.

    The deint=interlaced mode does not "blindly" deinterlace all frames. Fieldmatch passes its analysis data to yadif or bwdif, so the deinterlacing fallback is applied selectively and the progressive frames are not damaged.
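
    The selective-fallback flow can be sketched as toy Python (the three stand-in functions are placeholders for illustration only; fieldmatch and yadif/bwdif obviously do far more):

```python
# Toy sketch of IVTC with comb fallback: field matching first, then
# deinterlacing applied ONLY to frames that still show combing.

def ivtc(frames):
    out = []
    for frame in frames:
        frame = field_match(frame)       # step 1: re-pair the fields
        if is_combed(frame):             # step 2: residual combing left?
            frame = deinterlace(frame)   # fallback, for this frame only
        out.append(frame)                # good frames pass through untouched
    return out

def field_match(frame):
    # Field shifting is fully repaired by matching; nothing else changes.
    return "progressive" if frame == "field-shifted" else frame

def is_combed(frame):
    # Genuinely interlaced content still combs after matching.
    return frame == "interlaced content"

def deinterlace(frame):
    return frame + " (deinterlaced)"

result = ivtc(["progressive", "field-shifted", "interlaced content"])
# Only the genuinely interlaced frame gets deinterlaced; the progressive
# frames, including the field-shifted one, keep their full resolution.
```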




    I think something is wrong: in ffmpeg, fieldmatch with yadif results in 25p, but fieldmatch with bwdif does not (it gives 50p). It should be 25p for the latter too - I think there is a bug somewhere. If you add fps=25 it should work.

    EDIT: the default mode in ffmpeg for bwdif is send_field, but for yadif it is send_frame. So set bwdif to mode=0.
    Last edited by poisondeathray; 18th Jan 2022 at 10:11.
  20. Originally Posted by non-vol View Post
    You could also say there is not only one "correct". It's really: whatever works.
    Of course. But "what works" in this case should mean not using something lossy where 99.9% could be restored without loss. In AviSynth you may also do the corrections manually, for each portion separately, to avoid any wrong match. That would be even more "correct". Forgive me if the term "correct" did not express what I wanted to say - I am not a native English speaker.
    I have also worked with a lot of these sources, very common in PAL (I am from a PAL country), where parts of a source are field-shifted. This has nothing to do with telecining: there are no duplicate fields or frames that have to be decimated after field matching, so no "inverse telecine" is necessary.
    There are, however, cases where only the location shots were shot on film (later sped up to 25fps) and the interior shots were taken with PAL TV cameras. Those scenes are then truly interlaced, but it's not very common.
  21. If the video uses 2:2:2:2:2:2:2:2:2:2:2:3 pulldown it will alternate between 12 progressive frames then 13 interlaced frames (or vice versa). That can be restored to the original 24 fps film frames with field matching followed by decimation (remove 1 duplicate frame out of every 25). In ffmpeg:

    Code:
    -vf fieldmatch=combmatch=full,decimate=cycle=25
    I rarely see this though. Much more common is a simple phase shift. I.e. tff broadcast captured as bff, or vice versa (all frames show comb artifacts when there's motion). That's fixed by a simple

    Code:
    -vf fieldmatch=combmatch=full
    Then you have more complex issues like 25p video overlaid with 25i titles, effects, etc.
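    The 2:2:2:2:2:2:2:2:2:2:2:3 cycle described above can be verified with a toy Python sketch - just counting fields, no actual video processing. Numbers stand for film frame ids:

```python
# Euro pulldown: 2 fields per film frame, plus a 3rd field on every 12th
# frame, so 24 film frames become 50 fields = 25 interlaced frames.
pattern = [2] * 11 + [3]                 # one cycle of the pulldown

fields = []
for i in range(24):                      # one second of 24 fps film
    fields.extend([i] * pattern[i % 12])
assert len(fields) == 50                 # 25 frames' worth of fields at 50i

# Weave consecutive field pairs into stored frames; a frame whose two
# fields come from different film frames shows combing.
frames = [(fields[2 * k], fields[2 * k + 1]) for k in range(25)]
combed = [k for k, (top, bot) in enumerate(frames) if top != bot]

# The extra field shifts all later pairings by one, so a run of combed
# frames appears until the next extra field realigns things: here frames
# 0-11 and 24 are clean (13 frames) and frames 12-23 comb (12 frames).
assert combed == list(range(12, 24))

# Field matching re-pairs those fields; decimation (decimate=cycle=25)
# then drops the one duplicate film frame left in every 25-frame cycle,
# restoring the original 24 fps.
```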
  22. Originally Posted by jagabo View Post
    If the video uses 2:2:2:2:2:2:2:2:2:2:2:3 pulldown it will alternate between 12 progressive frames then 13 interlaced frames (or vice versa). That can be restored to the original 24 fps film frames with field matching followed by decimation (remove 1 duplicate frame out of every 25). In ffmpeg:

    Code:
    -vf fieldmatch=combmatch=full,decimate=cycle=25
    I rarely see this though.
    Yes, example of this "Euro-pulldown" here. It has sometimes been used when the sound was of particular importance, to avoid pitch-shifting or pitch-correcting the original 24fps soundtrack.
    Image Attached Files
    Last edited by Sharc; 18th Jan 2022 at 13:19.
  23. Originally Posted by Sharc View Post
    Yes, example for this "Euro-pulldown" here. It has sometimes been used when the sound was of particular importance, avoiding pitch-shift or pitch-correction of the original 24fps sound track.
    Not only. At the beginning of the nineties they used it quite often, especially with cartoons, but also in a lot of "cheaper" live-action series, often with blendings, and sometimes with nasty, changing patterns.
    But in this case there are progressive frames, so it's an original PAL source, very probably with shifted fields in some scenes. No NTSC-to-PAL conversion.

    Edit: I just remembered: Not only in cheaper series, they also used it in Star Trek TNG when aired in ZDF (Germany).
    Last edited by Quint; 18th Jan 2022 at 16:00.
    There's something odd about the video in post #22. Or maybe it's in the way LSMASH interprets it. But the combed frames seem to have their fields in the wrong spatial order. You can see this at the border between the active picture and the letterbox bars: all the frames with combing show a normal line, followed by a black line, then a normal line again. So the combed frames need a SwapFields() before the field match. In AviSynth:

    Code:
    LSMASHVideoSource("24p to 25 euro-pulldown.mp4") 
    AssumeTFF()
    
    ConditionalFilter(last, SwapFields().AssumeBFF(), last, "IsCombedTIVTC()")
    
    TFM()
    TDecimate(Cycle=25, CycleR=1)
    I don't know how to do this in ffmpeg.

    Also the ffmpeg filter sequences I gave in post #21 aren't quite right. Something like this works better:

    Code:
    -vf fieldmatch=order=tff:mode=pc,decimate=cycle=25
  25. Originally Posted by non-vol View Post

    However, I actually came back here to hopefully clear some confusion for anyone possibly reading this. I quote from the AviSynth wiki (http://avisynth.nl/index.php/Yadif) :

    ... check pixels of previous, current and next frames to re-create the missed field ...
    Based on that, "de-interlacing" cannot be said to equal "reducing resolution in half". I believe bwdif is a variant of yadif and works similarly. In practice, this type of de-interlacer actually performs "field matching".
    You're misinterpreting what was posted. Yadif attempts to recreate the missing field. It's not "missing" in progressive content; you cause it to go missing when you deinterlace.

    It is about half resolution when deinterlacing progressive content. The "missed field" should be a clue: progressive content doesn't have a "missed field".

    Deinterlacing is not "field matching" .


    To clarify :

    I wrote
    A deinterlacer applied to progressive content basically throws away 1 of the fields of the pair, and tries to reconstruct the missing field . Essentially you're cutting the spatial resolution in half .
    It's essentially a true statement. You start with a single field, and some algorithms like bwdif interpolate based on spatial and temporal checks in an attempt to reconstruct the dropped scanlines. Sometimes a deinterlacing algorithm reduces the resolution by more than half, sometimes less - around half on average. The reduction in resolution is measurable on resolution test charts and test patterns such as angles, wedges, trumpets, zone plates, etc. (with motion, if you're testing motion-adaptive behaviour). The quality loss is measurable with metrics. The end result is always lower resolution and lower quality compared to field matching on progressive content (unless there is no motion, for those deinterlacing algorithms that can weave when nothing moves). This is why there is a general rule to "never deinterlace progressive content". (There are some exceptions with very bad sources that can benefit from some types of deinterlacers, like QTGMC, and post-processing.)

    Here is an excerpt from a test chart. It's an APNG; it should animate in most browsers.



    The full vertical resolution is recovered by field matching, while it's about half for bwdif and yadif (yadif, you could argue, is slightly worse). Both deinterlacers worsen the diagonals and cause ringing/edge enhancement on the plus pattern (bwdif is worse for the edge enhancement).

    fieldmatch PSNR 98.965271 dB
    bwdif PSNR 28.689996 dB
    yadif PSNR 28.157285 dB

    Weaving 2 fields that belong to the same frame forms the full-quality progressive frame - this is essentially field matching (the searching for, and combining of, matching fields of the same frame when they are misaligned).

    BUT deinterlacers such as bwdif and yadif NEVER weave 2 fields belonging to the same frame cleanly in motion. The defining characteristic of all "deinterlacers" (the way they are defined here and on many video forums) is that they use some algorithm to interpolate the missing field (the one that is dropped for progressive content). Some look at adjacent fields farther than 1 away to fill in the information (temporal). Field matching is not deinterlacing (the way "deinterlacing" is defined here): field matching recovers basically all (~99%) of the actual resolution, because both original fields are matched, but deinterlacing never does. Deinterlacing is generally closer to ~50%.

    Yadif and bwdif have spatial and temporal (motion) checks. If you have no motion - repeated frames, a static image - bwdif actually does well: it weaves the fields, as it should. The "w" in bwdif is for weave ("Bob Weave"). That's what a proper motion-adaptive deinterlacer is supposed to do, and it was one of the selling points of "interlace" in the first place: full progressive resolution when there is no motion. The amount or magnitude of motion and spatial differences triggers more areas to be deinterlaced, so more artifacts appear in high-motion scenes than in low-motion ones. It's true that the interpolation used by yadif and bwdif is smarter than just discarding a field, but it's still worse than field matching in the context of progressive content.
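
    The weave-or-interpolate decision can be sketched as toy Python. Single numbers stand in for whole scanlines; this only illustrates the motion-adaptive idea, not yadif's or bwdif's actual algorithms:

```python
# Rebuild the scanlines missing from one field: weave the real pixels
# from the temporally adjacent opposite-parity fields where they agree
# (no motion -> full resolution), fall back to spatial interpolation
# where they differ (motion -> resolution loss confined to those areas).

def rebuild_missing_lines(current_field, prev_opposite, next_opposite):
    out = []
    for i, (prev_px, next_px) in enumerate(zip(prev_opposite, next_opposite)):
        if prev_px == next_px:            # static area: weave losslessly
            out.append(prev_px)
        else:                             # motion: crude spatial average
            above = current_field[i]
            below = current_field[min(i + 1, len(current_field) - 1)]
            out.append((above + below) / 2)
    return out

# Static scene: the original field is recovered exactly (pure weave).
assert rebuild_missing_lines([10, 20], [15, 25], [15, 25]) == [15, 25]
# Motion on the first line: only that line is interpolated, the rest woven.
assert rebuild_missing_lines([10, 20], [15, 25], [99, 25]) == [15.0, 25]
```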

    But as mentioned earlier, whether or not you can "see" the difference depends on your eyes and the type of content. When you deinterlace progressive content, lower frequencies and coarse details are less adversely affected than higher-frequency, finer details. The higher the quality and the finer the details of the source (higher effective resolution), the more likely you'll see the problems when deinterlacing progressive content. On the test chart excerpt, if you look at the lines <500, they are barely affected - just some aliasing along diagonals. Many "HD" sources are low quality and soft to begin with, nowhere close to 1080 lines of actual effective resolution; they might be closer to 600 or 700 lines. For those, you might not notice as much degradation. Content matters too: lines, angles, and diagonals - fences, buildings - will generate aliased, flickering deinterlacing artifacts in motion, i.e. motion artifacts and buzzing lines. Test charts usually have high-contrast edges, but source content might be more organic and hide the deinterlacing artifacts. Certain angles might be handled better or worse by certain algorithms. Many threads posted on this forum deal with handling the artifacts created by deinterlacing progressive content.

    I mentioned this earlier, but there can be problems with the post-field-match comb detection/deinterlacing: if a frame is wrongly detected according to the thresholds and settings used, a true progressive frame might be inadvertently deinterlaced and degraded. So you might have to adjust the settings to suit your source, or use overrides (difficult to do in ffmpeg directly). If your source is clean and only field-shifted (none of the earlier-mentioned mechanisms for "combing"), you can safely disable the comb detection and selective deint post-processing and use field matching only.

    The post-field-match method discussed applies to whole frames. But there are more selective methods, such as comb masking: instead of the entire frame being put through the deinterlacer (when certain thresholds are met), only the portions of the frame that show combing are processed, limiting the damage even further. You might do that when there are interlaced overlays, text, lower thirds, or credits "on top of" or beside progressive material (e.g. a split-screen credit roll).
    Last edited by poisondeathray; 23rd Jan 2022 at 09:43.
  26. Originally Posted by jagabo View Post
    There's something odd about the video in post #22. Or maybe it's in the way LSMASH interprets it. But the combed frames seem to have their fields in the wrong spatial order. You can see this at the border between the active picture and the letterbox bars. All the frames with combing show a normal line, followed by a black line, then a normal line again.....
    Yes, I messed the spatial field order up, sorry. Thanks for pointing it out.
    This version should be correct.
    The problem with such telecined film is that (most?) TVs will not properly IVTC it (= fieldmatch + decimate) but just apply bob deinterlacing, causing damage to the progressive frames as explained before.
    Image Attached Files
    Last edited by Sharc; 19th Jan 2022 at 07:45.
  27. Yes, the new video doesn't have the swapped field problem. With AviSynth:

    Code:
    LSMASHVideoSource("24p to 25 euro-pulldown-tff.mp4") 
    AssumeTFF()
    TFM()
    TDecimate(Cycle=25, CycleR=1)
    with ffmpeg:

    Code:
    ffmpeg -y -i "24p to 25 euro-pulldown-tff.mp4" -vf fieldmatch=combmatch=full:order=tff,decimate=cycle=25 -c:v libx264 -preset veryfast -crf 20 -c:a copy 24p.mkv
    or faster (simpler field match):

    Code:
    ffmpeg -y -i "24p to 25 euro-pulldown-tff.mp4" -vf fieldmatch=order=tff:mode=pc,decimate=cycle=25 -c:v libx264 -preset veryfast -crf 20 -c:a copy 24p.mkv
    Restored to 24p version attached (last ffmpeg command, except preset=slow):
    Image Attached Files
    Last edited by jagabo; 19th Jan 2022 at 08:09. Reason: added sample
  28. Member
    Join Date
    Nov 2019
    Location
    Europe
    Search PM
    Originally Posted by Quint View Post
    Originally Posted by non-vol View Post
    You could also say there is not only one "correct". It's really: whatever works.
    Of course. But "What works" in this case should be not to use something lossy where 99.9% could be restored without loss. In avisynth you may also do the corrections manually, for each portion separately to avoid any wrong match. That would be even more "correct". Forgive if the term "correct" did not express what I wanted to say with it - not native English speaker.
    Absolutely no need to apologize! I mentioned this in an earlier post - I was indeed wondering whether we (all of us) were needlessly arguing about semantics, specifically whether we were actually talking about the exact same things but using different terms. I mean, if you take the literal dictionary definitions of "inter", "lacing", "leaving", "weaving", etc., there's plenty of room for simple misunderstandings. We're using highly specific technical/engineering terms that may or may not be literally standardised.

    I have also worked with a lot of these sources, very common in PAL (I am from a PAL country), where parts of a source are field-shifted. This has nothing to do with telecining: there are no duplicate fields or frames that have to be decimated after field matching, so no "inverse telecine" is necessary.
    There are, however, cases where only the location shots were shot on film (later sped up to 25fps) and the interior shots were taken with PAL TV cameras. Those scenes are then truly interlaced, but it's not very common.
    Hmm... interesting. I'm not sure but it's possible I also have content like this, at least I have heard of such practice. This could be higher-profile (film is expensive) European stuff intended for PAL broadcast.

    However, I've got the impression that in these cases the film was also shot at 25 fps, exactly because it was designed to end up being... PAL. Not sure though, and this can be tricky to find out. I know some older European TV series were shot entirely on film, just like in the US, but likewise I've at least heard that the film was shot at 25 fps. Why would they have shot the material at 24 fps, if they knew in advance that it would then be sped up? I'd imagine that'd have complicated the audio part quite a bit... but who knows.
  29. Member
    Join Date
    Nov 2019
    Location
    Europe
    Search PM
    Originally Posted by poisondeathray View Post
    ...

    It's true that the interpolation used by yadif and bwdif are smarter than just discarding a field but it's still worse than field matching in the context of progressive content.
    First of all, thanks for your detailed post(s). Again, I have not by any means intended to challenge you on anything. I'm sure what you've said is 'correct'.

    I'm quoting the specific part above because it shows that we do basically see eye to eye here. "Interpolation" is also a very general term and easily misunderstood; I would guess not many would actually understand it to mean anything close to as complex as whatever yadif does. If you specifically say "temporal interpolation", then - sure, I guess.

    But as mentioned earlier, whether or not you can "see" the difference depends on your eyes and the type of content. [...]
    For some reason I cannot see the test video (I'll try another browser maybe)

    I do understand the concept of interlaced broadcasts and "true interlaced" content, where the temporal resolution is 50 Hz and the vertical resolution is effectively halved in high-motion scenes. I do understand this is different from progressive content that has been telecined.
    There is no misunderstanding related to this.

    The one thing that's still not "solved" here is the performance issue I mentioned early on. Yes, you can use many filters (algorithms) that are guaranteed to produce the correct result. However, doing that may be very slow, and it is generally not critically important that the result be guaranteed 'correct'. You mentioned that whether or not I can "see" the difference depends on this and that - you are exactly right!

    If it helps to understand, this is not my job and no one is paying me a dime to do this stuff. I have a lot of videos here that I want to process fairly quickly. Just simply put, I do NOT have time to closely analyse each of them and spend time making a tailored AviSynth filter chain for each of them. But of course in what I KNOW to be telecined progressive content, I'd like to avoid losing detail unnecessarily.

    What's interesting to me is that the fieldmatch filter seems to be quite slow for some reason. Like I mentioned, simply merging lines from adjacent fields with a known, constant pattern should be faster than ANY alternative here, especially something like yadif that performs complex image analysis.

    So I guess stupid question time: If yadif can process this stuff at 10-20x realtime, why does fieldmatch not process it at a HUNDRED times realtime?

    Thanks and cheers
  30. Originally Posted by non-vol View Post
    However, I've got the impression that in these cases the film was also shot at 25 fps, exactly because it was designed to end up being... PAL. Not sure though, and this can be tricky to find out. I know some older European TV series were shot entirely on film, just like in the US, but likewise I've at least heard that the film was shot at 25 fps. Why would they have shot the material at 24 fps, if they knew in advance that it would then be sped up? I'd imagine that'd have complicated the audio part quite a bit... but who knows.
    Because it was the common standard and the machines worked like that: scanning the film and bringing it to 50i "at the same time", similar to telecining in NTSC, where film was scanned and "at the same time" interlaced and pulldowned to 29.97fps - absolutely standard procedures. I never heard of film being shot at 25fps in the first place in the times when the above was common practice. I will ask someone who should know - an interesting question. Today, when shot digitally, they of course use 25p or 50i (or is it correctly 25i, as I read somewhere here not long ago?).



Similar Threads