VideoHelp Forum
  1. Member
    Join Date
    Jul 2007
    Location
    United Kingdom
    Hi guys,

    I wanted to make sure I'm understanding correctly the situation with interlaced content and modern day viewing. I shall preface the subject with a clear qualifier - I am archiving untouched, interlaced copies of the videos I am capturing for future use. This question is purely about creating current day viewing copies.

    So with that in mind, I wanted to make sure my understanding of this is correct. I'll base this on NTSC sources, just to save writing out different frame rates.

    A. A true interlaced source video would be 29.97fps, and each frame is composed of two unique fields, each half the height of the full resolution. So a 720x480 capture has 720x240 sized fields, 59.94 fields per second.
    B. To display interlaced footage on any modern non-CRT screen, the footage must be de-interlaced, correct? Viewing it without de-interlacing results in an image full of combing. This is done either in hardware or in software, with varying methods - some good, some bad.
    C. Due to the above, if you're creating copies for watching in 2020 it makes the most sense to de-interlace them yourself, using the best method currently available - which, as I understand it, is QTGMC.
    D. This means the watching copies would be encoded as 59.94fps progressive, with each half-height field interpolated into a full-size 720x480 frame via QTGMC, along with the other processing it does (a rough sketch of what I mean is below).
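
    To make D concrete, here's a minimal AviSynth sketch of the kind of script I have in mind (the file name is just a placeholder, and QTGMC and its dependencies are assumed to be installed):

    AviSource("capture.avi")   # hypothetical 720x480, 29.97fps interlaced NTSC capture
    AssumeTFF()                # or AssumeBFF(), whichever matches the capture
    QTGMC(Preset="Slower")     # bob de-interlace -> 720x480, 59.94fps progressive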

    Is anything I've stated there incorrect? Is there any reason you would still encode interlaced videos for this purpose? I guess the crux of the question is if de-interlacing at the source is a better idea than letting whichever hardware or software player do it, potentially not as well? And that in the case of NTSC sources, 59.94fps progressive is "correct" if de-interlaced properly? Otherwise you're just throwing away half of the fields, if you output 29.97fps progressive? A lot of people seem to get the idea that the extra frames are faked, or that motion is now "too smooth", but I think that's just a result of them watching improperly de-interlaced content for years?

    Obviously this question also only applies to home viewing of these videos using modern codecs like H.264. If we're talking about videos produced for TV broadcast then they will probably want interlaced footage, and the same goes if you're creating a DVD or Blu-ray. But yeah, those are different scenarios that I'm not concerned with here.


    Thanks
  2. Member DB83
    Join Date
    Jul 2007
    Location
    United Kingdom
    If my understanding is correct, a modern digital tv actually de-interlaces the interlaced video for you.
  3. Member Cornucopia
    Join Date
    Oct 2001
    Location
    Deep in the Heart of Texas
    DB83 is correct.

    So, to answer your questions:
    A - correct
    B - correct
    C - "most sense" depends on how you intend to view, using what player, with what capabilities, and versus how much time and energy (and money?) you put into converting with QTGMC or similar deint algorithms. Plus that's to be compared against the innate degradation involved in re-compressing a clip (which MUST occur when you deint, unless going lossless, but I expect that to be a rarity).
    D - correct, assuming you go the QTGMC, etc route.

    E - C was partially incorrect, depending on priorities (personal preference, etc).
    You very well might leave it interlaced even while re-encoding, if you were to be making DVDs to share (that would give you the most natural quality transfer, and would still play properly on everyone's machines). - oops, just saw that last paragraph. Ignore this then.

    Also, if you do processing to the clips, it is MUCH better to do processing on original (non-interpolated) fields - as long as interlacing is taken into account (or bypassed temporarily by unweaving then re-weaving later). Processing interpolated fields means you are making further guesses based on original guesses, which compounds the errors.
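
    As a rough AviSynth sketch of that "unweave, filter, re-weave" idea (the file name is a placeholder, and RemoveGrain is only a stand-in for a spatial filter - a temporal filter would need the fields grouped by parity instead):

    AviSource("capture.avi")   # hypothetical interlaced capture
    AssumeTFF()                # match the source field order
    SeparateFields()           # 720x240 fields, 59.94 per second
    RemoveGrain(1)             # placeholder spatial-only filter applied to the original fields
    Weave()                    # re-weave back to 720x480 interlaced frames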

    It has been my experience that modern TVs (2012->) do quite well with their hardware-based, on-the-fly deinterlacing to the point where it is quite sufficient for much non-critical viewing, for most people. So, little need to deint.

    Yes, going instead to 29.97(30)p throws away 1/2 the motion detail, and IMO is to be avoided if at all possible.


    Scott
    Last edited by Cornucopia; 9th May 2020 at 11:33.
  4. Originally Posted by Killer3737
    Is anything I've stated there incorrect? Is there any reason you would still encode interlaced videos for this purpose? I guess the crux of the question is if de-interlacing at the source is a better idea than letting whichever hardware or software player do it, potentially not as well? And that in the case of NTSC sources, 59.94fps progressive is "correct" if de-interlaced properly? Otherwise you're just throwing away half of the fields, if you output 29.97fps progressive? A lot of people seem to get the idea that the extra frames are faked, or that motion is now "too smooth", but I think that's just a result of them watching improperly de-interlaced content for years?
    Interlaced encoding is less efficient than progressive encoding. I couldn't tell you how much, because I always de-interlace with QTGMC.

    http://www.chaneru.com/Roku/HLS/X264_Settings.htm#tff

    tff
    Enable interlaced encoding and specify the top field is first. x264's interlaced encoding uses MBAFF, and is inherently less efficient than progressive encoding. For that reason, you should only encode interlaced if you intend to display the video on an interlaced display (or can't deinterlace the video before sending it to x264). Implies --pic-struct.
    bff
    Enable interlaced encoding and specify the bottom field is first. See --tff for more info.


    The extra frames aren't faked. Each field is converted into a full progressive frame by interpolating the missing scanlines. QTGMC has a lossless mode that outputs the original scanlines untouched, and semi-lossless modes that output the original scanlines before the final smoothing, or it can restore noise from the source. Those modes really only work well on very clean sources though, as they can restore artefacts from the source and cause some shimmering and aliasing. I almost never use them, and let QTGMC do its thing. Hardware devices would also de-interlace the same way (bobbing to 59.94fps).
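
    If you want to experiment with those modes, something like this is the idea (only a sketch - the parameter names are from my memory of QTGMC's options, so check the documentation for your version; the file name is a placeholder):

    AviSource("capture.avi")
    AssumeTFF()
    QTGMC(Preset="Slower", Lossless=1)     # original scanlines pasted back untouched
    # QTGMC(Preset="Slower", Lossless=2)   # semi-lossless: original lines restored before the final smoothing
    # The NoiseProcess / NoiseRestore options cover the "restore noise from the source" case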

    De-interlacing to 29.97fps can potentially look a bit "jittery", partly due to de-interlacing artefacts and partly because video doesn't have the same amount of motion blur as film, which helps film look smooth at 24fps. When you de-interlace to 59.94fps you retain all the temporal detail and motion blur isn't required to make it look smooth. In fact it'll generally look smoother than film, which is probably why some people find it looks "unnatural". Your brain adjusts to it pretty quickly though, and when you go back to watching "film" it can look a little jittery until your brain adjusts again.

    You can de-interlace to 29.97fps with QTGMC, but it de-interlaces the same way anyway and just outputs every second frame. If you do, QTGMC can add motion blur to make it more "film like".
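
    For example (again only a sketch - the file name is a placeholder, and the parameter names are from my memory of QTGMC's options, so check the documentation for your version):

    AviSource("capture.avi")
    AssumeTFF()
    QTGMC(Preset="Slower", FPSDivisor=2)   # bobs internally but keeps every second frame -> 29.97fps progressive
    # QTGMC's ShutterBlur / ShutterAngleSrc / ShutterAngleOut options can add the "film like" motion blur mentioned above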

    There's generally no need to further process something that's been de-interlaced by QTGMC, except for cropping/resizing. QTGMC has denoising options, so it can take care of that if need be. After de-interlacing with QTGMC and cropping/resizing, I rarely follow it with anything but gradfun3() from DitherTools. It converts to 16 bit, smooths gradients, then dithers back to 8 bit. That can help reduce any existing color banding and prevent 8 bit encoding from causing it.

    I uploaded the attached examples to another thread recently. It's a pretty horrible source, chosen to show that the worse the source is, the better off you are de-interlacing with QTGMC. Both samples are de-interlaced to 50fps (PAL). I'd be surprised if the average player or TV would de-interlace at a much higher quality than the Yadif example. That'd be okay for very clean sources, which DVDs often aren't. The samples were de-interlaced, then cropped and resized to square pixel dimensions for encoding, but you can simply crop any black/crud after de-interlacing, use the appropriate sample/pixel aspect ratio when encoding, and let the player take care of that. It's up to you.

    The lower the quality of the source, the more likely it'll finish up looking better than the original if you de-interlace with QTGMC. IMHO, that applies to most DVDs. There's a little fine detail lost due to the denoising for the QTGMC sample, which is what happens when you denoise, but it cleaned up a lot of the encoder blocking too. It's not too often you need to do extra denoising while de-interlacing with QTGMC though, as it naturally denoises a little. The source is included in the zip file. The scripts I used were:

    QTGMC(EzDenoise=2.5)                                          # bob de-interlace with some extra denoising
    CropResize(0,0, 20,2,-20,-2, InDAR=20.0/11.0, ResizeWO=true)  # crop the edges, resize to square pixels
    GradFun3()                                                    # smooth gradients, dither back to 8 bit

    and

    Yadif(Mode=1)                                                 # plain bob de-interlace for comparison
    CropResize(0,0, 20,2,-20,-2, InDAR=20.0/11.0, ResizeWO=true)  # same cropping/resizing
    GradFun3()                                                    # same dithering

    Deinterlacing Examples.zip (44.1Mb)
    Last edited by hello_hello; 10th May 2020 at 07:36.
  5. The simple answer is that if you are using QTGMC for bob-deinterlacing, you probably won't go wrong, provided your playback device supports the double frame rate.

    It gets a bit more complicated when you want to add some extra processing like denoising or resizing. Unless the filter is 'interlace aware' - which means it has the 'interlaced=true' option or similar - you have the choice of applying the filter(s)
    i) on the deinterlaced (or preferably bobbed) frames
    ii) on the separated fields (even/odd grouped, depending on the filter), and re-interlace.

    Which method, i) or ii), is 'better' depends on the footage and personal preference. Method i) is definitely simpler and less prone to mistakes (a minimal sketch is below).
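
    Something like this, as a sketch of method i) (the file name is a placeholder, and FFT3DFilter is just a stand-in for whatever extra processing you want; an 'interlace aware' filter could instead be run before deinterlacing):

    AviSource("capture.avi")   # hypothetical interlaced capture
    AssumeTFF()
    QTGMC()                    # bobbed 59.94fps (or 50fps) progressive frames
    FFT3DFilter(sigma=1.5)     # placeholder denoiser applied to the full progressive frames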

    So as you're keeping the original captured interlaced videos in a safe archive, I would suggest starting with QTGMC() and wrapping the result in an .mp4 container. You will always have the chance to redo it from the archive if you come to different conclusions later.
  6. A true interlaced source video would be 29.97fps, and each frame is composed of two unique fields, each half the height of the full resolution.
    Almost.

    Each field is full height but contains only the even- or odd-numbered "scan lines", e.g.

    "ODD" FIELD
    Line 1
    Line 3
    Line 5
    Line 7
    etc.

    "EVEN" FIELD
    Line 2
    Line 4
    Line 6
    Line 8
    etc.

    The fields arrive alternately:

    Lines 1,3,5,7 (every 1/59.94 second)
    Lines 2,4,6,8 (every 1/59.94 second)

    On screen you see every 1/29.97 second:

    Line 1 (odd field)
    Line 2 (even field)
    Line 3 (odd field)
    Line 4 (even field)
    Line 5 (odd field)
    Line 6 (even field)
    Line 7 (odd field)
    Line 8 (even field)

    Because each field contains alternate scan lines, one (full height) field has half the line count (240) of the total 480 scan lines.

    "Scan line" is a throwback to the olden days of analog TV. In modern parlance it would be one horizontal row of pixels.

    In the U.S., the standards for broadcast video are 1080 interlaced or 720 progressive, so a display should be capable of displaying either interlaced or progressive without serious artifacts.
  7. Member Cornucopia
    Join Date
    Oct 2001
    Location
    Deep in the Heart of Texas
    Potato - potahto.

    Scott


