VideoHelp Forum
Results 1 to 19 of 19
  1. Hello all,

    I have a lot of DV content shot on a digital camera and I want to convert it to H.264, deinterlacing it first, but I want to make sure I don't lose vertical resolution.

    I have created a simple test-pattern video with alternating horizontal lines of 1px each. I figure that if the deinterlacer keeps the vertical resolution, these lines will be visible in the progressive video. Now, I've tried all the deinterlace options in VLC and a few in AviSynth, including QTGMC, and what I get from them is alternating white/black frames and maybe some small artifacts somewhere. If I disable deinterlacing, I clearly see the horizontal lines.
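    To illustrate what the pattern actually contains, here is a minimal Python sketch (not the attached test file, just a toy model with one brightness value per line) of how a 1px alternating-line frame splits into the two fields of a 576i stream:

```python
# Toy model: 576 lines, even rows white (255), odd rows black (0),
# as in the 1px alternating-line test pattern described above.
HEIGHT = 576
pattern = [255 if y % 2 == 0 else 0 for y in range(HEIGHT)]

# Interlaced sampling: even lines go to the top field,
# odd lines go to the bottom field.
top_field = pattern[0::2]
bottom_field = pattern[1::2]

print(set(top_field))     # {255}: the top field is entirely white
print(set(bottom_field))  # {0}:   the bottom field is entirely black
```

So each field by itself carries no line structure at all, which is why a field-at-a-time display (or a bob deinterlacer) shows alternating white and black frames.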

    What's funny is that I also played the interlaced test video on a Sony Bravia TV, and the result is perfect: deinterlaced, with the horizontal lines intact.

    So what does this mean - that a TV has a better deinterlacer than any software method available at this time, or did I just not find the right one? QTGMC was recommended as keeping the vertical resolution, but my test shows otherwise.

    Thanks for any inputs you might have.
    Image Attached Files
  2. Alternating black and white frames is what you expect from that pattern, and that's exactly what an interlaced CRT would display. In effect, the "lights" were flickering off and on 25 times a second.
  3. So I encoded a picture of a pattern in interlaced mode, and after deinterlacing it's a different picture? That does not seem right. I understand how interlacing / deinterlacing works, but the flickering is just a loss of resolution. Why is the TV displaying the original pattern, then?
  4. Originally Posted by ghiga_andrei
    So I encoded a picture of a pattern in interlaced mode, and after deinterlacing it's a different picture? That does not seem right. I understand how interlacing / deinterlacing works, but the flickering is just a loss of resolution. Why is the TV displaying the original pattern, then?
    Why does it not seem right? If you know how interlacing works, why are you surprised by the results? An interlaced frame is made from two fields, each missing half of the lines. Any sane deinterlacer (except a field-blending deinterlacer, which should produce a solid grey frame) will create alternating black and white frames.

    And your test signal (pattern) is a special case of interlace and can be considered a pathological pattern... By the way, testing a deinterlacer with a static signal is also suboptimal. For objective deinterlacer quality testing, a bandwidth-limited, moving, dynamic zone plate is usually used - in this video https://www.youtube.com/watch?v=0CLlPLZWvAM a moving static zone plate is visible (a dynamic zone plate has a linearly changing sweep phase: https://www.youtube.com/watch?v=WZYS5XPxRzs ) - generally, using patterns (signals) that are not bandwidth limited is not a good idea.
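    For the curious, a static zone plate is easy to generate. This is only a rough sketch (my own illustration, deliberately not bandwidth limited, with an arbitrary sweep constant `k` and frame size):

```python
import math

# One frame of a simple static zone plate: brightness follows
# cos(k * r^2) around the frame centre, so spatial frequency
# increases with distance from the centre.
def zone_plate(width=64, height=64, k=0.05):
    cx, cy = width / 2.0, height / 2.0
    return [[int(round(127.5 * (1 + math.cos(k * ((x - cx) ** 2 + (y - cy) ** 2)))))
             for x in range(width)]
            for y in range(height)]

frame = zone_plate()
# Every sample is a valid 8-bit video level.
print(all(0 <= v <= 255 for row in frame for v in row))  # True
```

A real dynamic zone plate would additionally shift the phase of the sweep linearly over time, as in the second video linked above.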

    To address your questions - the TV is smart enough to detect a static signal and skip deinterlacing. Flickering is not present on memory-type displays (that is, all modern displays, which use memory to store a complete frame of video) - flickering will be perceived only on a CRT. You may see some jitter on dumb deinterlacers that blindly apply bob to static video, introducing a half-line shift.
    Last edited by pandy; 8th May 2018 at 18:35.
  5. In theory, a "perfect" deinterlacer is supposed to weave static content, so you get full resolution when nothing moves (no camera motion, no object or subject motion).

    Your Sony is using a motion adaptive deinterlacer.

    QTGMC tends to trade off in favor of motion smoothness over deinterlacing artifacts. The whole point of TempGaussMCBeta2 (QTGMC's precursor) was to reduce shimmer and flicker artifacts. Most TV-set deinterlacers will score lower in motion and show artifacts if you look closely. (Actually, most TV sets will show blinking on that test pattern, because they do not use motion-adaptive algorithms; most just bob.)

    Remember, in motion each field has half the vertical resolution of a full progressive frame; that's the point of interlace in the first place. "Simple" deinterlacers just resize the field, with no post-processing - so you end up with jaggies, shimmer, and buzzing edges (in the era of TGMC, that's what almost all TVs had; none had implemented motion-adaptive algorithms). "Smarter" deinterlacers try to fill in the missing information and smooth over the jaggy edges. So in a sense, you can sometimes have higher effective resolution in motion than the original field.
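    To make the "simple" case concrete, here is a rough Python sketch of a bob-style deinterlacer (an illustration of the idea only, not any TV's actual firmware; one brightness value stands in for each line):

```python
# Bob: stretch a single field back to full frame height. The missing
# lines are interpolated from their vertical neighbours (edge lines
# are simply repeated).
def bob_field(field):
    frame = []
    for i, line in enumerate(field):
        frame.append(line)                       # real line from the field
        nxt = field[i + 1] if i + 1 < len(field) else line
        frame.append((line + nxt) // 2)          # interpolated line
    return frame

# The all-white field from the alternating-line test pattern bobs to
# an all-white frame: the line detail of the original is gone.
white_field = [255] * 288
print(set(bob_field(white_field)))  # {255}
```

This is exactly why the test pattern turns into alternating solid white and solid black frames: each bobbed field contains no trace of the other field's lines.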


    If you want a software deinterlacer optimized for motion adaptivity and full static resolution, you can use TDeint. It will NOT look as good in motion and will have the shimmer artifacts - but you will preserve the static resolution. TDeint has the option to use a TMM motion mask, and QTGMC for the edge-directed (EDI) interpolation. But it simply won't look as good or as smooth as QTGMC in motion. That smoothing-over is the magic that makes QTGMC's output look almost as if it were natively progressive in the first place, and why it's recommended as the "best" for general use. There are cases where it's clearly worse - not just some patterns, but some motion-estimation cases as well.
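    The motion-adaptive decision itself is simple in principle. This sketch is assumed, simplified logic - not TDeint's real implementation - with one brightness value per line (a real deinterlacer decides per pixel or per block):

```python
# Motion-adaptive deinterlacing, per line: weave (keep the real line
# from the matching field) where the content is static, fall back to
# the interpolated (bob) line where motion is detected.
def deinterlace_line(prev_value, weave_value, bob_value, threshold=10):
    if abs(weave_value - prev_value) <= threshold:
        return weave_value   # static: full vertical resolution kept
    return bob_value         # motion: interpolated, half resolution

print(deinterlace_line(100, 102, 50))  # 102 (static -> weave)
print(deinterlace_line(0, 255, 128))   # 128 (motion -> bob)
```

The threshold is the tuning knob: set it too low and static detail shimmers; set it too high and moving edges comb.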
  6. Originally Posted by ghiga_andrei
    So I encoded a picture of a pattern in interlaced mode, and after deinterlacing it's a different picture? That does not seem right. I understand how interlacing / deinterlacing works, but the flickering is just a loss of resolution. Why is the TV displaying the original pattern, then?
    At the very least you have to consider that there are two possible interpretations of the video. One, the way you see it: a static image of alternating black and white horizontal lines. Two, the way an interlaced TV would display it: an alternating sequence of black and white fields. Viewed the first way, you retain spatial resolution but lose temporal resolution. Viewed the second way, you lose spatial resolution but retain temporal resolution. Neither is necessarily right or wrong.

    As Pandy pointed out, it's a pathological case.
    Last edited by jagabo; 8th May 2018 at 20:15.
  7. I guess my next test would be to play the test video on a CRT TV to see what pattern it would display. I don't have one near me now; maybe on the weekend.
  8. Your test video on an interlaced CRT TV will flicker black and white. I've done it before.

    The frame is sent to the TV one field at a time, and the TV has no memory of what happened before. Since one field is all white and the other is all black, the TV will display alternately black (1/50 second) and white (1/50 second). The flicker will blow you out of the room.
  9. I thought that the phosphor coating in the CRT has a persistence which, to the human eye, acts like a memory; that's why interlaced video works on CRT screens.
    So my understanding was that the CRT would display the even lines first, which are white lines, and then the odd lines, which are black lines, but the white lines would still be present on the screen due to this persistence of the phosphor.
  10. That's a fallacy. The persistence of the phosphors is very short, a few scan lines:

    https://www.youtube.com/watch?v=3BJU2drrtCM

    If the persistence was really long enough that you could see two fields at once you would see comb artifacts.
  11. Originally Posted by ghiga_andrei
    I thought that the phosphor coating in the CRT has a persistence which, to the human eye, acts like a memory; that's why interlaced video works on CRT screens.
    So my understanding was that the CRT would display the even lines first, which are white lines, and then the odd lines, which are black lines, but the white lines would still be present on the screen due to this persistence of the phosphor.
    It depends on the phosphor (which itself is related to how modern the CRT is - as a general rule, the older the CRT, the longer the phosphor persistence; since the 90s, phosphor persistence is much shorter), but flicker is unavoidable anyway - it will be significantly reduced only with the long-persistence phosphors used in special CRTs (uncommon in the consumer world) - https://en.wikipedia.org/wiki/Phosphor#Standard_phosphor_types - you would probably need some long/very-long-persistence type of phosphor.

    Once again, you should create your pattern from a proper (progressive) source and apply interlaced sampling - this is the only way to understand the deinterlacer's behaviour and the expected result. Your pattern can be represented in a progressive source either as a black/white line mix where one of the fields is shifted by a single line, or as alternating black/white fields - both situations are valid, and you are facing a common problem related to the sampling theorem* - the Nyquist rate* - the deinterlacer cannot tell whether your source is undersampled (and thus violates the Nyquist rate) or sampled at exactly the Nyquist rate.
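    A small Python sketch of that interlaced sampling (my own toy model, one brightness value per line): each field is taken from a different progressive frame, so a true interlaced source carries a temporal difference between its fields:

```python
# Interlaced sampling of a progressive sequence: the top field comes
# from even frames and the bottom field from odd frames, giving
# 50 fields/s from a 50p source.
def interlace(progressive_frames):
    fields = []
    for t, frame in enumerate(progressive_frames):
        parity = t % 2               # 0 = top field, 1 = bottom field
        fields.append(frame[parity::2])
    return fields

# Two progressive frames of a one-line bar moving down by one line:
f0 = [255, 0, 0, 0]   # bar on line 0
f1 = [0, 255, 0, 0]   # bar moved to line 1
print(interlace([f0, f1]))  # [[255, 0], [255, 0]]
```

Note the ambiguity this creates: the two fields above are identical, so the deinterlacer cannot tell a moving bar from a static one - the sampling-theorem problem described above.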

    *sampling theorem; Nyquist rate - more correctly, the Nyquist–Shannon–Whittaker–Kotelnikov theorem and rate (a.k.a. frequency, criterion, etc.)
  12. Originally Posted by jagabo
    If the persistence was really long enough that you could see two fields at once you would see comb artifacts.
    Yeah, OK, this finally convinced me you are right.

    So the conclusion from all this is that it's impossible to have 576px of vertical resolution in a 576i signal.
  13. Originally Posted by pandy
    Once again, you should create your pattern from a proper (progressive) source and apply interlaced sampling - this is the only way to understand the deinterlacer's behaviour and the expected result. Your pattern can be represented in a progressive source either as a black/white line mix where one of the fields is shifted by a single line, or as alternating black/white fields
    I created the test video by importing a BMP picture of the lines, exactly 576px high, into Pinnacle Studio (my video editor for the DV camera), then exported the movie as DV-AVI and re-encoded it to interlaced H.264 purely to reduce file size.

    Isn't the BMP picture a proper progressive source?
  14. Originally Posted by ghiga_andrei

    So the conclusion from all this is that it's impossible to have 576px of vertical resolution in a 576i signal.
    In motion, yes it's impossible.

    But your TV test just proves you can. TDeint proves you can, if you want software processing.

    Your pattern is a pathological case, but the difference on a modern display is motion vs. static.



    Do you think PAL DVDs have reduced resolution? They are all interlaced, even the progressive-content PAL DVDs. They are all "576i".

    If you recall the interlace-vs-progressive arguments, one of the main "selling points" of using interlace in the modern digital age was lower bandwidth (than 50p/59.94p) and the "best of both worlds": full spatial resolution for static scenes, full temporal samples for motion.

    When you have static content, that is 2:2 cadence. That is progressive.

    All modern HDTV sets have 2:2 cadence detection. They can tell if something is progressive with that cadence. They don't deinterlace 25p content; they weave it. However, there are other patterns that can remain undetected (other pulldown cadences), which get deinterlaced (if you have progressive content and it gets deinterlaced, there is resolution loss and there are deinterlacing artifacts). You can use test discs, or read reviews where they test the various cadences.
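    The core of such detection can be imagined as a comb metric: weave the two fields and measure line-to-line alternation. This is a hand-rolled illustration of the idea, not what any TV actually implements:

```python
# Comb metric: sum of second-difference energy down the lines of a
# woven frame. Fields that belong together (progressive / 2:2 content)
# weave into smooth vertical gradients; fields separated by motion
# weave into alternating "comb teeth" and score high.
def comb_metric(frame):
    return sum(abs(frame[y - 1] - 2 * frame[y] + frame[y + 1])
               for y in range(1, len(frame) - 1))

smooth = [10, 20, 30, 40, 50, 60]     # fields weave cleanly -> keep woven
combed = [10, 200, 10, 200, 10, 200]  # alternating lines -> deinterlace

print(comb_metric(smooth) < comb_metric(combed))  # True
```

A cadence detector tracks this score over successive field pairs: consistently low scores on the same pairing mean a 2:2 cadence, so the set can weave instead of deinterlacing.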
  15. Originally Posted by ghiga_andrei
    So the conclusion from all this is that it's impossible to have 576px of vertical resolution in a 576i signal.
    The same is true for 720p and 4K on modern TVs, since aside from interlacing there is also something called the Kell factor: https://en.wikipedia.org/wiki/Kell_factor

    For static video you can have resolution equal to the number of lines (if we ignore the Kell factor, which reduces resolution further).

    Originally Posted by ghiga_andrei
    I created the test video by importing a BMP picture of the lines, exactly 576px high, into Pinnacle Studio (my video editor for the DV camera), then exported the movie as DV-AVI and re-encoded it to interlaced H.264 purely to reduce file size.

    Isn't the BMP picture a proper progressive source?
    It is proper, but your single picture covers only one moment in time. As you know, interlace is made from at least two fields, so you should use at least two progressive frames to create the two corresponding fields. A single frame means a static picture, and no deinterlacer is required (deinterlacing is needed only when you try to display a true interlaced source on a progressive display - it is an unwanted process if your source is progressive. Remember that a CRT can display interlace natively as interlace, and with some tweaks you can force progressive displays to emulate the interlaced display scheme as well, in which case deinterlacing is not required).
    Last edited by pandy; 10th May 2018 at 11:03.
  16. Originally Posted by pandy
    It is proper, but your single picture covers only one moment in time. As you know, interlace is made from at least two fields, so you should use at least two progressive frames to create the two corresponding fields
    The video editor extended the picture into 10 seconds of video, of course; it was not 1 frame.

    I did not know about the Kell factor; that was interesting to read, thank you.
  17. If you want to analyze the quality of different deinterlacers, use something that more closely resembles real-world video. Your test video has no detail - no hints that a deinterlacer can use to decide what it should do.
  18. I can select some scenes from the DV camera that have high-res details on them or static parts and some with motion to test.

    What other deinterlacers are worth testing in AviSynth besides QTGMC? I always want double frame rate, not blending.
  19. Originally Posted by ghiga_andrei
    The video editor extended the picture into 10 seconds of video, of course; it was not 1 frame.

    I did not know about the Kell factor; that was interesting to read, thank you.
    Perhaps, but that is not an interlaced source - it is a progressive source coded as interlaced (flagged at the encoder syntax level, but with no temporal difference between the fields).
    A real interlaced source has differences between the fields in the time domain.

    The Kell factor is not well known, which is strange, as it directly affects cameras (or, more accurately, light sensors) and displays...


