I have a lot of DV content shot on a digital camera and want to convert it to H.264, deinterlacing it first, but I want to make sure I don't lose vertical resolution.
I have created a simple test pattern video with alternating 1px horizontal lines. I figure that if the deinterlacer keeps the vertical resolution, then these lines will be visible in the progressive video. Now, I've tried all the deinterlace options in VLC and a few in AviSynth, including QTGMC, and what I get from them is alternating white/black frames and maybe some small artifacts here and there. If I disable deinterlacing, I clearly see the horizontal lines.
What's funny is that I also played the interlaced test video on a Sony Bravia TV and the result is perfect: deinterlaced, with the horizontal lines visible.
So what does this mean: that a TV has a better deinterlacer than any software method available at this time, or that I just didn't find the right one? QTGMC was recommended as preserving vertical resolution, but my test shows otherwise.
Thanks for any input you might have.
Alternating black and white frames is what you expect from that pattern, and that's exactly what an interlaced CRT would display. In effect, the "lights" were flickering off and on 25 times a second.
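This field split is easy to verify numerically. Below is a minimal NumPy sketch (the 720×576 PAL frame size is an assumption) showing that the 1px-line pattern separates into one all-white field and one all-black field once it is treated as interlaced:

```python
import numpy as np

# Build a frame of alternating 1px horizontal lines: even rows white, odd rows black.
h, w = 576, 720
frame = np.zeros((h, w), dtype=np.uint8)
frame[0::2, :] = 255  # even lines white

# Interlaced transmission splits the frame into its two fields.
top_field = frame[0::2, :]     # even lines -> entirely white
bottom_field = frame[1::2, :]  # odd lines  -> entirely black

# A deinterlacer that bobs each field to a full frame therefore produces
# alternating all-white and all-black output frames.
```

Each field, viewed on its own, contains no trace of the line pattern, which is why a bobbing deinterlacer outputs flat white and flat black frames.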
So I encoded a picture of a pattern in interlaced mode, and after deinterlacing it's a different picture? That doesn't seem right. I understand how interlacing/deinterlacing works, but the flickering is just loss of resolution. Why is the TV displaying the original pattern, then?
And your test signal (pattern) is a special case of interlace and can be considered a pathological pattern. Btw, testing a deinterlacer with a static signal is also suboptimal. For objective deinterlacer quality testing, a bandwidth-limited moving dynamic zone plate is usually used. In this video https://www.youtube.com/watch?v=0CLlPLZWvAM a moving static zone plate is visible (a dynamic zone plate has a linearly changing sweep phase: https://www.youtube.com/watch?v=WZYS5XPxRzs ). Generally, using patterns (signals) that are not bandwidth limited is not a good idea.
To address your question: the TV is sufficiently smart to detect a static signal and skip deinterlacing. Flickering is not present on memory-type displays (i.e. all modern displays, which store a complete frame of video in memory); flickering will be perceived only on a CRT. You may see some jitter from dumb deinterlacers that blindly apply bob to static video, since bob introduces a half-line shift.
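The "detect static and skip deinterlacing" idea can be sketched as a toy motion-adaptive step (a hedged illustration, not any TV's actual algorithm; the threshold and function names are made up): compare the current field with the same field from the previous frame, and weave if nothing changed, otherwise bob.

```python
import numpy as np

def bob(field):
    """Crudest bob: line-double one field by repeating each line."""
    return np.repeat(field, 2, axis=0)

def deinterlace(prev_top, top, bottom, motion_thresh=8):
    """Toy motion-adaptive step: weave if the top field is unchanged
    since the previous frame, otherwise bob the current top field."""
    if np.abs(top.astype(int) - prev_top.astype(int)).max() <= motion_thresh:
        # Static content: weave the two fields back into a full-resolution frame.
        out = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
        out[0::2] = top
        out[1::2] = bottom
        return out
    return bob(top)  # motion: fall back to half-resolution bob

# The static test pattern: top field all white, bottom field all black,
# and no change between frames.
top = np.full((288, 360), 255, dtype=np.uint8)
bottom = np.zeros((288, 360), dtype=np.uint8)
out = deinterlace(top, top, bottom)
# Weaving restores the original 1px line pattern instead of flickering.
```

On this static input the weave branch fires and the alternating-line pattern comes back at full vertical resolution, which is what the Bravia appears to be doing.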
Last edited by pandy; 8th May 2018 at 18:35.
In theory, a "perfect" deinterlacer is supposed to weave static content, so you get full resolution if nothing moves (no camera motion, no object or subject motion).
Your Sony is using a motion adaptive deinterlacer.
QTGMC tends to trade off in favor of motion smoothness over deinterlacing artifacts. The whole point of TempGaussMCBeta2 (QTGMC's precursor) was to reduce shimmer and flicker artifacts. Most TV-set deinterlacers will score lower in motion and show artifacts if you look closely. (Actually, most TV sets will show blinking on that test pattern because they do not use motion-adaptive algorithms; most just bob.)
Remember, in motion each field is half the vertical resolution of a full progressive frame; that's the point of interlace in the first place. "Simple" deinterlacers just resize the field, with no post-processing, so you end up with jaggies, shimmer, and buzzing edges (in the era of TGMC, that's what almost all TVs did; none had implemented motion-adaptive algorithms). "Smarter" deinterlacers try to fill in the missing information and smooth over the jagged edges. So in a sense you can sometimes have higher effective resolution than the original field in motion.
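To make "just resize the field" concrete, here is a hedged sketch of a linear-interpolation bob (the function name and the 4-line field are made up for illustration): the missing rows are filled by averaging the field lines above and below, which is exactly where the vertical softening comes from.

```python
import numpy as np

def bob_interpolate(field):
    """Resize one field to full frame height by averaging adjacent
    field lines for the missing rows (simple linear-interpolation bob)."""
    h, w = field.shape
    out = np.empty((h * 2, w), dtype=field.dtype)
    out[0::2] = field
    f = field.astype(np.float32)
    # Each missing row = average of the field line above and below it.
    out[1:-1:2] = ((f[:-1] + f[1:]) / 2).astype(field.dtype)
    out[-1] = field[-1]  # bottom row has no line below; repeat the last line
    return out

# A field with one bright line: after bobbing, the line is smeared
# into its neighbours, i.e. vertical detail is softened.
field = np.zeros((4, 8), dtype=np.uint8)
field[1] = 200
frame = bob_interpolate(field)
```

The single 200-level line in the field ends up flanked by half-intensity rows in the output frame: resolution is traded for the absence of combing.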
If you want a software deinterlacer optimized for motion adaptivity and full static resolution, you can use TDeint. It will NOT look as good in motion and will have the shimmer artifacts, but you will preserve the static resolution. TDeint has the option to use a TMM motion mask, and QTGMC for the EDI interpolation, but it simply won't look as good or as smooth as QTGMC in motion. That smoothing over is the magic that makes QTGMC output look almost as if it were natively progressive in the first place, and why it's recommended as the "best" for general use. There are cases where it's clearly worse, not just some patterns but some motion-estimation cases as well.
As Pandy pointed out, it's a pathological case.
Last edited by jagabo; 8th May 2018 at 20:15.
I guess my next test would be to play the test video on a CRT TV to see what pattern it would display. I don't have one near me now; maybe on the weekend.
Your test video on an interlaced CRT TV will flicker black and white. I've done it before.
The frame is sent to the TV one field at a time, and the TV has no memory of what happened before. Since one field is all white and the other is all black, the TV will alternately display black (1/50 second) and white (1/50 second). The flicker will blow you out of the room.
I thought that the phosphor coating behind the CRT grille has a persistence which, for the human eye, acts like a memory; that's why interlaced video works on CRT screens.
So my understanding was that the CRT would display the even lines first (the white lines) and then the odd lines (the black lines), but the white lines would still be present on the screen due to this phosphor persistence.
https://en.wikipedia.org/wiki/Phosphor#Standard_phosphor_types — you would probably need a long/very long persistence type of phosphor for that.
Once again: you should create your pattern from a proper (progressive) source and apply interlaced sampling to it; that is the only way to understand the deinterlacer's behaviour and the expected result. Your pattern can be represented by two different progressive sources: a black/white line mix in which one field is shifted by a single line, or alternating all-black and all-white frames. Both interpretations are valid, and you are facing a common problem related to the sampling theorem* and the Nyquist rate*: the deinterlacer cannot tell whether your source is undersampled (and thus violates the Nyquist rate) or sampled at exactly the Nyquist rate.
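The ambiguity described above can be demonstrated directly (a minimal sketch with made-up 8×4 frames): two entirely different progressive sources, a static line pattern and a flashing white/black sequence, produce byte-identical interlaced fields, so no deinterlacer can distinguish them from the signal alone.

```python
import numpy as np

h, w = 8, 4

# Progressive source A: a static pattern of alternating 1px lines.
a = np.zeros((h, w), dtype=np.uint8)
a[0::2] = 255

# Progressive source B: a flashing sequence, one all-white frame
# followed by one all-black frame.
b_white = np.full((h, w), 255, dtype=np.uint8)
b_black = np.zeros((h, w), dtype=np.uint8)

# Interlaced sampling: take the top field from the first frame and
# the bottom field from the second frame of each source.
fields_a = (a[0::2], a[1::2])                    # static pattern, both frames identical
fields_b = (b_white[0::2], b_black[1::2])        # flashing white/black

# The two interlaced signals are identical: an all-white top field
# and an all-black bottom field in both cases.
identical = all((x == y).all() for x, y in zip(fields_a, fields_b))
```

Since the interlaced data is the same, "restore the lines" and "restore the flashing" are equally valid answers; the undersampled pattern violates the Nyquist criterion, so the original cannot be recovered from the fields alone.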
*sampling theorem; Nyquist rate: more correctly, the Nyquist–Shannon–Whittaker–Kotelnikov theorem and rate (a.k.a. frequency, criterion, etc.)
I used Pinnacle Studio, my video editor for the DV camera, then exported the movie as DV-AVI and recompressed it to interlaced H.264 purely for file-size reduction.
Isn't the BMP picture a proper progressive source?
But your TV test just proves you can. TDeint proves you can if you want software processing.
Your pattern is a pathological case, but the difference on a modern display is motion vs. static.
Do you think PAL DVDs have reduced resolution? They are all interlaced, even the progressive-content PAL DVDs. They are all "576i."
If you recall the interlace vs. progressive arguments, one of the main "selling points" of interlace in the modern digital age was lower bandwidth (than 50p/59.94p) and the "best of both worlds": full spatial resolution for static scenes, full temporal sampling for motion.
When you have static content, that is 2:2 cadence. That is progressive.
All modern HDTV sets have 2:2 cadence detection. They can tell if something is progressive with that cadence. They don't deinterlace 25p content; they weave it. However, other patterns (other pulldown cadences) can remain undetected and get deinterlaced, and if progressive content gets deinterlaced, there is resolution loss plus deinterlacing artifacts. You can use test discs, or read reviews where various cadences are tested.
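One common way cadence detectors decide that two fields belong to the same progressive frame is a combing measure on the woven result (this sketch is a hedged illustration of that idea, not any specific TV's algorithm; function names and the gradient test frame are invented): correctly paired fields weave cleanly, while fields from different source frames produce strong row-to-row zigzag.

```python
import numpy as np

def comb_metric(frame):
    """Rough combing measure: how far each row deviates from the
    average of its vertical neighbours (large when fields mismatch)."""
    f = frame.astype(np.float32)
    return float(np.abs(f[1:-1] - (f[:-2] + f[2:]) / 2).mean())

def weave(top, bottom):
    """Interleave two fields back into one full frame."""
    out = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    out[0::2] = top
    out[1::2] = bottom
    return out

# A smooth vertical gradient stands in for one frame of 25p content.
frame = np.tile(np.arange(0, 240, 10, dtype=np.uint8)[:, None], (1, 16))
top, bottom = frame[0::2], frame[1::2]

# Correct pairing (2:2 cadence) weaves cleanly; pairing the top field
# with a bottom field from a moved frame combs badly.
good = comb_metric(weave(top, bottom))
moved = np.roll(frame, 4, axis=0)  # simulates motion between source frames
bad = comb_metric(weave(top, moved[1::2]))
```

A detector that sees the low-combing pairing repeat frame after frame can lock onto the 2:2 cadence and switch to weaving, which is why a 25p-in-576i DVD loses nothing on a good TV.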
For static video you can have resolution equal to the number of lines (if we ignore the Kell factor, which reduces resolution further).
Last edited by pandy; 10th May 2018 at 11:03.
If you want to analyze the quality of different deinterlacers use something that more closely resembles real world video. Your test video has no detail, no hints that a deinterlacer can use to decide what it should do.
A real interlaced source has differences between fields in the time domain.
The Kell factor is not well known, which is strange, as it directly affects cameras (or, more accurately, light sensors) and displays...