Can someone help me with this technical question: now that cathode-ray tubes are gone, why would anyone film interlaced? The function of interlacing was to reduce flicker on cathode-ray tubes. Is there any reason to use it now? Which gives smoother motion, and which converts to different frame rates, speeds, or formats with better quality: interlaced or progressive? I'm not talking about ease of use; I'm interested in which gives better quality.
-
It had more to do with bandwidth reduction.
Is there any reason to use it now?
Which gives smoother motion, and which converts to different frame rates, speeds, or formats with better quality: interlaced or progressive? I'm not talking about ease of use; I'm interested in which gives better quality.
It's all pros and cons, and this has all been debated in various forums again and again. Basically, if you have enough bandwidth and are using a compatible system, progressive is better.
-
Interlace is an analogue technique that reduces the required bandwidth by half. It is quite well matched to the characteristics of human vision - keeping high spatial resolution for static content and high temporal resolution for dynamic content, the same way the human visual system works.
It is used nowadays for the same reason as before - bandwidth reduction matched to our vision characteristics - allowing twice-"worse" HW to provide "almost" the same experience as progressive.
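To make the bandwidth point concrete, here is a minimal numpy sketch of the idea (array shapes and names are illustrative, not from any broadcast spec): two moments in time travel in the data budget of a single progressive frame, because each field carries only every other line.
Code:
import numpy as np

# two consecutive progressive pictures, 20 ms apart (PAL-style timing)
frame_t0 = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
frame_t1 = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)

top_field    = frame_t0[0::2, :]  # even lines of the first moment  -> 540 lines
bottom_field = frame_t1[1::2, :]  # odd lines of the second moment -> 540 lines

# the two fields together cost exactly one progressive frame's worth of data
assert top_field.size + bottom_field.size == frame_t0.size
-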
But surely all interlaced material is deinterlaced automatically by flat screens, so you don't get the good motion that you used to get on CRTs, because both fields are shown at the same time, not one after the other? Or am I wrong?
-
I am not sure, but in our country some DVB-T broadcasts are interlaced and some are not. When it is interlaced and you try to edit it, you can see stripes between the top and bottom fields. But when possible I use direct stream copy, even if it is interlaced. So I think bob deinterlacing is involved: if the material is progressive and you use bob, the next frame is (almost) identical to the previous one, but when it is interlaced, each frame is unique - 50p with a 20 ms frame duration.
Maybe I am wrong, but why then is there a deinterlace setting in all video players?
Bernix
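Those "stripes" are combing, and they only appear on frames with motion between the two fields - which is also how software can tell interlaced content from progressive. A rough illustrative numpy sketch (the function and the threshold idea are made up for illustration, not taken from any real player):
Code:
import numpy as np

def combing_metric(frame: np.ndarray) -> float:
    # measure how strongly each line disagrees with its vertical neighbours;
    # moving interlaced frames score high, progressive frames score low
    a = frame.astype(np.int32)
    comb = a[1:-1, :] - (a[:-2, :] + a[2:, :]) / 2.0
    return float(np.mean(np.abs(comb)))

# e.g. a player could deinterlace only when the metric exceeds some threshold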
-
That's interesting and reassuring. I wonder how it fills in the missing lines in the right position during movement? Very clever.
-
Here is what I see, and it is HD live coverage...
[Attachment 43244]
It is cropped from 1920x1080.
And to your OP question: of course progressive is better. I think it is because x264 compresses progressive material better than interlaced material (it is more efficient), so at the same bitrate you get better quality with progressive frames.
Bernix
-
It depends on the set.
A "cheap" flat panel uses a simple bob resize. No additional processing; only the vertical field offset is compensated for. If you pause the playback, you will see jaggy lines from the field => frame resizing. But in motion, 99% of people won't be able to see those problems - it looks almost identical to something that was originally 50p in motion.
An expensive flat panel uses motion-adaptive deinterlacing + interpolation algorithms. If you pause the picture it looks like a real frame. If you've ever seen it side by side, it actually looks better than an expensive CRT.
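For reference, the "cheap" bob described above amounts to little more than this (a toy numpy sketch - real sets also compensate the half-line field offset and usually interpolate rather than line-double):
Code:
import numpy as np

def bob_deinterlace(frame: np.ndarray):
    # split one interlaced frame into its two fields (two moments in time)
    top    = frame[0::2, :]   # field 1
    bottom = frame[1::2, :]   # field 2, 20 ms later
    # stretch each field back to full height by line doubling
    f1 = np.repeat(top, 2, axis=0)
    f2 = np.repeat(bottom, 2, axis=0)
    return f1, f2  # shown one after the other -> 50 pictures per second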
-
Depends on HW and SW capabilities. Low-priced sets usually just repeat frames, so there is no real conversion; more expensive ones may use so-called motion interpolation, but it is only interpolation and can sometimes be very low quality (in fact degrading the perceived quality of the source rather than improving it) - remember, Philips with its motion plus technology was quite poor at this... Some people, pursuing a faithful reproduction of the original, try to disable such functionality (if it exists); for them some sets have a special Film (Cinema) mode where 24p stays 24p.
Btw, progressive displays such as PDP, LCD, OLED, DLP etc. use memory to store the frame, so they can accept any incoming framerate (the limit is only in the HW) - the native refresh rate can be way higher than the source (some of them may refresh at even 800-1000 fps).
-
In the UK, it's always displayed at 50p or some integer multiple of it (e.g. 200 Hz, etc.). Each 25p frame is held twice as long for 50p, so in effect you have a display with duplicate frames to make up the 50p. Motion is the same as 25p.
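Frame duplication really is that simple - a tiny sketch with placeholder frames (names assumed):
Code:
frames_25p = ["f%d" % i for i in range(25)]             # one second of 25p
frames_50p = [f for f in frames_25p for _ in range(2)]  # each frame shown twice
assert len(frames_50p) == 50                            # motion still looks like 25p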
But if you meant motion interpolation, as pandy was referring to, they synthesize and generate "in-between" frames for smoother motion. It's not perfect, and on some content it can look very bad. For "film" content that isn't supposed to be "smooth" it can be distracting - it gives a "soap opera" type look. Many people disable it.
So again, it depends on the set. Look for these features:
Samsung - "Auto Motion Plus"
Sony - "Motionflow"
LG - "TruMotion" -
So it's better to film interlaced, because the interpolation will be more accurate than with progressive?
-
What do you mean by "film"? I'm guessing you mean figuratively, not real celluloid?
If you mean live action, like sports or news, not a theatrical piece, then you usually want to shoot 50p if you can - that is, if you have enough bandwidth or good enough equipment.
50p is more accurate than "50 fields per second interlaced". They both have 50 moments in time per second represented. But the interlaced version has only 1/2 the spatial information; each field is like 1/2 a frame.
You can always convert 50p to interlaced quite easily. It's really "dumbing it down", but to do it properly you need to apply vertical blur to reduce the line twitter artifacts. So not only do you discard 1/2 the resolution right off the bat, you tend to lose more when processing it.
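A rough numpy sketch of that 50p -> 25i conversion (the blur here is a deliberately crude neighbour average; a real workflow would use a proper low-pass filter):
Code:
import numpy as np

def p50_to_i25(frames_50p):
    # pair up consecutive 50p frames; each pair becomes one interlaced frame
    out = []
    for a, b in zip(frames_50p[0::2], frames_50p[1::2]):
        # crude vertical low-pass to tame line twitter before discarding lines
        a = (a.astype(np.float32) + np.roll(a, 1, axis=0)) / 2.0
        b = (b.astype(np.float32) + np.roll(b, 1, axis=0)) / 2.0
        woven = np.empty_like(a)
        woven[0::2, :] = a[0::2, :]  # top field = moment 1
        woven[1::2, :] = b[1::2, :]  # bottom field = moment 2, 20 ms later
        out.append(woven.astype(frames_50p[0].dtype))
    return out  # 25 interlaced frames/second, half the vertical detail per moment
-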
Yeah, I mean video, not a strip of plastic with pictures on it (though I love film as a medium). I'm an amateur filmmaker and not interested in sport, so I don't need the high framerate needed to follow a ball. So I'm talking about 25 pictures a second. I'm just interested in whether 25i or 25p gives better quality when "interpolated" by a flatscreen, and which can be better upgraded to a higher framerate in the future. From what you've said, it seems that 25i will give better motion than 25p when extra pictures are generated to upgrade it to 50p?
-
Shoot 50p.
Shooting interlaced 25i makes no sense since you can easily shoot 50p nowadays.
And shooting 25p is NOT easy. You get blurry pictures or the images will strobe. You need ND filters for daytime shooting, and your camera movement is restricted. Look it up on the internet - shooting 25p.
-
Same as _Al_ - shoot in 50p. 25p or 25i can be generated from 50p, so you can decide later which is better.
-
25i (as in 50 fields per second, interlaced) will have 2x the motion samples compared to 25p, so yes, "25i" will interpolate better in terms of motion quality if you need higher framerates (e.g. 100 fps). But it also has 1/2 the vertical resolution (if you have 1920x1080i, each field is only 1920x540, whereas a progressive frame is 1920x1080). So picture quality is arguably worse for interlaced. If you are shooting a film / cinema / theatrical piece, you usually don't want to shoot interlaced.
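The back-of-envelope numbers behind that trade-off (PAL-style rates assumed):
Code:
w, h = 1920, 1080
print("25p:", 25, "motion samples/s,", w * h,      "pixels each")  # 2,073,600
print("25i:", 50, "motion samples/s,", w * h // 2, "pixels each")  # 1,036,800
print("50p:", 50, "motion samples/s,", w * h,      "pixels each")
# 25p and 25i have the same raw pixel rate; 50p costs twice as much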
If you know you're going to shoot "x" framerate with the intention of motion interpolation, then you usually want to use a faster shutter speed, because motion interpolation algorithms work better with clean boundaries. A common example is shooting for slow motion (a high FPS, but played back at a lower FPS). But then motion at the current frame rate will look more "choppy" or "strobey", because the natural/normal shutter motion blur which makes everything smoother is reduced in that case.