VideoHelp Forum




  1. Member
    Join Date
    Apr 2004
    Location
    United States
    I've always had a difficult time fully understanding the various concepts related to interlaced and progressive video. There's one question that I think I have the answer to, but the answer doesn't seem to jibe with the general consensus I think I've observed, namely that progressive video is superior to interlaced.

    It could be that I have my facts wrong, and if so, please correct me.

    Interlaced video shows 60 "pictures" per second. Each picture is half the vertical resolution of the video that is displayed, and when each picture is displayed, a black horizontal line is inserted between each horizontal line in the picture. So if the video is 640x480, each picture making up that video is really only 640x240. I believe that 2 of these "half pictures" make up a "frame", although I'm a little unsure about that.

    Progressive video shows 30 pictures per second, each of which is the same size as the video resolution (if the video is 640x480, each picture in the video is 640x480). I'm pretty sure these pictures are called frames.
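
    To make my mental model concrete, here's a rough sketch (purely my own illustration, using numpy and made-up pixel data, not anything from a real decoder) of how I picture two 640x240 half pictures interleaving into one 640x480 frame:

    Code:
    import numpy as np

    # Illustration of my understanding: two half-height "pictures" (fields)
    # interleave line by line into one full 640x480 frame.
    HEIGHT, WIDTH = 480, 640

    top_field = np.random.randint(0, 256, (HEIGHT // 2, WIDTH), dtype=np.uint8)
    bottom_field = np.random.randint(0, 256, (HEIGHT // 2, WIDTH), dtype=np.uint8)

    frame = np.empty((HEIGHT, WIDTH), dtype=np.uint8)
    frame[0::2] = top_field       # scan lines 0, 2, 4, ... come from the top field
    frame[1::2] = bottom_field    # scan lines 1, 3, 5, ... come from the bottom field

    # Interlaced: 60 fields/s, each 640x240.  Progressive: 30 frames/s, each 640x480.
    print("field (h, w):", top_field.shape, "-> frame (h, w):", frame.shape)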

    Assuming the video source is interlaced (meaning it was recorded with a camera that captured 60 half pictures a second), wouldn't that interlaced video be superior to a video of the same exact thing that was filmed in a "progressive way" (with a camera that captured 30 full pictures a second), at least where the "fluidity" of motion is concerned? An object moving across the screen would be shown in 60 different positions per second in the interlaced video and only 30 different positions in the progressive video.

    I know that having each interlaced picture be half the resolution of the progressive picture probably negatively affects the quality of the video, but isn't there some benefit to interlacing? It always seems like 1080p is viewed as being superior to 1080i in every way, but if what I said above is true, is 1080i sometimes preferable, or at least could it be to some people's eyes?

    The only thing I can think of that would make progressive always better than interlaced (assuming bandwidth isn't an issue) is if progressive is actually 60 full pictures per second. But I don't think that's the case, right? I realize I should probably just look that one up myself.

    I know there are different frame rates like 24, 23.##, and 29.##, but to keep it simple I was just considering 30 frames per second.

    Anyone care to shed some light?
  2. Member edDV's Avatar
    Join Date
    Mar 2004
    Location
    Northern California, USA
    Short answer:
    Yes, 60 fields per second has better motion fluidity than 30p.

    60p, as in 1280x720p/60, has the best of both: smooth motion and full resolution, plus a clean still frame.

    480p and 1080p usually carry 24p film source, which has less motion fluidity; film-style shooting technique is required.

    1080p/60 uses so much bandwidth that it becomes impractical.
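
    As a rough back-of-the-envelope illustration of why (raw, uncompressed pixel rate only, at nominal frame rates; real broadcast rates are 1000/1001 slower and encoded bitrates differ):

    Code:
    # Raw pixel rate of the formats mentioned above.
    formats = {
        "1080i (60 fields/s, each 1920x540)": (1920, 540, 60),
        "720p/60": (1280, 720, 60),
        "1080p/24": (1920, 1080, 24),
        "1080p/60": (1920, 1080, 60),
    }
    for name, (w, h, images_per_sec) in formats.items():
        print(f"{name:38s} {w * h * images_per_sec / 1e6:7.1f} Mpixels/s")
    # 1080p/60 comes out to roughly double the raw picture data of the others.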

    Long answer will have to wait.
  3. Member
    Join Date
    Apr 2004
    Location
    United States
    Thanks for the info... it's starting to make a little more sense to me.

    Say I have a Blu-ray player playing a 24 fps film on a TV that's only capable of displaying 1080i. Does the Blu-ray player output 48 fields per second? If not, does it output 24 frames per second, and does the TV convert those to 48 fields per second for display?


    On a semi-related topic -- how does a non-CRT TV (LCD, for example) display interlaced video? From what I've read, I'm thinking it deinterlaces it, but I'm not sure why. Don't most flat panel TVs have a refresh rate of at least 60 Hz? If so, why can't they display 60 fields per second?

    If they do deinterlace it, and I'm converting a DVD that contains interlaced video to another video format, should I deinterlace it, or is the hardware deinterlacing done by the TV better than the software deinterlacing I would use (I'm using Handbrake)?
  4. Member edDV's Avatar
    Join Date
    Mar 2004
    Location
    Northern California, USA
    Originally Posted by Brent212
    Thanks for the info... it's starting to make a little more sense to me.

    Say I have a Blu-ray player playing a 24 fps film on a TV that's only capable of displaying 1080i. Does the Blu-ray player output 48 fields per second? If not, does it output 24 frames per second, and does the TV convert those to 48 fields per second for display?
    A Blu-ray player can output in several ways, all of which end up as 1080p on a 1080p TV.

    In 1080i output mode, the 24p material from the disc is telecined to 1080i/29.97 over HDMI. Then the TV inverse telecines back to 23.976p.

    A "60Hz" HDTV will then repeat the frames in a 2:3:2:3 cadence to reach 59.94 fps (do the math) and display 59.94p.

    A "120Hz" HDTV will repeat frames 5:5:5:5 to reach 119.88 fps, or will interpolate intermediate frames to attempt smoother motion.

    A newer HDTV will accept a direct 23.976p feed from the Blu-ray player, skipping the telecine/inverse telecine step entirely. In theory both paths give the same quality display from a Blu-ray disc, since the telecine/inverse telecine round trip should be lossless.
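
    For the cadence arithmetic above, here's a quick sketch (plain Python with exact rational rates; purely illustrative) of how 2:3 repetition turns 23.976 film frames into 59.94 displayed images, and how a "120Hz" set reaches 119.88:

    Code:
    from fractions import Fraction

    FILM_RATE = Fraction(24000, 1001)          # 23.976... frames per second

    # 2:3 cadence: alternate film frames are held for 2 and 3 display ticks.
    cadence = [2, 3]
    images_per_film_frame = Fraction(sum(cadence), len(cadence))   # 5/2

    print(float(FILM_RATE * images_per_film_frame))   # 59.94... -> "60Hz" panel
    print(float(FILM_RATE * 5))                       # 119.88... -> "120Hz" panel, 5:5 repeat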
  5. Member edDV's Avatar
    Join Date
    Mar 2004
    Location
    Northern California, USA
    Originally Posted by Brent212
    On a semi-related topic -- how does a non-CRT TV (LCD, for example) display interlaced video? From what I've read, I'm thinking it deinterlaces it, but I'm not sure why. Don't most flat panel TVs have a refresh rate of at least 60 Hz? If so, why can't they display 60 fields per second?

    If they do deinterlace it, and I'm converting a DVD that contains interlaced video to another video format, should I deinterlace it, or is the hardware deinterlacing done by the TV better than the software deinterlacing I would use (I'm using Handbrake)?
    While it would be possible to build an interlaced LCD display, none exist. They are all progressive, to allow scaling.

    Thus interlaced video needs to be deinterlaced or inverse telecined (for film source). Deinterlace quality varies from HDTV to HDTV. Generally, the newer the TV or the higher the price class, the better the deinterlacer. The main explanation for the difference in price between similar-size HDTV sets is the quality of the deinterlacer/scaler. This mainly affects SD source; deinterlacing 1080i isn't as difficult.

    In almost all cases the real-time hardware deinterlacer in the TV will perform better than non-real-time software deinterlacers, so interlaced video should stay interlaced when you author it to disc. Future HDTV deinterlacers will get even better.
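
    To show what the simplest kind of deinterlacer is doing, here is a minimal "bob" sketch (my own illustration, assuming numpy and a single 8-bit luma field; real TV deinterlacers are motion adaptive and far more elaborate, and how they fill in the missing lines is where the quality gap between sets comes from):

    Code:
    import numpy as np

    def bob_field(field: np.ndarray, top: bool = True) -> np.ndarray:
        """Simple 'bob' deinterlace: expand one field to a full-height frame.

        The field's lines go to alternate scan lines; the missing lines are
        filled by averaging the field lines above and below (edges copied).
        """
        h, w = field.shape
        frame = np.zeros((2 * h, w), dtype=field.dtype)
        own = 0 if top else 1                 # scan lines this field really carries
        frame[own::2] = field

        # Interpolate the lines the other field would have carried.
        padded = np.vstack([field[:1], field, field[-1:]]).astype(np.float32)
        interp = (padded[:-1] + padded[1:]) / 2
        if top:
            frame[1::2] = interp[1:].astype(field.dtype)    # lines below each field line
        else:
            frame[0::2] = interp[:-1].astype(field.dtype)   # lines above each field line
        return frame

    # 59.94 fields/s in -> 59.94 full frames/s out when every field is bobbed.
    demo_field = np.random.randint(0, 256, (240, 640), dtype=np.uint8)
    print(bob_field(demo_field).shape)        # (480, 640)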
  6. Member jagabo
    Originally Posted by edDV
    While it would be possible to build an interlaced LCD display, none exist. They are all progressive, to allow scaling.
    Actually, they could just as well leave the alternate field black. But the display would only be half as bright, because only half the cells on the LCD face would be lit at any one time. And, of course, the display would flicker just like a CRT. On a CRT TV, having only half the scan lines lit isn't as much of a problem because the electron beam isn't so highly focused -- it covers almost two scan lines in height. I've tested this in the past on several CRT TVs with a video that has one field black all the time and the other white all the time. You don't really see the black scan lines. (The display flickers like hell, though!)

    Originally Posted by edDV
    In almost all cases the real-time hardware deinterlacer in the TV will perform better than non-real-time software deinterlacers
    I disagree with this. Hardware deinterlacers in HDTVs aren't especially good. Many are just simple bobs, especially with moving images (like football and soccer games). After all, they are simply implementing the same deinterlacing algorithms that were implemented in software first. None does as well as the smartest software deinterlacers, like AviSynth's TempGaussMC_beta1(), because that kind of processing is too compute-intensive to run in real time.

    Originally Posted by edDV
    Future HDTV deinterlacers will get even better.
    This is the real reason not to deinterlace your video. There's no point in deinterlacing during production when TVs can do an adequate job at playback. Deinterlacing now, even with a very good deinterlacer, would be screwing up your video forever (i.e., there is no perfect deinterlacer). As TVs get better, and in the future when they are even better than the best software algorithms of today, your already-deinterlaced video will not improve.

    Also, deinterlacing from 30i to 30p leaves you with half the temporal resolution; motion will not be as smooth. Deinterlacing to 60p with a smart bob keeps the motion but requires twice as much bandwidth.
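
    Rough numbers for that trade-off (raw, uncompressed pixel rate for a hypothetical 720x480 interlaced source; encoded bitrates will of course differ):

    Code:
    # Back-of-the-envelope pixel rates for a 720x480 interlaced source
    # (59.94 fields per second), comparing the two deinterlacing choices.
    W, H = 720, 480
    FIELDS_PER_SEC = 59.94

    images_30p = FIELDS_PER_SEC / 2   # single-rate deinterlace: half the temporal samples
    images_60p = FIELDS_PER_SEC       # smart bob: every field becomes a full frame

    print(W * H * images_30p / 1e6, "Mpixels/s at ~30p")
    print(W * H * images_60p / 1e6, "Mpixels/s at ~60p (twice the raw data)")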
  7. Member edDV's Avatar
    Join Date
    Mar 2004
    Location
    Northern California, USA
    Originally Posted by jagabo
    Originally Posted by edDV
    In almost all cases the real-time hardware deinterlacer in the TV will perform better than non-real-time software deinterlacers
    I disagree with this. Hardware deinterlacers in HDTVs aren't especially good. Many are just simple bobs, especially with moving images (like football and soccer games). After all, they are simply implementing the same deinterlacing algorithms that were implemented in software first. None does as well as the smartest software deinterlacers, like AviSynth's TempGaussMC_beta1(), because that kind of processing is too compute-intensive to run in real time.
    I agree at the extreme end. Software deinterlacers can match or beat consumer hardware deinterlacers, but patents prevent the latest tricks from being used. The downside is processing time, which can be extreme. TV/DVD player deinterlacers are all real time.

    Hardware deinterlacers have a hierarchy that roughly follows parts and royalty cost.

    Budget HDTV sets and progressive DVD players (think Wal-Mart) use the cheapest chips and generic technology.

    Name-brand in-house processors come at several price levels. The extra cost is mostly royalties to patent owners and custom chip costs. Higher-end technologies require multiple fields or frames of memory. Last year's top-of-the-line processor usually becomes more affordable in its second year and is replaced at the top by new technology.

    The highest end consumer products borrow or scale down technology from pro level gear.

    Many of the key patents, dating from the '70s through the '90s, eventually expire and become available to generic chip makers, so the low end improves as well.


