VideoHelp Forum
  1. Member (Join Date: Jul 2009, Location: Spain)
    Originally Posted by mathmax View Post
    so.. I compare:
    Code:
    SeparateFields()
    Code:
    AssumeTFF()
    Bob()
    SeparateFields()
    SelectEvery(4,0,3)
    the last ones are blurrier; both have lost quality.
    A quirk of Bob() is that it always resets the field order of the output clip to the default BFF, so in your test above you are seeing only interpolated fields - you need to repeat AssumeTFF() after calling Bob().

    However, Bob() with default parameters does not preserve the original pixels, as it uses BicubicResize which is not a pure interpolator (and blurs slightly) unless called with b=0.
    To preserve original pixels, use Bob(b=0.0, c=1.0) (or any other value of c).
    Even then, chroma pixels of YV12 are not preserved, because of a flaw in the implementation which leads to a slight (normally imperceptible) chroma shift (see here).
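    Putting those two points together, a minimal sketch of the corrected test (my combination of the fixes described above; a TFF source is assumed):
    Code:
    AssumeTFF()
    Bob(b=0.0, c=1.0)   # b=0 makes BicubicResize a pure interpolator, so the original lines are kept
    AssumeTFF()         # Bob() resets the field order to BFF, so restate TFF here
    SeparateFields()
    SelectEvery(4,0,3)  # now selects the original (non-interpolated) fields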
  2. Member (Join Date: Jul 2009, Location: Spain)
    Originally Posted by edDV View Post
    Re-interlacing will interpolate new fields matched to progressive frame pairs. The first frame is re-interpolated to 720x240 field one and the second is re-interpolated to 720x240 field two. This involves re-sampling horizontal lines alternately back to 720x480i.
    I think it is inaccurate to talk about 're-interpolating' here (at least the way it is normally done in Avisynth scripts, where you simply take every second line and throw away the others). Do some editors actually reinterpolate?
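    (For reference, a minimal sketch of the usual Avisynth re-interlace referred to here; it assumes a 59.94p bobbed clip and a TFF target, and involves no new interpolation:)
    Code:
    AssumeTFF()
    SeparateFields()
    SelectEvery(4, 0, 3)  # take every second line of each progressive frame, throw the rest away
    Weave()               # rebuild 29.97i frames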

    Originally Posted by edDV View Post
    There may be an argument for other filters using 29.97p processing or Jagabo may come in to tell us we are destroying the chroma with these translations.
    Why should it destroy the chroma (assuming the deinterlacing is done properly)?
  3. Member (Join Date: Jan 2006, Location: United States)
    I’ve been reading this thread with great interest. I don’t quite understand all of it, but I am learning. However, it makes me think about a situation that may be related.

    I have two Panasonic stand-alone DVD recorders, one with NTSC/ATSC/QAM tuners. Several years ago I set them to output 480p component to the HDTV, which looked good when playing commercial DVDs. But when I recorded from Comcast cable (which I think was analog NTSC at the time) and played it back, the picture was noticeably better when I selected 480i component output. Commercial DVDs looked the same either way, so I left the setting at 480i component.

    Now I’m sort of curious as to what might have been happening.

    Perhaps the commercial DVDs that I looked at were 23.976fps with pulldown and the Panasonic played these back properly at 480p and 480i. And my recordings were 29.97fps but the Panasonic could not properly de-interlace these to progressive.

    Any thoughts or comment on this?
  4. Originally Posted by Mike99 View Post
    Perhaps the commercial DVDs that I looked at were 23.976fps with pulldown and the Panasonic played these back properly at 480p and 480i.
    Yes.

    Originally Posted by Mike99 View Post
    And my recordings were 29.97fps but the Panasonic could not properly de-interlace these to progressive.
    Yes. The deinterlacer in your TV is superior to the deinterlacer in the DVD recorder player.
  5. Originally Posted by Gavino View Post
    Originally Posted by mathmax View Post
    so.. I compare:
    Code:
    SeparateFields()
    Code:
    AssumeTFF()
    Bob()
    SeparateFields()
    SelectEvery(4,0,3)
    the last ones are blurrier; both have lost quality.
    A quirk of Bob() is that it always resets the field order of the output clip to the default BFF, so in your test above you are seeing only interpolated fields - you need to repeat AssumeTFF() after calling Bob().

    However, Bob() with default parameters does not preserve the original pixels, as it uses BicubicResize which is not a pure interpolator (and blurs slightly) unless called with b=0.
    To preserve original pixels, use Bob(b=0.0, c=1.0) (or any other value of c).
    Even then, chroma pixels of YV12 are not preserved, because of a flaw in the implementation which leads to a slight (normally imperceptible) chroma shift (see here).
    Thank you very much for these explanations.

    Did you see the comparison with Yadif(mode=1) and SeparateFields() + resize? I'm wondering whether it is really better to apply filters to the interpolated fields, knowing that they'll be reduced to 240 lines for reinterlacing and then reinterpolated by the player.

    Originally Posted by mathmax View Post
    Now, if I compare Yadif(mode=1) and separatefields() + lanczosresize(width, 480), I realize that the resized field looks better than the interpolated one.

    resized:
    http://img341.imageshack.us/img341/3774/separate0000.png

    interpolated:
    http://img210.imageshack.us/img210/7062/yadif0000.png

    Of course that doesn't take the motion into account.. and focuses on the quality of a single frame. But that makes me wonder if it's ideal to apply spatial filters on interpolated fields... I risk introducing errors on the untouched lines. And even if the interpolated field looks nice after I apply the filter, I'm not sure it'll still look so nice after the reinterlacing and bob() performed by the player... I mean, since half of the lines will be dropped and reinterpolated, the final interpolated frame might look different...

    So.. maybe this technique is better for motion. But is it really better if you consider the quality of each frame?
    yadifmod(edeint=nnedi3()) gives me a nice result on which I can apply my filters, and I know they'll not be damaged by future interpolations. Of course I lose half of the frame rate.. but since I don't notice my video being jerky, I wonder if it's really worth conserving 59.94 motion samples per second...
    Also, I noticed that the real-time deinterlacing in VLC or PowerDVD is not so good.. and it's much better when I apply my own deinterlacer before encoding. That's also why I wanted to render NTSC - 480p.

    I'm quite satisfied with YadifMod(edeint=nnedi3()), which gives me a better result than Yadif().. But I wonder if this filter can be used to deinterlace to double frame rate, or if there is a better bobber than Yadif(). I look forward to any suggestions.
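    (For what it's worth, YadifMod accepts the same double-rate mode parameter as Yadif, so it can be used as a bobber; a minimal sketch, assuming the YadifMod and NNEDI3 plugins are loaded and a TFF source:)
    Code:
    AssumeTFF()
    YadifMod(order=1, mode=1, edeint=nnedi3(field=-2))  # 59.94 fps bobbed output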
  6. Using a motion compensated bob like QTGMC is better than using any interpolated bob. Optimally, what you want is for the lines above and below each scanline to be real data. Not data from two lines away, or data interpolated from two lines away. Although QTGMC doesn't perfectly retain the original pixels from the current field, you still generally get better results after later filtering.
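    A minimal usage sketch (QTGMC is an external Avisynth script with several plugin dependencies; the preset chosen here is just one option):
    Code:
    AssumeTFF()
    QTGMC(Preset="Slower")   # motion-compensated bob to 59.94p
    # apply spatial filtering here, on the cleaner bobbed frames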
  7. Originally Posted by jagabo View Post
    Using a motion compensated bob like QTGMC is better than using any interpolated bob. Optimally, what you want is for the lines above and below each scanline to be real data. Not data from two lines away, or data interpolated from two lines away. Although QTGMC doesn't perfectly retain the original pixels from the current field, you still generally get better results after later filtering.
    thank you
    What about TempGaussMC, MVBob or MCBob? Are they any better?

    And do you know why the deinterlacing is not so good in VLC or PowerDVD? Which algorithm is used? Wouldn't you advise deinterlacing before encoding, in order to avoid the bad deinterlacing methods of the players?
  8. Member (Join Date: Jan 2006, Location: United States)
    Originally Posted by jagabo View Post
    Originally Posted by Mike99 View Post
    Perhaps the commercial DVDs that I looked at were 23.976fps with pulldown and the Panasonic played these back properly at 480p and 480i.
    Yes.

    Originally Posted by Mike99 View Post
    And my recordings were 29.97fps but the Panasonic could not properly de-interlace these to progressive.
    Yes. The deinterlacer in your TV is superior to the deinterlacer in the DVD recorder player.
    Thank you for the reply.
  9. Originally Posted by mathmax View Post
    I'm wondering whether it is really better to apply filters to the interpolated fields, knowing that they'll be reduced to 240 lines for reinterlacing and then reinterpolated by the player.
    If you're reinterlacing, the interpolated fields will be tossed out anyway, so what difference does it make what you do to them during the filtering stage? And nothing gets 'reduced' to 240. Each field of the bobbed frame already consists of 240 rows of pixels (every other row of the 480p bobbed frame).

    Yes, if you're keeping the bob for one reason or another, then QTGMC is about as good as it gets.
    Originally Posted by mathmax View Post
    Wouldn't you advise deinterlacing before encoding, in order to avoid the bad deinterlacing methods of the players?
    Which players? I watch my DVDs through a standalone DVD player outputting to my TV set. Don't you? Or do you have some sort of an HTPC setup? These days the deinterlacers of standalone players and TV sets are usually pretty decent.
  10. Originally Posted by mathmax View Post
    What about TempGaussMC, MVBob or MCBob? Are they any better?
    QTGMC is basically an update to TempGaussMC. I haven't used MVBob or MCBob enough to have an opinion on them.

    Originally Posted by mathmax View Post
    And do you know why the deinterlacing is not so good in VLC or PowerDVD? Which algorithm is used?
    I don't know about PowerDVD. But VLC has several choices; I think the default is a blend or a simple bob. Keep in mind that whatever it does has to be done in real time, so its Yadif may cut corners.

    Originally Posted by mathmax View Post
    Wouldn't you advise deinterlacing before encoding, in order to avoid the bad deinterlacing methods of the players?
    It depends on what you're playing on. 30i material can't be encoded to DVD as 60p, for example. Encoding and playback of 60p requires more CPU power than 30i or 30p.
  11. Originally Posted by manono View Post
    If you're reinterlacing, the interpolated fields will be tossed out anyway, so what difference does it make what you do to them during the filtering stage?
    The interpolated fields of the bobbed frames will be dropped for reinterlacing, so of course we don't care about them... but when I run a spatial filter on the bobbed frames, they are taken into account, and the clean fields will be spoiled by the "errors" of the interpolated fields. If I compare yadif(mode=1) and QTGMC(), the bobbed frames look much better in the second case. Of course QTGMC will not perfectly retain the original fields, but at least the filters will be applied more accurately, on sharp and clean frames.
    Of course there are advantages and disadvantages to each method, but I have the feeling that if I want to use a strong spatial filter, I should rather use QTGMC(), because otherwise the interpolated lines will rub off on the clean ones... I don't have enough experience to know the extent of the problem, though.

    Originally Posted by manono View Post
    Which players? I watch my DVDs through a standalone DVD player outputting to my TV set. Don't you? Or do you have some sort of an HTPC setup? These days the deinterlacers of standalone players and TV sets are usually pretty decent.
    I want to watch on a PC too. I was mentioning VLC and PowerDVD, and the relevance of deinterlacing before encoding in order to avoid the real-time deinterlacers of such players.

    Originally Posted by jagabo View Post
    It depends on what you're playing on. 30i material can't be encoded to DVD as 60p, for example. Encoding and playback of 60p requires more CPU power than 30i or 30p.
    I thought edDV said that DVD doesn't support 60p and that it was necessary to reinterlace. If the DVD format and the players supported it, I would rather keep my bobbed frames untouched after filtering... that would avoid the problem I mentioned above. But should I set any particular option for the encoding?
  12. Originally Posted by mathmax View Post
    Originally Posted by jagabo View Post
    It depends on what you're playing on. 30i material can't be encoded to DVD as 60p, for example. Encoding and playback of 60p requires more CPU power than 30i or 30p.
    I thought edDV said that DVD doesn't support 60p and that it was necessary to reinterlace.
    That's what I said (highlighted above). The rest was referring to other types of media players.
  13. Originally Posted by jagabo View Post
    Originally Posted by mathmax View Post
    Originally Posted by jagabo View Post
    It depends on what you're playing on. 30i material can't be encoded to DVD as 60p, for example. Encoding and playback of 60p requires more CPU power than 30i or 30p.
    I thought edDV said that DVD doesn't support 60p and that it was necessary to reinterlace.
    That's what I said (highlighted above). The rest was referring to other types of media players.
    Oh, sorry... I didn't read it properly.
  14. Originally Posted by mathmax View Post
    If I compare yadif(mode=1) and QTGMC(), the bobbed frames look much better in the second case.
    Yes, but that hardly supports your case for using QTGMC for the bob, since only one field from each frame will be used for reinterlacing. Any 'errors' that slip over into the original Yadif field will be pretty small, I'd guess, and the speed of Yadif more than makes up for any 'deterioration' occurring within that field from using a spatial filter, given that the 'good' field of a QTGMC bob already starts out with some 'deterioration' compared to the source. But it's your encode, so do it any way you like. Just don't try to use a 60p source for your DVD, as your encoder will either reject it or give you back some horribly slowed-down video.
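    A rough sketch of the Yadif-based route being described here (a TFF source is assumed, and the spatial filter below is only a hypothetical placeholder):
    Code:
    AssumeTFF()
    Yadif(mode=1)           # fast bob to 59.94p
    # SomeSpatialFilter()   # hypothetical placeholder for whatever filtering you apply
    AssumeTFF()             # restate the field order to be safe before re-interlacing
    SeparateFields()
    SelectEvery(4, 0, 3)    # keep only the lines that came from the source fields
    Weave()                 # back to 29.97i for the DVD encode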
  16. DVD doesn't support 29.97 fps progressive encoding. Encode interlaced.
    I have learned a lot from this thread and am very interested in the conversation. Thank you to everyone for contributing to this subject.

    My question is: if you had high-motion 29.97p source footage and were going to widely release a DVD title using that footage as your source, what pre-processing and/or encoding settings would you recommend to achieve the best results in the marketplace at large?

    In my particular situation I have 1920x1080 29.97p high-motion footage that I need to downconvert to 720x480 for DVD, and the disc will be replicated and widely distributed.

    I have tried downconverting to 480p and 480i, and have tried encoding the 480p both progressive and interlaced, and I get mixed results when testing on a wide variety of set-top and software DVD players.

    The 480p output is the easiest from a production standpoint, but I have found that "dumb" software DVD players in particular have a user-selected de-interlace feature that is turned on by default, and the software player ends up de-interlacing the 480p footage, making it look pretty bad (speaking subjectively).

    With 480i output I have to create fake frames/motion data to achieve 480i, because the 29.97p material doesn't have the motion data available to create true 480i footage, so I have to convert the footage to 60p and then down to 29.97i, which also has some drawbacks.

    I would love to hear any thoughts & recommendations you all may have.

    Thank you!
  19. Originally Posted by jagabo View Post
    DVD doesn't support 29.97 fps progressive encoding. Encode interlaced.
    Thanks for the fast reply, that makes sense. Unless I hear a compelling contrary opinion, I will encode interlaced.
  20. There's no need to create intermediate frames. Just encode the progressive frames as if they are interlaced.
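    In Avisynth terms, a minimal sketch of that route for the 1080p29.97 source discussed above (the resizer is just one choice; the interlaced flag and the 16:9 display aspect ratio are set in the MPEG-2 encoder, not in the script):
    Code:
    # 1920x1080 29.97p source, scaled as ordinary progressive frames
    Spline36Resize(720, 480)
    # then encode at 29.97 fps with the MPEG-2 encoder set to interlaced (e.g. TFF)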
  21. Member (Join Date: Mar 2008, Location: United States)
    But it still leaves the question about the behavior of HCenc, which I mentioned in this very thread -
    https://forum.videohelp.com/threads/341667-NTSC-progressive-or-interlaced?p=2128005&vie...=1#post2128005
  22. Member (Join Date: Jan 2006, Location: United States)
    Is there a program that lets you see the individual fields within a frame?

    I was looking at a film animation that was converted to an NTSC DVD, and when I pause the video it appears to have what looks like a double image. Is it safe to presume that each image is a field? And that they were blended when converted from film to video?

    I’m presuming that if a 24 fps film is converted to 25 fps PAL, one film frame becomes one video frame. And if it’s an animation, then both fields should look the same. IOW, there would be no subject movement between the interlaced frames. Is this correct?

    But when 24 fps film is converted to NTSC, 4 frames have to be made into 5, and some blending will occur, either through 2:3 pulldown or by converting to 29.97 fps. Is this also correct?

    I was curious what the individual fields look like and was hoping there was a way to see them. I don’t plan on doing any restoration or editing, so I am looking for some free/cheap, easy-to-use software that lets me do that and satisfy my curiosity.

    Any suggestions would be appreciated.
    Thanks in advance.
  23. Originally Posted by Mike99 View Post
    Is there a program that lets you see the individual fields within a frame?
    SeparateFields() or Bob() in AviSynth; VirtualDub's Bob Doubler and Deinterlace (unfold fields side by side) filters. Be careful with VirtualDub: it has some problems handling interlaced YV12 sources.
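    For example, a minimal Avisynth sketch using only built-in filters:
    Code:
    SeparateFields()     # each output frame is a single half-height field
    # or, to view the fields at full height with the missing lines interpolated:
    # Bob(b=0.0, c=1.0)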

    Originally Posted by Mike99 View Post
    I was looking at a film animation that was converted to an NTSC DVD and when pausing the video it appears to have what looks like a double image. Is it safe to presume that each image is a field?
    Not really. Many things could have caused that.

    Originally Posted by Mike99 View Post
    And that they were blended when converted from film to video?
    No.

    Originally Posted by Mike99 View Post
    I’m presuming if a 24 fps film is converted to 25 fps PAL that one film frame becomes one video frame.
    Usually, but not always.

    Originally Posted by Mike99 View Post
    And if it’s an animation then both fields should look the same. IOW there would be no subject movement between the interlaced frames. Is this correct?
    If each video frame comes from one film frame, yes (except for slight up/down bounce between fields).

    Originally Posted by Mike99 View Post
    But when 24 fps film is converted to NTSC then 4 frames have to be made into 5 frames and some blending will occur, either through 2:3 pulldown or converting to 29.97 fps. Is this also correct?
    Duplicating one frame out of every 4 isn't done often. 3:2 pulldown, yes. But that doesn't give blended frames. Each field is still purely from one film frame (how it's displayed is another matter).
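    (Side note: because each field comes from a single film frame, the original 23.976 fps film frames can usually be recovered with an inverse telecine; a sketch, assuming the TIVTC plugin is loaded:)
    Code:
    TFM()        # match fields back into full film frames
    TDecimate()  # drop the duplicate frames: 29.97 fps -> 23.976 fps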

    Originally Posted by Mike99 View Post
    I was curious what the individual fields look like
    They're just every other scan line of the frame.
  24. Member (Join Date: Jan 2006, Location: United States)
    jagabo -

    Thank you for all the information.
    I'll give it a try.
  25. Here's an example of how the fields are drawn on a CRT:

    https://forum.videohelp.com/threads/284952-Interlace-confusion?p=1721487&viewfull=1#post1721487

    You can see the original frame in the first post. In that GIF animation I filled the other field with black lines*. A bob filter will fill those black lines by interpolating between the lines above and below. SeparateFields() will remove the black lines, leaving a half-height image.

    * That is what happens on a CRT TV -- by the time a field is being drawn the previous field has faded away. CRT TVs usually draw the scan lines thicker than one line so that the alternate field isn't left black. It's partially overwritten by the current field. That reduces flicker.