A quirk of Bob() is that it always resets the field order of the output clip to the default, BFF, so in your test above you are seeing only interpolated fields; you need to repeat AssumeTFF() after calling Bob().
However, Bob() with default parameters does not preserve the original pixels: it uses BicubicResize, which is not a pure interpolator (it blurs slightly) unless called with b=0.
To preserve the original pixels, use Bob(b=0.0, c=1.0) (or any other value of c).
Even then, the chroma pixels of YV12 are not preserved, because a flaw in the implementation leads to a slight (normally imperceptible) chroma shift (see here).
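Putting those points together, a minimal sketch of a pixel-preserving bob for a TFF source (assuming the built-in Bob() of a recent AviSynth):

```avisynth
AssumeTFF()          # declare the true field order of the source
Bob(b=0.0, c=1.0)    # b=0 makes the bicubic kernel a pure interpolator, so original lines pass through
AssumeTFF()          # Bob() resets the clip to the default BFF, so restate TFF afterwards
```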
-
I think it is inaccurate to talk about 're-interpolating' here (at least the way it is normally done in Avisynth scripts, where you simply take every second line and throw away the others). Do some editors actually re-interpolate?
There may be an argument for other filters using 29.97p processing, or Jagabo may come in to tell us we are destroying the chroma with these translations. -
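For reference, the non-interpolating "take every second line" approach mentioned above looks something like this in AviSynth (a sketch, assuming a TFF source):

```avisynth
AssumeTFF()
SeparateFields()   # 720x480i -> 720x240 half-height fields at double rate
SelectEven()       # keep one field per frame and throw away the others; nothing is re-interpolated
```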
I’ve been reading this thread with great interest. I don’t quite understand all of it but am learning. However it makes me think about a situation which may be related.
I have two Panasonic stand alone DVD recorders, one with NTSC/ATSC/QAM tuners. Several years ago I set them to output 480p component to the HDTV which looked good when playing commercial DVDs. But when I recorded from Comcast cable (which I think was analog NTSC at the time) & played it back the picture was noticeably better when selecting 480i component output. And commercial DVDs looked the same either way, so I left the setting at 480i component.
Now I’m sort of curious as to what might have been happening.
Perhaps the commercial DVDs that I looked at were 23.976fps with pulldown and the Panasonic played these back properly at 480p and 480i. And my recordings were 29.97fps but the Panasonic could not properly de-interlace these to progressive.
Any thoughts or comment on this? -
-
Thank you very much for these explanations.
Did you see the comparison with Yadif(mode=1) and SeparateFields() + resize? I'm wondering whether it is really better to apply filters to the interpolated fields, knowing that they'll be reduced to 240 lines for reinterlacing and then reinterpolated by the player.
Also, I noticed that the real-time deinterlacing in VLC or PowerDVD is not so good, and it's much better when I apply my own deinterlacer before encoding. That's also why I wanted to render NTSC at 480p.
I'm quite satisfied with YadifMod(edeint=nnedi3()), which gives me a better result than Yadif(). But I wonder if this filter can be used to deinterlace to double frame rate, or if there is any better bobber than Yadif(). I look forward to any suggestions. -
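YadifMod does have a double-rate mode. A sketch of using it with nnedi3 as the spatial interpolator (assuming the nnedi3 plugin is loaded; field=-2 is my assumption for double-rate output following the clip's field order):

```avisynth
AssumeTFF()
# mode=1 = double-rate (bob) output; nnedi3(field=-2) supplies the spatial
# interpolation for both fields, replacing yadif's own interpolator
YadifMod(mode=1, edeint=nnedi3(field=-2))
```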
Using a motion compensated bob like QTGMC is better than using any interpolated bob. Optimally, what you want is for the lines above and below each scanline to be real data. Not data from two lines away, or data interpolated from two lines away. Although QTGMC doesn't perfectly retain the original pixels from the current field, you still generally get better results after later filtering.
-
thank you
What about TempGaussMC, MVBob or MCBob? Are they any better?
And do you know why the deinterlacing is not so good in VLC or PowerDVD? Which algorithm is used? Wouldn't you advise deinterlacing before encoding, in order to avoid the bad deinterlacing methods of the players? -
If you're reinterlacing, the interpolated fields will be tossed out anyway, so what difference does it make what you do to them during the filtering stage? And nothing gets 'reduced' to 240. Each field of the bobbed frame already consists of 240 rows of pixels (every other row of the 480p bobbed frame).
Yes, if you're keeping the bob for one reason or another, then QTGMC is about as good as it gets.
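A common filter-then-reinterlace pattern along these lines (a sketch, assuming a TFF source and the QTGMC plugin with its dependencies loaded):

```avisynth
AssumeTFF()
QTGMC(Preset="Slower")   # motion-compensated bob: 29.97i -> 59.94p
# ... apply spatial/temporal filters on the progressive frames here ...
AssumeTFF()
SeparateFields()         # back to half-height fields
SelectEvery(4, 0, 3)     # keep one field pair per original frame
Weave()                  # reinterlace to 29.97i
```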
Wouldn't you advise to deinterlace before encoding, in order to avoid the bad deinterlacing methods of the players? -
QTGMC is basically an update to TempGaussMC. I haven't used MVBob or MCBob enough to have an opinion on them.
I don't know about Power DVD. But VLC has several choices. I think the default is blend or a simple bob. Keep in mind that whatever it does has to be done in real time. So its yadif may cut corners.
It depends on what you're playing on. 30i material can't be encoded to DVD as 60p, for example. Encoding and playback of 60p requires more CPU power than 30i or 30p. -
The interpolated fields of the bobbed frames will be dropped for reinterlacing, so of course we don't care about them... but when I apply a spatial filter to the bobbed frames, they are taken into account, and the clean fields will be spoiled by the "errors" of the interpolated fields. If I compare Yadif(mode=1) and QTGMC(), the bobbed frames look much better in the second case. Of course QTGMC will not perfectly retain the original fields, but at least the filters will be applied more accurately on sharp and clean frames.
Of course there are advantages and disadvantages to each method, but I have the feeling that if I want to use a strong spatial filter, I should rather use QTGMC(), because otherwise the interpolated lines will rub off on the clean ones... I don't have enough experience to know the extent of the problem, though.
I want to watch on PC too. I was mentioning VLC and PowerDVD, and the relevance of deinterlacing before encoding in order to avoid the real-time deinterlacers of such players.
I thought edDV said that DVD doesn't support 60p and that reinterlacing was needed. If the DVD format and the players support it, I would rather keep my bobbed frames untouched after filtering; that would avoid the problem I mentioned above. But should I set any particular option for the encoding? -
-
Yes, but that hardly supports your case for using QTGMC for the bob since it's only one field from each frame that will be used for reinterlacing. Any 'errors' that slip over into the original Yadif field will be pretty small, I'd guess, and the speed of Yadif more than makes up for any 'deterioration' occurring within that field from using a spatial filter, given that the 'good' field of a QTGMC bob already starts out with some 'deterioration' as compared to the source. But it's your encode so do it any way you like. Just don't try to use a 60p source for your DVD as your encoder will either just spit it out or give you back some horribly slowed down video as a result.
-
I have learned a lot from this thread and am very interested in the conversation. Thank you to everyone for contributing to this subject.
My question is: if you had high-motion 29.97p source footage and were going to widely release a DVD title using it as your source, what pre-processing and/or encoding settings would you recommend to achieve the best results in the marketplace at large?
In my particular situation I have 1920x1080 29.97p high motion footage that I need to down convert to 720x480 for DVD and the disc will be replicated and widely distributed.
I have tried down converting to 480p and 480i and have tried encoding the 480p both progressive and interlaced and get mixed results when testing on a wide variety of set-top and software DVD players.
The 480p output is the easiest from a production standpoint, but I have found that "dumb" software DVD players in particular have a user-selectable deinterlace feature that is turned on by default, and the player ends up deinterlacing the 480p footage, making it look pretty bad (speaking subjectively).
With 480i output I have to create fake frames/motion data to achieve 480i, because the 29.97p material doesn't have the motion data available to create true 480i footage, so I have to convert the footage to 60p and then down to 29.97i, which has some drawbacks as well.
I would love to hear any thoughts & recommendations you all may have.
Thank you! -
-
There's no need to create intermediate frames. Just encode the progressive frames as if they are interlaced.
-
But it still leaves the question about the behavior of HCenc, which I mentioned in this very thread -
https://forum.videohelp.com/threads/341667-NTSC-progressive-or-interlaced?p=2128005&vie...=1#post2128005 -
Is there a program that lets you see the individual fields within a frame?
I was looking at a film animation that was converted to an NTSC DVD and when pausing the video it appears to have what looks like a double image. Is it safe to presume that each image is a field? And that they were blended when converted from film to video?
I’m presuming if a 24 fps film is converted to 25 fps PAL that one film frame becomes one video frame. And if it’s an animation then both fields should look the same. IOW there would be no subject movement between the interlaced frames. Is this correct?
But when 24 fps film is converted to NTSC then 4 frames have to be made into 5 frames and some blending will occur, either through 2:3 pulldown or converting to 29.97 fps. Is this also correct?
I was curious what the individual fields look like & was hoping there was a way to do that. I don’t plan on doing any restoration or editing so am looking for some free/cheap easy to use software that lets me do that & satisfy my curiosity.
Any suggestions would be appreciated.
Thanks in advance. -
SeparateFields(), Bob() in AviSynth. VirtualDub's Bob Doubler and Deinterlace (unfold fields side by side) filters. Be careful with VirtualDub; it has some problems handling interlaced YV12 sources.
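For example, a minimal AviSynth script to step through the individual fields of a clip (a sketch, assuming a 720x480 TFF source):

```avisynth
AssumeTFF()
SeparateFields()    # each output frame is now one 720x240 field
# Optionally double the height without blending, so fields are easier to compare:
# PointResize(720, 480)
```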
Not really. Many things could have caused that.
No.
Usually, but not always.
If each video frame comes from one film frame, yes (except for slight up/down bounce between fields).
Duplicating one frame out of every 4 isn't done often. 3:2 pulldown, yes. But that doesn't give blended frames. Each field is still purely from one film frame (how it's displayed is another matter).
They're just every other scan line of the frame. -
Here's an example of how the fields are drawn on a CRT:
https://forum.videohelp.com/threads/284952-Interlace-confusion?p=1721487&viewfull=1#post1721487
You can see the original frame in the first post. In that GIF animation I filled the other field with black lines*. A bob filter will fill those black lines by interpolating between the lines above and below. SeparateFields() will remove the black lines leaving a half high image.
* That is what happens on a CRT TV -- by the time a field is being drawn the previous field has faded away. CRT TVs usually draw the scan lines thicker than one line so that the alternate field isn't left black. It's partially overwritten by the current field. That reduces flicker.
Last edited by jagabo; 2nd Feb 2012 at 07:16.