I guess I still don't fully understand the film-to-digital process, and every so often I've come across a video such as the attached sample and I've never quite been able to get my head around how it works. Today I thought I'd ask......
It's video which looks purely interlaced. Yadif de-interlacing works exactly as expected and the output is 25fps progressive. However when selecting "full frame rate" or "50fps de-interlacing" or "yadif with bob"..... however you might describe it.... the output is 50fps but in reality it's 25fps with every frame repeated.
Logically then, it seems it's actually progressive video which has been scanned as interlaced, or something along those lines, but is that correct? Is that how it works? Maybe I'm just being thick but I'd like to properly understand how the sample seems to be both interlaced and progressive, so to speak.
Cheers.
-
Yes, it's progressive content, but encoded interlaced. That's common for PAL DVDs, for example.
It looks "combed" because of the field shifting. A simple TFM() will realign them -
Cheers.
So I'd assume TFM() would be the preferable method rather than de-interlacing? The quality of the clip obviously isn't particularly good but I can still see a difference between TFM() and Yadif even if it's not possible to say one looks noticeably better (in this case), but I'd imagine in theory de-interlacing might potentially blur a little?
For years I would have assumed the sample was interlaced and just de-interlaced it, or I'd have let a GUI such as AutoGK or MeGUI analyse it and decide it needs de-interlacing. It's really only after switching to "full frame rate" de-interlacing in more recent times that I've noticed what appears to be interlaced video sometimes probably isn't and therefore thought about it. Not that I've re-encoded a whole lot of that type of video (I don't think) but it's kind of disappointing to think I might have been handling it "wrong".
What's the point of scanning progressive video as interlaced? The DVD from which the sample was taken is mostly progressive, it's just one or two of the "extras" which were scanned as interlaced.
Thanks. -
Yes, field-matching is almost always preferable to just deinterlacing, particularly deinterlacing with Yadif. Sometimes, as you know, QTGMC can do the job well when you want other things done than just field-matching (like 'cleanup').
...but I'd imagine in theory de-interlacing might potentially blur a little?
What's the point of scanning progressive video as interlaced?
The DVD from which the sample was taken is mostly progressive... -
That's the part where I'm still a little confused. By "mostly progressive" I mean "looks progressive".
I just copied/indexed a vob file, which as it turned out contained the end of one "episode" and the beginning of another (I think it's the original B&W version, then the colourised version). DGIndex reports it as being interlaced. I guess my remaining confusion comes from previewing the video (DGIndex/MeGUI). The B&W version shows no combing, looks and smells progressive, and would be my definition of progressive. I'm pretty sure if MeGUI analysed the B&W version on its own (re-authored DVD) it would have reported it as being progressive.
The colour version looks interlaced. Combing in every frame. TFM() in the script fixes the colour half while appearing to have no effect on the first half. Well, the colour version has some horrible frame blending and I haven't played with it much (no interest in re-encoding it), but that aside TFM() seems to make it progressive again. Yadif definitely blurs the first half a little (fine detail) and also the second half at times. Yadif "with bob" outputs 50fps that's really 25fps with each frame repeated, in both cases.
So I guess it's the "how's it scanned/stored" part I'm still not fully understanding. A single vob file, the first half appearing to contain progressive video, the second half containing (I assume) progressive video scanned as interlaced. It's the "why is it done different ways" part I'm still not clear on.
That's why (the first part looking progressive) I've always just assumed the video was interlaced if it looked interlaced, and the whole "progressive video scanned as interlaced" penny hadn't dropped until recently. Now it has, the concept seems monumentally obvious and as a result reminds me I'm fairly silly...... it's really quite depressing when you're just smart enough to know how smart you aren't. A few IQ points less and I could probably live in blissful ignorance.
Thanks.
Last edited by hello_hello; 1st Nov 2014 at 03:13.
-
To a CRT display, which DVDs are designed for, each field is independent. When converting 25fps progressive to interlaced there's actually no real reason, from the CRT's perspective, to fill a "frame" (which doesn't actually mean anything in CRT land) with two halves of a single image. If you cut an interlaced stream, the first "field" may in fact be the second half of a full picture, and from then on, looking at the MPEG2 frames without deinterlacing, the entire thing will look interlaced. My education on that came from my Buffy DVDs, where large segments of the first episode seemed completely progressive, but every now and then a scene popped up where the fields lagged behind by one and made the frames look interlaced.
-Edit- I said that badly, if you're filming on interlaced video and you need to cut to a sped up filmed segment, it's not actually necessary for the telecining device to wait for the next top field before starting the process. [And variations on that.]Last edited by ndjamena; 1st Nov 2014 at 04:49.
-
There are two separate issues:
1) How the video is encoded: progressive or interlaced. This has to do with how the MPEG encoder treats the video internally (not necessarily whether the frames themselves contain progressive or interlaced pictures): whether each frame is compressed as a single picture, or as two separate pictures (one for each field). Progressive frames can be encoded progressive or interlaced (the latter being less efficient and leading to more artifacts). Interlaced frames need to be encoded interlaced. Of course, there's nothing stopping the clueless from encoding interlaced material in progressive mode (and thereby screwing up their video). MediaInfo tells you whether the video was ENCODED in progressive or interlaced mode, not about the contents of the video frames.
2) Whether the contents of the frames are progressive or interlaced. In your video the frames are interlaced: each frame contains fields from two different film frames. But if you look closely at the content of those fields you'll see that two adjacent video frames contain fields from the same film frame. For example, given a list of film frames:
Code:
1 2 3 4 5  <-- sequential film frames
1t 1b 2t 2b 3t 3b 4t 4b 5t 5b  <-- transmitted as alternating top and bottom fields (no frames in SD analog video)
[1t 1b] [2t 2b] [3t 3b] [4t 4b] [5t 5b]  <-- each video frame in brackets
But if the framing starts one field late, each video frame pairs fields from two different film frames, which is why every frame looks combed:
Code:
[1b 2t] [2b 3t] [3b 4t] [4b 5t] [5b 6t]
You can realign the fields by throwing the first one away:
Code:
SeparateFields()
Trim(1,0) # throw away the first field
Weave() # recombine the remaining fields
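The field-shift diagram above can be simulated with a few lines of Python (a toy sketch of the bookkeeping, not Avisynth itself): represent each field as a string like "1t", start the stream one field late, and watch the pairing break and then get fixed by dropping the first field, the same idea as SeparateFields().Trim(1,0).Weave().

```python
# Toy simulation of the field-shift diagram. Each film frame n
# contributes a top field "nt" and a bottom field "nb".
def weave(fs):
    # pair consecutive fields into frames, like Avisynth's Weave()
    return [(fs[i], fs[i + 1]) for i in range(0, len(fs) - 1, 2)]

fields = [f"{n}{p}" for n in range(1, 7) for p in "tb"]  # 1t 1b 2t 2b ...
stream = fields[1:]        # framing started one field late
combed = weave(stream)     # [('1b','2t'), ('2b','3t'), ...] -- looks interlaced
fixed = weave(stream[1:])  # Trim(1,0) then Weave(): [('2t','2b'), ('3t','3b'), ...]
```

Each "fixed" frame again pairs the top and bottom field of the same film frame, which is all TFM() is really doing here.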
Last edited by jagabo; 1st Nov 2014 at 07:18.
-
Thanks guys.
This seems like basic stuff but it's really not something I've put much thought into until recently. I understand pulldown and interlacing etc but I've just never thought all that much about how it physically works, so to speak.
I think I'm getting there now. I'm still wondering if there's been lots of times when TFM() would have done the job, maybe similar to ndjamena's Buffy example, but I'd thought "partially interlaced" and de-interlaced instead. Well I'm pretty sure I've never had an encoder GUI analyse a video and declare it's "partially interlaced requiring TFM()" or anything similar. If it looks interlaced it's seen as interlaced, so it hadn't really occurred to me sometimes it might not require de-interlacing as such. Sigh......
I understand the difference between a frame and a field in respect to how interlacing works, but when it comes to progressive video.....
Call me silly but I want to make sure I've got my head around it properly. Progressive video obviously constitutes a whole frame which is a single moment in time (as opposed to interlaced fields) but for progressive video are the frames still divided/stored/encoded as two separate fields or is that only when video is captured/encoded as interlaced (even if it's actually progressive)?
I'm still trying to make sure I correctly understand Avisynth's AssumeFrameBased & AssumeFieldBased. I do, but not quite, maybe..... for example....
http://avisynth.org.ru/docs/english/corefilters/parity.htm
"AssumeFrameBased throws away the existing information and assumes that the clip is frame-based, with the bottom (even) field dominant in each frame. (This happens to be what the source filters guess.) If you want the top field dominant, use ComplementParity afterwards."
I probably should make sure I'm not making an incorrect assumption as to why for AssumeFrameBased there's still a reference to fields.
Thanks. -
Call me silly but I want to make sure I've got my head around it properly. Progressive video obviously constitutes a whole frame which is a single moment in time (as opposed to interlaced fields) but for progressive video are the frames still divided/stored/encoded as two separate fields or is that only when video is captured/encoded as interlaced (even if it's actually progressive)?
IIRC, in "AssumeFrameBased", the info that it "throws away" is the field dominance/parity. It then resets the parity to what it assumes FrameBased should be (which is why there's that extra disclaimer about adjusting it). AssumeFrameBased ASSUMES that what was previously considered FieldBased SHOULD NOW be considered FrameBased.
Scott -
Encoded, yes. When encoding as interlaced you encode the two fields separately and usually do things like choose alternate scan and leave the progressive frame flag unset.
Stored, almost never. While it's possible and legal to store fields separately, almost always 2 fields are stored together as a frame.
Here, read this, and then read it again:
http://www.hometheaterhifi.com/volume_7_4/dvd-benchmark-part-5-progressive-10-2000.html
You can skip the part in the middle about the different chipsets. -
I probably should make sure I'm not making an incorrect assumption as to why for AssumeFrameBased there's still a reference to fields.
Important to note that Avisynth's AssumeFrame/FieldBased or AssumeTFF/BFF commands don't change the actual content; they only change what Avisynth "thinks" internally. What Avisynth assumes is not always correct. Avisynth "assumes" many things by default, and sometimes that information is passed by the source filter. -
You may want to create interlaced frames from the progressive content, when converting 50p to 25i for example. The field order of the 50p frames is used as the field order for the 25i frames.
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave() will create 25i, TFF
AssumeBFF().SeparateFields().SelectEvery(4,0,3).Weave() will create 25i, BFF -
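The SelectEvery(4,0,3) trick can be sketched in Python (a simulation of the field selection with made-up frame labels, not real video): from 50p frames F0 F1 F2 F3..., it keeps the top field of each even frame and the bottom field of the following frame, so each resulting 25i frame contains two different moments in time, as interlaced video should.

```python
def select_every(seq, n, *offsets):
    # minimal stand-in for Avisynth's SelectEvery(n, offsets...)
    return [seq[i + off] for i in range(0, len(seq) - n + 1, n) for off in offsets]

frames = [f"F{i}" for i in range(4)]                  # four 50p frames
fields = [f + p for f in frames for p in ("t", "b")]  # SeparateFields(), TFF order
picked = select_every(fields, 4, 0, 3)                # keep F0t, F1b, F2t, F3b
woven = [(picked[i], picked[i + 1]) for i in range(0, len(picked) - 1, 2)]  # Weave()
```

Half the fields (F0b, F1t, F2b, F3t) are simply discarded, which is why 50p to 25i halves the temporal resolution.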
Thanks for all the info guys. I still wasn't quite sure whether video was always stored as fields, even when it's progressive (as opposed to progressive video scanned as interlaced). For example when I encode a progressive video myself using the x264 encoder, is it stored as fields or frames? I think some of the info in manono's link clarified that as it appears the "Picture_Structure" flag specifies either "frame" or "field". That page was fairly interesting reading.
Thanks again. -
MPEG2/4 always stores frames as frames, as far as I'm aware the main difference is that interlaced YV12 has the even and odd scanline chroma separated, whereas progressive YV12 shares the chroma between even and odd vertical neighbours. VC1 stores interlaced video as separate fields, which is why it took so long for many programs to support it in interlaced form.
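The YV12 chroma point can be made concrete with a small Python sketch (my own illustration of the 4:2:0 siting rules, not any particular codec's code): in progressive YV12 a vertical chroma sample is shared by two adjacent frame lines, while in interlaced YV12 the chroma is subsampled within each field, so a chroma sample is shared by two lines of the same field, which sit two frame lines apart.

```python
def chroma_to_luma_lines(c, field_based, parity="top"):
    """Return the two frame luma lines that share vertical chroma sample c
    in 4:2:0 (YV12). Progressive: adjacent lines 2c and 2c+1. Interlaced:
    chroma is subsampled within each field, so the two sharing lines belong
    to the same field and are two frame lines apart."""
    if not field_based:
        return (2 * c, 2 * c + 1)
    off = 0 if parity == "top" else 1
    return (4 * c + off, 4 * c + 2 + off)

print(chroma_to_luma_lines(0, False))          # progressive: (0, 1)
print(chroma_to_luma_lines(0, True, "top"))    # interlaced top field: (0, 2)
print(chroma_to_luma_lines(0, True, "bottom")) # interlaced bottom field: (1, 3)
```

This is why deinterlacing or field-matching YV12 with the wrong progressive/interlaced assumption smears chroma between lines that belong to different moments in time.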
-
It doesn't matter exactly how the video is stored within the file (unless you're writing a decoder). What matters is what the decoder delivers.
-
No, for DVD it almost always stores them as frames, as I said earlier. It can and sometimes does store them as separate fields. I've seen several cases where it's like that. If you were to read the page to which I linked earlier, about a third of the way down you'd see the chart of how NTSC MPEG pictures can be stored with Example 2 showing a legal field picture structure.
-
Yeah, ProCoder is able to encode field based interlaced MPEG2 video where the video is indeed stored as fields rather than interlaced frames. It's DVD compliant but many muxing engines struggle with such streams. ProCoder is the only encoder to my knowledge that is capable of this.
-
I guess the question I was trying to ask is whether progressive video is always split into fields. i.e. can it be stored as frame1, frame2, etc, with not a field in sight, when it's progressive (I'm asking in general, not specifically about DVDs)?
If there'd never been any interlaced video I imagine that's what would have happened, but there was, so a frame became a combination of two fields, but.... I can see the section on the page manono linked to showing fields as a valid picture structure, but when it's frames there's still flags for whether fields are repeated or to specify which field comes first etc..... but can it ever be "just frames"? Maybe the question isn't valid and I don't quite know why, but that's sort of what I was trying to ask.... I think.... -
Yes, they can be just frames, but frames can be split into fields, so fields are always there... waiting, lurking...
-
I believe the DVD player initially outputs 50/59.94 fields per second. If it's set for progressive scan the individual fields are 'intercepted' and rejoined with their 'partners' and made into frames again, before being sent to the television. I think what I said earlier is correct, although the end result may be as you said.