VideoHelp Forum

Thread
  1. Trying to figure out what's going on in this clip:
    https://www.sendspace.com/file/c7yi5a
    using MeGUI + AVISynth to transcode for NAS viewing...

    As you can see, there appears to be field-blending in several of the scene changes - that is, each time the camera cuts to another angle. However, I can't tell whether it happens on every scene change; elsewhere in the video there are longer stretches of single-shot scenes where I presume there is no field-blending.

    I'm unable to determine whether this is 23.976 fps film that got intermittently field-blended up to PAL, or what?

    I already did one trial run of QTGMC() + SRestore(), and while it wasn't bad, it didn't seem to be perfect... especially in longer single-shot segments it got a bit "jerky", as though it was dropping frames (that were supposed to be there) in order to accommodate the end goal of 23.976 fps.

    ...any help is much appreciated.
  2. There's no field blending anywhere and it's not Film to PAL. I think what's going on is that it was edited as video and there are orphaned fields at those scene changes. It's progressive with interlacing at the scene changes. There are a number of ways to handle it. Perhaps the easiest is a simple:

    TFM()

    or:

    TDeint(Full=False)

    Both will deinterlace only where interlacing is detected. If you want to use QTGMC as the deinterlacer, then:

    tdeintted = QTGMC().SelectEven() # check if SelectEven or SelectOdd gives the better field
    TFM(clip2=tdeintted)

    Originally Posted by U2Joshua View Post
    ...in order to accommodate the end goal of 23.976 fps.
    You got jerkiness because it's 25fps and you were removing a unique frame every second. If you absolutely have to have 23.976fps (why?), then use an AssumeFPS(23.976) afterwards. You'll also have to adjust the audio to match. If it was created at 25fps, I wouldn't do that if I were you.
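    Putting those suggestions together, a minimal script sketch might look like this (the source line and filename are assumptions; substitute whatever loader works for your file):

    Code:
    AviSource("clip.avi")                     # hypothetical source; use your own loader
    deint = QTGMC().SelectEven()              # try SelectOdd too; keep whichever looks better
    TFM(clip2=deint)                          # field match; take combed frames from the QTGMC clip
    # Only if 23.976 fps is truly required (stretches the audio to match):
    # AssumeFPS(24000, 1001, sync_audio=true)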
    Last edited by manono; 19th Dec 2016 at 14:02.
  3. Thanks, @manono. You're always such a huge help! That worked out, perfectly.

    Do you mind verifying this clip? https://www.sendspace.com/file/v6q1x5

    What I see is that it appears to be 25i (to my naked eye), but on previous examples you helped me understand that others like this are field-blended. Is there a difference between 25 fps interlaced and 25 fps field-blended? What's the tell-tale sign to differentiate? Would it be to "bob" it and then count the frame pattern?

    My hunch is that this is field-blended up from 23.976 fps film.

    thanks in advance!
  4. I agree with manono -- progressive frames but orphaned fields at scene cuts. Just TFM().

    The second clip is field blended NTSC to PAL. The difference between normal 25i and field blended 25i is the blending seen in individual fields. 25i is just 50 different fields per second with motion (potentially) at every field. 25i with field blending is some other frame rate with blended (double exposure) fields to make up the difference between that other frame rate and 50 fields per second.
    Last edited by jagabo; 19th Dec 2016 at 16:51.
  5. Originally Posted by jagabo View Post
    The second clip is field blended NTSC to PAL.
    Thanks @jagabo.

    Does this mean it was NTSC (29.970 fps) to PAL (25 fps)? Or, NTSC (film?) [23.976 fps] to PAL (25 fps)? And, how would I know the difference? I presume there are videos out there that were originally 30 fps NTSC Video that were then rendered to 25 fps PAL. Is there a way to determine that that is the case if all I have to work with is a PAL version of a video?

    I'm asking, of course, to know the best approach to transcoding... If this is a 23.976 fps source that is 25 fps field-blended, you've already taught me well to use QTGMC() + Srestore(). Is that what I'm looking at here?

    As a side... in using Handbrake on my Mac, I've had PAL videos that I've applied "Detelecine" to, and the resultant transcode is a 30.000 fps video. By and large, I've abandoned Handbrake for the purposes of deinterlacing and inverse telecining, and that's why I'm here trying to learn AVISynth, MeGUI, and more.

    thanks!
    It's film to PAL. NTSC video to PAL conversions, on the other hand, are often a nightmare to undo and usually can't be undone successfully. Do you remember where jagabo said, "25i with field blending is some other frame rate with blended (double exposure) fields to make up the difference between that other frame rate and 50 fields per second."?

    You can use that to figure out the 'source' or 'original' framerate if you have a problem doing it in other ways. Bob it first. Then, in the bobbed output (50 frames per second in the case of this sample), count the number of 'clean' or unblended frames in every 50. That's your original framerate. Granted, that can be hard to do, and you have to choose where to do your counting very carefully, with just the right amount of movement to put the blending on full display. If there's double blending or if it's been screwed up in other ways, it won't work. And it probably can't be unblended successfully, either.
  7. Originally Posted by U2Joshua View Post
    Does this mean it was NTSC (29.970 fps) to PAL (25 fps)? Or, NTSC (film?) [23.976 fps] to PAL (25 fps)?
    The latter. So use QTGMC().SRestore() or Yadif(mode=1).SRestore().
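    For a blended clip like this, that chain might be sketched as follows (the source line and QTGMC preset are assumptions):

    Code:
    AviSource("clip.avi")       # hypothetical source; use your own loader
    QTGMC(Preset="Slower")      # bob to 50 fps, one frame per field
    SRestore(frate=23.976)      # discard the blends and restore the film rate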

    Originally Posted by U2Joshua View Post
    And, how would I know the difference?
    Usually when this happens the PAL video is made from an analog NTSC video tape (30i). There are two main types of 30i material: true 30i, where every field is from a different point in time (1/60 second intervals, typical of news and live sports), or 30i from telecined film.

    With the former (true interlaced 30i NTSC, 60 different fields per second) virtually every field will have blending when there is motion.

    With the latter (NTSC film) each of the blended fields is a blend of two film frames. You will see some fields with no blending because they come from only one film frame. You can see this if you single step through fields (Bob(), Yadif(mode=1), or QTGMC()). If you ignore all the blended fields and discard all the duplicate fields you'll find there are 24 different fields every second.
    Last edited by jagabo; 19th Dec 2016 at 17:44.
  8. Thanks, guys... not sure I'm 100% out of the woods, yet, but I'm learning

    much appreciation.
  9. Seeing this graphically should help. To simplify matters let's deal with frames rather than fields, 60 frames per second NTSC, 50 frames per second PAL. Note that after Yadif(mode=1) or QTGMC() we are dealing with frames anyway. So it's fair to think in terms of frames for this discussion. Make sure your browser is showing these code blocks with a fixed-pitch font or they won't make any sense:

    Code:
    +----+----+----+----+----+----+----+----+----+----+----+----+-
    | A  | B  | C  | D  | E  | F  | G  | H  | I  | J  | K  | L  |
    +----+----+----+----+----+----+----+----+----+----+----+----+-
    +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-
    | AB  | BC  | CD  | DE  | EF  | GH  | HI  | IJ  | JK  | KL  |
    +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-
    The width of each box in the top row represents the "perfect" projection of a 1/60 second long NTSC frame. That is, the horizontal dimension represents time. The letters in the boxes represent a different image at each frame. We have 12 different pictures over 12/60 second.

    The boxes in the lower row represent 1/50 second long exposures of PAL frames. As you can see, the PAL exposures are slightly longer than the NTSC projections. So each PAL exposure overlaps with the projection of two NTSC frames. The first PAL frame is mostly NTSC frame A with a little NTSC frame B. The second PAL frame is mostly from NTSC B but a significant amount from C too. The third PAL frame is an equal blend of C and D, etc.

    Let's consider what happens when 24 fps film is made into 60 fps NTSC. The ratio of 60 to 24 is 2.5. So each film frame needs to be projected for 2.5/60 seconds. This is achieved by alternating between 2x and 3x duplicates. I.e., film frame A becomes 3 video frames, film frame B becomes 2 video frames, etc. This is the "3:2" part of "3:2 pulldown". On average, each film frame becomes 2.5 video frames. After 3:2 duplication the NTSC frames look like this:

    Code:
    +----+----+----+----+----+----+----+----+----+----+----+----+--
    | A  | A  | A  | B  | B  | C  | C  | C  | D  | D  | E  | E  | E
    +----+----+----+----+----+----+----+----+----+----+----+----+--
    +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-
    | AA  | AA  | AB  | BB  | BC  | CC  | CD  | DD  | DE  | EE  |
    +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-
    Now let's examine what happens when those NTSC frames are converted to PAL frames. Again, the first PAL frame is a mix of the first two NTSC frames. But those two NTSC frames are duplicates. So the first PAL frame is AA or simply A -- a progressive frame with no apparent blending. The same is true for the second PAL frame. The third PAL frame is a mix of two NTSC frames again, but those two NTSC frames come from two different film frames, A and B. So that PAL frame appears as a blended frame, AB. Etc.

    Of course, this is further complicated by the fact that NTSC video isn't 60 fps but rather 59.94 fps. So there is a slow phase shift in the patterns, and consequently sometimes PAL frames overlap with three NTSC frames.
  10. Originally Posted by jagabo View Post
    Seeing this graphically should help. [...]
    This definitely helps! Thank you! I think I can at least conceive of what's going on in these transcoded files... I think I'm still struggling to be able to identify the source patterns, etc., when faced with reviewing a source that I want to transcode for my own use.

    Again, many thanks.


