VideoHelp Forum
  1. I've been thinking to myself, more or less trying to grasp the whole concept of SeparateFields(). I've been reading the AviSynth wiki page on the command, but I'm still not totally sure about it.

    Could someone tell me, in short, how this:

    Separatefields()
    BicubicResize(640,360)

    ...differs from a primitive bob deinterlacer?

    That would help me a lot on my way!

    Ty in adv~
    Regards~
  2. SeparateFields().BicubicResize() will work like a very primitive bobber. The lower field will be shifted up by one scanline, though. This causes the picture to bounce up and down with every field. A better bobber leaves the original fields where they are and fills in the missing lines by interpolating between the lines above and below -- so it bounces less.
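    For example, a rough sketch of the two approaches as a script (the source file name is just a placeholder):

    # crude bob: every field becomes a half-height frame, then gets stretched back up
    AviSource("interlaced.avi")   # placeholder 720x480 interlaced source
    AssumeTFF()                   # or AssumeBFF(), depending on the material
    SeparateFields()              # 720x240 fields at twice the frame rate
    BicubicResize(720, 480)       # stretch back to full height -- the bottom field ends up one line too high

    # a real bobber interpolates the missing lines instead and keeps the original lines in place:
    # AviSource("interlaced.avi").AssumeTFF().Bob()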
    Last edited by jagabo; 13th Feb 2010 at 22:36.
  3. Ty jagabo. Excellent answer, just about what I was hoping for; it confirms that what I was brooding over in my head was just about right. Even the part about the even/odd fields being shifted and causing a "jumpy" picture was part of how I'd imagined it. Nice to have that one figured out. Case closed.
  4. SeparateFields().BicubicResize() on the left, Bob() on the right (slowed to 4 fps):
    [Attached images]
  5. Nice clarification indeed, jagabo.

    I must confess that with my novice eyes I had problems noticing the difference (but then, I haven't slept for close to 24 hours ^^), but I finally agreed with myself that the left one is bumping slightly more. And besides, Bob() isn't the most advanced bobber out there either, now is it?
  6. I think I need help here, I really do. It's that one shifted scanline that is annoying me and keeping me up at night.

    Tbh, I've come to discover that I haven't had quite the whole picture on interlacing. Even after wading through the wiki article, I haven't really been able to visualize the process, and therefore it's all been a fog.

    Two nights ago, it finally struck me like lightning and it all started to make sense. I had been trying to resolve another mind-bubble in another thread, and someone had told me, "Think fields, not frames". So I kept repeating that to myself while lying in bed, wondering. This is what led me back to this old thread.

    This, I (think) I know for a fact: an interlaced frame is divided into two fields. One contains all the odd scanlines, the other contains all the even scanlines. So I started visualizing the process like this to myself.

    In short, interlacing is a method for gaining "pseudo-framerate" at the cost of picture quality, by capturing one "picture" from ½ of the vertical scanlines and the previous/next one (depending on BFF or TFF) from the other ½ of the interlaced "picture".

    Take SDTV NTSC for example. It's 720x480 pixels, right?

    Odd field should contain 720x240 worth of pixel data, just the same as the even field, right?

    This is where I slam my head into the wall trying to understand. As I've understood it, the playback device appointed to process the interlaced material will take the odd field and place it after or before the even field -- hence the 2x framerate and 1/2 vertical span when doing SeparateFields().

    Then, i.e. at playback on an interlace-processing device, I assume this device would stretch each field vertically to the full 720x480 [in this example] again. Meaning the exact same interpolation would suffice on both fields: 2x vertically, simply put.
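    In AviSynth terms, the model I have in my head looks roughly like this (the source name is just a placeholder):

    AviSource("capture.avi")    # 720x480 interlaced NTSC, 29.97 fps
    AssumeBFF()                 # or AssumeTFF(), whichever the source actually is
    SeparateFields()            # 720x240 fields, 59.94 fields per second
    BicubicResize(720, 480)     # stretch each field back up to full height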

    Everything falls perfectly into place and makes totally logic sense for me..

    ..up until this moment.

    That "bouncing" up and down. It doesn't make any sense, with the "visualization" that I've created to myself, at least. Putting odd field (containing 720x240) side by side with even field (also 720x240) would make a perfect match, as far as I see it, with no bouncing. If the odd field contained 720x241 pixels, and the even field contained 720x240 pixels, then(!) the bouncing described here would make so much more sense. But now.. strange..

    There _is_ indeed bouncing.

    I know I'm missing some little piece of the puzzle that will explain this. I really need to find that last piece, hopefully not at the expense of wrecking my whole perspective on the process (thus bringing me back to square one ^^), but rather just a minor puzzle piece that explains the bouncing / the one-scanline shift between the odd and even fields.

    Sorry for (prolly) sounding really stupid. I've just tried to explain my "perceptive impression" of the process so that someone can perhaps more easily pinpoint where I'm going wrong.

    Ty in adv.
    Regards~
    Last edited by Gew; 24th Feb 2010 at 10:22.
  7.
    Originally Posted by Gew:
    Putting odd field (containing 720x240) side by side with even field (also 720x240) would make a perfect match, as far as I see it, with no bouncing.
    No, because the odd and even fields contain pixels from different locations, displaced vertically by 1 pixel.
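    One rough way to picture the mapping (assuming a hypothetical TFF 720x480 source):

    # top field    = frame lines 0, 2, 4, ..., 478  (240 lines)
    # bottom field = frame lines 1, 3, 5, ..., 479  (240 lines)
    # both fields are 720x240, but their lines come from positions one frame line apart vertically
    AviSource("interlaced.avi")   # placeholder source
    AssumeTFF()
    SeparateFields()              # even output frames = top field, odd output frames = bottom field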
  8. But then, wouldn't the whole picture be displaced by 1 pixel?
    Or alternatively leave room for one more (or one fewer) captured scanline?

    As I visualize it, if the 1-pixel displacement is made post-capture, e.g. during the interlacing process, then wouldn't one scanline (from one field) be "pushed out" of the picture, and hence leave one "blank" scanline at the other end? And if the 1-pixel offset is made already at capture, wouldn't that alter the original capture resolution limit of 720x480?

    What am I missing out on?
  9.
    The displacement occurs during capture: the 240 even lines are captured at a different moment from the 240 odd lines (half a frame period apart -- 1/50 s for 25 fps PAL, about 1/60 s for NTSC).
  10. "Older video game consoles such as the Nintendo Entertainment System generated a non-standard version of NTSC or PAL in which the two fields did not interlace, and instead were displayed directly on top of each other, keeping the orientation of the scanlines constant."

    (taken from LDTV Wiki)

    This may be the sort of setup that I had imagined all interlacing to be, correct? There, the 720x240 odd field side by side with the 720x240 even field would be "perfectly in alignment". But why isn't this perfect? What's beneficial about the 1-pixel displacement, anyway?
  11.
    Originally Posted by Gew:
    What's beneficial about the 1-pixel displacement, anyway?
    It allows the full frame vertical resolution to be captured in static areas.
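    For a static scene you can see that by weaving the two fields back together -- a rough sketch (the source name is just a placeholder):

    AviSource("static_scene.avi")   # hypothetical 720x576 interlaced PAL source
    AssumeTFF()
    SeparateFields()                # two 720x288 fields per original frame
    Weave()                         # the lines interleave back into a full 720x576 frame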
  12. Plus, this fact about what interlacing does wrecks my entire belief that e.g. my PAL HandyCam has a great capture span of 720x576 pixels, when it's actually only 720x288 pixels, later interpolated up to 720x576 ;( Only to give, for instance, bob deinterlacing the ability to catch über-fast movement better. Are the two interlaced fields _only_ being blended in time by the playback device, or also somewhat spatially? So that if the capture has only brief movement you could actually gain -- well, hard to say exactly in math, but practically -- a bit higher vertical span than 288 pixels by using both fields..? Oh, this would then also explain the 1-pixel displacement.

    -EDIT-
    I was typing this post as a direct follow-up while yours was added in the meantime; I read it after posting. "It allows the full frame vertical resolution to be captured in static areas." -- that sounds reasonable. I'd like to think it's pretty much equivalent to what I was getting at above with the idea of blending the fields somewhat spatially to gain a bit more vertical span than 288 pixels where there is little movement, I assume.

    Ty. Starting to make sense, I think.
    Last edited by Gew; 24th Feb 2010 at 11:06.
  13. Here's an enlarged crop from a frame (8x nearest neighbor so you can see each individual pixel):

    [Image: frame.png]

    When you use Bob(), the two fields are separated but the scanlines are left in place (the black scanlines here are the missing scanlines from the other field):

    [Images: Even.png, odd.png]

    Then the black scanlines are filled in with data interpolated from the scanlines above and below:

    [Images: bobeven.png, bobodd.png]

    So the original scanlines of the fields remain in their original locations.

    When you use SeparateFields(), the two fields are separated and the black lines are thrown out, leaving you with:

    [Images: evensep.png, oddsep.png]

    It's hard to see here but the odd field (on the right) is shifted up by a scanline. Now if you resize the frame to restore the original height:

    [Images: sepreseven.png, sepresodd.png]

    The odd field is still shifted up from its original location.

    The two methods animated side by side (left = Bob(), right = SeparateFields() + resize):

    [Animation: sidebyside.gif]

    The Bob() version moves up and down by one scanline. The SeparateFields().Resize() version moves up and down by two scanlines.
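    Something along these lines (the source name is just a placeholder) is roughly how a side-by-side comparison like that can be built:

    src   = AviSource("interlaced.avi").AssumeTFF()   # placeholder interlaced source
    left  = src.Bob()                                 # field lines kept in place, missing lines interpolated
    right = src.SeparateFields().BicubicResize(src.Width, src.Height)   # fields stretched back to full height
    StackHorizontal(left, right)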
    Last edited by jagabo; 24th Feb 2010 at 11:58.