The field order is different in that last sample (BFF).
Observation: if you pick SelectEven() with TGMC, frame 11 looks better than when using SelectOdd().
The top field of frame 11 is the one with that squiggle in MJ's nose. Here's frame 11 with VirtualDub's Deinterlace/Blend (left) and AviSynth's Blur(0, 1.0) (right):
They look almost exactly the same to me.
Keep in mind that both fields can have time base errors so keeping one vs. the other won't fix the problem, just move it to different frames.
Last edited by jagabo; 31st Jul 2010 at 20:20.
Not sure; it doesn't look to be blending anything, just blurring everything slightly.
A regular AviSynth deinterlacer (e.g. yadifmod + nnedi2, or even nnedi2 alone) is even better: sharper, with less aliasing.
The only difference in the jitter artifacts in your earlier posts was selecting a different field. Blend deinterlacers suck.
Drop field and resize: discard one field, resize the other. Note that I added more to that post.
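A toy Python sketch of the idea (illustrative only, not an AviSynth script; row lists stand in for scanlines, and nearest-neighbor row doubling stands in for a real resizer):

```python
# Toy illustration of "drop one field and resize": keep the even
# scanlines (one field) and stretch them back to full frame height.
frame = [[10, 10], [20, 20], [30, 30], [40, 40]]  # 4 scanlines

even_field = frame[0::2]                 # keep scanlines 0, 2 -> one field
deinterlaced = [row for row in even_field for _ in (0, 1)]  # double height

assert len(deinterlaced) == len(frame)   # full height restored
```

In a real script the resize step would use a proper interpolating resizer rather than row duplication, which is why some vertical detail is inevitably lost.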
One question: I read the AviSynth documentation to improve my understanding of jitter, and I'm wondering what the pitch of a frame is. It's described this way:
Just as in VirtualDub, the "pitch" of a frame buffer is the offset (in bytes) from the beginning of one scan line to the beginning of the next. The source and destination buffers won't necessarily have the same pitch.
The row size is the length of each row in bytes (not pixels). It's usually equal to the pitch or slightly less, but it may be significantly less if the frame in question has been through Crop.
Thank you in advance for shedding light on this.
Pitch is the distance from the start of one scanline to the start of the next, in bytes. The length of each scanline as stored in memory is not necessarily the same as the number of pixels in that scanline, so a pixel's address is computed from the pitch:

byte position of a pixel = y * pitch + x

not from the image width:

byte position of a pixel = y * image_width + x
For example, a 638x480 Y plane may be stored as a 640x480 array with the last two bytes of each row unused. I.e., its pitch is 640 even though the width is 638. This is done to keep data aligned in memory for MMX access, etc.
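To make the arithmetic concrete, here's a small Python sketch (illustrative only, not AviSynth's actual plugin API) of addressing a pixel in a padded plane:

```python
# A 638x480 Y plane stored with pitch 640, leaving 2 padding bytes
# at the end of each row.
WIDTH, HEIGHT, PITCH = 638, 480, 640

buf = bytearray(PITCH * HEIGHT)  # flat buffer, as a filter would see it

def pixel_offset(x, y, pitch=PITCH):
    """Byte offset of pixel (x, y): y * pitch + x, NOT y * width + x."""
    return y * pitch + x

# The last visible pixel of row 1 sits at offset 1*640 + 637 = 1277.
buf[pixel_offset(637, 1)] = 255

# Using the image width instead of the pitch lands 2 bytes early,
# somewhere in the middle of the row's real data:
wrong = 1 * WIDTH + 637   # 1275, not 1277
```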
Last edited by jagabo; 2nd Aug 2010 at 06:21.
Thank you jagabo for this explanation. This is crystal clear now
To return to the deinterlacing issue, do you think that dropping one field and resizing the other is a good solution?
I personally think it's a pity to lose half of the image...
Or would it be better to use an AviSynth deinterlacer (e.g. yadifmod + nnedi2, or even nnedi2 alone), as poisondeathray advised me?
One more question: I would like to get a mask with white parts where there is a lot of motion in the picture and black parts where there is little or no motion. You said you could maybe use MVTools, but I can't figure out which filter would achieve this. Could you give me some hints?
Someone else suggested MvTools. I don't know much about it but I don't think it's suitable for what you have in mind.
Yes... I don't think it's suitable either, from what I have seen in the MVTools documentation.
So what would you do to create one mask per picture of my video?
You can start by subtracting one frame from the next or previous to see what's changed between frames. You'll probably have to do it both ways, since a single subtraction leaves you with values centered around 128 rather than spanning 0-255.
Overlay() also has subtraction and addition functions:
v1=Overlay(last, Trim(last,1,0), mode="subtract")
v2=Overlay(Trim(last,1,0), last, mode="subtract")
That's the equivalent of abs(frame[x-1]-frame[x]). Of course, you'll need special handling at scene changes.
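A toy Python sketch of what those two Overlay() calls compute (assuming the subtract mode clamps negative results to 0, which is why both directions are needed before summing):

```python
# Toy single-row "frames"; in AviSynth these would be whole luma planes.
prev = [100, 120, 50]   # frame x
cur  = [110,  90, 50]   # frame x+1

# Each direction of the clamped subtraction catches changes of one sign:
v1 = [max(p - c, 0) for p, c in zip(prev, cur)]   # like Overlay(last, next, mode="subtract")
v2 = [max(c - p, 0) for p, c in zip(prev, cur)]   # like Overlay(next, last, mode="subtract")

# Summing the two gives the absolute difference -- the motion mask:
# bright where pixels changed a lot, black where nothing moved.
mask = [a + b for a, b in zip(v1, v2)]
```

A real script would still need a threshold (e.g. via Levels) to turn this gray difference image into a hard black/white mask, plus the scene-change handling mentioned above.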
Last edited by jagabo; 9th Aug 2010 at 12:09.