I have a Canon Elura 100 miniDV NTSC camcorder, and according to the manual its DV output is interlaced. I'll need to convert a number of tapes from this camcorder to DivX format, and the resulting files will be viewed pretty much exclusively on computers. So I've reluctantly accepted the need to produce progressive DivX files.
But as has been pointed out here many times, deinterlacing has its price in picture quality. I'm going to be using AviSynth and VirtualDub (in fast recompress mode), so I hope I'll have a fair amount of control over exactly how things are done. My assumption is that I'll get a better deinterlaced result that way than by relying on on-the-fly deinterlacing during playback on a variety of relatives' computers.
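For reference, here's roughly the skeleton script I have in mind. The filename is just a placeholder, and Yadif is only an example deinterlacer (an external plugin, not built into AviSynth) - I haven't settled on a filter yet, which is partly why I'm asking.

LoadPlugin("yadif.dll")    # external deinterlacer plugin; path is an example
AviSource("tape01.avi")    # placeholder name for the WMM capture
AssumeBFF()                # DV is bottom field first
Yadif(mode=0)              # single-rate deinterlace: 29.97 progressive fps out
ConvertToYV12()            # 4:2:0 for the DivX encode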
But I wonder whether a better understanding of how the interlaced DV is generated by the camera might help me pick the right filter in AviSynth. In particular, if it isn't really true interlacing, maybe I can pick a deinterlacing filter that's less destructive.
When I think of interlacing, I go back to the electron gun. In old tube cameras, it would scan alternate fields in succession, one line at a time. So not only are the fields separate, but time actually flows evenly throughout the scanning process: the two fields of a frame are captured about 1/60 sec. apart, 1/30 sec. per full frame.
Do modern camcorder sensors allow things to be done the same way? Or do they have to use tricks to approximate the same thing - such as actually sampling the *same* pixels on each "field" instead of different ones? Do they scan along from one pixel to the next, or do they latch each line all at once? Or even each field all at once?
Well, I guess what I'm hoping is that I can use tricks to get back to a progressive scan instead of a full, official deinterlace, with better quality.
I would appreciate any thoughts on this. Also, if there's a link to a detailed description of exactly how these cameras generate the interlaced output, that would also be helpful.
One other question. When I simply view the DV AVI file captured by Windows Movie Maker (WMM) in Media Player, I don't see any interlacing artifacts. Assuming the video on the tape really is interlaced, where is the deinterlacing being done - by WMM during capture, by the MS DV codec, or by Media Player? (When I open the file in TMPGEnc 3, it scans it and then confirms it's interlaced, bottom field first, at least in structure.)
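In case it matters, the quick check I was planning to run to see whether the capture really carries two distinct moments in time per frame (same placeholder filename as above):

AviSource("tape01.avi")    # placeholder name for the capture
AssumeBFF()                # DV should be bottom field first
SeparateFields()           # step through the half-height fields: motion between consecutive fields means true interlacing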
Thanks for any help.
It does not generate interlace; it shoots natively in an interlaced format. Your playback software may be applying a smart deinterlace filter on the fly during playback. A few of those exist, for playback only.
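If you want to keep the full 59.94-fields-per-second motion instead of throwing half of it away, a double-rate (bob) deinterlace is another route. A rough sketch, assuming the Yadif plugin and a placeholder filename:

LoadPlugin("yadif.dll")    # external plugin; path is an example
AviSource("tape01.avi")    # placeholder capture file
AssumeBFF()                # DV is bottom field first
Yadif(mode=1)              # double rate: one progressive frame per field, 59.94 fps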