Hello,

For the past several weeks I've been doing a lot of reading and experimenting with just about every conceivable codec, format, source and compression software/technique. I recently purchased an ATI 8500DV (once all the bugs have been worked out: a great card).

I'm happy to say that I have authored some pretty decent-looking xSVCDs, very comparable to VCR tapes recorded in SP mode.

However, I cannot seem to achieve the same quality and sharpness that the card displays in overlay mode.

I am aware of interlacing and 3:2 pulldown, their effects on progressive monitors, and how they can cause a perceived loss of quality.

When I use VirtualDub's overlay mode, I see the analog signal (from the TV tuner, for example) just fine, in brilliant sharpness and color. When I switch to preview mode (which indicates the quality of the actual capture), there is a noticeable difference between the two. This is consistent across all capture formats (frame sizes).

This is the only issue I have left to understand. Am I doing something wrong, or is this simply the best it is going to get with this type of capture card?

Again, I have no problems capturing, encoding and authoring great-looking xSVCDs (or xVCDs), even from the card's TV tuner. But I was under the impression that a lossless codec (like HuffYUV) or uncompressed RGB (which the 8500DV can't seem to capture in), coupled with a large, fast, unfragmented hard drive, would give me video that rivaled what I see in overlay mode.

Am I missing something?

Thanks in advance,

Dave