Looking to apply Neat Video to various VHS captures to DV avi, eventually to convert to pillarboxed DVD and/or pillarboxed 720p Blu-Ray.
I'll be deinterlacing/cropping with the avisynth script QTGMC.
Will there be any difference between applying NV to the raw interlaced captured footage or applying NV after it's been deinterlaced? Before or after other correction?
I.e. where in the process do you recommend applying NV for best results?
The one reason I can see for pillarboxing 4:3 material on a 16:9 DVD is to increase compressibility, i.e., to put more running time on a disc. Since the video will only occupy a 540x480 portion of the 720x480 frame, it will take less bitrate to achieve similar quality. And since your sources are VHS, they don't have more than 540 lines of resolution anyway.
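For what it's worth, the pillarboxing itself is only a couple of lines in AviSynth. A sketch, assuming a 720x480 4:3 capture (the filename is illustrative):

```
AviSource("capture.avi")     # 720x480 4:3 DV capture (name illustrative)
Spline36Resize(540, 480)     # a 4:3 picture fills 540 of the 720 anamorphic width
AddBorders(90, 0, 90, 0)     # 90 + 540 + 90 = 720; black side bars
```

The 540 comes from 720 x (4/3) / (16/9) = 540. You then author the resulting 720x480 stream flagged 16:9 in your authoring tool.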
I find tweaking the video and upconverting rather than depending on the hardware to do it looks better.
Anyway, the focus here is on NeatVideo, where to insert it into the flow.
Neat Video after deinterlacing IMO. Depends on the specific video and its problems though.
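In script terms that ordering looks something like this. A sketch only; Neat Video itself runs outside AviSynth (e.g. as a VirtualDub filter on the frameserved output), so it appears here as a comment:

```
AviSource("capture_dv.avi")   # DV capture (filename illustrative)
AssumeBFF()                   # DV is bottom-field-first
QTGMC(Preset="Slower")        # deinterlace first
# ...then apply Neat Video to this progressive output,
# e.g. frameserve into VirtualDub and run the NV filter there
```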
Authoring the DVDs with the 4:3 content hard-pillarboxed into a 16:9 frame will prevent them from playing properly on a 4:3 TV - they'll always have black on all four sides (unless the DVD player has a decent zoom/crop button, which many do not).
There aren't many 4x3 TVs around, but plenty of PC monitors still aren't 16x9 and you'll face the same problem there. While VLC will happily crop to whatever shape you want, most commercial PC DVD player software isn't so flexible.
If your BluRay player is upscaling to 1080i or 1080p, it must pillar box the 4:3 content to 16:9 - 1920x1080 is not a valid 4:3 format over HDMI, so it would be against the HDMI spec to assume the TV would treat it as 4:3, and most TVs will not. If your BluRay player is outputting the DVD as 720x480 4:3, then the TV should pillar box it (in the right mode). If your BluRay player is outputting the DVD as 720x480 16:9, then it's wrong, but some TVs will let you override it.
Last edited by 2Bdecided; 11th Oct 2013 at 05:35.
I'm with the rest of the world on the "when" question for NeatVideo and for many other VirtualDub final-stage filters. Usually (please note, usually) most filters work best with progressive material, but NV and several others can be set to work either way. If you tell NeatVideo your material is interlaced, NV internally uses the equivalent of SeparateFields, and I don't see any difference in the final output. The downside to working with interlaced material in NV is that you have to take your noise sample from a vertically smaller (half-height) frame in the setup dialog, and sometimes those frames are too small for the size of sample NV wants. In my case that doesn't happen often: I usually keep video deinterlaced and/or inverse telecined until I'm ready for the encoder.
Another tricky deal is that you often have to filter the hell out of a really grubby video. That can result in banding and other problems, such as a "plastic" look from removing too much of the fine grain, and with those sources you usually have to worry about severe color problems as well. So a workflow for a crummy, ugly, noisy video would be: YUV filtering first, then RGB for NV plus other filtering and color tweaks, then back to YUV to fine-tune the tweaks, deband, and inject a little grain (it's going to end up as YV12 anyway). I would do as much prep work in YUV as possible before going to RGB.
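As a concrete sketch of that YUV -> RGB -> YUV round trip (the debanding and grain plugins at the end are illustrative of the idea, not a prescription; GradFun3 needs the dither package and AddGrainC the AddGrain plugin):

```
AviSource("grubby.avi")           # filename illustrative
AssumeBFF()
QTGMC(Preset="Slow")              # progressive before heavy filtering
# ... YUV-stage prep here: levels, chroma repair, etc. ...
ConvertToRGB32(matrix="Rec601")   # RGB stage: Neat Video + color tweaks
# (frameserve to VirtualDub here for NV and RGB color work)
ConvertToYV12(matrix="Rec601")    # back to YUV; delivery is YV12 anyway
GradFun3()                        # debanding measures
AddGrainC(var=1.0)                # reinject a touch of fine grain
```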
A few years back I had some interlaced/telecined captures that I fed straight into RGB and NeatVideo with reckless disregard for frame structure, and NV was the principal processor. Looking back at those projects, I ended up having to do them all over again, which took a couple of years. Another thing I should mention: NeatVideo can't fix everything. It has no effect on spots, streaks, bad deinterlacing artifacts, combing, or aliasing, and little effect on things like rainbows and other forms of bad chroma noise without using some very high settings and turning the video into an oil painting. So it's often inappropriate or unnecessary for many sources. In any case, if you have major repairs - spots, dropouts, bad levels, streaks, and other stuff that requires Avisynth - do that first. Feeding bad rainbows into NV will sharpen and accentuate the problem, so why start there? NV does have a talent for removing that residual floating/simmering tape grunge you see in shots with motion, and it can often act as a temporal smoother to stabilize some of the wiggling and rippling you see with tape captures -- but it is most effective after something like QTGMC, MCTemporalDenoise, or other Avisynth cleaners have had a chance at it. More often than not, I'm using NV as a late-stage polishing step and almost always at medium to very low settings.
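So, ordering-wise, the heavy Avisynth repair work goes in front and NV comes last. A sketch; the specific repair filters here are illustrative, and you'd pick whatever the source actually needs:

```
AviSource("tape.avi")       # filename illustrative
AssumeBFF()
Cnr2()                      # chroma noise / rainbow reduction before anything sharpens it
QTGMC(Preset="Slow")        # deinterlace and stabilize
DeSpot()                    # spots and dropouts (plugin and defaults illustrative)
# Neat Video last, as a low-strength polishing pass on the cleaned output
```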
We sometimes get into an assembly-line mode when there's a lot of video to be processed -- that is, using the same filters, same settings, and same workflow for everything. Sometimes that works, but just as often it doesn't. Handling "problem" videos requires familiarity with what various filters can and cannot accomplish, and that familiarity only comes from having tried a variety of procedures. From there, you often have to experiment. Yes, this takes time. The idea that you can take a 1-hour VHS capture and run the whole thing through the same set of denoisers and color fixes simply doesn't work: VHS/analog is a changeable bitch that defeats your vanilla setup with almost every scene change.
Display: set your TV to automatically display whatever it gets, and to display it correctly. Why TVs have different names for this setting is a mystery to me, but it's usually "Full", "Auto", "Normal", or something like that. If you're like my clueless father-in-law, you hit the TV resize button for everything that comes in, and it's almost always stretched regardless of format. This guy even zooms 1920x1080 and can't figure out why he can't see the bottom animated border display on sports shows or the Weather Channel. Trying to adjust and readjust both player and TV for every source is an exercise in futility. Set your TV so that it automatically adjusts for whatever it gets, whether it's 480, 720, 1080, whatever. Then leave it alone and let it do what it was designed to do.
Players are a different story. I have a $400 player that often acts like a $30 Coby cheapie when it comes to sizing properly, and it behaves differently with component and HDMI output. Not only that, but this player and my others have different names for the same AR settings. Basically you have to tell your player what screen format you're using. You should never have to change that setting. If you're sending output to a 16x9 or 4:3 TV, set it for that and leave it there. There is likely another setting for the source video. This is where players give you trouble. I have one player that has an "Auto" setting, supposedly telling me that it will output the correct AR for 4:3 or 16:9. But it does some silly stuff, like displaying a 4:3 movie correctly but squishing a front menu that should display at 16:9. I'm not about to fiddle endlessly with the ratio controls just for the menu. Yet I have another, older player that does it all correctly, so go figure.
So you might have to fiddle with a player now and then, but there's no sense fiddling with player + TV at the same time. It's a losing proposition.

Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end. -- Henry David Thoreau