VideoHelp Forum
  1. Member sphinx99
    Join Date: Jan 2007
    Location: United States
    This is a technique I do all the time on scans of film, right up to large format and glass plates. Scan two or more times, then overlay and average. Given one important assumption (the ability to position properly), system noise can be dealt with rather effectively.

    My question (and I have not tried this myself but was contemplating it) was whether the technique would apply to video captures from analog tape. Same assumption would apply: same capture frame-to-frame and the ability to line up each frame.

    Personal thoughts are: I've done enough single-frame captures of analog video to believe that frame-perfect overlay is achievable, but I suspect that tape flutter and tracking would cause misalignments when trying to overlay two captured frames pixel for pixel. But I haven't tried.

    Thoughts?
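    For reference, the averaging step I have in mind is nothing more than this, sketched in Python/NumPy (the file names and the use of OpenCV to read the frames are placeholders for illustration only, not a recommendation of any particular tool):

        # Average several already-aligned captures of the same frame;
        # independent noise should drop by roughly sqrt(N) for N captures.
        import numpy as np
        import cv2  # only used here to read/write the sample frames

        paths = ["capture1.png", "capture2.png", "capture3.png"]  # hypothetical files
        frames = [cv2.imread(p).astype(np.float64) for p in paths]

        avg = np.mean(np.stack(frames, axis=0), axis=0)
        cv2.imwrite("averaged.png", np.clip(avg, 0, 255).astype(np.uint8))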
  2. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    Originally Posted by sphinx99
    This is a technique I do all the time on scans of film, right up to large format and glass plates. Scan two or more times, then overlay and average. Given one important assumption (the ability to position properly), system noise can be dealt with rather effectively.

    My question (and I have not tried this myself but was contemplating it) was whether the technique would apply to video captures from analog tape. Same assumption would apply: same capture frame-to-frame and the ability to line up each frame.

    Personal thoughts are: I've done enough single-frame captures of analog video to believe that frame-perfect overlay is achievable, but I suspect that tape flutter and tracking would cause misalignments when trying to overlay two captured frames pixel for pixel. But I haven't tried.

    Thoughts?
    The technique is called frame averaging, and it has been used successfully for stills and film to average out some noise. "Analog video" from tape has a spoiler called time base error. Analog video playback has major horizontal jitter (pixels in motion even on stationary shots). Time Base Correction (TBC) reduces the H jitter to a limited extent, but simple field/frame averaging may introduce artifacts and smear motion.

    The better digital noise reducers use motion detection algorithms to separate near stationary pixels from pixels in motion and only apply frame averaging (or other techniques) to the stationary pixels. The pixels in motion are unfiltered or switched to single field data.
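    To make that concrete, a very rough sketch of the idea in Python/NumPy (the absolute-difference motion test and the threshold value are simplifications assumed purely for illustration; real denoisers use proper motion estimation):

        # Motion-adaptive temporal averaging over three consecutive frames:
        # average where a pixel is (nearly) stationary, pass the current
        # frame through unfiltered where it appears to be in motion.
        import numpy as np

        def adaptive_average(prev, cur, nxt, threshold=8.0):
            prev, cur, nxt = (f.astype(np.float64) for f in (prev, cur, nxt))
            # crude motion test: how much does this pixel change over time?
            motion = np.maximum(np.abs(cur - prev), np.abs(cur - nxt))
            stationary = motion < threshold
            averaged = (prev + cur + nxt) / 3.0
            return np.where(stationary, averaged, cur)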

    The science behind this is related to human visual perception. The eye sees noise in the stationary portions of the picture; noise is detected by the eye as motion and is thus unnatural or confusing. Pixels in motion are resolved as motion vectors at low resolution. The brain only "cares" that the object is in motion and in which direction. This is a threat response.

    I'll stop for that to sink in. Ask for more clarification.
    Last edited by edDV; 10th Mar 2010 at 23:23.
  3. Member sphinx99
    Join Date: Jan 2007
    Location: United States
    Thank you for the reply. (Emphasis on first paragraph; the remainder I'm comfortable with.) So, the inevitability of horizontal jitter means that frame averaging may reduce horizontal resolution but provide a measure of noise reduction... the question then becomes whether it's worth the trade-off given a stable transport and good line TBC.

    I captured a frame three times with an AG-7750 and tabbed between them in VirtualDub; the frames looked to be properly aligned, and indeed there were some subtle variations in noise (but not detail) between them. It occurs to me that two streams would not be enough to deal much of a blow to noise, and capturing 4+ and doing an AviSynth overlay seems... overkill... but I might try it to humor myself. It does seem to me that for truly precious material, this could be a way to increase SNR without some of the trade-offs that come with temporal NR.

    One other thought - though this would not be a matter for a simple overlay, if the source is mediocre SD material (S-VHS/Hi8, etc.) being captured into a 720x480 frame, it might be possible to do a software-based smart overlay that compensates for horizontal jitter in 1-pixel increments, which should generally fall within the limits of what detail the source material contains in the first place.
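    A rough sketch of that kind of one-pixel "smart overlay" in Python/NumPy, assuming luma-only 2-D arrays and a simple sum-of-absolute-differences match per scanline (all of which are assumptions for illustration, not a tested method):

        # Estimate an integer horizontal offset for each scanline of a capture
        # against a reference capture, shift it into place, then average the
        # shifted captures as usual.
        import numpy as np

        def line_offset(ref_line, cap_line, max_shift=3):
            # try shifts of -max_shift..+max_shift, keep the best SAD match
            best_shift, best_err = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                err = np.sum(np.abs(ref_line.astype(int) - np.roll(cap_line, s).astype(int)))
                if err < best_err:
                    best_shift, best_err = s, err
            return best_shift

        def align_to_reference(ref, cap, max_shift=3):
            aligned = np.empty_like(cap)
            for y in range(cap.shape[0]):
                # np.roll wraps at the edges; good enough for a sketch
                aligned[y] = np.roll(cap[y], line_offset(ref[y], cap[y], max_shift))
            return aligned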
    Last edited by sphinx99; 11th Mar 2010 at 00:25.
  4. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    Originally Posted by sphinx99
    Thank you for the reply. (Emphasis on first paragraph; the remainder I'm comfortable with.) So, the inevitability of horizontal jitter means that frame averaging may reduce horizontal resolution but provide a measure of noise reduction... the question then becomes whether it's worth the trade-off given a stable transport and good line TBC.

    I captured a frame three times with an AG-7750 and tabbed between them in VirtualDub; the frames looked to be properly aligned, and indeed there were some subtle variations in noise (but not detail) between them. It occurs to me that two streams would not be enough to deal much of a blow to noise, and capturing 4+ and doing an AviSynth overlay seems... overkill... but I might try it to humor myself. It does seem to me that for truly precious material, this could be a way to increase SNR without some of the trade-offs that come with temporal NR.

    One other thought - though this would not be a matter for a simple overlay, if the source is mediocre SD material (S-VHS/Hi8, etc.) being captured into a 720x480 frame, it might be possible to do a software-based smart overlay that compensates for horizontal jitter in 1-pixel increments, which should generally fall within the limits of what detail the source material contains in the first place.
    The problem is that if there is any motion in the frame it will be blurred. Frame averaging only works for stills.
  5. Member sphinx99
    Join Date: Jan 2007
    Location: United States
    Of course. But I think that with a good transport, jitter during capture is minimized enough that I could capture the same sequence of tape three times and stand a good chance of overlaying them without too much loss of fine detail, particularly since I'm overlaying 720x480 frames that don't exactly contain 720x480 worth of detail. After writing the above post I checked a few more frames, and it's remarkably consistent considering I hit rewind, capture, rewind, capture, rewind, capture, then started comparing. Things line up virtually to the pixel.
  6. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    Originally Posted by sphinx99
    Of course. But I think that with a good transport, jitter during capture is minimized enough that I could capture the same sequence of tape three times and stand a good chance of overlaying them without too much loss of fine detail, particularly since I'm overlaying 720x480 frames that don't exactly contain 720x480 worth of detail. After writing the above post I checked a few more frames, and it's remarkably consistent considering I hit rewind, capture, rewind, capture, rewind, capture, then started comparing. Things line up virtually to the pixel.
    Consumer VCR H jitter can be several pixels wide, but if you have well-defined vertical picture content you can line up the frame-cap samples manually. VHS luminance is low-pass filtered to 3 MHz during recording. 13.5 MHz sampling has a Nyquist bandwidth of about 6.75 MHz, so VHS is approximately double-sampled in the horizontal. That means any black-to-white vertical transition will be at least 4 pixels wide out of 704. A small misalignment of frames will cause a loss of horizontal resolution when averaged.

    Live TV captures are horizontally stable. H jitter is in the 1 nanosecond range (on the order of 0.01 pixel width), so frame averaging will give much better results.
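    The arithmetic behind those figures, spelled out using the 13.5 MHz, 3 MHz and 1 ns numbers quoted above:

        # Back-of-envelope numbers for the statements above.
        sample_rate = 13.5e6              # Hz, BT.601 luma sampling
        nyquist = sample_rate / 2         # 6.75 MHz usable bandwidth
        pixel_period = 1.0 / sample_rate  # ~74 ns per luma sample

        vhs_luma_bw = 3e6                 # Hz, approximate VHS luma low-pass
        print(nyquist / vhs_luma_bw)      # ~2.25x: VHS is roughly double-sampled horizontally

        jitter = 1e-9                     # s, broadcast-grade H jitter
        print(jitter / pixel_period)      # ~0.0135 pixel: effectively stable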
  7. Member 2Bdecided
    Join Date: Nov 2007
    Location: United Kingdom
    sphinx99 - it doesn't work. Most of the noise (all of the noise, in a good capture set-up) is on the tape itself. You get the same noise each time you capture.

    Any apparent noise reduction is simply due to the horizontal softening caused by averaging slightly mis-aligned captures.
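    That's easy to convince yourself of with a toy simulation in Python/NumPy (the noise levels here are made up, purely to show the principle):

        # Averaging only removes noise that differs between captures; noise
        # that is already on the tape is identical every pass and survives.
        import numpy as np

        rng = np.random.default_rng(0)
        signal = np.zeros(100_000)
        tape_noise = rng.normal(0, 5, signal.shape)   # same on every playback
        captures = [signal + tape_noise + rng.normal(0, 1, signal.shape) for _ in range(4)]

        print(np.std(captures[0]))                 # ~5.1 (tape + capture noise)
        print(np.std(np.mean(captures, axis=0)))   # ~5.0 (capture noise down, tape noise untouched)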


    Whereas AVIsynth's spatial, temporal, and motion compensated denoisers are really good. mvdegrain or mc_spuds are great places to start (probably all you'll ever need!).

    Cheers,
    David.
  8. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    Although I agree with your recommendation, it would be possible to frame-average a stationary scene so long as the caps were spaced over several separate frames. There would be some noise randomization. It wouldn't work if there were any motion present.
  9. Member sphinx99
    Join Date: Jan 2007
    Location: United States
    2B (David) - thank you for the reply. After looking at some more frames in detail, I think you are right. This was a good learning experience, as I had assumed that most noise I saw was related to interactions between tape and head, or generated somewhere on the video path after the tape. It's too bad it appears to be right on the tape.

    I know about software NR as I own NeatVideo which IMO gives me far superior results to any combination of AVIsynth filters I've tried. I was hoping to reduce the need for that sort of noise reduction, but I guess not.

    So, where does the noise come from? Is it recorded on the tape by the original recorder due to noisy electronics? Is it usually from the original source, in this case a video camera connected via RCA composite video? (I don't think so - I've seen raw footage from old cameras captured directly to digital and it's not so bad.)
  10. Video Restorer lordsmurf
    Join Date: Jun 2003
    Location: dFAQ.us/lordsmurf
    I experimented with this about 6 years ago -- the output gave me a headache to watch, although you could argue that it did look "better" in some ways.
  11. Member vhelp
    Join Date: Mar 2001
    Location: New York
    From my experience working on video filter designs, I have worked with several variations of this scenario.

    The key problem here (in addition to the ones already stated above) is the capture equipment. You see, each piece of capture equipment has internal functions that do things to the video. One of them is filtering. As video streams through the card and passes through the <filter>, the pixels in each frame are altered. This hardware filter could be anything from a comb filter to NR to any number of proprietary internal workings of the card. So when you throw in several captures of the same source, you get different "filtered" results: same scenes, but different "filtered" pixel values.

    Look at it another, cruder way, as a 3x3 block of pixels, where r is a random noise or internal filter value (part of the hardware equation).

    pass 1, say r = 1 in this pass:

    1 2 1 : 2+r : 3 5 3
    1 2 1 : 2+r : 3 5 3
    1 2 1 : 2+r : 3 5 3

    pass 2, say r = 3 in this pass:

    1 2 1 : 2+r : 5 7 5
    1 2 1 : 2+r : 5 7 5
    1 2 1 : 2+r : 5 7 5

    The picture detail and information are the same (i.e., a person, place, or thing), but the background noise (like a screen or mask) will be different. So when you average the two, three, or more pictures, you get a strange outcome.
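    In code, and reading the "2+r" column above as (2 × pixel + r) so that the pass values come out as shown (that reading is an assumption, purely for illustration):

        # The same 3x3 source block "captured" twice with different pass
        # values of r, then averaged: the result matches neither capture.
        import numpy as np

        source = np.array([[1, 2, 1]] * 3)

        pass1 = 2 * source + 1      # rows of 3 5 3, with r = 1
        pass2 = 2 * source + 3      # rows of 5 7 5, with r = 3

        print((pass1 + pass2) / 2)  # rows of 4 6 4: a third, in-between picture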

    Maybe someone can build on this analogy, or explain it better.

    The idea or theory is good and still has room for validity (I would love to try it myself sometime), but if the source is too erratic or unstable, like VHS, this will be hard to work out. I still think it's doable, though. Just as a test it would be interesting. Maybe someone with lots of time on their hands will give the theory a go, if only for the challenge and the educational aspects. Anyway, adding a TBC that truly stabilizes the source will do wonders for the above theory. The more professional the footage (i.e., shot on a tripod, etc.), the better the results.

    -vhelp 5348
  12. Member 2Bdecided
    Join Date: Nov 2007
    Location: United Kingdom
    Originally Posted by sphinx99
    I know about software NR as I own NeatVideo which IMO gives me far superior results to any combination of AVIsynth filters I've tried.
    Like you, I've tried both. I don't really use NeatVideo for standard VHS captures. I find it's great if there's some pattern to the noise (e.g. sensor/camcorder related), but if it's truly random (e.g. VHS tape related) then I don't think NeatVideo's intra-frame denoising is as useful - and its inter-frame denoising is far, far worse than a good motion-compensated AVIsynth denoiser.

    More to the point, both are slow!

    Cheers,
    David.