Thanks to the recommendations from many here that finally drilled into my hard head the benefits of different types of playback TBCs, like the AVT-8710 and the TBCs built into JVC and other VCRs. Hardware TBCs of various capabilities are the only way to go, correcting bad vibes from poor input sources before they're captured.
It's well known that old VHS tapes stretch and have other problems that result in highlight shimmer, wiggles in straight lines, irritating shifts of moving objects in images, etc. My new (used, rebuilt) HR-S7600 has some nice features that correct many of these anomalies before they ever reach my capture card.
But...I understand that these line-level TBCs work mostly via a memory cache of some kind. They store a number of lines, analyze them, make corrections, then move them to output. This makes me wonder: is there such a thing as a software line-based TBC? VirtualDub, AviSynth, Premiere, etc., have oodles of filters that do all kinds of things. Why not a software-based TBC that does the same thing? I realize that a full-frame software version is probably impossible -- full-frame would need the entire, original image to correct that kind of tearing and jumping. But what about software line-level TBCs to correct wiggly lines, etc.? I've searched the web, found nothing.
-
Last edited by sanlyn; 19th Mar 2014 at 00:41.
-
No. Something like this is not possible. The difference is analog signal processing vs digital signal processing. Once errors are digitized, you're stuck with them.
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
Shucks.
Think I'll read up on it (that oughtta be enough to fill my retirement years with a futile project). I do wonder why I find nothing, anywhere, that sez anyone ever tried. But come to think of it...was analog, now it's digital. Hhmmmm.
-
You'd think there would be a way for a program to analyze the frame and align the fields individually based on analysis of the image in the frame. Sometimes you can see the black left or right edge all distorted and jagged. If my eye can see it and tell it's jagged, seems like a program could make the same analysis and correct it. I say it's not impossible... just difficult.
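Darryl's edge-analysis idea can be sketched in a few lines. This is an illustrative toy, not a real filter: it treats a frame as a list of rows of 8-bit luma values and measures how far each line's apparent left edge strays from the median edge position (the function names and the threshold are my own, not from any existing tool).

```python
# Toy jitter detector: find the left border edge in each scan line and
# compare it to the median position across the frame.

def edge_positions(frame, threshold=64):
    """Column index of the first pixel brighter than threshold in each
    row -- the apparent left edge of the picture."""
    positions = []
    for row in frame:
        pos = next((x for x, v in enumerate(row) if v > threshold), len(row))
        positions.append(pos)
    return positions

def per_line_jitter(frame, threshold=64):
    """Offset of each line's edge from the median edge position.
    A clean capture gives all zeros; a jagged edge gives +/- offsets."""
    pos = edge_positions(frame, threshold)
    median = sorted(pos)[len(pos) // 2]
    return [p - median for p in pos]

# Tiny synthetic frame: black border then white picture, with the
# third line shifted right by one pixel (simulated H jitter).
frame = [
    [0, 0, 200, 200, 200],
    [0, 0, 200, 200, 200],
    [0, 0, 0, 200, 200],   # edge arrives one pixel late
    [0, 0, 200, 200, 200],
]
print(per_line_jitter(frame))  # -> [0, 0, 1, 0]
```

If the eye can see the jagged edge, this is roughly the measurement a program would make; the hard part, as the rest of the thread argues, is everything after the measurement.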
Darryl -
LS, this is not completely impossible. It is impossible using a standard A/D video capture device. As you say, once the time-base errors are digitized, you're stuck.
But, if you were to digitize a video signal at sufficiently high resolution, including all the sync and color sub-carrier (i.e. do a "pure" A/D conversion w/o any reference to video timing), you could reconstruct the video in SW correcting time-base errors. That is, do in software what a hardware TBC does.
Of course, this cannot be done using a video capture card, which is what sanlyn was talking about, I assume. And I haven't even thought of what sample frequency you would need (several times the color sub-carrier, I'd guess).
Steve -
The amount of time and computing power it would take makes it impractical when you can fix it in real time, before capturing, for relatively little money (relative to the aforementioned time/computing power, that is).
-
VHS H-phase jitter is so wide that you would need to sample at something like 10x 6MHz (assuming you are only going for 3MHz luminance and 10 pixel widths correction).
Uncompressed 60Ms/s luminance would result in a capture file ~150GB /hr or ~900GB for a 6Hr EP tape. Chroma captured at 1MHz for U and V would add ~50GB/hr each for a total of ~250GB/hr or ~1.5 TB.
That is if you capture the entire tape. Processing time on this file with software might be possible with future personal computers but the question remains. Why do it that way when it can be done real time with hardware?
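For the curious, the storage arithmetic above is easy to sketch. Straight 8-bit sampling at 60 Msamples/s comes out at 216 GB/hr raw; the keep_fraction parameter (discarding blanking intervals) is my own assumption about one way to land nearer the ~150 GB/hr figure quoted above, not a method stated in this thread.

```python
# Back-of-the-envelope storage rate for an uncompressed PCM capture.

def gb_per_hour(sample_rate_hz, bits_per_sample=8, keep_fraction=1.0):
    """GB (1e9 bytes) of capture file per hour of tape.
    keep_fraction < 1.0 models discarding blanking intervals."""
    bytes_per_second = sample_rate_hz * bits_per_sample / 8 * keep_fraction
    return bytes_per_second * 3600 / 1e9

print(round(gb_per_hour(60e6)))                      # raw luminance: 216 GB/hr
print(round(gb_per_hour(60e6, keep_fraction=0.75)))  # minus blanking: 162 GB/hr
```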
PS: There is another way to approach the problem, with subpixel processing. This technique works well at full D1 resolution to trade bit depth for pseudo-enhanced resolution. This is the basis for anti-alias filtering and can be used to make objects (like titles or graphic objects) appear to move in subpixel increments, or to create the appearance of detail within the resolution of a single pixel. Applied to jitter reduction filtering, this technique might remove apparent H jitter from the image at the expense of picture detail. But again, subpixel filtering is extremely CPU intensive, and VHS signal-to-noise severely limits the gray levels available to trade off for jitter reduction.
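The subpixel idea can be illustrated on a single scan line: resampling with linear interpolation lets a line "move" by a fraction of a pixel, and the blending of neighboring samples shows exactly the detail loss mentioned above. A toy sketch; the function name and sample values are mine:

```python
# Shift one scan line right by a fractional pixel offset (0 <= frac < 1)
# using linear interpolation. Output sample x is the input evaluated at
# position x - frac, i.e. a blend of line[x-1] and line[x].

def shift_line(line, frac):
    out = []
    for x in range(len(line)):
        left = line[x - 1] if x > 0 else line[0]  # clamp at the edge
        out.append(frac * left + (1 - frac) * line[x])
    return out

# A hard black-to-white edge, moved half a pixel: the edge position
# shifts, but the transition is now smeared across a gray sample.
print(shift_line([0, 100, 100], 0.5))  # -> [0.0, 50.0, 100.0]
```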
http://en.wikipedia.org/wiki/Anti-aliasing
http://en.wikipedia.org/wiki/Supersampling -
True, greymalkin, if one already has the hardware TBC then that's the way to go. I suppose a software TBC would come into play if the signal were already digitized and the original analog was gone or damaged. And the computing power required... ahem, well ...this is all totally conjecture, and none of us have a super-computer like those at NASA, but I guess that's what would be required.
But so far there are considerations I didn't think about, yet there are clues that could point to a means. Obviously this thread could go on for years. But perhaps this has put a bug in the ear of someone who considers all this fairy-tale stuff. My 30 years of programming dealt with databases, not imaging. But, then, I worked with people who came up with some brilliantly simple solutions to complex problems: one day it took 6 hours to gather certain data, the next day it shows up in 10 seconds. But thanks for those ideas. Now if there were only enough time...
-
What about the old Flaxen VHS filter?
Link ---> http://neuron2.net/flaxen/flaxen.html
I remember playing around with it and getting some decent results but this was on a VHS that was captured ... without a TBC ... but was of very high quality to begin with so ... take that as you will.
I haven't tried this filter in a LONG time now but it is the only software TBC (like) filter that I know of.
- John "FulciLives" Coleman
"The eyes are the first thing that you have to destroy ... because they have seen too many bad things" - Lucio Fulci
EXPLORE THE FILMS OF LUCIO FULCI - THE MAESTRO OF GORE
-
I added more above.
Bottom line, image processing requires more calculation than you want to do routinely with software. Note the need for a display card vs. letting the CPU handle display.
Software is used to develop algorithms (usually on very small pictures) and the result is encoded into hardware (e.g. custom GPU+dedicated memory) to get realtime processing and lower overall cost. Development costs are high, per unit costs are low in volume.
Forensic analysis of existing material is very compute intensive. The security industry is the place to look for these techniques (e.g. extraction of a license plate number from a security camera VHS tape, or ID of a gun being used). Very large computers are used and the results are marginal vs. what Hollywood shows on CSI.
Recommends: Kiva.org - Loans that change lives.
http://www.kiva.org/about -
The really cool stuff is used for satellite and UAV (e.g. Predator or Global Hawk) recon. But they start with optimal camera resolution (limited only by atmospherics) so they can filter down.
PS: Movie guys have similar luxury by working from 35/70mm film transfers or now HDCAM SR (1920x1080 4:4:4 RGB 440 or 880 Mb/s). -
Evening guys
Here's my take on this topic..
The bottom line is, the electronic properties/attributes/influences etc. are no longer available once the image is finalized inside the container, the AVI.

A TBC is designed to work with electrons and things, not AVI pixels in the form of an image. The items responsible for influencing the data during the capturing (electron) phase are only available during real-time processing. They are nowhere to be found in an AVI container, which is just a simple matrix of numbers with various values that make up the actual image. And that image includes whatever influenced the data during the real-time live capturing process.

This is kind of hard to do once the video is an actual video frame and away from the "live" process. I mean, all the data is flowing through the chain of events in the form of electrons and things, you know. It's only during this "live" phase that correction can happen, because some of the attributes of the video/electrons are being influenced (negatively or positively) during the "live" process of capturing this data, and that is when the TBC can be in place to handle those specific influencing agents. But when the data is finalized to an AVI, all those influencing agents are no longer there to apply the TBC to; the finished video [AVI] is complete, with whatever "helped" influence it into the video it is, be it noise, flagging, discoloration, etc. And that is partly on account of the streaming data being available, sort of like non-influenced pixels waiting to be worked on before hitting the AVI container.

But once all the pixels are captured and processed and finally inside said container, the AVI, it's a done deal. The data (that you want to TBC through software) is no longer in the form of "influences" (or whatever they all actually are) to be TBC'ed once sent to the container, the AVI. And so, the influences (or time-base errors, or whatever) are now permanent pixels. And if you try to remove them (by some clever method of synthetically created hallucinations), all you would be doing is moving pixels that are part of a finished image.
-vhelp 3997 -
I believe there already are scientific instrumentation A-D capture cards that could sample a baseband/broadband RF or IF electrical signal. What you would get would be a raw PCM file that included everything: video frame, color pilot, front porch/back porch, vertical interval, etc. Basically, what you would see if you ran a fast scrolling printout of a waveform monitor signal. Much like an audio signal, but at ridiculously higher sampling rates.
From there, getting a viewable signal would require some serious pattern matching algorithms to find where a frame begins and ends, where a line begins and ends, what is vertical interval, what is chroma and what is luma, etc. before even arriving at what the pixels are. Whew!
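A first step of that pattern matching, hunting for horizontal sync pulses in the raw sample stream, might look like the sketch below. Sync tips sit below blanking level in a composite signal, so a simple threshold-and-run-length scan finds candidates; the levels and widths here are made up for illustration, and real detection must also cope with noise and the serration/equalizing pulses of the vertical interval.

```python
# Locate horizontal sync pulses in a raw PCM sample stream by finding
# runs of samples at or below the sync-tip level.

def find_sync_starts(samples, sync_level=20, min_width=4):
    """Start index of each run of at least min_width consecutive
    samples at or below sync_level (one hit per run)."""
    starts, run = [], 0
    for i, s in enumerate(samples):
        if s <= sync_level:
            run += 1
            if run == min_width:
                starts.append(i - min_width + 1)
        else:
            run = 0
    return starts

# Two fake scan lines: sync tip (0), blanking (40), active video (120).
line = [0] * 5 + [40] * 3 + [120] * 12
samples = line * 2
print(find_sync_starts(samples))  # -> [0, 20]
```

The spacing between successive starts recovers the line period; from there a reconstructor could resample each line to a uniform grid, which is the software analogue of what a hardware TBC's write clock does.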
That's why there's still strong reasons for using good analog circuits, not just doing everything "digitally".
Scott -
Originally Posted by vhelp
Seriously though, you've missed part of the point. If the composite video signal is digitized at a high enough sample rate -- including sync pulses, color bursts, video data -- it would be possible, in theory, to apply a software TBC and generate the sequence of images in an AVI from that raw data. That is, to do in S/W what a H/W TBC does. I'm not saying that this could be done from the AVI, but from a raw digital stream. It is entirely possible, but certainly not practical, due to the data rates needed.
Past that, with enough CPU cycles available, it also would be possible to do at least a part of the job of a TBC by analyzing frames from the AVI. If you could identify a straight line in the image, and you found scan lines where that (supposedly) straight line wavered, you would know that those scan lines needed to be "fixed" because they (probably) had a time-base error. Would this be easy? Of course not. Would it be easier to do when you still had access to the analog video signal (all those electrons and things)? Absolutely! Would it make better sense to buy a H/W TBC? Of course. But some of this could be (and is being) done.
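As a toy illustration of that partial-TBC idea (shift lines so a supposedly straight border actually is straight), here is a sketch. Everything in it is hypothetical: it does integer-pixel shifts only, and a real filter would need sub-pixel resampling and far smarter feature detection than "first bright pixel".

```python
# Straighten a frame's left border: measure where each row's edge is,
# then shift each row so the edge lands at the median column.

def straighten(frame, threshold=64):
    """Shift each row so its first bright pixel lands at the median
    edge column. Shifted rows are padded with their edge values."""
    edges = [next((x for x, v in enumerate(r) if v > threshold), 0)
             for r in frame]
    target = sorted(edges)[len(edges) // 2]
    fixed = []
    for row, e in zip(frame, edges):
        d = e - target            # positive: edge arrived late
        if d > 0:
            fixed.append(row[d:] + [row[-1]] * d)
        elif d < 0:
            fixed.append([row[0]] * (-d) + row[:len(row) + d])
        else:
            fixed.append(list(row))
    return fixed

frame = [
    [0, 0, 200, 200, 200],
    [0, 0, 0, 200, 200],   # wiggled line: edge one pixel late
    [0, 0, 200, 200, 200],
]
print(straighten(frame))   # all three rows line up
```

Note what this cannot know: whether the picture content itself was supposed to be straight. That is the guessing game a hardware TBC never has to play, because it works from the sync timing instead of the image.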
Steve -
FYI: TBC basics.
This Fortel Patent describes how a traditional digital TBC works fairly well in the description section. Fortel claims a different technique using interpolation of new pixels.
http://www.wipo.int/pctdb/en/wo.jsp?IA=US2000006778&DISPLAY=DESC
Here is the original CVS patent from 1973. It was this device that allowed unstable helical scan VCRs to be played to broadcast and launched the electronic news gathering revolution.
http://patft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2...&RS=PN/3860952
Here is a good layman's intro describing the 1977 Ampex TBC-1.
http://www.labguysworld.com/Ampex_TBC-1.htm -
Morning everyone.
Steve wrote:
I'm not saying that this could be done from the AVI, but from a raw digital stream.
sanlyn wrote:
But...I understand that these line-level TBCs work mostly via a memory cache of some kind. They store a number of lines, analyze them, make corrections, then move them to output. This makes me wonder: is there such a thing as a software line-based TBC? VirtualDub, AviSynth, Premiere, etc., have oodles of filters that do all kinds of things. Why not a software-based TBC that does the same thing? I realize that a full-frame software version is probably impossible -- full-frame would need the entire, original image to correct that kind of tearing and jumping.
Neither of those quotes says anything about a raw digital stream. And, I don't think that sanlyn was saying this either. But, at this point in the discussion, it could well be that someone is now putting words into *his* mouth, and his tune will change, and he will now be guiding his questions on your suggestion, "..raw digital stream".

Perhaps you saw the above and were basing your comments on that. OK. If that be true, then this is still not possible, because no TBC device outputs this kind of information in some form of streamed data. (sanlyn said he did search the web for this information, and found nothing.) Sometimes, things are just better left to hardware electronics, and for an obvious reason - speed.

Consider this: what a TBC does in real time "per field/frame", vs. what an (in theory) software-based TBC could do, even on today's fast computers, would just not be practical for today's video processing.

Also consider this: what if (in terms of TBC) the processing of video can only be done in hardware electronics, for that same obvious reason - speed? Maybe there are things or areas in the "unknown" that just can't be done via software.

Tell you what.. why don't we all sit down and write a list cataloging everything that a TBC processes, and work through each item together to see how we could turn it into a software algorithm, so that we might test (in theory) how fast *each* of those items on the list will perform?

It would be a start, at least to settle whether or not a software-based TBC is possible *or* plausible (worthwhile) in the end.
-vhelp 3998 -
Originally Posted by edDV
These TBCs actually sampled direct color video at 3X the color subcarrier (3 fsc), or about 10.7 MHz. As memory chips became more prevalent, subsequent TBCs began sampling at 4 fsc. Then 4:1:1, 4:2:2, 4:4:4, etc. sampling started showing up.
The residual timebase errors that were embedded into the video data during the A/D conversion were removed in these TBCs by modulating the D/A sampling clock with a "velocity compensation" profile during the conversion back to analog. By measuring the residual timebase error at the beginning of each line and storing that measurement in memory along with the video data, a first (and sometimes second) order profile was then constructed on the read side of memory to spread the error difference across the line as accurately as possible. This significantly reduced the timebase error towards the end of each line. -
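The first-order "velocity compensation" profile described above amounts to a linear ramp: interpolate the measured timing error from this line's start to the next line's start, so samples near the end of the line get progressively more correction. A minimal sketch, with made-up error values (the function name is mine):

```python
# First-order velocity compensation profile: spread the difference
# between consecutive per-line error measurements across the line.

def velocity_profile(err_start, err_next, n_samples):
    """Per-sample timing-error estimate, ramping linearly from the
    error measured at this line's start toward the next line's."""
    return [err_start + (err_next - err_start) * i / n_samples
            for i in range(n_samples)]

# Error measured as 0.0 samples at this sync pulse, 1.0 at the next:
print(velocity_profile(0.0, 1.0, 4))  # -> [0.0, 0.25, 0.5, 0.75]
```

Each value would then drive the D/A sampling clock (or, in a software analogue, a fractional-pixel resampler) at that point in the line.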
Thanks to all for these great ideas. And to edDV for the links to all that really thick reading matter. From all this I get some enlightenment about one of my basic misconceptions (or sloppy thinking, whatever), one that I suspect is shared by others.
Those wigglies and many other linear visual distortions on a typical VHS tape may or may not actually exist on the tape itself, but some people seem to think so. As I see from this discussion, the defects are a result of TIMING errors: deformed tape, clumsy transport, voltage problems, etc. The line might be straight on the tape, but parts of the line's image go through the player's output at different times. So you get wiggles and deformation on playback. A line-scanning TBC doesn't really correct the image itself; rather, by its name and nature a TBC (TIME Base Corrector) makes corrections in transmission. Once that image leaves the part of its universe based on TIME and enters the universe of NUMBERS, the timing factor can't (reasonably) be corrected, because the TIME factor ain't there any more to be fiddled with. There. Hope I have that right now.
For those who wondered, I was thinking of a software TBC in terms of correcting wigglies on a video stream that has already been committed to digital, with distortions now unfortunately preserved in the form of numbers rather than analog vibes.
FulciLives: yes, I use the FlaXen VHS filter a lot, but it won't fix wiggles, nor will it fix that crummy VHS edge-shimmer. However, it did fix a lot (but not all) of those vague, floating gray lines that come thru on some cable signals. In that respect, though, the temporal cleaner worked even better. But those gray bars aren't timing errors; they are discolorations. I've been able to get rid of those mostly in VDub, but not the slow-moving wiggles (ripples in the image, especially on moving objects) that apparently are due to timing distortions from transmission interference en route to your VCR or TV. I *GUESS* those slow-moving ripples are timing errors of some kind, because my JVC's TBC seems to make them disappear. I gave up trying to do that with software.
This kind of time error apparently came thru my antenna, from which (in an emergency) I taped several minutes of a PBS broadcast. David McCullough sits talking in a chair, and the camera slowly moves toward him. In the background are shelves of books and an old typewriter. During this scene a series of slow horizontal ripples and gray bars move steadily upward, distorting the edges of the books and making the keys on the old typewriter do a slow wiggle. With my JVC's TBC on, the ripples go away and all edges and typewriter keys remain stable.
However, the hardware didn't clear up the moving gray horizontal discoloration. Jim Casaburi's temporal cleaner cleaned that up nicely, at its default setting. Softened the image a bit, though. Adding a touch of chroma smoother also helped.
-
Originally Posted by sanlyn
The purpose of a TBC is to eliminate TIMING errors. This is done by deriving a sampling clock with an error profile that matches the timebase error of the analog video signal as closely as possible. The common timebase errors in the clock and the video then cancel each other during the A/D conversion. The characteristics of the sampling clock generated by each particular TBC determines its timebase correction capabilities.
Note that the purpose of a TBC is different from that of a Frame Synchronizer. The purpose of a Frame Synchronizer is to guarantee a continuous sequence of video frames at its output, regardless of what is happening with its video input. Pixels do not need to be modified or filtered, they simply need to be organized into continuous frames.
Perhaps a Frame Synchronizer would be a more achievable goal for software...
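A minimal sketch of the frame-synchronizer behavior just described: deliver an unbroken sequence of output frames, substituting the last good frame when the input hiccups. Dropped input frames are modeled as None and frames are treated as opaque objects; all names here are illustrative.

```python
# Frame synchronizer sketch: one output frame per input slot, no
# pixel filtering -- dropouts are replaced by the last good frame
# (or a black frame before the first good one arrives).

def frame_sync(inputs, black_frame="BLACK"):
    """Yield a continuous frame sequence; None inputs (dropouts) are
    covered by repeating the most recent good frame."""
    last = black_frame
    for f in inputs:
        if f is not None:
            last = f
        yield last

print(list(frame_sync(["f1", None, "f2", None, None])))
# -> ['f1', 'f1', 'f2', 'f2', 'f2']
```

Unlike a TBC, nothing inside a frame is touched, which is exactly why this much is plausible in software while per-line timing correction is not.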