I have a question that will take a bit to explain. I archive old animation as a hobby, and in my attempts to scale it up and make it a little clearer, I remembered a couple of ways to enhance images. The first is a pretty widely available method used mainly by astronomers, called "stacking". It takes frames from a static video or burst shots and, using the slight differences between frames, reconstructs a cleaner, more accurate image. Here are some examples of software that do this:
I also remembered an old scientific paper I read a decade ago. At the time, it was extremely new tech, but I wondered if anything like it is available now. In the paper, they used stacking like astronomers do, but with a more sophisticated algorithm along the way. Here's the paper I read:
There are even more advanced approaches now, but they seem maybe too new to have made it into affordable software:
As you can see, it is used to make a previously illegible license plate completely legible. While this is used on multiple frames to produce a single image, it made me think of how many videos are out there of things like cartoon intros, from many sources. While there may not be any two identical frames within any one video of high-action animation, we do have each frame duplicated across many videos from many different VHS tape recordings. So the algorithm would have to stack these videos on each other, aligning the frames, and dig down through the different clips.
What's especially interesting to me, and this may not be possible, is to upscale these video clips, run this process, and maybe get something resembling an HD source. And if you take something like the intro to a show released on DVD, you have many, many very high quality copies that were captured using a standard method/scale. Of course it would have problems, maybe introduce artifacts, and wouldn't compare to the real source material, but it could possibly be better than what we have now.
So on to the questions. Has that 10-year-old advanced process made it into any software available now? Has either it, or astronomy-style stacking, made it into any software that stacks videos and outputs a composite video, instead of using a video to output a single image? The thing is, we have all the tools to do that last thing, but it'd be a highly manual process.
I did an upscale test by stacking the only way I know how (which some have called "incorrect"), following this tutorial: https://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-...ith-photoshop/
Here are the results of stacking the same frame across 4 different copies of the intro found on the Thundercats DVD, one of which was a good deal worse than the others:
I don't know what more advanced algorithms would do, or require. Maybe they require more than 4 images; the minimum number Deep Sky Stacker talks about is ten. If this doesn't exist, I'm not sure how to automate the process, as it requires alignment of the frames, and with different sources that might include resizing to correct for slightly different aspect ratios. It seems more complicated than a simple AviSynth script, but maybe some software exists out there that I don't know about. Thanks for any help anyone could give me.
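For what it's worth, the core of the automation (align each capture of a frame, then median-stack) can be sketched in a few lines. This is only a toy illustration, assuming Python with NumPy; the function names are mine, it only handles whole-pixel translation (no resizing or aspect-ratio correction), and it is nothing like what the dedicated tools do internally:

```python
import numpy as np

def phase_align(ref, img):
    """Estimate the integer (dy, dx) shift of img relative to ref
    via phase correlation (peak of the normalized cross-power spectrum)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-9          # normalize; epsilon avoids divide-by-zero
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # interpret wrap-around peaks as negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def align_and_median(frames):
    """Shift every capture onto the first one, then take a per-pixel median."""
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = phase_align(ref, f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.median(np.stack(aligned), axis=0)
```

On synthetic test frames (the same image shifted and re-noised a few times), the median of the aligned stack comes out measurably closer to the clean original than any single noisy copy, which is the whole point of the exercise.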
Last edited by Jellybit; 1st Oct 2017 at 10:00.
I was interested ... until you said VHS. Higher quality source surely exists.
I'm all for restoring cartoons; it's why my hobby started 20+ years ago, but I just want to make it enjoyably watchable. What you propose is something best reserved for archivists with access to film sources (at the very least, a broadcast master on a non-VHS tape format).
As I recently mentioned in another thread, "super-resolution" can work, but there is always a compromise. In the case of SR, it trades an increase in resolution for a decrease in motion.
In the specific case of film-based (24fps) cel animation with reduced motion (e.g. animated at 12fps) recorded onto an NTSC medium, the in-between motion that SR relies on (detail which SR removes from the motion but converts into resolution) simply isn't there. It just doesn't exist, as it was never created in the first place. For full-motion footage, the super-resolution effect can work, but is usually undesirable (similar to the cam example below).
Median stacking can work, but its improvement is not an increase in resolution, it's a reduction in noise. The resolution never changes.
And the noise is a specific kind of noise.
When used with camera images, it is the noise of the sensors themselves. But to get improvement there, stacking must be done immediately after capture, and because it is using sequential images, it is unusable with any footage that has motion over the threshold of its capture-combining method. Motion-blended smear or ghosting would result. So that's why it isn't used in motion picture scanning.
Stacking can work on the playback side, using supposedly identical copies captured & combined. But the noise it gets rid of is only the noise inherent in that particular playback medium/chain. Not in the recording (master).
That is why this can work somewhat with VHS, but what it gives you is a cleaner, more noise-free image at VHS resolution. If resolution etc. was lost during the original transfer, it cannot be restored (and it is usually lost well before the final transfer to VHS, more likely when telecined & saved to the SD master, e.g. Digibeta, etc).
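To make the median-vs-mean point above concrete: a per-pixel median simply ignores an outlier copy (say, one capture riddled with tape dropouts), whereas averaging smears the dropouts into the result. A small synthetic demonstration, assuming Python with NumPy (the data here is fabricated, not from any real capture):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.random((32, 32))           # stand-in for the clean frame

# Five "captures": each with mild gaussian playback noise,
# but the last one also has severe white dropout speckles.
captures = [truth + rng.normal(0, 0.05, truth.shape) for _ in range(4)]
bad = truth + rng.normal(0, 0.05, truth.shape)
bad[rng.random(truth.shape) < 0.2] = 1.0   # ~20% of pixels blown out
captures.append(bad)

stack = np.stack(captures)
mean_img = stack.mean(axis=0)          # dropouts bleed into every averaged pixel
median_img = np.median(stack, axis=0)  # dropouts are outvoted at each pixel
```

This is also why a median stack tolerates a copy that is "a good deal worse than the others" better than a plain average does, but the output is still only ever as sharp as the underlying copies.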
To add to my earlier post (and to Cornucopia's), since it wasn't stated, the idea would be to get better scans from the better source.
Super-resolution is usually more forensic in nature, not something that is used to acquire better resolve of entertainment sources.
One of the only exceptions may be something like Doctor Who, where some creative restoration work has been done in the past 10-20 years. The original sources are not available, yet they were able to recover the footage by Frankensteining multiple sources. But that wasn't exactly super-resolution.