An additional comparison between AviSynth (just a denoise with TemporalDegrain2, nothing else) and Topaz VEAI: https://imgsli.com/MTEwNjE3
The complete thread starts here: https://forum.doom9.org/showthread.php?p=1969824#post1969824
-
Funny to see this thread alive again. I just posted some animation of my own.
https://www.youtube.com/watch?v=8GYkvKsyrco
Final Fantasy X: The Dance, upscaled from the original PS2 source, compared against said source.
https://www.youtube.com/watch?v=Vn6bc88K3Iw
Full upscaled 4K video, without comparison to other footage.
https://drive.google.com/file/d/1Hf5CgDls6wl2gdEePB0kUz2ofy3iR_06/view?usp=sharing
4K versus the blended, VapourSynth-filtered video. While it's called "QTGMC" for shorthand, this is actually a blended composite of 15 different videos created without any form of upscaling. VapourSynth-only. QTGMC was only used in some of the videos that went into creating this composite blend. This blended, composite video was the majority of the source used to create the 4K upscale above.
https://drive.google.com/file/d/1TH6iOsnIjhpKXKvGu81RDIu5uwrEqM-I/view?usp=sharing
The blended, composite QTGMC video against the original source. Unfiltered on the left.
These videos show the step-wise improvement from OG to filtered to upscaled via TVEAI.
The final 4K video is a composite of composites of composites of composites of composites. It's what I call a fifth-generation video, meaning it has been processed in Resolve and passed back to TVEAI five separate times. Each trip through Resolve re-composited the footage so TVEAI could improve it further. VapourSynth encodes were chosen for their ability to modify the footage in ways TVEAI would respond to and augment. Continually mixing the original source and VS-processed source back into the upscaled footage allows for further rounds of improvement. Output resolution is cut back to the original size in most cases after each pass through Resolve, which keeps the resolution from ballooning to unsustainable levels. Cineform and DNxHR codecs were used to prevent detail loss during the intermediate processing stages. ProRes was not used because Resolve doesn't support ProRes encoding on Windows.
The final 4K video contains inputs at a wide range of resolutions. The original source, VS composite, upscaled source without the use of VapourSynth, and various outputs created by other workflows not described here were all folded together to create the final video. Input resolutions ranged from OG source (582x416) to 4656x3328. Final output is in 4K at 3840x2160.
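For anyone who wants to experiment with the "mix the source back in" idea outside of Resolve, here is a rough VapourSynth sketch of the same concept. The file names, the L-SMASH Works source filter, the Spline36 resizer, and the 75/25 weighting are all illustrative assumptions, not the exact settings used for the videos above.
Code:
# Sketch only: build a source-heavy blend that could be fed back through TVEAI.
import vapoursynth as vs
core = vs.core

source   = core.lsmas.LWLibavSource("ffx_source_582x416.avi")  # placeholder: original PS2 capture
upscaled = core.lsmas.LWLibavSource("ffx_tveai_pass1.mov")     # placeholder: one TVEAI output

# Match dimensions and pixel format so the two clips can be merged.
source_up = core.resize.Spline36(source, width=upscaled.width, height=upscaled.height,
                                 format=upscaled.format.id)

# 75% source / 25% upscale -- the kind of mostly-source mix described above.
mix = core.std.Merge(source_up, upscaled, weight=0.25)
mix.set_output()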
-
Welcome back! Nice to hear from you again, and thanks for your sample.
I remember that our conclusion was that for your videos the best approach was to take advantage of both worlds (AviSynth and Topaz VEAI). I do not see this in your new samples. Am I wrong? -
Iollo,
I used VapourSynth instead of AviSynth for the initial 15 videos that I composited together, but some of the other encodes that I eventually folded in were also run through AviSynth. I use both.
https://imgsli.com/MTEwNjc0
This Imgsli link shows all five generations of blending via Resolve Studio.
Gen 1 is comprised of 16 videos (I said 15 above, should've checked my own work). All of them were generated in VS.
Gen 2 is eight videos. All of these are TVEAI outputs. Note that Gen 2 videos are the first iteration to go through TVEAI. Gen 1 is created by Resolve.
Gen 3 is 13 videos. Hope3 and 582-Hope are a mixture of VS output and upscaled output, which means the VS sources are being indirectly mixed back into the final product. I sometimes blend outputs that are 50-75% original source and 25-50% upscaled output. Feeding this kind of combination back through TVEAI a second time can sometimes allow the upscaler to improve it again.
Gen 4 is 17 videos. Three of the 17 files -- the ones with _new in their names -- are VS outputs with no upscaling applied. ColorsCombined, AnotherRoundDNXHR, and 4KOutAgain contain AviSynth outputs in addition to VS outputs. These files were created by separate workflows not discussed here.
Gen 5 is six videos. One of those videos is the original source without modification.
60 inputs in total, except for the fact that most of the inputs are themselves made of other inputs. I probably created several hundred versions of this output altogether and combined what worked best. -
-
Iollo,
Yeah, TVEAI is incredible for animation when paired with VS/AVS. Not all videos upscale well, but for those that do it's a more powerful combination than attempting to rely on either TVEAI or AVS/VS alone. -
but for those that do it's a more powerful combination than attempting to rely on either TVEAI or AVS/VS alone
-
Have opinions actually changed on TVEAI? The posts I've seen on the topic have been pretty vitriolic.
-
Oh, btw.
I would recommend taking your EZDenoise output and dropping it directly on top of the TVEAI output at 15% - 25% opacity. Resize the output in Resolve to match the *original* source resolution. If the source is 640x480 and TVEAI is 1280x960, tell Resolve to create 640x480 output.
I'd also test the original animated video in Proteus at something like 50-50-0-0-0-30 x2, and in Gaia HQ. Drop those two outputs + whatever output you already created into Resolve. Then, put the EZDenoise output on top. Blend them at something like 25% / 35% / 50% / 75% / 100% opacity (that's by layer, starting from the top). I do not know that Proteus will produce good output at 50-50-0-0-0-30 x2, but assuming that it does, I'd test EZDenoise / Proteus / your current encode / Gaia HQ, stacked in that order.
This may not be the best order to stack in or the right percentage weights for the footage, but that's where I'd start.
You've got a good source there. It should be easy to get a better result by combining TVEAI and AviSynth. We can get some grain and noise into the animation with Gaia HQ (but layered underneath, so it isn't very noticeable), pick up a little sharpness and clarity with Proteus, include the encode you already did, and then put EZDenoise on top of all of it to smooth the output just a touch.
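If you would rather prototype the stack in a script before building a Resolve timeline, a normal-blend opacity stack can be approximated with chained std.Merge calls in VapourSynth. This is only a sketch: the file names and weights are illustrative, and it assumes all four outputs have already been brought to identical dimensions and formats.
Code:
# Approximation of a four-layer opacity stack (bottom to top: Gaia HQ base,
# current encode, Proteus, EZDenoise). Weights are illustrative, not a recipe.
import vapoursynth as vs
core = vs.core

gaia      = core.lsmas.LWLibavSource("gaia_hq.mov")         # placeholder paths; all clips must
proteus   = core.lsmas.LWLibavSource("proteus.mov")         # share the same size and format
current   = core.lsmas.LWLibavSource("current_encode.mov")
ezdenoise = core.lsmas.LWLibavSource("ezdenoise.mov")

# std.Merge(a, b, w) = a * (1 - w) + b * w, which is what stacking layer b over
# layer a at w opacity does in a normal blend mode.
stack = core.std.Merge(gaia, current, weight=0.50)     # current encode at ~50% over the base
stack = core.std.Merge(stack, proteus, weight=0.35)    # Proteus at ~35% over that
stack = core.std.Merge(stack, ezdenoise, weight=0.25)  # EZDenoise on top at ~25%
stack.set_output()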
Resolve has a free version that is limited to 8-bit h.264 files, but if you keep your outputs in that file format it can handle this task without costing you any money.
All of this advice is a bit provisional, since I'm working with a still and not the final clip. But this approach or something similar ought to work based on that frame. -
Have opinions actually changed on TVEAI? The posts I've seen on the topic have been pretty vitriolic.
edit: added quote -
I would recommend taking your EZDenoise output...
However, I will consider your approach for my future work! -
No idea about the community; for my videos and for other general cases it is inferior to AviSynth/VapourSynth, but there are special cases where, combined with them, it works very well. Yours are such cases.
The blending steps I outline above are how one moves TVEAI from a "special case" tool to a "wow, this works on a lot of footage" tool. It still does not work on everything, and I don't want to imply you are wrong about it not working on the types of footage you want to upscale.
I also want to stress that these improvements don't require 60 videos to achieve, or five passes through Resolve. I ran this workflow out that far because I wanted to see how far I could push the envelope in terms of improvement. Even running *two* TVEAI passes with two different models can yield large improvements relative to using just a single, unblended output. -
OK everyone: what about an upscaling challenge/competition ?
We can provide the "perfect" source for it !
Check what someone else has already done:
...do you think you can do better than both ? -
Ah, I see.
Well, maybe you or someone else will find it useful. Very handy to see the impact of EZDenoise, btw.
If you do decide to experiment, EZKeepGrain in QTGMC can also help TVEAI retain detail when used at a setting between 0.5 and 1.0. And GrainFactory can be useful for providing a smoothing effect, depending on which settings you choose. Grain and noise injection can either boost detail or blur it, depending on what you need.
NoiseRestore and GrainRestore in QTGMC remain the most effective one-stop method for boosting detail via noise and grain injection. EZKeepGrain is secondarily helpful. GrainFactory's defaults are more likely to blur than boost, but this can also be useful if you need TVEAI to pay less attention to something in-scene. -
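For reference, the knobs mentioned in the post above are just parameters on the QTGMC call. A minimal VapourSynth example via havsfunc might look like the sketch below; the values sit within the ranges mentioned above rather than being a tuned recommendation, the source path is a placeholder, and TFF must match your footage's field order.
Code:
# Minimal QTGMC call showing the grain/noise parameters discussed above.
# Requires VapourSynth with havsfunc and its dependencies installed.
import vapoursynth as vs
import havsfunc as haf
core = vs.core

clip = core.lsmas.LWLibavSource("capture.avi")  # placeholder interlaced source

deint = haf.QTGMC(
    clip,
    Preset="Slower",
    TFF=True,            # set to match the source's field order
    EZKeepGrain=0.75,    # keep some of the source grain (the 0.5-1.0 range above)
    GrainRestore=0.3,    # re-inject grain after temporal smoothing
    NoiseRestore=0.1,    # re-inject a little noise as well
)
deint.set_output()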
I would need to see the OG source. Trying to improve a downloaded YouTube video is far from easy, given how YT treats footage.
But I took on this FFX project after seeing someone else's output.
This is the video I set out to beat:
https://www.youtube.com/watch?v=fsDzmVA-cY0
I'll leave it up to you which is better -- beauty is in the eye of the beholder and all that -- but I think I more than hold my own. The author who created the video above did not help himself by shifting from 29.97 fps to 24 fps. It leaves the footage very jumpy in some places. -
OK everyone: what about an upscaling challenge/competition ?
The last contest on these pages also turned into a big fight: https://forum.videohelp.com/threads/403073-Why-is-Neat-video-the-best-video-denoiser
So sorry, I am not interested. -
@s-mp Right, but the second one is better than the VEAI-upscaled, IOHO
@JoelHruska We can provide the 720x480 / MPEG-2 (1:1 DVD rip); please PM.
@lollo Well, we do believe the key is to "anonymize" the results and let peers vote without knowing who did what (aka a blind test).
Anyway, the rules of engagement must be clear for all participants. -
Great thread, but I wish there were less arguing and more actual sharing of tips on usage and workflow.
There is absolutely no arguing that "AI" will eventually replace any mathematical scaling or interpolation algorithm. Just give it some time and it will be trivial to see the vast difference in quality.
For me Video Enhance AI from Topaz Labs was worth the money, since it was able to improve very low resolution videos from my Sony Cyber-shot.
The GUI is great and much better than any other tool I was able to find.
In regard to deinterlacing (@khobar), I found that using ffmpeg with bwdif produced much better results than using the "Dione Interlaced" model.
After that, I would use the other models to increase detail and/or resolution.
I expect and hope Topaz keeps improving the model, or develops one that actually does only deinterlacing.
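For anyone who wants to reproduce that step, a bwdif deinterlace is a single-filter ffmpeg call. The sketch below wraps it in Python purely for convenience; the file names and encoder settings are placeholders, not a recommendation.
Code:
# Sketch: deinterlace with bwdif in double-rate mode before handing the file to VEAI.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "capture.avi",
    "-vf", "bwdif=mode=send_field",   # one frame per field (e.g. 29.97i -> 59.94p)
    "-c:v", "libx264", "-crf", "16",  # reasonably transparent intermediate
    "-c:a", "copy",
    "deinterlaced.mp4",
], check=True)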
I am digitizing old analog tapes. But for those, the results produced by VEAI were not really convincing.
Most models fail if the target resolution is too high; they produce strange patterns or textures.
I was able to achieve the best results with a slight resolution increase, resulting in clear definition of edges, borders and lines.
But since I am still in the process of actually digitizing (and finding better tools than my batch files), I have yet to try all the settings.
A while ago VEAI switched from CUDA to DirectML, which allows it to support more graphics cards.
I am actually wondering how much the performance is affected (on NVIDIA cards) by the shader cores versus the Tensor cores.
Most benchmarks on those cards compare frame rates in games or scores in 3DMark, but I imagine those do not translate 1:1 to DirectML or specifically to VEAI.
Anyone got experience with this? I want to upgrade my GTX 1650 to either an RTX 3050 or 3060 and wonder if the difference is noticeable.
One of the developers of VEAI (suraj) actually pointed out that Tensor cores result in higher performance and that they are working on supporting the ever-increasing core counts of GPUs. -
Or just use AviSynth / VapourSynth QTGMC (and, eventually, Topaz VEAI later)
Or just use AviSynth / VapourSynth (only) for that kind of material -
I certainly never imagined having consumer CPUs with multiple cores when I started programming 30 years ago in C++.
Of course prediction is a gamble, but sometimes it is possible to make a good guess.
Thank you, I read about both quite often. But I am not really in the mood to dig into the scripting or whatever it requires to make it work.
I am happy to leave my compiler behind at the end of the day and instead use a working GUI to enjoy my hobby (currently, digitizing videos).
Maybe I will write my own, and wrap it around a library from ffmpeg. -
@Xood,
I've benchmarked some cards in TVEAI. GPUs like the RTX 3080 can get down to roughly 0.09s / 0.12s per frame when processing DVD-resolution video and upscaling it to 1280x960; the 6800 XT broadly matches this performance. Comparing purely by output resolution is generally misleading, because 2x models produce different output from 4x models. A 1280x960 version of an output upscaled at 2x in TVEAI may look better than a 4x output from the same model, because the 2x and 4x outputs are not identical and 4x is not simply 2x applied twice.
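To put those per-frame times in perspective, here is the back-of-the-envelope math (purely illustrative numbers):
Code:
# Rough throughput math for the per-frame times quoted above.
seconds_per_frame = 0.09           # best case quoted for an RTX 3080
fps = 1 / seconds_per_frame        # ~11 frames per second
episode_frames = 45 * 60 * 29.97   # a ~45-minute NTSC episode, ~80,900 frames
hours = episode_frames * seconds_per_frame / 3600
print(f"{fps:.1f} fps -> roughly {hours:.1f} hours per episode, per pass")  # ~2 hours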
If you want tips on upscaling, I suggest you read the articles I've written. While they are aimed at DS9, they are broadly applicable to most content. I'm working on a new story called "What AI Upscaling Can and Can't Do," which examines a much wider range of content. It will show step-by-step improvements from using AviSynth/VapourSynth, Resolve, and TVEAI plus other upscalers across a wide range of material, from old personal video to B&W TV to DVD transfers.
My general tips are to rely on AviSynth and VapourSynth for most pre-processing, to use TVEAI as a final finishing pass, and to plan to combine the outputs of multiple TVEAI renders together in an application like Resolve to take best advantage of the software. I typically use QTGMC for deinterlacing and either VIVTC or TIVTC for detelecine. I've also experimented with TMPGenc 7 a little and I ran a great many tests in Handbrake back in 2020 before deeming it unsuitable for what I wanted to do.
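As a concrete example of the detelecine step, a basic VIVTC pass in VapourSynth looks roughly like the sketch below; the source path and field order are assumptions about the clip, not universal settings.
Code:
# Basic inverse telecine with VIVTC: field matching followed by decimation.
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource("telecined_dvd.m2v")  # hypothetical 29.97i telecined source
matched = core.vivtc.VFM(clip, order=1)               # order=1 assumes top-field-first material
progressive = core.vivtc.VDecimate(matched)           # drop duplicates, 29.97 -> 23.976 fps
progressive.set_output()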
There are ways to avoid the artifacts and other issues you mention, but they require additional processing in other applications, along with model-output blending to wipe the patterns out. Resizing the output in Resolve prior to upscaling can also help TVEAI avoid throwing artifacts, even though this is exactly the opposite of the normal way you'd want to handle the content.
https://www.extremetech.com/extreme/333234-star-trek-deep-space-nine-and-voyager-ai-up...deo-enhance-ai
-
Thank you @JoelHruska for your reply and the link to your article. I was actually not aware of it.
Even hinting at DS9 did not trigger anything; it has been such a long time since I last watched it.
I can only imagine the effort it must have taken to try out all the countless variations and steps to reach your final result.
This is out of my league; I really want as few steps as possible. And at the same time, a balance between time and quality.
My GTX 1650 needs around 0.45s per frame for VHS content. Your RTX seems to be roughly four times as fast, which is pretty nice.
I was actually hoping it would be much faster. For comparison, my i7-8700 encodes YUV 4:2:2 to H.265 at only 0.7x speed, while my GTX flies through at 12x speed.
Since even your RTX 3080 does not get anywhere near that kind of speed, the much cheaper 3050 will serve me just fine.
(and I want its B-frame feature for H.265 encoding)
In regard to QTGMC, I never used it. It is not part of FFmpeg, which is presumably why I never stumbled upon it.
A few months ago I ran many tests, and for me bwdif was the winner in quality and speed. Only nnedi was able to produce somewhat better quality, but it was a whole lot slower.
If QTGMC is not much slower than bwdif, I would give it a try. Otherwise I will keep using bwdif.
Frankly, I was hoping TVEAI would be able to do it all. But in its current state, it is mainly usable for selected content.
For all other "memories," a good-quality conversion chain with ffmpeg (deinterlacing, border cropping, scaling, and encoding) is currently my tool of choice. -
And at the same time, a balance between time and quality.
In regard to QTGMC, I never used it.
TVEAI is not a fire-and-forget program yet. There's still a lot of prep work that goes into it. -
It's built into AviSynth and VapourSynth.
@s-mp Right, but the second one is better than the VEAI-upscaled, IOHO -
When people write stuff like this, you just know they've never used it.
I may have muddled the difference between installing a front-end like StaxRip or Hybrid and installing AviSynth itself, and I make no particular claim to expertise with the application, but the idea that I've never used it is farcical. -
I took a quick look at AviSynth; it seems to generate frames and feed them into any compatible application as a stream.
Its basic scripting language has access to some built-in functions, and additional ones can be added using plugins (.dll files).
QTGMC, for example, seems to use many different plugins to achieve its goal.
Whether you install them or copy them manually, you must have the necessary plugins in your AviSynth folders to use them.
The last AviSynth release is pretty old, and there is both an .org and an .nl website, which is confusing on its own.
And the use of DLLs makes it worse; there are a lot of potential security risks in using the whole toolchain.
(even if the source code is provided, I am sure most people will not look at it or actually compile it)
This is something I would only run in an isolated sandbox.
What makes it very interesting is the ability to open the script in other video tools without them having to know about its function or explicitly support it. -
AviSynth+ 3.7.2 is available here. Last release is from March 18, 2022.
https://github.com/AviSynth/AviSynthPlus/releases/
There's also VapourSynth if you want (many) of the same filters implemented in Python.
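To give a feel for what was described a few posts up, a complete VapourSynth script can be this short. The source path and the L-SMASH Works source filter are assumptions; you can then pipe the result to an encoder with vspipe (for example, vspipe -c y4m script.vpy - | ffmpeg -i - out.mkv, depending on your VapourSynth version).
Code:
# Minimal VapourSynth script: any tool that accepts .vpy scripts (or vspipe feeding
# ffmpeg) receives these frames without needing to know how they were produced.
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource("input.avi")   # placeholder source; needs L-SMASH Works
clip = core.std.Crop(clip, left=8, right=8)    # trivial example filter
clip.set_output()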
Hybrid is not meant for novice users, but it's quite powerful. https://www.selur.de/downloads
It can serve as a front-end for both AVS and VS. I also use StaxRip:
https://github.com/staxrip/staxrip
MeGUI is also popular with some folks.