Curious where negativity resides in the need for scientific peer review? In the need for transparency & clearly defined steps? In the need for ground truth references? In the expectation of objective (Or statistically significant subjective) ABX comparisons?
You confuse rigorous skepticism with negativity.
Scott
Results 91 to 120 of 576
Sometimes the world has too many fanboys/cheerleaders to let facts get in the way.
Their colors always show when they're shown the follies of their ways (claims refuted with facts), and they'll usually just double down on the wrong or false information.
For whatever reason, we're seeing that with this Topaz software.
I would be quite thrilled if a true "AI" program existed to repair video (size, framerate, etc). Or even if it was just a superior way to upsize compared to what is available in AviSynth. (FYI: a commercial app being better than AviSynth can happen. Mercalli is a great example.) But this Topaz software is neither. It's all bark, no bite. Marketing. Bullshit. All of their software is BS. For example, "JPEG to RAW converter" -- WTF? No. Hell no. Nonsense. This company is bamboozling amateurs with crapware for big bucks. Don't fall for that. Be smarter.
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
I agree about the company statement. They're messy with their releases, models, etc.
But when applying neural filter models, it's all about the samples, and the number of iterations, corrections, and training passes that went into the model.
(This approach shows up in a lot of places. For example, audio noise reduction: https://ffmpeg.org/ffmpeg-filters.html#arnndn)
But if these guys are putting their money into training networks, I'm willing to use them, if they prove empirically useful.
Marketing can shout BIG AI, but what matters is the quality of the neural models. -
Hi everyone, I have yet to read over the thread when I'm not coming off the tail end of a day with far too little sleep, but I've been experimenting with the trial of topaz and have also been looking for alternatives.
My GTX 1080 means that trying to upscale a 21-minute video from 720x480 to 4K with the Gaia CG preset takes 24 hours (14 hours for upscaling to 1080p). The original plan was to go from 4K back down to 1080p after a second pass, but it's going to take ages.
I feel like something that can leverage my CPU might actually be faster, though. I have a 5800X, and encoding a 44 GB 4K H.264 file down to a 4 GB H.265 file takes under 20 minutes, so I feel like AviSynth could provide impressive results in much less time. However, I have never looked into this before; I downloaded AviSynth and AvsPmod to get started but could use some help.
I'm basically looking to improve the dvd rips of an old cg cartoon as much as I can and topaz has been pretty impressive but there are some quirks I noticed here and there in the output.
I was hoping one of you guys could provide me with a basic profile that would convert the 480p video up to 1080, or maybe even 4k (though not sure that's necessarily a good choice coming from 480p) with similar, or even better results than topaz?
It sounds like fairly aggressive sharpening and smoothing would be good; however, I noticed that Topaz does seem to miss some blocky artifacts, and I do feel like it could have done better.
I'm hoping that with a starter script for this sort of content (as opposed to live action, which seems to be the common case) and some tutorials, I can get started and at least do a comparison. I'd really appreciate help from you guys to try and restore this old show.
Here's a link where you can get some samples, I have the full first episode of the show, as well as the intro alone that I simply sized up to 4k res, as well as the output from topaz for the intro alone. I do also have the full episode churned down to a more reasonable size but that's probably not really needed for this.
http://www.perpetualarchive.ca/Dragon%20booster%20Stuff/
It does seem like from some googling there are other plugins/filters you can download that would help further with certain things so maybe I'll have to get some of those, but I think that my source is a reasonably decent quality.
Also, resolution for further episodes will probably change, but I'm guessing when upscaling it will just pillar or letterbox as needed to fit inside the target res as much as possible? There are some that will be in 720x404 or so.
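For what it's worth, the fit-inside arithmetic is simple to sketch (plain Python; the function name is my own, and this ignores the non-square DVD pixels, which need a PAR correction first):

```python
def fit_inside(src_w, src_h, dst_w, dst_h):
    """Scale a source to fit entirely inside a target frame, preserving
    aspect ratio; the remainder becomes pillarbox/letterbox padding.
    Returns (scaled_w, scaled_h, pad_left_right, pad_top_bottom)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    w = int(src_w * scale) & ~1   # keep dimensions even for 4:2:0 chroma
    h = int(src_h * scale) & ~1
    return w, h, (dst_w - w) // 2, (dst_h - h) // 2

# A 720x404 episode fitted into 1920x1080 gets a thin letterbox:
print(fit_inside(720, 404, 1920, 1080))   # (1920, 1076, 0, 2)
```

So yes, whichever dimension hits the target first wins, and the other one gets padded.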
edit: also included the full DVD rip as the original MKV in case that's useful. Topaz added weird scan lines that don't exist in the MKV, but I bet AviSynth won't do that.
edit: Trying this script, but I can't figure out how/where I actually input the path to the original video in a way that it accepts; it keeps saying "not a clip".
https://www.l33tmeatwad.com/scripts/anime-upscale
-
-
Ahh, I see, so that other code is basically just defining the filter, like a separate loop in an Arduino sketch that you call. OK.
It seems to be working in AvsPmod, but I don't think it's actually outputting a file; it's not showing anything on the right side, only the left. MeGUI just crashes when I try to open that script file. Hmm.
It's also only running at 0.25 frames per second, and for some reason not even touching my CPU or GPU usage-wise. -
It sounds like something is wrong. With a script like:
Code:
Mpeg2Source("opening.d2v", Info=3)
TFM(d2v="opening.d2v")
TDecimate()
vInverse()
AnimeUpscale(widescreen=false)
Note, I'm not familiar with AnimeUpscale. I just used the defaults, aside from the widescreen=false option. I don't know how optimal those defaults are. You'll have to play around with the various settings.
-
I was playing around with AnimeUpscale. The script a few posts above opens properly in avspmod and VirtualDub but when I went to encode it with x264 cli or ffmpeg it just aborted with no results and no error report. So I started looking at what the program did and decided its best feature was the noise reduction with mDegrainSimple. So I borrowed from that and did some other cleanup and upscaling of the original DVD data. I also wanted to use Mpeg2Source's deringing filter which only works in 32 bit AviSynth. So I used MP_Pipeline to run 32 bit Mpeg2Source in 64 bit AviSynth.
Code:
function mDegrainSimple(clip, int "frames", int "blksize") # stolen from AnimeUpscale
{
    originalvideo = clip
    blksize = Default(blksize, 8)
    frames = Default(frames, 1)
    overlap = blksize/2
    super = MSuper(originalvideo, pel=2, sharp=1)
    backward_vec3 = (frames==3) ? MAnalyse(super, isb=true,  delta=3, blksize=blksize, overlap=overlap) : super
    backward_vec2 = (frames>=2) ? MAnalyse(super, isb=true,  delta=2, blksize=blksize, overlap=overlap) : super
    backward_vec1 =               MAnalyse(super, isb=true,  delta=1, blksize=blksize, overlap=overlap)
    forward_vec1  =               MAnalyse(super, isb=false, delta=1, blksize=blksize, overlap=overlap)
    forward_vec2  = (frames>=2) ? MAnalyse(super, isb=false, delta=2, blksize=blksize, overlap=overlap) : super
    forward_vec3  = (frames==3) ? MAnalyse(super, isb=false, delta=3, blksize=blksize, overlap=overlap) : super
    mvvideo = (frames==3) ? MDegrain3(originalvideo, super, backward_vec1, forward_vec1, backward_vec2, forward_vec2, backward_vec3, forward_vec3, thSAD=400) : \
              (frames==2) ? MDegrain2(originalvideo, super, backward_vec1, forward_vec1, backward_vec2, forward_vec2, thSAD=400) : \
                            MDegrain1(originalvideo, super, backward_vec1, forward_vec1, thSAD=400)
    return mvvideo
}

MP_Pipeline("""
### platform: win32
Mpeg2Source("opening.d2v", CPU2="ooooxx", Info=3)
### ###
""")

import("C:\Program Files (x86)\AviSynth+\plugins64+\deblock_qed.avs")
import("C:\Program Files (x86)\AviSynth+\plugins64+\deblock_qed_i.avs")
import("C:\Program Files (x86)\AviSynth+\plugins64+\Santiag.avs")

deblock_qed_i(quant1=30, quant2=30)
TFM(d2v="opening.d2v")
TDecimate()
Santiag()                           # antialias some bad edges
dehalo_alpha(rx=1.0, ry=2.0)        # reduce vertical oversharpening halos
#removegrain(2).flash3kyuu_deband() # didn't do a lot so I removed it
mDegrainSimple(frames=2, blksize=4) # heavy noise reduction

# stepwise sharp upscaling
aWarpSharp2(depth=2)
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=960, fheight=720)
aWarpSharp2(depth=2)
Sharpen(0.2)
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)
aWarpSharp2(depth=5)
Sharpen(0.5)

prefetch(8) # 8 threads
XOIIO's dumb upscale, cropped and downscaled to 1440x1080 (bicubic):
[Attachment 56425]
result of above script:
[Attachment 56426]
Encoding with x264 CLI at the slow preset ran around 16 fps (i9 9900K).
The m2v of the opening sequence is attached if anyone wants to play around with it.
-
@lordsmurf
I just wanted to tune in and thank you. I agree with everything you said in this thread 100%, down to the dog food analogy. It really speaks from my heart. People are so lazy and ignorant that they choose to consume rather than to learn. -
The problem here is the same as the other thread that just got reactivated, about deblurring a license plate. The commonality between both threads is that people see Hollywood movies where someone pushes a button and the image suddenly becomes full of details and looks like it was shot in IMAX from six feet away. They therefore think this is possible when, in fact, it is 100% hocus-pocus and has no grounding in reality. These things are in the same category as traveling faster than the speed of light. That particular movie science fiction simply cannot be done, not now, and not 1,000 years from now.
Adding detail might be possible some day (not now), although whether that detail matches the original reality is like expecting colorized movies to match the colors in the original scene. -
Just to educate the uneducated: there is a perfectly fine way to do blind deconvolution, and that is a known fact.
-
If you took jagabo's cleanup script, skipped the upscaling step, and instead upscaled with a NN model appropriate for that animation style (it looks like it's supposed to be clean, flat textures), you could do better.
This is an ESRGAN 4x model, downscaled to 1440x1080 (jagabo's screenshot looks like a 601/709 mismatch) -
Oops, yes. I used VirtualDub2 to make the screenshots -- it converted to RGB with Rec.601 instead of Rec.709. I should have done the RGB conversion myself.
-
Yes, to that extent it's not possible -- where you have 4 pixels and suddenly a full picture emerges.
Adding detail might be possible some day (not now), although whether that detail matches the original reality is like expecting colorized movies to match the colors in the original scene.
The short version is that it's a mixed bag, because you need appropriate models for a given situation. I'll post more on this later. But for 1) certain types of animation, anime, and cartoons, and 2) live action derived from a clean downscale, the NN scaling can be significantly better. You need specific trained models for specific situations -- that's the problem. And the training can take months on regular hardware. Also, you need proper pre- and post-processing. For people who think they can just use a single program and presto... forget it. There are common issues that arise from the NN scalers -
I have the same perspective on this: an enhanced object is going to be a made-up object, believe it or not.
There are apples and oranges in this thread. One side insists that the enhancing software mentioned above is a good one-click solution for making video perceptually better by pulling in some training models, sharpening, etc.
The other side is actually talking about restoration of video, which needs a clever human to select filters, settings, and the right order, and to do that for basically every video. That is heavy work requiring lots of knowledge and experience: knowing color spaces, knowing the software and how it works, comparing results, deciding when a filter is too much, and so on. One-click software cannot do that. And even if it starts to, the content will be something else, as johnmeyer mentioned. Folks forget that this universe is not deterministic the way it is in Minecraft. -
Hi everyone, I really need an (open source, preferably) tool -- or script -- to easily upsize my old 576i stuff to 1080p or, even better, to 2160p: any suggestions?
Last but not least, do you think these open source tools may be useful?
- Video2X: Machine learning video/GIF/image upscaling
- Waifu2x-Extension-GUI: Image, GIF and Video enlarger/upscaler(super-resolution) achieved with Waifu2x, SRMD, RealSR, Anime4K and ACNet.
- BasicSR: an open source image and video restoration toolbox based on PyTorch, such as super-resolution, denoise, deblurring, JPEG artifacts removal, etc.
- Video Super Resolution: A collection of state-of-the-art video or single-image super-resolution architectures, reimplemented in tensorflow.
- Fast-SRGAN: enable real time super resolution for upsampling low resolution videos.
- Anime4K: a set of open-source, high-quality real-time anime upscaling/denoising algorithms that can be implemented in any programming language.
- Zooming-Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution
Infos:
- Video-Super-Resolution list: collection of papers
EDIT: just found another interesting AS script @ Doom9...
Avisynth AiUpscale v1.1.0
-
Hullo everyone,
I think it might be worth updating this conversation now that the new version of Topaz has been released, which adds de-interlacing.
Here are some still examples to compare; all originals are interlaced PAL SD made into full 1080 HD 50p files.
LEFT: - QTGMC with NNEDI3 resizing via Vapoursynth in Hybrid with Denoiser & Sharpener.
RIGHT: - Topaz Enhance AI v.1.9.0 with the new Dione DV and TV presets.
From DV PAL source clip:
From Hi8 PAL source clip:
Here are the full size screengrabs and video files: https://1drv.ms/u/s!AlStBzht9Fsbgb9yaZ8cwfZBSxOb7g?e=XaRYOJ
I tried to choose frames which have movement and fine details in, as well as a variety of exposures, textures, colours and shapes.
Topaz obviously costs money, doesn't like passing through audio very much, and doesn't let you create a 1440x1080 output file from a 4:3 source; output must be full HD with pillarboxing. But it does appear to be faster, clearly much, much easier to set up, and on the strength of my initial tests it does just as well as, if not better than, what I've been able to do with many hours of fiddling with Hybrid and the many, many Vapoursynth/Avisynth options. If I get two clients who request upscaling, it's worth me buying Topaz -- I am using the 30-day free trial at the moment, and would recommend you give it a try before firing back.
Oh, I should just add that the Hybrid exports are ProRes and the Topaz ones are H.264 -- and they still look as good or better!
Just thought you might like to hear what I've found so far. And just to upset the AviSynth die-hards even more -- I did all the above on a Mac!
What do you think?
Best to you all,
John
Salisbury, Wilts. UK -
Did you add contrast sharpening? Because from the looks of it, Topaz did...
users currently on my ignore list: deadrats, Stears555 -
Some workflow issues with Topaz? It looks like interlaced chroma upsampling errors (probably a progressive instead of interlaced conversion to RGB before deinterlacing).
[Attachment 57131]
What was the procedure? Direct DV input ? -
Yes, direct from an HVR-1500 via Firewire into FCP.
I added a sharpener, so that probably accounts for that.
The chroma errors are pretty much the only thing that distinguishes the Topaz output from the Hybrid output, which, for the speed increase, I can live with. But if anyone has a suggestion on how to speed up my Hybrid/QTGMC/ffmpeg/Vapoursynth render time, I'd rather use that.
My aim overall is to get a set of adjustments and filters I can use universally to deinterlace and resize any client's SD clip to HD without needing to fiddle endlessly with Hybrid every time. That's the attraction of Topaz for me -- much simpler and quicker than AviSynth, which is more valuable overall to my business, even if it doesn't do quite as good a job. Clients won't care anyway. They pay a set rate which only allows for a certain amount of time spent on each tape, so whatever hurries that up means more efficiency, more tapes done, and more money in to buy more old video equipment...!
Any advice? I'm running QTGMC, ffmpeg & Vapoursynth through Hybrid on my 2018 MacBook Pro 15". I just want to speed it up, basically. -
It's probably "fixable" if the devs were made aware of it; you can report it. IIRC they are using ffmpeg/libav libraries on the back end. Upsample with interl=1 when deinterlacing.
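To make the interl=1 point concrete, here's a toy sketch (plain NumPy, not Topaz's actual pipeline, and the function names are mine) of what goes wrong when interlaced 4:2:0 chroma is vertically upsampled with a progressive assumption: chroma from the two fields gets blended together, which is exactly the kind of comb/ghost color in that screenshot.

```python
import numpy as np

# Toy interlaced chroma plane: rows alternate between the two fields.
# Field A chroma = 100 everywhere, field B chroma = 200 everywhere.
chroma = np.empty((4, 4))
chroma[0::2] = 100.0   # chroma rows belonging to the top field
chroma[1::2] = 200.0   # chroma rows belonging to the bottom field

def upsample_progressive(c):
    """Naive vertical 2x: duplicate rows, then average adjacent source
    rows for the in-between rows -- blends across fields on interlaced input!"""
    out = np.repeat(c, 2, axis=0)
    out[1:-1:2] = (c[:-1] + c[1:]) / 2
    return out

def upsample_interlaced(c):
    """Correct: upsample each field's chroma separately, then re-interleave."""
    top = upsample_progressive(c[0::2])
    bot = upsample_progressive(c[1::2])
    out = np.empty((c.shape[0] * 2, c.shape[1]))
    out[0::2] = top
    out[1::2] = bot
    return out

print(np.unique(upsample_progressive(chroma)))  # contains blended 150.0 values
print(np.unique(upsample_interlaced(chroma)))   # only the original 100.0 and 200.0
```

The progressive path invents chroma values that belong to neither field; the interlaced-aware path keeps each field's chroma pure.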
For the second question, you should start a different thread. But you can optimize the settings, perhaps use the GPU (or not -- sometimes it's slower for some steps). Don't use stuff like "placebo". Batch process. If you value speed, make some tradeoffs with the settings that bottleneck. -
Hybrid speed-up: post the Vapoursynth script you currently use in Hybrid; otherwise, how do you expect recommendations for speeding it up?
There is some local contrast enhancement in the Topaz example; it causes high-frequency luma flickering around objects. For example, it affects the area around the white shirt. This is a separate problem from the interlaced chroma.
Another issue is that the superbrights are clipped. All these neural net algorithms (including Topaz) work in RGB, and if you don't rescue or legalize the highlights before the RGB conversion, or use a full-range conversion to RGB, you clip the data.
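A back-of-the-envelope sketch of that clipping (plain Python; the helper name is mine): limited-range 8-bit luma nominally spans 16-235, but tape captures often carry "superbright" values above 235. The usual limited-to-full expansion maps 235 to 255 and hard-clips everything above it, so all superbright gradation collapses to a single value:

```python
def limited_to_full(y):
    """Expand limited-range (16-235) 8-bit luma to full range (0-255),
    clipping anything outside -- the usual studio-to-PC-RGB conversion."""
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))

# 235 maps to peak white; 240 and 254 both clip to the same 255,
# so the detail between them is gone for good after conversion.
print([limited_to_full(y) for y in (16, 235, 240, 254)])  # [0, 255, 255, 255]
```

Lowering the highlights below 235 before the RGB conversion (or doing a full-range conversion) avoids the loss.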
There is temporal aliasing caused by the NNEDI3 upscaling in the QTGMC example (you could have smoothed it over with QTGMC in progressive mode, but that would be even slower). Some elements, like the front tractor grille, are magically "fixed" and straightened by Topaz, when I bet the DV source had aliasing and moire patterns to begin with. But there are the same luma flicker issues in the Topaz clip, in the top 1/4 of the frame around the sky. For QTGMC there is some at the very edges, but you can use border=true to process the edge pixels. -
Um, I'm not using a script - just whatever is already loaded into Hybrid. I check a box.
The boxes I have checked are:
- Crop/Resize to PAR 1:1 square pixel 1440x1080
- Filtering - De-interlace/Telecine - QTGMC Auto - Bob - Final temporal smoothing: 2 - Superfast, all else defaults
- Filtering - Filtering - Noise, Chroma noise strength: 1, temporal, all else defaults
- Filtering - Vapoursynth - Denoise - VagueDenoiser 85%, all else defaults
- Filtering - Vapoursynth - Sharpener - FineSharp, mode 1, Sharpen 2.50, all else defaults
- Filtering - Vapoursynth - Resizer: NNEDI3, all at defaults
That's it. I've not modified any other setting or enabled anything else, mainly because anything else I tried added time, and most of the videos I get in aren't actually that bad so much more processing isn't needed. Colour & whatnot I can do in Final Cut before final output from the timeline.
Do you need to know anything else to give me some pointers?
I really don't want to have to learn and try a million options just to do the basics; I just don't have the time or motivation, and will never earn enough from this for it to be worth it. Again, that's what drew me to the Topaz thing.
Any idiot proof suggestions welcome! -
- Simply posting the script content (Filtering->Vapoursynth->Vapoursynth script view) is probably more useful for most users.
- QTGMC & NNEDI3 both have a GPU option which should speed things up (assuming it works on your system and on Mac in general; I've only got macOS running in a VM without hardware acceleration).
- Instead of FineSharp I would recommend using CAS; FineSharp will add lots of artifacts with that setting.
- Mixing FFmpeg and Vapoursynth filters is usually a bad idea; better to use 'AddGrain' from Vapoursynth.
Any idiot proof suggestions welcome!
Cu Selur
Thanks Selur!
Those are some useful tricks to try. I didn't even realise you could view the scripts within Hybrid. While I'm looking at it, you may as well cast your eyes over it in case anything else leaps out that I could do to improve the speed of my renders:
# Imports
import os
import sys
import vapoursynth as vs
core = vs.get_core()
# Import scripts folder
scriptPath = '/Applications/Hybrid.app/Contents/MacOS/vsscripts'
sys.path.append(os.path.abspath(scriptPath))
# Import scripts
import edi_rpow2
import havsfunc
# loading source: /Users/Fryfilm/Movies/VIDEO RESTORATION TESTS/Hi8 to HD/Tractors via JVC FW FCP7 50i to DV .mov
# color sampling YUV420P8@8, matrix:470bg, scantyp: bottom field first
# luminance scale TV
# resolution: 720x576
# frame rate: 50 fps
# input color space: YUV420P8, bit depth: 8, resolution: 720x576, fps: 25
# Loading /Users/Fryfilm/Movies/VIDEO RESTORATION TESTS/Hi8 to HD/Tractors via JVC FW FCP7 50i to DV .mov using LibavSMASHSource
clip = core.lsmas.LibavSMASHSource(source="/Users/Fryfilm/Movies/VIDEO RESTORATION TESTS/Hi8 to HD/Tractors via JVC FW FCP7 50i to DV .mov")
# making sure input color matrix is set as 470bg
clip = core.resize.Point(clip, matrix_in_s="470bg",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# setting field order to what QTGMC should assume
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=1)
# Deinterlacing using QTGMC
clip = havsfunc.QTGMC(Input=clip, Preset="Super Fast", TFF=True, InputType=0, TR2=2, Sharpness=1.0, SourceMatch=0, Lossless=0) # new fps: 50
# make sure content is perceived as frame based
clip = core.std.SetFieldBased(clip, 0)
# denoising using VagueDenoiser
clip = core.vd.VagueDenoiser(clip=clip)
# contrast sharpening using CAS
clip = core.cas.CAS(clip=clip, sharpness=0.010)
# resizing using NNEDI3CL
clip = edi_rpow2.nnedi3cl_rpow2(clip=clip, rfactor=2, qual=1, pscrn=2)
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1440, h=1080, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: YUV420P16 to YUV422P10 for ProResModel (i422)
clip = core.resize.Bicubic(clip=clip, format=vs.YUV422P10, range_s="limited")
# Output
clip.set_output()
Thank you all for your input! -
Thanks for taking the time to post those comparisons.
The Topaz output looks good, but I don't see anything that I haven't done myself with various AviSynth scripts. The emergence of apparent detail is similar, although not as good as what I posted earlier in this thread. For the result I posted, I simply used MVTools2 denoising and various sharpening filters:
https://forum.videohelp.com/threads/399360-so-where-s-all-the-Topaz-Video-Enhance-AI-d...e3#post2603010
-
CAS -> I would use a higher value.
With QTGMC, try the GPU and another preset; it's probably not slower, since the GPU can be partially used.
Also try whether it helps to move CAS after the resizer.
Denoising -> KNLMeansCL or similar might be interesting.
Great, thank you all!
I will try out those suggestions.
While I'm testing, it would be nice not to have to process a whole clip just to see the results. Is there a straightforward way to limit the output frames or trim the base clip?
I guess this means importing a script into Vapoursynth, so is there one that I can make work with its GUI? If loading that is a process I only need to do once, I can follow instructions well; I just want to enable easy trimming for future use, if possible.
There must be a thread already about this but I keep getting lost, so if someone could point me in the right direction I'd be most grateful.
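For the record, VapourSynth clips support Python slice syntax, so trimming for test encodes is a one-liner added just before clip.set_output() in the Hybrid-generated script. A minimal sketch (the helper is my own, just to convert seconds to frame numbers; 50 fps is the post-QTGMC rate from the script above):

```python
def time_range_to_frames(start_s, end_s, fps):
    """Convert start/end times in seconds to a (start, end) frame range
    for a constant-frame-rate clip."""
    return int(start_s * fps), int(end_s * fps)

# Preview only seconds 10-20 of a 50 fps clip; in the VapourSynth
# script you would then write:  clip = clip[start:end]
start, end = time_range_to_frames(10, 20, 50)
print(start, end)  # 500 1000
```

Negative indices and steps work too (e.g. clip[::2] keeps every other frame), just like Python lists.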
Best,
John