VideoHelp Forum
  1. Member tugatomsk
    Join Date
    Sep 2020
    Location
    Portugal
    While looking for ways to directly use waifu2x-caffe for video sequences without the hassle of having to create image sequences and then recombining them, I found this:

    https://github.com/k4yt3x/video2x/releases/tag/4.8.1

    It's called video2x-GUI and it's a user interface for several kinds of upscalers besides waifu2x-caffe, such as the Vulkan-based ones and others.

    However, since I don't have a dedicated graphics card, I had to try waifu2x on a short sequence using CPU only. I was disappointed to find that there was no improvement in image quality compared to a simple resize in ACDSee Pro...

    I read somewhere that using waifu2x-caffe with a GPU is the only way to obtain great results. If that's the case, why does this GUI enable the CPU option?

    I'm beginning to wonder if this GUI is any good at all...
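    For anyone who does want to do the manual round-trip described above (extract an image sequence, upscale it, recombine), a minimal sketch follows. All filenames, the frame rate, and the waifu2x-caffe CUI flags are assumptions, not verified against any particular version; the snippet only builds and prints the command lines so you can check and run them yourself if the tools are installed:

    ```python
    # Sketch of the frame-extract / upscale / recombine pipeline that video2x
    # automates. Paths, frame rate, and waifu2x-caffe flags are hypothetical.
    extract = ["ffmpeg", "-i", "input.mp4", "frames/%06d.png"]
    upscale = ["waifu2x-caffe-cui", "-i", "frames", "-o", "upscaled",
               "-s", "2", "-p", "cpu"]          # 2x scale, CPU mode
    combine = ["ffmpeg", "-framerate", "24", "-i", "upscaled/%06d.png",
               "-i", "input.mp4", "-map", "0:v", "-map", "1:a?",
               "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
               "output.mp4"]                    # reuse the source audio if present

    # Print the commands instead of running them, so nothing executes blindly.
    for cmd in (extract, upscale, combine):
        print(" ".join(cmd))
    ```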
    Image Attached Thumbnails: video2x GUI.jpg, video2x GUI II.jpg

  2. Can't say anything about that; I don't think I ever tried it.
    There are Waifu2x VapourSynth filters, so any GUI using VapourSynth can potentially use Waifu2x.
    (btw. there are tons of other (newer) ML-based resizers that often yield better results than Waifu2x)

    You probably won't need a dedicated graphics card if your onboard graphics drivers support OpenCL.
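    To illustrate the VapourSynth route mentioned above: a waifu2x call in a .vpy script typically looks something like the sketch below. The `w2xc` namespace and its arguments follow one common waifu2x-caffe plugin and may well differ for whichever plugin you have, so treat every name here as an assumption; the snippet just prints the script text for pasting into a .vpy file:

    ```python
    # Hypothetical .vpy script body for a waifu2x VapourSynth plugin.
    # Plugin namespace (w2xc) and arguments are assumptions; check your
    # plugin's documentation before using.
    VPY = """\
    import vapoursynth as vs
    core = vs.core
    clip = core.ffms2.Source("input.mp4")             # load the video
    clip = core.resize.Bicubic(clip, format=vs.RGBS)  # waifu2x wants RGB float
    clip = core.w2xc.Waifu2x(clip, noise=1, scale=2)  # denoise level 1, 2x upscale
    clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
    clip.set_output()
    """
    print(VPY)
    ```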

    Cu Selur
    users currently on my ignore list: deadrats, Stears555
  3. Member
    Join Date
    Aug 2013
    Location
    Central Germany
    There is an older one for AviSynth as well: Waifu2xAvisynth by sunnyone on github / homepage in Japanese

    And a slightly newer one, with a somewhat more elaborate setup to run in AviSynth, here on this board: waifu2x by nagadomi
  4. Member tugatomsk
    Join Date
    Sep 2020
    Location
    Portugal
    Originally Posted by Selur
    (btw. there are tons of other (newer) ML-based resizers that often yield better results than Waifu2x)
    I've just read about Topaz Gigapixel's AI Upscaler. Is it better than or on par with the ones you've mentioned, the ones better than waifu2x?
  5. Depends on the content; from what I saw, Topaz Gigapixel's AI Upscaler might be better or worse for your content depending on the model selected.
    -> you'll have to test to know for sure.
    In general, Topaz Gigapixel's AI Upscaler is basically a frontend for a bunch of ML techniques. So you could probably do the same (probably in a more controlled way) with free open-source stuff, but surely not as easily and comfortably.

    Cu Selur
  6. There are way better upscaling filters than waifu2x
  7. Member tugatomsk
    Join Date
    Sep 2020
    Location
    Portugal
    Originally Posted by s-mp
    There are way better upscaling filters than waifu2x
    Such as?
  8. A bit off-topic, but maybe someone reading this knows:
    IIRC waifu2x, and most GAN-based stuff, currently only works on spatial data, without looking at temporal components, right?
    The only approaches I remember taking the temporal nature of video into account are TecoGAN and potentially Topaz VEAI (I guess).
    -> Are there any other noteworthy approaches which take the temporal nature of video into account?

    Cu Selur
  9. I'm pretty sure that some GAN-based stuff looks at temporal components. I remember seeing a filter online that turns dashcam footage into a video that looks like it's from a video game; they mentioned using temporal information because a single-frame GAN filter makes each frame different, creating seizure-inducing results.
  10. Okay, will have to read up on that.
    Since VSGAN is a Single Image Super-Resolution Generative Adversarial Network (GAN), IIRC, it would not take temporal effects into account. (The training of a model might, though I don't think traditional GAN training does, but applying the model does not, as far as I know.)

    Cu Selur

    Ps.: created an issue entry about it over at the VSGAN github (https://github.com/rlaphoenix/VSGAN/issues/23); rlaphoenix should easily be able to shed some light on this.
    Last edited by Selur; 19th Jun 2022 at 04:25.
  11. It would depend on the architecture used. EGVSR is the only one that really does anything like that, but it is a hell of a lot harder and slower to train. It is supported in VSGAN though.
    So yes, since none of the models from https://upscale.wiki/wiki/Model_Database are EGVSR-based, basically everything is spatial-only (no temporal component in training or application). EGVSR-trained models are rare.


