VideoHelp Forum
  1. chaiNNer supports .pth
    Okay... ah, you probably need to install some of the stuff from the dependency manager...
  2. That's by the way what I mean by over-lit. RealCUGAN (and a lot of other "sharp" models) makes white things even whiter, so too bright:

    Here is a comparison (look at the white bandages): https://imgsli.com/MTc1MTAy

    If someone needs the zipped pictures: https://uploadnow.io/f/CW2mKkW
  3. Here is an example taken from one of your earlier videos

    This example uses the pro 2x model, no denoise (-1)

    Waifu2x-Extension-GUI-v3.100.01-Win64 had the same output as realcugan-ncnn-vulkan-20220728-windows; you can verify yourself with the GUI (I only uploaded one version). I did not test Hybrid, but it should be the same because they all use realcugan-ncnn-vulkan.exe.

    Code:
    "realcugan-ncnn-vulkan" -i kurz2_006.png -m models-pro -o Real-CUGAN_ncnn_models-pro-up2x-no-denoise.png
    syncgap 1 (slowest, most accurate)
    Code:
    "realcugan-ncnn-vulkan" -i kurz2_006.png -c 1 -m models-pro -o Real-CUGAN_ncnn_models-pro-up2x-no-denoise_syncgap1.png

    The official Real-CUGAN and vsmlrt have similar results to each other (I only uploaded one version), but different from NCNN, at least on this pro 2x model, no denoise.

    Some models have differences between the implementations, but some do not
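    For anyone who wants to reproduce the vsmlrt side of this comparison in a VapourSynth script, a rough sketch of the call is below. The CUGAN wrapper and its parameter names are written from memory (in particular, treat version=2 selecting the "pro" models as an assumption) and the source filename is a placeholder, so check vsmlrt.py in your vs-mlrt install for the exact signature.
    Code:
    # Rough VapourSynth sketch of the vsmlrt path used for this comparison.
    # Parameter names are from memory (assumptions); verify against your vsmlrt.py.
    import vapoursynth as vs
    from vsmlrt import CUGAN, Backend

    core = vs.core

    clip = core.lsmas.LWLibavSource(r"source.mkv")  # placeholder source
    clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")  # CUGAN expects 32-bit RGB

    # noise=-1 = no denoise, scale=2; version=2 is assumed to select the "pro" models
    clip = CUGAN(clip, noise=-1, scale=2, version=2, backend=Backend.ORT_CUDA())

    clip = core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="709")
    clip.set_output()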
  4. Okay, seems like the conversion with chaiNNer doesn't work.
    I can convert the models to NCNN or ONNX, but from the looks of it one can't convert those to PyTorch.
    -> so I'm hoping that VSGAN will support those 'alternative' Compact models at some point.

    Cu Selur
  5. How can I see which models are compatible with Hybrid?
    ...
    Last edited by lollo; 4th May 2023 at 07:35.
  7. The VSGAN documentation (https://vsgan.phoeniix.dev/en/stable/) states:
    Supported Models

    ESRGAN

    Enhanced Super-Resolution Generative Adversarial Networks. Supports both old and new-arch models of any scale.
    ESRGAN+

    Further Improving Enhanced Super-Resolution Generative Adversarial Network.
    Real-ESRGAN

    Training Real-World Blind Super-Resolution with Pure Synthetic Data. Supports 2x and 1x models if they used pixel-shuffle. Includes support for Real-ESRGAN v2, the arch mainly intended as an ultra-light model for fast video inference. However, it’s not a video network.
    A-ESRGAN

    Training Real-World Blind Super-Resolution with Attention U-Net Discriminators.
    EGVSR

    Real-Time Super-Resolution System of 4K-Video Based on Deep Learning.
    no further info. -> test it out
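    For testing, loading one of these models in a VapourSynth script follows the chained API from the docs; here is a minimal sketch (model path and source file are placeholders, and the exact call chain may differ between VSGAN versions):
    Code:
    # Minimal VSGAN sketch based on the chained API shown in the docs.
    # Model path and source are placeholders; verify against your VSGAN version.
    import vapoursynth as vs
    from vsgan import ESRGAN

    core = vs.core

    clip = core.lsmas.LWLibavSource(r"input.mkv")
    clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")  # VSGAN works on RGB

    clip = ESRGAN(clip, device="cuda") \
        .load(r"2x_LD-Anime_Compact_330k_net_g.pth") \
        .apply() \
        .clip

    clip = core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="709")
    clip.set_output()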
    Last edited by Selur; 1st May 2023 at 11:51.
  8. I noticed that all these VSGAN Compact models are very slow if the input resolution is 720p or higher. Can you also reproduce that?

    For example, with LD-Anime-Compact in Hybrid I get ~1-1.5 FPS when I use 720p as input. When I use 576p as input (like earlier in this thread), I get 10-15 FPS.

    Can anybody reproduce that?

    Anyway, I also noticed: when I upscale a 1080p input to 1080p with LD-Anime Compact (so no size change, but I checked "resize"), it barely improves the video at all (only a little bit). When I upscale exactly the same video but from a 720p input to 1080p, the image is way better than before. (The source exists in 1080p and 720p, but the series was not produced in 1080p; the source looks the same on both inputs.)

    Can anybody tell me why that happens? When I use RealCUGAN and do not increase the input resolution, I still get a much better result.

    How can I "downscale" a source to its "real" resolution? I mean the resolution of the original production, since the Blu-ray source is 1080p.

    I tried just lowering the resolution with HandBrake, but then it looks worse. I mean, when I lower the 1080p source to 720p, it looks worse than the actual 720p source. Does anyone have advice? How do you lower the resolution?
  9. How can I "downscale" a source to its "real" resolution? I mean the resolution of the original production, since the Blu-ray source is 1080p.
    You can use descale (https://github.com/Irrational-Encoding-Wizardry/descale) if you know the native resolution. (not integrated into Hybrid)
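    If you want to try it outside Hybrid, a minimal descale sketch in a VapourSynth script looks roughly like the following. The native resolution (1280x720) and the kernel (bilinear) are pure assumptions for illustration; you need to find the real ones first, e.g. with the getnative tool mentioned in the next post.
    Code:
    # Minimal descale sketch (not integrated into Hybrid).
    # Native resolution and kernel below are assumptions; determine them first.
    import vapoursynth as vs

    core = vs.core

    clip = core.lsmas.LWLibavSource(r"bluray_1080p.mkv")  # placeholder source
    y = core.std.ShufflePlanes(clip, planes=0, colorfamily=vs.GRAY)  # descale the luma only
    y = core.resize.Point(y, format=vs.GRAYS)                        # descale wants 32-bit float

    native = core.descale.Debilinear(y, 1280, 720)
    native.set_output()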

    Can anybody reproduce that?
    Applying LD-Anime-Compact '2x_LD-Anime_Compact_330k_net_g.pth' on a 1280x720 source (resizing to 2560x1440) I get ~1.5 fps.
    Scaling the source to 640x360 and then applying '2x_LD-Anime_Compact_330k_net_g.pth' to get back to 1280x720 I get ~38 fps.
    Tip: If you use higher resolution sources, disable 32bit.
    When I use a 1280x720 source (resizing to 2560x1440) and disable 32bit, I get 8.54 fps.
    For SD content, 32bit is usually faster than 16bit in VSGAN. (No clue why this is the case, but it is usually what happens.)

    When I upscale a 1080p input to 1080p with LD-Anime Compact (so no size change, but I checked "resize"), it barely improves the video at all (only a little bit). When I upscale exactly the same video but from a 720p input to 1080p, the image is way better than before. (The source exists in 1080p and 720p, but the series was not produced in 1080p; the source looks the same on both inputs.)
    If you use LD-Anime Compact, it always upscales by 2x or 4x (depending on the model's X). If you then stay at your current resolution, the output is downscaled from that achieved resolution back down to the target resolution, and depending on the chosen downscaler/resizer you will get different results.


    Cu Selur
  10. There is a Python script, getnative.py, that attempts to calculate the "ideal" (native) resolution:
    https://silentaperture.gitlab.io/mdbook-guide/filtering/descaling.html
    https://github.com/Infiziert90/getnative

    Some fan forums will already know the answer for specific BD releases, so check/Google.
  11. Ahh, nice tip with the 16-bit. I will try that; then maybe I don't have to "downscale" the source. I will try it out.

    Btw, I now managed to use AniFilm Compact (both) in Hybrid. I had first used the wrong file; I had to choose the "Real-ESRGAN compatible" .pth.

    I did some more tests and what I can say so far is:

    - AniFilm Compact and LD-Anime Compact do not change the brightness. But AniFilm changes the color a bit (or it is a lighting thing; it just looks like a different color), and it has slightly more artefacts (4x is better). So in the end, LD-Anime Compact is better (but slightly less sharp).

    - Futsuu Compact is somewhat similar to RealCUGAN, but it does not produce as many unwanted artefacts; it still brightens white too much, though.
    - DigitalAnime SuperUltraCompact is something between LD-Anime Compact and Futsuu. It brightens a bit, but it is also sharper. It does not change the original look as much as Futsuu does (RealCUGAN is even worse), but still more than LD-Anime Compact (which actually does not change the look). It's like RealCUGAN with less art-style change, but also less sharp, I think. Still, I think it is a nice model to try.

    Also, I think LD-Anime Compact is best on older animes and on animes that were originally produced below 720p. For some later Naruto Shippuden episodes, for example, I think DigitalAnime SuperUltraCompact is also a very good option. Generally, if your source is already not too unsharp, LD-Anime Compact will not sharpen the picture a lot. Then you may be better off with Futsuu or DigitalAnime, because the source already has good enough quality that these two do not change it as much as they do on low-quality sources.

    - I found a way to deal with LD-Anime Compact's "not so black" lines. In Hybrid under "Lines" you will find Darkening and Misc. Under Misc there is "thin A4k", which thins the lines (but without reducing the volume of objects like aWarpSharp2 does). And under Darkening there is darkenA4k, which darkens the lines.

    With these two filters you can actually get exactly the line style you want.

    Now I only need a filter for the opposite of darkening.

    Funny thing is: I think blacker lines make things look sharper, because a line that is not fully black looks blurrier than a fully black line.

    @poisondeathray: that's a nice feature. It would actually save me a lot of time (and money, because of the kWh used). I will look into that too (and maybe then decide it looks too complicated xD)
    Last edited by Platos; 2nd May 2023 at 12:50.
  12. Originally Posted by lollo View Post
    ...
    Here you have even more pictures of different models. And no problem, it's good when others benefit from it.

    I had to decide which model I will use from now on, and therefore I needed a decent way to compare, and for me that is a slider. Since I had already taken all the pictures, I can share them here as well. I captured 9 scenes, with 15 different combinations of models each time.

    I used the following models:

    - BoubbleAnimeScale Compact (not in picture)
    - DigitalFilm Compact (not in picture)
    - DigitalFilm SuperUltraCompact (Called SUC in pictures).
    - LD-Anime Compact
    - FutsuuAnime Compact
    - RealCUGAN -1n model se, very rough sync gap
    - RealESRGAN AnimeV3

    And a combination of each of them with LD-Anime Compact (LD-Anime Compact is always used first).
    I did not upload any picture of DigitalFilm Compact (the non-SuperUltra one), because imgsli could not handle that many uploads. But it's not needed, because it's only a tiny bit better than the SuperUltra version and way slower.
    BoubbleAnimeScale (on its own) is also not there because it just looks terrible. It has even thicker lines than the combined one, which is in the pictures.

    So, here are the pictures:

    Picture 1: https://imgsli.com/MTc1ODc5/8/5
    Picture 2: https://imgsli.com/MTc1ODgy/9/4
    Picture 3: https://imgsli.com/MTc1ODgx/1/4
    Picture 4: https://imgsli.com/MTc1ODgz/11/4
    Picture 5: https://imgsli.com/MTc1ODg1
    Picture 6: https://imgsli.com/MTc1ODg2/5/4
    Picture 7: https://imgsli.com/MTc1ODg3/4/5
    Picture 8: https://imgsli.com/MTc1ODg4/8/3
    Picture 9: https://imgsli.com/MTc1ODg5/4/9

    I hope it's useful for people who are interested in upscaling animes.

    - In my opinion, RealCUGAN and RealESRGAN absolutely destroy the original look compared to the others. And BoubbleAnimeScale is about the same in combination with LD-Anime Compact (and, as I said, even worse without it).

    I think DigitalFilm SUC is okay, but it tends to misinterpret things a bit and it produces thicker lines than I want. Also, they sometimes just look strange (I don't know why, I can't describe it). FutsuuAnime Compact does a much better job, but it still misinterprets things at some points (though it has darker and thinner lines). And then there is LD-Anime Compact, which I still like most of all. It draws slightly thicker lines than Futsuu and less black lines, but it is the best model for being accurate. It misinterprets nearly nothing. The downside is that it is not that sharp, but as I said, it mostly only improves the picture. That's the difference between all these models.

    And that's why I combine them all with LD-Anime Compact; they always benefit from it.

    LD-Anime Compact combined with DigitalFilm Compact does a decent job. It does not change the original look much further, but it tends to draw the lines strangely; they just look odd to my eye. And the Boubble model is nice compared to RealCUGAN and RealESRGAN, but not if you want to keep the original look.

    In the end I like Futsuu and DigitalFilm SUC combined with LD-Anime Compact best (and LD-Anime Compact applied twice). But with the DigitalFilm version I don't like the lines as much as with the double LD-Anime Compact. Most of all I like LD-Anime Compact combined with FutsuuAnime Compact. It looks very similar to the double LD-Anime Compact, but I prefer it because it looks sharper and I just like it more.

    But in the end you have to decide for yourself. If you want, you can tell me your best choice.

    PS: I noticed that you should not increase the resolution further with the second model. So when you upscale (for example) from 720p to 1080p with the first model, do not push the resolution higher with the second model, because it will often look worse/blurrier (I don't know why). Instead you can use a higher resolution with the first model and then keep that same resolution for the second model. But that combination can also look worse, because I guess upscaling too far is not always good.

    I have an example of 1080p vs 1440p:

    https://imgsli.com/MTc1ODkw

    You have to zoom in to see it. The eyebrow looks better at 1440p, but if you zoom in on his left hand I personally think it looks sharper in the 1080p version, because the model tends to make lines thicker. So lines that were sharp before will become thicker, but lines that were a bit pixelated will become thinner and sharper. Lines that are blurry, I don't know... But that's only my interpretation; I compared this only on this picture.

    Edit: And my source was not the same as in the earlier comparison post; it's a different one now.
    Last edited by Platos; 4th May 2023 at 18:03.
    Thanks!

    I share your conclusions: RealCUGAN and RealESRGAN are spectacular at first impression, but they alter the original art too much. I also prefer a less invasive improvement (if possible).

    It looks very similar to the double LD-Anime Compact, but I prefer it because it looks sharper and I just like it more.
    Yes, we are sensitive to sharpness.

    You did a great job in providing results and facts!

    edit: I think I deleted my previous post, which you linked, by mistake. For future readers: I wrote there that I was following Platos' experiments and the great contributions from Selur and poisondeathray with great interest.
    Last edited by lollo; 4th May 2023 at 18:08.
  14. IIRC one can use chaiNNer to combine/interpolate two models. Assuming this works with your models of choice, you could speed up the process by combining them.
  15. Originally Posted by Selur View Post
    IIRC one can use chaiNNer to combine/interpolate two models. Assuming this works with your models of choice, you could speed up the process by combining them.
    Hmm, can you give me some input about that?

    Does this "manage" two models calculating one after the other or dies this like "merge" the result of to models or does this merge the models itselfe ?

    Because now that you mention it... I calculated yesterday how fast (or slow) I am: I get about 9 FPS going from 720p to 1080p with LD-Anime Compact, and then only 4.5 FPS from 1080p to 1080p in the second run with FutsuuAnime Compact. Combined, that is like using a single model that runs at only 3 FPS (the per-frame times add up: 1/9 s + 1/4.5 s = 1/3 s per frame). That means 8 times slower than realtime, which is quite slow.

    So maybe I would really profit from this. Could you give me some more input? That would be nice.
    Last edited by Platos; 5th May 2023 at 05:38.
  16. I have never done this, but what you can try is to:
    1. Download the latest release from https://github.com/chaiNNer-org/chaiNNer
    2. install it
    3. start it (it will then download some stuff)
    4. once it's started, click 'manage dependencies' in the upper right corner and install them all
    5. drag&drop the two .pth files into it (one after the other)
    6. drag&drop a Pytorch->Interpolate Models element onto the canvas and connect it with the two source models (configure the weight balance between the models)
    7. add a Pytorch->Save Model element, connect and configure it
    8. hit the play button
    9. try the new model.

    No clue whether the resulting model does what you want or how it performs speed-wise, but it might be worth a try.

    Does this "manage" two models calculating one after the other or dies this like "merge" the result of to models or does this merge the models itselfe ?
    the later

    Cu Selur
    Last edited by Selur; 5th May 2023 at 05:51.
  17. Ah wow, that seems easy. Thank you, I thought that would be another command-line horror story for me xD

    It has a GUI, that's good. And it seems to be a nice GUI.
  18. I have a question: the connection in "Interpolate Models" in the lower right corner, Model (orange) with Amount A (blue). What does that mean? Why do I have to do that? Edit: lol, it's just a line that goes from Model (Interpolate Models) to Model (Save Model), and the line runs underneath. haha

    And it seems I can upscale images directly in the GUI, probably for testing I guess. This GUI can actually do way more than just merging models into a hybrid model.

    Edit two: I believe the output folder has to be the same as the input folder of the two models, because otherwise I got an error.
    Last edited by Platos; 5th May 2023 at 06:22.
  19. This GUI can actually do way more than just merging models into a hybrid model.
    Correct, nobody said it can't; read the description on the GitHub page...
    So you could, probably, export your clip into an image sequence and then play around with it.

    What does that mean? Why do I have to do that?
    The way I see it, "Interpolate Models" is similar to Merge in Avisynth/Vapoursynth, which is why I wrote that I'm not sure this will do what you want...
    It does not do the same as when you apply model A and then model B on the output of model A, but it creates a new model that balances between the two models.
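    To make that concrete, a rough sketch of what such a model interpolation boils down to under the hood is below. This is not chaiNNer's actual code; it assumes both .pth files are the same architecture/scale and share identical state-dict keys, and the second filename is hypothetical.
    Code:
    # Rough sketch of a 50/50 model interpolation; not chaiNNer's actual code.
    # Assumes both .pth files share the same architecture, scale and state-dict keys.
    # (Some models nest their weights under a 'params' key; unwrap that first if so.)
    import torch

    alpha = 0.5  # weight of model A

    state_a = torch.load(r"2x_LD-Anime_Compact_330k_net_g.pth", map_location="cpu")
    state_b = torch.load(r"2x_FutsuuAnime_Compact.pth", map_location="cpu")  # hypothetical filename

    merged = {k: alpha * state_a[k] + (1.0 - alpha) * state_b[k] for k in state_a}
    torch.save(merged, r"LD_Futsuu_50_50_interp.pth")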

    Cu Selur
  20. Ah, sadly I got an error in Hybrid. I used this model (I just did a 50/50 hybrid of those two models).

    Can you see the problem in this debug file?
  21. No, the debug output only indicates that something with the decoding of the script caused issues... If you want a proper error, check the Vapoursynth Preview.
    ->
    Code:
    sgan.load(model)
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsgan\networks\esrgan.py", line 46, in load
        model = arch(state)
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsgan\archs\ESRGAN.py", line 72, in __init__
        self.num_blocks = self.get_num_blocks()
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsgan\archs\ESRGAN.py", line 228, in get_num_blocks
        return max(*nbs) + 1
    TypeError: max expected at least 1 argument, got 0
    is what happens here; it seems like the .pth files from chaiNNer are not compatible with VSGAN.

    Cu Selur

    Ps.: opened an issue over at https://github.com/rlaphoenix/VSGAN/issues/35 about this.
    Last edited by Selur; 5th May 2023 at 06:59.
  22. Ah, sadly. But nice that you opened an issue.

    I also tried some other combinations; they all did not work and threw the same error in the Vapoursynth Preview.

    PS: you can only merge 2x models with 2x models and Compact with other Compact models, I guess. Otherwise chaiNNer gives an error.
  23. Originally Posted by Platos View Post
    I also tried some other combinations; they all did not work and threw the same error in the Vapoursynth Preview.
    It's a VSGAN issue; the combined models will "work" in VapourSynth with vs-mlrt, or in AviSynth with avs-mlrt, as NCNN-converted models.

    "work" because they process... but sometimes the results might not be what you expect. A merged (interpolated) model is not the same thing as merging the outputs of models as layers or compositing +/- masks

    PS: you can only merge 2x models with 2x models and Compact with other Compact models, I guess. Otherwise chaiNNer gives an error.
    Yes, they have to be the same architecture and scale. A "Compact" (SRVGGNet) model will not merge with an ESRGAN (RRDBNet) model.
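    For reference, running such a converted model through vs-mlrt from a VapourSynth script might look roughly like the sketch below. The generic inference() helper and the Backend.NCNN_VK name are written from memory, so treat them as assumptions and check the vs-mlrt docs; the ONNX filename is a placeholder.
    Code:
    # Rough sketch of running a merged model (exported/converted to ONNX) via vs-mlrt.
    # inference() and Backend.NCNN_VK are assumptions from memory; verify against vsmlrt.py.
    import vapoursynth as vs
    from vsmlrt import inference, Backend

    core = vs.core

    clip = core.lsmas.LWLibavSource(r"input.mkv")
    clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

    clip = inference(clip, network_path=r"merged_model.onnx",  # placeholder filename
                     backend=Backend.NCNN_VK())

    clip = core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="709")
    clip.set_output()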
  24. Ah, ok. Yeah, maybe I have to "chain" it myself like in the comparison test above, where I just upscale twice with two models. But it takes so long.

    What is vs-mlrt? Probably not easy for me to use, I guess.
  25. @poisondeathray: Do you know a way to convert those models into a VSGAN-compatible version?
    Since there's a focus on speed here, it would benefit Nvidia users greatly to try this TensorRT Docker setup. It improves inference time by at least a factor of 4.

    https://github.com/styler00dollar/VSGAN-tensorrt-docker
  27. Isn't that 'just' vs-mlrt inside a docker container?
    Originally Posted by Selur View Post
    Isn't that 'just' vs-mlrt inside a docker container?
    But with the ability to use TensorRT.
  29. Confused... doesn't normal vs-mlrt also support TensorRT through vstrt (https://github.com/AmusementClub/vs-mlrt/tree/master/vstrt)?
    Perhaps it does now. Given that the Docker image is self-contained, you have the advantage of not running into Python dependency hell.