VideoHelp Forum




  1. Originally Posted by SaurusX View Post
    Since there's a focus on speed here, it would benefit Nvidia users greatly to try this TensorRT Docker setup. It improves inference time by at least a factor of 4.

    https://github.com/styler00dollar/VSGAN-tensorrt-docker
    That sounds interesting. What speed (FPS) do you get, and with which graphics card, for 720p to 1080p upscaling with that?

    Four times the speed sounds nice.

    Can I somehow check whether my tensor cores are being used, to see if Hybrid already uses them?
  2. vsgan does use the tensor cores, but it is not as fast as vs-trt (https://github.com/AmusementClub/vs-mlrt/tree/master/vstrt).
  3. And does Hybrid use this vs-trt? Otherwise I could try the Docker thing to see how much faster it is.
  4. No, Hybrid does not use vs-mlrt, vs-trt is a part of vs-mlrt.
  5. Originally Posted by Selur View Post
    No, Hybrid does not use vs-mlrt, vs-trt is a part of vs-mlrt.
    Ah ok, what a pity. But it would surely be a ton of work, and I'd probably be useless as a helper xD

    Then I will try this Docker thing.

    But I want to thank you for your software. It's so great!
    Last edited by Platos; 5th May 2023 at 10:37.
  6. btw. just gave vstrt a try, and it is faster than vsgan.
  7. Originally Posted by Selur View Post
    btw. just gave vstrt a try, and it is faster than vsgan.
    You mean you used one of these models, like LD-Anime Compact, with vstrt?

    How big is the speed improvement?

    Sadly I'm not able to install the Docker thing. The GitHub page says the YouTube tutorial is a) outdated (the comments prove that too) and b) the method shown in the video is slower, so it's actually not good for me.

    But I totally don't get how to install it. I mean Docker itself is ok, you just run the .exe, but the rest is horrible.
  8. You mean you used one of these models, like LD-Anime Compact, with vstrt?
    yes. (see: https://forum.videohelp.com/threads/409482-vs-mlrt-a-bit-after-Getting-started-Backend-trouble)
    How big is the speed improvement?
    Quite a bit; I haven't benchmarked it.
  9. Ah, nice thread. I hope someone will answer there.

    Maybe I'd better try this with vstrt, since you already documented it.
  10. It was already documented on the vs-mlrt homepage; the only thing I did was write an example.

    btw. the VSGAN author just fixed the loading of the combi models, see: https://github.com/rlaphoenix/VSGAN/issues/35
    (so if you now have two models which work in VSGAN and you mix them with chaiNNer, the result should also work in VSGAN)
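    For reference, what chaiNNer's model interpolation boils down to is a weighted average of the two models' weights. A rough sketch of the idea in Python (the file names and the 'params' handling are just placeholders, chaiNNer itself handles the details):
    Code:
    import torch

    # Hypothetical file names; use your own model files.
    a = torch.load("LD-Anime_Compact.pth", map_location="cpu")
    b = torch.load("FutsuuAnime_Compact.pth", map_location="cpu")

    # Some releases wrap the weights in a 'params'/'params_ema' key.
    a = a.get("params", a)
    b = b.get("params", b)

    w = 0.65  # share of the second model, e.g. a 35/65 mix
    mixed = {k: (1.0 - w) * a[k] + w * b[k] for k in a}
    torch.save(mixed, "LD-Futsuu_35-65.pth")

    Both models need the same architecture (same keys and tensor shapes), which is why only compatible pairs can be mixed.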

    Cu Selur
  11. Originally Posted by Selur View Post
    It was already documented on the vs-mlrt homepage; the only thing I did was write an example.

    btw. the VSGAN author just fixed the loading of the combi models, see: https://github.com/rlaphoenix/VSGAN/issues/35
    (so if you now have two models which work in VSGAN and you mix them with chaiNNer, the result should also work in VSGAN)

    Cu Selur
    Ok, nice. I read that you had to do something with Hybrid? Can I get the new Hybrid version?

    And btw, you wrote "not" instead of "now" there on GitHub.
  12. Wait, did you try to implement it in Hybrid, or why does the path in your first code snippet contain "hybrid"?
  13. Didn't create a new Hybrid version; adding support for vs-mlrt would take quite a bit of time.
    I simply used a script created by Hybrid as a basis.

    Removed the 'not' on GitHub.


    Cu Selur
  14. Ok, but if there is no new Hybrid version, how does the fusion of two VSGAN models work? Something has to be new. Do I have to download the models again, or the chaiNNer software?

    [Edit: So you say "I replaced the Vapoursynth\Lib\site-packages\vsgan in my setup with latest code:". Where can I get that replacement?]

    And yeah, I understand. Sadly I can't help you with Hybrid because I'm a noob (or less than that) at programming. But if I can help you with it someday in any way, tell me. Like an alpha tester.

    Update: It works now. I had to download the "vsgan" folder from the files here: https://github.com/rlaphoenix/VSGAN

    and replace the vsgan folder in Hybrid with the one from GitHub (hybrid/64bit/Vapoursynth\Lib\site-packages\vsgan).
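    For reference, the same replacement as a small Python sketch (the paths are just the ones from this post, adjust them to your own install):
    Code:
    import shutil

    # 'vsgan' folder from the downloaded https://github.com/rlaphoenix/VSGAN repo,
    # copied over Hybrid's bundled copy (paths are examples, adjust as needed).
    src = r"VSGAN\vsgan"
    dst = r"hybrid\64bit\Vapoursynth\Lib\site-packages\vsgan"
    shutil.copytree(src, dst, dirs_exist_ok=True)  # overwrites existing files (Python 3.8+)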

    I will also try to do that with vs-mlrt, but let's see if I manage to get it to work. Probably not xD But let's see.
    Last edited by Platos; 5th May 2023 at 15:04.
  15. Happy you figured out what to do to get Hybrid to work with the combi models.
  16. Warning: I just noticed that the combi models work fine now, but some of the other models have stopped working properly.

    Cu Selur
  17. Really? Which ones? I can test it. I tried LD-Anime Compact and FutsuuAnime Compact. They worked.
  18. For example 4x_BSRGAN
  19. Edit: I have now tested some chained models. Because the results are not thaaat stunning, I won't post pictures, I'll just describe them:

    - A 50/50 hybrid of LD-Anime Compact and FutsuuAnime Compact is even blurrier than LD-Anime Compact, which has the "weakest" sharpening effect of all these VSGAN Compact models.

    - A 35/65 mix of the same models (65 for Futsuu) maybe makes the result very slightly!! sharper, but I get a (way) better result using Darken & Thin GLSL A4k, DAAmod and CAS GLSL sharpening (after one run of LD-Anime Compact). So it's not worth it.

    - A 25/75 mix of the same is similar, but a bit darker (the more Futsuu, the darker the lines get). It then looks a bit (really only a bit) "sharper", because in my opinion blacker lines look sharper. It doesn't look worse (I only tested it on one frame); the lines get neither thicker nor thinner. It's not worse, but it's no real improvement like when I use the two models in a row. Maybe you like the darker tone of the lines, but it still has the same sharp (or not so sharp) style as LD-Anime Compact. It's ok.

    So compared to running LD-Anime Compact and then FutsuuAnime Compact, the chained models are all weaker.

    Then I did a test with AniFilm and LD-Anime Compact, and it is also not really good.
    The hybrid of LD-Anime Compact and DigitalFilm Compact is funny: it produces a green tint, so all colors get more green.

    Edit: And I don't know why, but all the chained models cut off or shift the picture by a few pixels. So on the right side of the picture I see more when I use the single models (I tested the same scene again with chained and single models, and all the chained ones show the same shift, with the same source file as the single models).

    Originally Posted by Selur View Post
    For example 4x_BSRGAN
    It works on my end at 0.11 fps. After 5 minutes it started displaying the fps.

    Used VSGAN 4x_BSRGAN with 16 bit and 720p to 1080p resolution. No filters.
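    For context, running one of these models through VSGAN in a script looks roughly like this (written from memory of the VSGAN docs, so treat the exact calls as an assumption and check the documentation):
    Code:
    import vapoursynth as vs
    from vsgan import ESRGAN  # VSGAN's ESRGAN/RealESRGAN(-Compact) network class

    core = vs.core
    clip = core.lsmas.LWLibavSource(r"input.mkv")     # example source filter and path
    clip = core.resize.Bicubic(clip, format=vs.RGBS,
                               matrix_in_s="709")     # VSGAN wants an RGB clip

    clip = ESRGAN(clip, device="cuda") \
        .load(r"4x_BSRGAN.pth") \
        .apply() \
        .clip

    clip.set_output()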
    Last edited by Platos; 5th May 2023 at 17:25.
  20. I want to post an update on a sharpening improvement in this thread:

    I tried to avoid using FutsuuAnime Compact after LD-Anime Compact because of the speed loss. I managed to make the LD-Anime Compact version look closer to "LD-Anime Compact + FutsuuAnime Compact", but I could not fully match it.

    I used these filters:

    - Darken & Thin GLSL A4k (in Hybrid under Vapoursynth->Line) with 0.4 and 1.3 strength and HQ (it's important to use HQ, it's really better)
    - Then DAAmod (in Hybrid under Vapoursynth->Line) with nsize 8x4, nns 256 and pscrn 'old' (I just used some settings that sounded nice xD)
    - Then CAS GLSL (in Hybrid under Vapoursynth->Sharpen) with "better dialog" and "accurate"

    If someone has a better idea how to get closer to "LD-Anime Compact + FutsuuAnime Compact" without using an AI model, another slow method, or a method that changes the original look, tell me. I used the filters in the order I wrote them (a rough script sketch of this chain follows below).
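    A rough VapourSynth sketch of that chain, assuming the GLSL shaders are applied via vs-placebo and the AA step via havsfunc's daa; the shader file names, clip format and parameters here are my guesses, not what Hybrid actually generates:
    Code:
    import vapoursynth as vs
    import havsfunc as haf

    core = vs.core
    clip = core.lsmas.LWLibavSource(r"ld_anime_upscale.mkv")  # the already upscaled clip (example path)
    clip = core.resize.Bicubic(clip, format=vs.YUV444P16)     # assumed working format for the GLSL shaders

    clip = core.placebo.Shader(clip, shader=r"Anime4K_Darken_HQ.glsl")  # darken lines
    clip = core.placebo.Shader(clip, shader=r"Anime4K_Thin_HQ.glsl")    # thin lines
    clip = haf.daa(clip, nsize=4, nns=4)  # nnedi3-based AA as a stand-in for Hybrid's DAAmod (8x4, 256 neurons)
    clip = core.placebo.Shader(clip, shader=r"CAS.glsl")                # contrast-adaptive sharpening
    clip.set_output()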

    But here are some pictures to compare:

    Picture 1: https://imgsli.com/MTc2MDYx/0/1
    Picture 2: https://imgsli.com/MTc2MjE0

    So I think something around 1.35 strength for darken is ideal to get a similar look. 0.6 would give a similar line thickness, but the more you thin out lines, the more some patterns change. For example, have a look at the old man and his hair near his ear (and above): these "zig zag" lines change more when you use a higher level of "thin out". It looks sharper then, but the negative effect gets bigger too. So it's up to you what you like. I'd rather keep the original look, so I would go for 0.4 strength for line thinning.

    Then the anti-aliasing and the sharpener: I just took two that sounded good xD. So I just tried some. It helps a bit.

    Edit: It's worth running everything at once, because on my machine it did not slow down the upscaling at all (first the upscaler, then darken, then thin, then AA, then sharpen; that was the order I used). Unless you want to keep an untouched "LD-Anime Compact" version for future editing (like using Futsuu with a better graphics card later, so you don't have to compute the whole thing again). You could also do upscaling and filtering in two instances of Hybrid at the same time (first upscale a video with LD-Anime Compact, and once it's done you can start filtering the first video while upscaling the second; that won't take more time, I tested it).

    Edit 2: Here is another comparison with "LD-Anime Compact + DigitalFilm SuperUltraCompact" added. Since this is about speed, I thought I'd compare this too, because it does not slow things down much.

    https://imgsli.com/MTc2MjE5

    Now, after I worked a lot with "LD-Anime Compact", "LD-Anime Compact + filters" and "LD-Anime Compact + FutsuuAnime Compact", I personally think the DigitalFilm addition looks worse than "LD-Anime Compact + filters". It has much thicker and worse-looking lines than all of the others, in my opinion. Maybe you could darken and thin the DigitalFilm combination too, but you would have to thin it a lot more because its lines are thicker, so it would also change more than is needed with the other models.

    Conclusion: If you want the best image while changing the original look as little as possible, I would take:

    a) LD-Anime Compact (Edit: you can combine it with the same filters except darken; it will look similarly sharp as with darken, but will retain more of the original).
    b) LD-Anime Compact + the filters I explained above (I think currently the best value for the speed)
    c) LD-Anime Compact + FutsuuAnime Compact (best, but slow).

    My opinion!

    Edit 3: I did an additional test with LD-Anime Compact + the above filters, but without the darken filter. I would say it looks as sharp as with darken, but if you do not like the additional darkening you can use only the other filters (thin, AA & sharpen). I'm not sure which I like more, with or without darken; both are very nice. And of course you could use a darken value between zero and my value of 1.35, to taste.

    Picture: https://imgsli.com/MTc2MjM5

    (Sadly I can't rearrange my uploaded pictures there. Now I have uploaded the same pictures multiple times lol)
    Last edited by Platos; 5th May 2023 at 21:10.
  21. How does your filter combination deal with scenes with smoke, fine details, etc.? Not knowing what the source looks like makes it hard to tell what you are aiming for.
    Did you try interpolating different models than "LD-Anime Compact + FutsuuAnime Compact"?
    Did you try different masks with different filters?

    There is also a ton of anime-related stuff over at https://github.com/Irrational-Encoding-Wizardry.
    -> there is always an option to improve.
    Last edited by Selur; 6th May 2023 at 00:12.
  22. That's good input, I will try to check something with smoke. There actually is often smoke; I just have to find it.

    And yes, but always a combination with LD-Anime Compact. I tried it with Futsuu (different weights), DigitalFilm (horrible green) and AniFilm. None of them really gives a better result. Maybe a 35/65 or 25/75 combination of LD-Anime Compact and FutsuuAnime Compact is ok, but not really better; it's something like LD-Anime Compact with darker lines, but not sharper. So LD-Anime Compact + filters is better (and I actually like version a) from post #80 most now, i.e. LD-Anime Compact with all the filters except darken).

    No, I did not try masks because it's not supported in Hybrid, if I remember correctly, and I'm not good at command-line things. I will look into the command-line stuff for vs-mlrt or that Docker thing, because when I can speed things up it will cost me less (I always calculate the kWh costs for a whole series, an episode, or 100 episodes; a speed of 2 fps is quite expensive in that sense. I actually wonder who runs those non-compact models that give 0.1 fps).

    And thanks, I will check the link.
  23. No, I did not try masks because it's not supported in Hybrid
    Not true, Hybrid has basic masking support. (Enable: Filtering->Vapoursynth->Misc->UI->Show 'Masked'-controls)
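    For reference, this kind of masking boils down to a std.MaskedMerge in the generated VapourSynth script. A minimal sketch (the edge mask and the filter are just examples):
    Code:
    import vapoursynth as vs

    core = vs.core
    clip = core.lsmas.LWLibavSource(r"input.mkv")  # example source
    filtered = core.std.BoxBlur(clip)              # stand-in for whatever filter you want to restrict

    # Build a simple luma edge mask and apply the filter only on the masked (edge) areas.
    mask = core.std.Sobel(core.std.ShufflePlanes(clip, 0, vs.GRAY))
    masked = core.std.MaskedMerge(clip, filtered, mask)
    masked.set_output()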
    Hybrid has a bunch of additional options that are not shown by default in the UI; here's what Hybrid can look like:


    I will look into the command-line stuff for vs-mlrt or that Docker thing.
    Not needed when you use vsmlrt.py, ...

    Cu Selur
  24. But I can't use vs-mlrt without the command line, can I? Your thread shows a lot of code. It's not possible with Hybrid.

    Or what do you mean by "not needed when you use vsmlrt.py"?

    Edit: Saw your message now. Wow, very cool man. Thank you! This is nice.

    I will test it right away. I will tell you if I find any bugs.

    Thank you very much !!!!
    Last edited by Platos; 6th May 2023 at 15:23.
  25. a. Check your PMs, I sent you a link to a version which can also use vstrt through vs-mlrt.
    b. Even in the old Hybrid one can in theory use these through a custom section.
    c. If you use vsmlrt like I did in the other thread and set the backend to Backend.TRT, vstrt is used and the .engine file gets created automatically (see the sketch below).
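    Roughly what that looks like in a script (just a sketch along the lines of the other thread; the ONNX file name is a placeholder, and the exact Backend.TRT options should be checked against the vs-mlrt docs):
    Code:
    import vapoursynth as vs
    from vsmlrt import inference, Backend

    core = vs.core
    clip = core.lsmas.LWLibavSource(r"input.mkv")    # example source
    clip = core.resize.Bicubic(clip, format=vs.RGBS,
                               matrix_in_s="709")    # the models expect an RGB float clip

    # The first run builds the TensorRT .engine file next to the model; later runs reuse it.
    clip = inference(clip, network_path=r"LD-Anime_Compact.onnx",
                     backend=Backend.TRT(fp16=True, num_streams=2))
    clip.set_output()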

    Cu Selur
  26. Yes, sorry, I did not check first.

    I'll try it now, but the download will take half an hour first.

    Thank you very much again. Will try it.
  27. Holy shit man.

    9 FPS with VSGAN LD-Anime Compact, and 43 FPS with vs-mlrt and Backend TRT!!!!!!
    That's ~5 times faster (using 16 bit on both, as I always do with 720p+ input).

    That's so cool!

    Thank you so much for implementing that. That's so nice. I mean, that's like a game changer!

    I will test more now and tomorrow.

    Edit: When you use 2 streams in the settings it's even 53 FPS for me, so it's nearly 6 times faster.
    Last edited by Platos; 6th May 2023 at 21:04.
  28. I want to add that this website offers a lot of models.

    https://openmodeldb.info/?t=arch%3Acompact

    The good thing is that you can filter, for example, for compact models (like I did above), and it already has example pictures, which makes it much, much easier to find a model in my opinion. You can also filter by purpose, like "dehalo" and such. I also found some models which I did not see on the site below, or which are not available there anymore: https://upscale.wiki/wiki/Model_Database

    For example this one: https://openmodeldb.info/models/2x-GT-v2-evA

    It is (as far as I can see) a version 2, which I did not find on the other website.
    Last edited by Platos; 17th Jun 2023 at 09:44.
  29. Interesting. Thanks for the info.
  30. Originally Posted by Platos View Post
    Holy shit man.

    9 FPS with VSGAN LD-Anime Compact, and 43 FPS with vs-mlrt and Backend TRT!!!!!!
    That's ~5 times faster (using 16 bit on both, as I always do with 720p+ input).

    That's so cool!

    Thank you so much for implementing that. That's so nice. I mean, that's like a game changer!

    I will test more now and tomorrow.

    Edit: When you use 2 streams in the settings it's even 53 FPS for me, so it's nearly 6 times faster.
    What is weird: I did not complete the processing, but after running about 2 min of LD-Anime Compact both run at a similar speed for me, one at 9 fps and the other (vstrt) at 10 fps.


