vsgan does use the tensor cores, but it is not as fast as vs-trt (https://github.com/AmusementClub/vs-mlrt/tree/master/vstrt).
users currently on my ignore list: deadrats, Stears555, marcorocchini -
And does Hybrid use vs-trt? Otherwise I could try the Docker thing to see how much faster it is.
-
Last edited by Platos; 5th May 2023 at 10:37.
-
Btw, I just gave vstrt a try, and it is faster than vsgan.
-
You mean you used one of these models, like LD-Anime Compact, with vstrt?
How much is the speed improvement?
Sadly I'm not able to install this Docker thing. The GitHub site says the YouTube tutorial is a) outdated (the comments confirm that) and b) the method shown in the video is slower, so it's actually not useful for me.
But I totally don't get how to install it. I mean, Docker itself is OK, just run the .exe, but the rest is horrible. -
You mean you used one of these models, like LD-Anime Compact, with vstrt?
How much is the speed improvement? -
Ah, nice thread. I hope someone will answer there.
Maybe I'd better try this with vstrt, because you already documented it. -
It was already documented on the homepage of vs-mlrt; the only thing I did was write an example.
Btw, the VSGAN author just fixed the loading of the combi models, see: https://github.com/rlaphoenix/VSGAN/issues/35
(So if you now have two models which work in VSGAN, and you mix them with chaiNNer, the result should also work in VSGAN.)
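For reference, loading such a .pth model (chaiNNer-mixed or plain) in a VapourSynth script looks roughly like this. This is a sketch based on the VSGAN 3.x documentation; the file names are placeholders, and the use of the ESRGAN class for Compact (SRVGGNet) models is my assumption from the VSGAN docs:

```python
# Sketch of a VapourSynth script using VSGAN (assumes VSGAN 3.x and a CUDA GPU).
import vapoursynth as vs
from vsgan import ESRGAN  # Compact models are loaded through this class too, per VSGAN docs

core = vs.core
clip = core.lsmas.LWLibavSource("input.mkv")      # any source filter works here
clip = core.resize.Bicubic(clip, format=vs.RGBS)  # VSGAN expects an RGB clip

# "interpolated_model.pth" is a placeholder, e.g. a chaiNNer-made combi model
clip = ESRGAN(clip, device="cuda") \
    .load(r"interpolated_model.pth") \
    .apply() \
    .clip

clip = core.resize.Bicubic(clip, format=vs.YUV420P16, matrix_s="709")
clip.set_output()
```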
Cu Selur -
-
Wait, did you try to implement it in Hybrid, or why does the path in your first code snippet contain "hybrid"?
-
OK, but if there is no new Hybrid version, how does the fusion of two VSGAN models work? Something has to be new. Do I have to download the models again, or the chaiNNer software?
[Edit: So you say "I replaced the Vapoursynth\Lib\site-packages\vsgan in my setup with latest code:". Where can I get that replacement code?]
And yeah, I understand. Sadly I can't help you with Hybrid because I'm a noob (or less than that) at programming. But if I can help you with that someday in any way, tell me. Like an alpha tester.
Update: It works now. I had to download the "vsgan" folder from the files here: https://github.com/rlaphoenix/VSGAN
and replace the vsgan folder in Hybrid with the one from GitHub (hybrid/64bit/Vapoursynth\Lib\site-packages\vsgan).
I will also try to do that with vs-mlrt, but let's see if I manage to get it working. Probably not xD But let's see.
Last edited by Platos; 5th May 2023 at 15:04.
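If manually copying the folder feels fragile, the same update can usually be done with pip against Hybrid's bundled portable Python. A sketch; the exact location of Hybrid's python.exe is an assumption based on the folder path mentioned above:

```shell
# Install the latest VSGAN straight from the GitHub repo into Hybrid's
# bundled Python (the path below is an assumption, adjust to your setup).
cd hybrid/64bit/Vapoursynth
./python.exe -m pip install --upgrade git+https://github.com/rlaphoenix/VSGAN
```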
-
Warning: I just noticed that the combi models work fine now, but some of the other models have stopped working properly.
Cu Selur -
Really? Which ones? I can test them. I tried LD-Anime Compact and Futsuu Anime Compact; they worked.
-
For example 4x_BSRGAN
-
Edit: I have now tested some chained models. Because the results are not thaaat stunning, I won't post pictures; I'll just describe them:
- A 50/50 hybrid of LD-Anime Compact and FutsuuAnime Compact is even blurrier than LD-Anime Compact, which has the "weakest" sharpening effect of all these VSGAN Compact models.
- A 35/65 mix of the same models (65 for Futsuu) maybe makes the result very slightly!! sharper, but I get a (way) better result by using Darken & Thin GLSL A4k, DAAmod and CAS GLSL sharpening (after one run of LD-Anime Compact). So it's not worth it.
- A 25/75 mix of the same is similar, but a bit darker (the more Futsuu, the darker the lines get). It then looks a bit (really only a bit) "sharper", because in my opinion blacker lines look sharper. I mean, it looks no worse (I only tested it on one frame); the lines don't get bigger or smaller. It's not worse, but it's no real improvement like when I use two models in a row. Maybe you like the darker tone of the lines, but it still has the same sharp (or not so sharp) style of LD-Anime Compact. It's OK.
So compared to running LD-Anime Compact and then FutsuuAnime Compact in a row, the chained models are all weaker.
Then I did a test with anifilm and LD-Anime Compact, and it is also not really good.
The hybrid of LD-Anime Compact and DigitalFilm Compact is funny: it produces a green tint, so all colors get more green.
Edit: And I don't know why, but all the chained models cut off or shift the picture by a few pixels. On the right side of the picture I see more when I use the single models (I tested the same scene again with chained and single models, and all the chained ones show the same shift; the source file is the same as for the single models).
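What chaiNNer does when it mixes two models at, say, 25/75 is (conceptually) a per-parameter linear blend of the two networks' weights. A minimal sketch of that idea in plain Python; real models store tensors rather than single floats, and the key names here are purely illustrative:

```python
def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Blend two model state dicts: alpha * sd_a + (1 - alpha) * sd_b.

    Both dicts must come from the same architecture, i.e. share the
    same keys (and, for real models, the same tensor shapes).
    """
    if sd_a.keys() != sd_b.keys():
        raise ValueError("models do not share the same architecture/keys")
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Toy example with scalar "weights" standing in for tensors:
ld = {"conv1.weight": 1.0, "conv1.bias": 0.5}
futsuu = {"conv1.weight": 3.0, "conv1.bias": 0.1}
mix_25_75 = interpolate_state_dicts(ld, futsuu, 0.25)  # 25% LD, 75% Futsuu
```

Note that a pure weight blend like this cannot by itself explain the pixel shift described above; that would have to come from how the blended model is exported or applied.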
It works on my end with 0.11 fps. After 5 minutes it started displaying the fps to me.
Used VSGAN 4x_BSRGAN with 16-bit, 720p to 1080p resolution, no filters.
Last edited by Platos; 5th May 2023 at 17:25.
-
I want to post an update on a sharpening improvement in this thread:
I tried to avoid using FutsuuAnime Compact after LD-Anime Compact because of the speed loss. I managed to make the LD-Anime Compact version look closer to "LD-Anime Compact + FutsuuAnime Compact", but I could not fully reach it.
I used these filters:
- Darken and Thin GLSL A4k (in Hybrid's VapourSynth line) with 0.4 and 1.3 strength and HQ (it's important to use HQ, it's really better)
- Then DAAmod (in Hybrid's VapourSynth line) with Nsize 8x4, NSS 256 and PScrn old (I just used some settings that sounded nice xD)
- Then CAS GLSL (in Hybrid's VapourSynth sharpen section) with "better diagonals" and "accurate"
If someone has a better idea how to get closer to "LD-Anime Compact + FutsuuAnime Compact" without using an AI model, another slow method, or a method that changes the original look, tell me. I used the filters in the order I wrote them.
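The GUI settings above map roughly onto a VapourSynth chain like the following. This is only a sketch: the Anime4K shader file names, the use of havsfunc's daa() as a stand-in for Hybrid's "DAAmod", and the CAS shader file are all assumptions, not Hybrid's actual generated code:

```python
# Rough sketch of the filter order described above:
# upscale -> darken -> thin -> AA -> sharpen.
import vapoursynth as vs
import havsfunc  # daa() used here as a stand-in for Hybrid's "DAAmod"

core = vs.core
# Stand-in for the output of the LD-Anime Compact upscale:
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# Anime4K darken/thin GLSL shaders, applied through the vs-placebo plugin
# (shader file names are assumptions):
clip = core.placebo.Shader(clip, shader="Anime4K_Darken_HQ.glsl")
clip = core.placebo.Shader(clip, shader="Anime4K_Thin_HQ.glsl")

clip = havsfunc.daa(clip)  # anti-aliasing step
clip = core.placebo.Shader(clip, shader="CAS-scaled.glsl")  # CAS GLSL sharpening

clip.set_output()
```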
But here are some pictures to compare:
Picture 1: https://imgsli.com/MTc2MDYx/0/1
Picture 2: https://imgsli.com/MTc2MjE0
So I think something around 1.35 strength for darken is ideal for a similar look. 0.6 would give a similar line thickness, but the more you thin out lines, the more some patterns change. For example, have a look at the old man and at his hair near his ear (and above): these "zig zag" lines change more when you use a higher level of "thin out". It looks sharper then, but yeah, the negative effect gets bigger. So it's up to you what you like. I'd rather keep the original look, so I would go for 0.4 strength for line thinning.
Then the anti-aliasing and the sharpener: I just took two that sounded good xD. So I just tried some; it helps a bit.
Edit: It's worth running everything at once, because on my machine it did not slow down the upscaling in any way (first upscaler, then darken, then thin, then AA, then sharpen; that was the order I used). Unless you want to keep an untouched "LD-Anime Compact" version for future editing (like running Futsuu with a better graphics card in the future, so you don't have to calculate the whole thing again). You could also do upscaling and filtering in two instances of Hybrid at the same time: first upscale a video with LD-Anime Compact, and once it's done, start filtering the first video while upscaling the second. That doesn't take more time; I tested it.
Edit 2: Here is another comparison with an additional "LD-Anime Compact + DigitalFilm SuperUltraCompact". Because it's about speed, I thought let's compare this too, since it does not slow things down much.
https://imgsli.com/MTc2MjE5
Now, after working a lot with "LD-Anime Compact", "LD-Anime Compact + filter" and "LD-Anime Compact + FutsuuAnime Compact", I personally think the addition of DigitalFilm looks worse than "LD-Anime Compact + filter". It has much thicker and worse-looking lines than all of the others, in my opinion. Maybe you could darken and thin the DigitalFilm combination too, but you would have to thin it a lot more because its lines are thicker, so it would also change more than is needed with the other models.
Conclusion: If you want the best image while changing the original look as little as possible, I would take:
a) LD-Anime Compact (Edit: you can combine it with the same filters except darken; it will look similarly sharp as with darken, but will retain more of the original).
b) LD-Anime Compact + the filters I explained above (I think currently the best value for speed).
c) LD-Anime Compact + FutsuuAnime Compact (best, but slow).
My opinion!
Edit 3: I did an additional test with LD-Anime Compact + the above filters, but without the darken filter. I would say it looks as sharp as with darken, but if you do not like the additional darkening, you can use only the other filters (thin, AA & sharpen). I'm not sure which I like more, with or without darken; both are very nice. And of course you could use a darken value between zero and my used value of 1.35, to your taste.
Picture: https://imgsli.com/MTc2MjM5
(Sadly I can't rearrange my uploaded pictures there. Now I have uploaded the same pictures multiple times lol)
Last edited by Platos; 5th May 2023 at 21:10.
-
How does your filter combination deal with scenes with smoke, details, etc.? Not knowing what the source looks like doesn't really help to tell what you are aiming for.
Did you try interpolating models other than "LD-Anime Compact + FutsuuAnime Compact"?
Did you try different masks with different filters?
There is also a ton of anime-related stuff over at https://github.com/Irrational-Encoding-Wizardry.
-> there is always an option to improve.
Last edited by Selur; 6th May 2023 at 00:12.
-
That's good input, I will try to check something with smoke. There is actually often smoke; I just have to find it.
And yes, but always in a combination with LD-Anime Compact. I tried it with Futsuu (different weights), DigitalFilm (horrible green) and anifilm. None of them really gets a better result. Maybe a 35/65 or 25/75 combination of LD-Anime Compact and FutsuuAnime Compact is OK, but not really better. It's something like LD-Anime Compact with darker lines, but not sharper. So LD-Anime Compact + filter is better (and I actually now like version a) from post #80 the most, i.e. LD-Anime Compact with all the filters except darken).
No, I did not try masks because it's not supported in Hybrid, if I remember correctly, and I'm not good at command-line things. I will look into the command-line stuff for vs-mlrt or this thing, because if I can speed things up, it will cost me less (I actually always calculate the kWh costs for a whole series, an episode, or 100 episodes; a 2 fps speed is quite expensive in a way. I don't know who actually runs these non-Compact models that give 0.1 fps).
And thanks, I will check the link. -
No, I did not try masks because it's not supported in Hybrid
Hybrid has a bunch of additional options that are not shown by default in the UI; here's how Hybrid can look:
I will look into the command-line stuff for vs-mlrt or this thing.
Cu Selur -
But I can't use vs-mlrt without the command line, can I? Your thread shows a lot of code. It's not possible with Hybrid.
Or what do you mean by "I don't need it when I use vsmlrt.py"?
Edit: Saw your message just now. Wow, very cool, man. Thank you! This is nice.
I will test it directly. I will tell you if I find any bugs or so.
Thank you very much!!!!
Last edited by Platos; 6th May 2023 at 15:23.
-
a. Check your PMs; I sent you a link to a version which can also use vstrt through vs-mlrt.
b. Even in the old Hybrid one can, in theory, use these through a custom section.
c. If you use vsmlrt like I did in the other thread and set the backend to Backend.TRT, vstrt is used and the .engine file gets created automatically.
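In script form, that Backend.TRT setup looks roughly like this. A hedged sketch based on the vs-mlrt documentation: the source filter, ONNX file name and color handling are assumptions, and Compact .pth models have to be converted to ONNX first (e.g. with chaiNNer):

```python
# Sketch of using vsmlrt.py with the TensorRT backend (vstrt).
# On the first run, vstrt builds and caches the .engine file for the model.
import vapoursynth as vs
from vsmlrt import inference, Backend

core = vs.core
clip = core.lsmas.LWLibavSource("input.mkv")
clip = core.resize.Bicubic(clip, format=vs.RGBH, matrix_in_s="709")  # fp16 RGB input

# "LD-Anime_Compact.onnx" is a placeholder for the converted model file.
clip = inference(
    clip,
    network_path="LD-Anime_Compact.onnx",
    backend=Backend.TRT(fp16=True, num_streams=2),  # 2 streams, as mentioned later in the thread
)

clip = core.resize.Bicubic(clip, format=vs.YUV420P16, matrix_s="709")
clip.set_output()
```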
Cu Selur -
Yes, sorry, I did not check first.
I'm trying it now, but the download will take half an hour first.
Thank you very much again. Will try it. -
Holy shit man.
9 fps with VSGAN LD-Anime Compact, and 43 fps with vs-mlrt and the TRT backend!!!!!!
That's ~5 times faster (using 16-bit on both, as I always do with 720p+ input).
That's so cool!
Thank you so much for implementing that. That's so nice. I mean, that's like a game changer!
I will test more now and tomorrow.
Edit: When you use 2 streams in the settings it's even 53 fps for me, so it's nearly 6 times faster.
Last edited by Platos; 6th May 2023 at 21:04.
-
I want to add that this website offers a lot of models:
https://openmodeldb.info/?t=arch%3Acompact
The good thing is that you can filter, for example, for Compact models (like I did above), and it already includes sample pictures, which makes it much, much easier to find a model in my opinion. You can also filter by purpose, like "dehalo" or something. And I also found some models which I did not see on the site below, or which are no longer available there: https://upscale.wiki/wiki/Model_Database
For example this one: https://openmodeldb.info/models/2x-GT-v2-evA
It is (as far as I can see) a version 2, which I did not find on the other website.
Last edited by Platos; 17th Jun 2023 at 09:44.
-
Interesting. Thanks for the info.
-