VideoHelp Forum
  1. First of all: this is a fairly big question. If the topic interests you or you just want to help, please do, but it is a very specific question about specific models (or methods). So please don't just say something like "use RealESRGAN" when there are hundreds of models; please tell me the specific model.

    I'm looking for the best upscaler for anime like Naruto, One Piece (older episodes) and so on, i.e. anime from around 2000, but Naruto is the most important. By "upscaler" I mean models like RealCUGAN, RealESRGAN, etc. When I say RealESRGAN I don't mean any ESRGAN-based upscaler (CUGAN is, I believe, also in a way based on ESRGAN); I mean the specific model RealESRGAN x4Plus Anime. I use RealCUGAN at the moment. A lot of people say RealESRGAN is the best, but in my opinion it's worse for anime from 1995-2010, because those sources are often lower quality. RealESRGAN, for example, draws double lines where there is a blurry single line, which looks very ugly. It also tends not to reconstruct details correctly. Here is a picture:

    https://img.isharepc.com/wp-content/uploads/2022/02/Real-CUGAN-2.jpg

    I saw exactly the same problem when I upscaled with RealESRGAN. As you can see, RealESRGAN makes the image look sharper than RealCUGAN (which is probably why people think it's better), but it has far more artifacts and invents things that were never intended to be there. That's why I don't like RealESRGAN (this model) and don't understand how people can keep calling it good, when it produces huge artifacts.

    RealCUGAN, however, is also far from perfect. It produces almost none of the RealESRGAN-style artifacts, but it changes the art style slightly: it makes the picture look more "childish", "shiny", "cheesy", and turns dark lines into thicker, blacker lines, and so on. I think that's because newer anime have a different art style than anime from around 2000.

    I will now show you what problems I have with RealCUGAN and why I'm looking for something better, or any idea how to solve this. I'll list the models I've already tried further down.

    Thicker lines: this makes small objects look bad, because on a small object the lines make up (proportionally) a large part of the object's area. So if the lines get thicker and darker, it looks bad. The lesser problem is that the black outlines around people become blacker and thicker. If the person isn't far away, it's not that bad. But as I said, on something small... look:

    https://imgsli.com/MTc0NzYx

    This picture also has a slight "double-line" problem, but with RealCUGAN that's normally not a big issue, so please don't focus on it.

    A less intense example, because the people are less far away and therefore bigger: https://imgsli.com/MTc0NzYz

    Looking cheesy / less "real" / less "textured", more comic-like/childish:

    https://imgsli.com/MTc0NzY0

    For me, the moon in the original has a kind of "realness". It's blurry, but I feel the moon has (or should have) some real-looking texture beneath the blur. In the upscaled version it just looks completely "cheesy". It doesn't look the way I imagine the picture without the blur, if you know what I mean.

    Another one (look at the sky and leaves): https://imgsli.com/MTc0NzY1
    Another one (look at the clouds): https://imgsli.com/MTc0NzY2

    You can see that it makes bright parts of the clouds even brighter, which makes the picture look childish.

    Now I'll also show you a picture where I think it does a very good job, because you should see the whole picture, not only the bad parts. Actually, most of the time it does a very good job. But you probably already know that if you're into anime upscaling (at least to me it looks good):

    https://imgsli.com/MTc0NzY4

    Maybe you also think it thickens and darkens the lines a bit. I tried to fix that with filters (if you have an idea, tell me). But overall it does a good job, in my eyes.

    So coming to the conclusion:

    I'm looking for a model that has fewer of these problems but still has the clarity of RealCUGAN or RealESRGAN. It also shouldn't be super slow. At the moment, on one specific video, I get 20 fps with RealCUGAN, 1.5-2 fps with RealESRGAN x4Plus Anime, and 0.1-0.5 fps with some VSGAN models. Anything below 1 fps is a no-go, and even 1-2 fps is actually quite bad. But if you know a model that solves all my problems and is merely slow, please tell me anyway. What I'm not looking for are things like NNEDI3. I tried it and others, and I'd need a magnifying glass to see a difference from a media player's integrated upscaler. Maybe you'll say I'm blind; fine, maybe I am, but that's just not what I'm looking for, so please be open-minded. Answers like "AI upscaling is all ugly garbage" are not what I'm asking for here.
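    To put those frame rates in perspective, here is a quick back-of-the-envelope calculation. The episode length and video frame rate are my own assumptions (a typical ~23-minute TV episode at 23.976 fps), not numbers from this thread:

```python
def encode_hours(minutes, video_fps=23.976, upscaler_fps=1.0):
    """Rough wall-clock time to upscale a video at a given upscaler speed."""
    frames = minutes * 60 * video_fps    # total frames in the episode
    return frames / upscaler_fps / 3600  # seconds -> hours

# Hypothetical ~23-minute episode:
print(round(encode_hours(23, upscaler_fps=20.0), 2))  # RealCUGAN at 20 fps -> ~0.46 h
print(round(encode_hours(23, upscaler_fps=0.5), 2))   # a 0.5 fps VSGAN model -> ~18.38 h
```

    So at 0.5 fps a single episode takes the better part of a day, which is why anything below 1 fps is ruled out.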

    Now I'll tell you which models I've already used, what I think of them, and how I used them (I only use GUIs, but you don't have to restrict your suggestions to models that have a GUI):

    With Waifu2x-Extension-GUI:

    - RealCUGAN (best)
    - RealESRGAN (OK, but see my explanation above)
    - Waifu2x (worthless; does not really increase the subjective "sharpness"/"clarity" of the anime compared to media-player-integrated upscalers)
    - SRMD (worthless; same as above)
    - Anime4K (worthless; same as above)

    With Selur's Hybrid:

    - RealCUGAN + RealESRGAN: same models, similar results
    - VSGAN models:
    - 4x AnimeSharp (super slow and produces double lines; slower than RealESRGAN but the same or worse)
    - 2x_Loyaldk-Keroro_650000_V1.0 (totally not worth it; does not really make the image look "sharper")
    - x_OLDIES_290000_G_FINAL_interp_03 and 4x_OLDIES_ALTERNATIVE_FINAL (very slow; the result is OK, but it has a yellow tint, so it's just not the right thing)
    - 2x-UniScale_CartoonRestore-lite (this is a better one; it is not as "sharp" as RealCUGAN, but it does a decent job - however, it ran at 0.5 fps instead of RealCUGAN's 20 fps).


    So, that's it. Of the VSGAN models I only liked 2x-UniScale_CartoonRestore-lite (the last one). A similar model with more performance, or even with more clarity, would be something I'm interested in. I got the VSGAN models from here: https://upscale.wiki/wiki/Model_Database

    To finish, I can only say I would be really, really glad if some anime-upscaling expert would look into my question.
    Last edited by Platos; 30th Apr 2023 at 07:55.
  2. You could always degrain, denoise, sharpen, darken lines, ... (whatever filters you like, maybe combined with masking) and then use NNEDI3, but that would require more work and understanding.
    Since you mentioned 'mediaplayer integrated upscalers', you could also try GLSL-based filters.
    Anime fansubbers & co. usually spend tons of time coming up with decent scripts, and from time to time use scene-based filtering, ...

    Cu Selur
    users currently on my ignore list: deadrats, Stears555
  3. A lot of it can be personal taste, but for many people RealCUGAN is too sharp - ringing artifacts (i.e. white and black lines), loss of anime textures (the loss of textures and oversharpening lead to a "waterpainting" effect instead of looking "normal"). It can be OK for certain types of anime, or when used with other filters.

    Another useful model that is less sharp but preserves textures more is "2x_LD-Anime_Compact_330k_net_g" . It's a "compact" (SRVGGNet) model conversion from the original ESRGAN model, and about 10-30x faster depending on HW. (ESRGAN classic models are very slow)

    What some people do to address some problems is modulate the effect by mixing layers - for example 50% RealCUGAN, 25% something else, 25% model C, etc. Don't limit yourself to one model or one filter. It doesn't have to be scene by scene, but unless you use a model specifically trained on the same or a very similar source, you won't get ideal results.
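    Numerically, the layer mixing described above is just a per-pixel weighted average; this is what VapourSynth's std.Merge computes for two clips (output = clipa*(1-weight) + clipb*weight). A minimal pure-Python sketch of the arithmetic follows; the values are hypothetical single pixels, and a real script would call core.std.Merge on clips instead:

```python
# What std.Merge computes per pixel: out = a*(1-w) + b*w, where w is b's share.
def merge(a, b, weight):
    return a * (1.0 - weight) + b * weight

# A 50% / 25% / 25% three-way mix can be built from two successive two-way merges:
#   first blend model A and model B in a 2:1 ratio (weight 1/3),
#   then blend that result with model C at weight 1/4.
def mix3(a, b, c):
    m1 = merge(a, b, 1.0 / 3.0)   # a : b = 2 : 1
    return merge(m1, c, 0.25)     # (a+b) scaled to 75%, plus 25% of c

# Example pixel values from three hypothetical model outputs:
print(round(mix3(100.0, 60.0, 20.0), 6))  # 70.0 = 0.5*100 + 0.25*60 + 0.25*20
```

    In an actual VapourSynth script the same idea is two core.std.Merge calls with those weights.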
  4. Originally Posted by poisondeathray View Post
    A lot of it can be personal taste, but for many people RealCUGAN is too sharp - ringing artifacts (ie. white and black lines) , loss of anime textures (the loss of textures and oversharpening lead to the "waterpainting" effect" instead of looking "normal"). It can be ok for certain types of anime, or when used with other filters

    Another useful model that is less sharp but preserves textures more is "2x_LD-Anime_Compact_330k_net_g" . It's a "compact" (SRVGGNet) model conversion from the original ESRGAN model, and about 10-30x faster depending on HW. (ESRGAN classic models are very slow)

    What some people do to address some problems is modulate the effect by mixing layers. For example 50% RealCUGAN , 25% something else, 25% model C etc... Don't limit yourself to 1 model, or 1 filter. It doesn't have to be scene by scene, but unless you use a model specifically trained on the same or very similar source, you won't get ideal results
    Where can I find that model? I googled your exact quote and found nothing. Can you link it? And can I use it with Hybrid's VSGAN by importing my own models?

    But what do you mean by mixing layers? Can you explain that further? Do you mean masking? I read a guide but I don't really understand it.

    I just found this test: https://phhofm.github.io/upscale/multimodels.html

    What do you think about it? Do you know some of the models from there?

    But the "sharpness" of RealCUGAN is really not a problem for me - I want that. The problem is the other things you described: loss of texture, white and black lines. That's why I'm looking for a model that doesn't have these problems but still has that sharpness (which you probably don't like).

    Originally Posted by Selur View Post
    You could always degrain, denoise, sharpen, line darken,... (whatever filter you like maybe combined with masking) and then use NNEDI3, but that would require more work and understanding.
    Since you mentioned 'mediaplayer integrated upscalers' you could also try GLSL based filters.
    Anime fansubbers&co usually spend tons of time to come up with decent scripts and from time to time use scene based filtering,...

    Cu Selur
    Yeah, but I really can't do that, or don't have the time.

    But what do you mean by trying GLSL? Should those be better than media player upscalers? Because I tried some of them in the filter section and I don't see much difference.

    But yeah, I'm not looking for filters this time. That's why I asked specifically about these "AI" models.

    And yeah, it would be great to see some of these fansubber versions.

    Edit: You once wrote "If you use external model, you might want to rename to the form Xx_... since Hybrid can read the scale factor this way."

    I really didn't understand that. Can you give an example? Because in the other model names I don't see anything like "Xx" anywhere.

    And can I load this "2x_LD-Anime_Compact_330k_net_g" in Hybrid? Last time I just copied the .pth file into the folder where the others were.
    Last edited by Platos; 30th Apr 2023 at 09:27.
  5. But what do you mean with i can try GLSL ?
    I wrote that you can try GLSL-based filters.
    Hybrid comes with a bunch of them (64bit\vsfilters\GLSL); some are directly included in the GUI, like 'Adaptive Sharpen (GLSL)', 'Luma Sharpen (GLSL)', ..., and others you can load under Other->GLSL. Hybrid uses vs-libplacebo, so only GLSL shaders that are in mpv syntax can be used.
    Under Resizers, under GLSL Resizers, you can also use one of the resizers under Hybrid\64bit\vsfilters\ResizeFilter\GLSL.

    Usually GLSL filters are simple but fast, so they might be nice additions.

    Can you make an example?
    If you have a model named 'sudo_RealESRGAN2x_3.332.758_G.pth', rename it to '2x_sudo_RealESRGAN2x_3.332.758_G.pth'.
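    The point of the rename is that Hybrid reads the scale factor from the leading 'Nx_' prefix of the filename. Purely as an illustration of that naming convention (this is not Hybrid's actual code), the prefix can be parsed like this:

```python
import re

def scale_from_filename(name):
    """Read the scale factor from an 'Nx_...' model filename prefix.
    Returns None if the name does not start with e.g. '2x_' or '4x_'."""
    m = re.match(r"(\d+)x_", name)
    return int(m.group(1)) if m else None

print(scale_from_filename("2x_sudo_RealESRGAN2x_3.332.758_G.pth"))  # 2
print(scale_from_filename("sudo_RealESRGAN2x_3.332.758_G.pth"))     # None
```

    Without the prefix there is nothing for the GUI to parse, which is why the unrenamed file yields no scale factor.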

    And can i load in hybrid this "2x_LD-Anime_Compact_330k_net_g" ? Last time i just copied the .pth file in the same folder where the other were.
    You can load any model that is VSGAN-compatible by pressing the 'Select' button and then selecting the file.


    Cu Selur
    Last edited by Selur; 30th Apr 2023 at 10:17.
  6. Ah OK, I got it, thanks.

    OK, then I'll just try it.

    Now I only have to find that "2x_LD-Anime_Compact_330k_net_g" model. Maybe poisondeathray will link it.
  8. Which RealCUGAN model are you using?

    2x_LD-Anime_Compact_330k_net_g is probably not sharp or cheesy enough for your taste to use alone. It preserves more of the original characteristics. For your purposes, you probably need to mix it with RealCUGAN.

    To mix results, you can use overlay with opacity, or merge with weight. You can use a higher weight for RealCUGAN:
    http://www.vapoursynth.com/doc/functions/video/merge.html
  9. Originally Posted by poisondeathray View Post
    Which RealCUGAN model are you using ?

    2x_LD-Anime_Compact_330k_net_g is not probably sharp or cheesy enough for your tastes to use alone . It preserves more of the original characteristics . For your purposes , you probably need to mix it with RealCUGAN

    To mix results, you can use overlay with opacity, or merge with weight. You can use higher weight for RealCUGAN
    http://www.vapoursynth.com/doc/functions/video/merge.html
    This is actually a good question, because I'm quite confused about RealCUGAN models and settings. Maybe I can ask you something about that?

    I've read about: conservative, pro, se, and no-se. Do you have an idea what se and pro mean, i.e. what they actually do? And what does Sync Gap mean?

    I use model-se and "Very rough Sync", and -1 noise (so no denoise), because RealCUGAN's denoising makes everything that is already bad about RealCUGAN even worse.

    I found a fairly unknown piece of software called "squirrel". There I can use the conservative model, and it keeps RealCUGAN from drawing such thick lines. But the problem is it can't handle camera blur (I mean when something is out of the camera's focus it's blurred, and it should stay blurred, not look sharpened; with conservative in this software it partly doesn't stay blurred).

    And thanks, I will try to use merge with Hybrid.

    Can I merge two already-rendered videos together? So can I upscale the video once with RealCUGAN and once with the other model, then "import" both videos and merge them? Because I want to do the RealCUGAN upscale in a different program than Hybrid.
  10. I haven't used RealCUGAN much. You can test them and post your summary findings if you want

    If there is an unexpected issue between versions despite the same settings, then definitely report it.

    Can i merge 2 already calculated videos together?
    Yes, and if you do not have enough HW resources, sometimes you need to do that (export lossless versions of the videos to combine them later, i.e. divide up the steps).

    If you have lots of GPU mem, lots of resources, you can sometimes do everything "on the fly" - use 1 script to do everything
  11. Originally Posted by poisondeathray View Post
    I haven't used RealCUGAN much. You can test them and post your summary findings if you want

    If there is an unexpected issue between versions despite same settings - then report it definitely

    Can i merge 2 already calculated videos together?
    Yes, and if you do not have enough HW resources, sometimes you need to do that (export some lossless version(s) of a video to combine them later. ie. divide up the steps)

    If you have lots of GPU mem, lots of resources, you can sometimes do everything "on the fly" - use 1 script to do everything
    Can you tell me how I can make something lossless? What does that mean - the highest encoding quality?

    And my hardware is not bad, but I only have 12 GB of VRAM and 32 GB of RAM. But as I said, because (for now) I want to upscale with Waifu2x-Extension-GUI and then merge with Hybrid, I don't want to do everything in one step anyway (because then I would have to use Hybrid for everything).

    I will experiment a bit. I used the model you recommended now, and it looks quite good. It's sharper than SRMD 4n (which in my opinion is bad, because its denoising makes small lines disappear). Yes, it's not as sharp as RealCUGAN, but I like it. If you have more models like that, please tell me (maybe I should just try all the compact anime models on that model website; sadly some models are not downloadable).

    I will try this merge stuff and, if it works, I will post some snapshots.
  12. Hybrid only supports merging the selected resizer with one of the default resizers through the 'Weighted' sub-option; it does not support mixing RealCUGAN and RealESRGAN.
    If you want to do that, you would have to write your own Vapoursynth scripts.
    Sync Gap reduces the impact of image blocking; it's a trade-off between speed and quality.
    See: https://github.com/Kiyamou/VapourSynth-RealCUGAN-ncnn-Vulkan

    Ps.: You did try different settings for the '(De-)Noise' setting in RealCUGAN, right? (Depending on the RealCUGAN model and scale, different '(De-)Noise' settings are available.)
  13. Ah OK, yeah, maybe I'll try it. The commands at "http://www.vapoursynth.com/doc/functions/video/merge.html" don't look so difficult.

    I will try to find a guide on how to use Vapoursynth. Sadly I also have to use Avisynth, because sRestore does not work in Vapoursynth.

    And OK, according to this site I am using model se 2x, because I have all the denoise values available. So it has to be that one.
  14. Looks like the Real-CUGAN ncnn Vulkan version has some unresolved issues with lines:
    https://github.com/nihui/realcugan-ncnn-vulkan/issues/18

    This vapoursynth version is based on the ncnn Vulkan version
    https://github.com/Kiyamou/VapourSynth-RealCUGAN-ncnn-Vulkan

    Was that the main reason for using Waifu2x-Extension-GUI ?
    https://github.com/AaronFeng753/Waifu2x-Extension-GUI
    But it uses the Real-CUGAN-ncnn-vulkan version too...


    Do you have a single frame, a clear example of the line issue for testing ?
  15. Hmm, I have to search. Do you mean the problem that RealCUGAN makes lines thicker and blacker? Or what do you mean?

    Because that picture from GitHub is horrible. It's just completely worse than the non-NCNN version. The NCNN version looks less "normal"; it's also more "bling-bling", everything is "shining" there, while the non-NCNN version looks much better. But actually I have no reference, because he forgot to include a frame of the source. So I can't compare and can't tell what is wrong with the (black) lines.

    But now I have to find a non-NCNN GUI for RealCUGAN to test.

    Anyway: I used the Waifu2x tool because I discovered it years ago and it did the best job (there are also some other tools, but they're not supported anymore or something like that).
    And I use the Waifu2x GUI because it gives me 2.3 times the speed of Hybrid. I don't know why; I'll have to test that later.
    I also discovered the GUI "squirrel", but in all profiles it produces worse image quality. I have no idea why. Maybe it uses an older version of RealCUGAN?
  16. But it uses the Real-CUGAN-ncnn-vulkan version too...

    Do you have a single frame, a clear example of the line issue for testing ?
    Here are my two cents on this:
    a. Looking at https://imgsli.com/MTc0ODI1, it only really seems to be an issue with the 'no-se' model.
    b. Note that this effect can basically be removed by applying DeHalo_Alpha: https://imgsli.com/MTc0ODI2/2/1

    So you mean the problem, that RealCUGAN makes thicker and blacker lines? Or what do you mean?
    the title of the issue states: "ncnn version produces white color around line"

    Cu Selur
  17. For speed and quality differences: the default models and settings are probably different. Set everything to the same settings, then report back if there is still a difference.

    This version uses syncgap=2 (medium)
    https://github.com/Kiyamou/VapourSynth-RealCUGAN-ncnn-Vulkan

    But the ncnn original version (which Waifu2x-Extension-GUI also uses) uses syncgap=3 (fast)
    https://github.com/nihui/realcugan-ncnn-vulkan


    I would imagine that the CUDA ONNX version should be the fastest with an Nvidia card. Vulkan versions seem slower than CUDA or TensorRT versions for Nvidia cards, in general.
  18. @Selur: I don't know what that is. In your frame, the lines on the green vest of the grey-haired man disappear completely. Did you make that yourself? If yes, which episode is it? But as always: I don't see much difference with the filter. The lines have still disappeared in the non-se filtered version.

    @poisondeathray: I'm still not sure which line problem you mean. I only said that lines get drawn thicker by RealCUGAN than I think they should be when I look at the original video. I can show you some frames with small heads/objects where you can see the problem, because the lines are then too visible. But it's not such a minor flaw as in the picture you linked.

    But I need an hour.

    Edit: this picture from GitHub describes very well what I mean by "looks shiny, childish, cheesy, has more light, doesn't look 'real', comic-like". If that is really still a problem, then maybe I'll go for the non-NCNN version. The only problem is that I have to find a way to use it.

    I will post some pictures in a few minutes. But I'm not sure if it's the problem you want me to show.
    Last edited by Platos; 30th Apr 2023 at 13:34.
  19. Actually it's quite difficult to find an example.

    But I do not have those minor problems from the GitHub issue. Maybe this is already solved? Especially on big things like that, RealCUGAN normally has no problems. Its weakness is small, low-quality detail and art-style changes.

    I have two pictures where small people look bad because RealCUGAN makes the black lines look fat. I mean, I don't think it has anything to do with that GitHub picture, nor do I know if that problem still exists. I really tried to find an ultra-bad part, but I only found bad-looking ones. But I don't think it's a bug.

    But you can see that the image from the NCNN version looks overlit. This is something I also think about RealCUGAN, but I don't know if it's because of NCNN. I need a GUI for a non-NCNN version, because my knowledge of command-line tools is zero.

    Some years ago I had the problem that RealCUGAN couldn't detect camera blur, but that's been gone for a while now. Maybe because a profile changed in the software or something.

    https://imgsli.com/MTc0ODQz

    https://imgsli.com/MTc0ODQx

    And two other examples are in the main question in the first post under "thicker lines".

    I'm still hoping to find a model between RealCUGAN and LD-Anime Compact, because I don't think I'm capable of the command-line stuff.
  20. I was wondering about the differences between CUGAN implementations. Some examples seem to have more severe ringing and look more oversharpened. But the output might not be the same between the CUGAN implementations; maybe one is better. Within the same version (e.g. NCNN), though, you would expect the same results with the same settings.

    If you don't like thick lines, you can awarpsharp them, or mix models. Most anime models actually go in the reverse direction and thin the lines (the awarpsharp look).

    Unless you train the model yourself, you're not going to get one model that combines all those things; you have to adjust it with filters and scripts.
  21. Originally Posted by poisondeathray View Post
    I was wondering about the differences between CUGAN implementations. Some examples seem to have more severe ringing, look more oversharpened. But the output might not be the same between the CUGAN implementations. Maybe one is better. But within the same version (e.g NCNN), you would expect the same results with the same settings

    If you don't like thick lines, you can awarpsharp it, or mix models. Most anime models actual go the reverse direction and thin the lines (the awarpsharp look)

    Unless you train the model yourself, you're not going to get 1 model that combines all those things, you have to adjust it with filters and scripts.
    By implementations do you mean different GUIs?

    [Edit: No, it's completely the same; I had an aspect-ratio problem. So at least within the same GUI; with Hybrid it seems different.] But funny, I thought the same. It's even stranger: a year ago I upscaled some episodes that I now used for comparison with LD-Anime_Compact. But now I see that when I upscale exactly the same source with exactly the same settings, the result is different even with the same software (Waifu2x-Extension-GUI). (I documented the settings, but not the model, and I tried both models. But maybe the old GUI version shipped an old model version; I'm downloading old versions of the GUI now to test.)

    I tried LD-Anime Compact again and I like it even more now. It really does not do any of the things I don't like about RealCUGAN: it does not brighten the image, it does not make things look childish (which I think comes from the brightening), it does not change colors/textures a lot, it does not make lines thicker, and so on. The only downside is that it's not as clear as RealCUGAN.

    A difference I noticed: with RealCUGAN there is a huge difference between upscaling to 720p and 1080p. But with LD-Anime_Compact the difference between 720p and 1080p is not that big, and between 1080p and 2160p I see no difference. So you cannot just upscale to 2160p to get a picture as sharp as RealCUGAN at 1080p.

    And about lines: my experience is the exact opposite. RealCUGAN and RealESRGAN make lines thicker. And yes, I found out you can thin lines with aWarpSharp2, but it also has big downsides: it makes everything smaller. On big things that doesn't matter, but it matters on small or thin things like legs (far away), mesh, fingers, hair and that kind of stuff. It also "warps" away textures made of thin lines, for example on wood.

    Do you know another model similar to LD-Anime_Compact (also in speed, i.e. compact)?

    Edit about filters: I can use aWarpSharp2 to thin lines, but what is good for making lines sharper without making them "pixely" (or without big downsides like aWarpSharp has)? I mean, to compensate for LD-Anime not being that sharp. I tested most "sharpeners", and they tend to make a picture more "pixely" instead of less blurry, but not really sharper in my opinion. But maybe I have to try again now, after upscaling with LD-Anime-Compact.
    Last edited by Platos; 30th Apr 2023 at 19:43.
  22. No, I mean different implementations of RealCUGAN, the algorithm. The original one is here:
    https://github.com/bilibili/ailab/tree/main/Real-CUGAN

    The GUIs are just front ends; the command-line stuff in the background performs the actual conversions. Some are optimized for CUDA, some for Vulkan, some for CPU... And yes, there are some differences when using the same model and settings.

    I played with waifu2x-extension-gui and it uses the NCNN version. The denoise settings are on the "home" page, but the RealCUGAN settings are on a separate "engine settings" page. A very badly organized GUI, IMO.

    So the output should be the same in Hybrid and other GUIs that use the NCNN version if you match the settings. It's similar to the official RealCUGAN output as well in this case (SE, 2x upscale, 3x denoise).

    However, the NCNN version produces different output for some of the other models (pro models, SE no denoise) compared to the official RealCUGAN output. vsmlrt produces output similar to official RealCUGAN (not necessarily "better", just different). The official output seems sharper (too sharp IMO, I'd mix it), with thinner, converged lines. So there is definitely a discrepancy between NCNN and "official" in some cases. I'm looking at this because of the reported GitHub issue, and I can replicate the differences on some models.
    Last edited by poisondeathray; 30th Apr 2023 at 22:17.
  23. I would be interested to see this difference. If you have some scenes showing it, I would like to see them. Then I can judge whether I like the non-NCNN version more (to decide whether to keep searching for a GUI).

    In a few hours today I will upload 5-7 screenshots of 7 different output results (so 7 sliders per picture). And spoiler: I like the results very much, and they are better than RealCUGAN without further editing. I used RealCUGAN, DigitalFilm SuperUltraCompact, LD-Anime Compact and aWarpSharp2 - not just one of them; I combined them a bit and experimented. You will see.

    I also tried some other compact models. They are all not bad, but I will explain when I post the images. Some of them sadly crashed in Hybrid, like AniFilm Compact; I have to ask Selur. Maybe I downloaded the wrong one. But I also had some crashes with already-included models - another story.

    About 'no difference between GUIs': between Hybrid and Waifu2x-Extension-GUI there is definitely a difference. Maybe it's not because of RealCUGAN itself; maybe it's because of the settings Hybrid offers (like downsizer, resizer). But there is a difference. Maybe I'll do some screenshots too. Hybrid has better color accuracy while being worse with lines: it tends to make two lines out of one blurry line. The Extension GUI does that less.
  24. The 'alternative' compact models are not supported by VSGAN; see: https://github.com/rlaphoenix/VSGAN/issues/25
    Nothing Hybrid can do about that; ask the VSGAN developer to support them.
  25. Originally Posted by Platos View Post
    I would be interested to see this difference.
    I'll post some screenshots later.


    Hybrid has better color accuracy while beeing worse in lines. It tend to makes 2 lines out of 1 blurry. The Extension gui tends less to do that.
    Are you sure you're using the same settings? waifu2x-extension-gui and Hybrid both use the NCNN version; they should give the same results.

    Post the input image and the settings used. Don't use imgsli; just zip it up.



    Originally Posted by Selur View Post
    The 'alternative' compact models are not supported by VSGAN, see: https://github.com/rlaphoenix/VSGAN/issues/25
    Nothing Hybrid can do about that, ask the VSGAN developer to support them.
    It's too bad, because even Avisynth can use them (in ONNX format) with the avs-mlrt plugin (a partial port of vs-mlrt). Consider adding avs-mlrt or vs-mlrt.
  26. I'm wondering whether chaiNNer could be used to convert those alternative compact models to normal models... (okay, forget that: chaiNNer can't use pth files)
    I'm not planning to add avs-/vs-mlrt any time soon.
    Last edited by Selur; 1st May 2023 at 08:23.
  27. I will post a hybrid - waifu comparison of RealCUGAN later. And thanks. But why no imgsli ?

    I did now a compare of different Upscalers. Following Settings i have used:

    - Original
    - RealCUGAN, RealCUGAN then aWarpSharp2 128 threshold 8 depth
    - LD-Anime_compat, LD-Anime_compat + DigitalFilm SuperUltraCompact, LD-Anime_compat + DigitalFilm SuperUltraCompact + aWarpSharp2
    - LD-Anime_compat + RealCUGAN, LD-Anime_compat + RealCUGAN + aWarpSharp2

    So seven settings, and I always used threshold 128 and depth 8 with aWarpSharp2. The RealCUGAN model is always model-se, "very rough", and denoise -1 (i.e. no denoise).

    So that means I applied the different models to the same video, always one step at a time: first one model, then the next one in a separate encode.

    You can see that RealCUGAN produces some strange artefacts. When I use LD-Anime_Compact, there are practically no artefacts, but it's not as sharp. That's why I tried running other, sharper models after LD-Anime_Compact. And it worked (somewhat). But if you do that, the lines get thicker (thicker than if I use the sharp model alone). That's why I then tried aWarpSharp2 on top.

    The problem with the sharp models is that they make everything brighter than it should be, like the white teeth in the first picture. RealCUGAN, DigitalFilm SuperUltraCompact and Futsuu Anime Compact all brighten things up too much (on their own) for my taste (I tested Futsuu too, but it's not in the images I posted here).

    So: when I use RealCUGAN after LD-Anime_Compact, the lines get darker and sharper than with LD-Anime_Compact + DigitalFilm SuperUltraCompact, but the colors change (so it's a trade-off, depending on what you prefer). Still, the downside is smaller than when using RealCUGAN alone. If I use LD-Anime_Compact + DigitalFilm SuperUltraCompact + aWarpSharp2, the original look is preserved quite well, yet it's much sharper than LD-Anime_Compact alone.

    Also: LD-Anime_Compact + RealCUGAN is way better than RealCUGAN alone (you can see it in the pictures). While RealCUGAN alone produces a lot of artefacts, that doesn't happen when I run LD-Anime_Compact first (I should say that I deliberately picked the worst scenes, to show which upscaler does the best job in ugly scenes; "normally" RealCUGAN doesn't do a bad job).

    So that means: LD-Anime_Compact does (so far) the best job of NOT changing the original look AND not producing artefacts WHILE still making the picture a lot sharper. That means I can use LD-Anime_Compact as a kind of "pre-filter". For me it's a bit like restoration: after that, I have the best possible input for the models that sharpen things but also tend to produce artefacts.
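    The two-pass idea above (restoration model first, sharpening model second, each as its own encode) can be sketched as plain function composition. This is just a conceptual illustration; `clean_model` and `sharpen_model` are hypothetical stand-ins for LD-Anime_Compact and RealCUGAN, not real APIs:

    ```python
    # Conceptual sketch of the "pre-filter, then sharpen" chain.
    # The stand-in stages below just tag a string so the order of
    # operations is visible; real stages would transform frames.

    def chain(*stages):
        """Compose processing stages into one pipeline, applied left to right."""
        def run(frame):
            for stage in stages:
                frame = stage(frame)
            return frame
        return run

    # Dummy stages standing in for the real models:
    clean_model = lambda f: f.replace("noisy", "clean")   # restoration pass
    sharpen_model = lambda f: f + " (sharpened)"          # sharpening pass

    pipeline = chain(clean_model, sharpen_model)
    ```

    The point is only that order matters: the sharpener sees the cleaned frame, never the raw one, which is why the artefacts don't get amplified.
    
    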

    In the end: just have a look at the pictures and you will see.

    Here are the pictures:

    Picture 1: https://imgsli.com/MTc0ODk5
    Picture 2: https://imgsli.com/MTc0OTA3
    Picture 3: https://imgsli.com/MTc0OTEy
    Picture 4: https://imgsli.com/MTc1MDQ0
    Picture 5: https://imgsli.com/MTc1MDY4

    Which one do you like most? What do you think about these combinations? Do you think it's a good idea? Do you have any suggestions for how I can improve, or for other model combinations?

    PS: Yes, I know I could merge models. But then I'd have to use the command line, and for now I want to avoid that.

    Edit: @poisondeathray: If you want to test RealCUGAN NCNN vs. non-NCNN, you can also use my source pictures if you're interested. Then I could see the difference on something I know. But only if you have the time and feel like it.

    Here are the five source pictures from above, bundled in a zip file: https://uploadnow.io/f/ZYXKM3F
    Last edited by Platos; 1st May 2023 at 10:20.
  28. Since keeping background structures doesn't really seem to be a requirement, maybe use BasicVSR++ for cleaning and NNEDI3 for scaling.
  29. Originally Posted by Selur View Post
    I'm wondering whether chaiNNer could be used to convert those alternative compact models to normal models... (okay, forget that, chaiNNer can't use pth files)
    chaiNNer supports pth files.

    E.g. it can be used to convert original PyTorch pth models to ONNX for AviSynth:

    https://forum.doom9.org/showthread.php?t=184768


    Originally Posted by Platos View Post
    I will post a Hybrid vs. waifu2x comparison of RealCUGAN later. And thanks. But why no imgsli?
    imgsli is okay for comparison.

    The issue I have with imgsli (for debugging purposes) is that if the original is upscaled, it's no longer valid. You can't downscale without loss to test against the original image (unless it was done with a nearest-neighbor algorithm at 2x multiples). I prefer the actual direct inputs and direct outputs (not downscaled to 1440x1080 or something else) for debugging; otherwise you introduce other variables, such as additional scaling algorithms.
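    The "nearest neighbor at 2x multiples" exception can be checked directly: a 2x nearest-neighbor upscale just duplicates each source pixel into a 2x2 block, so sampling every second pixel recovers the original exactly, with no loss. A small numpy sketch (helper names are made up for illustration):

    ```python
    import numpy as np

    def nn_upscale2x(img):
        """Nearest-neighbor 2x upscale: each pixel becomes a 2x2 block."""
        return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

    def nn_downscale2x(img):
        """Sampling every second pixel exactly inverts the 2x duplication."""
        return img[::2, ::2]
    ```

    With any interpolating scaler (bilinear, bicubic, lanczos) the round trip is not exact, which is why a downscaled "original" is no longer valid for pixel-level debugging.
    
    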

    Which one do you like most? What do you think about these combinations? Do you think it's a good idea? Do you have any suggestions for how I can improve, or for other model combinations?
    It's really personal preference, something you have to experiment with for your own taste.



    PS: Yes, I know I could merge models. But then I'd have to use the command line, and for now I want to avoid that.

    It's not really "merging" the models. It's merging or mixing the output(s) of the model(s). It's essentially "compositing", as with layers.

    "Merging the models" would mean combining models to make a new model. That can be done for some types of models/architectures in something like chaiNNer. It doesn't always work well; sometimes the results are unexpected.
  30. @poisondeathray: Okay, but I don't just mean which one you like most; I also mean what your more experienced eye sees that I could improve. I also want to keep as much of the original look as possible.

    So, for example, a way to make LD-Anime_Compact look sharper. I did it now with the strategy above. Do you have another/better one (without merging, which is beyond my skills)?

    Or do you have an idea how I can solve this "over-lit" problem of all these sharp models? LD-Anime_Compact does not over-brighten, but the others do; if I could solve that, it would help me. And how can I thin out lines without making them shorter? When I use aWarpSharp2, it not only makes the lines thinner, it also makes them shorter: the end of each line gets "warped" too.
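    One post-processing idea for the over-brightening (not something suggested in the thread, just an assumption worth testing): since the pre-filtered frame has the brightness you want, you could clamp the sharp model's output so it can only exceed the reference by a small margin. A numpy sketch with a made-up helper name:

    ```python
    import numpy as np

    def limit_brightening(reference, sharpened, max_gain=10):
        """Cap how far the sharpened frame may brighten past the reference.

        reference: frame before the sharp model (e.g. the LD-Anime_Compact pass)
        sharpened: frame after it; both uint8 arrays of the same shape.
        Darkening is left untouched; only brightening is limited to max_gain.
        """
        ceiling = reference.astype(np.int16) + max_gain
        limited = np.minimum(sharpened.astype(np.int16), ceiling)
        return limited.clip(0, 255).astype(np.uint8)
    ```

    In a VapourSynth/AviSynth workflow the same idea would be expressed with a limiter or expression filter rather than numpy; this sketch only shows the math.
    
    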

    Originally Posted by Selur View Post
    Since keeping background structures seems to be not really a requirement, may be use BasicVSR++ for cleaning and NNEDI3 for scaling.
    Do you mean cleaning the original and then using NNEDI3, or cleaning after I've applied one of my combinations above?

    And what do you mean by "not really a requirement"? Where do background structures disappear in my pictures?