VideoHelp Forum


  1. I like TecoGAN; sadly there is no VapourSynth filter for it.
    users currently on my ignore list: deadrats, Stears555
  2. It can be included in VapourSynth scripts using the subprocess module: load video from clip.output(), then run the command line (or several commands in sequence). But I know what you mean; you'd need to implement it for your GUI, the classic workflow that uses clip.set_output().
    Last edited by _Al_; 1st Jun 2021 at 15:32.
  3. "hand lines" - did you mean the flickering or line thinning ?
    Just thinning.
  4. veai_artemis_hq+pp.mp4
    There's also that unpleasant "psychedelic trip" feel that VEAI produces in some cases.
    That case shows up in exactly the veai_artemis_hq+pp.mp4 video that poisondeathray uploaded.
  5. Another (funny) comparison video: Is TecoGAN better than Topaz Video Enhance AI?
    https://youtu.be/jFfmdboCHco
  6. Originally Posted by forart.it View Post
    Another (funny) comparison video: Is TecoGAN better than Topaz Video Enhance AI?
    https://youtu.be/jFfmdboCHco
    Topaz's biggest problem is plasticity, and the object being restored in this video is already very plastic in the original.
  7. Here's what I've tried to upscale hand drawn 80's anime, in order from worst to best:

    - Video2x
    practically unusable for a full movie, slow and cumbersome

    - Anime4K
    fast but hardly any better than simple bicubic upscale with post-processing, has a rather blurry oil painting look

    - Waifu2x Caffe
    decent results, well designed, comparably fast for what it achieves, doesn't artefact at all

    - Topaz VEAI
    decent product, sharper than Waifu2x, works better on some scenes than others, artefacting can be controlled with pre-processing in Hybrid and selection of models in VEAI

    - Cupscale GUI for ESRGAN
    super well designed by a great programmer, amazing handling of Python and other dependencies, free (MIT License), large selection of models

    Then within the ESRGAN family I've tried most of what's listed in the Manga / Anime subsection here:
    https://upscale.wiki/wiki/Model_Database

    My favorite is "Anime Oldies" (the first one, not the alternative), apparently based on 4xPSNR. The clarity and image quality of that model are astounding. Restoration of black anime lines is the best I've seen. On the flipside it shifts the colors, but I'm happy to revert that in FFmpeg: reduce brightness/gamma, shift the RGB balance from the yellowish tone back to the original. I wish there were a 2x version of this model, because the 4K image isn't really more detailed than one scaled to FHD, just slower to produce. For now, I'll take that compromise as well.
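    As an aside on that color correction: the brightness/gamma reduction and RGB rebalance can be expressed as per-channel lookup tables before handing the frames to FFmpeg. A minimal pure-Python sketch; the gamma/offset/scale values here are placeholders, not the ones I actually used:

    ```python
    def build_lut(gamma=1.0, offset=0, scale=1.0):
        """8-bit lookup table: gamma correction, then a per-channel
        scale and brightness offset (placeholder values, not the
        actual correction used on the clips)."""
        lut = []
        for v in range(256):
            x = (v / 255.0) ** gamma        # gamma correction
            x = x * scale + offset / 255.0  # channel scale + brightness offset
            lut.append(max(0, min(255, round(x * 255.0))))
        return lut

    # e.g. lift blue slightly to counter a yellowish cast:
    lut_b = build_lut(gamma=1.1, offset=-6, scale=1.04)
    ```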

    I attached some examples (with preliminary color correction for the Cupscale images). Look at the black lines in A-ko's hair that were very faint and almost gone; the model recovered them in a phenomenal way. Anime lines are a bit thinner and clearer, and everything is as crisp as it gets considering the source.

    The model seems to have added a very subtle aquarelle paper texture that adds to the artistic feel but probably won't survive the final h264 compression. The model also reduces halo artefacts and noise. Knowing that, I reduced the halos of the oversharpened source only halfway in pre-processing with Hybrid:
    clip = havsfunc.DeHalo_alpha(clip, rx=2.2, ry=2.2, darkstr=0.25, brightstr=0.50)

    Lastly, to those who think upscaling by neural networks is nothing but sharpening: that's totally not the case. I've tried most sharpening filters that come with Hybrid, including the line-darkening stuff. These simple filters work up to a point but aren't in the same league as a good neural net. Sharpening by filters eventually creates artefacts and increases noise; overdoing it kills the image. WarpSharp may distort the geometry of objects like rectangles or a silhouette on a dark background. Even with all that stuff applied and tuned, the final result won't get close to a good NN upscale.
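    The halo/overshoot problem with conventional sharpening is easy to demonstrate even on a 1-D signal: an unsharp mask pushes values past the original range at an edge, which is exactly where halos come from. A small illustrative sketch in plain Python:

    ```python
    def unsharp_1d(signal, amount=1.5):
        """Unsharp mask on a 1-D signal:
        sharpened = original + amount * (original - blurred)."""
        blurred = []
        for i in range(len(signal)):
            lo, hi = max(0, i - 1), min(len(signal), i + 2)
            blurred.append(sum(signal[lo:hi]) / (hi - lo))  # 3-tap box blur
        return [s + amount * (s - b) for s, b in zip(signal, blurred)]

    edge = [0, 0, 0, 100, 100, 100]  # a hard edge, values within 0..100
    sharpened = unsharp_1d(edge)
    # Around the edge the result overshoots below 0 and above 100;
    # clamped back into the valid range, that's the dark/bright halo.
    ```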
    Image Attached Files
  8. @Telekinetiker, great work, but what about upscaling non-anime stuff?
  9. Originally Posted by taigi View Post
    @Telekinetiker, great work, but what about upscaling non-anime stuff?
    Thanks. I'd follow a similar process for non-anime. See what Topaz can do on the source. See what the models in the ESRGAN database can do:
    https://upscale.wiki/wiki/Model_Database

    They have many categories and purposes listed there like real world, photographs, photorealism. I'd download a few, place them in my Cupscale's model folder and try them out. Cupscale is great software and a joy to use.

    You can also download the "pretrained models" from the above site, like RRDB_PSNR_x4.pth. These are the ones used by the researchers in their published papers. I think most of the base models are for real world images.

    Some models may artefact on a source with issues. In that case, I'd try to fix the issues in Hybrid as well as possible. Some models may even expect pixel-perfect input; for those, Cupscale has options to downscale the image first with various algorithms before the model upscales it. I'm not a fan of that process, though, as it tends to lose detail, but it can look good on some scenes.

    If you want, you can post a 20-second segment of something and I'll see what I can do. However, my main machine is busy for the next 3+ days upscaling anime, so it may take me a while to get to it.
  10. btw. I've got a Hybrid dev version and an add-on that allows using VSGAN in Hybrid.
  11. @Telekinetiker, this is a shorter segment of the thing I plan to upscale, thanks. Your advice is eagerly awaited.
    Image Attached Files
    Last edited by taigi; 24th Jun 2021 at 09:03.
  12. Originally Posted by taigi View Post
    @Telekinetiker, this is a shorter segment of the thing I plan to upscale, thanks. Your advice is eagerly awaited.
    I didn't like the Topaz models on this clip. Couldn't find a good model for Cupscale either. Pre-processing in Hybrid didn't seem to help.

    For now, I settled on Waifu2x Caffe and the 2x UpPhoto model with denoise level 1, TTA enabled. The end result is very slightly sharper, without artefacts, and slightly less noisy. The videos were made with a lanczos downscale and lossless NVENC. I only used 8-bit color, though; the other option in Waifu2x is 16-bit, which shifted the colors in an x264-encoded video (NVENC seems to be 8-bit only).
    Image Attached Files
  13. oh, so Waifu2x and UpPhoto are not free?
  14. @taigi: what gave you that impression?
  15. @Telekinetiker I plan to apply a denoiser after upscaling, so I don't expect the upscaler to be a denoiser too. What's the solution in that case?
  16. Originally Posted by taigi View Post
    @Telekinetiker I plan to apply a denoiser after upscaling, so I don't expect the upscaler to be a denoiser too. What's the solution in that case?
    Waifu2x lets you choose. I used "denoise & magnify"; you may want to use "magnify only". Other settings of interest: TTA looks just ever so slightly better but is slower, so you may want to leave it unchecked. Split size and batch size can be increased until Waifu2x crashes or you run out of video memory (it's CUDA-optimized); it will be faster if you tune these values. You can use the Sensors page of the GPU-Z utility to see how much VRAM you're using.
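    For intuition about split size: the frame is processed in tiles so each tile fits in VRAM, and a larger split size means fewer tiles per frame. A rough sketch of the tile count (illustrative only, not Waifu2x's actual internals, which also add overlap padding around each tile):

    ```python
    import math

    def tile_count(width, height, split_size):
        """Tiles per frame for Waifu2x-style splitting (ignoring the
        overlap padding real implementations add around each tile)."""
        return math.ceil(width / split_size) * math.ceil(height / split_size)

    # A DVD-resolution frame: a larger split size means fewer, bigger
    # tiles, i.e. less per-tile overhead but more VRAM per tile.
    print(tile_count(720, 480, 128))  # 24
    print(tile_count(720, 480, 256))  # 6
    ```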

    I see Waifu2x is integrated in Hybrid as well but haven't tried it there. Would be interesting to see a speed comparison vs. a standalone app like Waifu2x Caffe. It would certainly save you the need to work with intermediate image files (png/jpg) and things can go wrong there, like color shift.
    [Attachment: Waifu2x.png]
  17. What is "trans" in Magnification size?
  18. Originally Posted by taigi View Post
    What is "trans" in Magnification size?
    Probably short for "transformed". You'll want to stick with the 2.0 upscaling factor, as that's typically what the models were trained on. If you choose something else, it will internally still upscale by 2x and then downsample with an algorithm like bicubic.
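    A sketch of that assumed behaviour (a model of it, not Waifu2x's actual code): repeat the 2x model pass until the target factor is reached or exceeded, then downsample the remainder:

    ```python
    import math

    def plan_scale(factor):
        """Model a 2x-only upscaler reaching an arbitrary factor > 1:
        repeat the 2x pass until at or above the target, then
        downsample by the leftover ratio (e.g. with bicubic)."""
        passes = max(1, math.ceil(math.log2(factor)))
        post = factor / (2 ** passes)  # <= 1.0, i.e. a downscale
        return passes, post

    print(plan_scale(2.0))  # (1, 1.0)  -> native 2x, no resample
    print(plan_scale(1.5))  # (1, 0.75) -> one 2x pass, then 0.75x downscale
    print(plan_scale(3.0))  # (2, 0.75) -> two 2x passes, then 0.75x downscale
    ```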
  19. Damn, replied to an old post...

    ------

    Would be interesting to see a speed comparison vs. a standalone app like Waifu2x Caffe.
    My guess is that both methods should run at the same speed unless disk I/O is throttling.
  20. Member
    Join Date
    Sep 2009
    Location
    Brazil
    Search Comp PM
    Originally Posted by Selur View Post
    with no real way to adjust it
    Yes, that's the downside of machine-learning-based filtering. You need differently trained models; there are no parameters that let you change the outcome.
    So you would need tons of differently trained models to get more options, and you would still not have as much control as you do with normal algorithms.
    btw. I upscaled the suzie clip from 176x144 to 704x576 using VSGAN and a BSRGAN model and uploaded the file to my Google Drive in case someone is interested. (Taking into account that BSRGAN is meant for image, not video, upscaling, I think the result is impressive.)
    (Would be nice to have some additional ML-based video upscaling support in Vapoursynth, and a faster GPU to use it.)

    Cu Selur
    Friend Selur, excellent result in your example. Considering that an image like this, viewed with upscaling on a 4K TV from a distance of 1.30 to 2 meters, will not show the hair "comb" effect, it seems to be a great upscaling result. Congratulations.

    Regards,

    Druid®.
  21. First, I would like to thank everyone who tries to help people here on this forum, especially johnmeyer, Selur and lordsmurf; sorry if I didn't mention everyone, I know there are more.

    I'm a noobie at upscaling and my goal here is not commercial but personal; like some here, I'm not a video editor, OK!

    For the last two months I've been searching for software, free or paid, that lets me upscale shows (mostly) and a few movies to HD and/or Full HD, improving the definition beyond what the upscaling of my Samsung Crystal TU8000 50" TV can achieve.

    It's sad to see that memorable shows don't get released/remastered in Full HD or Ultra HD, and when they are, it's something like an SD Blu-ray.

    Like many here, I came across Topaz's VEAI, and I confess that at first I was quite optimistic. The software is not bad, but as everyone can see, over time I noticed a series of problems it causes in the final product: a plastification effect (I describe it as a gouache-painting effect), ghosting between scene changes, flickering on scene movement, among others that leave a lot to be desired.

    I fully agree with those who said that it is not possible to restore something that was lost when an image was downsized to fit on a DVD. It's like wanting to recover, from an MP3 whose frequencies were cut, the original WAV: as the name says, there was a CUT, and that cannot be recovered; it would have to be invented.

    Invented is perhaps the most accurate word, and this invention is the main problem of software like Topaz VEAI: if something doesn't exist in the compressed DVD product, it has to invent pixels where there are none, and that causes all sorts of problems that I and many others have found in it, at an expensive $200.00 price tag, which to me isn't fair.

    Now the biggest problem I'm facing: DVD shows are much more problematic than movies or amateur DV videos from digital cameras. I was only satisfied with one VEAI upscaling, using the Dione Interlaced Robust v4 preset (any other one doesn't work well at all): Roger Hodgson's 2007 NTSC Take The Long Way Home concert. It was very good, not perfect (I didn't even expect that), but the bad effects I've seen on other concert DVDs didn't happen on this one.

    I would really appreciate the support of the masters here, if you have time, to help me from basic to advanced: how can I get a better result from the 1999 NTSC Yanni Tribute show using AviSynth and associated programs? I know almost nothing about using AviSynth.

    Here are four parts of the video I've separated for a review (https://drive.google.com/drive/folders/1ZopA3bVBQHsiG1bzuj7Fv-uhgDd974Q1?usp=sharing)

    Part 001: This initial part has problems with all the presets for interlaced DV/TV/DVD videos except Dione Interlaced Robust v4, which doesn't leave the jagged effect (some call it combing) from deinterlacing.

    Part 002: The plastic effect, which I call gouache painting, appears at the end of this video, where the show's audience is shown; this annoying effect is visible on the distant faces in the audience. I also noticed it on all the distant faces of the musicians, distorting their features into grotesque ones. The Dione Interlaced Robust v4 preset improved this, but not by much.

    Part 003: In this part there is a shake in the Taj Mahal footage during the vertical downward movement that does not occur in the original DVD image, an effect also noticeable in some parts of the show in all movements: horizontal, vertical or diagonal.

    Part 004: This part shows an even grosser problem than the one reported in part 003: extreme image shake on a diagonal camera movement.

    I hope I can get some help to improve the final product of this upscaling, not using VEAI but scripts, plugins and whatever image-improvement techniques are possible for this show, because I wouldn't even know how it could be improved (beyond upscaling, e.g. how to improve the oversaturated color of most shows). Many who respond here know the RGB and YUV standards and so on very well; the terms and their effects are numerous and take years of study, common for professionals in the field. So please forgive my gross mistakes in terminology, and thank you in advance to everyone who can help me in this endeavor.


    Regards,

    Druid®.

    P.S. I would love to know what you thought of the upscaling of Yanni's Tribute DVD (NTSC) to 1080p with VEAI, where it generated problems visible to the professional eyes of the video-editing masters in this thread. I do see a certain color cast, though most shows abuse blue and red. Finally, could the other problems be solved with AviSynth and/or FFmpeg plugins?
    Last edited by DruidCtba; 29th Jun 2021 at 21:39.
  22. Just to be sure: Is the goal to use this for file playback/streaming or Blu-ray? (because when going for Blu-ray 1440×1080i (anamorph) you probably would not deinterlace)
  23. Originally Posted by Selur View Post
    Just to be sure: Is the goal to use this for file playback/streaming or Blu-ray? (because when going for Blu-ray 1440×1080i (anamorph) you probably would not deinterlace)
    No, friend Selur, it's for personal use. I just want to enjoy, on my Samsung TU8000 50" 4K TV, NTSC and/or PAL DVD shows that never came out on Full HD Blu-ray, with better quality than my TV's own upscaling, in MKV format; for this show that's 1440x1080, i.e. 4:3 Full HD. That's all that interests me, though since it's possible to improve other things besides upscaling, as I mentioned in the post scriptum, I'd also like to improve the colors, maybe in RGB or YUV; sorry if I'm saying something wrong, I know little about these parameters, but reading here and there I keep coming across the terms. With the help of the masters here, maybe I can improve the final image of these upscalings even more, since, as I said above, concerts have a very strong cast of blue and red, which often makes the image blurry and distorted. In short: how can I improve an upscaling with parameters beyond the upscaling proper? It's quite remarkable that with VEAI the final product is always clearer than the original, and that works well on shows; I don't know if it also applies to movies, as I haven't tested movies or series yet.

    Regards,

    Druid®.
  24. Okay, here's a quick try on the first clip, using:
    - QTGMC(preset='slow') for deinterlacing; since hardware compatibility isn't the problem I would go for bobbing
    - a bit of cropping
    - DFTTest for denoising
    - NNEDI3 for resizing
    - CAS(sharpness=0.850) for sharpening
    - some line darkening for a bit more contrast
    - adding borders
    no 'special' filtering.
    Code:
    # Imports
    import os
    import sys
    import ctypes
    # Loading Support Files
    Dllref = ctypes.windll.LoadLibrary("I:/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
    import vapoursynth as vs
    core = vs.get_core()
    # Import scripts folder
    scriptPath = 'I:/Hybrid/64bit/vsscripts'
    sys.path.append(os.path.abspath(scriptPath))
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SharpenFilter/CAS/CAS.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DenoiseFilter/NEO_FFT3DFilter/neo-fft3d.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/EEDI3.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/NNEDI3CL.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/temporalsoften.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/scenechange.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
    # Import scripts
    import edi_rpow2
    import havsfunc
    # source: 'C:\Users\Selur\Desktop\Yanni - Tribute (1997 DVD-9 720 x480)_0001.mkv'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: top field first
    # Loading C:\Users\Selur\Desktop\Yanni - Tribute (1997 DVD-9 720 x480)_0001.mkv using LWLibavSource
    clip = core.lsmas.LWLibavSource(source="C:/Users/Selur/Desktop/Yanni - Tribute (1997 DVD-9 720 x480)_0001.mkv", format="YUV420P8", cache=0, prefer_hw=0)
    # making sure input color matrix is set as 470bg
    clip = core.resize.Point(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # setting field order to what QTGMC should assume (top field first)
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=2)
    # Deinterlacing using QTGMC
    clip = havsfunc.QTGMC(Input=clip, Preset="Slow", TFF=True, opencl=True) # new fps: 59.94
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    # cropping the video to 708x480
    clip = core.std.CropRel(clip=clip, left=8, right=4, top=0, bottom=0)
    # denoising using DFTTest
    clip = core.dfttest.DFTTest(clip=clip)
    # resizing using NNEDI3CL
    clip = edi_rpow2.nnedi3cl_rpow2(clip=clip, rfactor=4, nsize=3, nns=4)
    # adjusting resizing
    clip = core.fmtc.resample(clip=clip, w=1416, h=1080, kernel="lanczos", interlaced=False, interlacedd=False)
    # contrast sharpening using CAS
    clip = core.cas.CAS(clip=clip, sharpness=0.850)
    # letterboxing 1416x1080 to 1920x1080
    clip = core.std.AddBorders(clip=clip, left=252, right=252, top=0, bottom=0)
    # adjusting output color from: YUV420P16 to YUV420P10 for x265Model (i420@8)
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, range_s="limited")
    # set output frame rate to 59.940fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)
    # Output
    clip.set_output()
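    As a worked check of the numbers in the script above, the 1416 width and the 252-pixel borders follow from the crop and the assumed 8:9 NTSC pixel aspect ratio:

    ```python
    from fractions import Fraction

    # Source after cropping: 720 - 8 - 4 = 708 pixels wide, 480 high.
    cropped_w, cropped_h = 720 - 8 - 4, 480
    par = Fraction(8, 9)  # assumed 4:3 NTSC DVD pixel aspect ratio

    # Width at square pixels, scaled to the 1080-line target (kept mod-2):
    display_w = cropped_w * par                             # 708 * 8/9
    target_w = round(display_w * 1080 / cropped_h / 2) * 2  # -> 1416

    # Letterbox borders needed to reach 1920x1080:
    border = (1920 - target_w) // 2                         # -> 252 per side

    print(target_w, border)  # 1416 252
    ```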
    Cu Selur
    Image Attached Files
  25. Originally Posted by Selur View Post
    Okay, here's a quick try .....
    You really make it easier, revealing the scripts and all that how-to for VapourSynth users. Thank you!
  26. Excellent work, my friend Selur; your result was very good, congratulations.

    How would I run this script, friend Selur? Please give me a step-by-step.

    I have Hybrid and I know how to work with it for deinterlacing, but I don't know how to run VapourSynth scripts.

    I tried these days to use MeGUI for this, loading some .avs scripts into it (I had AviSynth 2.6 installed), but a series of errors appeared because it couldn't find certain commands, so I gave up, thinking I would need many hours to learn.

    They also say you can't keep installing AviSynth versions indiscriminately. Is this true? Which version should I install, and how, so that I can work with it smoothly, as I believe you do, friend Selur?

    If you allow me to express my opinion about your excellent work versus the final product of VEAI: I noticed that VEAI still shows the images, mainly the faces and the bricks of the Taj Mahal towers in the back, with more definition. However, as I am a noobie, it could be that this extra definition is what I mentioned above, VEAI trying to create something that was lost from the original show tapes when they were compressed onto DVD, i.e. VEAI tried to GUESS and INVENTED/CREATED those pixels. Or would it be possible to get more definition using AviSynth?

    Regards,

    Druid®.

    P.S. Here's the Yanni - Tribute 480i show, in full, for anyone wanting to test upscaling via AviSynth: https://drive.google.com/file/d/1T3FeD5oOOxaffMUeJ_IFgi6jQ0xLaZXQ/view?usp=sharing
    Last edited by DruidCtba; 30th Jun 2021 at 17:54.
  27. Notice here, friend Selur, how the harp above the violinist's head at the back is steadier and more defined in the VEAI output with the Dione Interlaced Robust v4 preset.

    [Attachment 59643]

    Compared to your script:

    [Attachment 59644]

    Regards,

    Druid®.
  28. Notice here, friend Selur, how the harp above the violinist's head at the back is steadier and more defined in the VEAI output with the Dione Interlaced Robust v4 preset.
    I think that can probably be adjusted; I didn't really tweak anything, just enabled a few things I would start with.
    The main advantages of Avisynth/Vapoursynth over ML-based filtering are:
    a. the possibility to tweak settings (+ color control; TV vs. PC scale and color matrix)
    b. no need for huge amounts of intermediate PNG files, depending on the tool (which can cause color issues, depending on what the tool expects and how RGB<>YUV conversions are done)
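    Point b. is easy to demonstrate: if a tool decodes BT.601 (SD/DVD) material with BT.709 coefficients, every intermediate PNG carries a color error. A sketch of how far the two matrices disagree on luma alone (assuming full-range RGB):

    ```python
    def luma(r, g, b, coeffs):
        """Luma from full-range RGB using (Kr, Kb) matrix coefficients."""
        kr, kb = coeffs
        kg = 1.0 - kr - kb
        return kr * r + kg * g + kb * b

    BT601 = (0.299, 0.114)    # SD content (DVDs)
    BT709 = (0.2126, 0.0722)  # HD content

    # The same saturated red pixel lands on clearly different luma values:
    y601 = luma(255, 0, 0, BT601)  # ~76.2
    y709 = luma(255, 0, 0, BT709)  # ~54.2
    # A tool that converts with the wrong matrix bakes this
    # error into every intermediate image it writes.
    ```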

    They also say you can't keep installing AviSynth versions indiscriminately. Is this true? Which version should I install, and how, so that I can work with it smoothly, as I believe you do, friend Selur?
    By default Hybrid uses Vapoursynth.
    You can switch between Avisynth and Vapoursynth usage by changing "Filtering->Support" (lower right corner).
    You can configure filters for Vapoursynth analogously to how you would for Avisynth, under "Filtering->Vapoursynth".
    And you can change the filter order under 'Filtering->Vapoursynth->Misc->Filter Order/Queue'.
    (If you understand more about Vapoursynth you can also use the 'Filtering Queue', which allows using most filters multiple times.)
    For detailed setting adjustment I would also recommend enabling:
    - "Filtering->Vapoursynth->Preview->Split View", set to 'interleaved'
    - "Filtering->Vapoursynth->Filter view"
    and opening the script and filtered output via "Vapoursynth Preview" (lower right corner).

    What I did is:
    • Start Hybrid
    • load input file
    • made sure Hybrid uses Vapoursynth (setting "Filtering->Support" to "Vapoursynth")
    • made sure the Preview settings are as mentioned above (Split View + interleaved + Filter View)
    • I also usually have both 'Filtering->Vapoursynth preview' and 'Filtering->Vapoursynth->Script view' enabled (both in the lower right corner)
    • configured the deinterlacing
      • set "Filtering->(De-)Interlace/Telecine->QTGMC Vapoursynth->Preset" to "Slow"
      • enabled "Filtering->(De-)Interlace/Telecine->QTGMC Vapoursynth->Bob" for bobbed output
      • enabled "Filtering->(De-)Interlace/Telecine->QTGMC Vapoursynth->OpenCL" for a bit of GPU acceleration
      • did not tweak any additional settings for denoising and sharpening
    • enable cropping (Crop/Resize->Base->Picture Crop)
    • start crop detection (Crop/Resize->Base->Picture Crop->Auto crop)
    • tell Hybrid that the output should have a PAR of 1:1, as is customary for HD content
      • enable "Crop/Resize->Base->Pixel Aspect Ratio (PAR)->Convert output to PAR"
      • set "Crop/Resize->Base->Pixel Aspect Ratio (PAR)->Convert output to PAR" to "Square Pixel"
    • adjust the resizing resolution:
      • setting "Crop/Resize->Base->Picture Resize->Auto adjust" to 'width' (since I want to specify the height)
      • setting "Crop/Resize->Base->Picture Resize->Target resolution->Height" to 1080
    • Adjusted the letterboxing to add black border to the output to reach the target resolution (1920x1080) that I wanted
      • enable letterboxing (Crop/Resize->Base->Letterbox)
      • set out resolution (Crop/Resize->Base->Letterbox->Width to 1920, Crop/Resize->Base->Letterbox->Height to 1080)
    • configure the resizer:
      • enable "Filtering->Vapoursynth->Resize->Resizer"
      • set "Filtering->Vapoursynth->Resize->Resizer" to "NNEDI3"
      • adjusted the NNEDI3 settings under "Filtering->Vapoursynth->Resize->Resizer" a bit (enable GPU, change Neighbourhood and Neurons count)
    • enabled DFTTest as denoiser (without tweaking any settings)
    • enabled CAS (=contrast adjusted sharpening; enabled "Filtering->Vapoursynth->Sharpen->CAS") and set "Filtering->Vapoursynth->Sharpen->CAS->Sharpness" to "0.85"
    • moved the sharpening filter below the Resize filter under 'Filtering->Vapoursynth->Misc->Filter Order/Queue'
    • I then used the 'Vapoursynth Preview' to check the results by flipping a bit between the results and since I didn't see any real problems I kept the settings to get things started here.
    • configured the x265 encoder (set the Preset to slow and applied it)
    • set the output, created the job queue entries and started the job queue processing

    Cu Selur

    Ps.: if you run into a bug let me know and I can send you a link to my current dev version, chances are good that I might have fixed it already or that I can fix it.

    PPs.: using Avisynth is basically the same in Hybrid you just need to:
    a. set Filtering->support to Avisynth
    b. switch 'Config->Internals->Avisynth->Avisynth type' to 64bit if you want to use Avisynth 64bit.
    c. adjust the filters under 'Filtering->Avisynth' instead of 'Filtering->Vapoursynth'
    Last edited by Selur; 30th Jun 2021 at 22:20.
  29. Perfect, friend Selur, your didactic approach is impeccable; you can put yourself in the place of a neophyte/apprentice/noobie. I can't compare your step-by-step explanation with anyone else's here on the forum; it's simply motivating and spectacular. I wish every professional we find in the forums, who are very good and solve the doubts of many, were like you, because a large portion of us, like me, cannot follow the explanations of professionals without at least average knowledge. Congratulations, and may everyone have this sensitivity you have to put yourself in the shoes of those who have doubts and need guidance at a beginner's level.

    I think that can probably be adjusted; I didn't really tweak anything, just enabled a few things I would start with.
    The main advantages of Avisynth/Vapoursynth over ML-based filtering are:
    a. the possibility to tweak settings (+ color control; TV vs. PC scale and color matrix)
    b. no need for huge amounts of intermediate PNG files, depending on the tool (which can cause color issues, depending on what the tool expects and how RGB<>YUV conversions are done)
    One thing I would like to know after your nice and detailed explanation: what are the commands I would need to test to improve the sharpness of the image with AviSynth and/or VapourSynth?

    Regards,

    Druid®.

    P.S. I'm going to do some tests here, my friend Selur; with your guidelines I think I'll now be able to get started in this AviSynth and VapourSynth universe, thank you very much. If there's any problem, I'll come back here to clear up my doubts.
  30. One thing I would like to know after your nice and detailed explanation: what are the commands I would need to test to improve the sharpness of the image with AviSynth and/or VapourSynth?
    In Hybrid there are quite a few sharpeners for Avisynth and Vapoursynth.
    Avisynth sharpeners are under "Filtering->Avisynth->Sharpen->Sharpening": you simply enable that, select a sharpening filter and try its settings while comparing the filtered output with the original in the Avisynth Preview (analogous to Vapoursynth: Filter view plus setting "Filtering->Avisynth->Misc->Preview->Misc->Placement" to "interleaved" might help). Also, as with Vapoursynth, you can change the filter order.
    In Vapoursynth you can follow the steps I mentioned before and simply try another filter under "Filtering->Vapoursynth->Sharpen".

    When you start with Vapoursynth/Avisynth, I recommend also looking at the scripts Hybrid creates (Avisynth Script View / Vapoursynth Script View) to understand what's happening.
    To really get great results with Vapoursynth/Avisynth you will have to understand what is happening, and maybe even start writing your own scripts, but with Hybrid and similar tools it should be possible to at least grasp some of the possibilities that are out there.


    Cu Selur

    Ps.: for a general understanding of what filters, sharpeners & co. do, and where their limits currently are, it is often helpful to look at explanations of image processing in general, not just video.


