VideoHelp Forum
Page 10 of 20
Results 271 to 300 of 576
  1. @Selur: Original VOY credits demuxed here

    https://1drv.ms/v/s!AphTLFRW13WMkC8tc95eDg74ju7Q?e=19wcEf
    I hope you also have a Topaz output for this, since none of the screenshots you posted so far are from this scene...
    Assuming you do: which frame should I make screenshots of?
    With gamma=1.3, CAS 0.8 and NNEDI3, I get:
    (frame 73)
    used script:
    Code:
    # Imports
    import os
    import sys
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Import scripts folder
    scriptPath = 'I:/Hybrid/64bit/vsscripts'
    sys.path.append(os.path.abspath(scriptPath))
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/NNEDI3CL.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SharpenFilter/CAS/CAS.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/DGDecNV/DGDecodeNV.dll")
    # Import scripts
    import edi_rpow2
    # source: 'C:\Users\Selur\Desktop\VTS_02_1-Sample.demuxed.m2v'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine (soft)
    # Loading C:\Users\Selur\Desktop\VTS_02_1-Sample.demuxed.m2v using DGSource
    clip = core.dgdecodenv.DGSource("E:/Temp/m2v_66cb9e8b2008b8011cb1d0272f9b2cc3_853323747.dgi",fieldop=2)
    # making sure input color matrix is set as 470bg
    clip = core.resize.Point(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 29.97
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip)# new fps: 23.976
    # Color Adjustment using Levels on YUV420P8 (8 bit)
    clip = core.std.Levels(clip=clip, min_in=16, max_in=235, min_out=16, max_out=235, gamma=1.30, planes=[0])
    # cropping the video to 720x476
    clip = core.std.CropRel(clip=clip, left=0, right=0, top=4, bottom=0)
    # contrast sharpening using CAS
    clip = core.cas.CAS(clip=clip, sharpness=0.800)
    # resizing using NNEDI3CL
    clip = edi_rpow2.nnedi3cl_rpow2(clip=clip, rfactor=4, nns=0)
    # adjusting resizing
    clip = core.fmtc.resample(clip=clip, w=1452, h=1080, kernel="lanczos", interlaced=False, interlacedd=False)
    # adjusting output color from: YUV420P16 to YUV420P10 for x265Model (i420@8)
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, range_s="limited")
    # set output frame rate to 23.976fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    # Output
    clip.set_output()
    No ML used so far.
    (no gamma or special resizer) Simple Spline16 + CAS 0.8:

    -> CAS on its own should be enough to help with that source quite a bit.
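    A CAS-only pass is tiny; a minimal VapourSynth sketch of that idea (assuming 'clip' was already loaded, IVTC'd and cropped as in the full script above; whether CAS runs before or after the resize is my assumption):

    ```python
    # Minimal "Spline16 + CAS 0.8" sketch; assumes 'clip' was already loaded,
    # IVTC'd and cropped earlier in the script, as in the full script above.
    import vapoursynth as vs
    core = vs.core
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SharpenFilter/CAS/CAS.dll")
    # plain spline16 resize, then contrast-adaptive sharpening
    clip = core.resize.Spline16(clip, width=1440, height=1080)
    clip = core.cas.CAS(clip, sharpness=0.8)  # sharpness in 0.0..1.0
    clip.set_output()
    ```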
    Just using Waifu2x:

    Just using SRMD:

    VSGAN-BSRGAN (which, in my opinion, isn't really suited for CG content the way it is usually trained):

    VSGAN-UniversalUpscaler-Detailed_155000_G:
    Last edited by Selur; 28th Jul 2021 at 07:44.
    users currently on my ignore list: deadrats, Stears555
    Captures & Restoration lollo (Italy; joined Jul 2018)
    Your results using the CAS filter are excellent.

    Unfortunately, using it I was not able to get an improvement compared to LSFmod (on the left of the comparison pictures).
    It may depend on the nature of the video, but looking at your output, I will give CAS another try.

    "progressive" frame
    [Attachment: ufo_sII2a_spot_amtv_2_cut_v19_v27_modeA.png]

    interlaced frame
    [Attachment: ufo_sII2a_spot_amtv_2_cut_v19_v27_modeC_Nnedi3.png]
    @lollo: your source has totally different characteristics, so I'm not surprised that CAS doesn't help there, especially if you filtered the source a lot beforehand.
    If you can share a short, unprocessed sample I can take a look at it, but from a first impression of the screenshots, that source was simply smoothed to 'near death'.

    Cu Selur
    lollo:
    Raw S-VHS capture attached. Consider that the original DVB-S broadcast was bad quality.
    Thanks for your time!

    ufo_sII2a_spot_amtv_2_cut_sample.avi
    Okay, I see the problem. The main challenge here is finding a deblocker that does not crush all of the details.
    Using CAS after such aggressive deblocking/smoothing won't work.
    Also, Waifu2x really has problems with this source; it creates some strange artifacts:

    Conventional filtering like:
    Code:
    # imports needed by this snippet (Hybrid normally adds these)
    import adjust
    import mcdegrainsharp
    # Color Adjustment
    clip = adjust.Tweak(clip=clip, hue=0.00, sat=1.20, cont=1.00, coring=True)
    # applying deblocking using DeblockPP7
    clip = core.pp7.DeblockPP7(clip=clip, qp=5.00)
    # cropping the video to 684x556
    clip = core.std.CropRel(clip=clip, left=12, right=24, top=8, bottom=12)
    # removing grain using MCDegrain
    clip = mcdegrainsharp.mcdegrainsharp(clip=clip)


    is probably more suited to this than something like

    tweak + VSGAN-lollypop:

    or tweak + VSGAN-BSRGAN:


    Just for viewing pleasure, adding some texturization instead of denoising & deblocking might provide a better experience:
    tweak + VSGAN-nmkd-h264Texturize:


    All in all, I would recommend sticking with conventional filtering for such content.
    All model-based AI filtering will probably smooth way too heavily on such sources, and you will end up with an ugly plastic look.

    --
    From what I gather, what Topaz does is:
    a. gather some data about the source (resolution etc., and probably some statistical data to guess the general nature of the source)
    b. depending on the gathered data, download some model(s) to the client (c:\Users\--username--\AppData\Roaming\Topaz Labs LLC\Video Enhance AI\models\), apply those models, and then do some basic filtering.
    My hope is that they train models themselves and don't just rely on free models (i.e.: https://upscale.wiki/wiki/Model_Database).
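    That cache-on-demand behaviour is easy to inspect locally; a small sketch (the helper name is mine, the directory is the one quoted above) that lists whatever models Topaz has downloaded so far:

    ```python
    import os

    def inventory_models(models_dir):
        """Return a sorted list of model files cached in models_dir (empty if absent)."""
        if not os.path.isdir(models_dir):
            return []
        return sorted(
            name for name in os.listdir(models_dir)
            if os.path.isfile(os.path.join(models_dir, name))
        )

    # e.g. inventory_models(os.path.expandvars(
    #     r"%APPDATA%\Topaz Labs LLC\Video Enhance AI\models"))
    ```

    Running it before and after processing a clip would show which models a given preset pulled down.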

    Cu Selur
    Last edited by Selur; 28th Jul 2021 at 09:04.
    lollo:
    Impressive!

    Thanks a lot; you confirmed the results of my experiments. I was not checking all the resizers (Waifu2x, VSGAN, etc.) because I do not have deep expertise with them, so your contribution is precious and very much appreciated.

    Your recommended processing is very similar to mine (MCDegrainSharp versus TD2 + LSFmod); I am happy to have such a confirmation. I will try DeblockPP7 as well.

    Edit: on top of that, I am also curious to experiment with an approach like "conventional filtering" + VSGAN, to compare against "conventional filtering" + nnedi3 and "conventional filtering" + TVEAI; I will do it on my own...
    Last edited by lollo; 28th Jul 2021 at 09:32.
  7. Originally Posted by lollo View Post
    https://imgsli.com/NjI5MDI

    AviSynth-alone on the left, Topaz on the right. I think this output is quite good. TVEAI is cleaning up noise, but it doesn't offer an improvement in terms of detail.
    I agree. However, I do not know if you used some noise reduction for the AviSynth part. If your TVEAI model uses some noise reduction, you should also have some on the right part for a better comparison.
    I appended your script directly to my own pre-processing script, so it benefited from the same denoising I do in AviSynth. I didn't try to perform additional denoising in AviSynth when I added your script because I've had problems with not leaving *enough* noise in output and winding up with something that looked a bit plastic-y as a result. I denoise with QTGMC but reinject grain and noise with NoiseRestore=0.5 and GrainRestore=0.25.

    https://imgsli.com/NjI5OTI

    I resized my output in Paint.net so that I could do an apples-to-apples comparison. We get to a pretty similar place. I don't know why the content gets darker when I resize.


    Your comment about additional noise reduction got me curious, though, so I ran a test script without my own NoiseRestore and GrainRestore arguments. Interesting results. Visually, they aren't much different. Removing the noise and grain injection slightly increases the errors in certain scenes compared with having them there, which I find interesting. Injecting the extra grain and noise in QTGMC definitely helps Topaz, however. It does not help newer versions of Topaz as much as it helps the older ones, but background starfields can be brightened by injecting noise into the video before processing it in TVEAI.
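    For anyone replicating that noise/grain reinjection in VapourSynth rather than AviSynth, havsfunc's QTGMC port exposes the same parameters; a sketch using the values quoted above (source loading omitted, and the field order is an assumption):

    ```python
    # QTGMC with partial noise/grain reinjection, as described above.
    # Assumes 'clip' is an interlaced YUV clip loaded earlier in the script.
    import havsfunc

    clip = havsfunc.QTGMC(
        Input=clip,
        Preset="Slow",
        TFF=True,           # assumption: top field first; match your source
        NoiseRestore=0.5,   # put back ~50% of the removed noise
        GrainRestore=0.25,  # put back ~25% of the removed grain
    )
    ```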
  8. @Selur,

    I'd like to know how to get those other AI models up and running. I experimented with building my own AI upscaler with help from a co-worker who has written about AI. I'd like to test options besides Topaz but had limited luck with doing so.

    I have not used VapourSynth before, but I will try to get it running to duplicate your script / output. When you created those comparison frames, is that my output on the left or are you pulling from a different source? I'll upload an M2V of this shot as well, if you need one. I pull footage from a variety of places to show that I'm not cherry-picking; I have upscaled samples of everything in one state or another. I've processed episodes like "Sacrifice of Angels" several thousand times.

    I'd like to talk to you about how to set up various AI models for comparison purposes if you are willing to have the conversation.

    My name for the sequence you posted with the USS Majestic firing in profile is "Second Fleet Engagement," or SFE. I'm happy to use that section as a comparison clip. I'll upload the M2V.
    "Is that my output on the left or are you pulling from a different source?"
    In the script I first load the source, IVTC and crop it.
    The left side is the output of that, resized with Bicubic to the target resolution.
    The right side is that output with additional filtering applied.

    about Vapoursynth:
    I wrote a small step-by-step guide in my forum on how to set up Vapoursynth with VSGAN (see: https://forum.selur.net/thread-1858.html), which also explains to folks why I'm not integrating it into normal Hybrid. (The PyTorch dependency is 5.4 GB, plus ~600 MB for some of the models from https://upscale.wiki/wiki/Model_Database.)

    I also made an 'addOn' for Hybrid (Windows only, since I'm lazy) which, due to its size, is not part of the usual package; it allows using VSGAN in Hybrid. I used a Hybrid version with that addOn to make the screenshots and generate the code shown.

    Cu Selur

    Ps.: I sent you a link to a Hybrid dev version and the addOn I used.
    PPs.: you might also want to read [INFO] *hidden* Hybrid options,... and [INFO] About profiles and saving settings,... in case you use Hybrid.
    Last edited by Selur; 28th Jul 2021 at 11:30.
  10. https://1drv.ms/v/s!AphTLFRW13WMkDuaQMuycZCsvjz1?e=TB2Qsw

    Full "SFE-1" section. It's about two minutes in total, covering both people and CGI.
    Since I was just playing around with that clip and DPIR, I created a too-smooth version using this script: https://pastebin.com/SWaGgrNu










    If you really want to filter that source, you should filter the CG and live-action scenes separately.
    That said, when going for a one-size-fits-all approach, I would probably go with something like:
    Code:
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip, PP=7, slow=2)
    clip = core.tivtc.TDecimate(clip=clip)# new fps: 23.976
    from vsgan import VSGAN
    # adjusting color space from YUV420P8 to RGB24 for vsVSGAN
    clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
    # resizing using VSGAN
    vsgan = VSGAN("cuda")
    model = "I:/Hybrid/64bit/vsgan_models/1x_DeJpeg_Fatality_PlusULTRA_200000_G.pth"
    vsgan.load_model(model)
    clip = vsgan.run(clip=clip)
    # adjusting resizing to hit target resolution 
    clip = core.fmtc.resample(clip=clip, w=1440, h=1080, kernel="lanczos", interlaced=False, interlacedd=False)
    # contrast sharpening using CAS
    clip = core.cas.CAS(clip=clip, sharpness=0.800)
    No special resizer; this mainly uses "1x_DeJpeg_Fatality_PlusULTRA_200000_G" to get rid of a lot of artifacts, plus some CAS.









    But there are tons of other filters out there.
    Improving the CG scenes is easy.
    I think the main problem is filtering the live-action scenes, since they are riddled with artifacts.
    So if there is a Blu-ray release with better-scanned live-action scenes, going from the Blu-ray to 4K is probably a lot easier (though probably slower due to the resolution,..).

    Cu Selur

    Ps.: doing some masked filtering might also help to get a bit more detail without the noise.
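    That masked-filtering idea can be sketched with std.MaskedMerge: denoise heavily everywhere, then merge the original back in on edges so detail survives (the Sobel-based mask recipe is my illustration, not something from this thread):

    ```python
    # Masked filtering sketch: strong denoising except where the edge mask fires.
    # Assumes 'clip' (YUV) and a heavily filtered 'denoised' clip of the same
    # format already exist earlier in the script.
    import vapoursynth as vs
    core = vs.core

    edges = core.std.Sobel(clip, planes=[0])     # luma edge mask
    edges = core.std.Maximum(edges, planes=[0])  # grow the mask slightly
    # where the mask is bright, keep the original; elsewhere use the denoised clip
    clip = core.std.MaskedMerge(denoised, clip, edges, planes=[0])
    ```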
  13. Selur,

    There is no Blu-ray release. That's actually why I'm doing this. I've spent the last 17 months writing about my efforts to create a better-looking version of Star Trek: Deep Space Nine, so that people who wanted to create a better version of the show for themselves could do so. My most recent tutorial is here, if you care to check.

    You have shown some very interesting output above. I've spent a lot of time today playing around with Hybrid and I'll see if I can get these scripts running.

    One of my project goals was to give people a simple script they could apply to every episode of Deep Space Nine (well, every episode after Seasons 1 & 2) and expect to see improvement. The tradeoff is that the output is not as good as if the episodes were handled with more specific tuning. I am willing to do more specific tuning in the version I am building for myself.
  14. Selur,

    I loaded the script above in Filtering --> VapourSynth --> Custom, inserting it before the end. It returns the following error: "AttributeError: No attribute with the name tivtc exists. Did you mistype a plugin namespace?"


    EDIT: I cleared the error above by deleting the call to TIVTC. I'm now stumped on a new one. It *now* says:

    "vapoursynth.Error: Resize error: Matrix must be specified when converting to YUV or GRAY from RGB."

    Configuring matrix color settings in the VapourSynth --> Color --> Matrix menu did not solve the problem. It looks to me like the script does set a color matrix, so I'm not sure what's wrong.


    EDIT #2: I finally managed to get it working by dumping the file into RGB using StaxRip *before* opening it in Hybrid. I also removed the TIVTC call. I'd still like to get that working, but this is all sorts of interesting. I can see a clearly repetitive pattern in the upscale, but Topaz has had that problem in the past as well and the pattern is only noticeable when one zooms in tight.

    I am now testing the various models included in the directory.
    Last edited by JoelHruska; 28th Jul 2021 at 20:28.
    When running into problems with Hybrid, read the support page and share proper details with me (especially a DebugOutput is needed).
    Loading the script under Custom will not work at all.

    The custom sections are meant for users who understand the basics of Vapoursynth and want to add additional snippets to the generated code.
    Last edited by Selur; 29th Jul 2021 at 08:25.
    "Loading the script under Custom will not work at all.

    The custom sections are meant for users who understand the basics of Vapoursynth and want to add additional snippets to the generated code."

    It was the only place I could find to insert the script you had suggested. It worked wonderfully for previewing AI model output once I tweaked the code a little bit and modified my source file to be more compatible with what the VSGAN model needed.

    I will review the information in the Hybrid forum for the actual way to do it correctly.

    This is a lovely application. It organizes an incredible amount of information in an easy-to-follow way.
    "It was the only place I could find to insert the script you had suggested."
    If I post complete scripts, adjusting the paths in the script to your environment and loading the script as a source should work. (It does here.)
    Blindly adding code snippets does not.
    @lollo: I ran your clip through vs-ffdnet (https://github.com/HolyWu/vs-ffdnet) with strength=10 + Spotless. It's a bit smooth, but this might be a good filter for this. I also added a video where I applied CAS at the end.

    Cu Selur
    Image Attached Files
    Last edited by Selur; 31st Jul 2021 at 09:29.
    lollo:
    Thanks a lot, Selur, you are very kind!

    Excellent results; it is really worth comparing them to my original filtering on my side.
    For that, you definitely convinced me to do something I have been procrastinating on for a long time: installing VapourSynth, and using it in combination with, or as an alternative to, AviSynth.
    Side note: you can use AviSynth filters in Vapoursynth (see: http://www.vapoursynth.com/doc/functions/loadpluginavs.html), though there are a few filters that do not exist in Vapoursynth.
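    Per that doc page, an AviSynth plugin is loaded through the avs namespace; a rough sketch (the DLL path and filter name below are placeholders, and the plugin's bitness must match VapourSynth's):

    ```python
    # Loading an AviSynth plugin into VapourSynth (Windows).
    # Path and filter name are placeholders, not a real plugin.
    import vapoursynth as vs
    core = vs.core

    core.avs.LoadPlugin(path="C:/avisynth_plugins/SomeFilter.dll")
    # once loaded, the AviSynth filter becomes callable from the avs namespace:
    clip = core.avs.SomeFilter(clip)
    ```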
    By the way: could someone owning Topaz run https://forum.videohelp.com/threads/399360-so-where-s-all-the-Topaz-Video-Enhance-AI-d...10#post2626778 through it? It would be interesting to see what it does to that source.
    Also, if anyone is attempting to filter this, I really would recommend filtering the CG and 'real world' scenes separately. The amount of denoising needed to clean up the 'real world' scenes is way too high for the CG scenes.
    Will we ever be able to load plugins into Hybrid?
    @s-mp: Sorry, this thread isn't really about Hybrid. It's best to post questions about Hybrid in the Hybrid thread here, or over in my forum.
    I'm not sure what you want to do.
    In Avisynth and Vapoursynth you can add plugins through custom code if you understand the basics of Avisynth and Vapoursynth. I use this whenever I'm testing a new filter... (or when I need to load an Avisynth filter in a Vapoursynth script on Windows)

    Cu Selur
  24. @Selur,

    I have output I'm working on right now that uses the same filter you did and runs it through Topaz. I used 1x_DeJpeg_Fatality_PlusULTRA_200000_G but lowered the CAS to 0.15. Any higher caused errors on human faces in some scenes.

    This is the output of using the VSGAN model above + Artemis HQ in Topaz VEAI. Topaz does not make much difference in these specific scenes. It does slightly improve banding, though:

    I'll show you the output from just using VSGAN and the output from Topaz + VSGAN:

    https://imgsli.com/NjM0NzI

    https://imgsli.com/NjM0NzM

    This is what my current output looks like after color grading.

    https://imgsli.com/NjM0NzQ

    Thank you for contributing to this conversation. Hybrid is an incredibly useful application and some of these VSGAN models are quite good. I may blend a little more noise into the process after the VSGAN step, but this is not difficult to do. Topaz actually has an option to add grain back in adjustable increments.

    The additional benefit Topaz adds is larger in some scenes than others. There are some places where the VSGAN model breaks -- it has trouble with door panels and the comet in the credits at one point, creating badly blurred content on one edge of the screen. That's fine, though. Those places can be repaired and as you say, picking and choosing where to use these models is the best option. I really appreciate your help this week. You've meaningfully improved this project.

    The color shift in this image is due to color experiments on my part, but I don't have a copy of the original frame at the moment.

    https://imgsli.com/NjM0Nzc

    You can see that the word "Federation" gets a bit muddy in the VSGAN. In my observation it struggles with background Okudagrams created for TNG / DS9 / VOY-era shows by Michael Okuda. The easiest way to deal with it when this happens is to either blend repaired material or substitute a different AI model (like Topaz) in that scene. Topaz also holds the line detail in Sisko's uniform a bit better.

    I currently have an RTX 2080 and a Ryzen 9 5950X in my PC and the filter runs at about 0.47 fps. Would you expect a higher-end GPU from Nvidia's RTX 3000 series to run it more quickly?

    I will have demonstration clips I can upload within a day. One important point: all of the VSGAN shots above were output at 704x480 in Hybrid. I upsized them to 2560x1920 for display purposes. As good as Topaz is at clearing up footage while upscaling it, it's clear that the same quality, or very nearly the same quality, can be achieved by other means. TVEAI is about 4x faster, but I'm open to anything I could do to speed up the 1x_DeJpeg_Fatality_PlusULTRA_200000_G model.
    Last edited by JoelHruska; 31st Jul 2021 at 17:33.
  25. One more comparison, based directly on an image you uploaded. Output from VSGAN at 704x480 native then resized to 2560x1920 against Topaz Artemis-HQ + VSGAN output at 2560x1920:

    https://imgsli.com/NjM0ODM

    Here's the VSGAN against Topaz-only output. This output has again been color graded.

    https://imgsli.com/NjM0ODA

    Here's Topaz only versus Topaz + VSGAN:

    https://imgsli.com/NjM0ODI

    I like the VSGAN output more and the VSGAN + Topaz output the best.
    Okay, the question was not to apply Topaz after VSGAN, but to see how Topaz would fare on its own on the SFE-Complete.demuxed.m2v clip you provided.
    Personally, I'd like to see what Topaz does with the fog and the tons of artifacts.
    The question people here ask is whether they can get better results than Topaz; personally, I think I could get better results, and I think the VSGAN + Topaz results are too smooth. (One could get that result by using FFDNet + CAS.)

    About the text: it seems that is due to CAS amplifying some artifacts; doing some denoising before and after CAS might help with that.
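    A sketch of that 'denoise around CAS' idea (the filter choice and sigma values are my guesses, not settings from this thread):

    ```python
    # Denoise -> CAS -> light denoise, so CAS has fewer artifacts to amplify
    # and whatever it does amplify gets mopped up afterwards.
    # Assumes 'clip' exists and the DFTTest and CAS plugins are loaded.
    import vapoursynth as vs
    core = vs.core

    clip = core.dfttest.DFTTest(clip, sigma=8.0)  # take the worst artifacts out first
    clip = core.cas.CAS(clip, sharpness=0.8)
    clip = core.dfttest.DFTTest(clip, sigma=2.0)  # light pass after sharpening
    ```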

    "I currently have an RTX 2080 and a Ryzen 9 5950X in my PC and the filter runs at about 0.47 fps. Would you expect a higher-end GPU from Nvidia's RTX 3000 series to run it more quickly?"
    see: https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/

    Cu Selur
    "Okay, the question was not to apply Topaz after VSGAN, but to see how Topaz would fare on its own on the SFE-Complete.demuxed.m2v clip you provided."

    Alright. So you want to see just Topaz, with no QTGMC or pre-processing at all. No problem. I'll put it together tonight.

    I agree with you that the current output is a bit smooth, but I think I can resolve that by putting a bit of noise back in when I do the final processing steps.
    Video Restorer lordsmurf (dFAQ.us/lordsmurf; joined Jun 2003)
    Originally Posted by Selur View Post
    Sorry, this thread isn't really about Hybrid.
    Sure it is.

    - New post asks a question about crappy software (Topaz).
    - Thread eventually starts to discuss better software that can, may, or will do better (Hybrid).

    Common occurrence in many forums.
    Want my help? Ask here! (not via PM!)
    FAQs: Best Blank Discs | Best TBCs | Best VCRs for capture | Restore VHS
    @JoelHruska: Thanks!
    About the denoising: adding some noise back will probably only lessen the problem; better to look for a way not to smooth so much.
    @lordsmurf: I just don't want this to turn into a general Hybrid support thread.
    Last edited by Selur; 1st Aug 2021 at 00:24.
    @JoelHruska: if you want to go the Vapoursynth + Topaz route, also try how it looks if you use just conventional filtering in Vapoursynth.
    Code:
    # imports needed by this snippet (Hybrid normally adds these)
    import functools
    import havsfunc
    import fvsfunc
    # this is just VIVTC with QTGMC preset 'slow' and OpenCL enabled
    clip2clip = clip
    clip2clip = havsfunc.QTGMC(Input=clip2clip, Preset="slow", opencl=True, TFF=False, FPSDivisor=2)
    clip = core.vivtc.VFM(clip=clip, order=0, mode=1)
    # VDecimate helper function 
    def postprocess(n, f, clip, deinterlaced):
      if f.props['_Combed'] > 0:
        return deinterlaced
      else:
        return clip
    clip = core.std.FrameEval(clip=clip, eval=functools.partial(postprocess, clip=clip, deinterlaced=clip2clip), prop_src=clip)
    clip = core.vivtc.VDecimate(clip=clip)# new fps: 23.976
    
    # denoising using DFTTest
    clip = core.dfttest.DFTTest(clip=clip, sigma=12.00, sigma2=12.00)
    
    # debanding using GradFun3
    clip = fvsfunc.GradFun3(src=clip)
    
    # contrast sharpening using CAS
    clip = core.cas.CAS(clip=clip, sharpness=0.800)
    
    # deringing using HQDeringmod
    clip = havsfunc.HQDeringmod(clip, nrmode=2, sharp=3, darkthr=3.0)
    I suspect this might be a better approach when applying Topaz afterwards.


    Cu Selur


