VideoHelp Forum


  1. Member Cornucopia's Avatar
    Join Date
    Oct 2001
    Location
    Deep in the Heart of Texas
    Curious where negativity resides in the need for scientific peer review? In the need for transparency & clearly defined steps? In the need for ground truth references? In the expectation of objective (Or statistically significant subjective) ABX comparisons?

    You confuse rigorous skepticism with negativity.


    Scott
  2. Video Restorer lordsmurf's Avatar
    Join Date
    Jun 2003
    Location
    dFAQ.us/lordsmurf
    Originally Posted by Cornucopia View Post
    Curious where negativity resides in the need for scientific peer review? In the need for transparency & clearly defined steps? In the need for ground truth references? In the expectation of objective (Or statistically significant subjective) ABX comparisons?

    You confuse rigorous skepticism with negativity.

    Scott
    Sometimes the world has too many fanboys/cheerleaders to let facts get in the way.

    Their true colors always show when they're shown the folly of their ways (claims refuted with facts), and they'll usually just double down on the wrong/false information.

    For whatever reason, we're seeing that with this Topaz software.

    I would be quite thrilled if a true "AI" program existed to repair video (size, framerate, etc). Or even if it was a superior way to upsize, compared to what is available in Avisynth. (FYI: A commercial app being better than Avisynth can happen. Mercalli is a great example.) But this Topaz software is neither. It's all bark, no bite. Marketing. Bullshit. All of their software is BS. For example, "JPEG to RAW converter" -- WTF? No. Hell no. Nonsense. This company is bamboozling amateurs with crapware for big bucks. Don't fall for that. Be smarter.
  3. Member
    Join Date
    Dec 2020
    Location
    Valladolid
    I agree about the company statement. They're messy with their releases, models, etc.

    But when applying neural-network filter models, it's all about the "samples", and the amount of iterations, corrections and training of those models.
    (This is available in a lot of software, for example noise suppression in ffmpeg: https://ffmpeg.org/ffmpeg-filters.html#arnndn)
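    A concrete example of such a neural filter in use (a hypothetical one-liner; "model.rnnn" is a placeholder, since arnndn needs a pre-trained model file, e.g. from the rnnoise-models collection):

```shell
# Denoise an audio track with ffmpeg's RNN-based arnndn filter.
# model.rnnn is a placeholder for a pre-trained model file you supply.
ffmpeg -i noisy.wav -af "arnndn=m=model.rnnn" denoised.wav
```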

    But if these guys are putting their money into training networks, I'm willing to use them, if empirically useful.

    Marketing can say BIG AI, but it's about the quality of the neural models.
  4. Hi everyone, I have yet to read over the thread when I'm not coming off the tail end of a day with far too little sleep, but I've been experimenting with the trial of topaz and have also been looking for alternatives.

    My GTX 1080 means that trying to upscale a 21-minute video from 720x480 to 4K with the Gaia CG preset takes 24 hours, or 14 for upscaling to 1080p. (The original plan was to go from 4K back down to 1080 after a second pass, but that's going to take ages.)


    I feel like something that can leverage my CPU might actually be faster, though. I have a 5800X, and encoding a 44 GB 4K file down to a 4 GB H.265 file takes under 20 minutes, so I feel like Avisynth could provide impressive results with much less time involved. However, I have never looked into this before; I downloaded AviSynth and AvsPmod to get started but could use some help.

    I'm basically looking to improve the dvd rips of an old cg cartoon as much as I can and topaz has been pretty impressive but there are some quirks I noticed here and there in the output.

    I was hoping one of you guys could provide me with a basic profile that would convert the 480p video up to 1080, or maybe even 4k (though not sure that's necessarily a good choice coming from 480p) with similar, or even better results than topaz?

    It sounds like fairly aggressive sharpening and smoothing would be good; however, I noticed that Topaz does seem to miss some blocky artifacts, and I feel it could have done better.

    I'm hoping that with a starter script for this sort of use (vs. the live-action stuff that seems more common) and some tutorials, I can get started and at least do a comparison. I'd really appreciate help from you guys to try and restore this old show.

    Here's a link where you can get some samples, I have the full first episode of the show, as well as the intro alone that I simply sized up to 4k res, as well as the output from topaz for the intro alone. I do also have the full episode churned down to a more reasonable size but that's probably not really needed for this.

    http://www.perpetualarchive.ca/Dragon%20booster%20Stuff/

    It does seem like from some googling there are other plugins/filters you can download that would help further with certain things so maybe I'll have to get some of those, but I think that my source is a reasonably decent quality.

    Also, the resolution for further episodes will probably change, but I'm guessing that when upscaling it will just pillarbox or letterbox as needed to fit inside the target resolution as much as possible? There are some that will be 720x404 or so.
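    Fit-inside scaling with padding is the usual behaviour, and the math can be sketched in a few lines (plain Python; note this ignores PAR/anamorphic flags, which DVD sources complicate):

```python
def fit_inside(src_w, src_h, dst_w, dst_h):
    """Scale uniformly until one axis hits the target, pad the other."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    # leftover space becomes pillarbox (x) or letterbox (y) bars
    return new_w, new_h, dst_w - new_w, dst_h - new_h

print(fit_inside(720, 480, 1920, 1080))  # height-limited: side bars
print(fit_inside(720, 404, 1920, 1080))  # width-limited: thin top/bottom bars
```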

    edit: also included the full dvd rip as the original mkv in case that's useful. topaz added weird scan lines that don't exist with mkv but I bet avisynth doesn't do that.

    edit: Trying this script but I can't figure out how/where I actually input the path to the original video in a way that it accepts it, it keeps saying not a clip.
    https://www.l33tmeatwad.com/scripts/anime-upscale
    Last edited by XOIIO; 23rd Dec 2020 at 06:05.
  5. Originally Posted by XOIIO View Post
    edit: Trying this script but I can't figure out how/where I actually input the path to the original video in a way that it accepts it, it keeps saying not a clip.
    https://www.l33tmeatwad.com/scripts/anime-upscale
    If you want to use that function copy/paste the entire text into a script. Add your source and a call to the function at the bottom:

    Code:
    # pasted code here, then
    
    LSMASHVideoSource("filename.mp4") # get source
    AnimeUpscale() # apply filter
    Last edited by jagabo; 23rd Dec 2020 at 09:01.
  6. Originally Posted by jagabo View Post
    Originally Posted by XOIIO View Post
    edit: Trying this script but I can't figure out how/where I actually input the path to the original video in a way that it accepts it, it keeps saying not a clip.
    https://www.l33tmeatwad.com/scripts/anime-upscale
    If you want to use that function copy/paste the entire text into a script. Add your source and a call to the function at the bottom:

    Code:
    # pasted code here, then
    
    LSMASHVideoSource("filename.mp4") # get source
    AnimeUpscale() # apply filter
    Ahh, I see, so that other code is basically just defining the filter, like a separate function in an Arduino sketch that you call. OK.

    Seems to be working in AvsPmod but I don't think it's actually outputting a file; it's not showing anything on the right side, only the left. MeGUI just crashes when I try to open that script file. Hmm.

    It's also only running at 0.25 frames per second, and not even touching my CPU or GPU usage-wise for some reason.
  7. Originally Posted by XOIIO View Post
    It's also only running at 0.25 frames per second, and not even touching my CPU or GPU usage-wise for some reason.
    It sounds like something is wrong. With a script like:

    Code:
    Mpeg2Source("opening.d2v", Info=3) 
    TFM(d2v="opening.d2v") 
    TDecimate() 
    vInverse()
    AnimeUpscale(widescreen=false)
    and 64 bit AviSynth+, AVSPmod's Video -> Run Analysis Pass gives about 45 fps (that's single threaded). With prefetch(8) added to the end (8 threads) I got about 160 fps.

    Note, I'm not familiar with AnimeUpscale. I just used the defaults, aside from the widescreen=false option. I don't know how optimal those defaults are. You'll have to play around with the various settings.
    Last edited by jagabo; 23rd Dec 2020 at 20:31.
  8. Video Restorer lordsmurf's Avatar
    Join Date
    Jun 2003
    Location
    dFAQ.us/lordsmurf
    Originally Posted by XOIIO View Post
    Seems to be working in avspmod but I don't think it's actually outputting a file.
    AvsPmod is a script previewer.
    VirtualDub2/etc is needed to load the .avs file, process to a new video file.
  9. I was playing around with AnimeUpscale. The script a few posts above opens properly in avspmod and VirtualDub but when I went to encode it with x264 cli or ffmpeg it just aborted with no results and no error report. So I started looking at what the program did and decided its best feature was the noise reduction with mDegrainSimple. So I borrowed from that and did some other cleanup and upscaling of the original DVD data. I also wanted to use Mpeg2Source's deringing filter which only works in 32 bit AviSynth. So I used MP_Pipeline to run 32 bit Mpeg2Source in 64 bit AviSynth.

    Code:
    function mDegrainSimple(clip, int "frames", int "blksize") # stolen from AnimeUpscale
    {
        originalvideo = clip
    
        blksize = Default(blksize, 8)
        frames  = Default(frames, 1)
        overlap = blksize/2
    
        super = MSuper(originalvideo, pel=2, sharp=1)
        backward_vec3 = (frames==3) ? MAnalyse(super, isb = true, delta = 3, blksize=blksize, overlap=overlap) : super
        backward_vec2 = (frames>=2) ? MAnalyse(super, isb = true, delta = 2, blksize=blksize, overlap=overlap) : super
        backward_vec1 = MAnalyse(super, isb = true, delta = 1, blksize=blksize, overlap=overlap)
        forward_vec1 = MAnalyse(super, isb = false, delta = 1, blksize=blksize, overlap=overlap)
        forward_vec2 = (frames>=2) ? MAnalyse(super, isb = false, delta = 2, blksize=blksize, overlap=overlap) : super
        forward_vec3 = (frames==3) ? MAnalyse(super, isb = false, delta = 3, blksize=blksize, overlap=overlap) : super
        mvvideo = (frames==3) ? MDegrain3(originalvideo, super, backward_vec1,forward_vec1,backward_vec2,forward_vec2,backward_vec3,forward_vec3,thSAD=400) : (frames==2) ? MDegrain2(originalvideo, super, backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400) : MDegrain1(originalvideo, super, backward_vec1,forward_vec1,thSAD=400)
    
        return mvvideo
    }
    
    
    MP_Pipeline(""" 
    ### platform: win32 
    Mpeg2Source("opening.d2v", CPU2="ooooxx", Info=3) 
    ### ### 
    """) 
    
    import("C:\Program Files (x86)\AviSynth+\plugins64+\deblock_qed.avs") 
    import("C:\Program Files (x86)\AviSynth+\plugins64+\deblock_qed_i.avs") 
    import("C:\Program Files (x86)\AviSynth+\plugins64+\Santiag.avs") 
    
    deblock_qed_i(quant1=30, quant2=30)
    TFM(d2v="opening.d2v")
    TDecimate() 
    
    Santiag() # antialias some bad edges
    dehalo_alpha(rx=1.0, ry=2.0) # reduce vertical oversharpening halos
    #removegrain(2).flash3kyuu_deband() # didn't do a lot so I removed it
    mDegrainSimple(frames=2,blksize=4) # heavy noise reduction
    
    # stepwise sharp upscaling
    aWarpSharp2(depth=2)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=960, fheight=720)
    aWarpSharp2(depth=2)
    Sharpen(0.2)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)
    aWarpSharp2(depth=5)
    Sharpen(0.5)
    
    prefetch(8) # 8 threads
    The opening titles had a lot of distortion, more than most of the body of the episodes, so this is probably more noise reduction than is necessary for the rest. Some light details are lost with the strong noise reduction, and there's some banding in some shots. But there are ideas you can use.

    XOIIO's dumb upscale, cropped and downscaled to 1440x1080 (bicubic):
    [Attachment 56425]


    result of above script:
    [Attachment 56426]


    Encoding with x264 CLI at the slow preset ran around 16 fps (i9 9900K).

    The m2v of the opening sequence is attached if anyone wants to play around with it.
    Last edited by jagabo; 25th Dec 2020 at 08:37.
  10. @lordsmurf
    I just wanted to chime in and thank you. I agree with everything you said in this thread 100%, down to the dog-food analogy. It really speaks from my heart. People are so lazy and ignorant that they choose to consume rather than to learn.
  11. The problem here is the same as the other thread that just got reactivated, about deblurring a license plate. The commonality between both threads is that people see Hollywood movies where someone pushes a button and the image suddenly becomes full of details and looks like it was shot in IMAX from six feet away. They therefore think this is possible when, in fact, it is 100% hocus-pocus and has no grounding in reality. These things are in the same category as traveling faster than the speed of light. That particular movie science fiction simply cannot be done, not now, and not 1,000 years from now.

    Adding detail might be possible some day (not now), although whether that detail matches the original reality is like expecting colorized movies to match the colors in the original scene.
  12. Member
    Join Date
    Apr 2018
    Location
    Croatia
    Search Comp PM
    Just to educate the uneducated: there is a fine way to do blind deconvolution, and it is a known fact.
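    For anyone curious, the classic (non-blind) building block here is Richardson-Lucy deconvolution; blind variants alternate this update with a re-estimate of the PSF. A toy 1D sketch in NumPy (illustration only, not a production deblurrer):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=100):
    # iterative multiplicative update; preserves non-negativity
    estimate = np.full_like(observed, max(observed.mean(), 1e-6))
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.maximum(np.convolve(estimate, psf, mode="same"), 1e-12)
        estimate = estimate * np.convolve(observed / reblurred, psf_mirror, mode="same")
    return estimate

# a sharp spike blurred by a known 5-tap box PSF, then restored
truth = np.zeros(64); truth[30] = 1.0
psf = np.ones(5) / 5.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf)
```

The restored signal re-concentrates energy at the spike; with real footage the hard part is that the PSF is unknown and the data is noisy, which is exactly what makes blind deconvolution difficult.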
    If you took jagabo's cleanup script, skipped the upscaling step, and instead upscaled with a NN model appropriate for that animation style (it looks like it's supposed to be clean and flat textures), you'd get something like this.

    This is an ESRGAN 4x model, downscaled to 1440x1080 (jagabo's screenshot looks like a 601/709 mismatch):
    [Attachment 56432 - jag+esrgan_df2_flat_1440x1080.png]
  14. Originally Posted by poisondeathray View Post
    (jagabo's screenshot looks like 601/709 mistmatch)
    Oops, yes. I used VirtualDub2 to make the screen shots -- it converted to RGB with rec.601 instead of rec.709. I should have done the RGB conversion myself.
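    For scale, the size of a 601/709 mismatch can be computed directly from the luma coefficients (a sketch with normalized values; real pipelines also involve range scaling and 8-bit offsets):

```python
def yuv_to_rgb(y, u, v, kr, kb):
    # generic Y'CbCr -> R'G'B' from the luma coefficients (normalized)
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * v
    b = y + 2.0 * (1.0 - kb) * u
    g = (y - kr * r - kb * b) / kg
    return r, g, b

sample = (0.5, 0.1, 0.2)  # an arbitrary Y'CbCr triple
rgb601 = yuv_to_rgb(*sample, kr=0.299,  kb=0.114)   # BT.601 matrix
rgb709 = yuv_to_rgb(*sample, kr=0.2126, kb=0.0722)  # BT.709 matrix
# decoding with the wrong matrix shifts reds and greens noticeably
```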
  15. Originally Posted by johnmeyer View Post
    The problem here is the same as the other thread that just got reactivated, about deblurring a license plate. The commonality between both threads is that people see Hollywood movies where someone pushes a button and the image suddenly becomes full of details and looks like it was shot in IMAX from six feet away. They therefore think this is possible when, in fact, it is 100% hocus-pocus and has no grounding in reality. These things are in the same category as traveling faster than the speed of light. That particular movie science fiction simply cannot be done, not now, and not 1,000 years from now.
    Yes, to that extent it's not possible, where you have 4 pixels and suddenly a full picture emerges.

    Adding detail might be possible some day (not now), although whether that detail matches the original reality is like expecting colorized movies to match the colors in the original scene.
    It is possible today, in some situations, to get better scaling with deep neural network scaler frameworks (not necessarily VEAI) than with traditional scalers: real fine details that match closer to the ground truth. They look like HD instead of upscaled SD. There are plenty of examples, scientific research, scientific proof, ground truth. I can post some full examples. They are usable in Python and VapourSynth, and some people have made simple GUIs for some of them.

    The short version is that it's a mixed bag, because you need appropriate models for a given situation. I'll post more on this later... But for 1) certain types of animation, anime, cartoons, and 2) live action derived from a clean downscale, the NN scaling can be significantly better. You need specific trained models for specific situations; that's the problem. And the training can take months on regular hardware. Also, you need proper pre- and post-processing. For people who think they can just use a single program and presto... forget it. There are common issues that arise from the NN scalers.
    I have the same perspective on this: an enhanced object is going to be a made-up object, believe it or not.

    There are apples and oranges in this thread. One side insists on explaining that the above-mentioned enhancing software is a good one-click solution for making video be perceived as better by pulling in some training models, sharpening, etc.

    And the other side actually talks about restoration of video, which needs a clever human to select filters, settings and the right order, and to do that for basically every video. That is heavy work that needs lots of knowledge and experience: knowing colorspaces and software, how they work, comparing results, deciding when a filter is too much, etc. One-click software cannot do that. And even if it starts to, the content will be something else, as johnmeyer mentioned. Folks forget that this Universe is not deterministic the way it is in Minecraft.
    Hi everyone, I really need a (preferably open source) tool or script to easily upsize my old 576i stuff to 1080p or, even better, to 2160p: any suggestions?

    Last but not least, do you think these open source tools may be useful?

    - Video2X: Machine learning video/GIF/image upscaling
    - Waifu2x-Extension-GUI: Image, GIF and Video enlarger/upscaler(super-resolution) achieved with Waifu2x, SRMD, RealSR, Anime4K and ACNet.
    - BasicSR: an open source image and video restoration toolbox based on PyTorch, such as super-resolution, denoise, deblurring, JPEG artifacts removal, etc.
    - Video Super Resolution: A collection of state-of-the-art video or single-image super-resolution architectures, reimplemented in tensorflow.
    - Fast-SRGAN: enable real time super resolution for upsampling low resolution videos.
    - Anime4K: a set of open-source, high-quality real-time anime upscaling/denoising algorithms that can be implemented in any programming language.
    - Zooming-Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution

    Info:
    - Video-Super-Resolution list: collection of papers


    EDIT: just found another interesting AS script @ Doom9...

    Avisynth AiUpscale v1.1.0
    Last edited by forart.it; 12th Jan 2021 at 07:08.
  18. Mr
    Join Date
    Feb 2021
    Location
    Salisbury, Wiltshire, UK
    Hullo everyone,

    I think it might be worth updating this conversation now that the new version of Topaz has been released, which adds de-interlacing.


    Here's some still examples to compare, all originals interlaced PAL SD made into full 1080 HD 50p files.

    LEFT: - QTGMC with NNEDI3 resizing via Vapoursynth in Hybrid with Denoiser & Sharpener.
    RIGHT: - Topaz Enhance AI v.1.9.0 with the new Dione DV and TV presets.


    From DV PAL source clip:

    [comparison screenshots]

    From Hi8 PAL source clip:

    [comparison screenshots]

    Here are the full size screengrabs and video files: https://1drv.ms/u/s!AlStBzht9Fsbgb9yaZ8cwfZBSxOb7g?e=XaRYOJ


    I tried to choose frames which have movement and fine details in, as well as a variety of exposures, textures, colours and shapes.

    Topaz obviously costs money, doesn't like passing audio through very much, and doesn't let you create a 1440x1080 output file from a 4:3 source; output must be full HD with pillarboxing. But it does appear to be faster, is clearly much, much easier to set up, and on the strength of my initial tests does just as well as, if not better than, what I've been able to do with many hours of fiddling with Hybrid and the many, many Vapoursynth/Avisynth options. If I get two clients who request upscaling, it's worth me buying Topaz. I am using the 30-day free trial at the moment, and would recommend you give it a try before firing back.

    Oh, I should just add that the Hybrid exports are ProRes and the Topaz ones are H.264 - and they still look as good or better!


    Just thought you might like to hear what I've found so far. And just to upset the AviSynth die-hards even more: I did all the above on a Mac!

    What do you think?


    Best to you all,

    John
    Salisbury, Wilts. UK
    Did you add contrast sharpening? From the looks of it, Topaz did...
    users currently on my ignore list: deadrats, Stears555
  20. Originally Posted by Fryball View Post
    I think it might be worth updating this conversation now the new version of Topaz has been released which adds de-interlacing.
    Some workflow issues with Topaz? Looks like interlaced chroma upsampling errors (probably a progressive instead of interlaced conversion to RGB before deinterlacing).

    [Attachment 57131]


    What was the procedure? Direct DV input ?
  21. Mr
    Join Date
    Feb 2021
    Location
    Salisbury, Wiltshire, UK
    Originally Posted by poisondeathray View Post

    What was the procedure? Direct DV input ?
    Yes, direct from an HVR-1500 via Firewire into FCP.

    I added a Sharpener so that probably accounts for that.

    The chroma errors are pretty much the only thing to distinguish the Topaz from the Hybrid, which for the speed increase I can live with, but if anyone has a suggestion on how to speed up my Hybrid/QTGMC/ffmpeg/Vapoursynth render time I'd rather use that.

    My aim overall is to get a set of adjustments and filters I can use universally to deinterlace and resize any client's SD clip to HD without needing to fiddle endlessly with Hybrid every time. That's the attraction of Topaz for me: much simpler & quicker than AviSynth, which is more valuable overall to my business, even if it doesn't do quite as good a job. Clients won't care anyway. They pay a set rate which only allows for a certain amount of time spent on each tape, so whatever hurries that up means more efficiency, more tapes done, more money in to buy more old video equipment...!

    Any advice? I'm running QTGMC, ffmpeg & Vapoursynth through Hybrid on my 2018 MacBook Pro 15. Just want to speed it up, basically.
  22. Originally Posted by Fryball View Post

    The chroma errors are pretty much the only thing to distinguish the Topaz from the Hybrid, which for the speed increase I can live with, but if anyone has a suggestion on how to speed up my Hybrid/QTGMC/ffmpeg/Vapoursynth render time I'd rather use that.
    It's probably "fixable" if the devs were aware of it; you can report it. IIRC they are using ffmpeg/libav libraries on the back end. Upsample with interl=1 when deinterlacing.

    For the second question you should start a different thread, but you can optimize the settings, perhaps use the GPU (or not; sometimes it's slower for some steps). Don't use stuff like "placebo". Batch process. If you value speed, make some tradeoffs with the bottlenecking settings.
  23. Hybrid speedUp: post the Vapoursynth script you use atm. in Hybrid, otherwise how do you expect recommendations for speedUp?
  24. There is some local contrast enhancement with the Topaz example; it causes high frequency luma flickering around objects. For example it affects the area around the white shirt, this is a separate problem from the interlaced chroma

    Another issue is that the superbrights are clipped. All these neural-net algorithms (including Topaz) work in RGB, and if you don't rescue or legalize the highlights before the RGB conversion, or use a full-range conversion to RGB, you clip the data.
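    The clipping point is easy to see numerically (a sketch with 8-bit luma; 16-235 is the nominal studio range, and "superbrights" above 235 are common on tape):

```python
def limited_to_unit(y):            # studio-range ("TV") normalisation
    return (y - 16) / (235 - 16)

def full_to_unit(y):               # full-range ("PC") normalisation
    return y / 255.0

superbright = 245                  # legal on tape, above nominal white (235)
lim = limited_to_unit(superbright) # > 1.0: clamped away by a limited-range RGB pipeline
ful = full_to_unit(superbright)    # < 1.0: survives a full-range conversion
```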

    There is temporal aliasing caused by the NNEDI3 upscaling in the QTGMC example (you could have smoothed it over with QTGMC in progressive mode, but that would be even slower) . Some elements like the front tractor grille are magically "fixed" and straightened by Topaz, when I bet the DV source had aliasing and moire patterns to begin with. But there are the same luma flicker issues in Topaz clip, the top 1/4 of frame around the sky. For QTGMC there is some at the very edges, but you can use border=true to process the edge pixels
  25. Mr
    Join Date
    Feb 2021
    Location
    Salisbury, Wiltshire, UK
    Originally Posted by Selur View Post
    Hybrid speedUp: post the Vapoursynth script you use atm. in Hybrid, otherwise how do you expect recommendations for speedUp?
    Um, I'm not using a script - just whatever is already loaded into Hybrid. I check a box.

    The boxes I have checked are:
    - Crop/Resize to PAR 1:1 square pixel 1440x1080
    - Filtering - De-interlace/Telecine - QTGMC Auto - Bob - Final temporal smoothing: 2 - Superfast, all else defaults
    - Filtering - Filtering - Noise, Chroma noise strength: 1, temporal, all else defaults
    - Filtering - Vapoursynth - Denoise - VagueDenoiser 85%, all else defaults
    - Filtering - Vapoursynth - Sharpener - FineSharp, mode 1, Sharpen 2.50, all else defaults
    - Filtering - Vapoursynth - Resizer: NNEDI3, all at defaults

    That's it. I've not modified any other setting or enabled anything else, mainly because anything else I tried added time, and most of the videos I get in aren't actually that bad so much more processing isn't needed. Colour & whatnot I can do in Final Cut before final output from the timeline.

    Do you need to know anything else to give me some pointers?

    I really don't want to have to learn & try a million options just to do the basics. I just don't have the time or motivation, and will never earn enough from this for it to be worth it. Again, that's what drew me to the Topaz thing.

    Any idiot proof suggestions welcome!
    • Simply posting the script content (Filtering->Vapoursynth->Vapoursynth script view) is probably more useful for most users.
    • QTGMC & NNEDI3 both have a GPU option which should speed things up (assuming it works on your system and on Mac in general; I only have macOS running in a VM without hardware acceleration).
    • Instead of FineSharp I would recommend using CAS; FineSharp will add lots of artifacts with that setting.
    • Mixing FFmpeg and Vapoursynth filters is usually a bad idea; better to use 'AddGrain' from Vapoursynth.
    Any idiot proof suggestions welcome!
    Wrong tool. Hybrid is far from "idiot proof".

    Cu Selur
  26. Mr
    Join Date
    Feb 2021
    Location
    Salisbury, Wiltshire, UK
    Thanks Selur!

    Those are some useful tricks to try. I didn't even realise you could view the scripts within Hybrid. While I'm looking at it, you may as well cast your eyes over it in case anything else leaps out that I could do to improve the speed of my renders:

    # Imports
    import os
    import sys
    import vapoursynth as vs
    core = vs.get_core()
    # Import scripts folder
    scriptPath = '/Applications/Hybrid.app/Contents/MacOS/vsscripts'
    sys.path.append(os.path.abspath(scriptPath))
    # Import scripts
    import edi_rpow2
    import havsfunc
    # loading source: /Users/Fryfilm/Movies/VIDEO RESTORATION TESTS/Hi8 to HD/Tractors via JVC FW FCP7 50i to DV .mov
    # color sampling YUV420P8@8, matrix:470bg, scantyp: bottom field first
    # luminance scale TV
    # resolution: 720x576
    # frame rate: 50 fps
    # input color space: YUV420P8, bit depth: 8, resolution: 720x576, fps: 25
    # Loading /Users/Fryfilm/Movies/VIDEO RESTORATION TESTS/Hi8 to HD/Tractors via JVC FW FCP7 50i to DV .mov using LibavSMASHSource
    clip = core.lsmas.LibavSMASHSource(source="/Users/Fryfilm/Movies/VIDEO RESTORATION TESTS/Hi8 to HD/Tractors via JVC FW FCP7 50i to DV .mov")
    # making sure input color matrix is set as 470bg
    clip = core.resize.Point(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 25
    clip = core.std.AssumeFPS(clip, fpsnum=25, fpsden=1)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # setting field order to what QTGMC should assume
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=1)
    # Deinterlacing using QTGMC
    clip = havsfunc.QTGMC(Input=clip, Preset="Super Fast", TFF=True, InputType=0, TR2=2, Sharpness=1.0, SourceMatch=0, Lossless=0) # new fps: 50
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    # denoising using VagueDenoiser
    clip = core.vd.VagueDenoiser(clip=clip)
    # contrast sharpening using CAS
    clip = core.cas.CAS(clip=clip, sharpness=0.010)
    # resizing using NNEDI3CL
    clip = edi_rpow2.nnedi3cl_rpow2(clip=clip, rfactor=2, qual=1, pscrn=2)
    # adjusting resizing
    clip = core.fmtc.resample(clip=clip, w=1440, h=1080, kernel="lanczos", interlaced=False, interlacedd=False)
    # adjusting output color from: YUV420P16 to YUV422P10 for ProResModel (i422)
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV422P10, range_s="limited")
    # Output
    clip.set_output()


    Thank you all for your input!
  27. Thanks for taking the time to post those comparisons.

    The Topaz output looks good, but I don't see anything that I haven't done myself with various AVISynth scripts. The emergence of apparent detail is similar, although not as good as what I posted earlier in this thread. For the result I posted, I simply used MVTools2 denoising, and various sharpening filters:

    https://forum.videohelp.com/threads/399360-so-where-s-all-the-Topaz-Video-Enhance-AI-d...e3#post2603010
    Last edited by johnmeyer; 2nd Feb 2021 at 18:46.
    CAS -> I would use a higher value.
    With QTGMC, try GPU and another preset; it's probably not slower since the GPU can be partially used.
    Also try whether it helps to move CAS after the resizer.
    Denoising -> KNLMeansCL or similar might be interesting.
  29. Mr
    Join Date
    Feb 2021
    Location
    Salisbury, Wiltshire, UK
    Great, thank you all!

    I will try out those suggestions.

    While I'm testing, it would be nice not to have to process a whole clip just to see the results. Is there a straightforward way to limit the output frames or trim the base clip?
    I guess this means importing a script into Vapoursynth, so is there one that I can make work with its GUI? If loading that is a process I only need to do once, I can follow instructions well; I just want to enable easy trimming for future use, if possible.
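    Not Hybrid-specific, but if the generated VapourSynth script is edited directly, clips support ordinary Python slicing, so one added line limits a test render (`core.std.Trim` is the explicit equivalent; the frame numbers below are arbitrary):

```python
# Inside a VapourSynth script, after loading the source, either of:
#   clip = clip[1000:1250]                             # keep frames 1000-1249
#   clip = core.std.Trim(clip, first=1000, last=1249)  # same range, explicit
# The slicing follows normal Python half-open range semantics:
frames = list(range(5000))        # stand-in for a 5000-frame clip
test_range = frames[1000:1250]
print(len(test_range), test_range[0], test_range[-1])  # 250 1000 1249
```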

    There must be a thread already about this but I keep getting lost, so if someone could point me in the right direction I'd be most grateful.

    Best,

    John


