VideoHelp Forum

  1. ThaKarra
    Hi all,

    I'm working on some video restoration with this cartoon clip that I have attached below.

    (I'm very new to this, so please go easy on me)

    The clip is 29.970 fps and clearly interlaced.



    But I have a feeling that whoever encoded it made some serious mistakes, and those mistakes are now hard-baked into the video. However, I'm wondering if there are any plugins or filters that I can use to try and repair the damage.

    So I'm running the following script to detelecine, remove duplicate frames and, most importantly, restore the frame rate to 23.976.

    Code:
    LWLibavVideoSource("TEP_Sample.mkv")
    
    TFM()          # field matching
    TDecimate()    # drop duplicate frames, back to 23.976 fps
    vinverse()     # clean up residual combing
    
    crop(2,0,712,480)
    lanczosresize(640,480)

    I've been using that very basic script for years now and it's always done exactly what I need it to do. But for some reason, this particular sample clip won't play nicely. If you scroll through the clip, you will notice signs of leftover ghosting and pixelated artifacts (every few frames). This is what I was talking about when I mentioned that the person who encoded the clip made mistakes.

    [Attachment 69686]
    [Attachment 69687]


    If you look closely at the screenshots I provided above, you will see artifacts and ghosting around his arm and the tape measure.

    As an alternative, I tried QTGMC using TFF, and it produced similar results.
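
    For reference, the QTGMC attempt was along these lines (the preset and settings below are just placeholders, not my exact call):

    Code:
    LWLibavVideoSource("TEP_Sample.mkv")
    AssumeTFF()                            # the clip looks top field first
    QTGMC(Preset="Slower", FPSDivisor=2)   # single-rate deinterlace, stays at 29.970
    TDecimate()                            # then decimate down to 23.976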

    This is where things get really interesting. I imported the video clip into VirtualDub and applied a deinterlace filter. I wanted to unfold the fields so I could see them next to each other as I slowly played back the video.




    So I'm not sure if that's supposed to be normal, but I'm guessing not. It's like each field has artifacts to the left and right, and then when trying to deinterlace it kind of combines them together. -- Honestly I have no idea what I'm talking about or if I'm using the correct terminology hahaha...
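
    In case it helps anyone reproduce what I was looking at, this is roughly the AviSynth equivalent of that field inspection (just a sketch for viewing, not part of my restoration script):

    Code:
    LWLibavVideoSource("TEP_Sample.mkv")
    AssumeTFF()
    fields = SeparateFields()                                    # every field becomes its own half-height frame
    StackHorizontal(fields.SelectEven(), fields.SelectOdd())     # show the two fields of each frame side by side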

    tl;dr is this video damaged or can it still be salvaged?

    Like I said, I've attached the sample clip below; if anyone wants to have a play around with it, I'd be so grateful!

    Cheers!
    [Attached: TEP_Sample.mkv]

  2. Playing around with TIVTC + BasicVSR++ and some additional filtering.
    Main problems:
    a. the girl's freckles get lost
    b. the girl's ribbon is still not as stable as I would have wanted
    Attached clip.
    script used: https://pastebin.com/7GLWb9M8

    To me, TIVTC and then applying additional filtering seems to be the way to go.
    Also added a version where I used SCUNet instead of BasicVSR++.
    I'll play around with some other filters too. (will attach samples)

    Cu Selur

    Ps.: To be frank, I think all of the noise can be removed with conventional denoise filters; the ML/AI based filters I used are not mandatory, but they're an easy solution.
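
    To illustrate what I mean by "conventional": something along these lines in AviSynth should already get rid of most of it (a rough, untuned sketch assuming SMDegrain and its MVTools dependencies are installed; the values are guesses, not what I used):

    Code:
    LWLibavVideoSource("TEP_Sample.mkv")
    TFM()
    TDecimate()
    vinverse()                    # residual combing
    SMDegrain(tr=2, thSAD=300)    # temporal denoising via MVTools; strength is a guess
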
    [Attached: filtered samples]

  3. If you can't find the original source to start work with, then along the lines of what selur is doing, there are machine learning models such as DPIR that can improve those types of artifacts; they are now available in AviSynth with NCNN or ONNX through the avs-mlrt plugin. If you have an Nvidia card, I would consider using VapourSynth, because the PyTorch and TensorRT versions are generally faster.

    Ideally, you would filter the "worse" frames with higher strength and the "good" frames with lower strength, but on smooth cartoon artwork like this it's probably less important. Otherwise you might use ConditionalFilter, with some metric to distinguish "noisy" from "less noisy" frames, so you don't have to use such strong DPIR settings on all of them.
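
    A rough sketch of that ConditionalFilter idea (I didn't run this on the clip; the metric and numbers are placeholders that would need tuning, and the mod-8 padding/cropping from the script below is omitted):

    Code:
    # per-frame switching between two DPIR strengths
    yuv    = last                                    # field-matched/decimated YV12 clip, used only for the metric
    rgb    = yuv.z_ConvertFormat(pixel_type="RGBPS")
    strong = rgb.mlrt_DPIR(strength=30, model=1)     # heavier cleanup for the "bad" frames
    weak   = rgb.mlrt_DPIR(strength=8,  model=1)     # lighter touch for the "good" frames
    # crude "noisy" proxy: large luma change from the previous frame
    ConditionalFilter(yuv, strong, weak, "YDifferenceFromPrevious()", ">", "6.0")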

    You might also consider addressing the rainbows and dot crawl. I used a separate dot crawl model combined with weaker DPIR settings than I think selur used, and I think it retains slightly more detail (more freckles, eyes, glasses); e.g. only the outer freckle by the girl's left eye in the last few frames gets blurred away, and that's not from the dot crawl removal. If you lower the DPIR strength those details come back, but the "bad" frames start gaining back their artifacts, so you either accept the balance or attempt to filter selectively.

    In this example, I also used a Real-ESRGAN (compact) model called 1x_Dotzilla_Compact_80k_net_g with DPIR color at a strength of 16. No differential filtering in this example.


    Code:
    DGSource("TEP_Sample.dgi")
    tfm().tdecimate()                     # IVTC back to 23.976
    z_convertformat(pixel_type="RGBPS")   # the mlrt filters run on 32-bit planar RGB
    addborders(2,0,2,0)                   # requires mod8
    mlrt_ncnn(network_path="PATH\1x_Dotzilla_Compact_80k_net_g_onnx.onnx", builtin=false)   # dot crawl model
    mlrt_DPIR(strength=16, model=1)       # DPIR color denoise
    crop(2,0,-2,0,true)                   # remove the padding again
    z_convertformat(pixel_type="YV12")

    To use ESRGAN models in AviSynth, you need to convert the models to ONNX or NCNN (most original ESRGAN models are .pth, i.e. PyTorch). There is a thread on Doom9 that has a link to preconverted models and instructions on how to convert them yourself in a GUI called chaiNNer.
    [Attached: filtered sample]

  4. ThaKarra
    Massive thanks to you both for your replies, I really appreciate the effort!!

    Originally Posted by poisondeathray
    [quoted post #3 in full]
    Mate that looks bloody awesome! Are there any beginner guides floating around online on how to set this up properly?

    I've sunk about 4-5 hours now into trying to set this up on my own and I just can't figure it out. I've been trying to replicate the script you posted above and I just can't get it working.

    I installed all the relevant apps, plugins and models (I think):
    -AviSynth 3.7.2
    -Python
    -VapourSynth (I do have an Nvidia RTX card)
    -Avspmod
    -Avsresize
    -Avs-mlrt

    In addition I made a models folder and put in all the .onnx files that I wanted to use (in this case the Dot Crawl one). I then made sure I was linking to the correct "network_path" and the .onnx file.

    I ended up getting countless errors popping up when trying to preview (pressing F5) and one by one I worked my way through them. Whenever it said I had something missing, I went online to find it and put it in the relevant folder. - For example Avspmod said I was missing something called "drunet_color.onnx" - I have no idea what that is or what it does, but I went and found it and put it in the folder it was asking me to (C:\Program Files (x86)\AviSynth+\plugins64+\models\dpir).

    Finally, after all that, I got to the point where I would press F5 to preview and Avspmod would just crash (after hanging for about 5 seconds). I have no idea where to go from here.

    If I remove the 2 mlrt lines, I can press F5 and preview the video fine. The moment I add them both back, it just crashes.

    Note: I'm not sure if I was supposed to edit anything in the "mlrt.avsi" file. I did open it, but I know NOTHING about Python. So I just put it in the Avisynth plugins folder and left it alone.

    I do have experience with using models. I use ESRGAN Cupscale quite a bit to upscale frames using various different models. So I'm not completely clueless... But just enough to be stuck hahaha

  5. Originally Posted by ThaKarra
    I installed all the relevant apps, plugins and models (I think):
    -AviSynth 3.7.2
    AviSynth 3.7.2 (r3600) is likely too old. If you look at the requirements for avs-mlrt:

    Code:
    - Vulkan device
    
    - AviSynth+ r3682 (can be downloaded from [here](https://gitlab.com/uvz/AviSynthPlus-Builds) until official release is uploaded) (r3689 recommended) or later
    
    - Microsoft VisualC++ Redistributable Package 2022 (can be downloaded from [here](https://github.com/abbodi1406/vcredist/releases))
    Some people have problems with the gitlab version(s) for whatever reason; pinterf's builds seem to work for everyone.

    This is the newest one from pinterf
    http://forum.doom9.org/showthread.php?p=1983526#post1983526

    If you already have AviSynth+ installed, all you need to do to "upgrade" versions is replace avisynth.dll.


    Originally Posted by ThaKarra
    In addition I made a models folder and put in all the .onnx files that I wanted to use (in this case the Dot Crawl one). I then made sure I was linking to the correct "network_path" and the .onnx file.
    When you use "builtin=false", you can put the models anywhere, as long as you specify the full path. (People have different ways of organizing their models and directory structure.)
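
    For example, something like this (the path here is just an illustration, point it wherever you actually keep the model):

    Code:
    # builtin=false -> the model is loaded from the path you give, not from the plugin's own models folder
    mlrt_ncnn(network_path="D:\models\1x_Dotzilla_Compact_80k_net_g_onnx.onnx", builtin=false)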


    Originally Posted by ThaKarra
    Finally after all that. I got to the point where I would press F5 to preview and Avspmod would just crash (after hanging for about 5 seconds). I have no idea where to go from here.
    Likely avs+ version too old


    Originally Posted by ThaKarra
    Note: I'm not sure if I was supposed to edit anything in the "mlrt.avsi" file. I did open it, but I know NOTHING about Python. So I just put it in the Avisynth plugins folder and left it alone.
    Python is not required for the avs version.

  6. ThaKarra
    Originally Posted by poisondeathray
    [quoted post #5 in full]
    Thanks so much!!! I managed to get it working perfectly.

    I can't wait to have a mess around now and see what I can do!


