Hey,
So I'm still learning, but I can handle the basics: QTGMC deinterlacing, applying filters, upscaling, muxing, etc.
What I want to do now is clean this image up from a DVD I own.
I think it needs a dot crawl pass, and the color is mottled, with ugly patterning in the colors.
Any suggestions for filters to apply? Do you agree it has dot crawl? How would you characterize all the issues?
Thanks
[Attachment 71705]
-
Here's a bit more information from DGINDEX.
[Attachment 71706]
If it's interlaced, then QTGMC should be useful and I don't need to do an inverse telecine?
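To illustrate the distinction behind this question: with hard 3:2 pulldown, 4 film frames are spread over 10 fields to make 5 interlaced frames, and IVTC (field matching plus decimation, as TFM + TDecimate do) recovers the original 4 progressive frames instead of deinterlacing them. A toy pure-Python sketch of the pattern (letters stand in for frames; this is illustrative only, not an AviSynth script):

```python
# 3:2 pulldown: film frames A,B,C,D become 5 interlaced frames of
# (top, bottom) fields: (A,A) (B,B) (B,C) (C,D) (D,D)
def telecine(film):
    a, b, c, d = film
    return [(a, a), (b, b), (b, c), (c, d), (d, d)]

def ivtc(frames):
    # field matching + decimation: collect each source frame once,
    # in order, by pairing fields that came from the same film frame
    out = []
    for top, bottom in frames:
        for f in (top, bottom):
            if f not in out:
                out.append(f)
    return out

print(ivtc(telecine("ABCD")))  # ['A', 'B', 'C', 'D']
```

Deinterlacing the same stream with QTGMC would instead synthesize new frames from every field, which is why IVTC is usually preferred for telecined film.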
-
Okay so I figured this much out:
Deblock(quant=25, aOffset=25, bOffset=25, planes="yuv")
Before:
[Attachment 71707]
After:
[Attachment 71708]
So to my eyes, deblocking gives a major boost to this source.
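For what the Deblock() call above is actually doing, conceptually: DCT codecs quantize each 8x8 block independently, so visible steps appear at block boundaries, and a deblocker smooths across exactly those boundaries. A toy 1-D sketch of the idea (real deblockers adapt strength to the quantizer, e.g. the quant/aOffset/bOffset parameters; this toy does not):

```python
# toy 1-D "deblock": soften the step at each 8-pixel block boundary
# by pulling the two boundary pixels toward their shared average
def deblock_1d(row, block=8):
    out = list(row)
    for edge in range(block, len(row), block):
        left, right = row[edge - 1], row[edge]
        avg = (left + right) / 2
        out[edge - 1] = (left + avg) / 2
        out[edge] = (right + avg) / 2
    return out

row = [100] * 8 + [120] * 8       # hard step at the block edge
smoothed = deblock_1d(row)
print(smoothed[7], smoothed[8])   # 105.0 115.0 -- the step shrinks from 20 to 10
```

Pixels away from the boundary are untouched, which is why deblocking preserves detail better than a blanket blur.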
Here is the m2v... I just learned how to save this.
https://forum.videohelp.com/attachment.php?attachmentid=71709&stc=1&d=1686810035
Maybe some anti-aliasing too? The deblocking is the best thing I've done to it so far. I wonder why it got so blocky to begin with; a bad tape transfer? Thanks
-
You can try this, though cartoons are not my forte:
https://imgsli.com/MTg2MTA1
-
Generally, telecined video should be inverse telecined, not deinterlaced.
The video isn't blocky. A deblocking filter removes some of the noise, but only as a byproduct of its processing. Dedicated spatial and temporal noise reduction filters will work better.
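On why temporal filtering works so well for this kind of noise: random analog noise changes every frame while the underlying cartoon does not, so a per-pixel median across a few frames removes it without blurring detail. A toy sketch of that (motion-compensated filters like MCTemporalDenoise align moving areas first; this toy assumes a static scene):

```python
from statistics import median

# toy temporal denoise: per-pixel median over 3 consecutive frames
# kills single-frame noise spikes while leaving static detail intact
def temporal_median(prev, cur, nxt):
    return [median(p) for p in zip(prev, cur, nxt)]

f0 = [50, 50, 50, 50]
f1 = [50, 90, 50, 50]   # noise spike in pixel 1
f2 = [50, 50, 50, 50]
print(temporal_median(f0, f1, f2))  # [50, 50, 50, 50]
```

A spatial denoiser would have to blur neighboring pixels to suppress that same spike, which is why temporal (or combined spatio-temporal) filtering is preferred when motion allows it.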
Halo reduction will help.
Code:
Mpeg2Source("sample.demuxed.d2v", CPU2="ooooxx", Info=3) # enable deringing
Crop(8,0,-8,-0)
TFM(d2v="sample.demuxed.d2v")
TDecimate()
dehalo_alpha(rx=3.0, ry=2.0)
# blur away residual dot crawl artifacts near horizontal edges
hzlines = mt_edge("-16 -16 -16 16 16 16 0 0 0").mt_expand().Blur(1.5)
hzblurred = BilinearResize(288, height).Spline36Resize(width, height)
Overlay(last, hzblurred, mask=hzlines)
MCTemporalDenoise(settings="high")
aWarpSharp2(depth=5)
Sharpen(0.4, 0.2)
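The Overlay trick in that script blends a horizontally blurred copy back over the original only where the edge mask is set, so the dot crawl near horizontal edges gets blurred away while everything else keeps its detail. The per-pixel math is just mask-weighted interpolation; a minimal Python sketch (pixel lists stand in for clips, mask values 0-255 as in AviSynth):

```python
# mask-weighted blend, the math behind Overlay(last, blurred, mask=m):
# where the mask is 255 the blurred pixel wins, where it is 0 the
# original survives, and values in between interpolate
def overlay(orig, blurred, mask):
    return [o + (b - o) * m / 255 for o, b, m in zip(orig, blurred, mask)]

orig    = [200, 200, 200, 200]
blurred = [120, 120, 120, 120]
mask    = [0, 255, 128, 0]      # edge mask: blur only the middle pixels
print(overlay(orig, blurred, mask))
```

This is why the quality of the mask (the mt_edge + mt_expand step) matters more than the strength of the blur itself.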
-
Another approach: soft telecine handling through DGDecNV + SCUNet (I don't think there's a SCUNet port for Avisynth):
Code:
# Imports
import vapoursynth as vs
import os
import sys
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'F:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/DGDecNV/DGDecodeNV.dll")
# source: 'C:\Users\Selur\Desktop\sample.demuxed.m2v'
# current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97,
# color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine (soft)
# Loading C:\Users\Selur\Desktop\sample.demuxed.m2v using DGSource
# using 'softpulldown' through DGDecNV
clip = core.dgdecodenv.DGSource("J:/tmp/m2v_d35cd4c630b0ddc4ebee73a92a6f76c2_853323747.dgi", fieldop=1) # 23.976 fps, scanorder: progressive
# Making sure content is perceived as frame based
# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip, '_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (), when it is not set
clip = clip if not core.text.FrameProps(clip, '_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 23.976
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
from vsscunet import scunet as SCUNet
# adjusting color space from YUV420P8 to RGBS for vsSCUNet
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
# denoising using SCUNet
clip = SCUNet(clip=clip, model=4)
# adjusting output color from RGBS to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 23.976fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
# Output
clip.set_output()
-
Attached a few examples of what machine learning and anime-specific upscalers look like (no additional denoising beyond what the resizers themselves do).
Cu Selur