VideoHelp Forum




  1. Hey,

    So I'm still learning, but I can already do basic QTGMC processing, filtering, upscaling, muxing, etc.

    What I want to do now is clean this image up from a DVD I own.

    I think it needs a dot crawl removal pass, and the color is mottled; there's ugly patterning in the colors.

    Any suggestions for filters to apply? Do you agree it has dot crawl? How would you characterize all the issues?

    Thanks

    [Attachment 71705]
  2. Here's a bit more information from DGINDEX.

    [Attachment 71706]


    If it's interlaced then QTGMC should be useful, and I don't need to do an inverse telecine?
  3. Originally Posted by pulsar8 View Post
    Here's a bit more information from DGINDEX.



    If it's interlaced then QTGMC should be useful, and I don't need to do an inverse telecine?
    Post a minimum 10 second clip straight from the source (m2v cut from DGindex is good)
    Something showing steady movement if possible
  4. Okay so I figured this much out:

    Deblock(quant=25, aOffset=25, bOffset=25, planes="yuv")

    Before:

    [Attachment 71707]


    After:

    [Attachment 71708]



    So to my eyes it looks like deblocking gives a major boost to this source.
  5. Here is the m2v... just learned how to save this

    https://forum.videohelp.com/attachment.php?attachmentid=71709&stc=1&d=1686810035
  6. Maybe some anti-aliasing too? The deblocking is the best thing I've done to it so far. I wonder why it got so blocky to begin with. Bad tape transfer? Thanks
  7. You can try this, though cartoons are not my forte

    https://imgsli.com/MTg2MTA1
    *** DIGITIZING VHS / ANALOG VIDEOS SINCE 2001**** GEAR: JVC HR-S7700MS, TOSHIBA V733EF AND MORE
  8. Originally Posted by themaster1 View Post
    You can try this, though cartoons are not my forte

    https://imgsli.com/MTg2MTA1
    That's quite an improvement thanks. I'll have to study all these settings in this file. Lots to learn.
  9. Generally, telecined video should be inverse telecined, not deinterlaced.

    The video isn't blocky. Using a deblocking filter removes some of the noise, but only as a byproduct of its processing. Dedicated spatial and temporal noise reduction filters will work better.

    Halo reduction will help.
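    For reference, the reason field matching plus decimation restores the film rate: 3:2 pulldown spreads every 4 film frames across 5 video frames, so dropping 1 frame in 5 after matching gets the original cadence back. A minimal sketch of that arithmetic in plain Python (illustration only, not part of any Avisynth filter; the names are made up):

```python
from fractions import Fraction

# Exact NTSC rates as rationals (avoid 29.97 / 23.976 floats for this)
VIDEO_FPS = Fraction(30000, 1001)  # 29.97: telecined NTSC video
FILM_FPS = Fraction(24000, 1001)   # 23.976: film content inside NTSC

def frames_after_ivtc(telecined_frames: int) -> int:
    """3:2 pulldown maps 4 film frames onto 5 video frames, so inverse
    telecine (match fields, then drop 1 frame in 5) keeps 4/5 of them."""
    return telecined_frames * 4 // 5

# Decimating one frame in five restores the film rate exactly:
assert VIDEO_FPS * Fraction(4, 5) == FILM_FPS
```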


    Code:
    Mpeg2Source("sample.demuxed.d2v", CPU2="ooooxx", Info=3)  # enable deringing
    
    Crop(8,0,-8,-0)
    TFM(d2v="sample.demuxed.d2v")  # field matching, using pulldown info from the d2v
    TDecimate()                    # drop 1 frame in 5 -> 23.976 fps
    
    dehalo_alpha(rx=3.0, ry=2.0)
    
    # blur away residual dot crawl artifacts near horizontal edges
    hzlines = mt_edge("-16, -16, -16   16 16 16    0 0 0").mt_expand().Blur(1.5)
    hzblurred = BilinearResize(288,height).Spline36Resize(width, height)
    Overlay(last, hzblurred, mask=hzlines)
    
    MCTemporalDenoise(settings="high")
    
    aWarpSharp2(depth=5)
    Sharpen(0.4, 0.2)
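    The masked-overlay step in that script works per pixel: where the edge mask is white, the horizontally blurred clip replaces the source; where it is black, the source passes through untouched. A tiny plain-Python model of that blend, for intuition only (the function name is made up; Overlay does this internally across whole planes):

```python
def overlay_with_mask(base, over, mask):
    """8-bit blend the way Overlay(base, over, mask=mask) combines clips:
    mask 0 keeps base, mask 255 takes over, values in between interpolate."""
    return [b + (o - b) * m // 255 for b, o, m in zip(base, over, mask)]

# Fully white mask -> blurred pixel wins; black -> source pixel untouched
print(overlay_with_mask([100, 100], [200, 200], [255, 0]))  # [200, 100]
```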
  10. Another approach: soft telecine handling through DGDecNV + SCUNet (I don't think there's a SCUNet port for Avisynth):
    Code:
    # Imports
    import vapoursynth as vs
    import os
    import sys
    # getting Vapoursynth core
    core = vs.core
    # Import scripts folder
    scriptPath = 'F:/Hybrid/64bit/vsscripts'
    sys.path.insert(0, os.path.abspath(scriptPath))
    # Loading Plugins
    core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/DGDecNV/DGDecodeNV.dll")
    # source: 'C:\Users\Selur\Desktop\sample.demuxed.m2v'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine (soft)
    # Loading C:\Users\Selur\Desktop\sample.demuxed.m2v using DGSource
    # using 'softpulldown' through DGDecNV
    clip = core.dgdecodenv.DGSource("J:/tmp/m2v_d35cd4c630b0ddc4ebee73a92a6f76c2_853323747.dgi",fieldop=1)# 23.976 fps, scanorder: progressive
    # Making sure content is perceived as frame-based
    # Setting detected color matrix (470bg).
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    # Setting color transfer info (470bg), when it is not set
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    # Setting color primaries info (), when it is not set
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 23.976
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
    from vsscunet import scunet as SCUNet
    # adjusting color space from YUV420P8 to RGBS for vsSCUNet
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # denoising using SCUNet
    clip = SCUNet(clip=clip, model=4)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
    # set output frame rate to 23.976fps (progressive)
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    # Output
    clip.set_output()
    Cu Selur
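    One detail worth understanding in scripts like this: range_s="limited" tells the resizer the 8-bit luma only spans 16..235 (TV range), and the YUV-to-RGBS conversion normalizes accordingly. A rough plain-Python illustration of just the luma normalization (simplified; the real conversion also handles chroma and the color matrix, and the function name here is made up):

```python
def limited_luma_to_float(y: int) -> float:
    """Map TV-range 8-bit luma (16 = black, 235 = white) to 0.0..1.0,
    as a limited-range YUV -> float RGB conversion does for the Y plane."""
    return (y - 16) / 219.0

print(limited_luma_to_float(16), limited_luma_to_float(235))  # 0.0 1.0
```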
    users currently on my ignore list: deadrats, Stears555, marcorocchini
  11. Originally Posted by Selur View Post
    Another approach: Soft Telecine handling through DGDecNV + SCUNet: (don't think there's a SCUNet for Avisynth)
    [...]
    That's quite a result. It almost looks like it went through an AI upscale but obviously it did not.

    I'll have to study this. Thanks.
  12. Attached a few examples of what machine-learning and anime-specific upscalers look like (no additional denoising beyond what the resizers themselves do).

    Cu Selur


