VideoHelp Forum
  1. Hi guys, in this video I would like to use several kinds of noise removal. I would like to:
    1) correct the green and blue streaks. For this I found MergeChroma(TemporalMedian(2));
    2) correct the white comets. For that I found DeVCR;
    3) correct the general noise. The solution I found is RC Basher's script (although it was not written with VHS in mind, as far as I know);
    4) correct the slight "scanline" effect. For that I haven't found a solution yet.

    The point is that MergeChroma(TemporalMedian(2)) works together with the Trim() function, but I can't do the same with DeVCR and RC Basher's script: they work on the whole video, because I don't know how to restrict them to just one part of the file. How can I achieve this? And how can I eliminate the scanlines?
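    Would something like this work? (Just a sketch of what I have in mind; the DeVCR threshold of 30 is only an example value.)

    Code:
    src  = last                                   # the clip loaded above
    head = src.Trim(0, 103459)                    # everything before the noisy segment
    mid  = DeVCR(src.Trim(103460, 104020), 30)    # filter only this segment
    tail = src.Trim(104021, 0)                    # everything after it
    head ++ mid ++ tail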

    My specifications: Windows 7, Avisynth 2.60, Terratec Grabster Av 350 MX

    This is the script

    Code:
    DirectShowSource("F:\Svaghi\Video Editing\Progetti\[COMPLETATI]\Video French 1988\03 Terminale Rock.mpg", fps=25, audio=false).converttoyv12()
    
    #MergeChroma(TemporalMedian(2))
    
    Trim(103460,104020)
    This is the unfiltered, trimmed part http://www.mediafire.com/file/60f656y1157fioi/Telexpress.mpg/file
    Regards

    References:
    MergeChroma(TemporalMedian(2)) https://forum.videohelp.com/threads/389937-UltraTrash-Help
    http://avisynth.nl/index.php/DeVCR
    RC Basher's script http://filmshooting.com/scripts/forum/viewtopic.php?t=23118
    Last edited by VHS_Hunter; 31st Aug 2018 at 12:45.
  2. The video appears to be interlaced, so your scripts need to take that into account. Some of the artifacts only appear on one field.
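    For example, something along these lines works on the individual fields (just a sketch, assuming top field first, which you should verify):

    Code:
    AssumeTFF()                                        # field order assumed TFF; verify it
    fields = SeparateFields()
    even = fields.SelectEven()                         # one field stream
    odd  = fields.SelectOdd()                          # the other field stream
    even = MergeChroma(even, even.TemporalMedian(2))   # filter each stream separately
    odd  = MergeChroma(odd,  odd.TemporalMedian(2))
    Interleave(even, odd)
    Weave()                                            # back to interlaced frames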
  3. You're right, I didn't know that chroma noise reduction is even better after deinterlacing:

    Code:
    DirectShowSource("F:\Svaghi\Video Editing\Progetti\[COMPLETATI]\Video French 1988\03 Terminale Rock.mpg", fps=25, audio=false).converttoyv12()
    QTGMC( Preset="Slow" )
    SelectEven() 
    MergeChroma(TemporalMedian(2))
    Trim(103460,104020)
    And now how should I add DeVCR? When I write it like this:

    Code:
    DirectShowSource("F:\Svaghi\Video Editing\Progetti\[COMPLETATI]\Video French 1988\03 Terminale Rock.mpg", fps=25, audio=false).converttoyv12()
    QTGMC( Preset="Slow" )
    SelectEven() 
    MergeChroma(TemporalMedian(2))
    
    myclip = DirectShowSource("F:\Svaghi\Video Editing\Progetti\[COMPLETATI]\Video French 1988\03 Terminale Rock.mpg", fps=25, audio=false).converttoyv12()
    fixedclip = deVCR(myclip,30)
    StackHorizontal(myclip,fixedclip,Overlay(myclip,fixedclip,mode = "subtract"))
    
    Trim(103460,104020)
    AvsPmod shows me the result but without the MergeChroma effect.
    DGIndex reports the video as progressive (yet it's interlaced), and that's a problem. You'd also need to know the right field parity (pretty sure it's top field first, but it must be verified).
    Last but not least, forget DirectShowSource forever (it's not reliable or frame accurate); use Mpeg2Source (with DGIndex) or LWLibavVideoSource (from the L-SMASH Works package).
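    For example (the .d2v index is created first by opening the .mpg in DGIndex and saving the project):

    Code:
    # frame-accurate loading via an index, instead of DirectShowSource
    Mpeg2Source("Telexpress.d2v")
    # or, if the L-SMASH Works plugin is installed:
    # LWLibavVideoSource("03 Terminale Rock.mpg")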
  5. Ok themaster1, I will change my script tomorrow
    I was NOT suggesting that you deinterlace. I am not a fan of deinterlacing unless you are resizing, in which case deinterlacing is mandatory. I was only suggesting that you need to modify any scripts so that they operate on fields. Also, your color conversion must specify interlaced (e.g. ConvertToYV12(interlaced=true)).

    I did actually try to improve the video. MDegrain and CNR do a great job on the red flickering. I expect that RemoveDirt or Decomet or Despot can take care of the spots. The green flashes can only be fixed by copying information from an adjacent frame. What I usually do is duplicate the previous frame, and then use FilldropsI (a small AVISynth function I adapted from some old MugFunky code) to replace the duplicate with a motion estimated frame. It works amazingly well.
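    In AviSynth terms the idea looks roughly like this (a sketch using the ReplaceFramePrev() and FilldropsI() helpers posted later in this thread; the frame number is made up):

    Code:
    ReplaceFramePrev(1234)   # copy the previous frame over a bad frame, creating an exact duplicate
    FilldropsI()             # exact duplicates are then replaced with motion-estimated frames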
    Although the chroma channels are messed up by the progressive encoding, you can still recover both fields of the interlaced frames. But many of the chroma discolorations appear on two consecutive fields, so TemporalMedian(2) won't remove them. You can compensate for that with:

    Code:
    Mpeg2Source("Telexpress.d2v", CPU2="ooooxx", Info=3) # deringing filter
    AssumeTFF()
    # ColorYUV(gain_y=130, off_y=-18, gamma_y=100, opt="coring") # optional levels/gamma adjustment
    QTGMC(preset="fast") # or whatever preset you want
    even = SelectEven()
    even = MergeChroma(even, even.TemporalMedian(2))
    odd = SelectOdd()
    odd = MergeChroma(odd, odd.TemporalMedian(2))
    Interleave(even,odd)
    # at this point you will have 50p
  8. Yesterday I tried this: I wrote Trim() directly when I loaded the clip, and it worked

    Code:
    video= DirectShowSource("F:\Svaghi\Video Editing\Progetti\[COMPLETATI]\Video French 1988\03 Terminale Rock.mpg", fps=25, audio=false).Trim(103460,104020).killaudio().converttoyv12()
    
    video= QTGMC(video, Preset="Slower" )
    video= SelectEven(video) 
    video= MergeChroma(video, TemporalMedian(video, 2))
    myclip = video
    fixedclip = deVCR(myclip,5)
    #StackHorizontal(myclip,fixedclip,Overlay(myclip,fixedclip,mode = "subtract"))
    return video
    But now I'm using jagabo's script. The chroma channels are messed up because the source is an interlaced VHS, but I "told" the Grabster to capture in a fake progressive mode (through the Magix software) because otherwise there would be too many bad frames. Just one thing: if I want to burn the video to a DVD with AVStoDVD, will the file have 25 fps or 50 fps? Will that be a problem? Once I clear up this doubt, I will experiment with MDegrain, CNR and RemoveDirt.
  9. Originally Posted by VHS_Hunter View Post
    Chroma channels are messed up because the source is an interlaced vhs, but I "told" the Grabster to capture in a fake progressive (through Magix software) because otherwise there would be too many bad frames.
    You should capture VHS as interlaced. The chroma blending in this clip isn't too obvious since it doesn't have large motions and the color resolution is very low. But it will be more obvious in parts of the video with fast-moving colored content.

    Originally Posted by VHS_Hunter View Post
    what if I want to burn the video into a dvd with AVStoDVD, will the file have 25 fps or 50 fps?
    DVD will require 25p or 25i. You can convert 50p to 25i with:

    Code:
    SeparateFields()
    SelectEvery(4,0,3)
    Weave()
    Or convert 50p to 25p with:

    Code:
    SelectEven() # or SelectOdd()
    The video also needs to have the chroma shifted to better align it with the luma:

    Code:
    ChromaShift(c=-4, l=-6)
    There are some places where the two frames (after QTGMC) are identical except for a dropout (and other noise) in one. You can copy the good frame over the bad frame to fix that.
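    That copy can be done with Loop() (the frame number here is just an example; the ReplaceFrameNext()/ReplaceFramePrev() functions posted later in the thread wrap the same trick):

    Code:
    Loop(0, 500, 500)   # drop the bad frame (frame 500 is a made-up number)
    Loop(2, 500, 500)   # duplicate the good frame that slid into its place, restoring the frame count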

    In some places you can use a motion interpolating filter to substitute good frames for frames with dropouts and other distortions. On the left is the original frame; on the right, a frame created by interpolating motion between the two frames surrounding it (ReplaceFramesMC):

    [Attached image rx.jpg: original frame (left) vs. motion-interpolated replacement (right)]
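    Usage is just one call per bad spot, for example, once the ReplaceFramesMC() function posted further down is loaded (the frame numbers here are arbitrary):

    Code:
    ReplaceFramesMC(273, 3)   # rebuild frames 273-275 using frames 272 and 276 as references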
  10. You can replace the static credits at the start by repeating a blend of several frames:

    Code:
    Mpeg2Source("Telexpress.d2v", CPU2="ooooxx", Info=3) 
    AssumeTFF()
    
    ColorYUV(gain_y=130, off_y=-18, gamma_y=100, opt="coring")
    QTGMC(preset="fast")
    
    subst = Trim(0,7) # first 8 frames
    subst = Merge(subst.SelectEven(), subst.SelectOdd()) # average to 4 frames
    subst = Merge(subst.SelectEven(), subst.SelectOdd()) # average to 2 frames
    subst = Merge(subst.SelectEven(), subst.SelectOdd()) # average to 1 frame
    subst = Loop(subst, 20,0,0).AssumeFPS(last.framerate) # convert 1 frame to 20
    return(StackHorizontal(Trim(0,19), subst)) # show original and stabilized
    [Attached file]
  11. Jagabo, it definitely looks better!

    In some places you can use a motion interpolating filter to substitute good frames for frames with dropouts and other distortions. On the left the original frame, on the right a frame by interpolating motion between the two frames surrounding it (ReplaceFramesMC)
    Can you please explain to me how to use it?

    Where can I find ReplaceFramesMC and MDegrain? I cannot find them on http://avisynth.nl/index.php/External_filters

    Sorry for the numerous questions, but I feel I'm close to the goal
  12. ReplaceFramesMC() was originally RX(). You can find it in this post:

    https://forum.videohelp.com/threads/348054-Help-with-filtering-pictures-problems-from-...re#post2177383

    I've updated it since then, sometimes this works better:

    Code:
    ######################################################
    
    function ReplaceFramesMC(clip Source, int N, int "X")
    {
     # N is number of the 1st frame in Source that needs replacing. 
     # X is total number of frames to replace
     #e.g. ReplaceFramesMC(101, 5) would replace 101,102,103,104,105 , by using 100 and 106 as reference points for mflowfps interpolation
     
     X = Default(X, 1)
    
     start=Source.trim(N-1,-1) #one good frame before, used for interpolation reference point
     end=Source.trim(N+X,-1) #one good frame after, used for interpolation reference point
     
     start+end
     AssumeFPS(1) #temporarily FPS=1 to use mflowfps
      
     super = MSuper(pel=2, hpad=0, vpad=0, rfilter=4)
     backward_1 = MAnalyse(super, chroma=false, isb=true, blksize=16, searchparam=3, plevel=0, search=3, badrange=(-24))
     forward_1 = MAnalyse(super, chroma=false, isb=false, blksize=16, searchparam=3, plevel=0, search=3, badrange=(-24))
     backward_2 = MRecalculate(super, chroma=false, backward_1, blksize=8, searchparam=1, search=3)
     forward_2 = MRecalculate(super, chroma=false, forward_1, blksize=8, searchparam=1, search=3)
     backward_3 = MRecalculate(super, chroma=false, backward_2, blksize=4, searchparam=0, search=3)
     forward_3 = MRecalculate(super, chroma=false, forward_2, blksize=4, searchparam=0, search=3)
     MBlockFps(super, backward_3, forward_3, num=X+1, den=1, mode=0)
    
     AssumeFPS(FrameRate(Source)) #return back to normal source framerate for joining
     Trim(1, framecount-1) #trim ends, leaving replacement frames
      
     Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
    }
    
    
    
    
    ######################################################
    #
    # Insert missing frames
    #
    ######################################################
    
    function InsertFramesMC(clip Source, int N, int "X")
    {
      # Insert X motion interpolated frames at N
      # N is the insertion point
      # X is the number of frames to insert
      # the frames will be interpolated with Source frames N-1 and N as references
    
      X = Default(X, 1)
    
      loop(Source, X+1, N, N)
      ReplaceFramesMC(N, X)
    }
    
    ######################################################
    
    function ReplaceFrameNext(clip Source, int N)
    {
      # Replace frame at N with frame at N+1
      # N is the frame to replace
      # with frame at N+1
    
      loop(Source, 0, N, N)
      loop(last, 2, N, N)
    }
    
    ######################################################
    
    function ReplaceFramePrev(clip Source, int N)
    {
      # Replace frame at N with frame at N-1
      # N is the frame to replace
      # with frame at N-1
    
      loop(Source, 0, N, N)
      loop(last, 2, N-1, N-1)
    }
    
    ######################################################
    That also includes InsertFramesMC(), which inserts a frame rather than replacing an existing one. Also included are ReplaceFramePrev() and ReplaceFrameNext(), which replace a frame with a copy of the previous or next frame. You have to manually locate the frames you want to replace, then add the calls to the script.
    Thanks. I searched the forum thoroughly for MDegrain and merged your script with johnmeyer's, modifying its values (see lines 25 and 45). Now I have a better understanding of the variables, but there are still some problems left (see point 1):


    Code:
    source= Mpeg2Source("F:\Svaghi\Video Editing\Report e Guide\Avisynth Guide\[Provati]\[Decomet - Purple]\Telexpress.d2v", CPU2="ooooxx", Info=3).AssumeTFF()
    
    QTGMC(source, preset="fast") # or whatever preset you want
    even = SelectEven()
    even = MergeChroma(even, even.TemporalMedian(2))
    odd = SelectOdd()
    odd = MergeChroma(odd, odd.TemporalMedian(2))
    source= Interleave(even,odd)
    
    return source 
    
    source= ChromaShift(c=-4, l=-6)
    #source= ReplaceFramesMC(source, 273, 3)
    return source
    
    source= DeSpot(p1=35, p2=14, mthres=25)
    
    #Denoiser script for interlaced video using MDegrain2
    SetMemoryMax(768)
    
    Loadplugin("C:\Program Files\AviSynth 2.5\plugins\MVTools\mvtools2.dll")
    
    source=source
    output=MDegrain2i2(source,8,4,500,0)  
    return output
    
    #Remove /* */ comments from following code and comment out the above line
    #to see each frame before, then after.
    
    /*
    return Interleave(
    \    source
    \  , output
    \ )
    */
    
    #-------------------------------
    
    function MDegrain2i2(clip source, int "blksize", int "overlap", int "denoising_strength", int "dct")
    {
    Vshift=0 # 2 lines per bobbed-field per tape generation (PAL); original=2; copy=4 etc
    Hshift=0 # determine experimentally 
    overlap=default(overlap,0) # overlap value (0 to 4 for blksize=8)
    denoising_strength=default(denoising_strength, 500)
    dct=default(dct,0) # use dct=1 for clip with light flicker
    
    fields=source.SeparateFields() # separate by fields
    
    #This line gets rid of vertical chroma halo
    #fields=MergeChroma(fields,crop(fields,Hshift,Vshift,0,0).addborders(0,0,Hshift,Vshift))
    #This line will shift chroma down and to the right instead of up and to the left
    #fields=MergeChroma(fields,Crop(AddBorders(fields,Hshift,Vshift,0,0),0,0,-Hshift,-Vshift))
    
    super = fields.MSuper(pel=2, sharp=1)
    backward_vec2 = super.MAnalyse(isb = true, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
    forward_vec2 = super.MAnalyse(isb = false, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
    backward_vec4 = super.MAnalyse(isb = true, delta = 4, blksize=blksize, overlap=overlap, dct=dct)
    forward_vec4 = super.MAnalyse(isb = false, delta = 4, blksize=blksize, overlap=overlap, dct=dct)
    
    MDegrain2(fields,super, backward_vec2,forward_vec2,backward_vec4,forward_vec4,thSAD=denoising_strength ) 
    
    Weave()
    }
    I would like to improve this result: http://www.mediafire.com/file/d7dznz6lztoe958/better.mpg/file
    1) johnmeyer, could you please modify my script to add CNR? Even though I experimented with the MDegrain2 settings, there's still too much red flickering. I found CNR here http://priede.bf.lu.lv/ftp/pub/MultiVide/video/VirtualDub/apraksts/cnr-en.html but I don't know how to use it.
    2) If I eliminate the first "return source" I can use ChromaShift and ReplaceFramesMC, but MergeChroma is disabled. Why? I would like to have all of them enabled.

    If your scripts are very different from mine, I don't mind; I just care about the result. I'm 90% done, please let me finish, and thank you for your patience.
    Last edited by VHS_Hunter; 31st Aug 2018 at 13:50.
  14. I used a Vegas script to replace each really bad frame (primarily the green flashes) with a duplicate of the previous frame. I then frameserved this out of Vegas into the following script. As you will see, this script includes the "Filldrop" function I created many years ago which replaces exact duplicates with a motion estimated frame, but leaves all other frames alone. I then used my RemoveTears function, which uses Despot, to attempt to remove some of the horizontal tearing. Feel free to knock yourself out playing around with the Despot parameters. That function is capable of great things, but it requires the patience of a saint to get the variables right for any given clip.

    I did add CNR2 to the script, so you can see how that functions.

    Here is a link to the result:

    Denoised Intro

    Code:
    Loadplugin("E:\Documents\My Videos\AVISynth\AVISynth Plugins\plugins\MVTools\mvtools2.dll")
    LoadPlugin("E:\Documents\My Videos\AVISynth\AVISynth Plugins\plugins\Cnr2.dll")
    
    #Modify this line to point to your video file
    source=AVISource("E:\fs.avi").killaudio().AssumeTFF().converttoYV12()
    
    fields = filldropsI(source).bob(0.0,1.0)
    
    denoised=MDegrain(fields,8,4,800,0)  
    output = separatefields(denoised).selectevery(4,0,3).weave()
    return output
    
    #-------------------------------
    function MDegrain(clip source, int "blksize", int "overlap", int "denoising_strength", int "dct") {
      denoising_strength=default(denoising_strength, 400)
      overlap=default(overlap,0) # overlap value (0 to 4 for blksize=8)
      dct=default(dct,0) # use dct=1 for clip with light flicker
      chroma=source.Cnr2("oxx",8,16,191,100,255,32,255,false)
      super = chroma.MSuper(pel=2, sharp=1)
      backward_vec1 = super.MAnalyse(isb = true, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
      forward_vec1 = super.MAnalyse(isb = false, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
      backward_vec2 = super.MAnalyse(isb = true, delta = 4, blksize=blksize, overlap=overlap, dct=dct)
      forward_vec2 = super.MAnalyse(isb = false, delta = 4, blksize=blksize, overlap=overlap, dct=dct)
    
      denoised = MDegrain2(chroma,super, backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=800) 
    
    #  RemoveDirtMC(denoised, 80, false)
       Remove_Tears(denoised)
    }
    
    function RemoveDirtMC(clip,int "limit", bool "_grey")
    {
      _grey=default(_grey, false)
      limit = default(limit,6)
      
      prefiltered = RemoveGrain(clip,2)
      superfilt = MSuper(prefiltered, hpad=32, vpad=32,pel=2)
    
      super=MSuper(clip, hpad=32, vpad=32,pel=2)
    
      bvec = MAnalyse(superfilt,isb=true,  blksize=16, overlap=2,delta=1, truemotion=true)
      fvec = MAnalyse(superfilt,isb=false, blksize=16, overlap=2,delta=1, truemotion=true)
    
      bvec_re = Mrecalculate(super,bvec,blksize=8, overlap=0,thSAD=100)
      fvec_re = Mrecalculate(super,fvec,blksize=8, overlap=0,thSAD=100)
    
      backw = MFlow(clip,super,bvec_re)
      forw  = MFlow(clip,super,fvec_re)
    
      clp=interleave(forw,clip,backw)
      clp=clp.RemoveDirt(limit,_grey)
      clp=clp.SelectEvery(3,1)
      return clp
    }
    
    function RemoveDirt(clip input, int "limit", bool "_grey")
    {
      clensed=input.Clense(grey=_grey, cache=4)
      alt=input.RemoveGrain(2)
      return RestoreMotionBlocks(clensed,input,alternative=alt,pthreshold=6,cthreshold=8, gmthreshold=40,dist=3,dmode=2,debug=false,noise=limit,noisy=4, grey=_grey)
    }
    
    function Remove_Tears(clip source) {
    
    #Create mask
    ml           = 100 # mask scale
    scene_change = 400 # scene change
    
    super = MSuper(source,pel=2, sharp=1)
    
    vf = MAnalyse(super,isb=false) # forward vectors 
    vb = MAnalyse(super,isb=true)  # backward vectors 
    
    cf = MFlow(source,super,vf,thSCD1=scene_change) # previous compensated forward
    cb = MFlow(source,super,vb,thSCD1=scene_change) # next compensated backward 
    
    sadf = MMask(super,vf, ml=100,kind=1,gamma=1, thSCD1 = scene_change)
    msadf= sadf.Binarize() # binary inverted forward SAD mask
    
    sadb = MMask(super,vb, ml=ml, gamma=1, kind=1, thSCD1 = scene_change) # backward SAD mask
    msadb= sadb.Binarize() # binary inverted backward SAD mask
    
    msad = Logic(msadf,msadb,"OR") # combined inverted SAD mask
    msad = msad.Expand() # expanded inverted SAD mask
    msadi = Interleave(msad, msad, msad) # interleaved 3-frame inverted SAD mask
    
    Interleave(cf,source,cb) # interleave forward compensated, source, and backward compensated
    
    #DeSpot(show=0,p1percent=3,dilate=3,maxpts=400,p2=6,mthres=9,p1=12,pwidth=140,pheight=4,mwidth=7,mheight=5,merode=33,interlaced=false,seg=1,sign=-1,ranked=true)
    #DeSpot(show=0,p1percent=10,dilate=1,maxpts=400,p2=12,mthres=18,p1=24,pwidth=40,pheight=12,mwidth=7,mheight=5,merode=33,interlaced=false,seg=1,sign=1,ranked=true,extmask=msadi)
    DeSpot(show=0,p1percent=3,dilate=3,maxpts=0,p2=6,mthres=9,p1=12,pwidth=640,pheight=14,mwidth=7,mheight=5,merode=33,interlaced=false,seg=1,sign=0,ranked=true)
    
    SelectEvery(3,1)
    }
    
    function filldropsI (clip c)
    {
      even = c.SeparateFields().SelectEven()
      super_even=MSuper(even,pel=2)
      vfe=manalyse(super_even,truemotion=true,isb=false,delta=1)
      vbe=manalyse(super_even,truemotion=true,isb=true,delta=1)
      filldrops_e = mflowinter(even,super_even,vbe,vfe,time=50)
    
      odd  = c.SeparateFields().SelectOdd()
      super_odd=MSuper(odd,pel=2)
      vfo=manalyse(super_odd,truemotion=true,isb=false,delta=1)
      vbo=manalyse(super_odd,truemotion=true,isb=true,delta=1)
      filldrops_o = mflowinter(odd,super_odd,vbo,vfo,time=50)
    
      evenfixed = ConditionalFilter(even, filldrops_e, even, "YDifferenceFromPrevious()", "lessthan", "0.1")
      oddfixed  = ConditionalFilter(odd,  filldrops_o, odd,  "YDifferenceFromPrevious()", "lessthan", "0.1")
    
      Interleave(evenfixed,oddfixed)
      Weave()
    }
  15. This is some really heavy and slow filtering but maybe it's something like what you want. I didn't apply any "manual" fixes with ReplaceFramesMC. There are some distortions in the rotating box but they're not too noticeable.
    [Attached file]
    Originally Posted by johnmeyer View Post
    That function is capable of great things, but it requires the patience of a saint
    Haha, good to know

    Johnmeyer and Jagabo, thank you very much! Your results are both excellent
  17. Here's the script I used in post #15:

    Code:
    Mpeg2Source("Telexpress.d2v", CPU2="ooooxx", Info=3) # use the deringing feature, no deblocking
    AssumeTFF()
    
    ColorYUV(gain_y=130, off_y=-18, gamma_y=100, opt="coring")
    BilinearResize(width/2, height)
    QTGMC(preset="slow")
    
    # replace the opening text with a single frame, averaged from the first 8
    subst = Trim(0,7)
    subst = Merge(subst.SelectEven(), subst.SelectOdd())
    subst = Merge(subst.SelectEven(), subst.SelectOdd())
    subst = Merge(subst.SelectEven(), subst.SelectOdd())
    subst = Loop(subst, 20,0,0).AssumeFPS(last.framerate)
    subst + Trim(20,0)
    
    even = SelectEven()
    even = MergeChroma(even, even.TemporalMedian(2)).RemoveDirtMC(150)
    odd = SelectOdd()
    odd = MergeChroma(odd, odd.TemporalMedian(2)).RemoveDirtMC(150)
    Interleave(even,odd)
    
    rmask = MaskHS(StartHue=100, EndHue=130, MaxSat=150, MinSat=30).Blur(1.0).BilinearResize(last.width, last.height).ConvertToYV12()
    filtered = TNLMeans(ax=8, ay=8, az=8, h=16)
    Overlay(last, filtered, mask=rmask)
    
    MergeChroma(aWarpSharp(depth=5), aWarpSharp(depth=20))
    Spline36Resize(width*2, height)
    ChromaShift(c=-4, l=-6)
    Stab(dxmax=0, dymax=2)
    I normally wouldn't use TemporalMedian(2) to fix the chroma because it would cause too much color artifacting. But this clip doesn't have any fast motion, the color is already screwed up, and VHS has very low color resolution anyway, so it's OK here.

    150 is normally way too high for RemoveDirtMC(). But again, motion here is pretty low, so it doesn't screw things up too much. It fixes a lot of the dropouts, so I think it's worth using here.

    I then build a mask that marks all the red areas and create a strongly smoothed video with a temporal/spatial filter, TNLMeans. Finally, I replace the red portions of each frame with that heavily filtered version. TNLMeans with such high values is very slow.

    The luma is slightly sharpened, the chroma more strongly sharpened, the chroma is shifted to better align it with the luma, and some vertical bounce is reduced with Stab().
  18. Thank you
  19. Here's a list of frame replacements that can be added to the end of the previous script:

    Code:
    ReplaceFramesMC(26)
    ReplaceFramesMC(28,3)
    ReplaceFramesMC(38)
    ReplaceFramesMC(40)
    ReplaceFramesMC(57,5)
    ReplaceFramesMC(126)
    ReplaceFramesMC(251,5)
    ReplaceFramesMC(567)
    ReplaceFramesMC(569)
    ReplaceFramesMC(652,2)
    ReplaceFramesMC(696)
    ReplaceFramesMC(698)
    ReplaceFramesMC(721)
    ReplaceFramesMC(764,4)
    ReplaceFramesMC(1015)
    ReplaceFramesMC(1017)
    ReplaceFramesMC(1089)
    ReplaceFramesMC(1091)
    ReplaceFramePrev(1120)
    ReplaceFramePrev(1121)
    I also cropped away the junk at the borders and replaced it with pure black. The original video (simple bob) and the filtered version are attached, side by side.
    [Attached file: side-by-side comparison]
  20. Ok; I opened another video with DGIndex, selected Save Project and Demux Video, and modified your script:

    Code:
    Mpeg2Source("C:\Users\utente\Desktop\Registrazione - 0002.d2v", CPU2="ooooxx", Info=3).Trim(625,2502).AssumeTFF().converttoYV12()
    
    ColorYUV(gain_y=130, off_y=-18, gamma_y=100, opt="coring")
    BilinearResize(width/2, height)
    QTGMC(preset="slow")
    
    even = SelectEven()
    even = MergeChroma(even, even.TemporalMedian(2)).RemoveDirtMC(150)
    odd = SelectOdd()
    odd = MergeChroma(odd, odd.TemporalMedian(2)).RemoveDirtMC(150)
    Interleave(even,odd)
    
    rmask = MaskHS(StartHue=100, EndHue=130, MaxSat=150, MinSat=30).Blur(1.0).BilinearResize(last.width, last.height).ConvertToYV12()
    filtered = TNLMeans(ax=8, ay=8, az=8, h=16)
    Overlay(last, filtered, mask=rmask)
    
    MergeChroma(aWarpSharp(depth=5), aWarpSharp(depth=20))
    Spline36Resize(width*2, height)
    ChromaShift(c=-4, l=-6)
    Stab(dxmax=0, dymax=2)
    and it crashes. Is it the plugins' fault? I have AviSynth 2.60 32-bit on Windows 7 64-bit; I added TNLMeans 103 (http://avisynth.nl/index.php/TNLMeans) and aWarpSharp beta 1 (http://avisynth.nl/index.php/AWarpSharp).
  21. Any error message?
    I had none; I just had to wait for AvsPmod and WinFF to load the script. It's just very slow on my PC, and everything worked. Thank you again to both of you, johnmeyer and jagabo.
  23. Yes, it takes a long time for that script to load. It took several hours to render on my computer.

    You could probably speed it up by quite a bit by splitting it into two or three separate scripts and rendering to losslessly compressed intermediates. If it was a much longer clip I'd do it that way.
    Originally Posted by jagabo View Post
    You could probably speed it up by quite a bit by splitting it into two or three separate scripts and rendering to losslessly compressed intermediates
    ...Or open one AVS inside another AVS. By the way, which intermediate format would you use? I chose MP4, rendered through WinFF.
  25. Originally Posted by VHS_Hunter View Post
    You could probably speed it up by quite a bit by splitting it into two or three separate scripts and rendering to losslessly compressed intermediates
    ...Or open an avs in another avs.
    That probably won't help. The issue when using multiple temporal filters is the way frames may be read out of order. If a later filter requests a frame that's no longer in memory from an earlier temporal filter, that earlier filter has to read and process several frames again, not just the one frame. So processing can become much slower than the sum of the two filters' processing times.

    Originally Posted by VHS_Hunter View Post
    By the way, which intermediate format would you use? I choose mp4, through WinFF rendering.
    I usually use Lagarith or UT Video Codec in an AVI file.
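    The two-stage workflow would look something like this (just a sketch; filenames are placeholders and the lossless codec must be installed):

    Code:
    # stage1.avs -- the slow temporal filtering, rendered once to a lossless AVI
    #   Mpeg2Source("Telexpress.d2v")
    #   AssumeTFF()
    #   QTGMC(preset="slow")
    #   (encode stage1.avs to "stage1.avi" with Lagarith or UT Video)

    # stage2.avs -- the rest of the chain reads that intermediate
    AviSource("stage1.avi")
    ChromaShift(c=-4, l=-6)
    Stab(dxmax=0, dymax=2)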
  26. Thanks; I'm curious about this method, but I think it's better for me to start another thread, I don't want to go off-topic.
  27. Hi to everybody, it's been a long time since my last post; I would like to have a better understanding of jagabo's script in #17.

    Code:
    BilinearResize(width/2, height)
    QTGMC(preset="slow")
    Why resize the video?
    Why use QTGMC without further settings (EZ Denoise, Sigma, ...)?
  28. Originally Posted by VHS_Hunter View Post
    Why resizing the video?
    Some noise reduction from downscaling. VHS is low resolution horizontally so downscaling by half doesn't hurt it much. Faster processing by QTGMC later. You don't have to downscale if you don't want to.

    Originally Posted by VHS_Hunter View Post
    Why using QTGMC without further settings (EZ Denoise, Sigma....)?
    Because it was just an example. Use whatever settings you want. QTGMC's noise reduction isn't very good: if you use settings strong enough to remove the noise from a video like this, it removes too much detail.
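    If you do want to experiment with QTGMC's built-in denoising, it looks something like this (the values are only examples):

    Code:
    QTGMC(preset="slow", EZDenoise=2.0, DenoiseMC=true)   # higher EZDenoise removes more noise but also more detail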
  29. Thanks.

    Code:
    rmask = MaskHS(StartHue=100, EndHue=130, MaxSat=150, MinSat=30).Blur(1.0).BilinearResize(last.width, last.height).ConvertToYV12()
    ......................
    Overlay(last, filtered, mask=rmask)
    1) I can't understand what the values in "last" are; I just know it is a reserved keyword in the AviSynth language.
    2) Does the dot indicate chaining? E.g., is BilinearResize().ConvertToYV12() equal to ConvertToYV12(BilinearResize())?
  30. Originally Posted by VHS_Hunter View Post
    1) I cannot understand what are the values in "last", I just know it is a reserved keyword for Avisynth language.
    Most filters take an input clip and produce an output clip. Any time you don't specify a clip by name, the clip named "last" is assumed. So a sequence like:

    Code:
    AviSource("filename.avi")
    BilinearResize(width/2, height/2)
    QTGMC()
    is equivalent to:

    Code:
    last = AviSource("filename.avi")
    last = BilinearResize(last, width/2, height/2)
    last = QTGMC(last)

    Originally Posted by VHS_Hunter View Post
    2) Does the point indicates an insertion, e.g. BilinearResize().ConvertToYV12() is equal to ConvertToYV12(BilinearResize())?
    The period pipes the output of the filter on the left into the filter on the right. So yes, your assumption is correct. It's also effectively the same as:

    Code:
    BilinearResize()
    ConvertToYV12()


