VideoHelp Forum
  1. Hi There
    I'm new to the world of VHS digitization and am trying to get childhood memories into digital form.
    How would I best go about achieving the following in my sample:

    - I would like to minimize / get rid of the "color changes" that appear mostly on the top half of the screen
    - Reduce / eliminate the grain
    - Sharpen
    - Do some color correction to make the whole thing more natural

    I'm aware that I have to deinterlace first. I'm using AviSynth+ with QTGMC for that:
    Code:
    QTGMC(preset="Slow",EdiThreads=1)
    Here is my sample file:
    https://drive.google.com/file/d/1nBNQn1-xOMsof_-kKWn0M7Ee6tki9SsJ/view?usp=sharing

    Thank you for your tips.
  2. You do it with this script:
    AVISource("MyVideo")
    assumetff()
    ConvertToYV16(interlaced=true)
    orig=last
    ev=orig.assumetff().separatefields().selecteven()
    od=orig.assumetff().separatefields().selectodd()
    ev
    ue_chroma = UToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
    ve_chroma = VToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
    YToUV(ue_chroma, ve_chroma)
    MergeLuma(ev)
    ev_filtered=last
    od
    uo_chroma = UToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
    vo_chroma = VToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
    YToUV(uo_chroma, vo_chroma)
    MergeLuma(od)
    od_filtered=last
    interleave(ev_filtered,od_filtered)
    assumefieldbased().assumetff().weave()
    Or another variant (less destructive for the chroma) that I like:
    AVISource("MyVideo")
    assumetff()
    ConvertToYV16(interlaced=true)
    orig=last
    ev=orig.assumetff().separatefields().selecteven()
    od=orig.assumetff().separatefields().selectodd()
    ev
    ue_chroma = UToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
    ve_chroma = VToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
    YToUV(ue_chroma, ve_chroma)
    MergeLuma(ev)
    ev_filtered=last
    od
    uo_chroma = UToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
    vo_chroma = VToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
    YToUV(uo_chroma, vo_chroma)
    MergeLuma(od)
    od_filtered=last
    interleave(ev_filtered,od_filtered)
    assumefieldbased().assumetff().weave()
  3. Originally Posted by themaster1
    Thanks. To my understanding, that mostly solves the chroma problems. Any idea how I'd best battle the graininess and color correct a bit?
  4. You may want to try something along this route:

    Code:
    AVISource("your.avi")
    qtgmc(Preset="Fast") #bob-deinterlace
    crop(16,4,-16,-4)
    mergechroma(Levels(0,1.0,255,4,235,coring=false),last) #try or skip
    Tweak(hue=10,sat=1.0,bright=0,cont=1.0,coring=false)  #tweak the colors (hue) here
    converttoYV12().derainbow(10) #reduce the rainbows
    MCDegrainSharp() #denoise, degrain and sharpen
    addborders(8,4,8,4)
    Last edited by Sharc; 24th Nov 2020 at 14:44.
  5. Originally Posted by manono
    The source is 720x480 (typical of NTSC capture) but is 25 frames per second (typical of PAL capture).
    It's also interlaced, but the file is marked progressive, and the chroma is 4:2:0 subsampled.

    Is this the actual capture file?
  6. Originally Posted by capfirepants
    Why does your sample contain interlaced frames that are encoded progressive? You should provide a sample of the original video, not a re-encoded one.
  7. I had to capture the sample multiple times due to missing frames - I cut them together in Premiere Pro and exported that.
    I realise I should probably do this at the end?

    I've now cut a new sample from the originally captured file with ffmpeg:
    Code:
    ffmpeg.exe -ss 100s -i input.mp4 -vcodec copy -vframes 600 -an output.mp4

    I think this should prevent ffmpeg from re-encoding. Here is that sample - is it better?
    https://drive.google.com/file/d/1t1SbCseu_WA0dmJmwErPXICGnoQMi1Bp/view?usp=sharing
  8. The new clip still has interlaced YUV 4:2:0 frames encoded progressively. That causes the chroma of the two fields to blend together. You need to set the encoder to use interlaced mode. Actually, you shouldn't be using an AVC encoder to capture. That causes noise to get clumpy and harder to remove. You should capture raw YUV video and encode with a lossless codec like huffyuv, lagarith, ut video codec, etc. Then filter and encode from there.

    Here are some quick adjustments in AviSynth:

    Code:
    LWLibavVideoSource("sample2.mp4", cache=false, prefer_hw=2) 
    AssumeTFF()
    Crop(4,0,-12,-0) # ITU 704x480 frame size
    ColorYUV(off_y=-8, cont_u=-25, cont_v=-25) # lower levels, reduce saturation to legalize chroma
    QTGMC() # double frame rate deinterlace
    dehalo_alpha(rx=4.0, ry=3.0) # halo reduction (pretty damaging)
    MergeChroma(aWarpSharp2(depth=5),aWarpSharp2(depth=15)) # sharpen chroma more than luma
    MergeChroma(MCTemporalDenoise(settings="low"), MCTemporalDenoise(settings="very high")) # light noise reduction in luma, heavy  in chroma
    ChromaShiftSP(x=1, y=1) # shift colors up and left
  9. Thanks for the help. I'm still very new to this topic.
    Can you tell me how you found out that the clip is interlaced YUV 4:2:0 - is there software that shows me this information?
    Also, could you tell me if this last file is better?
    https://drive.google.com/file/d/1ycnXJdhgATW4b5b1789Bnm7kbtTlt9fZ/view?usp=sharing
  10. MediaInfo is usually pretty good at determining if a video is RGB, YUV 4:2:2, YUV 4:2:0, etc. It can also determine if the video is encoded interlaced or progressive (this is a matter of how the codec handles the video internally -- interlaced video needs special handling within the codec). You can tell the clip is interlaced by viewing it in an editor that doesn't deinterlace. Interlaced video will show comb artifacts whenever there is motion.
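    You can also check inside AviSynth itself: the built-in Info() filter overlays the clip's properties on the picture. A minimal sketch (the file name is just a placeholder for whichever clip you want to inspect):

    Code:
    LWLibavVideoSource("sample2.mp4", cache=false) # placeholder file name
    Info() # overlays frame size, frame rate, colorspace and field/parity information on the video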

    That UT Video clip is screwed up. It was resized vertically (my guess is you cropped away the head-switching noise at the bottom of the frame and resized back to 576 lines) while it was interlaced, without special interlaced handling. That has caused partial blending of the two fields. They can no longer be separated cleanly. It needs to be re-captured without that resizing.

    An interlaced frame of video contains two half-pictures (called fields), one in all the even scan lines, one in all the odd scan lines. Those two pictures are intended to be viewed separately, and sequentially (they represent pictures taken at two different times, 1/50th of a second apart in PAL video). If there is no motion between the two fields they weave together into a nice single picture. But if there is motion you will see comb artifacts (unless the editor/player is hiding that from you by deinterlacing).
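    If you want to see those fields for yourself, a couple of lines in AviSynth are enough. A quick sketch (again with a placeholder file name) that shows each frame as its two half-height fields, one after the other:

    Code:
    LWLibavVideoSource("sample2.mp4", cache=false) # placeholder file name
    AssumeTFF() # VHS captures are normally top field first
    SeparateFields() # step through the result: each original frame appears as two half-pictures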
  11. Thanks for the detailed explanation. Are there any settings I need to have set in VirtualDub2 to make sure the recording works correctly?

    Edit: I've gone and downloaded VirtualDub2 to save the file (I was using OBS before) and tried 2 captures, one with Huffyuv and another with Lagarith. Wow, those files are absolutely HUGE.
    While playing back in VLC I noticed that the audio is weird and VLC reports the wrong clip length - both clips are only a few seconds long. The audio is distorted and really low, like it's being played back slowly. I've attached two samples - do you know what I'm doing wrong?

    lagarith: https://drive.google.com/file/d/1hlPyEpDGYefQP-1pEULz1bCGHJn91dtk/view?usp=sharing
    huffyuv: https://drive.google.com/file/d/1IH9o3j0oA2RbiF3go--3EfJfKl6L70fa/view?usp=sharing

    Edit 2: I think I've fixed the audio / length problem by changing the timing settings from "sync audio to video by resampling..." to "sync video to audio by adjusting...". Is this a sensible setting?
    Last edited by capfirepants; 25th Nov 2020 at 17:48.
  12. The video looks ok in s4.avi and s5.avi -- though it looks like frames were dropped at the start of the clips. That may be the reason for the audio problem. Try disabling A/V sync entirely. If everything is working right it's not needed. When capturing with VirtualDub turn off audio playback, untick Audio -> Enable Audio Playback. That often causes A/V sync problems. Is VirtualDub reporting any dropped/duplicated frames while capturing?

    Oh, s5.avi is RGB, not YUV 4:2:2. Maybe a settings error?
  13. I think I have things figured out quite well now - more or less
    I've tried using this script to edit this sample: https://drive.google.com/file/d/1A6DkaAv5PzPFHVG6Vh2ajn9wUdcX1bV8/view?usp=sharing

    Code:
    ffVideoSource("input.avi")
    assumetff()
    QTGMC(preset="Slower")
    
    orig=last
    ev=orig.assumetff().separatefields().selecteven()
    od=orig.assumetff().separatefields().selectodd()
    ev
    ue_chroma = UToY(ev).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
    ve_chroma = VToY(ev).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
    
    YToUV(ue_chroma, ve_chroma)
    MergeLuma(ev)
    ev_filtered=last
    od
    uo_chroma = UToY(od).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
    vo_chroma = VToY(od).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
    
    YToUV(uo_chroma, vo_chroma)
    MergeLuma(od)
    od_filtered=last
    interleave(ev_filtered,od_filtered)
    assumefieldbased().assumetff().weave() 
    
    crop(12,4,-12,-16)
    Tweak(hue=0,sat=1.3,bright=0,cont=1.0,coring=false)  #tweak the colors (hue) here
    MCDegrainSharp() #denoise, degrain and sharpen
    AutoAdjust(auto_gain=true, dark_limit=1.5, bright_limit=1.50, gamma_limit=1.25, gain_mode=0, chroma_process=100, auto_balance=false, chroma_limit=1.05, balance_str=0.75, change_status="", high_quality=false, high_bitdepth=false, input_tv=true, output_tv=true, debug_view=false)
    Prefetch(8)
    This is my result:
    https://drive.google.com/file/d/12tmjbSgxhaH4hapQjGVjGLP6WLO5KMzN/view?usp=sharing

    I'm not unhappy with the result, but I'm also not sure whether it could be sharper and a bit more vibrant.
    In your experience, would that be possible?
    Anything in particular I should tweak?
  14. lordsmurf (Video Restorer)
    Stop using preset=Slow/Slower.
    It blurs video.
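    For what it's worth, a lighter call might look like this - only a sketch, and the Sharpness value is an assumed starting point to tune by eye, not something taken from this post:

    Code:
    QTGMC(preset="Fast", Sharpness=1.0, EdiThreads=1) # lighter preset to avoid the blurring; Sharpness is a guess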
  15. Originally Posted by capfirepants
    Anything in particular I should tweak?
    I didn't examine it in detail and I couldn't download your encoded result (access problem), so just a few thoughts about your source and script:

    - Your captured .avi has heavily clipped and crushed whites from frame 158 onwards (the sky has no color, no structure). The damage may have been done by the camera already. You could however try AviSynth's Levels(...) to mitigate the problem somewhat (see the sketch after this list).
    - The first block in your script re-interlaces the video at the end which is ok if you want to keep it interlaced. However, the subsequent MCDegrainSharp() is a temporal-spatial filter which should be applied on progressive video or on the even/odd grouped fields. Or you could try SMDegrain(interlaced=true) instead, or perhaps use the denoiser in QTGMC().
    - The Tweak + AutoAdjust combination seems to introduce rather high contrast and saturation, but the main point is that it suits your eyes.
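    A rough sketch of the Levels and denoising ideas above (the file name and all numbers are assumptions to be tuned by eye, not values measured from your clip):

    Code:
    AVISource("input.avi") # placeholder file name
    AssumeTFF()
    Levels(16, 1.0, 255, 16, 235, coring=false) # compress the top end a little to pull the clipped whites back toward legal range
    SMDegrain(tr=2, thSAD=300, interlaced=true, tff=true) # temporal denoising with proper interlaced handling, instead of MCDegrainSharp on the re-interlaced clip
    # or let QTGMC denoise while deinterlacing, e.g. QTGMC(preset="Medium", EZDenoise=2)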
    Last edited by Sharc; 27th Nov 2020 at 10:38.
  16. I tried some halo reduction:

    Code:
    LWlibavVideoSource("bp1_testsample_na.avi", cache=false) 
    assumetff()
    ConvertToYV12(interlaced=true)
    
    SeparateFields()
    BilinearResize(480, height) # downscale width
    
    before = last
    dehalo_alpha(rx=2.0, ry=1.0, BrightStr=1.3, DarkStr=1.3)
    dehalo_alpha(rx=6.0, ry=1.0, BrightStr=0.5, DarkStr=0.5)
    emask = mt_edge(before, thy1=30, thy2=30).mt_expand().mt_expand().mt_expand().Blur(1.0)
    Overlay(before, last, mask=emask)
    
    Spline36Resize(720,288) # scale back up
    Weave()
    QTGMC()
    MergeChroma(MCTemporalDenoise(settings="low"), MCTemporalDenoise(settings="very high")) # light luma NR, heavy chroma NR
    Dehalo_alpha() is pretty damaging to the picture (recapture the video without the VHS sharpen filters if you can -- some decks have a sharpness control) so I limited it to only the sharpest edges. If you change the downscale size you also have to change the dehalo_alpha parameters. There's still a wide third order halo though it's not too prominent.
    Last edited by jagabo; 28th Nov 2020 at 08:20.


