Hi there,
I'm new to the world of VHS digitization and am trying to get childhood memories into digital form.
How would I best go about achieving the following in my sample:
- I would like to minimize / get rid of the "color changes" that appear mostly on the top half of the screen
- Reduce / eliminate the grain
- Sharpen
- Do some color correction to make the whole thing more natural
I'm aware that I have to de-interlace first; I'm using AviSynth+ with QTGMC for that:
Code:
QTGMC(preset="Slow", EdiThreads=1)
Here is my sample file:
https://drive.google.com/file/d/1nBNQn1-xOMsof_-kKWn0M7Ee6tki9SsJ/view?usp=sharing
Thank you for your tips.
-
You do it with this script (shown twice below: first with strong settings, then a milder variant without the KNLMeansCL step):
AVISource("MyVideo")
assumetff()
ConvertToYV16(interlaced=true)
orig=last
ev=orig.assumetff().separatefields().selecteven()
od=orig.assumetff().separatefields().selectodd()
ev
ue_chroma = UToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
ve_chroma = VToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
YToUV(ue_chroma, ve_chroma)
MergeLuma(ev)
ev_filtered=last
od
uo_chroma = UToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
vo_chroma = VToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=6,lthresh=150, strength=6).KNLMeansCL(d=3,a=8,s=2,h=6)
YToUV(uo_chroma, vo_chroma)
MergeLuma(od)
od_filtered=last
interleave(ev_filtered,od_filtered)
assumefieldbased().assumetff().weave()
AVISource("MyVideo")
assumetff()
ConvertToYV16(interlaced=true)
orig=last
ev=orig.assumetff().separatefields().selecteven()
od=orig.assumetff().separatefields().selectodd()
ev
ue_chroma = UToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
ve_chroma = VToY(ev).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
YToUV(ue_chroma, ve_chroma)
MergeLuma(ev)
ev_filtered=last
od
uo_chroma = UToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
vo_chroma = VToY(od).blur(0,1.5).binomialblur(5).ttempsmooth(maxr=1,lthresh=70, strength=1)
YToUV(uo_chroma, vo_chroma)
MergeLuma(od)
od_filtered=last
interleave(ev_filtered,od_filtered)
assumefieldbased().assumetff().weave()
-
You may want to try something along this route:
Code:
AVISource("your.avi")
qtgmc(Preset="Fast")  # bob-deinterlace
crop(16,4,-16,-4)
mergechroma(Levels(0,1.0,255,4,235,coring=false),last)  # try or skip
Tweak(hue=10,sat=1.0,bright=0,cont=1.0,coring=false)  # tweak the colors (hue) here
converttoYV12().derainbow(10)  # reduce the rainbows
MCDegrainSharp()  # denoise, degrain and sharpen
addborders(8,4,8,4)
Last edited by Sharc; 24th Nov 2020 at 13:44.
-
You can also denoise within QTGMC.
http://avisynth.nl/index.php/QTGMC#Noise_Bypass_.2F_Denoising -
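For instance, QTGMC's own denoising could look something like this; a minimal sketch where the input filename is assumed and the EZDenoise strength and temporal radius are illustrative values to tune against the source (see the wiki page above for the full set of noise parameters):
Code:
AVISource("capture.avi")  # assumed lossless capture
AssumeTFF()
# EZDenoise sets the overall denoise strength, DenoiseMC enables
# motion-compensated noise smoothing, NoiseTR is the temporal radius
QTGMC(preset="Slow", EZDenoise=2.0, DenoiseMC=true, NoiseTR=2)
Doing it inside QTGMC avoids denoising the fields twice and keeps the motion analysis consistent.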
-
I had to capture the sample multiple times due to missing frames - I cut them together in premiere pro and exported that.
I realise I should probably do this at the end?
I've now cut a new sample from the originally captured file with ffmpeg:
ffmpeg.exe -ss 100s -i input.mp4 -vcodec copy -vframes 600 -an output.mp4
I think this should prevent ffmpeg from re-encoding. Here is that sample, is it better?
https://drive.google.com/file/d/1t1SbCseu_WA0dmJmwErPXICGnoQMi1Bp/view?usp=sharing -
The new clip still has interlaced YUV 4:2:0 frames encoded progressively. That causes the chroma of the two fields to blend together. You need to set the encoder to use interlaced mode. Actually, you shouldn't be using an AVC encoder to capture. That causes noise to get clumpy and harder to remove. You should capture raw YUV video and encode with a lossless codec like huffyuv, lagarith, ut video codec, etc. Then filter and encode from there.
Here are some quick adjustments in AviSynth:
Code:
LWLibavVideoSource("sample2.mp4", cache=false, prefer_hw=2)
AssumeTFF()
Crop(4,0,-12,-0)  # ITU 704x480 frame size
ColorYUV(off_y=-8, cont_u=-25, cont_v=-25)  # lower levels, reduce saturation to legalize chroma
QTGMC()  # double frame rate deinterlace
dehalo_alpha(rx=4.0, ry=3.0)  # halo reduction (pretty damaging)
MergeChroma(aWarpSharp2(depth=5), aWarpSharp2(depth=15))  # sharpen chroma more than luma
MergeChroma(MCTemporalDenoise(settings="low"), MCTemporalDenoise(settings="very high"))  # light noise reduction in luma, heavy in chroma
ChromaShiftSP(x=1, y=1)  # shift colors up and left
-
Thanks for the help. I'm still very new to this topic.
Can you tell me how you found out that the clip is interlaced YUV 4:2:0 - is there software to show me this information?
Also, could you tell me if this last file is better?
https://drive.google.com/file/d/1ycnXJdhgATW4b5b1789Bnm7kbtTlt9fZ/view?usp=sharing -
MediaInfo is usually pretty good at determining if a video is RGB, YUV 4:2:2, YUV 4:2:0, etc. It can also determine if the video is encoded interlaced or progressive (this is a matter of how the codec handles the video internally -- interlaced video needs special handling within the codec). You can tell the clip is interlaced by viewing it in an editor that doesn't deinterlace. Interlaced video will show comb artifacts whenever there is motion.
That UT video is screwed up. It was resized vertically (my guess is you cropped away the head switching noise at the bottom of the frame and resized back to 576 lines) while it was interlaced, without special interlaced handling. That has caused partial blending of the two fields. They can no longer be separated cleanly. It needs to be re-captured without that resizing.
An interlaced frame of video contains two half pictures (called fields): one in all the even scan lines, one in all the odd scan lines. Those two pictures are intended to be viewed separately and sequentially (they represent pictures taken at two different times, 1/50th of a second apart in PAL video). If there is no motion between the two fields they weave together into a nice single picture. But if there is motion you will see comb artifacts (unless the editor/player is hiding that from you by deinterlacing).
-
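In AviSynth terms, this field structure is exactly what the SeparateFields()/Weave() pair expresses; a minimal sketch (the filename is assumed):
Code:
AVISource("capture.avi")
AssumeTFF()          # declare the field order (top field first)
SeparateFields()     # each frame becomes its two half-height fields
# purely spatial, per-field filtering could go here; each field is a
# single moment in time, so no comb artifacts are introduced
Weave()              # pair fields back up into interlaced frames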
Thanks for the detailed explanation. Are there any settings I need to have set in VirtualDub2 to make sure the recording works correctly?
Edit: I've gone and downloaded VirtualDub2 to save the file (I was using OBS before) and tried two captures, one with HuffYUV and another with Lagarith. Wow, those files are absolutely HUGE.
While playing back in VLC I noticed that the audio is weird and VLC reports the wrong clip length; both clips are only a few seconds long. The audio is distorted and really low, like it's being played back slowly. I've attached two samples. Do you know what I'm doing wrong?
lagarith: https://drive.google.com/file/d/1hlPyEpDGYefQP-1pEULz1bCGHJn91dtk/view?usp=sharing
huffyuv: https://drive.google.com/file/d/1IH9o3j0oA2RbiF3go--3EfJfKl6L70fa/view?usp=sharing
Edit 2: I think I've fixed the audio/length problem by changing the timing settings from "sync audio to video by resampling..." to "sync video to audio by adjusting...". Is this a sensible setting?
Last edited by capfirepants; 25th Nov 2020 at 16:48.
-
The video looks ok in s4.avi and s5.avi -- though it looks like frames were dropped at the start of the clips. That may be the reason for the audio problem. Try disabling A/V sync entirely. If everything is working right it's not needed. When capturing with VirtualDub turn off audio playback, untick Audio -> Enable Audio Playback. That often causes A/V sync problems. Is VirtualDub reporting any dropped/duplicated frames while capturing?
Oh, s5.avi is RGB, not YUV 4:2:2. Maybe a settings error? -
I think I have things figured out quite well now - more or less
I've tried using this script to edit this sample: https://drive.google.com/file/d/1A6DkaAv5PzPFHVG6Vh2ajn9wUdcX1bV8/view?usp=sharing
Code:
ffVideoSource("input.avi")
assumetff()
QTGMC(preset="Slower")
orig=last
ev=orig.assumetff().separatefields().selecteven()
od=orig.assumetff().separatefields().selectodd()
ev
ue_chroma = UToY(ev).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
ve_chroma = VToY(ev).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
YToUV(ue_chroma, ve_chroma)
MergeLuma(ev)
ev_filtered=last
od
uo_chroma = UToY(od).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
vo_chroma = VToY(od).blur(0,1.5).binomialblur(5).vsTTempsmooth(maxr=1,ythresh=70, strength=2)
YToUV(uo_chroma, vo_chroma)
MergeLuma(od)
od_filtered=last
interleave(ev_filtered,od_filtered)
assumefieldbased().assumetff().weave()
crop(12,4,-12,-16)
Tweak(hue=0,sat=1.3,bright=0,cont=1.0,coring=false)  # tweak the colors (hue) here
MCDegrainSharp()  # denoise, degrain and sharpen
AutoAdjust(auto_gain=true, dark_limit=1.5, bright_limit=1.50, gamma_limit=1.25, gain_mode=0, chroma_process=100, auto_balance=false, chroma_limit=1.05, balance_str=0.75, change_status="", high_quality=false, high_bitdepth=false, input_tv=true, output_tv=true, debug_view=false)
Prefetch(8)
https://drive.google.com/file/d/12tmjbSgxhaH4hapQjGVjGLP6WLO5KMzN/view?usp=sharing
I'm not unhappy with the result, but I'm also not sure if it could be sharper and a bit more vibrant.
In your experience, would that be possible?
Anything in particular I should tweak?
-
Stop using preset=Slow/Slower.
It blurs video.
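As a hedged alternative (not taken from this thread; the filename and parameter values are illustrative): a faster preset combined with QTGMC's source-matching options tends to preserve more detail than Slow/Slower alone:
Code:
AVISource("capture.avi")
AssumeTFF()
# Fast blurs less than Slower; SourceMatch/Lossless pull the result
# back toward the original fields, Sharpness tempers the resharpening
QTGMC(preset="Fast", SourceMatch=3, Lossless=2, Sharpness=0.4)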
-
I didn't examine it in detail, and I couldn't download your encoded result (access problem), so just a few thoughts about your source and script:
- Your captured .avi has heavily clipped and crushed whites from frame 158 onwards (the sky has no color, no structure). The damage may have been done by the camera already. You could, however, try AviSynth's Levels(...) to mitigate the problem somewhat.
- The first block in your script re-interlaces the video at the end which is ok if you want to keep it interlaced. However, the subsequent MCDegrainSharp() is a temporal-spatial filter which should be applied on progressive video or on the even/odd grouped fields. Or you could try SMDegrain(interlaced=true) instead, or perhaps use the denoiser in QTGMC().
- The Tweak + AutoAdjust combination seems to introduce high contrast and saturation, but the main point is that it suits your eyes.
Last edited by Sharc; 27th Nov 2020 at 09:38.
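The two suggestions above (Levels() for the clipped whites, a temporal-spatial denoiser with interlaced handling) could be sketched as follows; all values are starting points to tune by eye, and the filename is assumed:
Code:
AVISource("capture.avi")
AssumeTFF()
# compress highlights toward legal range; this cannot restore detail
# that was truly clipped, only make the roll-off less harsh
Levels(0, 1.0, 255, 16, 235, coring=false)
# denoise directly on the interlaced stream instead of after re-weaving
SMDegrain(tr=2, thSAD=300, interlaced=true)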
-
I tried some halo reduction:
Code:
LWlibavVideoSource("bp1_testsample_na.avi", cache=false)
assumetff()
ConvertToYV12(interlaced=true)
SeparateFields()
BilinearResize(480, height)  # downscale width
before = last
dehalo_alpha(rx=2.0, ry=1.0, BrightStr=1.3, DarkStr=1.3)
dehalo_alpha(rx=6.0, ry=1.0, BrightStr=0.5, DarkStr=0.5)
emask = mt_edge(before, thy1=30, thy2=30).mt_expand().mt_expand().mt_expand().Blur(1.0)
Overlay(before, last, mask=emask)
Spline36Resize(720,288)  # scale back up
Weave()
QTGMC()
MergeChroma(MCTemporalDenoise(settings="low"), MCTemporalDenoise(settings="very high"))  # light luma NR, heavy chroma NR
Last edited by jagabo; 28th Nov 2020 at 07:20.