If you used AviSynth to clean up video would you mind sharing what you did?
-
Wish I had any family recordings from childhood like this. But I don't; no one could afford a camera to record back then. So no taped memories.
Thanks, Soviet Union. -
Code:
AVISource("Old JVC Deck Audio Test plus es10 Composite Video clip.avi")
converttoYV16(last,matrix="Rec601",interlaced=true) # convert to YUV
AssumeTFF()
crop(16,8,-20,-12)
# apply color and levels corrections to taste
SmoothTweak(brightness=0,contrast=1.0,saturation=0.7,hue1=3,hue2=9)
SmoothLevels(input_low=16,gamma=1.0,input_high=235,output_low=8,output_high=250,HQ=true)
# field grouping for interlaced filtering
separatefields()
e=selecteven()
o=selectodd()
# filtering
e=e.MCDegrainSharp()
o=o.MCDegrainSharp()
# re-weaving
interleave(e,o).weave()
addborders(10,10,10,10) # pad to 704 x 480
return last
Last edited by Sharc; 5th Feb 2023 at 04:30.
-
Thank you. I’m going to play around with your script so I can learn what everything is doing. I might have some questions if that’s ok. It is so different from the functions I’ve been using in my template script. Also I always use QTGMC but I thought your version looked very smooth when I watched it. I’ve never separated the fields like you did. Appreciate you sending.
-
I kept it interlaced all the way through. I have not always been happy with how QTGMC deals with the noise; it depends on the source, though. The deinterlacing is left to the player (TV) in this case. The field separation and even/odd grouping is recommended for temporal or spatio-temporal filtering of interlaced video.
Of course you may also (bob-)deinterlace the video using QTGMC, and apply any extra filters (like denoising, upscaling ....) on the progressive video frames. If needed (e.g. for standards compliance or player compatibility), you may reinterlace it at the end.
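To illustrate that round trip, here is a minimal sketch (the preset and the filter placement are only examples, not a recommendation):

Code:
AssumeTFF()
QTGMC(preset="Slower")                        # bob-deinterlace: double frame rate, progressive
# ... extra filtering on the progressive frames goes here (denoising, upscaling, ...) ...
AssumeTFF()
SeparateFields().SelectEvery(4,0,3).Weave()   # re-interlace at the end if needed

The SeparateFields().SelectEvery(4,0,3).Weave() idiom takes one field from each bobbed frame, restoring the original interlaced frame rate.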
It's also a matter of personal preference how one processes interlaced footage. But if you want to (vertically) upscale the video you would have to deinterlace it.
Last edited by Sharc; 5th Feb 2023 at 08:48.
-
Btw. I noticed that your "old JVC - ES10" captures have a few dropped frames. Keep an eye on this. If you can't avoid it, this could justify using the newer S-VHS model with internal TBC for capturing the video, if it helps.
-
Ok. Thanks. How can you tell by looking at it afterwards? I’m pretty sure virtualdub didn’t report any dropped frames. Is that possible or did I just not notice?
Ps. If I’m dropping frames with one deck and not the other, that sounds like it would make it very difficult to sync up the audio between 2 different captures. -
A dropped frame in this case actually means a missed frame which is substituted by a repetition of the preceding frame in order to keep video and audio in sync. For motion scenes you will notice a stutter at these positions when you play the video. You can discover such dropped and repeated frames easily by stepping through the frames with VirtualDub or MPC-HC, for example.
For the file which you posted in post#26 'Old JVC Deck ....' you find such duplicates for frames 197/198, 239/240, 247/248, 279/280.
Last edited by Sharc; 5th Feb 2023 at 17:44. Reason: typos
-
The field separation for temporal or spatio-temporal filtering is less effective, because you filter the even fields 0,2,4,etc. separately from the odd fields 1,3,5,etc., and there is no temporal filtering between even and odd fields, only within fields of the same parity.
Here is a comparison between your script and the same script where QTGMC is used in lossless mode (the original frames are untouched), in order to apply MCDegrainSharp() on a progressive video, as it should be; at the end the video is interlaced back, so it will be as you like. See how many more details are preserved: https://imgsli.com/MTUzMDYw
Here is the script, a small (but decisive) modification to yours:
Code:
video_org=AVISource("Old JVC Deck Audio Test plus es10 Composite Video clip.avi").converttoYV16(last,matrix="Rec601",interlaced=true)\
 .AssumeTFF().crop(16,8,-20,-12)

# plugins directory
plugins_dir="C:\Users\giuse\Documents\VideoSoft\MPEG\AviSynth\extFilters\"

# SmoothAdjust
loadPlugin(plugins_dir + "SmoothAdjust-v3.20\x86\SmoothAdjust.dll")
# QTGMC
Import(plugins_dir + "QTGMC.avsi")
# Zs_RF_Shared
Import(plugins_dir + "Zs_RF_Shared.avsi")
# RgTools
loadPlugin(plugins_dir + "RgTools-v1.0\x86\RgTools.dll")
# MaskTools2
loadPlugin(plugins_dir + "masktools2-v2.2.23\x86\masktools2.dll")
# FFT3DFilter
loadPlugin(plugins_dir + "FFT3dFilter-v2.6\x86\fft3dfilter.dll")
# FFTW
loadPlugin(plugins_dir + "LoadDll\LoadDll.dll")
loadDll(plugins_dir + "fftw-3.3.5-dll32\libfftw3f-3.dll")
# Nnedi3
loadPlugin(plugins_dir + "NNEDI3_v0_9_4_55\x86\Release_W7\nnedi3.dll")
# MCDegrainSharp
import(plugins_dir + "McDegrainSharp.avsi")
# MVTools
loadPlugin(plugins_dir + "mvtools-2.7.41-with-depans20200430\x86\mvtools2.dll")

# apply color and levels corrections to taste
video_org_st=video_org.SmoothTweak(brightness=0,contrast=1.0,saturation=0.7,hue1=3,hue2=9)
video_org_st_sl=video_org_st.SmoothLevels(input_low=16,gamma=1.0,input_high=235,output_low=8,output_high=250,HQ=true)

# deinterlace (lossless mode keeps the original fields untouched)
deinterlaced=video_org_st_sl.QTGMC(lossless=1)

# filtering
denoised=deinterlaced.MCDegrainSharp()

# re-interlacing
video_restored=denoised.AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave().addborders(10,10,10,10)
return(video_restored)
-
Another way to detect duplicate frames is to subtract sequential frames and amplify the differences:
Code:
Subtract(last, last.Trim(1,0)).ColorYUV(cont_y=1000)
frame 196 - frame 197 (different):
[Attachment 69088]
frame 197 - frame 198 (identical, just a little noise difference):
[Attachment 69089]
A variation of this is to use Abs(Y1-Y2) and amplify the result. With identical frames you get a black image. With non-identical frames you get lots of noise.
You can also use the runtime filters to generate a text file with a list of the identical frames.
Last edited by jagabo; 5th Feb 2023 at 19:13.
-
This script:
Code:
##########################################################################
#
# Abs(v1-v2)
#
# Works for YUY2 and YV12 only
#
##########################################################################
function AbsSubtractY(clip v1, clip v2)
{
    IsYUY2(v1) ? mt_lutxy(v1.ConvertToYV16(), v2.ConvertToYV16(), "x y - abs", chroma="-128").ConvertToYUY2() \
               : mt_lutxy(v1, v2, "x y - abs", chroma="-128")
}
##########################################################################

LWLibavVideoSource("Old JVC Deck Audio Test plus es10 Composite Video clip.avi")
AssumeTFF()
ConvertToYV12(interlaced=true)
AbsSubtractY(last, last.Trim(1,0))
WriteFileIf(last, "DupFrames.txt", "AverageLuma<1.00", "current_frame", """ " : " """, "AverageLuma", append=false)
Code:
197 : 0.457364
239 : 0.451861
247 : 0.457057
279 : 0.453411
324 : 0.465017
396 : 0.000000
Open the script in VirtualDub(2) and use File -> Run Video Analysis Pass to parse the entire video. -
-
Well well, just take another frame ...
For all the rest of the frames there is no match.
Also, take into account that QTGMC applies some extra sharpening of its own.
Code:
video_org=AviSource("<filename>.avi")
deinterlaced=video_org.AssumeTFF().QTGMC(lossless=1)
interlaced=deinterlaced.AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()

### check difference between original video and original video after QTGMC_lossless and re-interlacing ###
difference_video_org_interlaced=Subtract(video_org, interlaced).Levels(65, 1, 255-64, 0, 255, coring=false)
stackhorizontal(\
 subtitle(video_org,"video_org",size=20,align=2),\
 subtitle(interlaced,"interlaced",size=20,align=2),\
 subtitle(difference_video_org_interlaced,"difference",size=20,align=2)\
)
############################################################################################################

### check difference between original video separated fields and original video after QTGMC_lossless and re-interlacing separated fields ###
#video_org_sep_tff=video_org.AssumeTFF().separateFields()
#interlaced_sep_tff=interlaced.AssumeTFF().separateFields()
# equivalent to
#deinterlaced_sep_sel403_tff=deinterlaced.AssumeTFF().SeparateFields().SelectEvery(4,0,3)
#difference_video_org_sep_tff_interlaced_sep_tff=Subtract(video_org_sep_tff, interlaced_sep_tff).Levels(65, 1, 255-64, 0, 255, coring=false)
#stackhorizontal(\
#subtitle(video_org_sep_tff,"video_org_sep_tff",size=20,align=2),\
#subtitle(interlaced_sep_tff,"interlaced_sep_tff",size=20,align=2),\
#subtitle(difference_video_org_sep_tff_interlaced_sep_tff,"difference",size=20,align=2)\
#)
############################################################################################################################################
Both methods have their pros and cons.
edit: also consider that we are testing a quite static video here. If benchmarking a high-motion video, the superiority of applying the motion-compensated temporal degrain (MCDegrainSharp) to frames rather than to fields will be even more evident.
Last edited by lollo; 6th Feb 2023 at 05:03.
-
What about QTGMC not lossless? I’ve never used lossless and I have been producing deinterlaced mp4 files so they can be watched on any device.
I know there’s some debate as well over which preset to use (slower faster etc). But I haven’t come across too much discussion about lossless in my research. -
What about QTGMC not lossless
For a real deinterlace operation the lossless option is not appropriate, because it prevents QTGMC from performing at its best at removing artifacts while generating the deinterlaced frames. -
Last edited by Bwaak; 6th Feb 2023 at 12:52.
-
TBH, I am so appalled by the stairstepping on diagonals that I don't care about losing detail on the sleeve.
A (generally better) alternative is to perform a real deinterlace on the original video, and then apply the filtering. This approach may introduce over-processing, because QTGMC does its own denoising and sharpening. It must be used with caution and with the right options. -
-
you sound like a person born after h265 was released
Stairstepping diagonals? Do you watch the interlaced frames on a progressive monitor and complain about the combing?
[Attachment 69104]
unless you remove the whole field, all of the interpolation deinterlacers (like yadif2 for example) tend to leave jagged lines
Last edited by rrats; 6th Feb 2023 at 16:37.
-
-
unless you remove the whole field, all of the interpolation deinterlacers (like yadif2 for example) tend to leave jagged lines
Code:
deinterlaced=video_org_st_sl.QTGMC().addborders(10,10,10,10)
stackhorizontal(\
 subtitle(video_org.SelectEvery(1,0,0).addborders(10,10,10,10),"video_org",size=20,align=2),\
 subtitle(deinterlaced,"deinterlaced",size=20,align=2)\
)
-
I finally had some time to go through all of these posts again and try some of the scripts you all shared to clean up the video. Thank you for taking the time to help me out!
Questions:
1. If I am ultimately creating deinterlaced output files using QTGMC, would it still be beneficial to use MCDegrainSharp, or does QTGMC take care of that filtering? I never used MCDegrainSharp so I don't have the plugin installed and I'm having trouble finding where to download it, if someone could share the link.
2. I also never used SmoothTweak or SmoothLevels and don't have them installed. Always used ColorYUV() to fix levels and colors. Is there any notable difference between SmoothTweak and SmoothLevels compared with ColorYUV that I should install and learn the parameters? (I sometimes used to use just plain old Tweak and Levels but have since always defaulted to ColorYUV as that is what I'm now comfortable with.)
3. I'm struggling with figuring out the correct code / order to do things when I have to color correct 2 parts of a video separately but then join them. Do I QTGMC first and then color correct after keeping each section separate til the end? I'm getting a little confused on how to carry the 2 variables throughout and return the right thing, without constantly redefining each step. I'm getting from point A to point B but I think I am going the long way. What's the right way to do this?
Example:
Code:
SetFilterMTMode("QTGMC", 2)
Avisource("capture.avi")
Part1=Trim(76399,86293) # This is the problem scene that is red
Part2=Trim(86296,90746) # This is the following scene, colors are fine
Part1Deinterlaced=Part1.AssumeTFF().QTGMC(Preset="slower", Edithreads=2, Sharpness=0.8)
Part2Deinterlaced=Part2.AssumeTFF().QTGMC(Preset="slower", Edithreads=2, Sharpness=0.8)
Part1ColorCorrect=Part1Deinterlaced.ColorYUV(off_u=-10, off_v=-25, gain_y=70, off_y=-20, cont_u=-100, cont_v=-125) # fix red
Part2ColorCorrect=Part2Deinterlaced.ColorYUV(gain_y=70, off_y=-20)
Part1ColorCorrect++Part2ColorCorrect
# Then crop/add borders, resize, and intro titling
-
My 3 cents only:
1. QTGMC does a lot of good things under the hood. See the AviSynth wiki:
http://avisynth.nl/index.php/QTGMC. There are lots of parameters to tweak to obtain 'best' results, but you will most probably be happy with what you get using its presets. QTGMC is basically an excellent deinterlacer. Whether you need something else depends on the source, its defects (noise, grain, rainbows, dotcrawl, comets, spots, color bleeding, halos etc.) and your ambition: what you want to "fix" and what matters to your eyes.
Personally I have (in the past) not been entirely happy with how QTGMC processes the noise of my interlaced VHS sources. So one may define another denoiser within QTGMC, or tweak it, or configure it such that it leaves the noise alone as much as possible and then add an external denoiser/degrainer to taste. There are many posts and opinions addressing QTGMC's 'best' settings for various scenarios. Members have spent countless hours on it.
MCDegrainSharp was just an example of a filter (actually a script). There are many more filters, see the AviSynth catalogue http://avisynth.nl/index.php/External_filters#Adjustment_Filters. Every filter has its pros and cons, desired effects and undesired side effects. One has to try and judge the benefits.
General rule: Don't overdo the filtering, and check the result on your TV (or your intended main player) as well. It makes sense to do pixel-peeping on the PC for examining details and effects, but in the end we cannot judge the beauty of a waterfall by analyzing only some of its droplets.
2. Basically you can achieve the same or very similar results with different color tweaking filters. Some operate in YUV and some in RGB colorspace, though. Personally I found Tweak (or SmoothAdjust) to be more intuitive and easier to use than ColorYUV. ColorYUV gives you all the freedom for tweaking the YUV parameters, though.
More info about SmoothAdjust here: http://avisynth.nl/index.php/SmoothAdjust
3. I would even consider not capturing long videos in one run, but capturing in parts, processing the parts individually, and joining the final results at the end (less risk of dropped frames, and easier to re-do a single capture ....)
Otherwise I would split the long capture into several parts (using VDub in stream copy mode, for example), then process the various parts individually and independently, and join them together at the end.
(You may also consider an NLE (Shotcut, Kdenlive etc.) for joining the parts on the timeline, add transition effects etc., but that's another topic)
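As a sketch, with hypothetical file names, joining two separately processed parts could look like this (++ is AviSynth's aligned splice; the clips must share colorspace, dimensions and frame rate):

Code:
part1=AviSource("capture_part1_processed.avi")
part2=AviSource("capture_part2_processed.avi")
return part1 ++ part2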
Added:
Here is a version of MCDegrainSharp in case you want to play with it:
Last edited by Sharc; 7th Feb 2023 at 05:36. Reason: typos
-
Just as a complement to what Sharc said, with which I agree:
1. If I am ultimately creating deinterlaced output files using QTGMC, would it still be beneficial to use MCDegrainSharp, or does QTGMC take care of that filtering?
3. I'm struggling with figuring out the correct code / order to do things
In short, a generic flow is the following:
- level correction: needed only if you plan to move from the YUV color space to the RGB color space, because Y values <16 and >235 are clipped. For example, if you wish to later use ColorMill in VirtualDub (an RGB plugin, superior to ColorYUV or SmoothTweak). It is better to only act on the luma levels, keeping chroma unchanged (LevelsLumaOnly). If you stay in the YUV color space it is not needed.
- preliminary color correction (ColorYUV, SmoothTweak, ColorMill)
- check, and if necessary fix the levels again after the preliminary color correction, if they have moved away from the 16-235 range and RGB processing is applied later
- filtering: deinterlace/.../denoise/.../sharpening/... (... = any other filter for specific actions, like stabilization, dot removal, etc.)
- check, and if necessary fix the levels again, because the previous filtering may expand the levels to 0-255; fix them if needed for further processing or if your display does not support full range
- final main color correction (an NLE tool like DaVinci Resolve can be used, but staying in AviSynth/VirtualDub can also be effective)
- check, and if necessary fix the levels again, if the previous step acted on Y values and your display options require it.
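As an illustration only, here is a skeleton of that flow using filters already mentioned in this thread (all parameter values are placeholders to be tuned per capture):

Code:
AviSource("capture.avi").AssumeTFF()
# level correction (keep luma within 16-235 if RGB processing follows)
SmoothLevels(input_low=16,gamma=1.0,input_high=235,output_low=16,output_high=235)
# preliminary color correction
SmoothTweak(saturation=0.9,hue1=2)
# filtering: deinterlace, then denoise/sharpen
QTGMC(preset="Slower")
MCDegrainSharp()
# final color correction and a last levels check would follow here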
This is just a generic flow. Each video, or portion of a video, is different and requires specific filtering and/or a specific filtering order. You have to experiment on your captures to find the best filters, the best parameters for each filter, and the best order of processing. -
@ Christina: If you want to experiment with filters and don't want to bother much with AviSynth or Vapoursynth details and syntax, I would recommend Selur's Hybrid. It comes packed with a rich set of AviSynth, Vapoursynth and ffmpeg filters, including any dependencies. As you are already familiar with AviSynth basics, it could be worthwhile to try Hybrid and familiarize yourself with it.
https://www.selur.de/Last edited by Sharc; 7th Feb 2023 at 06:58. Reason: Link added
-
Tweak, SmoothTweak, and ColorYUV do many of the same things. The advantage of SmoothTweak is that it can dither the output. This can lead to less posterization (banding) on smooth shallow gradients. How much that dithering helps (or hurts) depends on the nature of the source and any filtering that happens earlier and/or later in the script. The inherent noise (natural dithering) in VHS caps makes that advantage less noticeable. Also, if you apply any noise reduction after the tweak that advantage may disappear. I usually perform the brightness/color adjustments before QTGMC or noise reduction.
Another way of filtering sections of the video differently is simply to produce a separate video for each filter sequence, then use ReplaceFramesSimple() (from the RemapFrames plugin) to pick the right video for each section:
Code:
SourceFilter()
filtered1 = Some().Filter().Sequence()
filtered2 = Different().Filters()
filtered3 = More().Different().Filters()
ReplaceFramesSimple(filtered1, filtered2, Mappings="[5414 5488]")
ReplaceFramesSimple(last, filtered3, Mappings="[7501 7725]")
Last edited by jagabo; 7th Feb 2023 at 08:01.
-
Captured about an hour of the tape last night using the regular JVC VCR with the ES10 and used the script below provided by Jagabo to detect duplicate frames, and there were only about 6 or 7. They all appeared in the first second or two of when one scene changed to the next scene on the original recording (i.e. when camcorder was stopped and started) so I'm pretty happy about that! For some reason I am not getting the same dropped frames I got when I did my first short test capture last week that I shared on this thread. I haven't done anything different except fast forward the tape to the end and rewind in this VCR.
Jagabo, thank you so much for sharing this script. It was extremely helpful!!!
Now, I can decide to use this capture since I didn't have too many dropped frames, or I can attempt to sync the video from the S-VHS Deck with the audio from this capture.
If I want to use the audio from one cap and video from another, what tools (besides trial and error) do I use to do that? Do I just try to find the same video frame in each capture in VirtualDub and trim from that point on and then mux in AviSynth?
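(One possible way to sketch this in AviSynth, assuming the matching start frames in the two captures have been identified; the frame numbers below are made up for illustration: trim both clips to a common first frame, then combine them with AudioDub():

Code:
v=AviSource("svhs_capture.avi").Trim(123,0)   # video source, 123 = matching start frame (example)
a=AviSource("jvc_capture.avi").Trim(456,0)    # audio source, 456 = matching start frame (example)
AudioDub(v, a)

If the decks drift over a long tape, a single alignment point may not be enough, but for short segments it often suffices.)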
-
That's not totally unusual. You will never get identical captures in repeated tests because capturing is an analog process, and rewinding the tape could have moved some 'dirt' along the tape, for example. Also, computer load may vary. This randomness can even make comparisons between captures difficult at times.
Sidenote: what capturing SW are you using? It is generally agreed that one should use VirtualDub 1.10.4 rather than VirtualDub2 for reliable capturing. Personally I preferred AmarecTV, which delivered the most stable results here re. no glitches and no sync issues. -
Personally I preferred AmarecTV which delivered the most stable results here re.no glitches and no sync issues.
-
I'm using VirtualDub 1.9.11 for capturing. I use VirtualDub 2 for some other things but not for capturing. Never tried AmarecTV, but I haven't had any sync issues with VirtualDub (that I am aware of, but I'm pretty sensitive to audio being out of sync even the slightest bit so I would most likely notice.)