VideoHelp Forum
Page 2 of 3
Results 31 to 60 of 66
  1. Originally Posted by Sharc View Post
    I could live with option 1 (old VHS+ES 10) with a rock solid picture (no wiggling) where the audio is ok, and focus a bit on the video (noise, colors ...). I don't really see that the S-VHS has a significant advantage for this tape regarding recovery of details.
    If you used AviSynth to clean up video would you mind sharing what you did?
  2. Wish I had any family recordings from childhood like this. But I don't; no one could afford a camera to record back then. So no taped memories.
    Thanks, Soviet Union.
  3. Originally Posted by Christina View Post
    If you used AviSynth to clean up video would you mind sharing what you did?
    Code:
    AVISource("Old JVC Deck Audio Test plus es10 Composite Video clip.avi")
    
    converttoYV16(last,matrix="Rec601",interlaced=true) #convert to YUV
    AssumeTFF()
    crop(16,8,-20,-12)
    
    #apply color and levels corrections to taste
    SmoothTweak(brightness=0,contrast=1.0,saturation=0.7,hue1=3,hue2=9)
    SmoothLevels(input_low=16,gamma=1.0,input_high=235,output_low=8,output_high=250,HQ=true)
    
    #field grouping for interlaced filtering
    separatefields()
    e=selecteven()
    o=selectodd()
    
    #filtering
    e=e.MCDegrainSharp()
    o=o.MCDegrainSharp()
    
    #re-weaving
    interleave(e,o).weave()
    
    addborders(10,10,10,10) #pad to 704 x 480
    
    return last
    Encoded with x264, --tff --sar 10:11
    Last edited by Sharc; 5th Feb 2023 at 04:30.
  4. Originally Posted by Sharc View Post
    Thank you. I’m going to play around with your script so I can learn what everything is doing. I might have some questions if that’s ok. It is so different from the functions I’ve been using in my template script. Also I always use QTGMC but I thought your version looked very smooth when I watched it. I’ve never separated the fields like you did. Appreciate you sending.
  5. Originally Posted by Christina View Post
    Thank you. I’m going to play around with your script so I can learn what everything is doing. I might have some questions if that’s ok. It is so different from the functions I’ve been using in my template script. Also I always use QTGMC but I thought your version looked very smooth when I watched it. I’ve never separated the fields like you did. Appreciate you sending.
    I kept it interlaced all the way through. I have not always been happy with how QTGMC deals with the noise; it depends on the source though. The deinterlacing is left to the player (TV) in this case. The field separation and even/odd grouping is recommended for temporal or spatial-temporal filtering of interlaced video.
    Of course you may also (bob-)deinterlace the video using QTGMC, and apply any extra filters (like denoising, upscaling ...) to the progressive video frames. If needed (e.g. for standards compliance or player compatibility), you can re-interlace it at the end.
    It's also a matter of personal preference how one processes interlaced footage. But if you want to (vertically) upscale the video you have to deinterlace it first.
    Last edited by Sharc; 5th Feb 2023 at 08:48.
  6. Btw. I noticed that your "old JVC - ES10" captures have a few dropped frames. Keep an eye on this. If you can't avoid it, that could justify using the newer S-VHS model with internal TBC for capturing the video, if it helps.
  7. Originally Posted by Sharc View Post
    Btw. I noticed that your "old JVC - ES10" captures have a few dropped frames. Keep an eye on this. If you can't avoid it this could justify to use the newer S-VHS model with internal TBC for capturing the video, if it helps.
    Ok. Thanks. How can you tell by looking at it afterwards? I’m pretty sure virtualdub didn’t report any dropped frames. Is that possible or did I just not notice?

    Ps. If I’m dropping frames with one deck and not the other, that sounds like it would make it very difficult to sync up the audio between 2 different captures.
  8. Originally Posted by Christina View Post
    Ok. Thanks. How can you tell by looking at it afterwards? I’m pretty sure virtualdub didn’t report any dropped frames. Is that possible or did I just not notice?
    A dropped frame in this case actually means a missed frame which is substituted by a repetition of the preceding frame in order to keep video and audio in sync. In motion scenes you will notice a stutter at these positions when you play the video. You can spot such dropped and repeated frames easily by stepping through the frames in VirtualDub or MPC-HC, for example.
    For the file which you posted in post#26 'Old JVC Deck ....' you find such duplicates for frames 197/198, 239/240, 247/248, 279/280.

    Ps. If I’m dropping frames with one deck and not the other, that sounds like it would make it very difficult to sync up the audio between 2 different captures.
    Not necessarily, if the dropping & substitution works as described above. You have to try. It needs some time and patience, probably.
    Last edited by Sharc; 5th Feb 2023 at 17:44. Reason: typos
  9. Captures & Restoration lollo's Avatar
    Join Date
    Jul 2018
    Location
    Italy
    Search Comp PM
    Originally Posted by Sharc View Post
    The fields separation and even/odd grouping is recommended for temporal or spatial-temporal filtering of interlaced video.
    The field separation for temporal or spatial-temporal filtering is less effective, because you filter the even fields 0,2,4, etc. separately from the odd fields 1,3,5, etc., so there is no temporal filtering between even and odd fields, only within fields of the same type.

    Here is a comparison between your script and the same one where QTGMC is used in lossless mode (the original frame is untouched) in order to apply MCDegrainSharp() on a progressive video, as it should be; at the end the video is interlaced back, so it will be as you like. See how many more details are preserved: https://imgsli.com/MTUzMDYw

    [Attachment 69086: cfr.png]

    Here is the script, a small (but decisive) modification of yours:

    Code:
    video_org=AVISource("Old JVC Deck Audio Test plus es10 Composite Video clip.avi").ConvertToYV16(matrix="Rec601",interlaced=true)\
    .AssumeTFF().crop(16,8,-20,-12)
    
    # plugins directory
    plugins_dir="C:\Users\giuse\Documents\VideoSoft\MPEG\AviSynth\extFilters\"
    
    	# SmoothAdjust
    loadPlugin(plugins_dir + "SmoothAdjust-v3.20\x86\SmoothAdjust.dll")
    
    	# QTGMC
    Import(plugins_dir + "QTGMC.avsi")
    	# Zs_RF_Shared
    Import(plugins_dir + "Zs_RF_Shared.avsi")
    	# RgTools
    loadPlugin(plugins_dir + "RgTools-v1.0\x86\RgTools.dll")
    	# MaskTools2
    loadPlugin(plugins_dir + "masktools2-v2.2.23\x86\masktools2.dll")
    	# FFT3DFilter
    loadPlugin(plugins_dir + "FFT3dFilter-v2.6\x86\fft3dfilter.dll")
    	# FFTW
    loadPlugin(plugins_dir + "LoadDll\LoadDll.dll")
    loadDll(plugins_dir + "fftw-3.3.5-dll32\libfftw3f-3.dll")
    	# Nnedi3
    loadPlugin(plugins_dir + "NNEDI3_v0_9_4_55\x86\Release_W7\nnedi3.dll")
    
    	# MCDegrainSharp
    import(plugins_dir + "McDegrainSharp.avsi")
    	# MVTools
    loadPlugin(plugins_dir + "mvtools-2.7.41-with-depans20200430\x86\mvtools2.dll")
    
    #apply color and levels corrections to taste
    video_org_st=video_org.SmoothTweak(brightness=0,contrast=1.0,saturation=0.7,hue1=3,hue2=9)
    video_org_st_sl=video_org_st.SmoothLevels(input_low=16,gamma=1.0,input_high=235,output_low=8,output_high=250,HQ=true)
    
    #lossless deinterlacing; QTGMC(lossless=1) keeps the original field content
    deinterlaced=video_org_st_sl.QTGMC(lossless=1)
    
    #filtering
    denoised=deinterlaced.MCDegrainSharp()
    
    #re-weaving
    video_restored=denoised.AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave().addborders(10,10,10,10)
    
    return(video_restored)
  10. Another way to detect duplicate frames is to subtract sequential frames and amplify the differences:

    Code:
    Subtract(last, last.Trim(1,0)).ColorYUV(cont_y=1000)
    Subtract performs Y1-Y2+126 for each pixel. That ColorYUV amplifies the differences by about 4x. When frames are identical you get a flat grey image. When they are different lots of details will show up.
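    The arithmetic can be sanity-checked in a few lines. This is only an approximation of the pixel math, not AviSynth's exact internals; in particular, the assumption that ColorYUV's contrast scales around mid-grey (128) is mine:

```python
def subtract_pixel(y1, y2):
    # Subtract: difference biased to mid-grey (126 for luma), clamped to 8 bits
    return max(0, min(255, y1 - y2 + 126))

def amplify(y, cont=1000, center=128):
    # ColorYUV cont gain is (cont + 256) / 256, so cont_y=1000 is ~4.9x;
    # scaling around `center` = 128 is an assumption for this illustration
    gain = (cont + 256) / 256.0
    return max(0, min(255, round((y - center) * gain + center)))

print(amplify(subtract_pixel(120, 120)))  # identical pixels stay near grey -> 118
print(amplify(subtract_pixel(125, 120)))  # a 5-level difference becomes obvious -> 143
```

    So identical frames come out as a nearly flat grey image, while even small differences are pushed far from grey and become visible.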

    frame 196 - frame 197 (different):
    [Attachment 69088]


    frame 197 - frame 198 (identical, just a little noise difference):
    [Attachment 69089]


    A variation of this is to use Abs(Y1-Y2) and amplify the result. With identical frames you get a black image. With non-identical frames you get lots of noise.

    You can also use the runtime filters to generate a text file with a list of the identical frames.
    Last edited by jagabo; 5th Feb 2023 at 19:13.
  11. This script:

    Code:
    ##########################################################################
    #
    # Abs(v1-v2)
    #
    # Works for YUY2 and YV12 only
    #
    ##########################################################################
    
    function AbsSubtractY(clip v1, clip v2)
    {
        IsYUY2(v1) ? mt_lutxy(v1.ConvertToYV16(), v2.ConvertToYV16(),"x y - abs", chroma="-128").ConvertToYUY2() \
                  : mt_lutxy(v1, v2,"x y - abs", chroma="-128")
    }
    
    ##########################################################################
    
    
    LWLibavVideoSource("Old JVC Deck Audio Test plus es10 Composite Video clip.avi") 
    AssumeTFF()
    ConvertToYV12(interlaced=true)
    AbsSubtractY(last, last.Trim(1,0))
    WriteFileIf(last, "DupFrames.txt", "AverageLuma<1.00", "current_frame", """ " : " """, "AverageLuma", append=false)
    writes this list of frames (and the average luma of that frame after subtraction) to DupFrames.txt:

    Code:
    197 : 0.457364
    239 : 0.451861
    247 : 0.457057
    279 : 0.453411
    324 : 0.465017
    396 : 0.000000
    It always falsely reports the last frame as a duplicate. You may have to adjust the threshold value (1.00) to some other value depending on the amount of noise, compression artifacts, etc.
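    If the list grows long, the text output is easy to post-process. For example, a few lines of Python (assuming the `frame : luma` format shown above) can drop the always-false last entry and re-apply a different threshold without re-running the analysis pass:

```python
def dup_frames(lines, threshold=1.00):
    # Each line looks like "197 : 0.457364" (frame number : AverageLuma)
    entries = []
    for line in lines:
        frame, _, luma = line.partition(":")
        entries.append((int(frame), float(luma)))
    # The last frame is always reported as a duplicate, so discard it
    entries = entries[:-1]
    return [frame for frame, luma in entries if luma < threshold]

sample = ["197 : 0.457364", "239 : 0.451861", "247 : 0.457057",
          "279 : 0.453411", "324 : 0.465017", "396 : 0.000000"]
print(dup_frames(sample))  # -> [197, 239, 247, 279, 324]
```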

    Open the script in VirtualDub(2) and use File -> Run Video Analysis Pass to parse the entire video.
  12. Originally Posted by lollo View Post
    The field separation for temporal or spatial/temporal filtering is less effective, because you filter even fields 0,2,4,etc separately from odd fields 1,3,5,etc
    and there is no temporal filtering between even and odd fields, but only within fields of the same type.

    Here a comparison between your script and the same where QTGMC is used in lossless mode (the original frame is untouched), in order to apply MCDegrainSharp() on a progressive vdeo, as it should be; at the end the video is interlaced back so it will be as you like. See how more details are preserved:
    Well well, just take another frame, e.g. #375 and compare the wood panel structure above the boy's head and the boy's sleeve....
    Also, take into account that QTGMC applies some extra sharpening of its own. Both methods have their pros and cons.
    [Attachment 69094: Screenshot 2023-02-06 093310.png]
  13. Captures & Restoration lollo's Avatar
    Well well, just take another frame ...
    Done. Even in that particular frame I see fewer artifacts (and no loss of detail) with the QTGMC lossless approach (the wood and the right sleeve of the baby): https://imgsli.com/MTUzMTM1
    For all the other frames there is no contest.

    Also, take into account that QTGMC applies some extra sharpening of its own.
    Not in lossless mode in the primary frame that you keep after processing. Do this experiment on your own:
    Code:
    video_org=AviSource("<filename>.avi")
    
    deinterlaced=video_org.AssumeTFF().QTGMC(lossless=1)
    interlaced=deinterlaced.AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
    
    ### check difference between original video and original video after QTGMC_lossless and re-interlacing ###
    difference_video_org_interlaced=Subtract(video_org, interlaced).Levels(65, 1, 255-64, 0, 255, coring=false)
    
    stackhorizontal(\
    subtitle(video_org,"video_org",size=20,align=2),\
    subtitle(interlaced,"interlaced",size=20,align=2),\
    subtitle(difference_video_org_interlaced,"difference",size=20,align=2)\
    )
    ############################################################################################################
    
    ### check difference between original video separated fields and original video after QTGMC_lossless and re-interlacing separated fields ###
    #video_org_sep_tff=video_org.AssumeTFF().separateFields()
    #interlaced_sep_tff=interlaced.AssumeTFF().separateFields()
    	# equivalent to
    	#deinterlaced_sep_sel403_tff=deinterlaced.AssumeTFF().SeparateFields().SelectEvery(4,0,3)
    
    #difference_video_org_sep_tff_interlaced_sep_tff=Subtract(video_org_sep_tff, interlaced_sep_tff).Levels(65, 1, 255-64, 0, 255, coring=false)
    
    #stackhorizontal(\
    #subtitle(video_org_sep_tff,"video_org_sep_tff",size=20,align=2),\
    #subtitle(interlaced_sep_tff,"interlaced_sep_tff",size=20,align=2),\
    #subtitle(difference_video_org_sep_tff_interlaced_sep_tff,"difference",size=20,align=2)\
    #)
    ############################################################################################################################################
    Both methods have their pros and cons.
    The lossless deinterlace/reinterlace method has no cons.

    edit: also consider that we are testing a quite static video here. When benchmarking a high-motion video, the superiority of applying the temporal degrain over a motion-compensated architecture (MCDegrainSharp) to frames rather than to fields will be even more evident
    Last edited by lollo; 6th Feb 2023 at 05:03.
  14. What about QTGMC not lossless? I’ve never used lossless and I have been producing deinterlaced mp4 files so they can be watched on any device.

    I know there’s some debate as well over which preset to use (slower faster etc). But I haven’t come across too much discussion about lossless in my research.
  15. Captures & Restoration lollo's Avatar
    What about QTGMC not lossless
    QTGMC lossless makes sense only if you want to keep the original fields/frames, process them, and interlace back.

    For a real deinterlace operation the lossless option is not appropriate, because it prevents QTGMC from performing at its best at removing artifacts while generating the deinterlaced frames.
  16. Originally Posted by Sharc View Post
    Well well, just take another frame, e.g. #375 and compare the wood panel structure above the boy's head and the boy's sleeve....
    Also, take into account that QTGMC applies some extra sharpening of its own. Both methods have their pros and cons.
    TBH, I am so appalled by the stairstepping on diagonals that I don't care about losing detail on the sleeve. I think MSU Deinterlacer treats straight diagonals better.
    Last edited by Bwaak; 6th Feb 2023 at 12:52.
  17. Captures & Restoration lollo's Avatar
    TBH, I am so appalled by the stairstepping on diagonals that I don't care about losing detail on the sleeve.
    That's because Sharc prefers to keep the video interlaced, and leave to the player the deinterlacing task.

    A (generally better) alternative is to perform a real deinterlace on the original video, and then apply the filtering. This approach may introduce over-processing, because QTGMC does denoising and sharpening by itself. It must be used with caution and the right options.
  18. Originally Posted by Bwaak View Post
    TBH, I am so appalled by the stairstepping on diagonals that I don't care about losing detail on the sleeve. I think MSU Deinterlacer treats straight diagonals better.
    Stairstepping diagonals? Do you watch the interlaced frames on a progressive monitor and complain about the combing? The files were kept interlaced (same as the source) or were re-interlaced after deinterlacing.
    Mind you upload a sample?
  19. Originally Posted by helloImNoob View Post
    ooh, scary VHS movie XD, no offense
    you sound like a person born after h265 was released

    Stairstepping diagonals? Do you watch the interlaced frames on a progressive monitor and complain about the combing?
    [Attachment 69104]


    unless you remove the whole field, all of the interpolation deinterlacers (like yadif2 for example) tend to leave jagged lines
    Last edited by rrats; 6th Feb 2023 at 16:37.
  20. Originally Posted by lollo View Post
    TBH, I am so appalled by the stairstepping on diagonals that I don't care about losing detail on the sleeve.
    That's because Sharc prefers to keep the video interlaced, and leave to the player the deinterlacing task.
    Ah, I missed that. Thanks.
    Originally Posted by Sharc View Post
    Stairstepping diagonals? Do you watch the interlaced frames on a progressive monitor and complain about the combing? The files were kept interlaced (same as the source) or were re-interlaced after deinterlacing. Mind you upload a sample?
    I did not watch the video, I only took a look at the stills. I made a silly assumption that the one marked with "qtgmc" was deinterlaced, but turned out it was not.
  21. Captures & Restoration lollo's Avatar
    unless you remove the whole field, all of the interpolation deinterlacers (like yadif2 for example) tend to leave jagged lines
    Not really; even a simple, non-optimized QTGMC() is not too bad (for the vertical white lines some fine tuning is needed)

    [Attachment 69107: lolloX.png]

    Code:
    deinterlaced=video_org_st_sl.QTGMC().addborders(10,10,10,10)
    
    stackhorizontal(\
    subtitle(video_org.SelectEvery(1,0,0).addborders(10,10,10,10),"video_org",size=20,align=2),\
    subtitle(deinterlaced,"deinterlaced",size=20,align=2)\
    )
  22. I finally had some time to go through all of these posts again and try some of the scripts you all shared to clean up the video. Thank you for taking the time to help me out!

    Questions:

    1. If I am ultimately creating deinterlaced output files using QTGMC, would it still be beneficial to use MCDegrainSharp, or does QTGMC take care of that filtering? I never used MCDegrainSharp so I don't have the plugin installed and I'm having trouble finding where to download it, if someone could share the link.

    2. I also never used SmoothTweak or SmoothLevels and don't have them installed. Always used ColorYUV() to fix levels and colors. Is there any notable difference between SmoothTweak and SmoothLevels compared with ColorYUV that I should install and learn the parameters? (I sometimes used to use just plain old Tweak and Levels but have since always defaulted to ColorYUV as that is what I'm now comfortable with.)

    3. I'm struggling with figuring out the correct code / order to do things when I have to color correct 2 parts of a video separately but then join them. Do I QTGMC first and then color correct after keeping each section separate til the end? I'm getting a little confused on how to carry the 2 variables throughout and return the right thing, without constantly redefining each step. I'm getting from point A to point B but I think I am going the long way. What's the right way to do this?

    Example:

    Code:
    SetFilterMTMode("QTGMC", 2)      
    Avisource("capture.avi")
    
    Part1=Trim(76399,86293)      #This is the problem scene that is red
    Part2=Trim(86296,90746)   #This is the following scene, colors are fine
    
    Part1Deinterlaced=Part1.AssumeTFF().QTGMC(Preset="slower", Edithreads=2, Sharpness=0.8)  
    Part2Deinterlaced=Part2.AssumeTFF().QTGMC(Preset="slower", Edithreads=2, Sharpness=0.8)  
    
    Part1ColorCorrect=Part1Deinterlaced.ColorYUV(off_u=-10, off_v=-25, gain_y=70, off_y=-20, cont_u=-100, cont_v=-125)  #fixred
    
    Part2ColorCorrect=Part2Deinterlaced.ColorYUV(gain_y=70, off_y=-20)
    
    Part1ColorCorrect++Part2ColorCorrect
    
    #Then crop/add borders, resize, and intro titling
  23. Originally Posted by Christina View Post
    Questions: ....
    My 3 cents only:

    1. QTGMC does a lot of good things under the hood. See the AviSynth wiki,
    http://avisynth.nl/index.php/QTGMC. Lots of parameters to tweak to obtain 'best' results, but you may most probably be happy with what you get using its presets. QTGMC is basically an excellent deinterlacer. Whether you need something else depends on the source, its defects (noise, grain, rainbows, dot crawl, comets, spots, color bleeding, halos etc.) and your ambition as to what you want to "fix" and what matters to your eyes.

    Personally I have (in the past) not been entirely happy how QTGMC processes the noise of my interlaced VHS sources. So one may define another denoiser within QTGMC, or tweak it, or configure it such that it leaves the noise alone as much as possible and then add an external denoiser/degrainer to taste. There are many posts and opinions which address QTGMC's 'best' settings for various scenarios. Members have spent countless hours on it.

    MCDegrainSharp was just an example of a filter (actually a script). There are many more filters; see the AviSynth catalogue http://avisynth.nl/index.php/External_filters#Adjustment_Filters. Every filter has its pros and cons, with desired effects and undesired side effects. One has to try and judge the benefits.
    General rule: Don't overdo with filtering, and check the result on your TV (or your intended main player) as well. It makes sense to do pixel-peeping on the PC for examining details and effects, but at the end we cannot judge the beauty of a waterfall by analyzing some of its droplets only.

    2. Basically you can achieve the same or very similar results with different color-tweaking filters. Some operate in YUV and some in RGB colorspace though. Personally I found Tweak (or SmoothAdjust) to be more intuitive and easier to use than ColorYUV. ColorYUV gives you all the freedom for tweaking the YUV parameters though.
    More info about SmoothAdjust here: http://avisynth.nl/index.php/SmoothAdjust

    3. I would even consider not capturing long videos in one run, but capturing in parts, processing the parts individually, and joining the final results at the end (less risk from dropped frames, easier to re-do a capture ....)
    Otherwise I would split the long capture into several parts (using VDub in stream-copy mode, for example), then process the various parts individually and independently, and join them together at the end.
    (You may also consider an NLE (Shotcut, Kdenlive etc.) for joining the parts on the timeline, add transition effects etc., but that's another topic)

    Added:
    Here a version of MCDegrainSharp in case you want to play with it:
    Image Attached Files
    Last edited by Sharc; 7th Feb 2023 at 05:36. Reason: typos
  24. Captures & Restoration lollo's Avatar
    Just as complement to what Sharc said, with which I agree:

    1. If I am ultimately creating deinterlaced output files using QTGMC, would it still be beneficial to use MCDegrainSharp, or does QTGMC take care of that filtering?
    Alongside the deinterlacing itself, QTGMC does its own denoising and sharpening, but it is not as good as a dedicated denoiser and sharpener. I personally try to reduce the denoising/sharpening in QTGMC and use a specific denoiser (TemporalDegrain2 is a fan favorite; there are many others) and a specific sharpener (LSFmod is a fan favorite; there are many others).

    3. I'm struggling with figuring out the correct code / order to do things
    Concerning the order, the color correction should come at the end, but sometimes it is better to do a preliminary color correction first, so you work with video in better shape while checking the output of each filter.

    In short, a generic flow is the following:

    - level correction: needed only if you plan to move from YUV color space to RGB color space, because Y values <16 and >235 are clipped. For example, if you wish to later use ColorMill in VirtualDub (an RGB plugin, superior to ColorYUV or SmoothTweak). It is better to act on luma levels only, keeping chroma unchanged (LevelsLumaOnly). If you stay in YUV color space it is not needed.

    - preliminary color correction (ColorYUV, SmoothTweak, ColorMill)

    - check, and if necessary fix the levels again after the preliminary color correction, if they moved away from the 16-235 range and RGB processing is applied later

    - filtering: deinterlace/.../denoise/.../sharpening/... (... = any other filter for specific actions, like stabilization, dot removal, etc.)

    - check, and if necessary fix the levels again, because the previous filtering may expand the levels to 0-255; needed for further processing or if your display does not support full range

    - final main color correction (an NLE tool like DaVinci Resolve can be used, but staying in AviSynth/VirtualDub can also be effective)

    - check, and if necessary fix the levels again if the previous step acted on Y values and your display options require it.

    This is just a generic flow. Each video, or portion of a video, is different and requires a specific filtering and/or filtering order. You have to experiment on your captures the best filters to be used, the best parameters for each filter, and the best order of processing.
  25. @ Christina: If you want to experiment with filters and don't like to bother much with AviSynth or VapourSynth details and syntax, I would recommend Selur's Hybrid. It comes packed with a rich set of AviSynth, VapourSynth and ffmpeg filters, including any dependencies. As you are already familiar with AviSynth basics, it could be worth trying Hybrid and familiarizing yourself with it.
    https://www.selur.de/
    Last edited by Sharc; 7th Feb 2023 at 06:58. Reason: Link added
  26. Originally Posted by Christina View Post
    2. I also never used SmoothTweak or SmoothLevels and don't have them installed. Always used ColorYUV() to fix levels and colors. Is there any notable difference between SmoothTweak and SmoothLevels compared with ColorYUV that I should install and learn the parameters? (I sometimes used to use just plain old Tweak and Levels but have since always defaulted to ColorYUV as that is what I'm now comfortable with.)

    3. I'm struggling with figuring out the correct code / order to do things when I have to color correct 2 parts of a video separately but then join them. Do I QTGMC first and then color correct after keeping each section separate til the end?
    Tweak, SmoothTweak, and ColorYUV do many of the same things. The advantage of SmoothTweak is that it can dither the output. This can lead to less posterization (banding) on smooth shallow gradients. How much that dithering helps (or hurts) depends on the nature of the source and any filtering that happens earlier and/or later in the script. The inherent noise (natural dithering) in VHS caps makes that advantage less noticeable. Also, if you apply any noise reduction after the tweak that advantage may disappear. I usually perform the brightness/color adjustments before QTGMC or noise reduction.
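    The banding mechanism is easy to reproduce in a toy model (this is not SmoothTweak's actual algorithm, just an illustration of why rounding after a tweak leaves gaps that dithering can fill):

```python
import random

ramp = list(range(16, 144))  # a smooth 8-bit luma gradient, 128 steps

# plain 1.5x gain: rounding leaves unused output codes, i.e. the gradient
# now jumps by 2 in places -> visible banding on shallow gradients
stretched = [round(y * 1.5) for y in ramp]

# adding sub-level noise before rounding (crude dithering) can land on the
# skipped codes, trading banding for a little noise
random.seed(0)
dithered = [round(y * 1.5 + random.uniform(-0.5, 0.5)) for y in ramp]

print(len(set(stretched)), "codes used over a span of",
      max(stretched) - min(stretched) + 1)  # 128 codes used over a span of 191
print("with dithering:", len(set(dithered)), "codes used")
```

    The 63 missing codes in the stretched ramp are the bands; the dithered version spreads values across the gaps instead.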

    Another way of filtering sections of the video differently is simply to produce a separate video for each filter sequence, then use ReplaceFramesSimple() (from the RemapFrames plugin) to pick the right video for each section:

    Code:
    SourceFilter()
    filtered1 = Some().Filter().Sequence()
    filtered2 = Different().Filters()
    filtered3 = More().Different().Filters()
    ReplaceFramesSimple(filtered1, filtered2, Mappings="[5414 5488]")
    ReplaceFramesSimple(last, filtered3, Mappings="[7501 7725]")
    The first ReplaceFramesSimple will return filtered2 for frames 5414 through 5488, filtered1 for the rest. The second ReplaceFramesSimple takes the output of the first, and replaces frames 7501 through 7725 with frames from filtered3.
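    The routing these chained calls perform can be modeled in a few lines, which is handy for double-checking frame ranges before committing to a long encode (a toy model of the mapping semantics, using the frame numbers from the example above):

```python
def pick_source(frame, mappings, fallback="filtered1"):
    # mappings: (first, last, source) ranges; later entries win, mirroring
    # chained ReplaceFramesSimple calls where each call overrides the previous
    chosen = fallback
    for first, last, source in mappings:
        if first <= frame <= last:
            chosen = source
    return chosen

mappings = [(5414, 5488, "filtered2"), (7501, 7725, "filtered3")]
print(pick_source(5450, mappings))  # -> filtered2
print(pick_source(6000, mappings))  # -> filtered1
print(pick_source(7725, mappings))  # -> filtered3
```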
    Last edited by jagabo; 7th Feb 2023 at 08:01.
  27. Originally Posted by Sharc View Post
    Btw. I noticed that your "old JVC - ES10" captures have a few dropped frames. Keep an eye on this. If you can't avoid it this could justify to use the newer S-VHS model with internal TBC for capturing the video, if it helps.
    Captured about an hour of the tape last night using the regular JVC VCR with the ES10 and used the script below provided by Jagabo to detect duplicate frames, and there were only about 6 or 7. They all appeared in the first second or two of when one scene changed to the next scene on the original recording (i.e. when camcorder was stopped and started) so I'm pretty happy about that! For some reason I am not getting the same dropped frames I got when I did my first short test capture last week that I shared on this thread. I haven't done anything different except fast forward the tape to the end and rewind in this VCR.

    Jagabo, thank you so much for sharing this script. It was extremely helpful!!!

    Now, I can decide to use this capture since I didn't have too many dropped frames, or I can attempt to sync the video from the S-VHS Deck with the audio from this capture.

    If I want to use the audio from one cap and video from another, what tools (besides trial and error) do I use to do that? Do I just try to find the same video frame in each capture in VirtualDub and trim from that point on and then mux in AviSynth?
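    One semi-automated way to find the trim point: extract a short stretch of audio from both captures as mono samples and search for the lag with the best correlation; the winning lag tells you how far to shift one capture before muxing. A brute-force sketch of the idea on toy signals (not a finished tool):

```python
def best_lag(ref, other, max_lag):
    # Slide `other` against `ref`; a positive result means the content of
    # `other` starts that many samples later than in `ref`
    def score(lag):
        return sum(ref[i] * other[i + lag]
                   for i in range(len(ref)) if 0 <= i + lag < len(other))
    return max(range(-max_lag, max_lag + 1), key=score)

# toy signals: the same audio burst, delayed by 3 samples in `other`
ref   = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
other = [0, 0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0]
print(best_lag(ref, other, max_lag=4))  # -> 3
```

    Once you know the offset, the standard AviSynth built-ins do the rest: Trim the late capture (or DelayAudio by offset/samplerate seconds), then AudioDub(video_capture, audio_capture) to mux.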




    Originally Posted by jagabo View Post
    This script: [the duplicate-frame detection script quoted in full above] writes this list of frames (and the average luma of that frame after subtraction) to DupFrames.txt. Open the script in VirtualDub(2) and use File -> Run Video Analysis Pass to parse the entire video.
  28. Originally Posted by Christina View Post
    For some reason I am not getting the same dropped frames I got when I did my first short test capture last week that I shared on this thread. I haven't done anything different except fast forward the tape to the end and rewind in this VCR.
    That's not totally unusual. You will never get identical captures in repeated runs because capturing is an analog process, and rewinding the tape could have moved some 'dirt' along the tape, for example. Also, the computer load may vary. This randomness can even make comparing captures difficult at times.
    Sidenote: what capturing software are you using? It is generally agreed that one should use VirtualDub 1.10.4 rather than VirtualDub2 for reliable capturing. Personally I preferred AmarecTV, which delivered the most stable results here re. no glitches and no sync issues.
  29. Captures & Restoration lollo's Avatar
    Personally I preferred AmarecTV which delivered the most stable results here re.no glitches and no sync issues.
    I agree. In addition, both AmarecTV and VirtualDub mark the inserted frames in the captured file and report the inserted and dropped frames in the log. Opening the file with VirtualDub and using Go -> Prev drop frame or Go -> Next drop frame, it is possible to display the inserted frames (the name "drop" in the menus is misleading).
  30. Originally Posted by Sharc View Post
    Sidenote: what capturing SW are you using? It is generally agreed that one should use VirtualDub 1.10.4 rather than VirtualDub2 for reliable capturing. Personally I preferred AmarecTV which delivered the most stable results here re.no glitches and no sync issues.
    I'm using VirtualDub 1.9.11 for capturing. I use VirtualDub 2 for some other things but not for capturing. Never tried AmarecTV, but I haven't had any sync issues with VirtualDub (that I am aware of, but I'm pretty sensitive to audio being out of sync even the slightest bit so I would most likely notice.)

    Originally Posted by lollo
    I agree. In addition, both AmarecTV and VirtualDub marks the inserted frames in the captured file and report the inserted and the dropped frame in the log. Opening the file with VirtualDub and using Go -> Prev drop frame or Go -> Next drop frame is possible to display the inserted frame (the name "drop" in the menus is wrong).
    I tried this, but it doesn't do anything. Is it possible I don't have the logging enabled?


