VideoHelp Forum
  1. Hi all,

    New to all this, done my first capture of a Video8 Tape. Workflow was Video8 Tape -> PAL Handycam -> S-Video out -> TBC-1000 -> USB-710 -> VirtualDub Huffyuv

    My first step has been to de-interlace, and I am only a few days into learning AviSynth. I have played around with manual scripting and Hybrid and arrived at this configuration



    Code:
    AVISource("VIDEO.avi")
    
    SetFilterMTMode("DEFAULT_MT_MODE", 2)
    
    ConvertToYV12(interlaced=true) 
    
    
    AssumeTFF()
    QTGMC(TR0=2, TR1=2, TR2=1, Rep0=1, Rep1=0, Rep2=4,
    \ DCT=5, ThSCD1=300, ThSCD2=110,
    \ SourceMatch=3, Lossless=2, Sharpness=1.2, SLMode=1, Sbb=0, MatchPreset="slow",
    \ NoiseProcess=2, GrainRestore=0.0, NoiseRestore=0.4, NoisePreset="slow",
    \ StabilizeNoise=false, NoiseTR=0, NoiseDeint="bob")
    
    ConvertToYUY2()


    Excuse the terrible explanation, but if you look at the attached images, there is a blurriness (don't know how to describe it) to the window frames and some jagged lines on the flag poles. It seems to impact a lot of things that are white in the video. Any guidance on how to fix or improve that?

    Any other general comments or feedback is much appreciated.


    Image
    [Attachment 73536 - Click to enlarge]

    Image
    [Attachment 73538 - Click to enlarge]


    EDIT - I should add some more context: I am digitising around 30 Video8 tapes. The bulk of it will be uploaded to YouTube to share with family, who will view it on their smart TVs. A select few scenes will be compiled together and played on an outdoor projector for a movie night.

    EDIT 2 - I have uploaded a couple more videos

    Clip 1.avi, Clip 2.avi are deinterlaced
    Norway Raw Short.avi is the interlaced capture without any filters, processing...
    Image Attached Files
    Last edited by VideoYak; 30th Aug 2023 at 10:26.
  2. Member Skiller
     Join Date: Oct 2013
     Location: Germany
    Jagged lines: a line TBC during capture would fix this. There is no way of fixing this properly after capture.

    Blurriness on the horizontal axis is normal. This is what Video8 and all similar tape formats look like because of their limited bandwidth.



    I noticed you are using my QTGMC settings, but changed the sharpness.

    In the QTGMC documentation it says:

    Since source-matching recovers sharpness, the Sharpness default is reduced to 0.2. Source-matching may initially appear less sharp than standard processing because it will not oversharpen. However, be careful if raising the sharpness, because sharpness limiting is switched off by default. This is because sharpness limiting reduces the accuracy of these modes. Use the MatchEnhance setting to exaggerate additional detail found by modes 2 & 3. This gives a sharpening / detail enhancing effect and works well at sensible levels - but it's a slight cheat that should be used with care as it can easily enhance noise.
    In other words, SLMode=1 is not recommended in conjunction with SourceMatch.
    Better to use SLMode=0 (the default when SourceMatch is used) and adjust the Sharpness setting. A good range is 0.1 (neutral) to 0.4 (sharp). You can also try a higher MatchEnhance setting if you like (default is 0.5, recommended max is 1.0).
    Last edited by Skiller; 30th Aug 2023 at 19:03.
  3. Skiller, I think you are right, I did find your configuration! It was the best starting point to learn; doing it from scratch was a very steep learning curve. Your configuration gave me the best results, so thank you! To be honest, I don't know why I changed the sharpness. I'll fix that up and give it a shot. I did have the line TBC on the Sony camcorder running and a frame TBC as well, so I assume this is as good as I will get. It's good to know this; I was worried it was something I was getting wrong in the deinterlacing.
  4. Do you see this as better?
    Image Attached Files
  5. Originally Posted by jagabo View Post
    Do you see this as better?
    Yes, a lot better than what I have produced so far - I am still playing around with QTGMC settings. Could I please trouble you for your process/code/workflow?
  6. Here's the script I used:

    Code:
    LWLibavVideoSource("NORWAY RAW SHORT.avi") 
    AssumeTFF()
    ColorYUV(off_y=-23, gamma_y=50) # small black level and gamma adjustment
    
    # the top field is slightly brighter than the bottom field, so make it darker
    SeparateFields()
    top = SelectEven().ColorYUV(gain_y=-1)
    bot = SelectOdd()
    Interleave(top,bot)
    Weave()
    
    BilinearResize(480, 576) # smaller frame reduces noise and some of the horizontal time base errors
    QTGMC() # deinterlace 
    FineDehalo(rx=2.5, ry=1.0) # reduce the worst VHS oversharpening halos
    SMDegrain(thsad=100, tr=2, PreFilter=4) # motion compensated temporal noise reduction
    Sharpen(0.5, 0.0) # sharpen  horizontally a bit
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=720, fheight=576) # restore the frame size
    Crop(8,0,-8,-0) # get rid of ITU borders
    
    prefetch(8)
  7. Thanks jagabo! Really appreciate you sharing that. A lot for me to learn, and this really helps.

    About the need for the slight black level and gamma adjustment: could you just tell by looking, or did you use a tool of some sort? I don't have a trained eye (and years of experience)!
  8. Originally Posted by VideoYak View Post
    About the need for the slight black level and gamma adjustment: could you just tell by looking, or did you use a tool of some sort? I don't have a trained eye (and years of experience)!
    I noticed that the black level was too high just by looking at the video -- there was no true black aside from the front and back porch (the left and right edges of the frame). I used Histogram() (really a waveform monitor) to verify the black level and tune the adjustments (y offset and gamma). I chose values that were an acceptable compromise for the entire video. You may need to make further adjustments to accommodate other shots. Or work shot by shot to give each one the look you want.

    Code:
    LWLibavVideoSource("NORWAY RAW SHORT.avi") 
    AssumeTFF()
    ColorYUV(off_y=-23, gamma_y=50) # small black level and gamma adjustment
    TurnRight().Histogram().TurnLeft() # waveform monitor
    off_y=-23 pulls everything down by 23 units, so brights get darker as well (making them too dark in this case). With the black level down I found some of the details in the shadows were harder to see, so I increased the gamma to bring those details out a bit. Then I tweaked the two to get something I thought looked acceptable. I didn't tune it any further to adjust the white level, as the main point of this thread was deinterlacing, reducing time base jiggle and other noise.
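    The arithmetic jagabo describes for off_y can be sketched in Python (my own illustration of the math, not AviSynth code; the clip to 0..255 is an assumption about edge handling):

    ```python
    def off_y(y, offset):
        """Shift an 8-bit luma value by a fixed offset, clipping to 0..255."""
        return max(0, min(255, y + offset))

    # off_y=-23 pulls darks and brights down by the same amount,
    # which is why the brights ended up too dark as well.
    shifted = {y: off_y(y, -23) for y in (16, 39, 128, 235)}
    print(shifted)  # {16: 0, 39: 16, 128: 105, 235: 212}
    ```

    Gamma, by contrast, is a curve: it lifts the midtones while leaving the end points (nearly) in place, which is why the two were combined here.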

    A waveform monitor explanation: https://forum.videohelp.com/threads/340804-colorspace-conversation-elaboration#post2121568
  9. I have played around with the waveform monitor, and think I understand.

    Please correct me if I am wrong on the two points below:

    POINT 1: If something should be black in the scene, then it should be as close to the bottom yellow line as possible.

    E.g. in the screen grab below, some parts of the regions circled in black should be closer to the bottom yellow bar, as the soldier has black clothes on.

    POINT 2: As long as I have not gone beyond those yellow bars, then I have not crushed any colors during capture.

    Image
    [Attachment 73642 - Click to enlarge]
  10. The circle in the right corner indicates that maybe clamping is used, or am I wrong?
  11. Originally Posted by mammo1789 View Post
    The circle in the right corner indicates that maybe clamping is used, or am I wrong?
    What you see at the very right is just the narrow black border, which is at Y=16 (or RGB (0,0,0)). Same for the head switching crud. I don't see clamping in the active picture.
  12. Originally Posted by VideoYak View Post
    I have played around with the waveform monitor, and think I understand.

    Please correct me if I am wrong on the two points below:

    POINT 1: If something should be black in the scene, then it should be as close to the bottom yellow line as possible.

    E.g. in the screen grab below, some parts of the regions circled in black should be closer to the bottom yellow bar, as the soldier has black clothes on.

    POINT 2: As long as I have not gone beyond those yellow bars, then I have not crushed any colors during capture.
    Point 1: Yes. Check for various scenes.

    Point 2: Yes, for capturing. Still, you may see crushed or clipped darks and/or whites which are baked into the source.
    (The waveform of the active picture should in any case stay between the top and bottom yellow bars (luma range Y 16...235, aka TV range, aka limited range) to be on the safe side for subsequent decoding to RGB, IMO.)
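    Those limited-range bounds (Y 16..235) relate to full range by a simple linear map when decoding to RGB. A Python sketch of that arithmetic (my own illustration, nothing from the scripts in this thread):

    ```python
    def limited_to_full(y):
        """Map limited-range luma (16..235) to full range (0..255)."""
        return round((y - 16) * 255 / 219)

    def full_to_limited(y):
        """Map full-range luma (0..255) to limited range (16..235)."""
        return round(y * 219 / 255 + 16)

    # Luma outside 16..235 has nowhere to go in RGB and gets clipped,
    # which is why the waveform should stay between the yellow bars.
    print(limited_to_full(16), limited_to_full(235))  # 0 255
    ```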
    Last edited by Sharc; 5th Sep 2023 at 06:35.
  13. Thanks Sharc, that makes sense.


    So in Jagabo's code
    Code:
    ColorYUV(off_y=-23, gamma_y=50)
    I understand off_y brings everything down, then gamma_y shifts the overall brightness without moving either the "top end whites" or the "low end blacks".

    I am curious now to learn what it means to adjust the white level:

    Originally Posted by jagabo View Post
    I didn't tune it any further to adjust the white level as the main point of this thread was deinterlacing, reducing time base jiggle and other noise.
    My guess is that it means playing with the Gain (i.e. gain_y), along with the offset and gamma value to get proper whites as well?

    My final question: Jagabo somehow picked up that "the top field is slightly brighter than the bottom field". Does the waveform monitor show this? Or is there a way I could have seen this?
  14. Originally Posted by VideoYak View Post
    So in Jagabo's code
    Code:
    ColorYUV(off_y=-23, gamma_y=50)
    I understand off_y brings everything down
    Yes.

    Originally Posted by VideoYak View Post
    then gamma_y shifts the overall brightness without moving either the "top end whites" or the "low end blacks".
    Yes. But there is some nuance here. With full range video full black remains at zero and full white at 255. Here you see a linear grey ramp from 0 to 255 on the left, and after ColorYUV(gamma_y=200) on the right:

    Image
    [Attachment 73646 - Click to enlarge]


    Code:
    GreyRamp()
    ConvertToYV24(matrix="pc.601")
    StackHorizontal(last, ColorYUV(gamma_y=200))
    TurnRight().Histogram().TurnLeft()
    You can find my GreyRamp() filter in these forums.

    But with limited range (Y=16 to 235) the end points move:

    Image
    [Attachment 73648 - Click to enlarge]


    Code:
    GreyRamp()
    ConvertToYV24(matrix="rec601")
    StackHorizontal(last, ColorYUV(gamma_y=200))
    TurnRight().Histogram().TurnLeft()
    Since we're working with integers that can only range from 0 to 255, the increase in shadow detail comes at the expense of losing detail in brighter parts of the frame. Working at higher bit depths can alleviate this problem.
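    That precision point can be demonstrated with a short Python sketch: apply the same brightening gamma curve (2.0 here, an arbitrary value chosen for illustration) to the bright end of a ramp at 8-bit and at 16-bit precision, and count how many distinct codes survive the rounding.

    ```python
    def apply_gamma(y, gamma, peak):
        """Brightening gamma: normalize, raise to 1/gamma, rescale, round to int."""
        return round((y / peak) ** (1 / gamma) * peak)

    # 64 bright 8-bit inputs collapse into fewer distinct 8-bit outputs,
    # because the curve compresses the top of the range.
    out8 = {apply_gamma(y, 2.0, 255) for y in range(192, 256)}

    # The same 64 levels, promoted to 16 bits, all stay distinct.
    out16 = {apply_gamma(y * 257, 2.0, 65535) for y in range(192, 256)}

    print(len(out8), len(out16))  # fewer than 64 vs exactly 64
    ```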

    Originally Posted by VideoYak View Post
    I am curious now to learn what it means to adjust the white level:

    Originally Posted by jagabo View Post
    I didn't tune it any further to adjust the white level as the main point of this thread was deinterlacing, reducing time base jiggle and other noise.
    My guess is that it means playing with the Gain (i.e. gain_y), along with the offset and gamma value to get proper whites as well?
    Yes.

    Originally Posted by VideoYak View Post
    My final question: Jagabo somehow picked up that "the top field is slightly brighter than the bottom field". Does the waveform monitor show this? Or is there a way I could have seen this?
    You can see it on the waveform monitor and by looking at the picture, although a 1 unit difference is pretty hard to see, especially when there is a lot of noise. In your source it was more than just a levels offset problem (some gamma too), but I didn't look any deeper. I originally thought it was going to be more than one unit, hence the code. You can probably leave that code out -- you won't see much difference.

    Code:
    # get source
    SeparateFields()
    #BinomialBlur(10) # optional blur to reduce noise
    TurnRight().Histogram().TurnLeft()
    Watch for the waveform to bounce up and down with each frame. With larger differences you can see the brightness increase and decrease as you step through frames (really fields at this point). And with larger differences you can see alternating darker and lighter lines just looking at an interlaced frame.
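    The same field-brightness check can be sketched outside AviSynth too; here is a Python illustration on synthetic luma rows (the frame data is made up purely to show the idea):

    ```python
    def field_means(frame):
        """Mean luma of the top field (even rows) and bottom field (odd rows)."""
        top = [v for row in frame[0::2] for v in row]
        bot = [v for row in frame[1::2] for v in row]
        return sum(top) / len(top), sum(bot) / len(bot)

    # Synthetic 4x4 interlaced frame: even rows one unit brighter than odd rows.
    frame = [
        [101, 101, 101, 101],
        [100, 100, 100, 100],
        [101, 101, 101, 101],
        [100, 100, 100, 100],
    ]
    top_mean, bot_mean = field_means(frame)
    print(top_mean - bot_mean)  # a consistent positive difference means the
                                # top field is brighter, the kind of offset
                                # ColorYUV(gain_y=-1) was correcting
    ```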
  15. Originally Posted by jagabo View Post

    But with limited range (Y=16 to 235) the end points move:

    Image
    [Attachment 73648 - Click to enlarge]


    Code:
    GreyRamp()
    ConvertToYV24(matrix="rec601")
    StackHorizontal(last, ColorYUV(gamma_y=200))
    TurnRight().Histogram().TurnLeft()

    Using Levels() instead of ColorYUV() would keep the end points for limited range, not losing detail in the brights. No?

    Code:
    GreyRamp()
    ConvertToYV24(matrix="rec601")
    StackHorizontal(last, Levels(16, 2.0, 235, 16, 235, coring=false,dither=false))
    TurnRight().Histogram().TurnLeft()
    [Attachment 73651 - levels.png]
  16. Yes, Levels() can be used for gamma, with the advantage that it can be applied to a limited range of levels.

    By the way, here are some possible Levels() values for the OP's sample:

    Code:
    Levels(36, 1.2, 235, 16, 235, coring=false, dither=false)
    Last edited by jagabo; 5th Sep 2023 at 13:38. Reason: added levels values
  17. Originally Posted by jagabo View Post
    Yes, Levels() can be used for gamma, with the advantage that it can be applied to a limited range of levels.

    By the way, here are some possible Levels() values for the OP's sample:

    Code:
    Levels(36, 1.2, 235, 16, 235, coring=false, dither=false)
    I have read the levels() page on Avisynth, but don't fully understand.

    Any recommended posts on the forums that might guide a newbie?
  18. It's actually pretty simple: Levels linearly interpolates the specified input range (input_low to input_high) to the specified output range (output_low to output_high). I.e., Y values that start at input_low become output_low. Values that start at input_high become output_high. Everything in between is linearly interpolated between those ends -- unless you specify a gamma value. If you do, then that gamma curve is applied to the input curve as it's being interpolated.

    So for limited range YUV you examine your video to see where the current blacks are (I decided the blacks were at 36). That becomes input_low. Then you see where the current whites are. That becomes input_high (they were already about 235, so I used that for input_high). Then you set output_low to 16 and output_high to 235 (the defined black and white levels for limited range 8 bit YUV video). If after doing that you find there's not enough detail in the darks, you can add a gamma value (I used 1.2 to bring out some dark detail).

    When coring is true (the default), any pixels below input_low are set to input_low, and all values above input_high are set to input_high, before they are scaled. This prevents any pixels from falling outside the specified output range.
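    The interpolation described above can be written out as a small Python function (a sketch of the math only; AviSynth's exact rounding and its handling of out-of-range values without coring may differ):

    ```python
    def levels(y, in_low, gamma, in_high, out_low, out_high, coring=True):
        """Linearly remap in_low..in_high to out_low..out_high through a gamma curve."""
        if coring:
            # Clamp the input first so nothing can fall outside the output range.
            y = max(in_low, min(in_high, y))
        t = (y - in_low) / (in_high - in_low)   # position within the input range
        t = t ** (1 / gamma) if t > 0 else 0.0  # gamma > 1 lifts the darks
        return round(out_low + t * (out_high - out_low))

    # The values suggested for this capture: blacks at 36 move to 16,
    # whites already at 235 stay put.
    print(levels(36, 36, 1.2, 235, 16, 235), levels(235, 36, 1.2, 235, 16, 235))
    ```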
  19. Originally Posted by VideoYak View Post
    I have read the levels() page on Avisynth, but don't fully understand.

    Any recommended posts on the forums that might guide a newbie?

    Based on jagabo's GreyRamp() as a source, you can play with the parameters in line 3 (tweak= .....) to practice and visualize the effect of the Levels() tweaking.
    You may also want to replace the GreyRamp in line 1 with your actual video source.

    Source (left) and tweak (right) are displayed side-by-side for comparison.

    Code:
    source=GreyRamp()  #or use your source here
    source=source.ConvertToYV16(Matrix="Rec601")
    tweak=source.Levels(36,1.2,235,16,235,coring=false) #adjust as you like
    
    out=stackhorizontal(source,tweak)
    out=out.TurnRight().Histogram().TurnLeft()
    
    Return out
    
    
    
    ######################################################
    
    function GreyRamp()   #by jagabo
    {
       BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32")
       StackHorizontal(last, last.RGBAdjust(rb=1, gb=1, bb=1))
       StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
       StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
       StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
       StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
       StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
       StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
       StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
    }
    
    ######################################################
    Last edited by Sharc; 6th Sep 2023 at 04:27.
  20. Jagabo, I have been trying to run your script on my old laptop and have 2 questions to trouble you (or others) with. I am playing around with Levels() thanks to your and Sharc's advice, and also removing the horizontal sharpen to see what the impacts are and get a better understanding. I am running into trouble running the script.

    VirtualDub

    Video -> Fast Recompress
    Video -> Compression -> Huffyuv 2.1.1
    Audio -> Direct Stream Copy

    I get the error: Video format negotiation failed: use normal-recompress or full mode.

    If I do normal recompress or full mode, VirtualDub doesn't do the audio.
    Image
    [Attachment 73728 - Click to enlarge]


    I have two questions

    Q1: Am I doing something wrong?
    EDIT: I think I know what is causing the problem, but don't know how to fix it. I replaced LWLibavVideoSource("Video.avi") with AviSource("Video.avi") and then put in a ConvertToYV12() before FineDehalo and a ConvertToYUY2() at the end. It works in VirtualDub, but the video is horrible.

    Code:
    #LWLibavVideoSource("Video.avi") 
    AVISource("Video.avi") 
    AssumeTFF()
    BilinearResize(480, 576) # smaller frame reduces noise and some of the horizontal time base errors
    QTGMC()
    convertToYV12()
    FineDehalo(rx=2.5, ry=1.0) # reduce the worst VHS oversharpening halos
    SMDegrain(thsad=100, tr=2, PreFilter=4) # motion compensated temporal noise reduction
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=720, fheight=576) # restore the frame size
    convertToYUY2()
    prefetch(4)
    Original Code

    Code:
    LWLibavVideoSource("Video.avi") 
    AssumeTFF()
    BilinearResize(480, 576) # smaller frame reduces noise and some of the horizontal time base errors
    QTGMC()
    FineDehalo(rx=2.5, ry=1.0) # reduce the worst VHS oversharpening halos
    SMDegrain(thsad=100, tr=2, PreFilter=4) # motion compensated temporal noise reduction
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=720, fheight=576) # restore the frame size
    prefetch(4)
    Q2: I am going to buy a new/used PC to do this video processing. What are the important components? I understand HDD, CPU and RAM. I want to upscale before uploading to YouTube (I am still trying to get my head around whether I need to upscale to avoid YT using a bad codec for compression), but would I need a fancy GPU, or would an RTX 2XXX be sufficient?
    Last edited by VideoYak; 10th Sep 2023 at 06:27.
  21. I don't have a VFW Huffyuv decoder installed so I can't use AviSource to open your video. I used LWlibavVideoSource instead. That only gets the video. If you want audio too you need to add LWlibavAudioSource:

    Code:
    a = LWlibavAudioSource("Video.avi") # get the audio as the named stream "a"
    v = LWlibavVideoSource("Video.avi") # get the video as the named stream "v"
    AudioDub(v, a) # join video and audio streams together into a stream called "last"
    Note that whenever you don't specify a stream by name the name "last" is assumed by AviSynth, for both inputs and outputs. So:

    Code:
    LWlibavVideoSource("Video.avi")
    SomeFilter()
    is shorthand for:

    Code:
    last = LWlibavVideoSource("Video.avi")
    last = SomeFilter(last)
    LWlibavVideoSource with your source gives a YV16 video. That's a planar form of YUV 4:2:2 as opposed to the interleaved form of YUV 4:2:2 that AviSource returns. My script didn't change the pixel format and whatever encoder you used didn't accept it with Fast Recompress mode. Switching to Normal Recompress of Full Processing mode allows VirtualDub to change the pixel format to whatever that codec would accept.

    I don't see any reason your script would have returned a horrible video. What was wrong with it?

    None of the filters I used use the GPU for anything. If you're going to use GPU upscaling with some other software, the best GPU will depend on what that software supports.
    Last edited by jagabo; 10th Sep 2023 at 09:34.
  22. Thanks jagabo, steep learning curve, but it seems there are so many different ways to get a similar outcome. Good to get an understanding! Maybe after 30 tapes I'll be deemed a novice.

    You are right, my code does work. I was using this QTGMC call, which replaced the QTGMC() in your code. Just trialling it to see the difference, but it did not like it.
    Code:
    QTGMC(TR0=2, TR1=2, TR2=1, Rep0=1, Rep1=0, Rep2=4,
    \ DCT=5, ThSCD1=300, ThSCD2=110,
    \ SourceMatch=3, Lossless=2, Sharpness=0.1, SLMode=0, Sbb=0, MatchPreset="slow",
    \ NoiseProcess=2, GrainRestore=0.0, NoiseRestore=0.4, NoisePreset="slow",
    \ StabilizeNoise=false, NoiseTR=0, NoiseDeint="bob")
  23. QTGMC requires several other filters:

    http://avisynth.nl/index.php/QTGMC#Core_Plugins_and_Scripts

    And depending on what settings you use it may require even more:

    http://avisynth.nl/index.php/QTGMC#Optional_Plugins

    And don't be surprised if some of those filters have requirements of their own.
  24. I believe I have all the dependencies sorted; would AvsPmod throw an error if I didn't? I also checked with AviSynth Info Tool and only one error was detected. Output is down the bottom.

    There is something about replacing QTGMC() with the QTGMC call below that produces a funny video. But just to be sure I will double-check all the dependencies manually; I just thought an error would be flagged. Unless I have old versions of dependencies? I will re-download all the core and optional dependencies to the latest versions, re-do it with the QTGMC below, and post the video up if it is garbage.

    Code:
    QTGMC(TR0=2, TR1=2, TR2=1, Rep0=1, Rep1=0, Rep2=4,
    \ DCT=5, ThSCD1=300, ThSCD2=110,
    \ SourceMatch=3, Lossless=2, Sharpness=0.1, SLMode=0, Sbb=0, MatchPreset="slow",
    \ NoiseProcess=2, GrainRestore=0.0, NoiseRestore=0.4, NoisePreset="slow",
    \ StabilizeNoise=false, NoiseTR=0, NoiseDeint="bob")

    Code:
    [OS/Hardware info]
    Operating system:           Windows 10 (x64) (Build 19044)
    
    CPU:                        Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz / Kaby Lake-U (Core i5)
                                MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, FMA3, RDSEED, ADX, AVX, AVX2
                                2 physical cores / 4 logical cores
    
    
    [Avisynth info]
    VersionString:              AviSynth+ 3.7.3 (r4003, 3.7, x86_64)
    VersionNumber:              2.60
    File / Product version:     3.7.3.0 / 3.7.3.0
    Interface Version:          10
    Multi-threading support:    Yes
    Avisynth.dll location:      C:\WINDOWS\SYSTEM32\avisynth.dll
    Avisynth.dll time stamp:    2023-07-15, 13:48:08 (UTC)
    PluginDir2_5 (HKLM, x64):   C:\Program Files (x86)\AviSynth+\plugins64
    PluginDir+   (HKLM, x64):   C:\Program Files (x86)\AviSynth+\plugins64+
    
    
    [C++ 2.5 Plugins (64 Bit)]
    C:\Program Files (x86)\AviSynth+\plugins64+\warpsharp.dll  [2023-08-24]
    
    [C++ 2.6 Plugins (64 Bit)]
    C:\Program Files (x86)\AviSynth+\plugins64+\AutoLevels_x64.dll  [0.12.3.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\aWarpsharpMT.dll  [2.1.8.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\ConvertStacked.dll  [2023-07-15]
    C:\Program Files (x86)\AviSynth+\plugins64+\Deblock.dll  [2021-03-09]
    C:\Program Files (x86)\AviSynth+\plugins64+\Deflicker.dll  [0.6.0.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\DePan.dll  [2.13.1.6]
    C:\Program Files (x86)\AviSynth+\plugins64+\DePanEstimate.dll  [2.10.0.4]
    C:\Program Files (x86)\AviSynth+\plugins64+\dfttest.dll  [1.9.7.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\DirectShowSource.dll  [2023-07-15]
    C:\Program Files (x86)\AviSynth+\plugins64+\ffms2.dll  [2020-08-22]
    C:\Program Files (x86)\AviSynth+\plugins64+\fft3dfilter.dll  [2.10.0.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\FredAverage_x64.dll  [0.3.0.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\GamMac.dll  [1.10.0.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\GamMatch_x64.dll  [0.5.0.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\ImageSeq.dll  [2023-07-15]
    C:\Program Files (x86)\AviSynth+\plugins64+\LSMASHSource.dll  [1129.0.1.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\masktools2.dll  [2.2.30.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\mvtools2.dll  [2.7.45.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\neo-fft3d.dll  [1.0.0.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\nnedi3.dll  [0.9.4.62]
    C:\Program Files (x86)\AviSynth+\plugins64+\RemoveDirt.dll  [0.9.3.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\RgTools.dll  [1.2.0.0]
    C:\Program Files (x86)\AviSynth+\plugins64+\Shibatch.dll  [2023-07-15]
    C:\Program Files (x86)\AviSynth+\plugins64+\TimeStretch.dll  [2023-07-15]
    C:\Program Files (x86)\AviSynth+\plugins64+\VDubFilter.dll  [2023-07-15]
    
    [Scripts (AVSI)]
    C:\Program Files (x86)\AviSynth+\plugins64+\colors_rgb.avsi  [2022-10-06]
    C:\Program Files (x86)\AviSynth+\plugins64+\Dehalo_alpha_MT2.avsi  [2023-09-05]
    C:\Program Files (x86)\AviSynth+\plugins64+\ExTools.avsi  [2023-09-05]
    C:\Program Files (x86)\AviSynth+\plugins64+\FineDehalo.avsi  [2023-09-05]
    C:\Program Files (x86)\AviSynth+\plugins64+\LSFmod.v1.9.avsi  [2023-08-31]
    C:\Program Files (x86)\AviSynth+\plugins64+\mt_expand_multi.avsi  [2023-09-05]
    C:\Program Files (x86)\AviSynth+\plugins64+\QTGMC.avsi  [2023-08-17]
    C:\Program Files (x86)\AviSynth+\plugins64+\santiag.avsi  [2023-08-25]
    C:\Program Files (x86)\AviSynth+\plugins64+\SMDegrain.avsi  [2023-09-05]
    C:\Program Files (x86)\AviSynth+\plugins64+\Stab.avsi  [2023-08-18]
    C:\Program Files (x86)\AviSynth+\plugins64+\TemporalDegrain-v2.6.6.avsi  [2023-08-31]
    C:\Program Files (x86)\AviSynth+\plugins64+\Zs_RF_Shared.avsi  [2023-08-17]
    
    [Uncategorized files]
    C:\Program Files (x86)\AviSynth+\plugins64+\colors_rgb.txt  [2022-10-06]
    C:\Program Files (x86)\AviSynth+\plugins64+\santiag.avs  [2023-08-18]
    
    
    
    [Plugin errors/warnings]
    ________________________________________________________________________________
    
    Function duplicates:
    
    "undefined" : "[InternalFunction]"
    "Undefined" : "C:\Program Files (x86)\AviSynth+\plugins64+\Zs_RF_Shared.avsi"
    
    ________________________________________________________________________________
  25. Yes, it looks like you have all the dependencies installed, so it may be a version issue. Using your script from post #20 (except with LWLibavVideoSource instead of AviSource, and the lossless QTGMC settings):

    Code:
    LWLibavVideoSource("NORWAY RAW SHORT.avi") 
    AssumeTFF()
    BilinearResize(480, 576) # smaller frame reduces noise and some of the horizontal time base errors
    QTGMC(TR0=2, TR1=2, TR2=1, Rep0=1, Rep1=0, Rep2=4,
      \ DCT=5, ThSCD1=300, ThSCD2=110,
      \ SourceMatch=3, Lossless=2, Sharpness=0.1, SLMode=0, Sbb=0, MatchPreset="slow",
      \ NoiseProcess=2, GrainRestore=0.0, NoiseRestore=0.4, NoisePreset="slow",
      \ StabilizeNoise=false, NoiseTR=0, NoiseDeint="bob")convertToYV12()
    FineDehalo(rx=2.5, ry=1.0) # reduce the worst VHS oversharpening halos
    SMDegrain(thsad=100, tr=2, PreFilter=4) # motion compensated temporal noise reduction
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=720, fheight=576) # restore the frame size
    convertToYUY2()
    prefetch(4)
    I get:
    Image Attached Files
  26. Member Skiller
    @ jagabo
    Excellent result, looks very good to me. Not over-sharpened, not too strong of a denoise, ugly halos on the flag poles pretty much gone.
  27. Thanks jagabo, the issue appears to be with AviSource(). LWLibavVideoSource doesn't produce the same artefacts. They are horizontal white lines, across four columns.

    Given that LWLibavVideoSource works, I am not fussed. But I will post up a sample tonight to serve as a reference for someone who stumbles across this thread in the future.
  28. Ok, here is the code with AviSource(), and attached is the broken clip. You can see the white lines. If anyone could explain what I am doing wrong, that would be appreciated.

    Code:
    AVISource("VH Clip.avi") 
    AssumeTFF()
    BilinearResize(480, 576) # smaller frame reduces noise and some of the horizontal time base errors
    QTGMC(TR0=2, TR1=2, TR2=1, Rep0=1, Rep1=0, Rep2=4,
      \ DCT=5, ThSCD1=300, ThSCD2=110,
      \ SourceMatch=3, Lossless=2, Sharpness=0.1, SLMode=0, Sbb=0, MatchPreset="slow",
      \ NoiseProcess=2, GrainRestore=0.0, NoiseRestore=0.4, NoisePreset="slow",
      \ StabilizeNoise=false, NoiseTR=0, NoiseDeint="bob")convertToYV12()
    FineDehalo(rx=2.5, ry=1.0) # reduce the worst VHS oversharpening halos
    SMDegrain(thsad=100, tr=2, PreFilter=4) # motion compensated temporal noise reduction
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=720, fheight=576) # restore the frame size
    convertToYUY2()
    prefetch(4)
    It works when I use this instead of avisource()

    Code:
    a = LWlibavAudioSource("Median.avi") # get the audio as the named stream "a"
    v = LWlibavVideoSource("Median.avi") # get the video as the named stream "v"
    AudioDub(v, a) # join video and audio streams together into a stream called "last"

    I believe I have the Huffyuv VFW installed.
    Image
    [Attachment 73783 - Click to enlarge]


    Here is the media info for the file I ran the script on
    Image
    [Attachment 73784 - Click to enlarge]


    I have also re-attached the raw clip for reference.

    I am concerned that something is broken and I have no idea what it is.
    Image Attached Files
  29. I can't reproduce your AviSource() issue. Works fine here. I suspect there must be something wrong with your installation.
    I see however a glitch at frame 6, (7) in your capture (which is independent of the source filter).

    Edit:
    And double-check your script: ConvertToYV12() should be on a new line IMO. It is not part of the QTGMC settings.
    (Maybe it's cosmetics only as Avisynth does not complain ....)
    [Attachment 73787 - VH Clip RAW000006.png]

    Last edited by Sharc; 14th Sep 2023 at 03:42.
  30. Originally Posted by Sharc View Post
    I can't reproduce your AviSource() issue. Works fine here. I suspect there must be something wrong with your installation.
    I see however a glitch at frame 6, (7) in your capture (which is independent of the source filter).

    Edit:
    And double-check your script: ConvertToYV12() should be on a new line IMO. It is not part of the QTGMC settings.
    (Maybe it's cosmetics only as Avisynth does not complain ....)

    Ok thanks Sharc, I think I might do a complete re-install of AviSynth+ and see what happens.

    Maybe I need to remove and re-install Huffyuv as well.

    Anything else you'd recommend?


