VideoHelp Forum

  1. Hi... I have this video with a sort of windy noise (like mottles of color in the black areas). How can I correct this noise, or these mottles of color?

    https://drive.google.com/file/d/1IbAyvcMw_aNt9gX-PtQyhWg6ebvU0fLb/view?usp=sharing

    Thanks.
  2. "like a mottles of color, in the black" -> applying some strong deblocking to the dark part of the image should help.
    users currently on my ignore list: deadrats, Stears555
  3. No, the deblock doesn't help, but I probably found the culprit. I opened the file in Premiere and, with Lumetri, set Exposure to +100 and Contrast to -100, and it looks like this is a chrominance noise problem. Now can you please suggest the best denoiser and settings for chrominance noise only? Tell me if my hypothesis is wrong, thanks.

    I have the latest version of Neat Video. Can you suggest something better for chrominance noise that I can use with StaxRip, or without using Premiere, or is this the best denoiser?

    This is a video that I deinterlaced with QTGMC (after Checkmate). If you can suggest something I can use in the same process as QTGMC, or a QTGMC option for chrominance noise, let me know, thanks.
    Last edited by salvo00786; 29th May 2023 at 12:07.
  4. Those are posterization artifacts. But it's not clear to me whether that is the source you are starting with or the result of your encoding. If the latter, you just need to encode with a higher bitrate and bit depth. If the former, it's hard to fix, but you need to work at a higher bit depth, apply a debanding filter, a temporal noise reduction filter, and dithering. Then use sufficient bitrate when you encode.

    And yes, lowering the black level can crush that noise away (along with other dark details).
    Last edited by jagabo; 29th May 2023 at 12:31.
  5. This is the original. In this image, with Exposure at the maximum value and Contrast at the minimum value, you can see the problems:
    [Attachment 71327]


    I tried DeRainbow and ChubbyRain, but these don't resolve the problem.

    Can you please explain temporal noise reduction filters and dithering, and which filters and settings you used?

    I know that QTGMC can denoise, but can QTGMC handle chrominance noise, and how do I use it for that?
    [Attachment 71326 - original.jpg]

    Last edited by salvo00786; 29th May 2023 at 13:40.
  6. Originally Posted by salvo00786 View Post
    Please can you explain me temporal noise reduction filter, and dithering and what filters and what settings you used?
    In my earlier post I used something like:

    Code:
    LWLibavVideoSource("wind noise.mkv", cache=false, prefer_hw=2) 
    GradFun2dbmod() # or GradFun3()
    ConvertBits(10)
    SMDegrain(thsad=500, tr=3, PreFilter=4)
    This is even better:
    Code:
    LWLibavVideoSource("wind noise.mkv", cache=false, prefer_hw=2) 
    
    BilinearResize(768,576) # correct aspect ratio
    GradFun2dbmod() # or GradFun3(), reduce posterization
    ColorYUV(gamma_y=-50, opt="coring") # hide some of the dark noise
    ConvertBits(10)
    
    l = last
    c = SMDegrain(thsad=500, tr=5, PreFilter=4, plane=3) # heavy temporal filter the chroma
    MergeChroma(l, c) # keep the earlier luma, use the filtered chroma
    A sample processed with the latter script is attached.

    Originally Posted by salvo00786 View Post
    I know that with QTGMC It's possible to denoise, but can QTGMC handle chrominance noise and how to use it?
    QTGMC's denoising won't help much with this problem. But you can use it with something like QTGMC(FPSDivisor=2, EZDenoise=2.0, DenoiseMC=true) and see for yourself. Higher EZDenoise values will give more noise reduction, but you will lose lots of detail too.
  7. No, the deblock doesn't help,...
    Worked here, to illustrate what I had in mind:

    Original:

    Retinex: (to make problem visible)

    Masked DPIR (luma <= 30) + Retinex:

    Masked DPIR (luma <= 30) + Flash3kDB + Retinex:

    Vapoursynth script used: https://pastebin.com/jT3FwZBa
    (all used filters are available for Avisynth too)

    Cu Selur
  8. No, your solution is different, more elaborate. I tried a simple Deblock_QED. When you told me to use a deblocker, I used that, because f3kdb is a debanding filter; for this reason I hadn't tried it.

    1) Curiosity: why did you use f3kdb instead of neo_f3kdb? Isn't neo_f3kdb better than f3kdb? Why output_depth=-1?
    Is it possible to use these in StaxRip? I know that I can use f3kdb, but how can I obtain this "Masked DPIR (luma <= 30)"?

    2) Why did you correct the aspect ratio in a previous post? How can you tell if a video has the wrong aspect ratio?

    In the meantime I deinterlaced again from the start using EZDenoise (and the result is much better than before), and I am trying again from the start with a light denoise using KNLMeansCL instead of EZDenoise, for comparison purposes.

    3) What is the correct use of MergeChroma?
    For example I use this Checkmate command

    chroma = checkmate(thr=12,max=25,tthr2=25)
    luma = checkmate(thr=12,max=25,tthr2=0)

    The correct command at the end will be:

    fixed = luma.MergeChroma(chroma)
    or
    fixed = MergeChroma(luma,chroma)

    4) If I convert to 10-bit HEVC/H.265, is it necessary to use this command, ConvertBits(10), or not?

    5) I don't know why SMDegrain works if I set PreFilter to 1-3 or 5-8, but gives an error if I set PreFilter=4. Can you help me with this?
    Ok, it works with old releases of SMDegrain. With the latest, 4.4.0d, it doesn't. I will use the second-to-last release.
    Last edited by salvo00786; 30th May 2023 at 16:02.
  9. Latest attempt with neo_f3kdb. All filtering and encoding is now in 10 bits.

    Code:
    LWLibavVideoSource("wind noise.mkv") 
    
    ConvertBits(10)
    BilinearResize(768,576)
    ColorYUV(gamma_y=-50, opt="coring") # hide noise in the dark
    Neo_f3kdb(dynamic_grain=true)
    
    l = last # or milder filtering like SMDegrain(thsad=100, tr=3, PreFilter=4, plane=0)
    c = SMDegrain(thsad=500, tr=5, PreFilter=4, plane=3)
    MergeChroma(l, c)
    Last edited by jagabo; 30th May 2023 at 23:02.
  10. Ok, thanks jagabo, but please answer these questions (I'm learning a lot here):

    1) How can you tell if a video has the wrong aspect ratio?

    2) What is the correct use of MergeChroma?
    For example I use this Checkmate command

    chroma = checkmate(thr=12,max=25,tthr2=25)
    luma = checkmate(thr=12,max=25,tthr2=0)

    The correct command at the end will be:
    fixed = luma.MergeChroma(chroma)
    or
    fixed = MergeChroma(luma,chroma)

    3) If I use your script and encode to 10-bit HEVC/H.265 or a 10-bit ProRes MOV, is it necessary to use ConvertBits(10) or not?
  11. Originally Posted by salvo00786 View Post
    1) how can you tell if a video has a wrong aspect ratio?
    Look at it. That video is obviously a ~4:3 source stretched horizontally to 16:9. The best method is to find something that you know should be a perfect circle or square and measure it on-screen.
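The stretch factor can also be checked with a little arithmetic: a 4:3 picture displayed at 16:9 is stretched horizontally by (16/9)/(4/3) = 4/3, so anything that should be a circle measures about 33% wider than tall. A minimal sketch of that arithmetic (the DAR values are just the ones discussed above):

```python
from fractions import Fraction

def stretch_factor(displayed_dar: Fraction, correct_dar: Fraction) -> Fraction:
    """Horizontal stretch applied when a picture with correct_dar
    is shown at displayed_dar."""
    return displayed_dar / correct_dar

# 4:3 material displayed as 16:9 -> circles come out 4/3 wider than tall
f = stretch_factor(Fraction(16, 9), Fraction(4, 3))
print(f)  # 4/3

# undoing it: at 576 lines, a 4:3 square-pixel frame is 768 wide,
# which is why the scripts above use BilinearResize(768,576)
width = round(576 * Fraction(4, 3))
print(width)  # 768
```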

    Originally Posted by salvo00786 View Post
    2) What is the correct use of MergeChroma?
    For example I use this Checkmate command

    chroma = checkmate(thr=12,max=25,tthr2=25)
    luma = checkmate(thr=12,max=25,tthr2=0)

    The correct command at the end will be:
    fixed = luma.MergeChroma(chroma)
    or
    fixed = MergeChroma(luma,chroma)
    Those are both correct; they do exactly the same thing. The latter is arguably a little clearer (as in easier to understand). In the first case you are piping luma into MergeChroma and chroma is being merged with it. In the second case you are explicitly merging luma and chroma.

    http://avisynth.nl/index.php/MergeChroma

    Originally Posted by salvo00786 View Post
    3) If I use your script and I encode it in a hevc 10bit h.265 or in a prores 10bit mov, it's necessary to use ConvertBits(10) or not?
    You will get better results with all the processing and encoding in 10 bits. But 10 bit encoding will give smoother gradients even with 8 bit input (this can depend on the playback device).
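The benefit of working in 10 bits is easy to see numerically: a straight 8-to-10-bit conversion multiplies each value by 4 (a left shift by 2), so between any two adjacent 8-bit levels there are three intermediate codes that debanding filters and dithering can use to smooth gradients. A toy sketch of that arithmetic, not of any particular filter:

```python
def to_10bit(y8: int) -> int:
    """Straight 8 -> 10 bit conversion: multiply by 4 (left shift by 2)."""
    return y8 << 2

# Two adjacent 8-bit levels are 4 codes apart in 10 bits
a, b = to_10bit(16), to_10bit(17)
print(b - a)                  # 4
print(list(range(a + 1, b)))  # [65, 66, 67] -- levels unavailable at 8 bits
```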
  12.
    1) how can you tell if a video has a wrong aspect ratio?
    16:9 on the left, 4:3 (768x576) on the right. Check the heads.

    [Attachment 71366]
  13. Ok, thanks for all the answers.
    Now a difficult question.
    In some shots, the whole video has sort of colored lines, like from a faulty camera, I don't know. Is it possible to correct only these parts of the video?
    Probably I have to split out all the parts that have these problems and correct the colors, right? What is the best method to correct these colors?

    Example:
    [Attachment 71380]


    In this example the lines are visible, but in some other parts of the video they are even more visible.
    Last edited by salvo00786; 31st May 2023 at 10:41.
  14. Do the light bands move up and down? Are they stationary? You'll need to provide a video sample.
  15. It looks like there was something wrong with one of the cameras used to shoot the concert. The idea here, since the noise is fairly consistent across the width of the frame and across frames, is to take a piece of one frame where the background should have been all black and subtract that from all the frames.

    Code:
    LWLibavVideoSource("C:\Users\John\Desktop\wind\lines\Lines 1.mkv", cache=false, prefer_hw=2) 
    BilinearResize(768,576)
    patch = Trim(77,77).Crop(0,0,256,-0).Spline36Resize(width, height).ColorYUV(off_y=-16)
    Overlay(last, patch, mode="Subtract", opacity=1.0).ColorYUV(off_y=2)
    This works pretty well for frames 49 to 283, though not as well at the end of that range as at the beginning. In other parts of the video it creates dark bands. You would have to limit the processing to only that section with Trim():

    Code:
    part1 = Trim(0,48)
    part2 = Trim(49,283).filtered_as_above() # pseudocode: apply the subtraction filter here
    part3 = Trim(284,0)
    
    part1++part2++part3
    The attached video is filtered with the first script above. Original on the left, filtered on the right. You can see how the subtraction adversely affects the shots without the light bands.

    Unfortunately, the banding isn't consistent enough to use this exact same patch for other parts of the video.
  16. I know; as I told you, probably if I split out all the parts with this problem I can correct it. But thanks anyway for the explanation of how to do it. Can you please explain this line in detail?

    patch = Trim(77,77).Crop(0,0,256,-0).Spline36Resize(width, height).ColorYUV(off_y=-16)
    Overlay(last, patch, mode="Subtract", opacity=1.0).ColorYUV(off_y=2)

    Why Spline36Resize after BilinearResize, and why Trim(77,77) and Crop(0,0,256,-0)? What have you done here with this line?

    For example, if I want to process the entire file, I can use the two files, one original and one modified, and mix the two in Premiere using the scene analyzer.

    MMM... I tried your exact line, but something doesn't add up.

    https://drive.google.com/file/d/127gilnHnNSiA8FS0JInv4InIJdCLsg55/view?usp=sharing
    Last edited by salvo00786; 1st Jun 2023 at 06:40.
  17. Originally Posted by salvo00786 View Post
    MMM... I tried your exact line, but something doesn't add up.
    After reading the rest of this post you will understand why.

    Originally Posted by salvo00786 View Post
    Can you please explain this line in detail?

    patch = Trim(77,77).Crop(0,0,256,-0).Spline36Resize(width, height).ColorYUV(off_y=-16)
    Overlay(last, patch, mode="Subtract", opacity=1.0).ColorYUV(off_y=2)
    Since the light bands are fairly consistent within clip 1 (the same position, the same strength, across the entire frame) the idea is to subtract an image of just the light bands from each frame of video. Unfortunately, there's no frame of the video that has just the light bands on a black background. But there are some frames where a large portion of the frame is black with only the light bands. One such frame in the first clip is #77:

    [Attachment 71414]


    You can see that about 1/3 of the frame at the left side is black with the light bands. First I select only frame 77 with Trim(77,77). Then I crop it down to just the 256 pixels at the left with Crop(0,0,256,-0):

    [Attachment 71415]


    But I need an image that is the full width of the frame for the subtraction step. So I use Spline36Resize() to make it full width:

    [Attachment 71416]


    Finally, I subtract 16 from all luma values because the video is limited range (black is Y=16, not 0). I now have an image of just the light bars on a black (Y=0) background.

    Then I subtract that image from each frame of the video using Overlay(last, patch, mode="Subtract", opacity=1.0). That leaves the image a little too dark so I bumped the Y values up a bit with ColorYUV(off_y=2). Here's frame 77 with the light bands subtracted:

    [Attachment 71417]


    The result is pretty good. It's not as good for some later banded frames because the banding is a little different there. And it fails in parts of the clip where there is no banding -- it creates dark bands where there were no problems before. So obviously, you need to apply the filter selectively.

    The exact same code doesn't work for the second clip because frame 77 of that video isn't black with just the light bands at the left edge. You're subtracting random image data from the video. Also, the bands are in different locations in that clip. So you need to find a similar frame with the bands over an otherwise black part of the frame and trim/crop/resize as necessary.
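Per pixel, the chain above boils down to clamped integer arithmetic: subtract the band-only patch (already shifted down by 16 so its black is 0), then lift everything by 2. A toy sketch of that per-pixel math in plain Python (not the actual filter internals; the saturation to 0..255 is an assumption about how Overlay clamps):

```python
def subtract_patch(y: int, patch_y: int) -> int:
    """Toy model of the per-pixel chain:
    Overlay(..., mode="Subtract") removes the band-only patch,
    then ColorYUV(off_y=2) lifts the result slightly."""
    out = max(0, min(255, y - patch_y))   # Overlay "Subtract", clamped
    return max(0, min(255, out + 2))      # ColorYUV(off_y=2)

# a band pixel: source Y=40, band-only patch Y=24 (after the -16 shift)
print(subtract_patch(40, 24))  # 18, close to limited-range black (16)

# a pixel with no band under it is only lifted by the +2 offset
print(subtract_patch(40, 0))   # 42
```

This also shows why the fix must be applied selectively: where the patch is nonzero but the frame has no band, real picture detail gets darkened.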

    Originally Posted by salvo00786 View Post
    For example, if I want to process the entire file, I can use the two files, one original and one modified and mix the two In Premiere using the scene analyzer.
    But given that the light bands are different throughout the video you are going to need many different videos. There is a function in AviSynth that is good for this, ReplaceFramesSimple(), included in the RemapFrames package.

    http://avisynth.nl/index.php?title=RemapFrames&redirect=no#ReplaceFramesSimple

    It lets you replace a range of frames in one clip with the same range from another clip. You'll end up with a sequence like:

    Code:
    # after building all the "fix" videos as described above
    
    ReplaceFramesSimple(source, fix1, mappings="[a b]")
    ReplaceFramesSimple(last, fix2, mappings="[c d]")
    ReplaceFramesSimple(last, fix3, mappings="[e f]")
    etc.
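ReplaceFramesSimple just splices frame ranges by index; its semantics can be sketched with plain Python lists (frame labels stand in for frames here, and the inclusive "[a b]" mapping is modeled; the real filter is the AviSynth one linked above):

```python
def replace_frames_simple(source, fix, a, b):
    """Replace frames a..b (inclusive) of source with the SAME
    range taken from fix, like mappings="[a b]"."""
    out = list(source)
    out[a:b + 1] = fix[a:b + 1]
    return out

src = ["src%d" % i for i in range(8)]
fix = ["fix%d" % i for i in range(8)]
print(replace_frames_simple(src, fix, 3, 5))
# frames 3, 4, 5 come from the fix clip, the rest from the source
```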
  18. Ok, thanks, your explanation is very simple and understandable. Thanks for your time.
  19. Hi... Are neo_f3kdb and SMDegrain CPU-only plugins? I ask because I use them with StaxRip, and 20 minutes of video at 768x576 takes something like three or four hours.
  20. Originally Posted by salvo00786 View Post
    Hi... Are neo_f3kdb and SMDegrain CPU-only plugins?
    neo_f3kdb, yes. SMDegrain has some ability to use a GPU. See the docs.

    Originally Posted by salvo00786 View Post
    Because I use them with StaxRip, and 20 minutes of video at 768x576 takes something like three or four hours.
    Are you using a multithreaded build of AviSynth? If so, have you enabled multithreading in the script? That will help with filtering speed.
  21. Please help me and tell me if I am wrong.

    1) I probably have to insert this: "SetFilterMTMode("DEFAULT_MT_MODE", 2)", but where in the script? Do I have to write it before each filter?

    Code:
    ConvertBits(10)
    ColorYUV(gamma_y=-10, opt="coring")
    Neo_f3kdb(dynamic_grain=true)
    l = SMDegrain(thsad=100, tr=3, PreFilter=4, plane=0)
    c = SMDegrain(thsad=500, tr=5, PreFilter=4, plane=3)
    MergeChroma(l, c)
    2) At the end do I have to add Prefetch(6)? 6 because I have a 4-core, 8-thread CPU, or can I go over 8? Same question for SetFilterMTMode: do I have to add it after each filter?

    Something like this?

    Code:
    SetFilterMTMode("DEFAULT_MT_MODE", 2)
    ConvertBits(10)
    Prefetch(6)
    
    SetFilterMTMode("DEFAULT_MT_MODE", 2)
    ColorYUV(gamma_y=-10, opt="coring")
    Prefetch(6)
    
    SetFilterMTMode("DEFAULT_MT_MODE", 2)
    Neo_f3kdb(dynamic_grain=true)
    Prefetch(6)
    
    SetFilterMTMode("DEFAULT_MT_MODE", 2)
    l = SMDegrain(thsad=100, tr=3, PreFilter=4, plane=0)
    c = SMDegrain(thsad=500, tr=5, PreFilter=4, plane=3)
    MergeChroma(l, c)
    Prefetch(6)
    3) Is 2 the correct number in SetFilterMTMode("DEFAULT_MT_MODE", 2)?

    4) You told me SMDegrain has some ability to use a GPU, but I can't find this in the docs. PreFilter=4 uses KNLMeansCL (a GPU denoiser). So I don't know if SMDegrain uses my GPU, but when I use StaxRip, my GPU usage in Task Manager is 0. Tell me what you know.

    5) On this page, http://avisynth.nl/index.php/SetFilterMTMode#Enabling_MT, there is an OnCUDA option. What is that function for?

    6) I read on a forum some time ago that some VapourSynth filters are much faster than the same AviSynth filters. Is that true or false?

    7) Why, at the start of your scripts, do you use cache=false, prefer_hw=2 after the file name? Is it relevant?

    8) Which is better at deinterlacing, Yadif or NNEDI? Is QTGMC much better than these two?

    9) What is the best denoiser I can use in StaxRip other than QTGMC, and what is the best script for that denoiser without too much denoising, so as not to lose details?
    Last edited by salvo00786; 4th Jun 2023 at 17:31.
  22. Hi... After some testing, here are my conclusions about my questions.

    Just for info, I was trying this script:

    Code:
    chroma = checkmate(thr=12,max=25,tthr2=25)
    luma   = checkmate(thr=12,max=25,tthr2=0)
    MergeChroma(luma, chroma) # luma clip first; left as "last" so QTGMC below filters it
    SetFilterMTMode("DEFAULT_MT_MODE", 2)
    QTGMC(preset="Slower", InputType=0, sourceMatch=3, Lossless=2, sharpness=0.2, NoisePreset="Slower", Denoiser="dfttest", ChromaMotion=true, ChromaNoise=true, EZDenoise=2.0, DenoiseMC=true, NoiseDeint="Generate", StabilizeNoise=true, ediThreads=8)
    Prefetch(8)
    Normally, if I use a plain Slower QTGMC, my fps are about 40.
    With the script above, the fps dropped to 2-3. After some testing I found that ChromaMotion was what dropped my fps. With all the above parameters but without ChromaMotion, the fps are about 28-30.


    1/2/3) In the StaxRip menu, Options/Filter, I wrote this: SetFilterMTMode("DEFAULT_MT_MODE", 2). With this option, multithreading is set by default for all my filters. At the end of each script I write Prefetch(8).

    6) I have tried VapourSynth QTGMC. AviSynth QTGMC is about half the speed, but VapourSynth doesn't have other filters like Checkmate, which for me is very important. For this reason I have to continue with AviSynth.

    8) NNEDI is much better than Yadif, but not better than QTGMC, which is on another level (these are my conclusions).

    9) After many tests, I found that this is probably the best QTGMC script for my video (I like the result: no artifacts and good denoising with no detail lost):
    QTGMC(preset="Slower", sharpness=0.2, ediThreads=8).QTGMC(Preset="Slower", InputType=1, sharpness=0.2, ediThreads=8)
    Running a double pass of QTGMC is sometimes very good.

    Please reply to these questions when you have time:

    4) You told me SMDegrain has some ability to use a GPU, but I can't find this in the docs. PreFilter=4 uses KNLMeansCL (a GPU denoiser). So I don't know if SMDegrain uses my GPU, but when I use StaxRip, my GPU usage in Task Manager is 0. Tell me what you know.

    5) On this page, http://avisynth.nl/index.php/SetFilterMTMode#Enabling_MT, there is an OnCUDA option. What is that function for?

    7) Why, at the start of your scripts, do you use cache=false, prefer_hw=2 after the file name? Is it relevant?

    What do you think about a double pass of QTGMC? I liked the result, but do you think it is too destructive?

    I can probably tweak the script some more with a second Medium pass instead of Slower, but I haven't tried yet.
    Last edited by salvo00786; 7th Jun 2023 at 05:44.
  23. Originally Posted by salvo00786 View Post
    4) You told me SMDegrain has some ability to use a GPU, but I can't find this in the docs. PreFilter=4 uses KNLMeansCL (a GPU denoiser). So I don't know if SMDegrain uses my GPU, but when I use StaxRip, my GPU usage in Task Manager is 0. Tell me what you know.
    That's all I'm aware of -- the prefilter for the clip that is used to generate motion vectors. Removing noise can make it easier to find motion vectors. The noise reduced clip does not appear in the final output.

    Originally Posted by salvo00786 View Post
    5) On this page, http://avisynth.nl/index.php/SetFilterMTMode#Enabling_MT, there is an OnCUDA option. What is that function for?
    I don't use CUDA. I don't know what that does.

    Originally Posted by salvo00786 View Post
    Why, at the start of your scripts, do you use cache=false, prefer_hw=2 after the file name? Is it relevant?
    I have several batch files in my Send To folder so I can right-click on a video file and select Send To -> (a batch file that builds a default AVS script for me). It's easier to remove arguments than to add them when I start using the script, so my LWLibavVideoSource batch file creates a script with those arguments. cache=false prevents it from creating an index file (the index is kept in memory only); that reduces clutter but increases load time with large files. prefer_hw=2 tells the filter to use Intel's QSV decoder when possible. That can speed up or slow down processing depending on the situation.

    Originally Posted by salvo00786 View Post
    What do you think about a double pass of QTGMC? I liked the result, but do you think it is too destructive?
    I didn't look that closely at your video with that double QTGMC call. But I sometimes use multiple QTGMC calls for processing. You might be able to save some processing time by including PrevGlobals="Reuse" in the second call.

    http://avisynth.nl/index.php/QTGMC#Multiple_QTGMC_Calls

    If you think the video looks better with two calls and can afford the processing time go ahead and use two.
  24. If I want to use good antialiasing, what can I use? Is QTGMC good as an antialiaser, or can you give me something better?
  25. AAA(), Santiag().

    A trick with vertical edges and QTGMC()

    TurnRight().QTGMC(InputType=2).TurnLeft()
  26. Ok, thank you. I have seen that AAA is very old; Santiag, which is more recent, is probably better. I will try the QTGMC trick too, thanks. But can you please give me an example of vertical edges in a movie?
  27. Originally Posted by salvo00786 View Post


    6) I have tried Vapoursynth Qtgmc. Avisynth QTGMC is like half of the speed, but Vapoursynth doesn't have other filters like checkmate, that for me is very important. For this reason I have to continue with Avisynth.

    At the end of each script I write Prefetch(8)

    Half speed? The speed difference shouldn't be that much. Maybe you aren't using the same settings, or maybe avs Prefetch wasn't set up correctly - that's part of the avs threading problem: you might need a different number for different scripts. Too high and you get thrashing, and it's slower; too low and you waste resources, and it's slower. vpy threading is more elegant in that respect.


    Checkmate port to vapoursynth
    https://github.com/dnjulek/vapoursynth-checkmate

    What other filters are missing that you need?

    Many avs .dll filters can be loaded into vapoursynth using core.avs.LoadPlugin , and the output of avs scripts can actually be imported into vapoursynth using AVISource
  28. jagabo, why did you use:
    TurnRight().QTGMC(InputType=2).TurnLeft()

    Why InputType=2?

    Is this function not good with InputType=0, 1 or 3?

    What are the differences between InputType 2 and 3?

    I know that you prefer a plain QTGMC for a first interlaced pass, but is it better to use SourceMatch=3 and Lossless=2 in the second pass of QTGMC on progressive material? A script example like this, maybe:
    TurnRight().QTGMC(InputType=3, SourceMatch=3, Lossless=2).TurnLeft()

    What do you think?
    Last edited by salvo00786; 10th Jun 2023 at 07:17.
  29. Originally Posted by salvo00786 View Post
    Jagabo, why you used:
    TurnRight().QTGMC(InputType=2).TurnLeft()
    Poor deinterlacing produces aliasing on near horizontal edges. QTGMC cleans up those aliasing artifacts. Video can sometimes have aliasing on vertical edges. Those aren't created by deinterlacing but rather by bad scaling or camera problems. QTGMC doesn't fix that. To trick it into doing so you can rotate the frame 90 degrees, call QTGMC, then rotate back to normal.
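    The rotate-filter-rotate trick works because a filter that only cleans up aliasing along rows will, after a 90-degree turn, effectively be cleaning up columns. The idea can be sketched with a plain transpose in Python (row_filter here is a toy stand-in for the antialiaser, not QTGMC itself; a real 90-degree turn also flips, which doesn't matter for this demonstration):

```python
def transpose(frame):
    """Swap rows and columns (a 90-degree turn up to a flip)."""
    return [list(col) for col in zip(*frame)]

def row_filter(frame):
    """Toy stand-in antialiaser: 1-2-1 smoothing along each row only."""
    out = []
    for row in frame:
        n = len(row)
        out.append([(row[max(i - 1, 0)] + 2 * row[i] + row[min(i + 1, n - 1)]) // 4
                    for i in range(n)])
    return out

# transpose, filter rows, transpose back == filtering along columns
frame = [[0, 0, 0], [0, 100, 0], [0, 0, 0]]
print(transpose(row_filter(transpose(frame))))
# [[0, 25, 0], [0, 50, 0], [0, 25, 0]] -- the smoothing ran vertically
```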

    Originally Posted by salvo00786 View Post
    Why InputType=2?
    Because I had to put a number there. Use whatever setting works best for a particular video.

    Originally Posted by salvo00786 View Post
    What are the differences between InputType 2 and 3?
    Read the manual. http://avisynth.nl/index.php/QTGMC#Progressive_Input


