VideoHelp Forum
  1. Can dither() be used directly, without depending on another filter?

    For example, I always use it this way with dfttest:

    dfttest(dither=3)

    But I would like to use it independently. Is that possible?

    dither() ????

    Thanks so much for your help

    *Avisynth x64
  2. Cornucopia (Member since Oct 2001; Deep in the Heart of Texas)
    If you are not doing anything else to the file, then the addition of dither after the fact is pointless. Use it when applying modifications (where interim higher precision math will again downconvert back to lower precision storage) or at end when downrezzing to lower bitdepth permanently. That's its job and that's when it is needed.
    Otherwise, all you are doing is adding noise.

    Scott
  3. Originally Posted by Cornucopia View Post
    If you are not doing anything else to the file, then the addition of dither after the fact is pointless. Use it when applying modifications (where interim higher precision math will again downconvert back to lower precision storage) or at end when downrezzing to lower bitdepth permanently. That's its job and that's when it is needed.
    Otherwise, all you are doing is adding noise.

    Scott
    I know, and thank you for your comment, but I'm looking for an answer to my specific question.
  4. The dither function in dfttest is built into it and not callable on its own. You can use AddGrain() or AddGrainC() to add random noise to existing video. The random/ordered dither used in GradFun3 may be available externally, see DitherTools.

    http://avisynth.nl/index.php/Dither_tools

    Maybe SmoothAdjust too.

    https://forum.doom9.org/showthread.php?t=154971
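
    If all you want is the grain with no denoising at all, a minimal standalone sketch (using the AddGrain plugin; the strength values here are illustrative guesses, not tuned for any particular source) would be:

    ```avisynth
    # AddGrainC adds dynamic grain on its own, with no denoiser attached.
    # var = luma grain strength, uvar = chroma grain strength (illustrative values).
    AddGrainC(var=1.5, uvar=0.5)
    ```

    Note this is random noise, not the error-diffusion dither dfttest applies internally, so the look is similar but not identical.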
    Last edited by jagabo; 13th Jul 2019 at 20:57.
  5. Originally Posted by jagabo View Post
    The dither function in dfttest is built into it and not callable on its own. You can use AddGrain() or AddGrainC() to add random noise to existing video. The random/ordered dither used in GradFun3 may be available externally, see DitherTools.

    http://avisynth.nl/index.php/Dither_tools

    Maybe SmoothAdjust too.

    https://forum.doom9.org/showthread.php?t=154971
    Hello,

    https://www79.zippyshare.com/v/Ls2zGjid/file.html

    https://drive.google.com/open?id=1M1ZNiH_nq14JEKs_B7i-NTzJ4KUt7033

    Could you help me with this video, please? It has a lot of macroblocking and banding; even after applying a good denoiser, the banding is still present in the image, and I cannot eliminate it completely.

    Could you please help me clean up this video as much as possible?

    The dfttest dither is very good, and that's why I would like to apply it at the end, to hide the banding that remains after the denoiser.

    None of the other anti-banding filters generate a good random noise sequence (Flash3kyuu_deband, GradFun2db, GradFun2DBmod, GradFun3, f3kdb, etc.).

    You can use any deinterlacing filter; I'm using QTGMC with many options modified, and it gives me good results, but it is not perfect...

    Code:
    MPEG2Source("C:\Users\Carlos\Desktop\Juan Gabriel & Rocio Durcal.d2v", cpu=0)
    Spline64Resize(632, 480, src_left=4.0, src_top=2.0, src_width=-4.0, src_height=0.0)

    QTGMC(ShowSettings=False, Preset="Placebo", EdiMode="EEDI3", EdiMaxD=8, ShutterBlur=3, ShutterAngleSrc=180, ShutterAngleOut=180, SBlurLimit=8, EZDenoise=5.0, EZKeepGrain=0.0, NoisePreset="Slower", NoiseProcess=1, ChromaNoise=True, Denoiser="fft3dfilter", DenoiseMC=True, NoiseTR=2, Sigma=8, ShowNoise=False, GrainRestore=0.0, NoiseRestore=0.0, NoiseDeint="Generate", StabilizeNoise=True)
    SelectEven()

    Prefetch(16)

    Surely there is some other way to decently restore this DVD,

    but I know you have a lot of experience and could help me do much better.

    Thank you for your help with this restoration.


    Last edited by zerowalk; 13th Jul 2019 at 22:10.
  6. What dfttest settings did you use?
  7. Originally Posted by jagabo View Post
    What dfttest settings did you use?
    In this case I didn't use dfttest, but when I do, this is the configuration. I only want the dither=3 result (the noise):

    Code:
    dfttest(threads=16, sigma=16, tbsize=1, tmode=0, lsb_in=false, lsb=false, dither=3)

    The following f3kdb configuration also gives good results, but it is not as good as dfttest(dither=3):
    Code:
    f3kdb(range=15, Y=140, Cb=140, Cr=140, grainY=140, grainC=140, sample_mode=2, blur_first=True, dynamic_grain=True, opt=-1, mt=True, dither_algo=3, keep_tv_range=False, input_mode=0, input_depth=8, output_mode=0, output_depth=8)

    I think it's the best way to fight the banding that's present in the picture. Maybe I'm wrong and you have a better option.

    Thanks so much.
    Last edited by zerowalk; 13th Jul 2019 at 22:16.
  8. You can essentially disable dfttest's noise filtering by setting sigma to 0. Also, that clip is damaged pretty badly by QTGMC (the sparkle in the woman's dress mostly disappears). It's an odd mix of 24p and 30p material with hard pulldown. If you make it 24p the 30p sections will be jerky. If you make it 30p the 24p sections will be jerky. I'd probably use TFM() without TDecimate and leave it at 30p. Maybe something like:

    Code:
    Mpeg2Source("Juan Gabriel &  Rocio Durcal_track1_und.d2v", CPU2="ooooxx", Info=3) 
    Crop(8,0,-8,-0)
    ColorYUV(gain_y=16, off_y=-16)
    Deblock_qed_i(quant1=28, quant2=32)
    TFM() 
    #TDecimate() 
    vInverse()
    dfttest(threads=16, sigma=2, tbsize=1, tmode=0, lsb_in=false, lsb=false, dither=3)
    That uses Mpeg2Source to reduce the DCT ringing artifacts. vinverse cleans up some of the remaining light combing from compression artifacts. I left a little sigma in dfttest for very light denoising. You might replace dfttest with AddGrainC(var=1.0, uvar=1.0) -- though it uses random noise rather than error diffusion.
  9. Originally Posted by jagabo View Post
    You can essentially disable dfttest's noise filtering by setting sigma to 0. Also, that clip is damaged pretty badly by QTGMC (the sparkle in the woman's dress mostly disappears). It's an odd mix of 24p and 30p material with hard pulldown. If you make it 24p the 30p sections will be jerky. If you make it 30p the 24p sections will be jerky. I'd probably use TFM() without TDecimate and leave it at 30p. Maybe something like:

    Code:
    Mpeg2Source("Juan Gabriel &  Rocio Durcal_track1_und.d2v", CPU2="ooooxx", Info=3) 
    Crop(8,0,-8,-0)
    ColorYUV(gain_y=16, off_y=-16)
    Deblock_qed_i(quant1=28, quant2=32)
    TFM() 
    #TDecimate() 
    vInverse()
    dfttest(threads=16, sigma=2, tbsize=1, tmode=0, lsb_in=false, lsb=false, dither=3)
    That uses Mpeg2Source to reduce the DCT ringing artifacts. vinverse cleans up some of the remaining light combing from compression artifacts. I left a little sigma in dfttest for very light denoising. You might replace dfttest with AddGrainC(var=1.0, uvar=1.0) -- though it uses random noise rather than error diffusion.
    Is it very difficult to restore the video without losing much detail?
  10. It's always a balancing act between fixing the problems without further damaging the video.
  11. Oh I forgot to give you deblock_qed_i():

    Code:
    function Deblock_QED_i ( clip clp, int "quant1", int "quant2", int "aOff1", int "bOff1", int "aOff2", int "bOff2", int "uv" )
    {
        quant1 = default( quant1, 24 ) # Strength of block edge deblocking
        quant2 = default( quant2, 26 ) # Strength of block internal deblocking
    
        aOff1 = default( aOff1, 1 ) # halfway "sensitivity" and halfway a strength modifier for borders
        aOff2 = default( aOff2, 1 ) # halfway "sensitivity" and halfway a strength modifier for block interiors
        bOff1 = default( bOff1, 2 ) # "sensitivity to detect blocking" for borders
        bOff2 = default( bOff2, 2 ) # "sensitivity to detect blocking" for block interiors
    
        uv    = default( uv, 3 )    # u=3 -> use proposed method for chroma deblocking
                                    # u=2 -> no chroma deblocking at all (fastest method)
                                    # u=1|-1 -> directly use chroma debl. from the normal|strong deblock()
    
        last=clp
        par=getparity()
        SeparateFields().PointResize(width,height)
        Deblock_QED(last, quant1, quant2, aOff1, aOff2, bOff1, bOff2, uv)
        AssumeFrameBased()
        SeparateFields()
        Merge(SelectEven(),SelectOdd())
        par ? AssumeTFF() : AssumeBFF()
        Weave() 
    }
    Modified from: https://forum.videohelp.com/threads/361152-Correct-usage-of-Deblock_QED-for-Interlaced...ge#post2290489
  12. Originally Posted by jagabo View Post
    Oh I forgot to give you deblock_qed_i():

    Code:
    function Deblock_QED_i ( clip clp, int "quant1", int "quant2", int "aOff1", int "bOff1", int "aOff2", int "bOff2", int "uv" )
    {
        quant1 = default( quant1, 24 ) # Strength of block edge deblocking
        quant2 = default( quant2, 26 ) # Strength of block internal deblocking
    
        aOff1 = default( aOff1, 1 ) # halfway "sensitivity" and halfway a strength modifier for borders
        aOff2 = default( aOff2, 1 ) # halfway "sensitivity" and halfway a strength modifier for block interiors
        bOff1 = default( bOff1, 2 ) # "sensitivity to detect blocking" for borders
        bOff2 = default( bOff2, 2 ) # "sensitivity to detect blocking" for block interiors
    
        uv    = default( uv, 3 )    # u=3 -> use proposed method for chroma deblocking
                                    # u=2 -> no chroma deblocking at all (fastest method)
                                    # u=1|-1 -> directly use chroma debl. from the normal|strong deblock()
    
        last=clp
        par=getparity()
        SeparateFields().PointResize(width,height)
        Deblock_QED(last, quant1, quant2, aOff1, aOff2, bOff1, bOff2, uv)
        AssumeFrameBased()
        SeparateFields()
        Merge(SelectEven(),SelectOdd())
        par ? AssumeTFF() : AssumeBFF()
        Weave() 
    }
    Modified from: https://forum.videohelp.com/threads/361152-Correct-usage-of-Deblock_QED-for-Interlaced...ge#post2290489
    Thanks so much
  13. One last question: in your opinion, what is the best filter for deinterlacing?

    EEDI3 or NNEDI?

    Could you explain which is better in which cases? Thank you.
  14. QTGMC is usually best. But with this video its temporal filtering ends up erasing many of the sparkles in the woman's top. You have to try the different deinterlacers to see which is best for a particular source.

    But this video is telecined 24p and 30p, not real interlaced video. It should be field matched (TFM), not deinterlaced.
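
    To make the distinction concrete, a minimal sketch (the QTGMC preset is just an example):

    ```avisynth
    # Telecined film: match the fields back into the original progressive frames.
    TFM()                     # field matching -- recovers full vertical detail

    # True interlaced video would instead need interpolation, e.g.:
    # QTGMC(Preset="Slower")  # deinterlacing -- temporal filtering can soften fine detail
    ```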
  15. Originally Posted by jagabo View Post
    QTGMC is usually best. But with this video its temporal filtering ends up erasing many of the sparkles in the woman's top. You have to try the different deinterlacers to see which is best for a particular source.

    But this video is telecined 24p and 30p, not real interlaced video. It should be field matched (TFM), not deinterlaced.

    Is there any way that the grain generated can survive compression effectively?

    For example in f3kdb

    https://f3kdb.readthedocs.io/en/latest/usage.html#parameters

    dither_algo

    1: No dithering, LSB is truncated
    2: Ordered dithering
    3: Floyd-Steinberg dithering

    Notes:

    Visual quality of mode 3 is the best, but the debanded pixels may easily be destroyed by x264, you need to carefully tweak the settings to get better result.

    Mode 1 and mode 2 don’t look the best, but if you are encoding at low bitrate, they may be better choice since the debanded pixels is easier to survive encoding, mode 3 may look worse than 1/2 after encoding in this situation.

    (Thanks sneaker_ger @ doom9 for pointing this out!)

    This parameter is ignored if output_depth = 16.

    10bit x264 command-line example:

    avs2yuv -raw "script.avs" -o - | x264-10bit --demuxer raw --input-depth 16 --input-res 1280x720 --fps 24 --output "out.mp4" -

    Or compile x264 with the patch on https://gist.github.com/1117711, and specify the script directly:

    x264-10bit --input-depth 16 --output "out.mp4" script.avs
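
    As an illustration of the trade-off quoted above (the f3kdb values below are guesses, not tuned for this source):

    ```avisynth
    # Low-bitrate target: ordered dither (dither_algo=2) tends to survive x264
    # better than Floyd-Steinberg, per the f3kdb docs quoted above.
    f3kdb(range=15, Y=64, Cb=64, Cr=64, grainY=32, grainC=32, dynamic_grain=true, dither_algo=2)

    # 10-bit pipeline: output 16-bit instead (dither_algo is ignored at
    # output_depth=16) and let the high-bit-depth encode preserve the gradients.
    # f3kdb(output_mode=1, output_depth=16)
    ```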
  16. Originally Posted by zerowalk View Post

    Is there any way that the grain generated can survive compression effectively?
    Higher bitrate, and grain-retention settings (--tune grain or similar). If you have specific sections that require attention, you can use zones.
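
    For example, a sketch of an x264 command line (frame ranges and values are purely illustrative; each zone takes the form start,end,option, where the option is q=<integer> to force a quantizer or b=<float> as a bitrate multiplier):

    ```
    x264 --preset slow --crf 18 --tune grain --zones 1200,2400,b=1.5/5000,5800,q=16 -o out.mkv script.avs
    ```

    Here the first zone gets 1.5x the bitrate and the second is forced to QP 16, so grain-heavy sections can be given extra bits without raising the rate for the whole encode.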
  17. Originally Posted by poisondeathray View Post
    Originally Posted by zerowalk View Post

    Is there any way that the grain generated can survive compression effectively?
    Higher bitrate, and grain-retention settings (--tune grain or similar). If you have specific sections that require attention, you can use zones.
    Thanks so much for your help.

    Would you please give an example of how to use zones in an advanced way?


