VideoHelp Forum

  1. I'm running late.... I should leave for work soon, so just a quick reply.

    GradFun3 smooths the gradients but doesn't add enough noise to always prevent x264 from creating banding, at least with the default settings, and f3kdb() adds a little too much noise for my liking as a rule. I can't remember what combinations I tried, but for animation I somehow ended up using both of them, and it worked, so I do that.

    Higher thr/thrc values deband more, and also blur more. The defaults are 0.35.
    For "film" or "video" I'd try hard not to increase them too much, but for animation with flat blocks of color there's generally not much fine detail to blur, so you can get away with it.

    GradFun3(thr=1.0, thrc=1.0)
    f3kdb()

    GradFun3 is doing the debanding or smoothing, and f3kdb() adds more noise to prevent x264 creating banding again.
    You could possibly adjust settings and get the same result with just one of them, but I liked the result and I'm lazy, so when using both worked, I kept using both.
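The smooth-then-add-noise idea can be sketched in a few lines of Python (illustrative numbers only, not GradFun3's or f3kdb's actual internals): quantizing a gentle luma ramp collapses it into flat bands, while adding a little noise before quantizing (dithering/grain) trades the bands for grain.

```python
import random

def quantize(vals, step):
    # Snap each value to the nearest multiple of `step`
    # (a coarse stand-in for 8-bit quantization).
    return [step * round(v / step) for v in vals]

# A gentle luma ramp rising from 100 to 108 over 256 pixels.
ramp = [100 + 8 * i / 255 for i in range(256)]

# Straight quantization collapses the ramp into a few flat bands.
banded = quantize(ramp, 2)

# Dithering: add small noise first, so neighbouring pixels flip between
# adjacent levels instead of forming visible steps.
random.seed(0)
dithered = quantize([v + random.uniform(-1, 1) for v in ramp], 2)

print(len(set(banded)))                                    # distinct flat levels
print(sum(dithered[i] != dithered[i + 1] for i in range(255)))  # level changes
```

The banded version has only a handful of flat levels with four visible steps between them; the dithered version changes level constantly, which is exactly the "something else to encode" that stops x264 from re-creating bands.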
  2. Originally Posted by hello_hello View Post
GradFun3 smooths the gradients but doesn't add enough noise to always prevent x264 from creating banding ... so when using both worked, I kept using both.
As always, thanks for the clear explanation. The results are mostly good, but, looking to fine-tune the GradFun3-F3KDB settings (and out of respect for your laziness), I asked in the color banding thread at doom9.
    Last edited by LouieChuckyMerry; 5th May 2019 at 12:44. Reason: Grammar
3. I read your post in the doom9 thread. I'm surprised you've noticed that using both sometimes increases banding. I haven't, at least not with the settings I mentioned earlier.

    By the way, by default, GradFun3 and f3kdb() both output an 8 bit clip, and GradFun3 seems to have the same dithering options as DitherPost and the same defaults, so this would dither to 8 bit:

    Dither_Convert_8_To_16()
    Gradfun3(Lsb_In=True)

    as would this:

    Dither_Convert_8_To_16()
    f3kdb(input_mode=1)

Anyway, I looked at the various options and took a guess at what would give a similar output. My first guess turned out to be pretty good. They're all but identical. See the attached screenshots.

    The big difference between the two is the ability to add grain, which f3kdb does by default (the default f3kdb grainY and grainC value is 64). You can add it with the dither plug-in though:

    Gradfun3(thr=1.0, thrc=1.0, Lsb_In=True, Lsb=True, staticnoise=true)
    Dither_add_grain16()
    DitherPost()

Now that I've had a closer look, it seems all I was effectively doing is smoothing with GradFun3, and when the dithering wasn't enough to prevent x264 from causing banding again, I was giving it something else to encode by adding grain with f3kdb. The result would probably be similar doing this:

    Gradfun3(thr=1.0, thrc=1.0)
    AddGrainC()

I didn't play around with any of the options for adding grain, and there are other settings too, but I think that's the meat of it.... smooth as much as you need to or can get away with, and if dithering isn't enough, keep the encoder busy with some grain.

    Screenshots, smoothing and dithering, no grain (and I forgot to convert the colors when upscaling):

    Screenshot 1:
    TFM(Mode=7,UBSCO=False)
    NNEDI3_RPow2(4,CShift="Spline64Resize",FWidth=960, FHeight=720)
    Dither_Convert_8_To_16()
    Gradfun3(thr=1.0, thrc=1.0, Lsb_In=True, staticnoise=true)

    Screenshot 2:
    TFM(Mode=7,UBSCO=False)
    NNEDI3_RPow2(4,CShift="Spline64Resize",FWidth=960, FHeight=720)
    Dither_Convert_8_To_16()
    f3kdb(Y=100, Cb=100, Cr=100, grainY=1, grainC=1, input_mode=1, dither_algo=2)

    Screenshot 3, only upscaled.
[Attached screenshots: 1.png, 2.png, 3.png]

    Last edited by hello_hello; 6th May 2019 at 19:04.
  4. I forgot to mention.... this should prevent the encoding police from coming after you over the top and bottom half lines.

    Crop(10,2,-10,-2)

If you work it out based on the mpeg4 pixel aspect ratio, which you were effectively using by cropping 8 pixels from each side and resizing to 4:3 dimensions, it crops to 1.335 and the aspect error is 0.127%, an aspect error too small to see, and no half lines.
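The arithmetic behind those numbers can be checked in a couple of lines; I'm assuming here a PAL 720x576 source and the MPEG-4 4:3 PAL pixel aspect ratio of 12:11, which is what reproduces the quoted figures (Crop(8,0,-8,0) gives exactly 4:3, Crop(10,2,-10,-2) gives the 0.127% error):

```python
# Assumed: PAL 720x576 source, MPEG-4 4:3 PAL pixel aspect ratio 12:11.
PAR = 12 / 11

def aspect_error(w, h):
    # Display aspect ratio after cropping, and its error relative to 4:3 (in %).
    dar = w * PAR / h
    return dar, abs(dar / (4 / 3) - 1) * 100

# Crop(8,0,-8,0): 704x576 maps to exactly 4:3.
print(aspect_error(720 - 16, 576))        # ~1.333, error ~0

# Crop(10,2,-10,-2): 700x572.
dar, err = aspect_error(720 - 20, 576 - 4)
print(round(dar, 3), round(err, 3))       # 1.335 0.127
```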

    If the filters prior to resizing prefer mod16, you could do this:

    ### Crop ###
    Crop(8,0,-8,0)
    ### Lots Of Stuff ###
    BlahBlah()
    ### Resize ###
    Crop(2,2,-2,-2)
    NNEDI3_RPow2(4,CShift="Spline64Resize",FWidth=960, FHeight=720)

    Half lines vs no half lines
[Attached screenshots: r1.jpg, r2.jpg]

    Last edited by hello_hello; 6th May 2019 at 20:24.
5. For such a small amount of top and bottom artifacts you can avoid cropping by using BorderControl, like so:

    BorderControl(YBS=2,YBSF=2,YTS=2,YTSF=2)

    You'd have to do some serious zooming to spot anything amiss. You can even use odd numbers, if that stuff is only 1 pixel thick.
  6. Originally Posted by hello_hello View Post
    I read your post in the doom9 thread. I'm surprised you've noticed using both increases banding sometimes. I haven't, at least not using the settings I mentioned earlier.
    I was wrong. What I saw as banding on my 14" laptop screen next to a bright window proved to be nothing but the added noise from F3KDB when viewed in a darkened room. Sorry for the mistake. After checking, the test with F3KDB added roughly 30% extra bit rate to the encode.


    Originally Posted by hello_hello View Post
By the way, by default, GradFun3 and f3kdb() both output an 8 bit clip, and GradFun3 seems to have the same dithering options as DitherPost and the same defaults ...
    I always send 16 bit raw to x264 10 bit. My original deband line (to be improved, however) is:

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=0.55,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    DitherPost()

    Originally Posted by hello_hello View Post
Anyway, I looked at the various options and took a guess at what would give a similar output ... smooth as much as you need to or can get away with, and if dithering isn't enough, keep the encoder busy with some grain.
Thank you for this. As soon as I've time--unfortunately I'm really busy the next few days--I'll test these with glee.


    Originally Posted by hello_hello View Post
I forgot to mention.... this should prevent the encoding police from coming after you over the top and bottom half lines: Crop(10,2,-10,-2) ...
    Originally Posted by manono View Post
For such a small amount of top and bottom artifacts you can keep from having to crop by using BorderControl like so: BorderControl(YBS=2,YBSF=2,YTS=2,YTSF=2) ...
    Thank you both for your suggestions. After I finalize the debanding I'll attack the cropping.
    Last edited by LouieChuckyMerry; 7th May 2019 at 08:08. Reason: Information
  7. Originally Posted by LouieChuckyMerry View Post

    I always send 16 bit raw to x264 10 bit. My original deband line (to be improved, however) is:

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=0.55,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    DitherPost()
    Now I'm confused, given the scripts you posted have ended with DitherPost(). You do realise DitherPost converts the 16 bit clip back to 8 bit?
    And your previous script already had the GradFun3 options Lsb_In and Lsb both set to true.
    I've done almost no 10 bit encoding so I can't speak with much experience, but the main selling point for 10 bit encoding is the extra precision so banding should be less of a problem. If you're encoding as 10 bit you mightn't need to worry too much about dithering or adding noise. You might be able to smooth out the existing banding with GradFun3 or f3kdb and encode without having to add grain etc.
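The reason 10 bit banding is less of a problem is simple arithmetic: every 8-bit code value splits into four 10-bit steps, so a gradient has four times as many levels to work with before the encoder has to make a visible jump. A rough sketch with illustrative numbers:

```python
# An 8-bit luma range of 100..108 offers only 9 code values; carried in
# 10 bits (8-bit codes scale by 4, i.e. a left shift by 2) the same
# range offers 33, so each quantization step is a quarter the size.
lo8, hi8 = 100, 108
codes_8 = hi8 - lo8 + 1
codes_10 = hi8 * 4 - lo8 * 4 + 1

print(codes_8, codes_10)   # 9 33
```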
    Although I think you're brave encoding as 10 bit with x264, mainly because hardware players don't support it and probably never will. It's a different story for h265 as 10 bit was part of the spec from the beginning. If I was to encode as 10 bit, I think I'd use x265 instead. At least in theory (I haven't done much x265 encoding).

    Originally Posted by LouieChuckyMerry View Post
    After checking, the test with F3KDB added roughly 30% extra bit rate to the encode.
    That's generally the price you pay for encoding the added grain, although 30% is quite an increase.

    I'm not even sure how much of this you need to worry about now. I mentioned your script wasn't quite getting all the banding and it wouldn't hurt to increase the debanding a little bit, and that's still true, but I'm not sure how much banding prevention you need to do after that if you're encoding as 10 bit.
  8. Originally Posted by hello_hello View Post
Now I'm confused, given the scripts you posted have ended with DitherPost(). You do realise DitherPost converts the 16 bit clip back to 8 bit? ... If I was to encode as 10 bit, I think I'd use x265 instead.
    I wondered why it seemed so easy. A couple-three weeks ago I upgraded from MeGUI Version 2855 with SEt's AviSynth MT to the most recent version with pinterf's AviSynth+. With 2855-SEt my scripts' last lines were:

    Code:
    GradFun3("SettingsDependOnSource",Lsb_In=True,Lsb=True)
    ### Preview Source OR Send 16-bit Output To x264 10-bit ###
    # DitherPost()
    Dither_Out()
    with an x264 custom command line of

    Code:
    --demuxer raw --input-depth 16 --sar 1:1
    When I wanted to open the video preview, I'd unhash "DitherPost()" and hash "Dither_Out"; when I wanted to encode the video, I'd hash "DitherPost()" and unhash "Dither_Out".

As I puttered about trying to get pinterf's AviSynth+ to work with the newest version of MeGUI, the only way I could output proper, non-acid-flashback-green-double-width video was to encode with "Dither_Out()" hashed and "DitherPost()" unhashed. Silly me; I assumed--yes yes, I know--that pinterf's AviSynth+ invoked a different syntax. Seems I was simply sending 8-bit to x264 10-bit, now that you remind me what "DitherPost()" does.

Anyway, I've tried every combination of "Dither_Out()" and the x264 custom command line and, for the life of me, the only "video" I can output is acid-flashback-green-double-width. Any ideas? I'll keep fiddling about; maybe I'll get lucky.
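That double-width picture is consistent with raw 16-bit output being read as 8-bit: each 16-bit sample is two bytes, so a row interpreted as 8-bit appears twice as wide, and the individual bytes land on meaningless luma/chroma levels, hence the acid colors. A minimal sketch of the arithmetic (hypothetical sample values):

```python
import struct

# Four 16-bit luma samples occupy eight bytes.
row16 = [26000, 30000, 51200, 60000]
raw = struct.pack("<4H", *row16)      # little-endian 16-bit, as in raw YUV output

# A decoder told the stream is 8-bit sees one "pixel" per byte:
row_as_8bit = list(raw)
print(len(row16), len(row_as_8bit))   # 4 16-bit pixels become 8 bogus 8-bit pixels
```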

    EDIT: I found a thread on MeGUI-AviSynth+-16-bit To 10-bit, and this seems to work with my x264 custom command line:

    Code:
    ### Preview Source OR Send 16-bit Output To x264 10-bit ###
    ## Trim()
    # SelectRangeEvery(1000,66)
    # DitherPost()
    ConvertFromStacked.ConvertBits(10,Dither=0)
    as does this:

    Code:
    ### Preview Source OR Send 16-bit Output To x264 10-bit ###
    ## Trim()
    # SelectRangeEvery(1000,66)
    # DitherPost()
    ConvertFromStacked
Guess it was an AviSynth+ syntax issue after all. Anyway, I'm now wondering if there's any difference between the above methods; i.e., has AviSynth+ evolved to where it no longer needs the "ConvertBits(10,Dither=0)", or does skipping it change the quality of the output video in any way? Both seem to output the same thing, but a bit of paranoia at this point is definitely warranted, as I only want to figure this out once.
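For what it's worth, the 16-to-10-bit step itself is just a divide by 64 (dropping the six least-significant bits); what the Dither argument controls is whether that sub-10-bit detail is dithered in rather than discarded. A rough sketch of the plain truncating case (an illustration, not AviSynth+'s actual internals):

```python
def to_10bit(v16):
    # Dropping the six least-significant bits maps 0..65535 onto 0..1023.
    return v16 >> 6

print(to_10bit(65535))                            # 1023: full-range white survives
print(to_10bit(64 * 500 + 63), to_10bit(64 * 500))  # both 500: the low 6 bits are lost
```

Without dithering, any gradient information living in those low 6 bits is thrown away, which is why a dithered conversion can matter for banding even when both outputs look the same at a glance.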

    EDITEDIT: And it's no longer necessary to unhash "DitherPost()" to preview the video; it looks normal with both last lines.



    Originally Posted by hello_hello View Post
    Originally Posted by LouieChuckyMerry View Post
    After checking, the test with F3KDB added roughly 30% extra bit rate to the encode.
    That's generally the price you pay for encoding the added grain, although 30% is quite an increase.

    I'm not even sure how much of this you need to worry about now. I mentioned your script wasn't quite getting all the banding and it wouldn't hurt to increase the debanding a little bit, and that's still true, but I'm not sure how much banding prevention you need to do after that if you're encoding as 10 bit.
    Perhaps if I can sort out the above, then my 10-bit output from 16-bit input will eliminate the slight banding. I went with 10-bit output when I first started encoding because everything I read said it would lessen the banding and I don't use a player. I either watch on my laptop or, on the rare occasion I've access to an HDTV, I run an HDMI cable from my laptop to the TV.
    Last edited by LouieChuckyMerry; 8th May 2019 at 12:34. Reason: Fixed Quote; Updates
  9. Originally Posted by manono View Post
For such a small amount of top and bottom artifacts you can keep from having to crop by using BorderControl like so: BorderControl(YBS=2,YBSF=2,YTS=2,YTSF=2) ...
    Thanks again for the suggestion, manono, I finally have a bit of time to test it. Where would you suggest placing the BorderControl line in my script?
  10. Doesn't matter. In a script I'm working on now it's right below the Source filter line.

    AVISource("movie5.avi")

    BorderControl(XLS=2,XLSF=2)


    Mine has crap on the left side.
11. Thanks for your reply, manono; good luck with your script (at least the crap is left).
  12. I finally had time to run tests. Along with the set part of the script:

    Code:
    SOURCE INFORMATION HERE
    ### Deinterlace ###
    TFM(Mode=7,UBSCO=False)
    ### Color Conversion ###
    ColorMatrix(Mode="Rec.601->Rec.709")
    ### Adjust Color ###
    MergeChroma(aWarpSharp2(Depth=16))
    ### Crop ###
    Crop(8,0,-8,0)
    ### Gibbs Noise Block ###
    Edge=MT_Edge("prewitt",ThY1=20,ThY2=40).RemoveGrain(17)
    Mask=MT_Logic(Edge.MT_Expand().MT_Expand().MT_Expand().MT_Expand(),Edge.MT_Inflate().MT_Inpand(),"xor").Blur(1.0)
    MT_Merge(Minblur(),Mask,Luma=True)
    ### Overall Temporal Denoise ###
SMDegrain(TR=2,ThSAD=200,ContraSharp=True,RefineMotion=True,Plane=0,Chroma=False,PreFilter=2,Lsb=True,Lsb_Out=False,Mode=6)
    ### Resize ###
    NNEDI3_RPow2(4,CShift="Spline64Resize",FWidth=960,FHeight=720)
    aWarpSharp2(Depth=5)
    Sharpen(0.2)
    ### Fix Frame Borders ###
    BorderControl(YTS=1,YTSF=1,YBS=1,YBSF=1)
    ### Darken-Thin Lines ###
    Dither_Convert_8_To_16()
    F=DitherPost(Mode=-1)
    S=F.FastLineDarkenMod(Strength=24,Prot=6).aWarpSharp2(Blur=4,Type=1,Depth=8,Chroma=2)
    D=MT_MakeDiff(S,F).Dither_Convert_8_To_16()
    Dither_Add16(Last,D,Dif=True,U=2,V=2)
    I tried:

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=0.55,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    AddGrainC()
    and

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=0.55,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    Dither_Add_Grain16()
    and

    Code:
    ### Deband ###
    F3KDB(Y=100,Cb=100,Cr=100,GrainY=1,GrainC=1,Input_Mode=1,Output_Depth=16)
    and

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=0.55,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    and

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=0.55,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    F3KDB(Input_Mode=1,Output_Depth=16)
    and

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=1.0,ThRC=1.0,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    and

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=1.0,ThRC=1.0,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    F3KDB(Input_Mode=1,Output_Depth=16)
    and finally

    Code:
    ### Deband ###
    GradFun3(Radius=16,ThR=1.0,ThRC=1.0,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    F3KDB(Y=100,Cb=100,Cr=100,GrainY=1,GrainC=1,Input_Mode=1,Output_Depth=16)
To my eyes the result with "F3KDB(Y=100,Cb=100,Cr=100,GrainY=1,GrainC=1,Input_Mode=1,Output_Depth=16)" looks the best, so thanks for the idea, hello_hello.

For the top and bottom frame borders "BorderControl(YTS=1,YTSF=1,YBS=1,YBSF=1)" did the job very well indeed, so thanks again, manono.

I do have questions about the AviSynth+ syntax for sending raw 16-bit to x264 10-bit. After much searching I found a solution that seems to work, but I want to make sure it's the proper way; that is, does this script process in high bit depth where possible, then truly send 16-bit to x264 10-bit correctly? With the x264 custom command line:

    Code:
    --demuxer raw --input-depth 16 --sar 1:1 --colorprim bt709 --transfer bt709 --colormatrix bt709
    the script is:

    Code:
    SOURCE INFORMATION HERE
    ### Deinterlace ###
    TFM(Mode=7,UBSCO=False)
    ### Color Conversion ###
    ColorMatrix(Mode="Rec.601->Rec.709")
    ### Adjust Color ###
    MergeChroma(aWarpSharp2(Depth=16))
    ### Crop ###
    Crop(8,0,-8,0)
    ### Gibbs Noise Block ###
    Edge=MT_Edge("prewitt",ThY1=20,ThY2=40).RemoveGrain(17)
    Mask=MT_Logic(Edge.MT_Expand().MT_Expand().MT_Expand().MT_Expand(),Edge.MT_Inflate().MT_Inpand(),"xor").Blur(1.0)
    MT_Merge(Minblur(),Mask,Luma=True)
    ### Overall Temporal Denoise ###
    SMDegrain(TR=2,ThSAD=200,ContraSharp=True,RefineMotion=True,Plane=0,Chroma=False,PreFilter=2,Lsb=True,Lsb_Out=False,Mode=6)
    ### Resize ###
    NNEDI3_RPow2(4,CShift="Spline64Resize",FWidth=960,FHeight=720)
    aWarpSharp2(Depth=5)
    Sharpen(0.2)
    ### Fix Frame Borders ###
    BorderControl(YTS=1,YTSF=1,YBS=1,YBSF=1)
    ### Darken-Thin Lines ###
    Dither_Convert_8_To_16()
    F=DitherPost(Mode=-1)
    S=F.FastLineDarkenMod(Strength=24,Prot=6).aWarpSharp2(Blur=4,Type=1,Depth=9,Chroma=2)
    D=MT_MakeDiff(S,F).Dither_Convert_8_To_16()
    Dither_Add16(Last,D,Dif=True,U=2,V=2)
    ### Deband ###
    F3KDB(Y=100,Cb=100,Cr=100,GrainY=1,GrainC=1,Input_Mode=1,Output_Depth=16)
    ConvertFromStacked.ConvertBits(10,Dither=0)
    Prefetch(X)
    My question is whether "ConvertBits(10,Dither=0)" is redundant, given that the output is being sent to x264 10-bit. I ask because the output with or without "ConvertBits(10,Dither=0)" looks the same to my eyes on my 14" laptop screen.

Finally, a couple of questions about the SMDegrain line:

    1) Is "PreFilter=2" needed? The SMDegrain wiki states "For sources with Gibbs noise, especially on anime, try prefilter=1 or 2..." but the Gibbs Noise Block comes before the SMDegrain call.

    2) Is "Mode=6" correct? The SMDegran wiki states "This is the mode of DitherPost when lsb_out=False, as a dithering method must be chosen for the 32bit->8bit conversion. (Interlaced content is locked to mode=6) The default mode=0 will help you optimize the dithering for optimum encodings when no further non-edge processing is done. Use mode=6 (error diffusion) if further processing will be done."

    EDIT: As I stared slack-jawed at the above script, trying to fully understand it, I realized that the Darken-Thin Lines segment was incorrect. SMDegrain runs in 16-bits ("Lsb=True") then sends 8-bit ("Lsb_Out=False") to be resized and have the frame borders fixed; the Darken-Thin Lines section then converts 8-bits to 16-bits ("Dither_Convert_8_To_16()"), only to immediately revert to 8-bits ("F=DitherPost(Mode=-1)") for FastLineDarkenMod to do its business. Then, I think, 16-bit is sent to F3KDB. Seems to me that:

    Code:
    ### Darken-Thin Lines ###
    Dither_Convert_8_To_16()
    F=DitherPost(Mode=-1)
    S=F.FastLineDarkenMod(Strength=24,Prot=6).aWarpSharp2(Blur=4,Type=1,Depth=9,Chroma=2)
    D=MT_MakeDiff(S,F).Dither_Convert_8_To_16()
    Dither_Add16(Last,D,Dif=True,U=2,V=2)
    ### Deband ###
    F3KDB(Y=100,Cb=100,Cr=100,GrainY=1,GrainC=1,Input_Mode=1,Output_Depth=16)
    should properly be:

    Code:
    ### Darken-Thin Lines ###
    FastLineDarkenMod(Strength=24,Prot=6).aWarpSharp2(Blur=4,Type=1,Depth=8,Chroma=2)
    ### Deband ###
    Dither_Convert_8_To_16()
    F3KDB(Y=100,Cb=100,Cr=100,GrainY=1,GrainC=1,Input_Mode=1,Output_Depth=16)
    which almost doubles the encoding speed and looks the same to my eyes. Is this change correct?
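For reference, the original Darken-Thin Lines block is the usual make-diff trick: compute the sharpening on an 8-bit preview, then apply only the difference back to the 16-bit clip so the high-bit-depth data survives. (MT_MakeDiff actually stores the difference offset around 128 and clamped, but the arithmetic is the same idea.) A toy illustration with made-up pixel values:

```python
base = [100, 120, 140]    # 8-bit preview of the 16-bit clip (the DitherPost output)
sharp = [98, 121, 145]    # after FastLineDarkenMod / aWarpSharp2 (made-up values)

# MT_MakeDiff-style difference between the sharpened and plain 8-bit clips...
diff = [s - b for s, b in zip(sharp, base)]

# ...scaled up and added back to the 16-bit clip (Dither_Add16-style),
# so the 8-bit edit lands on the high-bit-depth pixels.
hi16 = [b * 256 for b in base]
result = [h + d * 256 for h, d in zip(hi16, diff)]

print(result == [s * 256 for s in sharp])   # True: the sharpening carried over
```

So the original block only pays off if the clip reaching it is genuinely 16-bit; once SMDegrain hands back 8-bit anyway, running FastLineDarkenMod directly and converting to 16-bit just before F3KDB, as above, should give the same picture faster.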

    EDITEDIT:
After more slack-jawed staring, as well as a bit of involuntary drooling, while attempting to decipher the original Darken-Thin Lines section, I decided to place the Resize block before the Overall Temporal Denoise line. I realize this is considered bad form, but given that the source is pretty clean it made sense to me to resize before starting any of the high bit depth actions; plus, I can then use my original Darken-Thin Lines block. Also, I moved the Fix Frame Borders line up in the script so it's processing the video before upscaling, which a couple of quick tests showed increases the encoding speed quite a bit. So now I've:

    Code:
    SOURCE INFORMATION HERE
    ### Deinterlace ###
    TFM(Mode=7,UBSCO=False)
    ### Color Conversion ###
    ColorMatrix(Mode="Rec.601->Rec.709")
    ### Adjust Color ###
    MergeChroma(aWarpSharp2(Depth=16))
    ### Crop ###
    Crop(8,0,-8,0)
    ### Fix Frame Borders ###
    BorderControl(YTS=1,YTSF=1,YBS=1,YBSF=1)
    ### Gibbs Noise Block ###
    Edge=MT_Edge("prewitt",ThY1=20,ThY2=40).RemoveGrain(17)
    Mask=MT_Logic(Edge.MT_Expand().MT_Expand().MT_Expand().MT_Expand(),Edge.MT_Inflate().MT_Inpand(),"xor").Blur(1.0)
    MT_Merge(Minblur(),Mask,Luma=True)
    ### Resize ###
    NNEDI3_RPow2(4,CShift="Spline64Resize",FWidth=960,FHeight=720)
    aWarpSharp2(Depth=5)
    Sharpen(0.2)
    ### Overall Temporal Denoise ###
SMDegrain(TR=2,ThSAD=200,ContraSharp=True,RefineMotion=True,Plane=0,Chroma=False,PreFilter=2,Lsb=True,Lsb_Out=True)
    ### Darken-Thin Lines ###
    F=DitherPost(Mode=-1)
    S=F.FastLineDarkenMod(Strength=24,Prot=6).aWarpSharp2(Blur=4,Type=1,Depth=8,Chroma=2)
    D=MT_MakeDiff(S,F).Dither_Convert_8_To_16()
    Dither_Add16(Last,D,Dif=True,U=2,V=2)
    ### Deband ###
    F3KDB(Y=100,Cb=100,Cr=100,GrainY=1,GrainC=1,Input_Mode=1,Output_Depth=16)
    ConvertFromStacked.ConvertBits(10,Dither=0)
    Prefetch(3)
    which seems to work quite well. I still wonder, however, if "ConvertBits(10,Dither=0)" and "PreFilter=2" are necessary...
    Last edited by LouieChuckyMerry; 14th May 2019 at 12:36. Reason: Clarity; Improvement; More!


