VideoHelp Forum

Results 1 to 26 of 26
  1. Hello and Happy Saturday. Some years ago kind people here helped me develop an AviSynth script for encoding my Futurama NTSC DVDs that does about as good a job as possible given the poor quality of the DVDs. The only remaining flaws are inherent in the video:



    And, recently, kind people here helped me transform this script such that it now does a fine job of upscaling the Futurama NTSC DVDs to 720p. Thanks again. While I was preparing to upscale the 72 episodes from seasons 1-4, I found myself bothered by the above-style video flaws, as they appear, albeit briefly, multiple times in each episode. So I did some checking, and it turns out the PAL versions of the DVDs don't have this issue. Unfortunately, the PAL DVDs are limited to 2.0 192 Kbps audio, whereas the NTSC DVDs have 5.1 audio. Thus my question: is there any reasonable way to combine the PAL video with the NTSC audio? I know that the PAL video is 25 frames per second and the processed NTSC DVDs result in 23.976 frames per second. I understand that I could use an application, such as MeGUI, to speed up the NTSC audio, but the few times I've been forced to do this have resulted in very poorly synced audio. I guess, then, my question is actually: is there any elegant way to input 25 fps PAL video and output 23.976 fps? Thanks for any suggestions.
    Image attached: S1.E1-SpacePilot3000[FlawedVideo].jpg

  2. davexnet — Member, United States (joined Mar 2008)
    Since you're going to be re-encoding the PAL source, just slow the video down to 23.976 and use the NTSC audio as-is
  3. Originally Posted by davexnet View Post
    Since you're going to be re-encoding the PAL source, just slow the video down to 23.976 and use the NTSC audio as-is
    davexnet: Thanks for your reply. How would you suggest accomplishing this during the encoding process?
  4. Member, United States (joined Mar 2008)
    An AviSynth script, one that simply opens the source and uses AssumeFPS to slow the source video.
    Then you can encode it in any program that accepts the script.
    Or you could encode the video directly in VirtualDub2, using Video > Frame rate > Change frame rate to...

    Then, when you have your new video, combine it with the audio using MKVToolNix, Avidemux, etc.
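    A minimal sketch of that approach (the file name here is a placeholder, and TFM() is assumed for field matching since the PAL source is interlaced):

    Code:
    Mpeg2Source("Futurama-PAL.d2v")   # open the indexed PAL source
    TFM()                             # field match to progressive 25fps
    AssumeFPS(24000,1001)             # retime 25 -> 23.976fps (~4.1% slower)
    # Encode this script, then mux the result with the untouched NTSC 5.1
    # audio in MKVToolNix or Avidemux.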
  5. Here's the original upscale script for my NTSC DVDs:

    Code:
    DGINDEX SOURCE INFORMATION HERE
    ### Color Conversion ###
    ColorMatrix(Mode="Rec.601->Rec.709")
    ### Deinterlace-Match Fields-Decimate ###
    AssumeTFF()
    TFM(Chroma=False,PP=0) 
    AssumeBFF()
    Interleave(TFM(Mode=1,PP=0,Field=1),TFM(Mode=1,PP=0,Field=0))
    TFM(Field=0,Clip2=Yadif())
    vInverse()
    SRestore(23.976)
    ### Adjust Color ###
    MergeChroma(aWarpSharp2(Depth=19))
    SmoothTweak(Saturation=1.01)
    ### Crop ###
    Crop(8,0,-8,0)
    ### Gibbs Noise Block ###
    Edge=MT_Edge("prewitt",ThY1=20,ThY2=40).RemoveGrain(17)
    Mask=MT_Logic(Edge.MT_Expand().MT_Expand().MT_Expand().MT_Expand(),Edge.MT_Inflate().MT_Inpand(),"xor").Blur(1.0)
    MT_Merge(Minblur(),Mask,Luma=True)
    ### Overall Temporal Denoise ###
    SMDegrain(TR=1,ThSAD=200,ContraSharp=True,RefineMotion=True,Plane=0,PreFilter=2,Chroma=False,Lsb=True,Lsb_Out=False)
    ### Resize ###
    NNEDI3_RPow2(4,CShift="Spline64Resize",FWidth=960,FHeight=720)
    aWarpSharp2(Depth=5)
    Sharpen(0.2)
    ### Darken-Thin Lines ###
    Dither_Convert_8_To_16()
    F=DitherPost(Mode=-1)
    S=F.FastLineDarkenMod(Strength=24,Prot=6).aWarpSharp2(Blur=4,Type=1,Depth=3,Chroma=2)
    D=MT_MakeDiff(S,F).Dither_Convert_8_To_16()
    Dither_Add16(Last,D,Dif=True,U=2,V=2)
    ### Deband ###
    GradFun3(Radius=16,ThR=0.55,SMode=2,StaticNoise=True,Lsb_In=True,Lsb=True)
    DitherPost()
    And here's a sample from my PAL DVDs: S1.E1-FuturamaSample[PAL]

    Suggestions for a change to the "### Deinterlace-Match Fields-Decimate ###" section that would cleanly decimate the PAL video to 23.976 fps are much appreciated.

    EDIT: I'm much more interested in quality over speed.
    Last edited by LouieChuckyMerry; 14th Apr 2019 at 12:16. Reason: Information. Information...
  6. Is the sample typical or an anomaly?

    Because pretty much all it needs is
    TFM()
    AssumeFPS(24000,1001)

    Except for the glitch around frame 118, but there's blending in both fields, so you either live with some blending for a frame, or you take it out and there'll be a glitch in the motion. Both top and bottom fields are blended for a couple of frames, so I don't think there's any way to extract a clean one.

    I had a look at your script, but it made my head hurt and I had to lie down for a bit.

    So the attached thingy is:

    Code:
    mpeg2source("D:\S1.E1-FuturamaSample[PAL].d2v", cpu=6)
    TFM()
    SomeFilteringCopiedFromAScriptStillOnMyHardDriveSoImNotClaimingItsClever()
    ExtraBandingBeGone()
    AssumeFPS(24000,1001)
    Image Attached Files
  7. Originally Posted by hello_hello View Post
    Is the sample typical or an anomaly?

    Because pretty much all it needs is
    TFM()
    AssumeFPS(24000,1001)
    Typical. I took a clip with panning because that tends to look the worst.


    Originally Posted by hello_hello View Post
    Except for the glitch around frame 118, but there's blending in both fields, so you either live with some blending for a frame, or you take it out and there'll be a glitch in the motion. Both top and bottom fields are blended for a couple of frames, so I don't think there's any way to extract a clean one.
    How would one "take it out"? Any way to replace it somehow, like duplicating a "clean" frame next to it? Any idea how the [CC] encode was able to minimize that glitch?: S1.E1-Glitches[PAL]


    Originally Posted by hello_hello View Post
    I had a look at your script, but it made my head hurt and I had to lie down for a bit.
    I hope you're feeling better; you get used to it eventually.


    Originally Posted by hello_hello View Post
    So the attached thingy is:

    Code:
    mpeg2source("D:\S1.E1-FuturamaSample[PAL].d2v", cpu=6)
    TFM()
    SomeFilteringCopiedFromAScriptStillOnMyHardDriveSoImNotClaimingItsClever()
    ExtraBandingBeGone()
    AssumeFPS(24000,1001)
    Would you please share the remainder of your script? Mine results in some kind of horizontal artifacts that are very annoying:



    EDIT: I ran the above script but with:

    Code:
    ### Deinterlace-Match Fields-Decimate ###
    QTGMC()
    SRestore(23.976)
    and there's no more glitch. But there are still those horizontal artifacts...
    Image attached: S1.E1-Artifacts[TCM()].jpg

    Last edited by LouieChuckyMerry; 14th Apr 2019 at 16:02. Reason: Clarity
  8. Originally Posted by LouieChuckyMerry View Post
    How would one "take it out"? Any way to replace it somehow, like duplicating a "clean" frame next to it?
    I've never tried to interpolate missing frames, but I think there's a function for it.
    Much of the time, I'd repeat a frame and drop the problem one if possible
    Trim(0,117) + \
    Trim(117,117) + \
    Trim(119,0)
    but there's too much motion and you can see the glitch. Fortunately, there's plenty of motion where the blended frame happens so it's pretty hard to spot.

    You can minimise the blending by bossing TFM around a bit. I think it was something like this:

    A = TFM()
    B = TFM(pp=3, field=0)
    A.Trim(0,117) + \
    B.Trim(118,118) + \
    A.Trim(119,0)

    If I remembered the frame numbers correctly, it'll match and de-interlace in the other direction, and there's less blending, but it adds a slight glitch to the motion.

    Originally Posted by LouieChuckyMerry View Post
    Any idea how the [CC] encode was able to minimize that glitch?: S1.E1-Glitches[PAL]
    I missed the question. See my next post.

    Originally Posted by LouieChuckyMerry View Post
    Would you please share the remainder of your script; mine results in some kind of horizontal artifacts that are very annoying
    Maybe because ColorMatrix is too early in your script? The screenshot is from a section where the fields go out of alignment (from memory) so maybe if you TFM'd before ColorMatrix? Or you could try Interlaced=true, but I always color convert after de-interlacing or field matching.
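    In script form, the suggestion might look like this (a rough sketch against the top of the earlier script, not tested against the source):

    Code:
    # Option 1: field match first, then convert color on progressive frames
    AssumeTFF()
    TFM()
    ColorMatrix(Mode="Rec.601->Rec.709")
    # Option 2: keep ColorMatrix first, but tell it the source is interlaced
    # ColorMatrix(Mode="Rec.601->Rec.709",Interlaced=True)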

    My script's a bit embarrassing. It's not very long and it's 8 bit all the way.
    It's probably largely ideas borrowed from jagabo at some stage. Although not so long ago, I discovered GradFun3() and f3kdb() together can work quite well for animation, at least when you're staying in 8 bit mode. I was converting some old Family Guy episodes and GradFun3() got rid of most of the existing banding, but x264 still added some back when encoding. f3kdb() prevented most of that, but I didn't like the look of it. I'm not sure I could explain why exactly; something about it didn't look natural to me, I guess. Together though....

    DGDecode_mpeg2source("D:\S1.E1-FuturamaSample[PAL].d2v", cpu=6)
    TFM()
    TTempSmooth()
    FastLineDarken(Thinning=0)
    MergeChroma(AwarpSharp(Depth=5), AwarpSharp(Depth=20))
    CropResize(960,720,8,2,-8,-2,InDAR=15.0/11.0,Resizer="Spline36") # CropResize also converts to rec.709
    FastLineDarken(Thinning=0)
    MAA()
    DeHalo_alpha()
    CSMod(strength=150)
    GradFun3(thr=1.0, thrc=1.0)
    f3kdb()
    AssumeFPS(24000,1001)
    Actually.... if you happen to give MAA() a spin, could you try MAA2() in a script with MeGUI? For some reason MAA2() always causes MeGUI to crash when I close the preview. MAA() is fine. I'd be interested to know if it's just my PC. The dither package does the same. I'm using Dither 1.27.1 because the newer versions don't play nice with MeGUI.
    MAA
    MAA2
    Last edited by hello_hello; 18th Apr 2019 at 06:25.
  9. Originally Posted by LouieChuckyMerry View Post
    Any idea how the [CC] encode was able to minimize that glitch?: S1.E1-Glitches[PAL]
    Chances are if the NTSC version was telecined, it glitched differently. AnimeIVTC has an option for fixing that sort of thing. It's described like this in the help file.

    chrfix : Use to correct chroma swap between fields (to find out, apply bob() on your clip and examine the frames. If at some point the chroma of a frame is in the other and vice-versa, the issue is present).

    I don't know how it works as such, and I've not used the option much myself, but it looks more like some sort of "chroma-swap" in the NTSC version. I don't really know how it gets that way either.
  10. Originally Posted by LouieChuckyMerry View Post
    How would one "take it out"? Any way to replace it somehow, like duplicating a "clean" frame next to it?
    I didn't have much luck with frame interpolation so I used a picture editor (Photofiltre) to first fix frame #118 and AviSynth to stick it back into the video:

    TFM()
    Q=ImageSource("118.bmp",End=118).ConvertToYV12()
    ReplaceFramesSimple(Last,Q,Mappings="118")
    Images attached: 118Before.jpg, 118After.jpg

  11. That looks pretty good. I'll have to check out the Photofiltre program to which you refer.
  12. Originally Posted by LouieChuckyMerry View Post
    EDIT: I ran the above script but with:

    Code:
    ### Deinterlace-Match Fields-Decimate ###
    QTGMC()
    SRestore(23.976)
    and there's no more glitch. But there are still those horizontal artifacts...
    Step through the frames where there's constant movement, and compare it to TFM().AssumeFPS(24000,1001).
    I think you'll find there's frames missing from the Srestore version (a frame per second for 24fps), and I suspect you're lucky it just happened to pick the bad frame to remove, this time.
  13. I agree. With TFM alone and making it 25fps, during that long pan in the middle there are no duplicate frames as there would be if it was supposed to be 23.976fps. So, to get it to 23.976fps you use an AssumeFPS command after TFM and slow the audio to match.

    I'll have to check out the Photofiltre program to which you refer.
    Any picture editor can do the job. I use Photofiltre because it's good and free and I've learned how to use it pretty well. But I don't mind recommending it. You load the bad frame as well as the good frames on either side. Then you add pieces from both sides to 'restore' the bad one. In principle it's easy to do, and only takes time.
  14. Originally Posted by hello_hello View Post
    Originally Posted by LouieChuckyMerry View Post
    Would you please share the remainder of your script; mine results in some kind of horizontal artifacts that are very annoying
    Maybe because ColorMatrix is too early in your script? The screenshot is from a section where the fields go out of alignment (from memory) so maybe if you TFM'd before ColorMatrix? Or you could try Interlaced=true, but I always color convert after de-interlacing or field matching.
    After my last post (see the EDIT) I ran a test with:

    Code:
    ### Deinterlace-Match Fields-Decimate ###
    QTGMC()
    SRestore(23.976)
    and the rest of the script the same, and the results were very interesting; the horizontal artifacts--please, what are they called?--were still there, but the single bad frame was gone. Then I ran the same script but with the deinterlacing before the color conversion, and the horizontal artifacts were gone but the single bad frame was back. This I don't understand.


    Originally Posted by hello_hello View Post
    My script's a bit embarrassing. It's not very long and it's 8 bit all the way.
    It's probably largely ideas borrowed from jagabo at some stage. Although not so long ago, I discovered GradFun3() and f3kdb() together can work quite well for animation, at least when you're staying in 8 bit mode. I was converting some old Family Guy episodes and GradFun3() got rid of most of the existing banding, but x264 still added some back when encoding. f3kdb() prevented most of that, but I didn't like the look of it. I'm not sure I could explain why exactly; something about it didn't look natural to me, I guess. Together though....
    DGDecode_mpeg2source("D:\S1.E1-FuturamaSample[PAL].d2v", cpu=6)
    TFM()
    TTempSmooth()
    FastLineDarken(Thinning=0)
    MergeChroma(AwarpSharp(Depth=5),AwarpSharp(Depth=20))
    CropResize(960,720,8,2,-8,-2,InDAR=15.0/11.0,Resizer="Spline36") # CropResize also converts to rec.709
    FastLineDarken(Thinning=0)
    MAA()
    DeHalo_alpha()
    CSMod(strength=150)
    GradFun3(thr=1.0, thrc=1.0)
    f3kdb()
    AssumeFPS(24000,1001)
    Why do you "repeat" the aWarpSharp call?


    Originally Posted by hello_hello View Post
    Actually.... if you happen to give MAA() a spin, could you try MAA2() in a script with MeGUI? For some reason MAA2() always causes MeGUI to crash when I close the preview. MAA() is fine. I'd be interested to know if it's just my PC. The dither package does the same. I'm using Dither 1.27.1 because the newer versions don't play nice with MeGUI.
    MAA
    MAA2
    I'll run a test after I post this reply if I'm still capable.


    Originally Posted by manono View Post
    Originally Posted by LouieChuckyMerry View Post
    How would one "take it out"? Any way to replace it somehow, like duplicating a "clean" frame next to it?
    I didn't have much luck with frame interpolation so I used a picture editor (Photofiltre) to first fix frame #118 and AviSynth to stick it back into the video:

    TFM()
    Q=ImageSource("118.bmp",End=118).ConvertToYV12()
    ReplaceFramesSimple(Last,Q,Mappings="118")
    That looks really good, but I doubt even Matt Groening himself would sift through 72 episodes of Futurama and find all the frames that need fixing.
  15. Originally Posted by hello_hello View Post
    Originally Posted by LouieChuckyMerry View Post
    EDIT: I ran the above script but with:

    Code:
    ### Deinterlace-Match Fields-Decimate ###
    QTGMC()
    SRestore(23.976)
    and there's no more glitch. But there are still those horizontal artifacts...
    Step through the frames where there's constant movement, and compare it to TFM().AssumeFPS(24000,1001).
    I think you'll find there's frames missing from the Srestore version (a frame per second for 24fps), and I suspect you're lucky it just happened to pick the bad frame to remove, this time.
    Yes, I'm slow. I ran a test a shortish while ago that output the first 2m32s of season one, episode one of Futurama with the QTGMC section for deinterlacing and the rest of the script per the original. Checking frame-by-frame (death by drowning), the results were the best yet. Honestly, I didn't see even one glitchy frame. The results with QTGMC and AssumeFPS were similar but not as good. A small sample size, but promising.
  16. A fading thought before drifting off?:

    Code:
    QTGMC()
    AssumeFPS(24000,1001)
  17. Originally Posted by LouieChuckyMerry View Post
    A fading thought before drifting off?:

    Code:
    QTGMC()
    AssumeFPS(24000,1001)
    That's very good. Especially if you want to play all your Futurama encodes in super slo-mo, so you can enjoy each episode for much longer.
  18. Originally Posted by manono View Post
    Originally Posted by LouieChuckyMerry View Post
    A fading thought before drifting off?:

    Code:
    QTGMC()
    AssumeFPS(24000,1001)
    That's very good. Especially if you want to play all your Futurama encodes in super slo-mo, so you can enjoy each episode for much longer.
    What's up with that? I let a single episode run overnight and MediaInfo shows 23.976 fps but the time of the episode seems to have about doubled and it does play in slow motion.
  19. Originally Posted by LouieChuckyMerry View Post
    After my last post (see the EDIT) I ran a test with:

    Code:
    ### Deinterlace-Match Fields-Decimate ###
    QTGMC()
    SRestore(23.976)
    and the rest of the script the same, and the results were very interesting; the horizontal artifacts--please, what are they called--were still there but the single bad frame was gone. Then I ran the same script but with the deinterlacing before the color conversion, and the horizontal artifacts were gone but the single bad frame was back. This I don't understand .
    SRestore normally looks for blended frames and tries to keep the non-blended ones, but for a field blended source the blending usually happens in a fairly consistent pattern.
    Your sample isn't field blended (apart from a couple of frames) so to reduce the frame rate SRestore needs to delete "some" frames, and probably for a non-blended source, duplicate frames would be removed in preference to non-duplicates, but there's no blending pattern as such, which is why I said you probably got lucky the first time, and there'd have to be some sort of a glitch in motion where the blended fields are, because there's no clean fields to use.

    Originally Posted by LouieChuckyMerry View Post
    Why do you "repeat" the aWarpSharp call?
    Probably because jagabo told me to

    Because they're used inside MergeChroma, the first instance is only sharpening the luma (it's sharpening luma and chroma, but MergeChroma takes the chroma from the second instance), so by using only mild sharpening it's not doing horrible things to lines. The second instance can go to town on the chroma.

    Screenshot 1 is
    FastLineDarken(Thinning=0)
    Spline36Resize(1440,1080) # the resizing is only to make the differences easier to see.

    Screenshot 2 is
    FastLineDarken(Thinning=0)
    AwarpSharp(Depth=20)
    Spline36Resize(1440,1080)

    Screenshot 3 is
    FastLineDarken(Thinning=0)
    MergeChroma(AwarpSharp(Depth=5), AwarpSharp(Depth=20))
    Spline36Resize(1440,1080)

    The "good" in screenshot 3 is the chroma meets the lines (or gets much closer) without messing with them too much. You can see it in the orange in her hair, or the green, which doesn't bleed onto her arm as much.
    The "bad" is maybe the chroma sharpening is a little over-done. The line on the left side of Peter's face looks a bit orange in screenshot 3. Depth=10 might have been enough. Still, I'd rather live with that. Everything in video filtering seems to be some sort of a compromise.

    I assume MergeChroma(aWarpSharp2(Depth=19)) in your script is doing much the same thing, except the luma is being taken from the source clip after de-interlacing or IVTC, rather than a clip with AwarpSharp(Depth=5) applied. Depth=5 is pretty mild, so the difference wouldn't be huge.
    Images attached: 1.png, 2.png, 3.png

    Last edited by hello_hello; 15th Apr 2019 at 08:01.
  20. Originally Posted by LouieChuckyMerry View Post
    What's up with that? I let a single episode run overnight and MediaInfo shows 23.976 fps but the time of the episode seems to have about doubled and it does play in slow motion.
    You should have used
    QTGMC(FPSDivisor=2)
    AssumeFPS(24000,1001)

    AssumeFPS just changes the frame rate without changing the number of frames, so without FPSDivisor=2 you slowed a 50fps clip down to 23.976fps.
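    To make that concrete, a sketch of the difference (the file name is a placeholder):

    Code:
    Mpeg2Source("episode.d2v")   # 25fps interlaced PAL
    QTGMC(FPSDivisor=2)          # single-rate deinterlace: output stays 25fps
    AssumeFPS(24000,1001)        # retime to 23.976fps; frame count unchanged
    # Plain QTGMC() outputs double-rate 50fps. AssumeFPS(24000,1001) on that
    # keeps all 50 frames per original second, so the running time roughly
    # doubles (50/23.976 is about 2.09x) and everything plays in slow motion.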
  21. Originally Posted by hello_hello View Post
    Originally Posted by LouieChuckyMerry View Post
    After my last post (see the EDIT) I ran a test with:

    Code:
    ### Deinterlace-Match Fields-Decimate ###
    QTGMC()
    SRestore(23.976)
    and the rest of the script the same, and the results were very interesting; the horizontal artifacts--please, what are they called--were still there but the single bad frame was gone. Then I ran the same script but with the deinterlacing before the color conversion, and the horizontal artifacts were gone but the single bad frame was back. This I don't understand .
    SRestore normally looks for blended frames and tries to keep the non-blended ones, but for a field blended source the blending usually happens in a fairly consistent pattern.
    Your sample isn't field blended (apart from a couple of frames) so to reduce the frame rate SRestore needs to delete "some" frames, and probably for a non-blended source, duplicate frames would be removed in preference to non-duplicates, but there's no blending pattern as such, which is why I said you probably got lucky the first time, and there'd have to be some sort of a glitch in motion where the blended fields are, because there's no clean fields to use.
    Yep, more tests and AssumeFPS(24000,1001) certainly trumps SRestore(23.976). Thanks (and manono, too).


    Originally Posted by hello_hello View Post
    Originally Posted by LouieChuckyMerry View Post
    Why do you "repeat" the aWarpSharp call?
    Probably because jagabo told me to
    So, so true.


    Originally Posted by hello_hello View Post
    Because they're used inside MergeChroma, the first instance is only sharpening the luma (it's sharpening luma and chroma, but MergeChroma takes the chroma from the second instance), so by using only mild sharpening it's not doing horrible things to lines. The second instance can go to town on the chroma.
    Thank you for the clear explanation.


    Originally Posted by hello_hello View Post
    Originally Posted by LouieChuckyMerry View Post
    What's up with that? I let a single episode run overnight and MediaInfo shows 23.976 fps but the time of the episode seems to have about doubled and it does play in slow motion.
    You should have used
    QTGMC(FPSDivisor=2)
    AssumeFPS(24000,1001)

    AssumeFPS just changes the frame rate without changing the number of frames, so without FPSDivisor=2 you slowed a 50fps clip down to 23.976fps.
    Thanks for the reminder. I tinkered with QTGMC way back when, but that initial testing left me preferring SMDegrain and I've not used QTGMC since.
    Last edited by LouieChuckyMerry; 16th Apr 2019 at 21:41.
  22. Given the earlier kind advice from hello_hello and manono to focus on TFM().AssumeFPS(24000,1001), I've run many more tests, the results of which, at least to me, are really interesting. Tinkering with TFM's "Mode" and "PP" settings, I've found that "Mode=5" combined with "PP=3", "PP=4", or "PP=6" (the default) by far renders the best results for this source. The differences are subtle, but mostly can be seen in (I think it's called) combing around the mouths of characters as they speak. The biggest discovery, which I stumbled upon, was to set "UBSCO=False"; this made everything markedly better, especially scene changes. Ahhh, all of this is based on a longer sample clip, 45s as opposed to 10s, which encompasses the original sample but adds more action and pans. It's here: S1.E1-FuturamaSample[PAL][Extended]
  23. Of course if you want to use the NTSC audio, you've still got to drop it in and hope it syncs up. The chances of that aren't huge. Often they're edited slightly differently, sometimes it's an extra frame at the end of a scene here.... one less frame there.... and before you know it you'll be editing and re-encoding the audio.

    Have you tried mode=7?
    UBSCO=False does seem to improve the matches occasionally, but I think sometimes mode=7 does better than mode=5.
    It needs linear access, so you have to play the frames in order for it to work properly, but here's a couple of examples.

    The first three screenshots are: TFM(mode=5, pp=3, UBSCO=False)
    The next three screenshots are: TFM(mode=7, pp=3, UBSCO=False)

    Once again it's something of a compromise. Where mode=7 does find a clean frame, sometimes there's a little "glitch" in the motion instead of blending. You'd have to decide for yourself which you dislike the most. I could live with the blending where there's a lot of motion as it goes by quickly and doesn't stand out.
    Images attached: 1.jpg, 2.jpg, 3.jpg, 4.jpg, 5.jpg, 6.jpg

    Last edited by hello_hello; 17th Apr 2019 at 01:02.
  24. Originally Posted by hello_hello View Post
    Have you tried mode=7?
    UBSCO=False does seem to improve the matches occasionally, but I think sometimes mode=7 does better than mode=5.
    It needs linear access, so you have to play the frames in order for it to work properly, but here's a couple of examples.

    The first three screenshots are: TFM(mode=5, pp=3, UBSCO=False)
    The next three screenshots are: TFM(mode=7, pp=3, UBSCO=False)

    Once again it's something of a compromise. Where mode=7 does find a clean frame, sometimes there's a little "glitch" in the motion instead of blending. You'd have to decide for yourself which you dislike the most. I could live with the blending where there's a lot of motion as it goes by quickly and doesn't stand out.
    Given that the TIVTC/TFM AviSynth Wiki states (emphasis mine) "Mode 7 is not one of the normal modes and is specifically for material with blended fields that follows a specific pattern," I didn't try Mode=7. Silly me for not understanding what "specific pattern" means. Anyway, a couple of quick tests with Mode=7 are very promising--a single bad frame passes faster than several--so I'll run some longer tests with various PP's and compare them to my Mode=5 tests. Many thanks for the suggestion.


    Originally Posted by hello_hello View Post
    Of course if you want to use the NTSC audio, you've still got to drop it in and hope it syncs up. The chances of that aren't huge. Often they're edited slightly differently, sometimes it's an extra frame at the end of a scene here.... one less frame there.... and before you know it you'll be editing and re-encoding the audio.
    I'm planning to encode the entire first episode overnight then check the audio sync tomorrow. Any idea how to upmix 192 Kbps two-channel .ac3 audio to 768 Kbps six-channel DTS? Seriously, if it comes to it I'd be inclined to live with the PAL audio given how much better the video looks compared to the NTSC.
    Last edited by LouieChuckyMerry; 17th Apr 2019 at 07:55. Reason: Correction
  25. For your first sample, Mode=7 didn't help. Trying it on your second sample was just an experiment; I was surprised myself that it helped.

    Because surround sound sucks and blows at the same time, I've never tried to upmix audio.
    MeGUI has a couple of upmixing options in its audio encoder configuration. I think they use SoX to upmix. I don't know if there are alternative/better/free methods.
  26. Something you might consider, assuming the blending problem only occurs in a few places, is to create two versions of the video. I used my previous script below, except "B" is the Mode=7 clip and "C" is Mode=5.
    Having to give Mode=7 linear access makes it harder, but you could run a quick encode with Mode=7, with none of the other filtering, then check the blended spots for problem frames. If you see any bad frames you could make note of the frame numbers, then switch the script to "C" and check if Mode=5 did a better job. If so, you could use Trim() to replace those frames with the frames from the mode=5 clip, and then run the final encode with filtering.

    Anyone know if there's a function for checking for blended frames that can provide a list of frame numbers? That'd be better than having to check the Mode=7 version of the clip manually. Anyway, it's just a thought...

    Code:
    A = last
    B = A.TFM(mode=7, pp=3, UBSCO=False)
    C = A.TFM(mode=5, pp=3, UBSCO=False)
    
    B.Trim(0,0)
    # C.Trim(0,0)
    
    TTempSmooth()
    FastLineDarken(Thinning=0)
    MergeChroma(AwarpSharp(Depth=5), AwarpSharp(Depth=20))
    CropResize(960,720,8,2,-8,-2,InDAR=15.0/11.0,Resizer="Spline36")
    FastLineDarken(Thinning=0)
    MAA()
    DeHalo_alpha()
    CSMod(strength=150)
    GradFun3(thr=1.0, thrc=1.0)
    f3kdb()
    AssumeFPS(24000,1001)
    Last edited by hello_hello; 18th Apr 2019 at 06:24.


