VideoHelp Forum

  1. I'm trying to encode an NTSC DVD as nicely as possible, but this one is proving hard. Here are two snippets:

    https://www.sendspace.com/file/rk76fz

    After using SeparateFields(), I noticed a pattern of BGBGR -- B = blended field, G = good field, R = repeated field. I tried QTGMC().srestore(23.976), but it doesn't give good results: sometimes it's perfect, but sometimes it just doesn't choose the right frames. I then tried QTGMC(preset="medium").srestore(23.976) and thought that modification had helped srestore select the right frames, but it only helped in some sections and made things worse in others. Same with "fast". Maybe there's another setting I should be tweaking?

    After reading https://forum.videohelp.com/threads/378461-Query-about-identifying-field-blending I tried this:

    SeparateFields()
    SelectEvery(5,2,4)
    Spline36Resize(720,480)
    AssumeFrameBased()

    But then I noticed that in some sequences I should instead be using SelectEvery(5,1,3), and surely there's something better than selecting the sections by hand!? Moreover, the result flickers up and down (I guess because I'm mixing upper and lower fields).

    Help appreciated!

    EDIT: I'm now trying AnimeIVTC(mode=2) and it seems promising, though I have to run more tests... It's a bit confusing, though; I thought this mode of AnimeIVTC was essentially srestore, but I guess not?

    EDIT2: I'm pretty convinced now that AnimeIVTC(mode=2,bbob=4) does a pretty good job, I haven't spotted any of the problems described above using this. If somebody has another suggestion or some explanation of why this works well, and the QTGMC+srestore option above not so much, I'm all ears!
    Last edited by bruno321; 19th Oct 2020 at 10:29.
  2. In a sequence like BGBGR you would rather select the G with
    Code:
    selectevery(5,1,3)
    because the frame counting starts at zero, not one. It depends on the start frame of the cycle, though.
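    A minimal sketch of that zero-based selection, assuming the BGBGR cycle starts at field 0 of the clip:

    ```
    SeparateFields()         # B G B G R ... -> field indices 0 1 2 3 4 per cycle
    SelectEvery(5, 1, 3)     # keep the two Good fields (indices 1 and 3)
    Spline36Resize(720, 480) # stretch each field back to full frame height
    AssumeFrameBased()
    # note: fields 1 and 3 have opposite parity, which is what causes the
    # up/down flicker unless the vertical offset is compensated for
    ```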
    Last edited by Sharc; 19th Oct 2020 at 18:01.
  3. I only saw a few frames near the start of 1.demuxed with mild blending after QTGMC().SRestore(23.976). But that's quite common, as it can take ~50 frames for SRestore() to "lock in" to the pattern. When I appended 1.demuxed to 2.demuxed, almost all of the blending went away. Are you seeing much worse blending?

    Code:
    v1 = Mpeg2Source("1.demuxed.d2v")
    v2 = Mpeg2Source("2.demuxed.d2v")
    v2+v1
    
    QTGMC()
    SRestore(frate=23.976)
  4. @Sharc: Thanks, yeah, I noticed that later, silly me. But anyway, (5,1,3) was sometimes working well and sometimes clearly not selecting the good ones...

    @jagabo: I guess this is something you'd need the whole film to notice, and not just a snippet. But I had an old xvid encode made by a good encoder (dunno how it was done, but it was from the same source), which had no blending whatsoever, so it was easy to make comparisons, and in some cases QTGMC().srestore(23.976) clearly wasn't choosing the right ones. It was affecting playback.

    But AnimeIVTC(mode=2,bbob=4) really worked wonders. For the purpose of informing future passers-by, I asked at doom9 what the difference is, and I was told: it uses srestore with ", dclip=i.bob(-0.2,0.6).reduceflicker(strength=1)". I guess that modification was important enough that it did a better recognition job than vanilla srestore, though I can't say I really understand what's going on. I understand that dclip is about choosing a better clip to do the recognition from, but I don't know what the rest of the code really does or what the rationale behind it is.
  5. okay so using:
    Code:
    QTGMC()
    srestore(frate=23.976,omode=6,dclip=last.ReduceFlicker(strength=1))
    should work too...
  6. Is the first sample telecined? It seems to have a 3:2 pattern of combed and clean frames. I couldn't work out how to IVTC and remove the blending though.

    If you want to try something that might retain the film look more than QTGMC, maybe try TDeint. You can use the edeint argument to repair combing with QTGMC if you want to.

    The FixBlend function seems to work well for me (see my signature). The advantage over SRestore is that it won't miss blending at the beginning (once you've set the pattern for blending removal), while the disadvantage is that it's not automatic: if the blending pattern changes it'll stop working, and you have to configure it manually. Much of the time the blending pattern doesn't change, though.

    This seems to get all the blending for the first sample:

    Code:
    GreyScale()    # kill the rainbows
    DeintClip=QTGMC()    # if you want QTGMC to repair any combing for TDeint
    TDeint(mode=1, tryWeave=true, MI=50, metric=1, edeint=DeintClip)    # or QTGMC
    FixblendX(1, 24.0/1.001, 10) # or SRestore(24.0/1.001)
    AssumeFPS(24000,1001)
    The same works for the second sample, only FixBlendX needs to be adjusted because the blending pattern is different relative to the first frame.

    FixBlendX(8, 24.0/1.001, 10)
    Last edited by hello_hello; 20th Oct 2020 at 23:32.
  7. Thanks for looking at it. I tried it in the whole film and it works generally OK but lets some blends through that the AnimeIVTC code above didn't. Comparison:

    Your code:
    [Attachment 55580 - screenshot]

    AnimeIVTC(mode=2,bbob=4) (+other unimportant for this matter aesthetic filters)
    [Attachment 55581 - screenshot]


    Using the variation with FixBlendX(8, 24.0/1.001, 10), the blend above gets fixed, but others pass through. I guess one could go down this road identifying where the pattern changes. But is there a way to do that other than manually? (I understand this discussion is academic since AnimeIVTC gets this thing right, but I'm still interested if you are, for the purposes of learning something for other future cases.)

    Could you elaborate (or link to a post where you elaborate) on what FixBlendX does? The manual says

    FixBlendX can be used to specify the frames to keep, the output frame rate,
    and the amount of over-sampling. ie FixBlendX(3, 24.0/1.001, 10)
    But I found it a bit too concise. Frames to keep from what? What's "the amount of over-sampling"?
    Last edited by bruno321; 21st Oct 2020 at 01:48.
  8. If there's an occasional blended frame popping up, adjusting the frames being extracted will often fix it, assuming the pattern doesn't change. FixBlendX(9, 24.0/1.001, 10) or FixBlendX(10, 24.0/1.001, 10) might solve it. Your sample was short, so I could specify the frame to be extracted as 8, 9 or 10 without seeing blending. Blending 23.976 film into NTSC isn't that common, as there are better ways to do it, so the over-sampling might need adjusting too. It's often trial and error.

    There's a link to a doom9 thread at the top of the script. That's where I borrowed the idea from and the explanation there might help. The functions are just an easier way to do what's shown in the opening post.

    Basically... when a video is field blended to change the frame rate, the top fields of a frame usually have blending at a time when the bottom ones don't, and likewise the bottom fields have blending when the top ones don't.

    Bob de-interlacing turns each field into a full height frame, and for progressive video that means every frame is repeated, but because only one of the fields from each frame has blending, at any point in time there should be a new bobbed frame that's not blended.

    The FixBlend functions then create a bunch more duplicate frames (that's what the over-sampling refers to) and from there pick out a frame at regular intervals that has no blending, giving you the original frame rate. The further apart the blended frame rate and the original frame rate are, the less "oversampling" is required. For converting field blended NTSC (bobbed to 59.94fps) back to 25fps PAL, 4x oversampling seems to be enough. For field blended PAL bobbed to 50fps, the dedicated functions use 10x oversampling to de-blend to 23.976fps or 29.97fps, but I'm not sure I've de-blended the latter. The FixBlendNTSC function is the equivalent of FixBlendX(Number, 25.0, 4), where "Number" is the choice of frame to extract, between 1 and 4, so it's FixBlendNTSC(Number) and the frame rate and oversampling are fixed at 25.0 and 4 respectively.
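    A minimal sketch of that mechanism for this thread's case (23.976fps film blended into NTSC, 10x oversampling). This is just my reading of the description above, not the actual FixBlendX internals:

    ```
    Bob()                  # each field -> full frame, 29.97i becomes 59.94p
    ChangeFPS(599.4)       # 10x over-sampling: every bobbed frame repeated 10 times
    SelectEvery(25, 8)     # 599.4 / 23.976 = 25 frames per cycle; the "8" is the
                           # phase to tune, like the first argument of
                           # FixBlendX(8, 24.0/1.001, 10)
    AssumeFPS(24000, 1001) # restore the exact film rate
    ```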

    I'm pretty sure SRestore works on the same principle. I've compared SRestore(25) to FixBlendNTSC, and when the latter is configured correctly, they both seem to return the same frames, but of course FixBlend is dumb and SRestore is automatic, so if the pattern changes, you're generally better off with SRestore unless you want to spend a lot of time on it. I've found it often doesn't change, but as you've discovered, that's not always the case.

    I originally created FixBlend when working on some DVDs that were a combination of video and field blended film. I divided them up into video and film to run SRestore on the film sections, but it was missing the blending for the first few seconds of each section, so my OCD kicked in and I painstakingly deblended them manually.
    Last edited by hello_hello; 29th Oct 2020 at 06:37.
  9. FWIW I think a lot of the problem with SRestore() is all the noise and DCT blocking. This seems to work pretty well with the provided clips:

    Code:
    function Deblock_QED_i ( clip clp, int "quant1", int "quant2", int "aOff1", int "bOff1", int "aOff2", int "bOff2", int "uv" )
    {
        quant1 = default( quant1, 24 ) # Strength of block edge deblocking
        quant2 = default( quant2, 26 ) # Strength of block internal deblocking
    
        aOff1 = default( aOff1, 1 ) # halfway "sensitivity" and halfway a strength modifier for borders
        aOff2 = default( aOff2, 1 ) # halfway "sensitivity" and halfway a strength modifier for block interiors
        bOff1 = default( bOff1, 2 ) # "sensitivity to detect blocking" for borders
        bOff2 = default( bOff2, 2 ) # "sensitivity to detect blocking" for block interiors
    
        uv    = default( uv, 3 )    # u=3 -> use proposed method for chroma deblocking
                                    # u=2 -> no chroma deblocking at all (fastest method)
                                    # u=1|-1 -> directly use chroma debl. from the normal|strong deblock()
    
        last=clp
        par=getparity()
        SeparateFields().PointResize(width,height)
        Deblock_QED(last, quant1, quant2, aOff1, aOff2, bOff1, bOff2, uv)
        AssumeFrameBased()
        SeparateFields()
        Merge(SelectEven(),SelectOdd())
        par ? AssumeTFF() : AssumeBFF()
        Weave() 
    }
    
    
    v1 = Mpeg2Source("1.demuxed.d2v", Info=3)
    v2 = Mpeg2Source("2.demuxed.d2v", Info=3)
    
    v1+v2
    GreyScale()
    Deblock_QED_i(quant1=35, quant2=30)
    QTGMC(preset="fast")
    SRestore(frate=23.976, dclip=Blur(1.4))
  10. Nice. I tried it on the full film and looked around for blends but didn't find any. Will keep this in mind for future use.
  11. That source sorely needs the deblocking anyway.
  12. How do I recognize when I should use this deblocking code? When I have this kind of combed material that also has some vertical lines that make "blocks"?
  13. DCT blocking is harder to see while the video is interlaced. I usually apply a simple Bob() and use a screen magnifier (point resize) to see it.
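    For example (a minimal sketch; the 4x zoom factor is arbitrary):

    ```
    Bob()                          # deinterlace so combing doesn't hide the blocks
    PointResize(width*4, height*4) # nearest-neighbour zoom keeps block edges hard
    ```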