VideoHelp Forum


  1. So 6 years ago I created this thread.
    I would like to re-convert the same movie in pristine quality (this time there would be no file size constraint). I re-read the thread in question, and although I now have a better understanding of the issues involved, I would like some clarifications before committing to specific settings for the encode.

    Two samples are attached (one from the opening, one cut around the 5:00 mark).

    There is a brief panning in the opening sequence (first sample, frames 265-451) which was formerly identified as being “phase shifted”. I was advised to use Avisynth's TFM() function. What is the actual difference between using setting “-1” (auto), “0” (bottom field first) or “1” (top field first) ? (I can see that “-1” and “1” have the same effect, while using “0” results in the displayed footage being seemingly shifted by one frame, although corresponding frames seem strictly identical, but frames 265-266 appear duplicated with “-1” or “1”, not with “0”, while the total frame count is the same at 141864, which is quite puzzling.) What are the rules which determine which setting should be used ?
    In this case, since there is (apparently) only a brief shot for which this processing is needed, does the function leave the rest of the footage unchanged, only applying it where it's actually needed ?
    When using TFM, I can see that there are remaining “combing” artifacts on the superimposed text from the credits : does it mean that the credits were added to the already phase shifted footage ? Or were the credits themselves likely created as interlaced video ? Generally speaking, how would such credits have been generated back then ? (The movie was broadcast in 2006, the DVD was authored that same year based on the folders' timestamps.)
    Also, using VirtualDub2, I can see that the image looks pixellated / aliased (if that's the correct word), while there is no such effect when reading the source VOBs with VLC Media Player for instance. Why is that ?

    Then, if the goal is to obtain the best possible quality with a size roughly around 2GB (total size of source VOBs being 4.85GB), what would be the wisest conversion settings ? Is x264 with -crf 20 still considered as the best compromise for the conversion of “SD” video ? I was formerly advised to use QTGMC(InputType=1) to “clean” / “stabilize” the image — doing some comparisons again, specifically on that problematic panning, I noticed that the QTGMC processed footage appear slightly sharper in most areas, but some parts appear quite fuzzy compared with the unprocessed footage, especially around the text and near the borders. Based on the characteristics of the source, would such a processing be beneficial at all ?

    Current script :
    Code:
    Vid = MPEG2Source("R:\DES FLEURS POUR ALGERNON (DD 2To Toshiba)\VIDEO_TS\VTS_01_1.d2v")
    Aud = NicAC3Source("R:\DES FLEURS POUR ALGERNON (DD 2To Toshiba)\VIDEO_TS\VTS_01_1 T80 2_0ch 384Kbps DELAY 0ms.ac3")
    VidFM = Vid.tfm(order=-1)
    Mix = AudioDub(VidFM, Aud)
    Return(Mix)
    Side question : I recently created English subtitles for that movie (it's a promise I had made almost 8 years ago to someone who mentioned that movie on a Spanish language forum dedicated to rare movies — nickname “Mantua”, post from 2010/03/09). I would like to share the movie itself with foreign audiences, considering that it was (as far as I know) never made available outside of France and probably Switzerland (where the movie was shot). Although it might be a sensitive issue on this forum, would it be possible to get (perhaps by private message) suggestions of current reliable methods for sharing rare movies, that would have good odds of staying online in the foreseeable future ?
    [Attached files: two video samples]
  2. Way too many words. If it or part of it is phase-shifted you put on TFM. I forget about setting an order unless something looks screwy. But, if it matters, it's easy enough to find out the field order by bobbing and having a look.

    You might test first with TFM(display=true) to see if anything is getting deinterlaced. You might try with TFM(PP=0) to prevent any deinterlacing.

    I neither read the whole thing nor did I download the samples. Personally I wouldn't include the audio with the video but bring it back in during the encoding or muxing.
  3. Originally Posted by abolibibelot View Post
    There is a brief panning in the opening sequence (first sample, frames 265-451) which was formerly identified as being “phase shifted”. I was advised to use Avisynth's TFM() function. What is the actual difference between using setting “-1” (auto), “0” (bottom field first) or “1” (top field first) ? (I can see that “-1” and “1” have the same effect, while using “0” results in the displayed footage being seemingly shifted by one frame, although corresponding frames seem strictly identical, but frames 265-266 appear duplicated with “-1” or “1”, not with “0”, while the total frame count is the same at 141864, which is quite puzzling.) What are the rules which determine which setting should be used ?
    The order parameter tells TFM what field order to assume. If you set it wrong TFM will get confused about the temporal order of fields and may match them incorrectly. I recommend leaving it at -1 (auto), and if AviSynth doesn't already know the field order, use AssumeTFF() or AssumeBFF() before calling TFM().
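    If AviSynth doesn't already carry the right field order flag, it can be set explicitly before field matching. A minimal sketch (filename hypothetical):

    Code:
    Mpeg2Source("movie.d2v")
    AssumeTFF()     # or AssumeBFF(), whichever matches the source
    TFM(order=-1)   # -1 = use the field order AviSynth already knows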

    The scene transition at frame 265 (before any filtering) has an orphaned field. TFM has special handling for scene transitions and orphaned fields. It seems to me that it picked the best option, a duplicate frame. At the end of that shot (frame 451) there is another orphaned field, though it's very hard to see since the camera is no longer panning.

    Originally Posted by abolibibelot View Post
    In this case, since there is (apparently) only a brief shot for which this processing is needed, does the function leave the rest of the footage unchanged, only applying it where it's actually needed ?
    Usually. There are some circumstances when TFM's post processor misidentifies thin horizontal lines as residual combing and applies its deinterlacer. That usually shows up as aliasing or moiré artifacts. In cases like that you can disable the post processor (pp=0 or 1) or change the combing threshold to eliminate that. But either of those options might sometimes let a truly combed frame through. If you're certain that one shot is the only one with phase shifting you can just TFM() that one shot. Another possible fix is to provide your own deinterlaced frame using clip2=X. X may be something like QTGMC().SelectEven/Odd() or nnedi3(dh=true).SelectEven/Odd(). The post processor will still kick in but the final result usually looks better (as those deinterlacers are better than TFM's built-in deinterlacer).
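    Restricting TFM() to that one shot could look like this (a sketch; frame numbers taken from the post above, filename hypothetical):

    Code:
    src   = Mpeg2Source("movie.d2v")
    fixed = src.Trim(265, 451).TFM(order=-1)
    src.Trim(0, 264) ++ fixed ++ src.Trim(452, 0)   # splice the fixed shot back in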

    Originally Posted by abolibibelot View Post
    When using TFM, I can see that there are remaining “combing” artifacts on the superimposed text from the credits : does it mean that the credits were added to the already phase shifted footage ? Or were the credits themselves likely created as interlaced video ?
    That usually means the credits were overlaid after the film was telecined. They could be interlaced or progressive (with a phase opposite that of the underlying video). In this case they fade in/out at 25i -- 50 different fields per second.

    Originally Posted by abolibibelot View Post
    Also, using VirtualDub2, I can see that the image looks pixellated / aliased (if that's the correct word), while there is no such effect when reading the source VOBs with VLC Media Player for instance. Why is that ?
    In this video the title fades are causing false comb detection in the post processor. You can raise cthresh to 11 or 12 to eliminate that. Setting cthresh too high will let residual combing through but I didn't see any problems in that one shot.
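    In script form (a sketch):

    Code:
    TFM(order=-1, cthresh=12)   # raised comb threshold; TFM's default cthresh is 9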

    Originally Posted by abolibibelot View Post
    Then, if the goal is to obtain the best possible quality with a size roughly around 2GB (total size of source VOBs being 4.85GB), what would be the wisest conversion settings ? Is x264 with -crf 20 still considered as the best compromise for the conversion of “SD” video ?
    I usually use 18. Small defects are enlarged when watching SD video on modern large screen TVs.

    Originally Posted by abolibibelot View Post
    I was formerly advised to use QTGMC(InputType=1) to “clean” / “stabilize” the image — doing some comparisons again, specifically on that problematic panning, I noticed that the QTGMC processed footage appear slightly sharper in most areas, but some parts appear quite fuzzy compared with the unprocessed footage, especially around the text and near the borders. Based on the characteristics of the source, would such a processing be beneficial at all ?
    Yes, motion estimation doesn't work well at the edges of the frame, and at the boundary between moving and non-moving parts of the frame (panning behind the titles, for example). That causes QTGMC to screw up in those areas. You'll have to decide for yourself which is preferable.
  4. @manono
    Way too many words.
    There are way too many words in the world, I don't think that I'm adding much these days. (What about that one ?)

    But, if it matters, it's easy enough to find out the field order by bobbing and having a look.
    I tried putting Bob() but it's not exactly obvious if a given field is a bottom one or a top one.

    You might test first with TFM(display=true) to see if anything is getting deinterlaced. You might try with TFM(PP=0) to prevent any deinterlacing.
    Indeed frame 302 gets deinterlaced, and appears badly blurred, PP=0 prevents that.
    When using display=true, what do the different parameters indicate, in particular "MI = X", "MIC = X", "match = c/p", and the various "MICS p/c/n/b/u" parameters displayed for frame 265 ?

    Personally I wouldn't include the audio with the video but bring it back in during the encoding or muxing.
    This time I was intending to do it that way, and leave the AC3 audio as-is (I took the old script and didn't modify that part). But if the audio gets reencoded, what difference does it make ? Shouldn't it be more reliable with AudioDub(), to ensure that the synchronization is preserved ?



    @jagabo
    The scene transition at frame 265 (before any filtering) has an orphaned field. TFM has special handling for scene transitions and orphaned fields. It seems to me that it picked the best option, a duplicate frame. At the end of that shot (frame 451) there is another orphaned field, though it's very hard to see since the camera is no longer panning.
    You detect orphaned fields by bobbing I suppose ? How do you ascertain that a field is a bottom one or a top one, and that a field is orphaned ?

    If you're certain that one shot is the only one with phase shifting you can just TFM() that one shot.
    Here that would mean trimming frames 265 to 451 ? Or is it safer to include a few adjacent frames ?
    Is there a way to automatically parse the whole video to check if there are other shots with the same defect ?

    Another possible fix is to provide your own deinterlaced frame using clip2=X. X may be something like QTGMC().SelectEven/Odd() or nnedi3(dh=true).SelectEven/Odd(). The post processor will still kick in but the final result usually looks better (as those deinterlacers are better than TFM's built-in deinterlacer).
    Didn't quite understand that part. How would this work in a script ? And wouldn't it considerably slow down the script, if the whole video has to be processed by QTGMC, only for a few processed frames to be actually used ?

    That usually means the credits were overlaid after the film was telecined. They could be interlaced or progressive (with a phase opposite that of the underlying video). In this case they fade in/out at 25i -- 50 different fields per second.
    Which means interlaced ?

    In this video the title fades are causing false comb detection in the post processor. You can raise cthresh to 11 or 12 to eliminate that. Setting cthresh too high will let residual combing through but I didn't see any problems in that one shot.
    This concerns frame 302 only from what I could see. Indeed using either pp=0 or cthresh=12 prevents the wrongful deinterlace.
    Strangely that frame appears normal in the encode I made originally — but QTGMC(InputType=1) was used then, and indeed if adding QTGMC in the script that frame appears sharp. So it works by constantly interpolating adjacent frames ?
  5. Originally Posted by abolibibelot View Post
    I tried putting Bob() but it's not exactly obvious if a given field is a bottom one or a top one.
    AviSynth assumes BFF. If it plays smoothly, then nothing has to be done. If it plays jerky then try with 'AssumeTFF' or by setting the field order in TFM. Most people don't like to use the Bob filter for testing as it makes the video jump up and down. I test with Yadif(Mode=1) as it's fast. I don't use it in my encodes, but just for testing.
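    A quick field order test might look like this (a sketch; filename hypothetical):

    Code:
    Mpeg2Source("movie.d2v")
    AssumeTFF()     # swap for AssumeBFF() and compare the two
    Yadif(Mode=1)   # double-rate deinterlace, one output frame per field
    # with the wrong field order, motion judders back and forth during pans;
    # with the right order the pan is smooth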
    When using display=true, what do the different parameters indicate, in particular "MI = X", "MIC = X", "match = c/p", and the various "MICS p/c/n/b/u" parameters displayed for frame 265 ?
    I suggested using it to see if anything is being deinterlaced. I use it for that and don't pay any attention to the other stuff. If you're more curious than I am, it's explained in the included TFM doc.
    Shouldn't it be more reliable with AudioDub(), to ensure that the synchronization is preserved ?
    No.
  6. Originally Posted by abolibibelot View Post
    The scene transition at frame 265 (before any filtering) has an orphaned field. TFM has special handling for scene transitions and orphaned fields. It seems to me that it picked the best option, a duplicate frame. At the end of that shot (frame 451) there is another orphaned field, though it's very hard to see since the camera is no longer panning.
    You detect orphaned fields by bobbing I suppose ?
    Yes. Bob(), SeparateFields(), or Yadif(mode=1).

    Originally Posted by abolibibelot View Post
    How do you ascertain that a field is a bottom one or a top one, and that a field is orphaned ?
    After bobbing a TFF video all the even fields (0, 2, 4...) are top fields, all the odd fields (1, 3, 5...) are bottom fields. It's the opposite for BFF video.
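    To step through individual fields and check their parity, something like this (a sketch; filename hypothetical):

    Code:
    Mpeg2Source("movie.d2v")
    AssumeTFF()
    SeparateFields()   # for TFF material: fields 0, 2, 4... are top, 1, 3, 5... are bottom
    # around a cut, a field that has no partner field from the same film frame
    # is an orphan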

    Originally Posted by abolibibelot View Post
    If you're certain that one shot is the only one with phase shifting you can just TFM() that one shot.
    Here that would mean trimming frames 265 to 451 ?
    Yes.

    Originally Posted by abolibibelot View Post
    Is there a way to automatically parse the whole video to check if there are other shots with the same defect ?
    You can use IsCombedTIVTC() (included in the TIVTC package) in a runtime test to print out a list of all combed frames.

    Code:
    Mpeg2Source("VTS_01_1a.d2v", CPU2="ooooxx", Info=3) 
    WriteFileIf(last, "CombInfo.txt", "IsCombedTIVTC(last, cthresh=9) == true", "current_frame", flush=true)
    Open that script in VirtualDub and select File -> Run Video Analysis Pass to quickly run through the entire video. A file called CombInfo.txt will contain the frame numbers of all frames TFM would detect as combed:

    Code:
    265
    266
    267
    ...
    435
    436
    437
    Note this list stops at 437 rather than 451 because the camera stops moving at the end of that shot. So TFM sees those frames as having no combing.

    Originally Posted by abolibibelot View Post
    Another possible fix is to provide your own deinterlaced frame using clip2=X. X may be something like QTGMC().SelectEven/Odd() or nnedi3(dh=true).SelectEven/Odd(). The post processor will still kick in but the final result usually looks better (as those deinterlacers are better than TMF's built in deinterlacers).
    Didn't quite understand that part. How would this work in a script ?
    TFM(clip2=QTGMC(FPSDivisor=2)) or TFM(clip2=nnedi3()).

    Originally Posted by abolibibelot View Post
    And wouldn't it considerably slow down the script, if the whole video has to be processed by QTGMC, only for a few processed frames to be actually used ?
    AviSynth is smart about processing. QTGMC will only be called when TFM's deinterlacer calls for it.

    Originally Posted by abolibibelot View Post
    That usually means the credits were overlaid after the film was telecined. They could be interlaced or progressive (with a phase opposite that of the underling video). In this case they fade in/out at 25i -- 50 different fields per second.
    Which means interlaced ?
    Yes. Only interlaced frames can contain a picture that changes 50 times a second on DVD.

    Originally Posted by abolibibelot View Post
    In this video the title fades are causing false comb detection in the post processor. You can raise cthresh to 11 or 12 to eliminate that. Setting cthresh too high will let residual combing through but I didn't see any problems in that one shot.
    This concerns frame 302 only from what I could see. Indeed using either pp=0 or cthresh=12 prevents the wrongful deinterlace.
    Strangely that frame appears normal in the encode I made originally — but QTGMC(InputType=1) was used then, and indeed if adding QTGMC in the script that frame appears sharp. So it works by constantly interpolating adjacent frames ?
    Yes, QTGMC's InputType=1 (or 2) analyzes the frame, and adjacent frames to clean up aliasing and flickering edges.
  7. You can use IsCombedTIVTC() (included in the TIVTC package) in a runtime test to print out a list of all combed frames.
    Did that, it detected several other frames / groups of frames.

    1303
    1304
    1306 => Caused by the title fade, during a tracking shot, should probably be left untouched.

    VTS_01_1 0h00m47s.demuxed.m2v

    34057 => This frame appears to be combed indeed, and is at a scene transition — how could that happen, and what is the best way to deal with it ? (frame 13 from the sample below)

    VTS_01_1 0h22m41s.demuxed.m2v

    40965
    40966
    40967
    40968
    40969
    40970 => Several frames appear to be combed during a fade-in, with also what seems to be a stark color shift (the sky is purple then blue for the rest of the shot) — how could that happen, and what is the best way to deal with it ? (frames 81-86 from the sample below)

    VTS_01_1 0h27m15s.demuxed.m2v

    78469 => This frame appears to be combed indeed, and is at a scene transition — how could that happen, and what is the best way to deal with it ? (frame 217 from the sample below)

    VTS_01_1 0h52m10s.demuxed.m2v

    129705
    ...
    129781 => This shot was slowed down, and appears to be stored with some kind of interlacing — what is the best way to deal with it ? (frames 81-157 from the sample below) Generally speaking, how is a slowed-down shot inserted in a 24FPS movie shot on film ?

    VTS_01_1 1h26m24s.demuxed.m2v

    139951
    ...
    141719 => End credits (starting from frame 330 in the sample below), here letting TFM do the deinterlacing seems to improve a lot, QTGMC doesn't seem to improve that much over TFM's deinterlacing, especially after the underlying video frame is frozen, although I didn't scrutinize that much.

    VTS_01_1 1h33m18s.demuxed.m2v


    AviSynth is smart about processing. QTGMC will only be called when TFM's deinterlacer calls for it.
    Even if for instance QTGMC is called in another line ?
    A = QTGMC(FPSDivisor=2)
    B = TFM(clip2=A)
    Generally speaking, does it only process what is necessary to generate a given frame, even if dozens of extra commands are present in the script ?
  8. Well, perhaps it'll be quicker to diagnose with screenshots...

    1303-1306
    Title fade, probably no processing required — right ?
    QTGMC(InputType=1) makes the combing effect more obvious.

    [Attachment 59372]


    34057 & 78469
    Single combed frame at a scene transition. Not sure what to do here.

    [Attachment 59373]

    [Attachment 59374]


    40965-40970
    Several combed frames at a scene fade-in. Same.

    [Attachment 59375]


    129705-129781
    This is the most troublesome. It's a slowed down shot, about 4 seconds, with a lot of motion, I'm not sure if it's best to leave it interlaced (smoother motion), or do the field matching (to remove the combing, but with a jerky motion, while the first and last frames have an orphaned field if I understood this correctly so they can't be fixed this way), or attempt some kind of interpolation.

    [Attachment 59376]


    139951-141719
    End credits, about 4 seconds over a moving picture, then over a frozen frame. Here TFM's deinterlacing works very well over the frozen frame; it's harder to say during the 4 seconds where there's still motion underneath, as it's a wide shot with snow falling, so it's bound to be quite fuzzy anyway. If using TFM with default settings, then QTGMC(InputType=1) doesn't seem to improve things; in fact it seems to be treating snowflakes as noise. If using TFM with PP=0, then QTGMC(InputType=1) doesn't by itself remove the combing effect in the titles (probably because it does its processing frame by frame and not field by field in that mode). QTGMC(FPSDivisor=2) does remove the combing effect but it doesn't look good for the 4 seconds with picture motion, and likewise over the frozen frame it smoothes out the snow as if it were noise, whereas TFM's deinterlacer only processes the titles.

    [Attachment 59377]
  9. Or not...
  10. Sorry, I didn't answer when I first saw post 7 because of all the samples and questions. But most of your ills will be taken care of with:

    Code:
    Mpeg2Source("whatever.d2v") 
    TFM(cthresh=11, clip2=nnedi3())
    vInverse() # blur away residual combing
    vInverse() occasionally mistakes thin horizontal lines for residual combing and blurs them too. You can reduce that problem with:

    Code:
    ###################################################
    #
    # build a mask of areas where there are 
    # alternating horizontal lines
    #
    ##################################################
    
    function CombMask(clip v, int "threshold")
    {
        threshold = default(threshold, 5)
    
        Subtract(v, v.blur(0.0, 1.0).Sharpen(0.0, 0.6))
        GeneralConvolution(0, "
            0  8  8  8  0
           -0 -8 -8 -8 -0 
            0  8  8  8  0
           -0 -8 -8 -8 -0
            0  8  8  8  0", chroma=false, alpha=false)
        mt_lut("x 125 - abs")
        mt_binarize(threshold)
        mt_inpand()
        mt_expand()
        mt_expand(chroma="-128")
    }
    
    ##################################################
    
    
    Mpeg2Source("VTS_01_1 0h27m15s.demuxed.d2v", CPU2="ooooxx", Info=3) 
    TFM(cthresh=11, clip2=nnedi3())
    Overlay(last, vInverse(), mask=CombMask(5).Blur(1.0)) # blur softens the edges of the mask
    The slow motion shot uses mostly 4 fields from the same film frame, sometimes 3 and sometimes 5. It pretty much averages out at half speed -- two frames per film frame. So the same TFM works for the shot. That shot also has blended chroma. I don't think there's much you can do about that.

    That leaves just the issue with the scrolling credits. If you want to keep the full smoothness you'll have to encode at 50 fps. If you don't mind them being less smooth (but no less so than the usual 24p film to 25p video PAL speedup) you can use the same TFM/vInverse code as the rest of the video.
  11. Sorry, I didn't answer when I first saw post 7 because of all the samples and questions.
    Sorry, I figured that it would be clearer with 1 question (or group of questions) and 1 sample per frame interval, although similar issues could have been grouped in a more streamlined way. These days I'm having trouble concentrating on even the most trivial tasks, and tend to get lost in the details before I can have a grasp of the big picture. Feels like I'm about to lose my worried mind, ahhh-yeah. é_è

    The slow motion shot uses mostly 4 fields from the same film frame, sometimes 3 and sometimes 5. It pretty much averages out at half speed -- two frames per film frame. So the same TFM works for the shot. That shot also has blended chroma. I don't think there's much you can do about that.
    Weird pattern indeed. When processing that shot with TFM() there are 3 (almost) identical frames in a row at the beginning. I tried using Morph() to interpolate intermediate frames in the whole shot, not sure what looks best or “least worse” here. (Also tried FrameSurgeon which is a more complex interpolation function but it doesn't work well here as there's too much motion, it produces egregious artifacts.) How would you rate the options between a) leaving it as-is (but I would suppose that any combing effect is unwanted in a progressive video), or b) processing with TFM which results in a rather jerky motion, or c) interpolating intermediate frames ?
    Regarding the blended chroma, I see what you mean (it's obvious after bobbing, especially on the women's shoes in the foreground). Does it affect all frames alike, or is there one “clean” frame in each pair ? In which case, would this also benefit from interpolating intermediate frames ?
    Morph seems to be a rather cumbersome function which can cause crashes when there are more than a few dozens of individual calls, but here it should be fine. Still, is there an alternative function that would have the same effect while being more suited to automatically process such a succession of duplicated frames with an irregular pattern ?

    For those two combed frames at scene transitions, since there's little motion, and since (in both instances) the luminosity is markedly different from the next frames anyway, remapping them to the next frame seems like the best option.
    Code:
    RemapFrames(mappings="34057 34058")
    RemapFrames(mappings="78469 78470")
    For the 40965-70 interval, as it's a totally fixed shot, I tried the following : remapping each frame to 40971, then adding a similar fade-in over 5 frames. It's seamless and seems to look better than the TFM processing with or without vInverse.
    Code:
    RemapFrames(mappings="[40965 40970] 40971")
    V = last
    V1 = V.Trim(0,40964)
    V2 = V.Trim(40965,0).FadeIn(5)
    V1 ++ V2
    But it increases the total frame count by 1 (“an additional color frame is added at the start/end, thus increasing the total frame count by one” from the dedicated page) so it's going to cause a slight desynchronization. Seems to work as intended with V2 = V.Trim(40966,0).

    Another (hopefully last) question : what is the most efficient way to remove a small sudden defect in the picture, like that white spot at frame 282 (top left) or 431 (bottom right) ? I've tried DeSpot() (among the “Film damage correction” filters), increased pwidth and pheight to 50 (“a spot can be no larger than pwidth x pheight”), didn't work.
    Frame interpolation works well as the motion is slow and steady, but it feels overkill to fix only a relatively small defect.
    [Attachment 59398]
    Last edited by abolibibelot; 10th Jun 2021 at 16:36.
  12. Originally Posted by abolibibelot View Post
    Frame interpolation works well as the motion is slow and steady, but it feels overkill to fix only a relatively small defect.
    [Attachment 59398]
    Whether small or not, if it works then I use it. However, if there's significant movement such that artifacts occur as a result and I can't isolate the 'fix' with some cropping, then I extract the frame, fix it in a photo editor, and then replace the 'bad' frame with the edited one.
  13. Use FadeIn0() instead of FadeIn() to get rid of that extra frame. RemoveDirtMC(limit=10) will get rid of that spot.
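    Put together with the fade fix discussed earlier, that could look like this (a sketch; frame numbers from the earlier posts, filename hypothetical):

    Code:
    Mpeg2Source("movie.d2v")
    RemapFrames(mappings="[40965 40970] 40971")
    V  = last
    V1 = V.Trim(0, 40964)
    V2 = V.Trim(40965, 0).FadeIn0(5)   # FadeIn0 appends no extra frame
    V1 ++ V2
    RemoveDirtMC(limit=10)             # removes small single-frame spots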

    Since the slow motion section is essentially 12.5 fps I thought I could remove the blended chroma using SRestore(frate=12.5), then duplicating the remaining frames:

    Code:
    Mpeg2Source("VTS_01_1 1h26m24s.demuxed.d2v", CPU2="ooooxx", Info=3) 
    
    TFM(clip2=nnedi3())
    vInverse()
    
    source = last
    cleaned = SRestore(frate=12.5).SelectEvery(1,0,0).Trim(1,0)
    StackHorizontal(source, cleaned)
    That gives you the TFM'd video with some blended chroma in the slowmo section (frames 81 to 157, inclusive) on the left, and the cleaned chroma frames on the right. If you examine the cleaned frames you'll see a pattern of exact duplicates from frame 81 to 156, with frame 157 being a copy of the original frame 158. That extra duplicate might be acceptable. But...

    SRestore() often takes several frames before it locks into a pattern. If you use ReplaceFramesSimple(source, cleaned, mappings="[81 157]") AviSynth's smart handling kicks in and only frames 81 to 157 are processed with SRestore(). Since SRestore has problems at the start of a clip you don't get the same clean result as the above script. You end up with four nearly identical frames at the start of the slowmo shot.
  14. @manono
    Whether small or not, if it works then I use it. However, if there's significant movement such that artifacts occur as a result and I can't isolate the 'fix' with some cropping, then I extract the frame, fix it in a photo editor, and then replace the 'bad' frame with the edited one.
    I have already done the latter for a personal video in case of very damaged frames for which no interpolation filter yielded a decent result. But in this particular case, with only a small defect on an otherwise clean frame, it feels a bit like using a hammer to remove a flea from a dog's paw... (Except that it happens to work better than said dog after said operation !)


    @jagabo
    Thanks for those suggestions, but once again I got stuck not very far from square one... After adding the SRestore script to the “plugins” directory I got the error message “I don't know what 'AvsPlusVersionNumber' means” (me neither). Then, after adding Zs_RF_Shared as recommended in post #883 on page 45 of the associated discussion (it is not even mentioned among the required plugins / filters on the dedicated page), I got another error message saying “mt_lutxy does not have a named argument "use_expr"”.
    Then I tried RemoveDirtMC and got the error message “There is no function named 'FluxSmoothT'”. Since it doesn't seem to have a dedicated information page, I figured I would find the required dependencies listed inside the script itself. There was no program association for the .avsi extension, so I thought it would make things easier to create one with Notepad2, but that changed the icon, which I didn't want, and I then spent about an hour trying to get the former icon back, which was possible in Windows XP but now requires some tricky registry editing or a third-party application... And then I had yet another BSOD as I was starting to type this reply... (At least at the next session I got the former icon back for .avsi files, oh well.)

    Even though I couldn't test the above yet : would it be possible to add a few extra frames for SRestore to “hit its stride”, so to speak (for instance “mappings="[76 157]"” if 5 extra frames are enough), and get the “clean result” from the full shot processing, then remap back the frames for which no processing was required (in this example frames 76-80) ? Or would this change nothing, if only the end result of the processing chain is taken into account when Avisynth renders each frame ?
    A more reliable solution would be to extract a lossless intermediate, and remap from this file instead.
    Last edited by abolibibelot; 13th Jun 2021 at 16:04.
  15. Originally Posted by abolibibelot View Post
    But in this particular case, with only a small defect on an otherwise clean frame, it feels a bit like using a hammer to remove a flea from a dog's paw... (Except that it happens to work better than said dog after said operation !)
    As jagabo mentioned, RemoveDirtMC should remove crap like that throughout the entire video. Unless those specks last for 2 or more consecutive frames.
  16. Originally Posted by abolibibelot View Post
    would it be possible to add a few extra frames for SRestore to “hit its stride”, so to speak (for instance “mappings="[76 157]"” if 5 extra frames are enough), and get the “clean result” from the full shot processing, then remap back the frames for which no processing was required (in this example frames 76-80) ? Or would this change nothing, if only the end result of the processing chain is taken into account when Avisynth renders each frame ?
    I tried lots of tricks like that. None of them worked.

    Originally Posted by abolibibelot View Post
    A more reliable solution would be to extract a lossless intermediate, and remap from this file instead.
    Yes.
  17. Originally Posted by abolibibelot View Post
    @manono
    Whether small or not, if it works then I use it. However, if there's significant movement such that artifacts occur as a result and I can't isolate the 'fix' with some cropping, then I extract the frame, fix it in a photo editor, and then replace the 'bad' frame with the edited one.
    I have already done the latter for a personal video in case of very damaged frames for which no interpolation filter yielded a decent result. But in this particular case, with only a small defect on an otherwise clean frame, it feels a bit like using a hammer to remove a flea from a dog's paw...
    The aforementioned RemoveDirtMC works pretty well. If you find it damages other parts of the video too much you can use ReplaceFramesSimple() to change only the frames with known spots.
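    For instance, restricting RemoveDirtMC to the two known spots could be sketched like this (frame numbers from the posts above, filename hypothetical):

    Code:
    src     = Mpeg2Source("movie.d2v").TFM(cthresh=11, clip2=nnedi3())
    cleaned = src.RemoveDirtMC(limit=10)
    ReplaceFramesSimple(src, cleaned, mappings="282 431")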

    Another possibility is ReplaceFramesMC (which can be found on these forums). It replaces a single frame (or several in a row) with a motion interpolated frame. For example, ReplaceFramesMC(282) will replace only frame 282. This function will only be called for the specified frames. It will not slow down processing of the rest of the video.

    Since that is a very simple panning shot you could replace that portion of the frame with a crop from the previous frame:

    Code:
    prev = Loop(2,0,0)
    box = prev.Crop(196,82,16,16)
    patch = Overlay(last, box, x=200, y=85)
    ReplaceFramesSimple(last, patch, mappings="282")
    prev is a clip with the frames delayed by one frame, for example frame 282 in prev is frame 281 in last.
    box is a 16x16 box cut out of prev
    patch is a video where box is overlaid with an offset (the amount of panning) onto last
    ReplaceFramesSimple is used to limit the use of this patch to only frame 282

    The code will only be used for frame 282. It won't slow down filtering of the rest of the video. But it's a PITA manually locating the problem frames, the affected area, etc. It won't work for frame 431 because the location of the spot and amount of panning are different.


