VideoHelp Forum
  1. So I have several Avisynth scripts to compress as lossless intermediates. They display fine in AVSPMod; one of them made VirtualDub2 crash, probably because of too many “Morph” commands, so I split it in two parts, and now all of them load correctly. But for the larger ones (11-12 min each), the compression fails early on with an error message saying:
    Code:
    Avisynth read error:
    GetFrameBuffer: Returned a VFB with a 0 data pointer!
    size=8163391, max=1073741824, used=210497082
    I think we have run out of memory folks!
    What's going on, and what should I do?

    Side question: if I compress with Lagarith and specify “RGB” in “Pixel format”, it compresses as YUV anyway. Why is that? MagicYUV and UtVideo seem to honor that setting, and generate files about twice as large. If I convert to RGB in Avisynth with ConvertToRGB("Rec709"), Lagarith compresses as RGB, and still generates the smallest files of the three (albeit larger than in YUV, as it should be – in a test with a short video, I got 209MB in Lagarith RGB vs. 122MB in Lagarith YUV, and 239MB for MagicYUV RGB, 293MB for UtVideo RGB), so it seems to be the most efficient. So should I use the ConvertToRGB command for all scripts, or am I missing something?
    (I want RGB because for some reason YUV lossless files are displayed with the wrong color matrix in the NLE software I use.)

    One of the scripts that fail:
    Code:
    LoadPlugin("C:\Logiciels autonomes\MeGUI\tools\lsmash\LSMASHSource.dll")
    LWLibavVideoSource("20151224_132902.m2ts", threads=1).Trim(1070,21350)
    
    mdata=DePanEstimate(range=5,trust=4.0,dxmax=-1,dymax=-1,stab=1.00)
    DePanStabilize(data=mdata,cutoff=2.0,damping=10,initzoom=1.00,dxmax=6,dymax=6,method=1,mirror=15,prev=0,next=0,blur=30,info=false)
    
    FrameSurgeon(cmd="20151224_132902 A FrameSurgeon.txt", show=false)
    
    Morph(123,125)
    Morph(135,137)
    Morph(142,144)
    Morph(977,979)
    Morph(1206,1208)
    Morph(1219,1221)
    Morph(1237,1239)
    Morph(1245,1247)
    Morph(1255,1257)
    Morph(1418,1421)
    Morph(2183,2185)
    Morph(2188,2190)
    Morph(2197,2199)
    Morph(12865,12868)
    Morph(15723,15725)
    
    ConvertToRGB("Rec709")
    The FrameSurgeon.txt file is a command file which contains hundreds of frame interpolation commands. According to its author it's very stable and unlikely to choke even with that many calls. The Morph function is much more sensitive and causes memory issues when called more than a few dozen times in the same script. I only used it for a few particularly problematic frames for which FrameSurgeon would produce egregious artifacts (typically in cases of rapid movement of a large enough object).
  2. You could enable LAA (Large Address Aware) and therefore make 32-bit VDub use up to 4GB of memory on a 64-bit OS. Run "auxsetup.exe" in the 'extra' directory and select 'Enable LAA'.

    Edit: Alternatively, use 64 bit VDub and Avisynth+.
    Last edited by Groucho; 18th Dec 2018 at 07:45.
  3. Originally Posted by Groucho View Post

    Edit: Alternatively, use 64 bit VDub and Avisynth+.
    Best alternative IMO.


    Another approach is to split the script into smaller sections for processing: include "x" number of frames per script, encode each part, then append the rendered results. Maybe batch encode the scripts.
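    For example, something like this (untested sketch; the part file names are hypothetical, and each part script would Trim() its share of frames before encoding):
    Code:
    # encode each half to its own lossless intermediate first,
    # then join the rendered parts with an aligned splice
    v1 = AviSource("part1.avi")  # first half, already rendered
    v2 = AviSource("part2.avi")  # second half
    v1 ++ v2                     # ++ appends clips of identical format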


    Originally Posted by abolibibelot View Post

    If I convert to RGB in Avisynth with ConvertToRGB("Rec709"), Lagarith compresses as RGB, and still generates the smallest files of the three (albeit larger than in YUV, as it should be – in a test with a short video, I got 209MB in Lagarith RGB vs. 122MB in Lagarith YUV, and 239MB for MagicYUV RGB, 293MB for UtVideo RGB), so it seems to be the most efficient.
    Lagarith is more efficient than those other two, compression-wise. It's also slower for encoding/decoding.

    You can get smaller filesizes with ffv1 in long-GOP mode, or x264 in lossless long-GOP mode. But they have less support in other applications.




    So should I use the ConvertToRGB command for all scripts, or am I missing something?
    Yes, if you are a control freak, because you can control the other parameters: the matrix, the conversion parameters (e.g. interlaced vs. progressive), the chroma resampler used (e.g. maybe you want spline36 instead of bilinear), etc...

    If you let something else do it, it might use non-ideal parameters. Codecs that make that choice usually choose based on speed, but something that converts fast often has other trade-offs.
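    For example, in avisynth you can spell everything out instead of letting the codec guess (sketch; the parameter values are just examples to taste):
    Code:
    # explicit RGB conversion: you choose the matrix, the chroma resampler,
    # and whether the footage is treated as interlaced
    ConvertToRGB(matrix="Rec709", chromaresample="spline36", interlaced=false)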
  4. You could enable LAA and therefore make 32 bit VDub use up to 4GB of memory on a 64 bit OS. Run "auxsetup.exe" in the 'extra' directory and select 'Enable LAA'.
    Are there drawbacks to this approach? (Since the alternative is considered best by “poisondeathray”.)

    Edit: Alternatively, use 64 bit VDub and Avisynth+.
    Best alternative IMO .
    I do have Avisynth+ installed, but:
    – Will all the required filters / functions work in 64-bit? I tried loading one of the scripts in VD2 x64 before, but it failed because it couldn't load LSMASHSource.dll, so I didn't push further. So first I would need the 64-bit version of LSMASHSource. Then I don't remember what the required plugins for Morph and FrameSurgeon were (I configured them months ago). Do all common plugins have a stable 64-bit version now? As for DePan, apparently it came with MVTools; there's an x64 version included, which I already placed in the plugins64 directory; I don't know if anything else is needed. It's so complicated to get this working with one configuration that I always dread the prospect of having to start all over again!
    – Will the scripts then work in AVSPMod? (Which seems to run in 32-bit.)

    You can get smaller filesizes with ffv1 in long gop mode, or x264 in lossless long gop mode . But they have less support in other applications .
    And they can't output RGB, right?
    Anyway, Lagarith seems to be the best compromise here.
  5. Originally Posted by abolibibelot View Post
    – Will all the required filters / functions work in 64b ? I tried loading one of the scripts in VD2 x64 before, but it failed because it couldn't load LSMASHSource.dll, so I didn't insist. So first I would need the 64b version of LSMASHSource.
    Yes, and there is a 64bit version

    Then I don't remember what were the required plugins for Morph and FrameSurgeon (configured them months ago). Do all common plugins have a stable 64b version now ? As for DePan, apparently it came with MVTools, there's a x64 version included, which I already placed in the plugins64 directory; I don't know if anything else is needed. It's so complicated to get this working with one configuration, that I always dread the prospect of having to start all over again !
    Most common plugins have a 64bit version now.

    But you mentioned HDRAGC in another thread. That one does not yet, but someone was working on it.

    I don't know about those specific StainlessS functions, but he usually builds x64 versions now as well.

    – Will the scripts then work in AVSPMod ? (Which seems to work in 32b.)
    The 64 bit version of avspmod, yes


    You can get smaller filesizes with ffv1 in long gop mode, or x264 in lossless long gop mode . But they have less support in other applications .
    And they can't output RGB, right ?
    Anyway, Lagarith seems to be the best compromise here.
    Yes, they can output RGB (internally it's stored in a different format, but the output can be RGB, and it is lossless).

    I agree, Lagarith is probably the best compromise here, when you need to use it in some NLE for RGB.




    Or do what people used to do before Avisynth+ or x64: just split it up. Divide and conquer.
  6. Most common plugins have a 64bit version now.

    But you mentioned HDRAGC in another thread. That does not yet, but someone was working on it
    That's for a distinct part of the same movie, but yes! If I do tweak that other part again (not sure, because I'd like this to be finished by the end of the week; it's intended as a gift for family members, I want them to receive it on December 24th, and there's still a lot of ground to cover...), it should work well in 32 bits, as it did months ago when I rendered the processed files. It makes sense that color correction filters are less resource-hungry than stabilization + frame interpolation filters.
    Interesting anyway, I thought that this filter had been abandoned for years (12 years to be precise). Is there an active thread where that someone talks about it? Are there improvements planned as well? An issue I've had with it is that it seems to completely ignore the values outside of the normal video range (and in the case of those recordings there's a lot outside of that range), so another filter has to be used beforehand to bring the blacks to 16 and the highlights to 235, but then it seems to reduce the contrast...


    I don't know about those specific StainlessS' functions, but he usually builds x64 versions now as well
    So, FrameSurgeon: “Requires Either AVS+ or GScript[(c) Gavino], MvTools[(c) Manao], RT_Stats, FrameSel, ClipClop & Prune Plugins [(c) StainlessS].”
    And from https://www.mediafire.com/folder/hb26mthbjz7z6/StainlessS : there are x64 versions of RT_Stats and FrameSel, but not of ClipClop and Prune; the source code is included, so I suppose that I could build an x64 version myself, but right now I don't know how to do that, and learning how would make me wander even further from the already excruciatingly painstaking process I'm trying to bring to completion... Reminds me of something I read on a forum years ago which expressed how I feel about that kind of thing so well that I added it to my collection of quotes:
    “I guess you could say I'm rather lazy in that I have to be provoked into learning that kind of detail about a subject. There are so many pieces of the grand puzzle to know that I, most of the time, just shoot from the hip hoping to knock a few things loose and clear the path. There never seems to be enough time to learn every tangent in an array of possibilities while trying to keep in mind that these secondary and tertiary 'projects' are leading you further away from the simple task you just wanted to be done with. Many times, when it looks like the target is going to require a sniper rifle instead of my shotgun, instead of spending the time and resources procuring 50cal long barrel, mounting a scope, sighting it as I work on my breathing and windage calculations for a year or so, I look for a trained sharpshooter instead. When fishing the knowledge pool I never expect anything less than to be humbled. ...but I CAN cook a really mean ratatouille MoFo! Thanks Man.”
    “Klozov” 20091007
    (As for myself, I can't even cook a ratatouille... I wouldn't even know where to begin!)
  7. You could enable LAA and therefore make 32 bit VDub use up to 4GB of memory on a 64 bit OS. Run "auxsetup.exe" in the 'extra' directory and select 'Enable LAA'.
    I don't see this option...
    [Attachment: auxsetup.exe -- no LAA.png]

    EDIT: And a Google search for « virtualdub "enable LAA" » returns a measly two hits...
    EDIT: What's strange is that the VirtualDub2 process uses less than 2GB when it fails, according to ProcessExplorer: 1314064KB once the “A” script is loaded, and it fails when that reaches about 1750000KB (strangely, Windows Task Manager reports about 1GB less).
    EDIT: After a few trials I managed to render the first of the problematic scripts. Memory usage maxed out at about 1730000KB according to ProcessExplorer, and about 1236000KB in Task Manager (slightly increasing while the value in ProcessExplorer stalled). Why this discrepancy?
    Only two more to go!
    Last edited by abolibibelot; 18th Dec 2018 at 16:54.
  8. Originally Posted by abolibibelot View Post
    Is there an active thread where that someone talks about it ? Are there improvements planned as well ? An issue I've had with it is that it seems to completely ignore the values outside of the normal video range (and in the case of those recordings there's a lot outside of that range), so another filter has to be used before to bring the blacks to 16 and the highlights to 235, but then it seems to reduce the contrast...
    Just the original thread at doom9. Someone mentioned looking at the source and building a 64bit version. It would make sense to make requests there. I know a few people nudged and bumped the thread recently (because I'm one of them)

    You can definitely get better results doing it in other programs, but it's still a useful filter and one of the semi-frequently used ones still missing from the x64 plugin list. I would also like to see it extended to 10bit, since 10bit is increasingly common in consumer space right now.

    I don't know about those specific StainlessS' functions, but he usually builds x64 versions now as well
    So, FrameSurgeon : “Requires Either AVS+ or GScript[(c) Gavino], MvTools[(c) Manao], RT_Stats, FrameSel, ClipClop & Prune Plugins [(c) StainlessS].”
    And from https://www.mediafire.com/folder/hb26mthbjz7z6/StainlessS : there are x64 versions of RT_Stats and FrameSel, but not ClipClop and Prune ; the source code is included, so I suppose that I could build a x64 version myself, but right now I don't know how to do that, and to learn how to do it would make me wander even further from the already excruciatingly painstaking process I'm trying to put to completion...
    StainlessS is probably already set up for building stuff (especially his own filters/plugins) and could probably do it quickly. Since his newer stuff includes x64 versions, it would make sense to ask him.

    If you're just trying to get stuff done by the deadline, I would just carry on with the x86 workflow you have going, and divide it up a bit to beat the memory issues.

    I know you posted about interpolation errors - that's actually normal and expected. Some sets of settings might give you slightly better results on some frames, and slightly worse on others. That's the nature of interpolation; e.g. blksize=16 might be OK for some, but 8 might be better for others. If you went through iterations with something like avsoptimizer, you might be able to calculate a best solve for a section, optimizing dozens of parameters. But in the time it takes you to do that, there are ways to fix things with other programs too, with masking / compositing / motion tracking. There are user-guided workflows that take track points and edge splines to guide the interpolation, so you minimize the blobby edge-morphing artifacts. That's one of the main complaints and problems with interpolation - the incomplete object separation by the algorithms used, so you end up with morphing, blobby edges. With other methods you're practically guaranteed to get better results, but it depends on how much time investment you can put in. And some types of content are tough to interpolate no matter what.

    In the end, I'm sure your family will be happy, because it's the thought that counts. Spend more time with them, instead of in front of a computer
  9. Originally Posted by abolibibelot View Post
    You could enable LAA and therefore make 32 bit VDub use up to 4GB of memory on a 64 bit OS. Run "auxsetup.exe" in the 'extra' directory and select 'Enable LAA'.
    Are there drawbacks to this approach ?
    No.
  10. Originally Posted by abolibibelot View Post
    You could enable LAA and therefore make 32 bit VDub use up to 4GB of memory on a 64 bit OS. Run "auxsetup.exe" in the 'extra' directory and select 'Enable LAA'.
    I don't see this option...
    Here's the dialog:
    [Attachment: Image1.png]
    Seems you are using an old VDub, not VDub2.
  11. A radical solution for you might be switching to VapourSynth, a 64-bit threaded beast. It just might work: load your 32-bit Avisynth script into 64-bit VapourSynth using avsproxy.dll. Download from here or here (it's a dll and exe file; put it into VapourSynth's plugins64 folder). You can get rid of these memory issues. Then you just use 64-bit VirtualDub2 to load it in.

    Point is, forget about 32-bit: install 64-bit Python, 64-bit VapourSynth, and get 64-bit VirtualDub2.

    Vapoursynth script:
    Code:
    import vapoursynth as vs
    
    file = r'20151224_132902.m2ts'  #or full path
    clip = vs.core.lsmas.LWLibavSource(file)  # LWLibavSource for .m2ts; LibavSMASHSource is for MP4/MOV only
    clip = clip[1070:21351]  # same frames as Trim(1070,21350); a Python slice end is exclusive
    
    clip = vs.core.avsw.Eval(
        'mdata=DePanEstimate(range=5,trust=4.0,dxmax=-1,dymax=-1,stab=1.00)\n'
        'DePanStabilize(data=mdata,cutoff=2.0,damping=10,initzoom=1.00,dxmax=6,dymax=6,method=1,mirror=15,prev=0,next=0,blur=30,info=false)\n'
        'FrameSurgeon(cmd="20151224_132902 A FrameSurgeon.txt", show=false)\n'
        'Morph(123,125)\n'
        'Morph(135,137)\n'
        'Morph(142,144)\n'
        'Morph(977,979)\n'
        'Morph(1206,1208)\n'
        'Morph(1219,1221)\n'
        'Morph(1237,1239)\n'
        'Morph(1245,1247)\n'
        'Morph(1255,1257)\n'
        'Morph(1418,1421)\n'
        'Morph(2183,2185)\n'
        'Morph(2188,2190)\n'
        'Morph(2197,2199)\n'
        'Morph(12865,12868)\n'
        'Morph(15723,15725)',  # the \n endings give each avisynth statement its own line; comma after the last line only
        clips=[clip], clip_names=["last"])
        
    clip = vs.core.resize.Bicubic(clip, matrix_in_s = '709', format = vs.RGB24)
    clip.set_output()
    It works for me; I haven't tested exactly your avisynth script, but different scripts for 32-bit Avisynth that I have. You might get lucky.
    Last edited by _Al_; 18th Dec 2018 at 22:01.
  12. @Groucho
    Seems you are using an old VDub, not VDub2.
    I was using VirtualDub2_41867; with the newer VirtualDub2_43073 I get this dialog indeed. Applied the patch, and it seems to have done the trick: now I can load the whole “B” script which failed to load before; it's compressing right now, apparently without a hiccup. Memory usage is at 3340840KB, no wonder it was starved... And 2498284KB for the “A” script.


    @_Al_
    Radical solution for you might be switching to Vapoursynth 64 threaded beast, it just might work, loading your avisynth 32bit script into Vapoursynth64 using avsproxy.dll. Download from here or here(its dll and exe file, put it into Vapoursynths Plugin64 folder) . You can get rid of these memory issues. Then you just use VirtualDub2 64bit to load it in.

    point is, forget about 32bit, install Python64bit, Vapoursynth64bit and get VirtualDub2 64bit
    Thanks for this alternative solution, I'll surely look into that (and VapourSynth in general) later on. Right now, fingers crossed, the LAA workaround seems to be enough to get this done.
    ...
    But the result of the stabilization by DePanStabilize is quite ugly when watched in motion (the borders seem “liquid” in places, and there are weird artifacts, like a small shiny object – a literal nail in the coffin, actually – which “dances” every now and then; I tried different parameters, it just changes which frames are affected), so I'll have to start all over again, and once again I'm at a loss... é_è I think I'll have to use Deshaker after all... I must be in Hell already, that's the only explanation...
    Is there really no way to run Deshaker in YUV? Will there be a significant quality loss if I have to do a triple YUV<=>RGB conversion? The alternative would be to run the interpolation filters without stabilization, then convert to RGB, then run Deshaker; but then I would have to review the whole footage AGAIN to look for badly interpolated frames, which I would have to interpolate AGAIN after rendering the whole movie, in the encoding script... Unless someone has a better/brighter idea?...


    @poisondeathray
    Just the original thread at doom9. Someone mentioned looking at the source and building a 64bit version. It would make sense to make requests there. I know a few people nudged and bumped the thread recently (because I'm one of them)
    Didn't you say that you generally disliked “auto anything”?

    You can definitely get better results doing it in other programs, but it's still a useful filter and one of the semi-frequently ones still missing from the x64 plugin list. I would also like to see it extended to 10bit; since 10bit is increasingly common in consumer space right now
    When I asked for guidance on that subject I didn't get much... (Wow, that was more than two years ago... crazy... At least I built a new computer based on an i7 6700K in the meantime, which makes things a little easier than on my former 2009 machine!) Someone suggested DaVinci Resolve, which I tried, but I didn't get satisfying results; perhaps I was doing it wrong. It seems like advanced software with quite a steep learning curve, not something that could be used to quickly fix a particular issue.

    I know you posted about interpolation errors - that's actually normal and expected. Some sets of settings might give you slightly better on some frames, and slightly worse on others. That' s the nature of interpolation . eg.blksize 16 might be ok for some, but 8 might be better for others. If you went through iterations with something like avsoptimizer, you might be able to calculate a best solve for a section optimizing dozens of parameters.
    But where can these parameters be optimized, if not in the filter's code?

    But in the time it takes you to do that, there are ways to fix with other programs too with masking / compositing / motion tracking .
    Well, could you please elaborate?

    There are user guided workflows that take track points and edge splines to guide the interpolation, so you minimize the blobby edge morphing artifacts. That's one of the main complaints and problems with interpolation - it's the incomplete object separation by the algorithms used, so you end up with morphing blobby edges. With other methods, you're practically guaranteed to get better results, but it depends on how much time investment you can put in . And some types of content are tough to interpolate no matter what
    I'd be interested to learn about those other methods, but I'm already very satisfied with what those two filters could accomplish, turning an ugly jerky mess into something watchable. Very few of those bad frames could not be repaired at all (when the result of the interpolation was worse than the original), and they are usually in places where everything is moving at the same time, thus barely noticeable.
    Getting a decent stabilization before the interpolations is more of an issue...

    In the end, I'm sure your family will be happy, because it's the thought that counts. Spend more time with them, instead of in front of a computer
    I actually think that most of them won't care that much... (I'm even prepared for the possibility that some may resent me for doing that.) And it's not the kind of family where people happily spend time with each other anyway. So it's mostly about funerals these days...
    And even then it's not self-evident. I made that movie mostly for my older brother, who has a kind of mental disability roughly similar to autism; he was particularly affected by the death of our grandmother, especially two days before Christmas (he often has child-like reactions and behaviours, and Christmas is still very important to him). I rented a car in the hope that I could bring him with me to the funeral, but he insisted that he did not want to come, so at least I tried to establish a link of sorts through video: I had him put on the jacket of a suit borrowed from a neighbour (he put that over his pyjamas!) to record him with my lousy camera while he was saying hello and explaining that he was very sad, and then, suddenly cheerful, went on to imitate the-woman-in-the-Addams-Family-movie when she says “Donald Duuuuck”, to, quote unquote, “pay tribute to Grandma” (it must have made sense in his peculiar mind). Then I recorded the whole ceremony, to have something to show to him. I felt bad about it, and I felt like it was considered offensive by some (even now, when reviewing the footage, I catch some disapproving looks staring right at me – or maybe I'm imagining things... in any case, whatever they thought then, they must have long forgotten, while I'm still ruminating all this!), but I felt like it was my burden and my duty to do it no matter what, pour la beauté du geste (for the beauty of the gesture). Then I tried to show the little video of my brother to my uncle and aunts but they didn't seem that interested, although two of them agreed to say a word for him in front of the aforementioned damned device (I knew it had an issue but didn't realize that it was so severe), as well as our mother. Then I showed him those three videos and put them on his computer. So it's mostly about computers these days...
    I promised him to make a movie out of all that, which should have been done in a matter of weeks, but there were many, many issues to overcome – plus many totally unrelated problems to deal with, so it got stalled for a looong time before I managed to produce a watchable first version, just in time for his birthday this year. Our mother and I rented an apartment for a few days to visit him (she hadn't seen him in almost four years, and it was only the third time in more than 25 years – yet they live about 200km apart), and he was utterly pissed off at first (think of him as Raymond Babbitt missing “The People's Court”). Then it got a little better – although his idea of fun was pretending that he was Obelix punching her in the air like an uninvited Roman (now that I think of it, he really felt like we were invading his territory...). I hoped that watching this together would bring them closer, give them something serious and meaningful to share, but it didn't happen, it won't ever happen, nothing will ever change, and I already know that when the day comes that she'll be put in a wooden box herself, I will be utterly alone. So it's mostly about waiting for everything to end these days...
    And I guess that spending time in front of a damn computer is one of my fragile workarounds to not go completely crazy too soon. Because it's mostly about saving appearances these days...
    Last edited by abolibibelot; 19th Dec 2018 at 03:14.
  13. Originally Posted by abolibibelot View Post

    Is there really no way to run Deshaker in YUV ? Will there be a significant quality loss when if I have to do a triple YUV<=>RGB conversion ? The alternative would be to run the interpolation filters without stabilization, then convert to RGB, then run Deshaker, but then I would have to review the whole footage AGAIN to look for badly interpolated frames, which I would have to interpolate AGAIN after rendering the whole movie, in the encoding script... Unless someone has a better/brighter idea ?...
    You can use mdepan instead of depanestimate for depanstabilize, and get slightly better results, but deshaker is better 999/1000 times.

    Deshaker runs in RGB only. There might be a way to use the log file to apply the transforms in YUV, but that's only the analysis part. You'd still have to emulate what settings the deshaker engine is using in pass 2. And guth has not released the source code.

    The loss from several YUV<=>RGB conversions is not that bad compared to a shaky or poorly stabilized video. You can check yourself in avisynth with ConvertToXX() several times. The higher the quality of the source video, the more easily you will see the loss. You can minimize the losses by only converting the sections where you need to convert (instead of the entire length).
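    A quick way to see it for yourself (sketch, assuming a YV12 source; Subtract centers the differences around mid-grey, and Levels here just amplifies them):
    Code:
    # bounce the source through RGB and back a couple of times,
    # then visualize the accumulated error against the original
    orig = last
    rt = orig.ConvertToRGB32(matrix="Rec709").ConvertToYV12(matrix="Rec709")
    rt = rt.ConvertToRGB32(matrix="Rec709").ConvertToYV12(matrix="Rec709")
    Subtract(orig, rt).Levels(120, 1, 136, 0, 255)  # boost the residual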

    You can do lossless "YUV-style" filtering on an RGB source by filtering each R, G, B plane separately as Y8, then merging them back as R, G, B. You don't necessarily get the same results as if you applied the filter directly, but you can demonstrate that the transform itself is lossless.
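    Something like this (rough sketch; "SomeY8Filter" is a placeholder for whatever greyscale-capable filter you want to run):
    Code:
    # split the RGB planes into three greyscale (Y8) clips, filter each,
    # then merge back; the split/merge itself is lossless
    r = ShowRed("Y8").SomeY8Filter()
    g = ShowGreen("Y8").SomeY8Filter()
    b = ShowBlue("Y8").SomeY8Filter()
    MergeRGB(r, g, b)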

    Or, if some of the functions could be re-written in float (there is a vapoursynth mvtools that can run in float), then it would be possible to interpolate without additional loss from RGB<=>YUV (you can convert losslessly if the intermediate calculations are in float). Also, if some of the interpolations were done at higher bit depths, that would minimize RGB<=>YUV losses as well (more precision, fewer rounding errors).
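    For example, in avisynth+ (sketch; assuming a build with high bit depth support):
    Code:
    # do the RGB trip at 16-bit so the rounding errors mostly vanish,
    # and only dither back down to 8-bit at the very end
    ConvertBits(16)
    ConvertToRGB(matrix="Rec709")
    # ... RGB-only processing would go here ...
    ConvertToYUV420(matrix="Rec709")
    ConvertBits(8, dither=1)  # dither back to 8-bit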


    @poisondeathray

    Didn't you say that you generally disliked “auto anything” ?
    Yes, but I still use them for parts of certain workflows. They are useful and can save time.

    You can definitely get better results doing it in other programs, but it's still a useful filter and one of the semi-frequently ones still missing from the x64 plugin list. I would also like to see it extended to 10bit; since 10bit is increasingly common in consumer space right now
    When I asked for guidance on that subject I didn't get much... (Wow, that was more than two years ago... crazy... At least I built a new computer based on an i7 6700K in the mean time, makes things a little bit easier than on my former 2009 machine !) Someone suggested DaVinci Resolve, which I tried, but I didn't get satisfying results; perhaps I was doing it wrong, it seems like an advanced software with a quite steep learning curve, not something that could be used to quickly fix a particular issue.
    Yes, there's a learning curve to use any software properly. But someone familiar with color correcting will definitely get better results than some auto filter. It's too much to go into in a forum post; you can only cover the basics. There are courses and tutorials you can watch.



    I know you posted about interpolation errors - that's actually normal and expected. Some sets of settings might give you slightly better on some frames, and slightly worse on others. That' s the nature of interpolation . eg.blksize 16 might be ok for some, but 8 might be better for others. If you went through iterations with something like avsoptimizer, you might be able to calculate a best solve for a section optimizing dozens of parameters.
    But where can these parameters be optimized, if not in the filter's code ?
    Look at the mvtools2 documentation. The filter wrapper might not expose those parameters nicely (they might be set to default values). But there are dozens of settings that can affect the quality of the interpolation and the results, and it's impossible to know in advance which ones work better for some sources or scenarios; you basically have to try or do testing. Even on the same scene, a certain set of settings might be ideal for frame 1 but a completely different set for frame 7, etc...
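    The kind of knobs I mean, in a minimal mvtools2 interpolation sketch (the values are only examples to tune):
    Code:
    # interpolate a frame halfway between each pair; blksize / overlap / pel
    # are exactly the settings that make or break the result per scene
    src = last
    sup = src.MSuper(pel=2)
    bv  = MAnalyse(sup, isb=true,  blksize=16, overlap=8)
    fv  = MAnalyse(sup, isb=false, blksize=16, overlap=8)
    src.MFlowInter(sup, bv, fv, time=50.0)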

    But in the time it takes you to do that, there are ways to fix with other programs too with masking / compositing / motion tracking .
    Well, could you please elaborate ?

    There are user guided workflows that take track points and edge splines to guide the interpolation, so you minimize the blobby edge morphing artifacts. That's one of the main complaints and problems with interpolation - it's the incomplete object separation by the algorithms used, so you end up with morphing blobby edges. With other methods, you're practically guaranteed to get better results, but it depends on how much time investment you can put in . And some types of content are tough to interpolate no matter what
    I'd be interested to learn about those other methods, but I'm already very satisfied with what those two filters could accomplish, turning an ugly jerky mess into something watchable. Very few of those bad frames could not be repaired at all (when the result of the interpolation was worse than the original), and they are usually in places where everything is moving at the same time, thus barely noticeable.
    It's a more advanced workflow, too much to describe in a post. But the general idea is that mattes and rotoscoped shapes define areas of foreground / background subjects, and track points can plug into the interpolation engine, so the results are generally better. It requires user intervention. The main issue with all interpolation / optical flow approaches is incomplete separation (there are a bunch of other issues too, but that's the main one; if you're interested I posted quite a bit about this at doom9). e.g. if a hand crosses in front of a body, you lose the outline of the hand. Or legs cross in front of each other, and you lose separation between what is what. That's why you get the blobby, morphing, blended artifacts instead of a clean frame.


    Getting a decent stabilization before the interpolations is more of an issue...
    Deshaker, or complement it by fixing problem areas with other programs (you can use motion tracking and user-guided stabilization too). Again, more involved, but better results. Deshaker can have problems; any "auto" stabilizer can. For example, a car zooms past in the foreground and the stabilization gets skewed. Very common problem. You need more advanced stabilizers that can take user input to define areas to include or exclude to get better results. Deshaker can only do it on the frame borders, and you can't adjust it interactively on specific sections (keyframe settings). Other common problems are CMOS / rolling-shutter jelly; Deshaker can address it, but only to a very limited extent. Some other stabilizers are significantly better in that area (newer Mercalli versions and Warp Stabilizer are better for that). But overall Deshaker is still very good for an "auto" stabilizer.
  14. So in the meantime, I found answers to some of my questions:
    http://forum.doom9.net/showthread.php?p=1815546
    => No, Deshaker can definitely not accept YUV as input.
    But “jagabo” wrote in 2010:
    “When people talk about lossless integer colorspace conversion what they usually mean is that you don't get successively greater errors if you convert back and forth repeatedly. So on the very first conversion from RGB to YUV you will get losses but if you turn around and convert that back to RGB and then back to YUV and back to RGB, etc you don't accrue more and more errors. I believe AviSynth has implemented this level of losslessness.”
    Does that apply here?

    You can use mdepan instead of depanestimate for depanstabilize, and get slightly better results, but deshaker is better 999/1000 times.
    Already tried, it's not significantly better and it's much slower.

    Deshaker runs in RGB only. There might be a way to use the log file to apply the transforms in YUV, but that's only the analysis part. You'd still have to emulate what settings the deshaker engine is using in pass 2. And guth has not released the source code.
    Yeah, I read that in the thread linked above earlier today... though I only rapidly read the last page. Did he explain why he wouldn't / couldn't implement YUV support?

    The loss from several YUV<=>RGB conversions is not that bad compared to a shaky or poorly stabilized video . You can check yourself in avisynth with ConvertToXX() several times . The higher the quality the source video, the more easily easier you will see the loss . You can minimize the losses over sections where you only need to convert by only converting those section (instead of entire length)
    And referring to the quote above, is it correct that only the first RGB=>YUV conversion is lossy in Avisynth?

    Look at the mvtools2 documentation . The filter wrapper might not expose those parameters nicely (they might be set to default values) . But there are dozens of settings that can affect the quality of interpolation and the results, and it's impossible to know which ones work better for some sources or scenarios, you basically have to try or do testing . Even on the same scene, a certain set of settings might be ideal for frame 1 but a completely different set for frame 7 etc...
    Even “StainlessS” said that he didn't fully understand mvtools, so I don't think that I can go very far with that!
    (Quote: “I think that the only person that really understood mvtools usage (excluding original authors) was probably Didee, and he only very rarely visits (maybe in response to some particularly inciting post). I certainly dont fully (or even halfly, quarterly) understand mvtools, is almost a total mystery. Best one can do is read the docs/examples and try to make some sense of it. If anybody ever did write a mvtools 'guide', he would have a monumental task on his hands, methinks.”)

    It's a more advanced workflow , too much to describe in a post. But the general idea is mattes and rotoscoped shapes define areas of foreground / background subjects , and track points can plug into the interpolation engine, so the results are generally better. It requires user intervention.
    From what I understand, that's what Deshaker does automatically, and quite well, in its analysis pass: determining what should be considered a moving object (red vectors, ignored) and what should be considered part of the background (white vectors, used for the calculations). But obviously it has to be much more accurate for frame interpolation.

    The main issue with all interpolation / optical flow approaches is incomplete separation (there are a bunch of other issues too, but that's the main one; if you're interested I posted quite a bit about this at doom9) . e.g. if a hand crosses in front of a body, you lose the outline of the hand. Or legs cross in front of each other, you lose separation between what is what. That's why you get the blobby morphing blended artifacts instead of a clean frame.
    A good compromise, for an automated filter, would be to resort to blending (Morph's approach) in cases or areas where the actual interpolation attempt produces fuzzy / blobby outlines. But blending several frames in a row results in weird-looking movements...

    Deshaker, or complement by fixing problem areas with other programs (you can use motion tracking, user guided stabilization too) .
    DePanStabilize's method=-1 is described as “tracking of the base (first) frame instead of stabilization” – I haven't tried this as it seems completely different in purpose.

    But again, more involved , but better results . Deshaker can have problems; any "auto" stabilizer can. For example , a car zooms past in the foreground and the stabilization gets skewed . Very common problem.
    I've had that issue with the Magix stabilizer: when a person walks just in front of the camera and covers the whole frame, there's a sudden lateral bump. DePanStabilize did surprisingly better in such instances. According to its documentation, Deshaker is designed to attempt to overcome that kind of issue, in particular when this option is enabled:
    “Remember discarded areas to next frame : When enabled, this feature makes Deshaker try to ignore approximately the same areas from one frame to the next. Deshaker will then become a lot more successful in ignoring moving objects. As long as they enter the scene rather slowly (by not covering too much of the background), Deshaker will usually be able to ignore those objects even if they eventually grow to cover most of the frame.”

    Other common problems are cmos/rolling shutter jelly - deshaker can address it but only to a very limited extent.
    At least here's an issue I do not have! I should make a list of those, to see if it cheers me up...


    Side question: I have a few thousand frame interpolation commands for FrameSurgeon, in a text file containing lines like:
    Code:
    I1 165 # means that frame 165 will be interpolated
    I2 168 # means that frames 168 and 169 will be interpolated
    ...
    Would there be a convenient way to display only those frames, so as to review them in quick succession, instead of scrubbing through the whole footage?
  15. Originally Posted by abolibibelot View Post
    So in the mean time, I found some answers to some of my questions :
    http://forum.doom9.net/showthread.php?p=1815546
    => No, Deshaker can definitely not accept YUV as input.
    But “jagabo” wrote in 2010 :
    “When people talk about lossless integer colorspace conversion what they usually mean is that you don't get successively greater errors if you convert back and forth repeatedly. So on the very first conversion from RGB to YUV you will get losses but if you turn around and convert that back to RGB and then back to YUV and back to RGB, etc you don't accrue more and more errors. I believe AviSynth has implemented this level of losslessness.”

    Does that apply here ?
    No; for avisynth, the more conversions, the more loss at 8-bit (and it's true for every other program too).

    There are various lossless RGB<=>YUV transforms, but they don't apply here, and you always need at least 2-3 more bits for the intermediate before going back.

    You can reduce the amount of losses by working at higher bit depths, but your other program needs to be able to do that too. Can the other program handle RGB30 (10-bit RGB), RGB48 (16-bit RGB), or float formats? Unlikely in a consumer video editor.


    And referring to the quote above, is it correct that only the first RGB=>YUV conversion is lossy in Avisynth ?
    No

    Look at the mvtools2 documentation . The filter wrapper might not expose those parameters nicely (they might be set to default values) . But there are dozens of settings that can affect the quality of interpolation and the results, and it's impossible to know which ones work better for some sources or scenarios, you basically have to try or do testing . Even on the same scene, a certain set of settings might be ideal for frame 1 but a completely different set for frame 7 etc...
    Even “StainlesS” said that he didn't fully understand mvtools, so I don't think that I can go very far with that !
    (Quote : “I think that the only person that really understood mvtools usage (excluding original authors) was probably Didee, and he only very rarely visits (maybe in response to some particularly inciting post). I certainly dont fully (or even halfly, quarterly) understand mvtools, is almost a total mystery. Best one can do is read the docs/examples and try to make some sense of it. If anybody ever did write a mvtools 'guide', he would have a monumental task on his hands, methinks.”
    But understanding it doesn't enable you to suddenly know what parameters will work on a given frame. You need lots of trial and error. That's why avsoptimizer might be helpful (although I don't necessarily agree with the method it uses for the SSIM target in interpolation cases). And even with the "best" settings (after thousands of iterations and combinations taking probably hundreds of hours), you will still get better results using user-guided interpolation.

    It's a more advanced workflow , too much to describe in a post. But the general idea is mattes and rotoscoped shapes define areas of foreground / background subjects , and track points can plug into the interpolation engine, so the results are generally better. It requires user intervention.
    From what I understand, that's what Deshaker does, automatically, and quite well, in its analysis pass, to determine what should be considered as a moving object (red vectors, ignored) and what should be considered as part of the background (white vectors, used for the calculations). But obviously it has to be much more accurate for frame interpolation.
    Not really.

    They are different types of tracking and stabilization software, used for different purposes. Deshaker is more like a general-use stabilizer, and overall it's good. But there is no ability to adjust scenes or adjust inclusion / exclusion areas (except at the frame edges). So in that respect it's not comparable to compositing or visual-effects type trackers, which require accuracy; Deshaker wouldn't be accurate enough to match-move or do composited patch repairs. Visual-effects trackers can track separate objects, or the background, or the foreground, or whatever. You can tell them what to track or ignore. You can stabilize around a specific object or background, instead of for general smoothness.


    DePanStabilize's method=-1 is descibed as “tracking of the base (first) frame instead of stabilization” – I haven't tried this as it seems completely different in purpose.
    This is meant to use the base frame as a reference, as if the camera were on a static tripod. It's meant for a locked-off shot, to remove the camera motion in all frames. But the mvtools2/deshaker/mercalli class of stabilizers doesn't do a good job with this type of stabilization scenario, because you can't define inclusion/exclusion areas.


    Side question : I have a few thousands frame interpolation commands for FrameSurgeon, in a text file containing lines like :
    Code:
    I1 165 # means that frame 165 will be interpolated
    I2 168 # means that frames 168 and 169 will be interpolated
    ...
    Would there be a convenient way to display only those frames, so as to review them in quick succession, instead of parsing the whole footage ?
    Probably easier to get StainlessS to implement it as a debug view in his script
  16. Now Deshaker is giving me random black frames, which produces some funky results once interpolated by FrameSurgeon:
    [Attachment: 20151224_100029 F47 interpolation foireuse, image suivante apparaît entièrement noire.png (i.e. botched interpolation; the next frame appears entirely black)]
    (Perhaps it's dumbfounded by the horror of the suit jacket over the pyjamas...)
    I'm going to scream!

    Probably easier to get StainlessS to implement it as a debug view in his script
    There is a debug view argument, but I don't know what it does, or what that DebugView utility is:
    Code:
    Dv,       Default 0,       ClipClop and Prune DebugView level (0 - 4, Need DebugView utility)
    Yesterday I read about the stabilizer in ffmpeg; any experience with that?
    I've also found interesting insight in this 10-year-old thread:
    https://forum.doom9.org/showthread.php?t=136025
    The script proposed by “g-force” is quite effective at suppressing small vibrations, but alas it's confused by those damn blurry frames: if frame n is blurry, meaning that the edges are doubled, it will align frame n+1 with either the upper edge or the lower edge of n, instead of ignoring it and aligning with n-1.
    This statement reflects my experience thus far:
    “I've never been able to dial in the settings for DePanStabilize to be of any use. Doesn't fix quick jitter, and tracks slower pans too well.”
    Some general advice for using DePanStabilize:
    “The 'trick' with [DePanStabilize] is: use a different (cropped) clip for DepanEstimate. What to crop? It depends on the scene... Focus on something that should not move... A house or something. I use tweak(bright=-100,cont=2.0) on this clip too... Somehow this helps, too.
    Also, I nearly always set cutoff to 0.5 and trust between 1 and 1.5.
    The same values for dxmax and dymax both in DepanEstimate() and in DepanStablize helps... (30 is a good value)
    Sometimes I get better stabilizing with Depan then with Deshaker.. It realy depends on the source.”
    I haven't tried the cropping trick; I'm not sure where I should crop to improve the efficiency. In this particular case most of the shots are, or should be, static, so I don't see how cropping could help, while a few shots have motion everywhere, so cropping around a particular spot on a given frame would be pointless, since 5 frames later the picture's content is completely different.
    The last suggestion works only with method=0, which in my tests gives very poor results.
    Last edited by abolibibelot; 19th Dec 2018 at 20:10.
  17. Originally Posted by abolibibelot View Post
    Now Deshaker is giving me random black frames

    Maybe you have too much going on there. I would split it up. Easier to debug too. Deshaker to a physical file

    But take it step by step: are you sure it's Deshaker in avisynth? Comment out the rest of the script and seek around with LSmash only, because LSmash and transport streams can have problems too.



    Probably easier to get StainlessS to implement it as a debug view in his script
    There is a debug view argument, but I don't know what it does, and what that DebugView utility is :
    Code:
    Dv,       Default 0,       ClipClop and Prune DebugView level (0 - 4, Need DebugView utility)
    You'd have to ask him or look deeper into the docs / script. I only use a few of his scripts here and there; I'm not familiar with that one.

    Yesterday I've read about the stabilizer in ffmpeg, any experience with that ?
    libvidstab is OK, but Deshaker is clearly better in terms of results. Just like Deshaker is clearly better than DePanStabilize (with either mdepan or depanestimate), at least in that general-purpose stabilizer / hand-held scenario. I don't know anyone who would say otherwise. Zero.
  18. Maybe you have too much going on there. I would split it up. Easier to debug too. Deshaker to a physical file
    So one more intermediary step with an uncertain result... It defeats the purpose of running it in Avisynth, which allows visualizing the result and trying different parameters before proceeding.

    But take it step by step, are you sure it's deshaker in avisynth ? Comment out the rest of the script and seek around with Lsmash only - because LSmash and transport streams can have problems too
    Then what is the most recommended source plugin for MTS files? Is there a chart somewhere with recommendations for each common file type?
    And yes I checked, by placing “Return(last)” at different spots in the script. Besides, it worked fine with DePan, so it's most likely due to Deshaker. Apparently it doesn't like when a frame is accessed directly, instead of linearly from the beginning. Also, I just tried loading the Avisynth script containing the Deshaker command into VirtualDub2 (which is now patched with “LAA”): it plays fine when moving forward, but as soon as I go back one frame it turns black, and stays black, and all following frames are black. The “extrapolate colors into border” feature is said to be very slow, so it might be too demanding for real-time processing, I dunno... (That's another dilemma: zooming in by a fixed factor will soften the picture and cut off people's faces in some places, but trying to interpolate borders seems to produce an ugly result more often than not.)
  19. Originally Posted by abolibibelot View Post
    Maybe you have too much going on there. I would split it up. Easier to debug too. Deshaker to a physical file
    So one more intermediary step with an uncertain result... It defeats the purpose of running it in Avisynth, which allows to visualize the result and try different parameters before proceeding.
    But take it step by step, are you sure it's deshaker in avisynth ? Comment out the rest of the script and seek around with Lsmash only - because LSmash and transport streams can have problems too
    Then what is the most recommended source plugin for MTS files ? Is there a chart somewhere with recommandations for each common file type ?
    And yes I checked, by placing “Return(last)” at different spots in the script. Besides, it worked fine with Depan so it's most likely due to Deshaker. Apparently it doesn't like when a frame is accessed directly, instead of linearly from the begining. Also, just tried loading the Avisynth script containing the Deshaker command into VirtualDub2 (which is now patched with “LAA”), it plays fine when moving forward, but as soon as I go back 1 frame it turns black, and stays black, and all following frames are black. The “extrapolate colors into border” feature is said to be very slow, so it might be too demanding for real-time processing, I dunno... (That's another dilemma : zooming in by a fixed factor will soften the picture and cut people's faces in some places, but trying to interpolate borders seems to produce an ugly result more often than not.)
    Right, that sounds like a non-linear seek issue with Deshaker in avisynth. So the KISS rule is in effect here: the physical I-frame intermediate after deshaker/vdub will solve your problems.

    Probably the most consistent source filter is DGSource/DGDecodeNV, but it's not free and requires a compatible Nvidia card. But it indexes the file, so it's robust, even with non-linear seeks. A side benefit is offloading from the CPU. ffms2/lsmash index the file too, but they can exhibit flaky behaviour with transport streams. It's a well-known issue.
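    For reference, the DG route is just this (sketch; assuming DGDecodeNV is installed and you've indexed the .m2ts with DGIndexNV first; the plugin path is hypothetical):
    Code:
    LoadPlugin("C:\path\to\DGDecodeNV.dll")  # path is hypothetical
    DGSource("20151224_132902.dgi")          # the .dgi index made by DGIndexNV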

    LSmash is probably OK for linear seeks, or something simple... but you have ClipClop / FrameSurgeon there on top, which requires non-linear access. That's very tough on a long-GOP source, and probably a large contributor to your memory issues as well.

    The border options are tough to decide on; they all have compromises. You have to decide what sorts of trade-offs you're willing to make.

    This is the sort of scenario where it helps to have interactive / keyframable settings. Instead of one set of settings, it's nicer to be able to make adjustments. e.g. maybe on some close-up shots you don't want to soften so much but can accept a bit less steadiness as a trade-off, or maybe the border fill works OK on one scene, etc... Basically, tweak it as you go, or fine-tune the settings. So although Deshaker is overall very good, it doesn't have those types of adjustable settings on the fly (or at least not easily; I guess you could cut it up and sort of frankenstein it together).
  20. Regarding the issue of displaying only the interpolated frames, I tried this dirty method: I opened the command file in Calc, with spaces as separators, removed the “I” at the beginning, added a column with a value calculated as [B] + [A] - 1, added columns with “Trim(” / “,” / “)”, then exported as .csv, edited that with WinHex to remove the 0x09 separator character and replace the newline symbols with “ ++ ”, exported that as .avs, and added an AviSource command... It works, but very slowly; AVSPMod (or Avisynth) seems to choke with so many Trim commands (1387). Is there a practical limit to the number of Trim segments, or to the length of an Avisynth command in general?
    I found a function named AdvancedMultiTrim, which seems to be designed for that purpose, but right now I wouldn't know how to convert such a list into a list of individual frames (it must be very simple to program such a thing, but I know next to nothing about programming).
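    For the record, the generated review script from the method above boils down to this shape (sketch; the source name and frame numbers are made up, and the real script chains 1387 segments):
    Code:
    # each "I1 n" line became Trim(n,n), each "I2 n" became Trim(n,n+1)
    AviSource("rendered intermediate.avi")
    Trim(165,165) ++ Trim(168,169) ++ Trim(523,525)  # ...and so on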


