I'm trying to improve a movie (about my deceased grandmother) that I made from problematic source footage, which I have already spent a lot of time trying to salvage as much as possible. I recorded the AVCHD 1280x720 source footage with two Panasonic compact cameras: a ZS3 shooting at 25 fps in Dec. 2013, with a strong backlight issue (mostly shot in front of a large window), and a ZS7 shooting at 29.97 fps in Dec. 2015; that camera had a malfunctioning optical stabilization system, causing spontaneous, intermittent vertical jerkiness (the picture shaking up and down by about 5 pixels, 5 times per second, at random intervals...).
I made a watchable first version in May (for my brother's birthday), using:
– the stabilization feature from Magix Video Deluxe (the NLE software used to edit this movie), followed by a combination of FrameSurgeon and Morph (Avisynth functions) to interpolate the remaining blurry frames caused by the jerkiness in the 2015 videos (see this thread; Sawbones – an AutoIt program designed to help create a template file for FrameSurgeon, by using keyboard combinations to automatically add a command for the currently displayed frame while examining the footage in VirtualDub2 – was a huge help, considering that I had to manually check about 35 min. of footage, amounting to about 50,000 frames, and ended up with about 4800 commands in the template file);
– a combination of AutoAdjust and HDRAGC to correct the exposure and bring details out of the dark shadows in the 2013 videos (see this thread).
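For reference, a minimal sketch of what that repair setup looks like as an Avisynth script (the file names are placeholders, and I'm assuming FrameSurgeon's command-file argument is named "cmd" as in StainlesS's documentation):

```
# Sketch of the 2015 repair chain (placeholder file names).
LWLibavVideoSource("2015_clip.m2ts")
# The command file holds one directive per damaged frame, e.g. "I1 138"
# to rebuild single frame 138 by motion interpolation from its neighbours;
# Sawbones generates these lines semi-automatically while scrubbing in VirtualDub2.
FrameSurgeon(cmd="2015_clip_commands.txt", show=false)  # show=true only for previewing
```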
The first thing I want to do now is to interpolate the bad frames in the source footage rather than in the edited / rendered movie, since some of those bad / blurry frames are located at crossfade points and I couldn't correct them satisfactorily when I finalized the first version.
But, examining the scripts I made a few months ago, I found out that the Morph() function (found in this thread, specifically that post) no longer works as expected: now if I use the command "Morph(137,139)", for instance, I get a blend of frame 137 and the last frame of the input movie, instead of a blend of frames 137 and 139. I tried copying the Morph function into a script, changing its name to Morph2 and calling Morph2(x,y) – same result. I checked that the code was strictly identical to the code I copied initially. What could be the explanation?
I haven't voluntarily changed anything in my Avisynth configuration since then, but I noticed that the modified date of most subdirectories in my Avisynth+ directory, 2018/10/26 22:52, is very close to the date of the StaxRip subdirectories, so apparently something was changed in Avisynth when I first ran StaxRip. I compared the current directory with the one from a Macrium Reflect backup made on September 29th, using WinMerge (only the differences are displayed):
The "Setup Log 2018-10-26 #001.txt" file indicates that "AviSynthPlus-MT-r2728.exe" (included in StaxRip) was installed on that date. Before that, I most likely had "AviSynthPlus-r1576.exe" installed.
Another issue: previously, I ran the frame interpolation script on the already stabilized footage, at the final encoding stage. It would have been even more difficult and painstaking to spot damaged frames among the original jerky mess, and probably much less efficient too (potentially interpolating from misaligned frames). So now I have to stabilize the footage before interpolating. Are there Avisynth function(s) / plugin(s) that would be at least as good as the NLE feature and produce a reliable result? I briefly tried Deshaker in VirtualDub months ago; it seemed quite efficient, but also quite complicated, I already had many complications to deal with and, since the Magix stabilizer was performing mostly fine (better in my experience than the third-party proDAD Mercalli plugin that it now also includes, which is supposed to be a professional-level tool), I figured that I wouldn't need anything else... The alternative would be to treat the whole source files with MVD's stabilizer, export them as lossless intermediates, then process them with Avisynth to interpolate the bad frames, then re-import them into MVD, but I'd prefer not to...
Then, would it be possible, and wise, to load the scripts (probably 5-6) into the NLE timeline using AVFS (Avisynth Virtual File System), with my machine based on an Intel i7 6700K CPU with 16GB of RAM, or would it be better to first export the processed footage as lossless intermediate files? The first option would be more convenient of course, but may be slower, and potentially less reliable (if there's an out-of-memory issue, for instance, the results can start to get wonky, and some Avisynth functions apparently don't produce the same output when processing footage linearly as when previewing a particular frame at a random spot). With the second option, I know what the processed footage will look like and there are fewer moving parts, but if I need to make a slight adjustment I have to start all over again and re-export the whole bunch of big files, instead of just reloading the virtual AVI files.
Apparently this depends partly on the source loading plugin used: for instance, I've had very unreliable results in the past when using DirectShowSource. In this case the source files are .m2ts: which plugin is generally recommended for that format, and why? ffms2 / FFVideoSource, or L-SMASH / LWLibavVideoSource? Or something else?
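In case it helps frame the question, these are the two candidate loaders I would compare on the same file (file name is a placeholder; both build an index on first load, which should make frame access deterministic, unlike DirectShowSource):

```
# L-SMASH Works loader, often suggested for AVCHD / .m2ts transport streams:
LWLibavVideoSource("00001.m2ts")
# FFMS2 alternative (writes a .ffindex file alongside the source):
# FFVideoSource("00001.m2ts")
```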
On frame interpolation, again: with FrameSurgeon, I get ugly artifacts in cases of fast motion (that's why I used Morph for some frames, as it usually performs better in those cases, even though it's much less convenient and gets unstable if called more than a few dozen times in the same script, whereas FrameSurgeon doesn't choke with thousands of frames to process). I have read (see here – it's in French, but the screenshots alone illustrate the issue very well – and here) that this kind of artifact was produced by MVTools2 when processing "HD" footage, and not at lower resolutions. Is it a known issue, and is it still the case? (Those two threads are 5-6 years old...)
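For reference, the kind of motion interpolation these functions perform can be reproduced by hand with MVTools2; this is a minimal sketch using the frame numbers from the Morph example above (placeholder file name, quality parameters left at their defaults):

```
# Rebuild frame 138 as a 50% motion interpolation between frames 137 and 139.
src  = LWLibavVideoSource("2015_clip.m2ts")
pair = src.Trim(137,137) + src.Trim(139,139)   # two-frame clip: 137 then 139
sup  = pair.MSuper()
bv   = sup.MAnalyse(isb=true,  delta=1)        # backward motion vectors
fv   = sup.MAnalyse(isb=false, delta=1)        # forward motion vectors
mid  = pair.MFlowInter(sup, bv, fv, time=50.0).Trim(0,-1)  # keep the interpolated frame
src.Trim(0,137) + mid + src.Trim(139,0)        # splice it back in place of frame 138
```

The fast-motion artifacts discussed above would come from MAnalyse finding wrong vectors; tuning its block size and search parameters is where any improvement would have to happen.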
Then, with regard to the backlit footage, is there anything better than HDRAGC that I could try this time, some new wizardly function which appeared in the past few months? Apparently this plugin is quite old; yet this kind of function should be in high demand, as backlighting is a very common issue, but I found nothing equivalent for Avisynth. I just found a function named SGradation which seems promising, although it seems to have vanished everywhere except Archive.org: in a quick test, a combination of autolevels, SGradation and HDRAGC seems to produce a more pleasing result than the AutoAdjust / HDRAGC combination I used earlier.
I have also tried the Gamma HDR function in Magix Video Deluxe, and various processing methods in DaVinci Resolve (possibly not to the best of their abilities, as I was just discovering this seemingly highly capable software); both produced worse results than HDRAGC.
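A quick sketch of the chain mentioned above, for concreteness (placeholder file name; the HDRAGC parameter is an arbitrary starting value, not a recommendation, and I've left the SGradation call as a commented placeholder since its arguments depend on the preset used):

```
# 2013 backlit footage: normalize levels, then tone curve, then adaptive gain.
LWLibavVideoSource("2013_clip.m2ts")
autolevels()
# SGradation(...)      # insert the tone-curve step here with whichever preset works best
HDRAGC(max_gain=2.0)   # lifts dark shadows adaptively; tune to taste
```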
Last edited by abolibibelot; 5th Dec 2018 at 11:41.
For me the function worked properly with 32 bit AviSynth. But I saw the same problem as you with 64 bit AviSynth+. I'll look at it later. No time now.
On Doom9, the author of Sawbones/FrameSurgeon told me that Morph was based on “undocumented features”, which could partly explain such unexpected behaviour.
So I reinstalled Avisynth+ r1576, which I had before, and it solved the issue... What could be the likely explanation – if asking why an Avisynth function misbehaves in some circumstances is less of a conundrum than asking why there is evil in this godforsaken world? I noticed that it installed "Microsoft Visual C++ Redistributable 2012 Update 4 (x86)", if that's any clue.
Doing further testing, I find it surprising and puzzling that applying Morph after FrameSurgeon on the same frame (which is problematic for FrameSurgeon because of fast motion, producing those weird fuzzy edges) gives a slightly better result than applying Morph alone (and a much better one than FrameSurgeon alone for that particular frame). Normally, interpolating, say, frame 138 from the adjacent frames with Morph(137,139) should totally cancel the interpolation of the same (single) frame with FrameSurgeon (I1 138), right? Or does Morph somehow include the actual frame being replaced in its calculations to create the interpolated frame? Strangely, this happens only if "show=true" is used in the FrameSurgeon command, which displays some information in the upper left corner; without it there is no difference (so it's moot, since that switch is only useful for previewing and has to be set to "false" or removed for the actual rendering).
Morpheus is (apparently) a new function designed by “StainlesS”, the author of FrameSurgeon / Sawbones (see the thread linked above). I don't see the same effect as described above with this function, i.e., applying Morpheus after FrameSurgeon(show=true) produces the same result as Morpheus alone.
Native frame:
Morph after FrameSurgeon with "show=true" (the front of the bike seems straighter, the letters on the box are clearer – but there are some extra artifacts on the bike's left handle):
Morph after FrameSurgeon with "show=false" (identical to Morph alone):
Morpheus (similar to FrameSurgeon on this frame, better in some areas, like the wall corner, but worse elsewhere, like the text on the box; hard to say which one is better overall):
It must be said that this frame is not representative of the respective quality of these filters in general: most of the time, when there's no fast motion, FrameSurgeon produces a much cleaner interpolation than Morph, especially when interpolating 2-3 frames in a row (beyond that, all the interpolation filters I tried start to produce an ugly mess, unless there's no motion at all).
(By the way, I've had a weird issue tonight, whereby any newly opened window for any program has a stripped-down, Windows 2000-like aspect, with square grey edges, as seen in those screenshots, instead of the usual Windows 7 look, and some of them take abnormally long to load... if anyone has any clue... I haven't tried the obvious, i.e. rebooting; as usual I have a gazillion programs and windows open and countless tasks initiated simultaneously, each of them progressing veeery slowly, and I dread the moment when a reboot becomes mandatory, or worse, the unexpected BSOD...)
And so, what would be the recommended method to stabilize that footage before applying the interpolation filters?
Last edited by abolibibelot; 5th Dec 2018 at 21:25.
So, with no further insight in sight, I tried using Deshaker within an Avisynth script: it works, but the problem is that it works only in RGB colorspace, and if I add a ConvertToRGB command before it, then the interpolation commands no longer work:
“MSuper: Clip must be YUV or YUY2 or planar RGB/RGBA” (for the Morph function)
“FrameSurgeon: Id Interpolation commands Planar and YUY2 ONLY”
(In the first case I don't understand why it doesn't work, since planar RGB is apparently allowed – unless ConvertToRGB produces interleaved RGB rather than the planar RGB that MSuper expects?)
Another problem is that Deshaker runs its second pass each time the script is reloaded, which takes way too long and is not practical at all.
So, is there a way to avoid a triple colorspace conversion (since I will have to export the pre-processed files as RGB to avoid color discrepancies once the lossless intermediates are imported into the NLE), and to avoid having to first export the stabilized footage before applying the interpolation treatment ?
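For clarity, this is roughly what the colorspace round trip looks like (the plugin path and the Deshaker parameter string are placeholders; VirtualDub filters loaded this way run in interleaved RGB32, which would explain the MSuper error above):

```
# Deshaker is a VirtualDub filter, so it must be fed interleaved RGB32.
LoadVirtualDubPlugin("C:\Plugins\deshaker.vdf", "Deshaker", 0)  # placeholder path
src  = LWLibavVideoSource("2015_clip.m2ts")                     # placeholder file name
stab = src.ConvertToRGB32().Deshaker("...")  # "..." = parameter string copied from VirtualDub
stab.ConvertToYV12()                         # back to planar YUV for FrameSurgeon / Morph
```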
Is there a native Avisynth stabilization filter that would work in YUV and would perform as well as Deshaker in a case like this, with the possibility of saving the complete stabilization parameters to allow for a quick loading ?
I tried DePanStabilize, but so far I can't get a satisfactory result... Several parameters are poorly explained in the documentation, and several seem to do nothing at all. It is said that some parameters work only with "method=0", but with this I get poor results (and it negatively affects the quality of interpolation of bad frames: sometimes a frame interpolated after DePanStabilize(method=0) looks worse than the native frame, and much worse than the same frame interpolated after DePanStabilize(method=1)). With "method=1" it's definitely better, but there seems to be no control over the threshold of correction. The other modes seem unstable / unreliable.
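For context, this is the two-step DePan chain in question, as a minimal sketch (placeholder file name; the parameter values are arbitrary starting points, not known-good settings):

```
# DePanEstimate computes the global motion data; DePanStabilize applies
# the smoothed compensation based on it.
src   = LWLibavVideoSource("2015_clip.m2ts")
mdata = src.DePanEstimate(trust=4.0, dxmax=8, dymax=8)
src.DePanStabilize(data=mdata, cutoff=1.0, damping=0.9, method=1)
```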
Could anyone provide some tips on using it to the best of its abilities? Also, I'd like to have a clear(er) explanation of the following parameters:
[DePanEstimate] range (why is the default 0?) – trust – winx – winy – wleft – wtop – dxmax / dymax (why are these in both DPE and DPS?) – stab – log (I tried it, but it doesn't write anything to the designated file) – show (it doesn't show anything)
[DePanStabilize] cutoff – damping – blur (supposed to blur the borders, but it doesn't seem to change anything) – dxmax / dymax (see above; also, why doesn't it work with "method=1"?) – fitlast – inputlog (if an input file is loaded here, does it just ignore the data provided by DePanEstimate?) – method (the descriptions are quite opaque: "inertial", "two-way average", "unlimited (static) stabilization", "tracking of the base (first) frame instead of stabilization")
Also, does it work linearly, and if so, what happens when directly accessing a frame in the middle of a video? Is the result of the correction the same as when the complete file is exported with the same parameters?
Apparently DePanEstimate can take a log file as input, with the same format as Deshaker, so what happens if an actual Deshaker log file is used as input ?