I stabilized some handheld footage (please, no discussion about buying a gimbal; that's not what this is about, so just let it be).
After stabilizing you can see that some frames are sharp and some are not (you may have to step through frame by frame to see it). The problem is that neither stabilizing nor sharpening can fix that difference. I tried RVRT deblur from Hybrid (or Selur did it for me) and I did not like it. I hoped it would be good, but when you watch frame by frame it looks AI-ish. I even prefer the blurry frames to the "sharp" AI deblur. I'm not sure if that is because it was deblurred before stabilizing; it does not work on my end, so I can't test that for now.
So I really tried to find something "traditional" (non-AI), but I did not find anything. Do you know any method to make the blurry, motion-blurred frames as sharp as the sharper ones in the video (or at least a bit closer) without it looking AI-ish? As I said: I'm only interested in post-processing, not in changing capture methods, and I will simply ignore any answer that gives capture tips anyway.
I will upload the original footage and a stabilized version.
PS: I know I could stabilize more, but I wanted to keep the movement natural and did not want too much crop.
-
Haven't seen:
a. anything non-machine-learning based that helped with deblurring
b. anything impressive from machine-learning stuff either
SeedVR2 has the parameters input_noise_scale and latent_noise_scale, which might help to avoid over-sharpening/smoothing, but it's probably too slow on normal hardware to play around with it and a source before finding the right balance of settings, or knowing for sure whether it can help at all.
I would recommend stabilizing (and probably prefiltering) before throwing any machine learning stuff at it.
To illustrate, I attached samples of the output when directly throwing different machine-learning deblurring approaches at your source. (Might add GRLIR later too, but it's horribly slow.)
Cu Selur
Ps.: Okay, GRLIR will take a while; even SeedVR2 is faster,...
Last edited by Selur; 28th Mar 2026 at 09:15.
users currently on my ignore list: deadrats, Stears555, marcorocchini -
If you absolutely, positively do not want to prevent this issue from arising in the first place, the only way to salvage the footage in a satisfying way (based on past experience, which still seems valid some 8 years later judging by "Selur"'s remarks above, despite great advancements in all kinds of A.I.-enhanced processing in the mean, really fu**ing mean time) would be frame interpolation – but beware: that is an insane amount of work, as it cannot be fully automated, as far as I know, or as far as I knew back in 2018.
I had to resort to this to fix about 35 min. of irreplaceable footage made with a camera that had a defective optical stabilizer, which likewise exhibited many blurred frames after stabilizing. I had to check every single frame and add corresponding interpolation commands for each one that was jerky / blurry (nearly 5,000 out of about 50,000 frames total) to a list that was then used as input for Avisynth frame interpolation functions (mostly FrameSurgeon, also Morpheus, and, for a few problematic frames which exhibited ugly artifacts after processing with either of those two, the more rudimentary Morph function, which simply blends adjacent frames). The AutoIt-based program Sawbones, made by Doom9 member "StainlessS", was of great help for that task (one rather awkward keystroke shortcut would add the proper command, whereas having to type each one manually would have been overwhelmingly tedious and more prone to error). Those tools may have improved since then, and new tools could have been developed; you would have to ask the Avisynth wizards there (if you do, please post the link to the corresponding thread here, I'd be interested to read it, even though I sure hope I'll never have to do that kind of sh*t ever again...).
I'll stress this one more time: that is some INSANE kind of work. No one in their right mind would willingly consider this a regular part of a workflow, be it in a personal or professional context.
Quote from “poisondeathray” on Doom9, 2018-01-02:
For detection, I wouldn't trust most approaches on important footage. Too much room for error. Even if it's 95% accurate you risk missing bad sections; but even worse - false positives mean you risk losing good original frames.
Moreover, doing it yourself and previewing gives you the option to evaluate the results. There are several different interpolation function variants and various settings that might get better results if your "default" result was poor. For example, for one section a larger blocksize might help; for another, maybe dct=1, etc.
Or sometimes, the interpolated result is worse than the original, then you can *see* that and use another method or adjust accordingly.
But brainstorming some ideas - maybe frame differences. The problem with Avisynth runtime functions such as YDifferenceFromPrevious is that they will fail with 2 sequential blurry frames. Or something normal, like turning on a light, would be flagged inappropriately. Lots of scenarios for misdetection.
Maybe motion vectors. Or maybe something like Deshaker's log file run through some Excel script to filter the results. Large deltas, maybe in rotation, should be areas where there is large motion.
I don't know of any way to automatically/accurately detect an unwanted "blurred frame".
Last edited by abolibibelot; 28th Mar 2026 at 10:38.
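The frame-difference / sharpness-metric brainstorm above could be sketched roughly like this. This is a hypothetical pure-Python illustration, not a real detector: an actual script would pull luma planes out of VapourSynth or OpenCV, and the function names (`sharpness`, `flag_blurry`) and the 0.7 threshold are made up for the example. Comparing each frame against the median of its neighbours (rather than against an absolute threshold) is one cheap way to survive two blurry frames in a row, though it still has the misdetection risks described above.

```python
def sharpness(frame):
    """Mean absolute Laplacian of a 2-D list of luma values (0-255):
    a crude per-frame sharpness score (higher = more fine detail)."""
    h, w = len(frame), len(frame[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (frame[y - 1][x] + frame[y + 1][x] +
                   frame[y][x - 1] + frame[y][x + 1] - 4 * frame[y][x])
            total += abs(lap)
    return total / ((h - 2) * (w - 2))

def flag_blurry(scores, drop=0.7):
    """Flag frame i when its score falls below `drop` times the median
    of its +/-2 neighbours -- a relative test, since absolute thresholds
    break on scene changes or lighting shifts."""
    flagged = []
    for i, s in enumerate(scores):
        neighbours = scores[max(0, i - 2):i] + scores[i + 1:i + 3]
        if not neighbours:
            continue
        ref = sorted(neighbours)[len(neighbours) // 2]
        if s < drop * ref:
            flagged.append(i)
    return flagged

# Demo on two synthetic 8x8 "frames": a checkerboard (sharp) vs flat grey (blurry).
sharp = [[0 if (x + y) % 2 else 255 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
scores = [sharpness(sharp), sharpness(sharp), sharpness(flat), sharpness(sharp)]
print(flag_blurry(scores))  # the flat frame (index 2) gets flagged
```

As poisondeathray's quote warns, any such detector should only produce a candidate list for manual review, never drive the replacement automatically.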
-
Out of curiosity: which of the frames in the clip would you consider to be blurred? (Maybe one can write a script to detect them, once one has a 'ground truth' to know what to detect...) Or is it not the whole frame that you would consider blurred?
-
Yeah, that would be even more difficult in a case like this, where the jerkiness is caused by random camera motion amplified by a large magnification factor: there's no particular pattern, making it very tricky to assess which frames can be considered "good" and which are to be deemed "bad". In the specific case I described above, the vast majority of "bad" frames had a distinctive aspect (the blurriness was caused by short streaks of spontaneous vertical jerkiness, always with the same offset and a regular temporal pattern, while the camera itself was mostly steady) and were surrounded by sharp frames, making the interpolation fairly easy (at least when there was no complex motion involved, like people walking). Now, trying to interpolate, e.g., 3 very blurry frames in a row based on less-than-flawless-but-still-sorta-okay-I-guess adjacent frames is bound to cause more artifacts, and more headaches.
-
Ah shit, I wrote a long message but the token expired...
Edit: Here is an easier video to see what I mean with the changes between sharp and blurry.
On the new video it looks like this in Avidemux:
66ms is unsharp, 100ms is sharp, 133ms is unsharp, 166ms is sharp, 200ms is a bit unsharp, 233ms is sharp, 266ms is unsharp, 300ms is a bit sharper, 333ms is sharp, 366ms is unsharp. It's really clearly visible here. You have to look at the area left of his eye.
On the video from before it was a bit different, like: 933ms - 1000ms is sharp, 1033ms is unsharp, 1066ms is sharp, 1100ms - 1333ms is unsharp, then 1366ms - 1433ms sharp.
Something like that.
But I don't know if the video with the brown horse is just normal sensor-burn-in stuff when an object moves. The second one, with the white horse, is in my opinion not normal, and I think easier to fix, because there are more sharp frames; the gap between sharp and blurry is shorter.
Edit 2: And the two BasicVSR videos did not look that good in my opinion, and the SeedVR one was just horrible.
Edit 3: And about interpolation: I guess you don't mean frame interpolation like RIFE, right? Because that would not work. And I'm the enemy of complicated workflows. So yes, I need a task where I can save the settings, then drag-and-drop files, click start, and that's it (maybe I would need some time to "develop" such automatic stuff, but in daily life I would not do complicated stuff, that's right). That's why I use Hybrid and not the command line.
Edit 4: I'm now trying to fork cuvista from GitHub. Vid.stab I already forked, I guess, but I noticed afterwards that it is too slow xD
Cuvista seems to be ok-ish on the speed-quality trade-off, but I need some more time. And that's just stabilizing. If I had something I could implement there to fix this kind of blurriness, I would like that. But it seems the only stuff available is AI, which only looks good on 240p videos because you can't see the artifacts.
Last edited by Platos; 28th Mar 2026 at 12:25.
-
Hmm... I look at frames, not ms, since your source is VFR iirc, so this might not be correct, but I translate this:
Quote: "66ms is unsharp, 100ms is sharp, 133ms is unsharp, 166ms is sharp, 200ms is a bit unsharp, 233ms is sharp, 266ms is unsharp, 300ms is a bit sharper, 333ms is sharp, 366ms is unsharp. Its really good visible here."
to:
Frame number|verdict
0 | ?
1 | ?
2 | unsharp
3 | sharp
4 | unsharp
5 | sharp
6 | 'a bit unsharp'
7 | sharp
8 | unsharp
9 | 'a bit sharper'
10 | sharp
11 | unsharp
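For what it's worth, the ms-to-frame translation in the table above can be sketched like this, assuming the Avidemux timestamps step at roughly 30 fps (the function name `ms_to_frame` is made up for the illustration, and the VFR caveat above means the mapping is only approximate):

```python
def ms_to_frame(ms, fps=30.0):
    """Map an Avidemux-style millisecond timestamp to a frame index.
    Only valid for constant frame rate; with a VFR source this is
    at best an approximation."""
    return round(ms / 1000.0 * fps)

# The reported timestamps land on frames 2..11 at ~30 fps:
print([ms_to_frame(ms) for ms in (66, 100, 133, 166, 200, 233, 266, 300, 333, 366)])
# -> [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```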
Quote: "You have to look on the area left from his eye."
Okay (zooming in x4), you are referring to some fine-detail blur.
I doubt you can get rid of that, and I think it is inherent to the camera and the chosen compression, and not what one would normally describe as blurry.
Quote: "And the two BASIC VSR videos looked not that good in my opinion and the SeedVR was just horrible"
As expected.
You probably could use something like
Code:
clip = degrain.mcdegrainsharp(clip, csharp=0.70)  # test different strength and threshold values
to lessen the effect.
Cu Selur
-
Ah yes, I thought so.
But why use mcdegrain with csharp? Why not just use a regular sharpener? Is there a reason for that?
And the problem with sharpening, I think, is that it sharpens all frames. I already use sharpening after stabilizing (not on the video above, but normally). Or is there a reason you suggest a degrain filter? I don't understand the logic behind that. -
About my logic behind it.
Disclaimer: all of this is at least loosely based on what I remember Didée writing about it. It is by no means perfect; I'm just trying to paraphrase the idea as I understood it. (Maybe asking some of the larger LLMs could help here...)
Short: MCDegrainSharp performs motion-compensated temporal denoising with a bias toward preserving high-frequency detail, so perceived sharpness becomes more uniform due to selective temporal averaging.
Longer:
MCDegrainSharp reduces noise unevenly across spatial frequencies while trying to avoid blurring fine detail.
It introduces a constraint so that:
- Fine structures are less smoothed than in standard MCDegrain (you will/might still lose some)
- Flat/noisy areas are more aggressively averaged
- Noisy areas get smoother (less pseudo-detail)
- Real edges/textures are preserved or even slightly enhanced
So:
- Noise gets removed → reduces random high-frequency energy
- 'True/guessed' detail is preserved → remains consistent
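The "noise averages out, aligned detail stays" intuition behind those bullets can be illustrated with a toy 1-D example. This is only a sketch of the underlying principle: real MCDegrain works on motion-compensated frames found by block matching, none of which is modelled here, and the signal and noise levels are invented for the demo.

```python
import random

random.seed(0)  # deterministic demo

def temporal_average(frames):
    """Average N already-aligned frames pixel by pixel --
    the core of what motion-compensated temporal denoising does
    once the compensation step has lined the frames up."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def rms_error(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# A 1-D "edge" signal: flat 50, then a step up to 200.
clean = [50.0] * 8 + [200.0] * 8

# Five aligned copies with additive Gaussian noise (sigma = 10).
noisy = [[p + random.gauss(0, 10) for p in clean] for _ in range(5)]

avg = temporal_average(noisy)

# Averaging 5 frames cuts the noise roughly by sqrt(5), while the step
# edge survives, because it sits in the same place in every aligned frame.
print(rms_error(noisy[0], clean), rms_error(avg, clean))
```

The csharp step in MCDegrainSharp then leans against the small amount of real detail the averaging still costs, which is why the result reads as "more uniform sharpness" rather than plain blurring.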
---------------------
About why not use a regular sharpener, and how I ended up trying MCDegrainSharp: I don't know how to properly mask the areas that should be adjusted. The first thing that came to mind was some kind of motion mask, which popped 'Soothe', 'SeeSaw' and 'MCDegrainSharp' into my mind, and since I only have experience with 'MCDegrainSharp' (*), I tried that. (The other two might be worth a try too, and might even be better suited.)
Cu Selur
(*) I never saw a port of 'Soothe' to Vapoursynth, and only saw a port of 'SeeSaw' for Vapoursynth in G41Fun (which, iirc, was limited).
Ps.: To make it clear, I wasn't totally sure this would help, but I made a somewhat 'educated' guess about what would be worth a try.
PPs.: Created a new port of SeeSaw.py and noticed there is one in muvsfunc, and there is also a mod of Soothe.
Last edited by Selur; 29th Mar 2026 at 00:37.