This uses the stack method I mentioned earlier: stabilization, merge, and lsfmod. (A median-based filter could also be used instead of merge.)
There is no question that the typed text and handwriting are more legible and cleaner here, with a definitely higher signal-to-noise ratio than in that Neat Video example. So if the criterion was legible text/handwriting... this is it. If the criterion was denoising, this is still it. If you say otherwise, I'm going to question your eyesight or your sanity!
Some crops and full frames are in the zip below.
NeatVideo_Truthler
[Attachment 60893]
Stabilize, merge, lsfmod
[Attachment 60894]
-
That's fantastic, Poisndeathray; it even made some numbers clearer to read. Wow!
-
This is your standard stack technique. It is a form of temporal super-resolution. It's only suitable for "static" objects, not things like moving people or objects; otherwise you'd get ghosting contamination on frames. The idea is signal averaging (merge, median, or a similar math formula) over a range of frames, which reduces noise and increases "signal". You've heard of averaging VHS captures, or image stacking for noise reduction in Photoshop; it's the same basic idea. You need the frames to be spatially aligned, and that's what stabilization is for: you remove the camera motion. The more "solid" the stabilization, the better your results.
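As a rough illustration of the signal-averaging idea (not the AviSynth pipeline itself), here is a minimal Python sketch: noise averages out across aligned frames, so the stacked pixel lands closer to the true value. The numbers (signal 128, sigma 20, 32 frames) are invented for the demo.

```python
import random
import statistics

random.seed(0)  # deterministic demo

TRUE_VALUE = 128   # underlying "signal" for one pixel (invented)
NOISE_SIGMA = 20   # per-frame noise level (invented)
N_FRAMES = 32      # frames in the stack

# Observe the same pixel across N spatially aligned frames.
frames = [TRUE_VALUE + random.gauss(0, NOISE_SIGMA) for _ in range(N_FRAMES)]

single_frame_error = abs(frames[0] - TRUE_VALUE)
stacked_error = abs(statistics.mean(frames) - TRUE_VALUE)

print(f"single frame off by {single_frame_error:.2f}")
print(f"32-frame mean off by {stacked_error:.2f}")
# The noise standard deviation of the mean shrinks by roughly sqrt(N),
# so the stacked value usually sits much closer to the true signal.
```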
For videos where you want to revert back to the original motion, you reverse the stabilization (basically this involves tracking, stabilizing using the tracking data, applying your filters/transforms, then "reversing" the stabilization using the same tracking data). So it's easier in a GUI program like Blender or Natron, i.e. programs with keyframes. I explained how to do it, with examples and a demo package, on doom9 a few years back, when discussing how to replicate Ikena results on their demo videos using AviSynth merge or median.

If you don't need the original motion of the video and just want to see the text, you don't need the "reverse-stabilize" part; it's just a single averaged image.

A "moving window" for denoising can be used, where the current "poster" frame N uses +/- X frames on either side. Again, this is easier to do with keyframes, because sometimes you want a truncated window on one side, or maybe a longer window on the other side. For example, you might come to a scene change, or a portion that's badly stabilized that you don't want to contaminate your results with. Or maybe you have frames that are very solidly stabilized and want to include more of those. A dynamically changing window length is more easily handled in a program with keyframes.
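A minimal sketch of the truncated moving-window idea, in Python rather than a keyframed GUI. The frame values and the scene-change index here are invented for illustration, and scene-cut detection is assumed to happen elsewhere.

```python
# Hypothetical clip, reduced to one pixel value per frame for clarity.
frames = [10, 12, 11, 13, 9, 11, 10, 50, 52, 51]  # scene change at index 7

SCENE_CHANGE = 7   # assumed: detected elsewhere (e.g. by a scene-cut filter)
RADIUS = 3         # ideal window: current frame +/- RADIUS

def window_mean(frames, n, radius, cut):
    """Average a window around frame n, truncated so it never
    crosses the scene-change boundary at index `cut`."""
    lo = max(0, n - radius)
    hi = min(len(frames), n + radius + 1)
    if n < cut:
        hi = min(hi, cut)   # don't pull in frames after the cut
    else:
        lo = max(lo, cut)   # don't pull in frames before the cut
    window = frames[lo:hi]
    return sum(window) / len(window)

# Frame 6 sits right before the cut: its window is truncated on the
# right, so it averages frames 3..6 only.
print(window_mean(frames, 6, RADIUS, SCENE_CHANGE))
```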
I'll upload a portion of pre-stabilized video and the script I used for that screenshot. This is 10bit422 ut video, and the frame range is 445-506 from the original.
https://www.mediafire.com/file/hen23m4tguuctp7/445-506_stab_ut_10bit422.avi/file
I used AviSynth and 50/50 weighting for each merge, but if you had some particularly "good" frames you could weight them higher. Maybe there is a merge helper function in avs or vpy that takes more than 2 inputs. TemporalMedian(radius=x) takes up to 12 frames before/after, but is 8-bit only. I played with it briefly, but the results were a bit worse than the mean script. You probably don't need this many entries; there are diminishing returns.
Code:
a=AVISource("445-506_stab_ut_10bit422.avi")
a10 = a.trim(10,-1)
a11 = a.trim(11,-1)
a12 = a.trim(12,-1)
a13 = a.trim(13,-1)
a14 = a.trim(14,-1)
a15 = a.trim(15,-1)
a16 = a.trim(16,-1)
a17 = a.trim(17,-1)
a18 = a.trim(18,-1)
a19 = a.trim(19,-1)
a20 = a.trim(20,-1)
a21 = a.trim(21,-1)
a22 = a.trim(22,-1)
a23 = a.trim(23,-1)
a24 = a.trim(24,-1)
a25 = a.trim(25,-1)
a26 = a.trim(26,-1)
a27 = a.trim(27,-1)
a28 = a.trim(28,-1)
a29 = a.trim(29,-1)
a30 = a.trim(30,-1)
a31 = a.trim(31,-1)
a32 = a.trim(32,-1)
a33 = a.trim(33,-1)
a34 = a.trim(34,-1)
a35 = a.trim(35,-1)
a36 = a.trim(36,-1)
a37 = a.trim(37,-1)
a38 = a.trim(38,-1)
a39 = a.trim(39,-1)
a40 = a.trim(40,-1)
a41 = a.trim(41,-1)
ba = merge(a10,a11)
bb = merge(a12,a13)
bc = merge(a14,a15)
bd = merge(a16,a17)
be = merge(a18,a19)
bf = merge(a20,a21)
bg = merge(a22,a23)
bh = merge(a24,a25)
bi = merge(a26,a27)
bj = merge(a28,a29)
bk = merge(a30,a31)
bl = merge(a32,a33)
bm = merge(a34,a35)
bn = merge(a36,a37)
bo = merge(a38,a39)
bp = merge(a40,a41)
c1 = merge(ba,bb)
ca = merge(bc,bd)
cb = merge(be,bf)
cc = merge(bg,bh)
cd = merge(bi,bj)
ce = merge(bk,bl)
cf = merge(bm,bn)
cg = merge(bo,bp)
da = merge(c1,ca)
db = merge(cb,cc)
dc = merge(cd,ce)
dd = merge(cf,cg)
ea = merge(da,db)
eb = merge(dc,dd)
merge(ea,eb)
lsfmod(strength=200, defaults="slow")
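For what it's worth, the binary merge tree above is mathematically just a flat mean of the 32 clips: every 50/50 merge halves each input's weight, so each leaf ends up weighted 1/32. A quick sanity check in Python, with plain numbers standing in for frames:

```python
# Stand-ins for the 32 trimmed clips a10..a41: one pixel value each.
values = list(range(10, 42))  # 32 values

def merge(x, y):
    """50/50 merge, like AviSynth's Merge() with default weight."""
    return (x + y) / 2

# Fold pairwise, level by level, exactly like the b../c../d../e.. tree.
level = values
while len(level) > 1:
    level = [merge(level[i], level[i + 1]) for i in range(0, len(level), 2)]

tree_result = level[0]
flat_mean = sum(values) / len(values)
print(tree_result, flat_mean)  # both 25.5
```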
-
I understand why you didn't post the video. You can use lossless UT Video and SendGB.com to share larger videos.
-
Have you seen this?
"I'll upload a portion of pre-stabilized video and the script I used for that screenshot. This is 10bit422 ut video, and the frame range is 445-506 from the original.
https://www.mediafire.com/file/hen23m4tguuctp7/445-506_stab_ut_10bit422.avi/file"
-
^^Because it is the poster's prerogative which hosting service he chooses.
And his file(s) may reside on the server longer than yours, which will expire in a matter of days, rendering your entire 'sermon' useless for anyone who stumbles upon this topic at a later date. Not that it has much value anyway.
-
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS
-
ahh you whiners
You get full speed with a MediaFire download (it can saturate any line) if you use a downloader with concurrent connections. Quit using "ghetto" downloaders.
Use Adblock Plus or similar if using a browser, or a decent downloader, and you won't get any ads or adware.
If you're getting malware, quit using the internet until you learn how to use it safely.
https://drive.google.com/file/d/1pAq4u_YyjSIUi0tMqaVCze923dkiFfZt/view?usp=sharing
-
MediaFire had harmful JS, it really wasn't about the obnoxious ads.
- If you block the JS, you can't download.
- If you allowed the JS, you were infected.
They proved themselves to be incompetent many years ago. Year after year, they spewed malware. I see confirmed malware from as recently as 2019, and suspected issues from 2020. They really suck.
-
I've been using MediaFire for years with no problems. Malware is the user's fault; MediaFire is not responsible for what's hosted on their platform. It is the user's responsibility to check what's being downloaded.
-
-
Maybe I've been lucky for the last 10 or so years I've used MediaFire. Or maybe it's because I check every file I download and make sure it is what I wanted to download.
-
I must have stumbled on the wrong topic.
I was looking for one about how marvellous Neat Video is (in the wrong hands), and instead I find an argument about file hosters.
A classic trolling technique. Make a comment and watch everyone else fall out about it whether it has foundation or not.
-
Well, I think it is clear to all here, except the OP perhaps, that while Neat Video can be a quite good denoiser, it isn't necessarily the "best", but that it can be helped a LOT by careful adjustment of the settings (erring on the side of caution and doing less reduction) and by pre-processing and using multiple techniques, based on the type of noise. Also, there are a number of "quite good" denoise options available, some using FOSS software.
And Neat Video (along with those others) can also easily be set to SUCK, in the wrong hands.
[Attachment 60926]
Scott
-
Please Re-Read post 154.
It IS the original noisy footage, but stabilized
Apply the script to reproduce the results of the screenshot.
If I had not provided that video, someone might stabilize it differently and would not be able to reproduce the results. Now anybody can 100% verify the results; it's for transparency of testing.
Ask if you don't understand something
-
May I ask why you transformed the original NV12 color space into 10-bit YUV? Lossless UT Video has a lot of color spaces; why didn't you use the UT Video codec's own formats?
Have you read about that codec here? http://forum.doom9.net/showthread.php?p=1765782
Last edited by Truthler; 25th Sep 2021 at 10:22.
-
Very easily. Just follow the instructions here:
https://youtu.be/j1581xbTrqk?t=137
https://youtu.be/wH-Pv2WOX10?t=50
https://youtu.be/FnSbpPXlmvo?t=51
-
The original AVC was YV12, not NV12. Both are 8-bit 4:2:0, but NV12's storage configuration is slightly different from YV12's. This has implications for how certain programs handle video and specific pixel formats; it's not a trivial difference if it's mishandled, as there can be quality and interpretation issues.
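A toy sketch of that storage difference (the sample values are made up): YV12 is fully planar with the V plane stored before the U plane, while NV12 keeps a single interleaved UV plane after the Y plane.

```python
# Toy 4x2 image: 8 luma samples, 2 chroma samples per plane (4:2:0).
Y = [16, 17, 18, 19, 20, 21, 22, 23]
U = [100, 101]
V = [200, 201]

# YV12: planar, Y then V then U (note the V-before-U order).
yv12 = Y + V + U

# NV12: semi-planar, Y then one plane of interleaved U/V pairs.
nv12 = Y + [s for pair in zip(U, V) for s in pair]

print(yv12)  # [...Y..., 200, 201, 100, 101]
print(nv12)  # [...Y..., 100, 200, 101, 201]
# Same samples, different byte order: a filter that assumes the wrong
# layout reads chroma garbage, which is why mishandling it matters.
```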
I used UT Video in 10-bit 4:2:2 because the stabilization accuracy is higher, at least in theory. 10-bit has more code values and higher precision, so you can effectively represent pixel changes more accurately. Does it make a difference? Very slightly, and less than you would think, because averaging so many frames evens things out, and the source quality isn't great. If you add ConvertBits(8) before the merge, you can compare the results at 8-bit vs. 10-bit. I might test later at 16-bit or 32-bit float to see if it makes a larger difference, but I suspect it will be negligible.
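To illustrate the precision point (with invented pixel values, not data from this clip): an 8-bit pipeline rounds to an integer at every merge, and those rounding errors can accumulate through the merge tree, while a higher-precision pipeline rounds once at the end.

```python
values = [100, 101, 101, 102, 102, 103, 103, 105]  # 8 stand-in pixels

def merge8(x, y):
    # 8-bit merge: the result is rounded to an integer at every step.
    return (x + y + 1) // 2

def fold(vals, merge):
    # Pairwise merge tree, like the AviSynth script above.
    while len(vals) > 1:
        vals = [merge(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]
    return vals[0]

eight_bit = fold(values, merge8)                      # rounds 7 times
high_prec = fold(values, lambda x, y: (x + y) / 2)    # exact mean

print(eight_bit, high_prec)  # 103 vs 102.125: cumulative rounding drift
```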
And there was some "AI" discussion earlier for denoising -
"AI" / neural net does not have to "create new details" when denoising - you can train for other accessory sub tasks that assist with denoising
eg. "AI" can be help forward and backward propogation and temporal image alignment. It's essentially a more advanced form of motion compensation or stabilization. This can help with tasks such as improving denoising quality - as you see earlier alignment helped with SMDegrain, but also the mean stack . Some of the temporal machine learning algorithms use this to reduce the issues caused by single image algorithms when applied to video
Other tasks are image segmentation, such as masks and object boundary delineation and separation . In the example with the blurred wall above the guy's head - if that was segmented properly, the wall wouldn't have been blurred. Segmentation us used frequently for "colorization" algorithms, but it can help with denoising tasks too
-
To help with the denoising accuracy and quality. If you look at SMDegrain, the pre-stabilization improved its results too. Neat Video should improve too, but Neat Video only has a 5-frame window for temporal denoising, so it should produce worse results if you use it "regularly" with 1 instance on pre-stabilized input.
Doesn't your denoising really work on footage with real motion?
Only in limited situations such as this, where you want to read text on a stationary object. Everything is stationary; only the camera moves. It's explained in post 154.
It can for cases like this, because you re-apply the motion data to put back the camera motion. It's a "clean plate" technique.
But if you have someone walking across the view, it will contaminate the averaging data and you will get ghosting. You have to use more advanced techniques, like roto for the background layer, to denoise the background separately.
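A tiny Python illustration (invented pixel values) of why a median stack handles a transient moving object better than a mean, as hinted in the first post: the mean gets pulled toward the passing object (ghosting), while the median simply rejects the outlier frames.

```python
import statistics

# One background pixel observed across 9 aligned frames.
# Background is ~30; a person passing through covers it on two frames.
stack = [30, 31, 29, 30, 200, 210, 30, 31, 29]

mean_px = statistics.mean(stack)      # ghosted: pulled toward the person
median_px = statistics.median(stack)  # rejects the two outlier frames

print(round(mean_px, 1), median_px)  # 68.9 vs 30
```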
-
Here is a split-screen comparison for those same frames, with the motion re-applied ("de-stabilized") as described in post 154. It uses UT Video 8-bit 4:2:0.
https://drive.google.com/file/d/12ntV67WQ7lE_KYr-j8d0shoksjJCMKCT/view?usp=sharing
Pay attention to:
1) The peripheral edges on the "clean" half: normally there would be black borders there (the side effect of any stabilization), but a "cheap" method of edge fill was used (basically a blurred background). The other commonly used alternative is a slight zoom, but I wanted to show the same zoom % for a side-by-side comparison.
2) The tracking/stabilization was OK, but it was not "perfect". You can tell because the "halves" slightly "fall apart", i.e. there is a bit of room for improvement. This means that if you tracked and stabilized more carefully, you should be able to get even better results. As mentioned in post 154, the quality of the noise reduction using stack methods is proportional to the accuracy of the alignment.
3) The "poster" frame is frame 21 (ie. it was stabilized with frame 21 as reference), because frame 21 is the same as frame 466 (the frame range is 445-506, and 445+21=466) , to be consistent with the earlier screenshots from smdegrain and the others at frame 466, so you could compare more easily. So on frame 21 (or 466 original numbering) notice there is no black border or edge fill - because that's the reference frame where it was stabilized against. We are getting a bit off topic, but you can get "perfect" edge fill for every frame by keyframing the poster frame and using stabilization sets around the current "N" frame