I need to improve some precious footage of my late grandmother. Her funeral was on Christmas Day 2015, and at the end of a movie about that day which I'm making for my brother, I'm including this casual discussion I filmed in her apartment exactly two years earlier, on Christmas Day 2013.
Now let's talk about technical stuff...
I filmed most of this sitting in front of a large window, with a compact camera which struggled to maintain a good and stable exposure. So the interior and the people talking are mostly very dark, because of the daylight coming from that window (almost blown highlights). And so I want to pre-filter those sequences to recover as much detail and color in the shadows as possible, without affecting the highlights (or even recover some detail in the highlights too). So far I've used variations of this script :
Code:
LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
FFVideoSource("20131224_145257.m2ts", threads=1)
Autolevels()
HDRAGC(coef_gain=2.25, max_gain=5.00, coef_sat=1.25, max_sat=1.75, corrector=0.80, reducer=1.2, mode=2, passes=4, black_clip=0.50)
AutoAdjust(auto_gain=true, dark_limit=1, bright_limit=1, gamma_limit=10, dark_exclude=0.5, bright_exclude=0.5, gain_mode=0, temporal_radius=10)

...which already provided a significant improvement from this :
...to this (I tweaked the parameters since I made those screencaps, even though I'm not sure it fared better overall) :
(HDRAGC then AutoAdjust)
(AutoAdjust then HDRAGC)
So, questions :
- Can I do better than that ?
- Are my parameters roughly correct for these plugins, and is their order logical and practical ? (For instance, it seems that putting AutoAdjust after HDRAGC results in less enhancement in the shadows but a more natural contrast, see above.)
- Are there other plugins worth testing in such a case ?
- Is it unavoidable to lose picture quality / fine detail / sharpness when applying filters like HDRAGC / AutoAdjust ?
In fact I later removed AutoAdjust altogether (and increased the "coef_gain" value in HDRAGC to compensate), deciding that this would be good enough, that I would do the remaining tweaking within the NLE software, with gamma/contrast applied with different values to different zones, and be done with it (before I had those other issues which prevented me from getting it over with). Fiddling with this in AvsPmod is very painstaking and frustrating : one frame will seemingly be improved by some change, but another segment will look much worse. In combination with the other two filters, AutoAdjust further lightens the dark areas, but at the expense of overall contrast and (it would seem) of fine detail (clothes and skin textures for instance), while it also affects the highlights, making the whole picture lose crispness.
Would you agree ? Would it be better with other parameters ?
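For illustration, the simplified chain described above would look something like this (the raised coef_gain value shown here is only a placeholder, not the figure actually used) :

Code:
LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
FFVideoSource("20131224_145257.m2ts", threads=1)
Autolevels()
# AutoAdjust dropped ; coef_gain raised to compensate (example value only)
HDRAGC(coef_gain=3.00, max_gain=5.00, coef_sat=1.25, max_sat=1.75, corrector=0.80, reducer=1.2, mode=2, passes=4, black_clip=0.50)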
(Autolevels + HDRAGC + AutoAdjust)
(AutoLevels + HDRAGC)
(unfiltered)
So, thank you for telling me if this can be improved further (and how), or if it's pretty much close to the maximum enhancement I can hope for.
-
So... no one for this one ?...
Again, is the result I already achieved satisfying, considering the native state of the source, or is there a way to improve it further ?
I think my request is pretty straightforward, and I provided enough examples (without being too “wordy”, as someone said in another thread) for knowledgeable Avisynth users to give some advice, one way or another. I can provide a video sample if required.
-
So... no one knows these filters ? No one knows anything about this kind of issue ? Or what ?
-
First, it looks like you did a nice job -- so it's hard to know what else you expect.
It's way better, for sure, but not quite right. And it can be almost right at some point yet awful later in the video (a video sample would show it : with a given set of parameters, selected while trying to improve a particular frame, the picture gets overexposed / oversaturated when the exposure suddenly changes). So for the moment I loaded the videos with those parameters, then cut them inside the NLE so as to apply internal effects (mostly gamma & contrast) with different parameters for each zone, either to the pre-filtered videos or to the native ones when the result seems better, with a cross-fade of about 1s between adjacent cuts to smooth out the transition.
Since I don't have much experience with Avisynth, I figured that someone would know a better way to do it, to produce a more homogeneous result right away and possibly avoid the hassle of these manual corrections.
Second, speaking for myself, I would never try to do this in Avisynth. I would pull it into Resolve where there is more control and you can see what you are doing.
With AvsPmod I can see what I'm doing, even if the interface is kinda clunky compared with a “real” GUI. But I don't quite feel like I know what I'm doing.
-
Try lowering AutoAdjust's dark_limit to 0.001. At the end of the script, denoise a bit (these plugins tend to increase noise -- this might be what's killing fine details).
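A minimal sketch of that suggestion applied to the script above (the denoiser is only an example -- FFT3DFilter with a low sigma, assuming that plugin is installed ; any light denoiser would do) :

Code:
LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
FFVideoSource("20131224_145257.m2ts", threads=1)
Autolevels()
HDRAGC(coef_gain=2.25, max_gain=5.00, coef_sat=1.25, max_sat=1.75, corrector=0.80, reducer=1.2, mode=2, passes=4, black_clip=0.50)
# dark_limit lowered as suggested
AutoAdjust(auto_gain=true, dark_limit=0.001, bright_limit=1, gamma_limit=10, dark_exclude=0.5, bright_exclude=0.5, gain_mode=0, temporal_radius=10)
# light denoising last, since the gain filters amplify noise (example settings)
FFT3DFilter(sigma=1.5, plane=4)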
As for denoising, I can give it a try, but those are usually slow filters, and since I have to load those files with AVFS it might put too much strain on my system (it's a 2009 PC with a Pentium Dual Core E5200 and 4GB of RAM, not quite suited for heavy-duty video editing -- it does work fine when editing 720p AVCHD, but what I'm doing here seems to be about the maximum it can put up with). The alternative would be to export the pre-filtered videos as lossless intermediates, but that would be even more hassle, and I would lose the ability to correct the parameters and have them applied right away.
-
You can Trim() the video and apply different filters to different sections. If you need smooth transitions between clips you'll need to overlap the trims and Dissolve() between them. Or you can create several filtered versions of the video and use ReplaceFramesSimple() to select which is output when. But this is manually intensive if you have more than a few sections to deal with.
-
You can Trim() the video and apply different filters to different sections. If you need smooth transitions between clips you'll need to overlap the trims and Dissolve() between them. Or you can create several filtered versions of the video and use ReplaceFramesSimple() to select which is output when. But this is manually intensive if you have more than a few sections to deal with.
Does overlapping by X frames and using “Dissolve” have the same effect as doing a crossfade over X frames in a GUI editor ?
How can I deal with multiple Trim + Dissolve in the script ?
If I proceed this way, and only use the prefiltered clips (with some parts left unfiltered), instead of mixing native and prefiltered footage on the timeline, would it be wise to convert the framerate at this stage ? I.e. wouldn't it slow down processing too much in this configuration (if blending / interpolating frames), and wouldn't there be a risk of desynchronization ?
EDIT : Here's a 27s sample, from the longest of those videos, which I created with SelectRangeEvery(1800, 30) (about 1s every minute) :
http://www.mediafire.com/download/i10vww62t38o4va/20131224_145353+W7+SelectRange+1800%...30+Xvid+q4.avi
(Also in attachment, if the link doesn't work or if it's more convenient.)
-
In my experience the automatic filters don't do well. You end up with brightness and colors that wander all over the place. You really want to use an NLE for that type of control.
I don't know all AviSynth filters so I couldn't say. I tend to stick with ColorYUV(), Tweak(), RGBAdjust().
I wouldn't say that. I suspect someone who uses HDRAGC() a lot could do better. Or by using other filters.
Again, I don't know all AviSynth filters. I gave the Trim() and ReplaceFramesSimple() options off the top of my head.
Yes.
You know, you could read the manual. But...
Code:
WhateverSource()
part1 = Trim(0,500).ColorYUV(cont_y=-100)
part2 = Trim(471,1000).ColorYUV(cont_y=100)
part3 = Trim(971,0)
Dissolve(part1, part2, 30)
Dissolve(last, part3, 30)
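For the ReplaceFramesSimple() route mentioned earlier, a minimal sketch (the function comes from the RemapFrames plugin ; the frame ranges and the filter applied are just placeholders) :

Code:
WhateverSource()
orig = last
filtered = orig.ColorYUV(cont_y=100)
# frames 100-200 and 500-650 come from the filtered version, everything else from the original
ReplaceFramesSimple(orig, filtered, mappings="[100 200] [500 650]")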
It would slow down processing, obviously. "Too much" is subjective.
One place you might have an issue is when the new frame rate can't match the exact duration of the source. For example, 24 frames of 25 fps video have a display time of 960 ms. If you convert that to 29.97 fps you will get either 28 frames (934 ms) or 29 frames (968 ms). Ie, you can't get exactly 960 ms with 29.97 fps video. That's not a big deal for 1 clip. But it could become an issue if you have many cuts/pastes. So you would want to perform the frame rate conversion first, then the cuts/pastes/dissolves after.
In your sample video everything is clamped between Y=16 and 235 -- ie, superblacks and superwhites have been crushed. I suspect that was caused by your processing, and that your source has values outside that range that could be recovered.
-
I wouldn't say that. I suspect someone who uses HDRAGC() a lot could do better. Or by using other filters.
You know, you could read the manual. But...
Since your editor isn't doing it right I would do it in the script.
(Apparently the framerate issue I mentioned in a previous thread -- in case that's what you mean here -- has been fixed with this newer version of Magix Video Deluxe, and it concerned the 29.97FPS export, no matter what the source framerate was.)
It may not be the smoothest method, but it seems to be the cleanest, the one that's the least prone to damage the picture quality, as I first guessed.
ConvertFPS does this kind of thing :
Maybe it's normal ? Maybe it's less ugly in motion ? I guess the framerate conversion methods you gave me (in this thread) were sorted from the simplest and least computer-intensive to the most complex and most computer-intensive (frame duplication > frame blending > frame interpolation).
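For reference, a rough sketch of what those three approaches look like in a script, going from 25 fps to 29.97 fps (InterFrame needs the SVPflow plugins installed, and the preset shown is only an example) :

Code:
# frame duplication -- fastest, but motion gets jerky
ChangeFPS(30000, 1001)

# frame blending -- smoother, at the cost of ghosting on movement
# ConvertFPS(30000, 1001)

# motion interpolation -- smoothest when it works, heaviest on the CPU
# InterFrame(NewNum=30000, NewDen=1001, Preset="Medium", Cores=2)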
In this thread (in French, but with many comparison screencaps) Interframe is said to be significantly better than MFlowFPS, which is « technically outdated » :
http://letransfert.soforums.com/t618-L-interpolation-avec-Interframe-SVPFLOW-VS-MVTOOLS2.htm
Interframe produces this :
And ChangeFPS :
It would slow down processing, obviously. "Too much" is subjective.
In your sample video everything is clamped between Y=16 and 235 -- ie, superblacks and superwhites have been crushed. I suspect that was caused by your processing, and that your source has values outside that range that could be recovered.
Code:
LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
FFVideoSource("20131224_145353.m2ts", threads=1)
SelectRangeEvery(1800, 30)
...but apparently you're right, I don't get it :
The first one is the source loaded with FFVideoSource : the levels look normal (they extend toward both ends). The second one is the compressed sample loaded with AVISource : the levels appear to be crushed like you said, yet the frame looks just the same... If I load the virtual AVI file, the levels look like the first one (normal), so something happened at the compression stage. Maybe I should have selected Xvid HD 720 instead of the default Xvid Home ? -- No, that doesn't change a thing. Same result with VirtualDubMod. Lagarith, same.
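(As an aside, one quick way to check where the levels actually sit, directly in the script -- the built-in Histogram filter in "levels" mode overlays a luma/chroma histogram on the frame :)

Code:
FFVideoSource("20131224_145353.m2ts", threads=1)
# overlay a levels histogram to see whether Y extends below 16 or above 235
Histogram(mode="levels")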
Avidemux conversion of the virtual AVI file seems fine (except the first frame appears completely green in AVSPMod) :
20131224_145353 W7 SelectRangeEvery 1800,30 Avidemux Xvid q4.avi
So what went wrong in VirtualDub ? Maybe something in “Color depth” ? (“Decompression format” is set to “Autoselect” and “Output format to compressor/display” is set to 24 bit RGB.)
-
Yes, I remembered reading that other thread a few days ago.
Yes, but simple decimation/duplication produces jerky motion. Professional frame rate conversion usually blends fields to produce slightly smoother looking motion. At 60 fields/frames per second you don't notice the blending as much.
Yes.
Interframe is better with some video. It will usually produce blending rather than weird distortions in areas where it can't detect motion properly. But all motion interpolation techniques produce distortions to some extent. The professional versions allow you to control the process manually -- marking moving objects, etc.
YUV video normally defines full black as Y=16, and full white as Y=235. So software that converts YUV to RGB normally expands that limited range to full range RGB where black is RGB=0 and white is RGB=255. Any Y values below 16 or above 235 are irretrievably crushed in the process. The default behavior in VirtualDub is to convert incoming YUV video to RGB in this manner -- which explains the loss. You can avoid this by performing the conversion yourself in AviSynth with ConvertToRGB(matrix="PC.601") to retain the full range. Or you can use ColorYUV(levels="PC->TV") to compress the full range YUV to limited range YUV.
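A minimal sketch of those two options, for a full-range rec.601 source (for HD material the PC.709 matrix would apply instead, as noted further down) :

Code:
# option 1 : do the RGB conversion yourself, keeping the full 0-255 range
ConvertToRGB(matrix="PC.601")

# option 2 : stay in YUV, but compress the full range into limited (16-235) range first
# ColorYUV(levels="PC->TV")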
Because the program is displaying the YUV video as RGB with the same rec.601 matrix, crushing superblacks and superbrights, the same as VirtualDub.
You can set Video -> Fast Recompress to prevent VirtualDub from converting incoming YUV to RGB. The YUV data will be passed directly to the compression codec. But you shouldn't be re-compressing your video at all if you want help filtering it. You're just adding another round of detail loss and compression artifacts. Use an M2TS cutter to trim out representative samples with no recompression.
And why are your sources M2TS? Are they already compressed with a lossy codec? If so, you are losing detail and creating artifacts right off the bat.
-
You can avoid this by performing the conversion yourself in AviSynth with ConvertToRGB(matrix="PC.601") to retain the full range.
But all motion interpolation techniques produce distortions to some extent. The professional versions allow you to control the process manually -- marking moving objects, etc.
You can set Video -> Fast Recompress to prevent VirtualDub from converting incoming YUV to RGB.
But you shouldn't be re-compressing your video at all if you want help filtering it. You're just adding another round of detail loss and compression artifacts. Use an M2TS cutter to trim out representative samples with no recompression.
Isn't the second sample I provided fine enough for that purpose ?
And why are your sources M2TS? Are they already compressed with a lossy codec? If so, you are losing detail and creating artifacts right off the bat.
-
Yes, if it's 709 content you should use PC.709 to retain levels.
It's up to you. I prefer to have slightly jerky video without blending. Are you going from 25p to 29.97p or 59.94p? The latter will look better. Judder from 25p to 60p is very similar to judder from 24p to 60p -- what most people in the USA are used to seeing. 25p to 30p is going to have 5 little jerks every second.
I missed that clip. I'll look at it later. When you're trying to bring out dark details, the blocking artifacts and loss of detail from compression make the video look bad. So a recommendation made from a less compressed source may be different from one made from a more compressed source.
-
It's up to you. I prefer to have slightly jerky video without blending. Are you going from 25p to 29.97p or 59.94p? The latter will look better. Judder from 25p to 60p is very similar to judder from 24p to 60p -- what most people in the USA are used to seeing. 25p to 30p is going to have 5 little jerks every second.
Again, I have about 45 minutes in 29.97 FPS and 35 minutes in 25 FPS, and I was planning on exporting the whole movie at 29.97 FPS. Would it make sense to double the framerate in this case ?
-
There is no right way. There is only personal preference. Duplicating frames from 25p to 30p is much more noticeable than duplicating frames from 25p to 60p (or duplicating fields to 30i).
If you're not limited to 29.97 fps consider using 59.94 fps instead.