VideoHelp Forum
  1. I need to improve some precious footage of my late grandmother. Her funeral was on Christmas Day 2015, and at the end of a movie about that day that I'm making for my brother, I'm including this casual discussion I filmed in her apartment exactly two years before, on Christmas Day 2013.
    Now let's talk about technical stuff...

    I filmed most of this while sitting in front of a large window, with a compact camera which struggled to maintain a good, stable exposure. So the interior and the people talking are mostly very dark because of the daylight coming from that window (the highlights are almost blown out). I want to pre-filter those sequences to recover as much detail and color in the shadows as possible without affecting the highlights (or even recover some detail in the highlights too). So far I've used variations of this script:

    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
    FFVideoSource("20131224_145257.m2ts", threads=1)
    Autolevels()
    HDRAGC(coef_gain=2.25, max_gain=5.00, coef_sat=1.25, max_sat=1.75, corrector=0.80, reducer=1.2, mode=2, passes=4, black_clip=0.50)
    AutoAdjust(auto_gain=true, dark_limit=1, bright_limit=1, gamma_limit=10, dark_exclude=0.5, bright_exclude=0.5, gain_mode=0, temporal_radius=10)
    ...which already provided a significant improvement from this:

    [Image: 201312241453 natif2.png]


    ...to this (I've tweaked the parameters since I made those screencaps, though I'm not sure the result is better overall):

    [Image: 201312241453 HDRAGC puis Autoadjust (meilleur contraste).png]
    (HDRAGC then AutoAdjust)

    [Image: 201312241453 Autoadjust puis HDRAGC.png]
    (AutoAdjust then HDRAGC)



    So, my questions:
    - Can I do better than that?
    - Are my parameters roughly correct for these plugins, and is their order logical and practical? (For instance, putting AutoAdjust after HDRAGC seems to result in less enhancement in the shadows but more natural contrast; see above.)
    - Are there other plugins worth testing in a case like this?
    - Is it unavoidable to lose picture quality / fine detail / sharpness when applying filters like HDRAGC / AutoAdjust?

    In fact I later removed AutoAdjust altogether (and increased the "coef_gain" value in HDRAGC to compensate), deciding that this would be good enough, that I would then tweak some more within the NLE software, applying gamma/contrast with different values to different zones (as it's very painstaking and frustrating to fiddle with this in AvsPMod -- one frame will seemingly be improved by a change while another segment gets much worse), and be done with it (this was before those other issues prevented me from getting it over with). In combination with the other two, AutoAdjust further lightens the dark areas, but at the expense of overall contrast and, it would seem, fine detail (clothes and skin textures for instance), while it also affects the highlights, making the whole picture lose crispness.
    Would you agree? Would it be better with other parameters?
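    For reference, here is a sketch of the pared-down script described above, keeping the original HDRAGC parameters; the raised coef_gain value is only a placeholder, not the exact figure used:
    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
    FFVideoSource("20131224_145257.m2ts", threads=1)
    Autolevels()
    # AutoAdjust removed; coef_gain raised to compensate (placeholder value)
    HDRAGC(coef_gain=3.00, max_gain=5.00, coef_sat=1.25, max_sat=1.75, corrector=0.80, reducer=1.2, mode=2, passes=4, black_clip=0.50)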

    [Image: AvsPMod 20131224145257 1.png]
    (Autolevels + HDRAGC + AutoAdjust)

    [Image: AvsPMod 20131224145257 2 (un peu plus sombre mais plus naturel).png]
    (AutoLevels + HDRAGC)

    [Image: AvsPMod 20131224145257 natif.png]
    (unfiltered)


    So, thank you for telling me whether this can be improved further (and how), or whether it's already pretty much the maximum enhancement I can hope for.
    Last edited by abolibibelot; 28th Aug 2016 at 21:23.
  2. So... no one for this one?
    Again, is the result I already achieved satisfactory, considering the native state of the source, or is there a way to improve it further?
    I think my request is pretty straightforward, and I provided enough examples (without being too “wordy”, as someone said in another thread) for knowledgeable AviSynth users to offer some advice, one way or another. I can provide a video sample if required.
    Last edited by abolibibelot; 30th Aug 2016 at 17:35.
  3. So... no one knows these filters? No one knows anything about this kind of issue? Or what?
  4. First, it looks like you did a nice job -- so it's hard to know what else you expect.

    Second, speaking for myself, I would never try to do this in Avisynth. I would pull it into Resolve where there is more control and you can see what you are doing.
  5. Originally Posted by abolibibelot View Post

    In fact I later removed AutoAdjust altogether (and increased the "coef_gain" value in HDRAGC to compensate), deciding that this would be good enough, that I would then tweak some more within the NLE software, applying gamma/contrast with different values to different zones (as it's very painstaking and frustrating to fiddle with this in AvsPMod -- one frame will seemingly be improved by a change while another segment gets much worse), and be done with it (this was before those other issues prevented me from getting it over with). In combination with the other two, AutoAdjust further lightens the dark areas, but at the expense of overall contrast and, it would seem, fine detail (clothes and skin textures for instance), while it also affects the highlights, making the whole picture lose crispness.
    Would you agree? Would it be better with other parameters?
    Try lowering AutoAdjust's dark_limit to 0.001. At the end of the script, denoise a bit (these plugins tend to increase noise, which might be killing fine detail).
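    A minimal sketch of that suggestion folded into the original script; FFT3DFilter is only one possible denoiser (it is not named in the thread and would need to be installed):
    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
    FFVideoSource("20131224_145257.m2ts", threads=1)
    Autolevels()
    HDRAGC(coef_gain=2.25, max_gain=5.00, coef_sat=1.25, max_sat=1.75, corrector=0.80, reducer=1.2, mode=2, passes=4, black_clip=0.50)
    # dark_limit lowered as suggested; the other AutoAdjust parameters are kept from the original script
    AutoAdjust(auto_gain=true, dark_limit=0.001, bright_limit=1, gamma_limit=10, dark_exclude=0.5, bright_exclude=0.5, gain_mode=0, temporal_radius=10)
    # light denoise at the end, since these gain filters amplify noise
    FFT3DFilter(sigma=1.5)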
  6. First, it looks like you did a nice job -- so it's hard to know what else you expect.
    Thanks for this first reply.

    It's way better, for sure, but not quite right. And it can be almost right at one point but awful later in the video (a video sample would show it: with a given set of parameters, selected while trying to improve a particular frame, the picture becomes overlit / oversaturated when the exposure suddenly changes). So for the moment I've loaded the videos with those parameters and then cut them inside the NLE, so as to apply internal effects (mostly gamma and contrast) with different parameters for each section, either to the pre-filtered videos or to the native ones when the result looks better, with a cross-fade of about 1 s between adjacent cuts to smooth out the transition.

    Since I don't have much experience with AviSynth, I figured that someone would know a better way to do it, to produce a more homogeneous result right away and possibly avoid the hassle of these manual corrections.

    Second, speaking for myself, I would never try to do this in Avisynth. I would pull it into Resolve where there is more control and you can see what you are doing.
    I've never heard of Resolve before; apparently it's a full-fledged professional editor specialized in color treatment. Maybe a bit overkill in this case, and at any rate too expensive to purchase. There seems to be a free “lite” version, probably with many restrictions.
    With AVSPMod I can see what I'm doing, even if the interface is kinda clunky compared with a “real” GUI. But I don't quite feel like I know what I'm doing.
  7. Resolve Lite has some limitations on stabilizing and de-noising but is otherwise very complete.

    (Not sure I'm much use to you in AviSynth, as my HDRAGC days long ago gave way to luma and chroma curves.)
  8. Try lowering AutoAdjust's dark_limit to 0.001. At the end of the script, denoise a bit (these plugins tend to increase noise, which might be killing fine detail).
    Thanks for these tips. What does “dark_limit” do exactly?
    As for denoising, I can give it a try, but those are usually slow filters, and since I have to load those files with AVFS it might put too much strain on my system (it's a 2009 PC with a Pentium Dual Core E5200 and 4 GB of RAM, not really suited for heavy-duty video editing -- it works fine when editing 720p AVCHD, but what I'm doing here seems to be about the maximum it can handle). The alternative would be to export the pre-filtered videos as lossless intermediates, but that would be even more hassle, and I would lose the ability to adjust the parameters and see them applied right away.
  9. You can Trim() the video and apply different filters to different sections. If you need smooth transitions between clips you'll need to overlap the trims and Dissolve() between them. Or you can create several filtered versions of the video and use ReplaceFramesSimple() to select which is output when. But this is manually intensive if you have more than a few sections to deal with.
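    A minimal sketch of the ReplaceFramesSimple() route, assuming the RemapFrames plugin is installed; the filter settings and frame ranges below are purely illustrative:
    Code:
    src    = FFVideoSource("20131224_145257.m2ts", threads=1)
    strong = src.Autolevels().HDRAGC(coef_gain=2.25, max_gain=5.00)   # heavier lift for the darkest sections
    gentle = src.HDRAGC(coef_gain=1.50, max_gain=3.00)                # milder version for the rest
    # start from the gentle version and swap in the stronger one for the listed frame ranges
    ReplaceFramesSimple(gentle, strong, mappings="[0 499] [1200 1799]")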
  10. You can Trim() the video and apply different filters to different sections. If you need smooth transitions between clips you'll need to overlap the trims and Dissolve() between them. Or you can create several filtered versions of the video and use ReplaceFramesSimple() to select which is output when. But this is manually intensive if you have more than a few sections to deal with.
    Thanks for the advice. It was indeed manually intensive the way I did it (I made cuts about every 20 seconds, for about 30 minutes' worth of footage), which is why I was hoping that a better set of parameters with these filters, or other filters, could improve the whole videos (at least better than these did), adapting “smartly” to the varying exposure conditions... (Maybe I was hoping for some kind of magic...) So these are the best-suited tools available in AviSynth, they can't produce a significantly better result on single frames, and there's no way of getting better temporal adaptation, short of adapting the parameters manually for sections with different characteristics?

    Does overlapping by X frames and using “Dissolve” have the same effect as a crossfade over X frames in a GUI editor?
    How can I handle multiple Trim + Dissolve operations in the script?

    If I proceed this way and only use the pre-filtered clips (with some parts left unfiltered), instead of mixing native and pre-filtered footage on the timeline, would it be wise to convert the framerate at this stage? That is, wouldn't it slow down processing too much in this configuration (if blending / interpolating frames), and wouldn't there be a risk of desynchronization?


    EDIT: Here's a 27 s sample from the longest of those videos, which I created with SelectRangeEvery(1800, 30) (about 1 s out of every minute):
    http://www.mediafire.com/download/i10vww62t38o4va/20131224_145353+W7+SelectRange+1800%...30+Xvid+q4.avi
    (Also in attachment, if the link doesn't work or if it's more convenient.)
    Last edited by abolibibelot; 31st Aug 2016 at 20:01.
  11. Originally Posted by abolibibelot View Post
    It was indeed manually intensive the way I did it (I made cuts about every 20 seconds, for about 30 minutes' worth of footage), which is why I was hoping that a better set of parameters with these filters, or other filters, could improve the whole videos (at least better than these did), adapting “smartly” to the varying exposure conditions...
    In my experience the automatic filters don't do well. You end up with brightness and colors that wander all over the place. You really want to use an NLE for that type of control.

    Originally Posted by abolibibelot View Post
    So these are the best-suited tools available in AviSynth,
    I don't know all AviSynth filters so I couldn't say. I tend to stick with ColorYUV(), Tweak(), RGBAdjust().

    Originally Posted by abolibibelot View Post
    they can't produce a significantly better result on single frames
    I wouldn't say that. I suspect someone who uses HDRAGC() a lot could do better. Or by using other filters.

    Originally Posted by abolibibelot View Post
    and there's no way of getting better temporal adaptation, short of adapting the parameters manually for sections with different characteristics?
    Again, I don't know all AviSynth filters. I gave the Trim() and ReplaceFramesSimple() options off the top of my head.

    Originally Posted by abolibibelot View Post
    Does overlapping by X frames and using “Dissolve” have the same effect as a crossfade over X frames in a GUI editor?
    Yes.

    Originally Posted by abolibibelot View Post
    How can I handle multiple Trim + Dissolve operations in the script?
    You know, you could read the manual. But...
    Code:
    WhateverSource()
    part1 = Trim(0,500).ColorYUV(cont_y=-100)
    part2 = Trim(471,1000).ColorYUV(cont_y=100)
    part3 = Trim(971,0)
    Dissolve(part1, part2, 30)
    Dissolve(last, part3, 30)
    Originally Posted by abolibibelot View Post
    would it be wise to convert the framerate at this stage?
    Since your editor isn't doing it right I would do it in the script.

    Originally Posted by abolibibelot View Post
    That is, wouldn't it slow down processing too much in this configuration (if blending / interpolating frames),
    It would slow down processing, obviously. "Too much" is subjective.

    Originally Posted by abolibibelot View Post
    and wouldn't there be a risk of desynchronization?
    One place you might have an issue is when the new frame rate can't match the exact duration of the source. For example, 24 frames of 25 fps video have a display time of 960 ms. If you convert that to 29.97 fps you will have either 28 frames (934.3 ms) or 29 frames (967.6 ms); i.e., you can't get exactly 960 ms with 29.97 fps video. That's not a big deal for one clip, but it could become an issue if you have many cuts/pastes. So you would want to perform the frame rate conversion first, then the cuts/pastes/dissolves after.
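    A minimal sketch of that ordering, reusing the Trim()/Dissolve() example from above; the frame numbers and ColorYUV() values are only illustrative:
    Code:
    FFVideoSource("20131224_145353.m2ts", threads=1)
    # convert the frame rate first, so every later cut lands on the final timebase
    ChangeFPS(30000, 1001)   # or ConvertFPS() / InterFrame() if blending / interpolation is wanted
    part1 = Trim(0, 500).ColorYUV(cont_y=-100)
    part2 = Trim(471, 0)
    Dissolve(part1, part2, 30)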

    In your sample video everything is clamped between Y=16 and 235 -- i.e., superblacks and superwhites have been crushed. I suspect that was caused by your processing, and that your source has values outside that range which could be recovered.
    Last edited by jagabo; 31st Aug 2016 at 20:47.
  12. I wouldn't say that. I suspect someone who uses HDRAGC() a lot could do better. Or by using other filters.
    That's the kind of “beast” I was hoping to find in this lair! ;^p

    You know, you could read the manual. But...
    Well, yes, I could read all the manuals in the world, but that's already a lot to process as it is... So, thanks for the example.

    Since your editor isn't doing it right I would do it in the script.
    The editor, if I understand this correctly, is doing the same as ChangeFPS in that regard, i.e. adding duplicated frames. Is it “doing it wrong”?
    (Apparently the framerate issue I mentioned in a previous thread -- in case that's what you mean here -- has been fixed with this newer version of Magix Video Deluxe, and it concerned the 29.97 FPS export, no matter what the source framerate was.)
    It may not be the smoothest method, but it seems to be the cleanest, the one least prone to damaging the picture quality, as I first guessed.

    ConvertFPS does this kind of thing:
    [Image: 20131224_145353 ConvertFPS ntsc_video.png]
    Maybe that's normal? Maybe it's less ugly in motion? I guess the framerate conversion methods you gave me (in this thread) were sorted from the simplest and least computationally intensive to the most complex and most computationally intensive (frame duplication > frame blending > frame interpolation).

    In this thread (in French, but with many comparison screencaps) Interframe is said to be significantly better than MFlowFPS, which is “technically outdated”:
    http://letransfert.soforums.com/t618-L-interpolation-avec-Interframe-SVPFLOW-VS-MVTOOLS2.htm

    Interframe produces this:
    [Image: 20131224_145353 Interframe 30000,1001.png]

    And ChangeFPS:
    [Image: 20131224_145353 ChangeFPS ntsc_video.png]
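    For reference, the three approaches being compared here, in script form; the InterFrame call assumes the SVPflow plugins are installed and uses version 2.x parameter names:
    Code:
    FFVideoSource("20131224_145353.m2ts", threads=1)
    # pick one of the three:
    ChangeFPS(30000, 1001)                    # duplicates frames: cleanest individual frames, jerkiest motion
    # ConvertFPS(30000, 1001)                 # blends neighbouring frames: smoother motion, ghosted frames
    # InterFrame(NewNum=30000, NewDen=1001)   # motion interpolation: smoothest, but can distort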

    It would slow down processing, obviously. "Too much" is subjective.
    Too much here means no longer having fluid or semi-fluid playback within the NLE; if, for instance, it takes 3 seconds just to display a single frame, it becomes very hard to work with.

    In your sample video everything is clamped between Y=16 and 235 -- i.e., superblacks and superwhites have been crushed. I suspect that was caused by your processing, and that your source has values outside that range which could be recovered.
    I didn't apply any processing for that sample. I just compressed it with VirtualDub. (By the way, I had to load it through AVFS: I can no longer load an AVS script into VirtualDub. Maybe that's because I installed 64-bit ffdshow? VirtualDubMod tells me there are two missing DLLs, but it works anyway, and it does load the AVS correctly. One more puzzling thing just when I don't need any more...)

    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
    FFVideoSource("20131224_145353.m2ts", threads=1)
    SelectRangeEvery(1800, 30)
    And yet, as the screencaps in the first post show, I was able to recover details in the seemingly crushed blacks and seemingly burnt whites.
    ...but apparently you're right, I don't get it :

    [Image: 20131224_145353 FFVideoSource.png]
    [Image: 20131224_145353 FFVideoSource + VirtualDub Xvid + AVISource.png]

    The first one is the source loaded with FFVideoSource: the levels look normal (they extend toward both ends). The second one is the compressed sample loaded with AVISource: the levels appear to be crushed, as you said, but the frame looks just the same... If I load the virtual AVI file, the levels look like the first one (normal), so something happened at the compression stage. Maybe I should have selected Xvid HD 720 instead of the default Xvid Home? -- No, that doesn't change a thing. Same result with VirtualDubMod. Lagarith, same.
    Avidemux conversion of the virtual AVI file seems fine (except that the first frame appears completely green in AVSPMod):
    20131224_145353 W7 SelectRangeEvery 1800,30 Avidemux Xvid q4.avi
    So what was wrong in VirtualDub? Maybe something in “Color depth”? (“Decompression format” is set to “Autoselect” and “Output format to compressor/display” is set to 24 bit RGB.)
  13. Originally Posted by abolibibelot View Post
    (Apparently the framerate issue I mentioned in a previous thread -- in case that's what you mean here -- has been fixed
    Yes, I remembered reading that other thread a few days ago.

    Originally Posted by abolibibelot View Post
    It may not be the smoothest method, but it seems to be the cleanest, the one least prone to damaging the picture quality, as I first guessed.
    Yes, but simple decimation/duplication produces jerky motion. Professional frame rate conversion usually blends fields to produce slightly smoother looking motion. At 60 fields/frames per second you don't notice the blending as much.

    Originally Posted by abolibibelot View Post
    I guess the framerate conversion methods you gave me (in this thread) were sorted from the simplest and least computationally intensive to the most complex and most computationally intensive (frame duplication > frame blending > frame interpolation).
    Yes.

    Originally Posted by abolibibelot View Post
    In this thread (in French, but with many comparison screencaps) Interframe is said to be significantly better than MFlowFPS, which is “technically outdated”:
    http://letransfert.soforums.com/t618-L-interpolation-avec-Interframe-SVPFLOW-VS-MVTOOLS2.htm
    Interframe is better with some video. It will usually produce blending rather than weird distortions in areas where it can't detect motion properly. But all motion interpolation techniques produce distortions to some extent. The professional versions allow you to control the process manually -- marking moving objects, etc.

    Originally Posted by abolibibelot View Post
    I didn't apply any processing for that sample. I just compressed it with VirtualDub.
    YUV video normally defines full black as Y=16, and full white as Y=235. So software that converts YUV to RGB normally expands that limited range to full range RGB where black is RGB=0 and white is RGB=255. Any Y values below 16 or above 235 are irretrievably crushed in the process. The default behavior in VirtualDub is to convert incoming YUV video to RGB in this manner -- which explains the loss. You can avoid this by performing the conversion yourself in AviSynth with ConvertToRGB(matrix="PC.601") to retain the full range. Or you can use ColorYUV(levels="PC->TV") to compress the full range YUV to limited range YUV.
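    A minimal sketch of both options in script form; the matrix choice for ConvertToRGB follows the rec.601 / rec.709 discussion a few posts further down:
    Code:
    FFVideoSource("20131224_145353.m2ts", threads=1)
    # Option 1: do the YUV -> RGB conversion in the script, keeping the full 0-255 range
    # ConvertToRGB(matrix="PC.601")        # "PC.709" for rec.709 / HD material
    # Option 2: stay in YUV and compress the full range into 16-235 before VirtualDub sees it
    ColorYUV(levels="PC->TV")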

    Originally Posted by abolibibelot View Post
    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth\plugins\ffms2.dll")
    FFVideoSource("20131224_145353.m2ts", threads=1)
    SelectRangeEvery(1800, 30)
    And yet, as the screencaps in the first post show, I was able to recover details in the seemingly crushed blacks and seemingly burnt whites.
    ...but apparently you're right, I don't get it :

    The first one is the source loaded with FFVideoSource: the levels look normal (they extend toward both ends). The second one is the compressed sample loaded with AVISource: the levels appear to be crushed, as you said, but the frame looks just the same...
    Because the program is displaying the YUV video as RGB with the same rec.601 matrix, crushing superblacks and superbrights, the same as VirtualDub.

    Originally Posted by abolibibelot View Post
    So what was wrong in VirtualDub? Maybe something in “Color depth”? (“Decompression format” is set to “Autoselect” and “Output format to compressor/display” is set to 24 bit RGB.)
    You can set Video -> Fast Recompress to prevent VirtualDub from converting incoming YUV to RGB. The YUV data will be passed directly to the compression codec. But you shouldn't be re-compressing your video at all if you want help filtering it. You're just adding another round of detail loss and compression artifacts. Use an M2TS cutter to trim out representative samples with no recompression.

    And why are your sources M2TS? Are they already compressed with a lossy codec? If so, you are losing detail and creating artifacts right off the bat.
  14. You can avoid this by performing the conversion yourself in AviSynth with ConvertToRGB(matrix="PC.601") to retain the full range.
    Why is it PC.601 in this case? Shouldn't it be PC.709 / Rec.709 for “HD” content?

    But all motion interpolation techniques produce distortions to some extent. The professional versions allow you to control the process manually -- marking moving objects, etc.
    So the best compromise here, with the tools I have and the processing constraints, would be blending frames, even though it produces some ugly results on individual frames?

    You can set Video -> Fast Recompress to prevent VirtualDub from converting incoming YUV to RGB.
    OK, sorry, I think you're the one who already told me that about a year ago... “He catches on quickly, but you have to explain things to him for a long time.” ;^D

    But you shouldn't be re-compressing your video at all if you want help filtering it. You're just adding another round of detail loss and compression artifacts. Use an M2TS cutter to trim out representative samples with no recompression.
    I figured it would be more practical to upload a small file, yet encoded with a relatively high bitrate so that the colors / exposure shouldn't be significantly affected. And it's easy to make small cuts from the whole file in one step with SelectRangeEvery. With TSMuxer (I don't know of any other tool which can directly cut M2TS files) doing the same thing is much more tedious: it only allows cutting parts one by one (a clever script might do the trick, but I wouldn't know how to write one that would, for example, make a first cut between 0 s and 1 s, then increment the cutting timecodes by 60 seconds until the end of the file, then join the parts into one file, all in one step). I tried MKVMerge with the “split by parts based on timecodes” option (which is supposed to output a single file with all the cuts if each interval is preceded by a “+”), but it doesn't work with those M2TS files; I spent almost an hour running tests trying to understand why: it ignores the timecodes and processes the whole file, the same without the AC3 audio, and the same if I take the newly created MKV as the source. (It works fine with an MP4 file, so the syntax is correct.)
    Isn't the second sample I provided fine enough for that purpose?

    And why are your sources M2TS? Are they already compressed with a lossy codec? If so, you are losing detail and creating artifacts right off the bat.
    Straight from a Panasonic camera... (A ZS7/TS10, precisely.) So yes, it's already compressed with a lossy codec, but I didn't do it, the camera did. (To be more precise, the files on the memory card have the .MTS extension; they're renamed to .M2TS when exported by the Panasonic software, but no actual conversion/compression takes place.) Do you know of any camera (apart from high-end professional models) which doesn't compress with a lossy codec?
  15. Originally Posted by abolibibelot View Post
    You can avoid this by performing the conversion yourself in AviSynth with ConvertToRGB(matrix="PC.601") to retain the full range.
    Why is it PC.601 in this case? Shouldn't it be PC.709 / Rec.709 for “HD” content?
    Yes, if it's 709 content you should use PC.709 to retain levels.

    Originally Posted by abolibibelot View Post
    So the best compromise here, with the tools I have and the processing constraints, would be blending frames, even though it produces some ugly results on individual frames?
    It's up to you. I prefer to have slightly jerky video without blending. Are you going from 25p to 29.97p or 59.94p? The latter will look better. Judder from 25p to 60p is very similar to judder from 24p to 60p -- what most people in the USA are used to seeing. 25p to 30p is going to have 5 little jerks every second.

    Originally Posted by abolibibelot View Post
    Isn't the second sample I provided fine enough for that purpose?
    I missed that clip. I'll look at it later. When you're trying to bring out dark details the blocking artifacts and loss of detail from compression make the video look bad. So a recommendation with a less compressed source may be different than that from a more compressed source.
  16. It's up to you. I prefer to have slightly jerky video without blending. Are you going from 25p to 29.97p or 59.94p? The latter will look better. Judder from 25p to 60p is very similar to judder from 24p to 60p -- what most people in the USA are used to seeing. 25p to 30p is going to have 5 little jerks every second.
    But didn't you say the opposite earlier? That duplicating frames wasn't the “right” way?
    Again, I have about 45 minutes at 29.97 FPS and 35 minutes at 25 FPS, and I was planning on exporting the whole movie at 29.97 FPS. Would it make sense to double the framerate in this case?
  17. Originally Posted by abolibibelot View Post
    But didn't you say the opposite earlier? That duplicating frames wasn't the “right” way?
    There is no right way. There is only personal preference. Duplicating frames from 25p to 30p is much more noticeable than duplicating frames from 25p to 60p (or duplicating fields to 30i).

    Originally Posted by abolibibelot View Post
    Again, I have about 45 minutes at 29.97 FPS and 35 minutes at 25 FPS, and I was planning on exporting the whole movie at 29.97 FPS. Would it make sense to double the framerate in this case?
    If you're not limited to 29.97 fps consider using 59.94 fps instead.
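    A minimal sketch of the 59.94 fps route, under the same assumptions as the earlier frame rate examples (InterFrame only if the SVPflow plugins are installed):
    Code:
    FFVideoSource("20131224_145353.m2ts", threads=1)   # the 25p source
    ChangeFPS(60000, 1001)                    # plain duplication to 59.94 fps: each source frame shown 2 or 3 times
    # InterFrame(NewNum=60000, NewDen=1001)   # or motion-interpolated 59.94 fps: smoother, may distort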