Hi everyone,
As a continuation of the previous discussion, I dived into AviSynth and QTGMC. To sum up the other post for people who weren't involved: I'm already done with the capturing. I have around 200 video files (PAL 720x576, 25 fps, lossless HuffYUV, with the overscan masked). Around 150 are family footage taken from the same camcorder, and the other 50 videos vary from TV shows to cartoons.
I plan to keep the family footage in lossless interlaced form for archiving, so I will own two copies - lossless, and lossy post-QTGMC. The other 50 videos are not that important, and I plan to keep only the lossy QTGMC file.
I have AviSynth+ with all the 32-bit plugins needed for QTGMC, and I'm also using AvsPmod. I followed the excellent blog post by Andrew Swan; however, I'm left with a couple of questions. Andrew uses FFmpegSource2 to load the video file, but I assume that's because of his source format. I just use AviSource for HuffYUV. Here's the script:
I use AssumeTFF() because otherwise the video movement was wobbly (which I guess also means the default, when not stated, is BFF). It's an old PC with 4 cores, so I used Prefetch(2).

Code:
SetFilterMTMode("QTGMC", 2)
AviSource("D:\Copy.avi")
AssumeTFF()
QTGMC(Preset="Slower")
Prefetch(2)

Here are the questions:
1. Andrew states that ConvertToYV12() is needed by QTGMC. Is that still the case? I was able to run the script without it, there was no mention of ConvertToYV12() in the QTGMC documentation, and I couldn't tell a difference in the preview window when using that command.
2. Andrew resizes his video using BilinearResize(720,540) to fix the aspect ratio caused by the pixel aspect ratio (the original size was 720x480). Doesn't that hurt the final quality, because everything gets stretched a bit? Or is it very minimal and worth it for the proper aspect ratio? I'm assuming 720x540 is the proper frame size for NTSC. What would be the proper one for PAL?
3. Based on reading I did, I know cropping is normally bad. I masked all the overscan with black bars using VirtualDub, retaining the original resolution. I didn't mask more than 20 pixels on each side (X) and 24 top/bottom (Y) - I assume that's the limit, because that's what was actually hidden by old CRT tubes. However, Andrew crops the black bars and resizes back to the original resolution with Spline64Resize(), which also does some sharpening in the process. Is that a common workflow when saving viewing-oriented files? Or does that resize usually hurt quality a bit too much, and it's better to just watch it with the black bars around?
4. Andrew uses FFmpeg to save the videos. However, it was suggested before to use x264/x265, as you get better quality per bitrate. I assume most devices can read H.264. Is the difference in quality worth using x265, or not really? Also, lordsmurf suggested using Hybrid (I'm assuming it works with .avs scripts - I haven't tried it yet). I normally just set a constant quality of RF 22 and leave everything at the defaults, but I wonder if I should toggle other options, and whether the RF number should perhaps be smaller (as I can only do this once for the cartoons, for example, since I won't be keeping their lossless files).
5. It was mentioned in the previous post that going for 50 fps (on PAL) is not always best - sometimes the video ends up looking funky and it's better to leave it at 25 fps (using SelectEven()). But I wonder if I can make assumptions for videos taken from the same source (same camcorder). If a single video from the camcorder (assuming my father didn't fiddle with the default camera settings) uses TFF, looks better at 50 fps, and has a chroma offset of 4 pixels down that gets corrected with AviSynth - can I apply the same settings to ALL the videos taken from that source? If so, maybe instead of using Hybrid I'll write a quick command-line script that uses x264 and applies the same settings to all those 150 videos recursively.
6. Is there anything else I should add to the base QTGMC script above to make it a good baseline for all videos?
Thanks again everyone!
-
Last edited by Okiba; 29th Sep 2020 at 04:53.
-
I rarely use Slower. It blurs.
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
QTGMC() used to work only with YV12. Newer versions work with YV12, YUY2, YV24, and maybe some others. And since the video is interlaced at that point he should have used ConvertToYV12(interlaced=true). Doing that conversion incorrectly blurs the colors of the two fields together. This manifests as ghosting of colors in fast moving shots.
If you want your video to be displayed with the proper aspect ratio you need to resize to a frame size that matches the DAR of the video, or keep the original frame size and encode with SAR/DAR flags (so the player or TV resizes for you while playing). Note that nobody watches video on a 4:3 CRT anymore. Pretty much everything you watch will be upscaled to an HD display, typically 1280x720, 1920x1080, or 3840x2160 (and 4:3 material will typically be pillarboxed within those frame sizes). Every resize has the potential to introduce artifacts so it's best to resize as few times as possible. Using AR flags allows you to keep the original frame size and have the player/TV upscale to the final display size with a single resize. Unfortunately, some players/TVs will ignore the AR flags in MP4 or MKV files and display the video with the wrong aspect ratio. So it's safest to resize to a frame size that matches the aspect ratio of your source.
Any 4:3 frame size is appropriate for 4:3 DAR video. 320x240, 640x480, 720x540, 960x720, 1440x1080, just to mention a few.
Any 4:3 frame size. More common with PAL is 384x288, 768x576.
Cropping isn't necessarily bad. It depends on what you're doing and doing it correctly. For example PAL DVD requires a 720x576 (or 704x576) frame at 25 fps. So you can't just crop away black borders of a 720x576 source. You would have to follow up with adding borders back (which is fine, essentially the same as masking) or resize back to 720x576 (possibly introducing artifacts and distortions). But if you're not producing DVDs there may be no reason to restore the frame to 720x576.
Since you're using AviSynth you can do it there. Use Crop() to crop then AddBorders() to restore the frame size with perfect black (or whatever color you want) borders.
The typical CRT hid much more than that. More like 5 percent at each edge. And the amount varied with the temperature, how long the TV has been on, etc.
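For reference, 5 percent per edge works out to noticeably more than the 20/24-pixel masks mentioned above (a trivial Python sketch; the helper name is mine):

```python
# Rough overscan arithmetic: how many pixels ~5% per edge hides
# on a 720x576 PAL frame, versus a 20 px (X) / 24 px (Y) mask.
def overscan_pixels(width, height, fraction=0.05):
    return round(width * fraction), round(height * fraction)

x, y = overscan_pixels(720, 576)
print(f"~{x} px hidden per left/right edge, ~{y} px per top/bottom edge")
```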
Again, it depends on what your final output is.
Most modern devices, yes. Note that x264 is a particular encoder; the standard is h.264 (AKA AVC).
Most modern devices support h.265. The goal of h.265 was to produce the same quality as h.264 at half the bitrate. I don't think they've come anywhere near that goal, especially with SD video (a lot of the advances apply more to larger frames).
For the most part you should stick with the presets and tunings. For example preset "slow", tune "animation". Go with the slowest preset you can stand. I usually use CRF 18 and preset slow for SD material.
For handheld camcorder video 50 fps with QTGMC will almost always look better. Again, your target format may limit your choices. 50 fps progressive isn't supported by DVD. You would probably want to reinterlace back to 25i. You can assume the basic properties of the video are the same when shot with the same analog camcorder. But always check.
Note that field order is critical. An interlaced frame packs two images into one frame. One image is in all the even numbered scanlines (0,2,4...), the other in all the odd numbered scanlines (1,3,5...) - these are called fields. The two images are displayed separately and sequentially (at 50 fields per second) on an interlaced PAL TV. The field order corresponds to which of those two fields is displayed first. If you use the wrong field order the two fields will be displayed in the wrong temporal order. You will get a two-steps-forward-one-step-back jerky motion -
Why are you doing this massive amount of work? Have you looked at 2-3 minutes of video that you've run through QTGMC and compared it to your original capture? Have you done a quick back-of-the-envelope calculation of how much time this will take for 200 videos? It has to be massive, even if each video is "only" 20-30 minutes.
I think your time would be much better spent editing, adding titles (so later generations have some idea who they're looking at), doing gamma and color corrections, etc.
Your time, your choice, but I'd sure do the comparison, and if the "after" doesn't knock my socks off compared to "before," I'd forget about it. -
Phew. I had to catch up on a lot of information with Google to reply.
I rarely use Slower. It blurs.
QTGMC() used to work only with YV12
But if you're not producing DVDs there may be no reason to restore the frame to 720x576.
So it's safest to resize to a frame size that matches the aspect ratio of your source.
And let's say it's indeed 4:1 - why should I resize it to be 768x576 and not 1920x1440? (both are 4:1)
The typical CRT hid much more than that. More like 5 percent at each edge. And the amount varied with the temperature, how long the TV has been on, etc.
For the most part you should stick with the presets and tunings
x264.exe qtgmc.avs --crf 18 --preset slow --output "results.mp4"
In case it's animation, I will be adding "--tune animation"
You will get a two-steps-forward-one-step-back jerky motion
You can assume the basic properties of the video are the same when shot with the same analog camcorder. But always check.
Why are you doing this massive amount of work? Have you looked at 2-3 minutes of video that you've run through QTGMC and compared it to your original capture? Have you done a quick back-of-the-envelope calculation of how much time this will take for 200 videos? It has to be massive, even if each video is "only" 20-30 minutes.
Yes I have. To my eyes at least, there's quite a big improvement using QTGMC, mainly because it seems to do much more than just de-interlacing (I even peeked into the QTGMC.avs file to see exactly what's happening in the background). The original time investment was to figure out how to properly capture, and how to store the captured footage for archiving. I didn't plan to dive into post-capture processing. However, I had a couple of problematic videos - time-base issues with my capture setup. A fix was suggested to me for those problematic videos (about 10), and that's where I learned about QTGMC and AviSynth. There was an initial setup/research time to learn how to use it, and I'm still learning here now. However, I can pretty easily write a small shell script that will convert the 200 video files for me automatically, based on the generic QTGMC script mentioned above. It's a dedicated machine, so I don't mind leaving it on and letting it do the encoding. Unless I'm missing some huge time-sink I'm not aware of.
I think your time would be much better spent editing, adding titles (so later generations have some idea who they're looking at), doing gamma and color corrections, etc.
Thanks! -
I had forgotten about the original thread, and didn't realize that you have indeed done a before/after comparison. I apologize for wasting your time with my post because I didn't remember the original post. Since you can see a significant difference, your work is almost certainly worth doing.
Yes, QTGMC also does denoising, and it is quite good at that. The deinterlacing is pretty much as good as it gets, so you don't degrade the video too much (deinterlacing always degrades the video).
I think there might be much faster denoisers than QTGMC which, for VHS PAL captures, might do an equal or better job, but since you have QTGMC almost sorted out, you will probably be best served by sticking with that. -
Where does this "4:1" come from? You used it several times. I guess you mean 4:3. More below...
Analog video doesn't have pixels. It's a continuous waveform. The way it's drawn on the face of an analog CRT produces a 4:3 aspect ratio picture (under optimal conditions). The ITU specifies that PAL video is captured as 704x576 or 720x576. I'm going to simplify here: the 704x576 frame has the 4:3 image (it's really 702.something, but 704 is generally considered close enough); the 720x576 frame has a little extra at the left and right in case the source or cap is slightly off center. The general equation that relates the display aspect ratio to the frame dimensions is:
Code:
DAR = FAR * SAR

DAR = Display Aspect Ratio -- the final shape of the picture that's viewed
FAR = Frame Aspect Ratio (frame_width:frame_height)
SAR = Sampling Aspect Ratio -- the "distance" between samples horizontally and vertically
Code:
DAR = FAR * SAR
4:3 = 704:576 * SAR
4/3 = 704/576 * SAR
(4 * 576) / (3 * 704) = SAR
2304 / 2112 = SAR      (divide both values by 192)
12/11 = SAR
12:11 = SAR
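The same derivation in executable form (a sketch using Python's fractions module; the function name is mine):

```python
from fractions import Fraction

def sampling_aspect_ratio(dar_w, dar_h, frame_w, frame_h):
    """Solve DAR = FAR * SAR for SAR, returned as a reduced fraction."""
    dar = Fraction(dar_w, dar_h)
    far = Fraction(frame_w, frame_h)
    return dar / far

# PAL: the 4:3 image lives in a 704x576 frame
print(sampling_aspect_ratio(4, 3, 704, 576))  # 12/11
# NTSC: the 4:3 image lives in a 704x480 frame
print(sampling_aspect_ratio(4, 3, 704, 480))  # 10/11
```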
The tutorials have many errors, approximations, and simplifications. I think he used those values to help you keep close to the correct aspect ratio when resizing to 720x540 at the end.
I'm not sure you should even use the animation tuning for VHS caps of cartoons. That tuning works well for sharp, low-noise cartoons -- which isn't the case with VHS tapes. It will depend on how much you sharpen and how much noise reduction you use. Give it a try and compare for yourself. -
You might get some ideas for using QTGMC on PAL VHS captures in this thread from doom9.org:
Restoring old VHS video by Avisynth -
I apologize for wasting your time with my post
Where does this "4:1" come from?
I would crop whatever you need to get rid of junk at the edges of the frame, then encode with a 12:11 SAR ("--sar 12:11" on the x264 command line). Alternatively, resize your 720x576 cap to 786x576, crop away whatever you don't want, and encode as square pixel (--sar 1:1).
It seems like the first option is better (encoding with the 12:11 SAR flag), as this means only a single resize happens and I don't need to resize in my AviSynth flow. Unless the AviSynth resize can actually improve the total quality using the right method.
Give it a try and compare for yourself.
You might get some ideas for using QTGMC on PAL VHS captures in this thread from doom9.org:
EDIT:
I cropped one of the videos and created two x264 files: one with --sar 12:11, and one resized using LanczosResize(786,576) with the --sar 1:1 flag. I took a screenshot with VLC and uploaded them both here so you can have a look if you wish. First of all, there seems to be a single pixel of difference - nothing major. To my eyes, the resized one seems to have a bit more noise compared to the softer 12:11 no-resize video.
By the way, I noticed x264.exe only handles video (well, that makes sense). I assume it means I have to use ffmpeg for the audio. Here's the final x264.exe command I plan on using:
x264.exe qtgmc.avs --crf 18 --preset slow --sar 12:11 --output "results.mp4"
If someone can quickly extract the proper ffmpeg command out of it, that would be cool. If not, I'll figure it out by reading the command-line options ffmpeg has.
Last edited by Okiba; 30th Sep 2020 at 06:06.
-
Yes.
Yes, some devices will ignore the SAR flags and play the video at the frame aspect ratio.
VLC does respect the SAR. Kodi might depend on the particular device.
Yes, potentially.
There's the question. There are some upscaling filters in AviSynth (nnedi3 for example) that work much better than the upscalers built into most players/TVs. So upscaling to 1440x1080 with nnedi3 might look better than letting the TV do the upscaling. But upscalers work best with video that's sharp to begin with -- VHS is not sharp. The best upscalers manage to retain sharp edges without creating oversharpening halos or aliasing artifacts.
Most players use something like a BicubicResize() to scale video -- that slightly sharpens the picture. LanczosResize() is even sharper. Sharpening increases noise as well as edges -- hence the increased noise.
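The sharpening comes from the resampling kernel's negative lobes: samples just across an edge get negative weight, which makes the edge overshoot. A small sketch of the standard Lanczos-3 kernel (plain Python, illustrative only):

```python
import math

# Lanczos resampling kernel with a = 3 taps. The negative values
# off-center are what give Lanczos (and, to a lesser degree, bicubic)
# its sharpening effect during a resize.
def lanczos(x, a=3):
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

for x in [0.0, 0.5, 1.0, 1.5, 2.5]:
    print(f"L({x}) = {lanczos(x):+.4f}")  # note the sign flips away from center
```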
Something like:
Code:
ffmpeg -i "qtgmc.avs" -c:v libx264 -preset slow -crf 18 -vf setsar=12/11 -c:a aac "results.mp4"
-
Yes, some devices will ignore the SAR flags and play the video at the frame aspect ratio.
Kodi might depend on the particular device.
So upscaling to 1440x1080 with nnedi3 might look better than letting the TV do the upscaling.
ffmpeg -i "qtgmc.avs" -c:v libx264 -preset slow -crf 18 -vf setsar=12/11 -c:a aac "results.mp4" -
You can see by the shape of the video when it's played. But I don't think VLC lets you see the SAR or DAR values numerically. You can use MediaInfo for that.
Kodi on the RPi will display the video correctly. I have one too.
Yes.
You probably just have a "licensing safe" build that doesn't include it. I recommend you get a build that has it. Or use another encoder. You can get a list of codecs included with your current build with "ffmpeg -codecs". -
You can use MediaInfo for that.
I recommend you get a build that has it.
I *think* I have everything I need to keep pushing the project forward. I'm attaching the final example for review (I also attached the original lossless video). I fixed everything you guys mentioned except the TBC issues, which sadly I can't fix with the current setup (so what was fixed: the aspect ratio, the chroma offset of 4 pixels, and the de-interlacing). Feel free to review the final results.
I'm summing it up for myself (to make sure I didn't miss anything) and for future readers who might find it useful:
- The 'generic' camcorder QTGMC script looks like so:
Code:
SetFilterMTMode("QTGMC", 2)
AviSource("E:\loseless_hufyuv_file.avi")
AssumeTFF()
QTGMC(Preset="Slower", EdiThreads=3)
Crop(20, 6, -20, -6)
ChromaShift(L=-4)
Prefetch(3)
- The masking (black) bars are cropped.
- To avoid multiple resizes, we let the TV/monitor do the resizing, but add SAR as an option to the encoder (12:11 for a PAL video that starts as 720x576 but gets cropped).
- The ffmpeg command is the following:
Code:
ffmpeg -i "qtgmc.avs" -c:v libx264 -preset slow -crf 18 -vf setsar=12/11 -c:a aac "results.mp4"
-
If you want to switch to 64 bit AviSynth+ there's a replacement for ChromaShift(): ChromaShiftSP(). It uses X and Y instead of C and L, and the values are the opposite sign (negated). And it supports odd as well as even shifts. Even non-integer values. So ChromaShiftSP(Y=4) is equivalent to ChromaShift(L=-4).
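The C/L to X/Y sign flip described above can be captured in a tiny helper (a hypothetical Python illustration, not part of either plugin):

```python
# ChromaShift uses C (horizontal) and L (vertical); ChromaShiftSP uses
# X and Y with the values negated. This maps one set of arguments
# to the other.
def chromashift_to_sp(c=0, l=0):
    return {"X": -c, "Y": -l}

print(chromashift_to_sp(l=-4))  # {'X': 0, 'Y': 4}  -> ChromaShiftSP(Y=4)
```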
And it's even possible to use 32 bit filters within 64 bit AviSynth with MP_pipeline(). It's a bit awkward and slower than all native 64 bit.
Code:
MP_Pipeline("""
### platform: win64
AviSource("E:\loseless_hufyuv_file.avi")
AssumeTFF()
QTGMC(Preset="Slower", EdiThreads=3)
Crop(20, 6, -20, -6)
Prefetch(3)
### ###

### platform: win32
LoadPlugin("c:\program files (x86)\AviSynth+\plugins+\chromashift.dll")
ChromaShift(L=-4)
### ###
""")
That particular video could use some level/color adjustments. Maybe something like Tweak(cont=1.2, bright=-40, sat=1.2). -
there's a replacement for ChromaShift(): ChromaShiftSP().
That script will run on both 32 bit and 64 bit AviSynth+.
That particular video could use some level/color adjustments. Maybe something like Tweak(cont=1.2, bright=-40, sat=1.2).
a. I don't have the eye yet to know "what's good" and "what's not".
b. I'm not sure my monitor is calibrated or even remotely correct.
I will apply what you suggested to that specific video and see if I can tell what you tried to do there. The 4-pixels-down and 1-to-the-left ChromaShift, for example, was suggested by lordsmurf. I wasn't even aware of it until I learned about it, and then I kept seeing it all the time.
4 pixels seems OK to me. But I couldn't tell a difference when moving just one pixel. Hopefully it's correct now (as I'm going to apply it to all this specific CamCorder setup videos).
Thank you for the professional help jagabo. I appreciate your help! -
I thought 3 pixels was a little better than 4. You might also try sharpening the chroma.
Code:MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
-
I thought 3 pixels was a little better than 4
You might also try sharpening the chroma.
It's a bit awkward and slower than all native 64 bit.
Also, it seems MergeChroma needs to run on planar formats like YV12 (and my lossless video is YUY2). I added ConvertToYV12() just so I can share the results (including MergeChroma, 3 pixels down instead of 4, and the color/level correction). When I have some free time I will read about the difference between YV12 and YUY2 (and why my HuffYUV files are YUY2, and whether it's OK to change them to YV12).
Code:
SetFilterMTMode("QTGMC", 2)
AviSource("E:\test\SwissRaw.avi")
ConvertToYV12()
AssumeTFF()
QTGMC(Preset="Slower", EdiThreads=3)
Crop(20, 6, -20, -6)
ChromaShiftSP(Y=3)
MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
Tweak(cont=1.2, bright=-40, sat=1.2)
Prefetch(3)
Last edited by Okiba; 1st Oct 2020 at 09:50.
-
awarpsharp2
http://avisynth.nl/index.php/AWarpSharp2
Will just the 32-bit section be slower, or does using the MP_Pipeline scope mean that even the 64-bit part will run slower?
Also, it seems MergeChroma needs to run on planar formats like YV12 (and my lossless video is YUY2). I added ConvertToYV12() just so I can share the results (including MergeChroma, 3 pixels down instead of 4, and the color/level correction). When I have some free time I will read about the difference between YV12 and YUY2 (and why my HuffYUV files are YUY2, and whether it's OK to change them to YV12).
If you use ConvertToYV12 before a deinterlacer to convert 422 to 420, it has to use interlace=true, otherwise you will get chroma artifacts. ConvertToYV12(interlaced=true)
But since you're using some filter later that requires planar, just use ConvertToYV16(interlaced=true) instead and you can keep 422
Is Sat a global value I can apply to all videos, or is it too scene-specific?
Last edited by poisondeathray; 1st Oct 2020 at 10:22.
-
YV12 has the luma channel at 720x576 but the chroma channels at 360x288. YUY2 has the luma at 720x576 but the chroma at 360x576. Internally, YV12 is stored as 3 planes (YYYY... UUUU... VVVV...). YUY2 is stored interleaved (YUYVYUYV...)
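The plane sizes above can be sanity-checked with a small sketch (Python; the helper name and dict layout are mine, and YUY2 carries the same sample counts as planar YV16, just interleaved):

```python
# Per-plane dimensions of one 8-bit frame in each planar format.
# YV12 is 4:2:0 (chroma halved both ways); YV16 is 4:2:2 (chroma
# halved horizontally only) -- the planar equivalent of YUY2.
def frame_plane_sizes(width, height, fmt):
    if fmt == "YV12":
        return {"Y": (width, height),
                "U": (width // 2, height // 2),
                "V": (width // 2, height // 2)}
    if fmt == "YV16":
        return {"Y": (width, height),
                "U": (width // 2, height),
                "V": (width // 2, height)}
    raise ValueError(fmt)

for fmt in ("YV12", "YV16"):
    planes = frame_plane_sizes(720, 576, fmt)
    total = sum(w * h for w, h in planes.values())
    print(fmt, planes, total, "bytes")
```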
-
awarpsharp2
What do you need that is 32bit ?
MergeChroma requires planar input. YUY2 (8-bit 4:2:2) in planar equivalent would be YV16.
YV12 has the luma channel at 720x576 but the chroma channels at 360x288. YUY2 has the luma at 720x576 but the chroma at 360x576. Internally, YV12 is stored as 3 planes (YYYY... UUUU... VVVV...). YUY2 is stored interleaved (YUYVYUYV...)
interlaced=true
Does it make a difference if I de-interlace and use ConvertToYV16() afterwards, or if I first ConvertToYV16(interlaced=true) and then de-interlace?
Yes , but you can apply different filters or different settings to different sections by using Trim()
That's pretty impressive, Okiba.
EDIT:
Sample uploaded.
Last edited by Okiba; 1st Oct 2020 at 11:38.
-
In general, there is additional overhead with MP_Pipeline, so you'd expect it to be slower most of the time than if you ran natively in x64 with Prefetch(x).
But there are some cases where MP_Pipeline's threading model makes some operations faster. It uses a different threading model than global Prefetch.
So I'm probably making it simpler than it is, but the difference is in HOW the information is stored - the information itself is identical (4:2:2)? So some plugins expect a specific image format, and I can move between image formats. But quality-wise, YUY2 and YV16 will be the same?
Some programs handle different types of 8bit422, differently. For example, most Windows NLE's do not handle YUY2 or YV16 as YUV, they get converted to RGB.
But in avisynth - YUY2 and YV16 are interconvertible, losslessly. All types of 8bit422 are treated in avisynth as either YUY2 or YV16 (the conversion is sometimes done in the source filter)
Does it make a difference if I de-interlace and use ConvertToYV16() afterwards, or if I first ConvertToYV16(interlaced=true) and then de-interlace?
But in general , most filters run faster with their planar counterparts, so converting to YV16 earlier rather than later should be faster
Yes , but you can apply different filters or different settings to different sections by using Trim() -
Thank you for answering the questions poisondeathray. In that case, here's the updated generic script:
Code:
SetFilterMTMode("QTGMC", 2)
AviSource("E:\test.avi")
ConvertToYV16(interlaced=true)
AssumeTFF()
QTGMC(Preset="Slower", EdiThreads=3)
Crop(20, 6, -20, -6)
MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp2(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
ChromaShiftSP(Y=3)
Prefetch(3)
One thing I noticed with the camera during the night scene is that in some shots it seems to lack focus in the side areas (while the center is focused). But that's a subject for another time :P -
Last edited by Sharc; 2nd Oct 2020 at 07:43. Reason: coring added
-
you can examine the waveform with histogram() in avisynth, or use a waveform with other programs
I tried using the Histogram() command, and it looks a bit complex compared to the VirtualDub histogram, or the histogram I know from taking pictures with my camera. Does tweaking brightness/black levels follow a set of rules I can apply just from looking at the histogram, or is it a matter of experience, "knowing" what is good or not?
levels(12,1.0,255,0,255,coring=false) -
All three of your samples have about the same black and white levels. The blacks are way too high, the whites a little low. You'd be better off capturing closer to the right levels because you would make better use of the limited range of luma values. Your deepest blacks are around Y=45 and brightest whites around 220. The valid range for limited range rec.601 is from 16 to 235. So you only have about 176 different Y values, where you could have 220.
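The arithmetic behind those counts (a trivial Python check; the helper name is mine):

```python
# Inclusive count of distinct Y codes between a black level and a white
# level. Capturing blacks at 45 and whites at 220 uses fewer codes than
# the 16-235 range that limited-range rec.601 allows.
def luma_codes(black, white):
    return white - black + 1

print(luma_codes(45, 220))   # codes actually used in the samples
print(luma_codes(16, 235))   # codes available in limited range
```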
A waveform monitor is much more useful than a histogram. Note that AviSynth's Histogram() defaults to a waveform monitor, not a histogram. A waveform monitor is basically a graph of all the Y values across the width or height of the frame. Here's an explanation I wrote up long ago:
https://forum.videohelp.com/threads/340804-colorspace-conversation-elaboration#post2121568
In AviSynth I like to use TurnRight().Histogram().TurnLeft() to get a horizontal waveform (this is what one would see on an oscilloscope).
[Attachment 55172 - Click to enlarge] -
All three of your samples have about the same black and white levels.
The blacks are way too high, the whites a little low
So I assume what's happening here is that while this scene is dark, it's not the darkest I tested, so the blacks start at 45. The same thing happens for the whites. It seems, however, that nothing clips (which was the intention). Now that I understand the graph better - how does it help me solve this? Is it possible to get a graph based on the WHOLE video, and not just that single frame? And then move the lowest black (let's say it's 30) to 16, and the highest white (let's say it's 220) to 235 - "stretching" the histogram?
I was checking Sharc settings:
levels(12,1.0,255,0,255,coring=false)
Thanks!
Last edited by Okiba; 2nd Oct 2020 at 15:02.
-
The levels filter exists because it's easy to adjust levels with it. Look at the image in post #28. The blacks are at Y=45 and whites at Y=220. To fix that so that blacks are at 16 and whites are at 235 you use Levels(45, 1.0, 220, 16, 235). Gamma is the linearity of that adjustment. 1.0 is linear. Gamma values less than 1.0 reduce dark details, values over 1.0 bring out dark detail. Here's an example that animates the gamma value
Code:
######################################################
function GreyRamp()
{
    BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32")
    StackHorizontal(last, last.RGBAdjust(rb=1, gb=1, bb=1))
    StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
    StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
    StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
    StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
    StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
    StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
    StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
}

######################################################
function levels_gamma(clip v, float gamma)
{
    Levels(v, 16, gamma, 235, 16, 235, coring=false)
    Subtitle("gamma="+String(gamma))
}

######################################################
GreyRamp()
ConvertToYUY2()
Animate(last, 0, 256, "levels_gamma", 0.5, 2.0)
TurnRight().Histogram().TurnLeft()
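As a cross-check, here is the standard levels transfer curve in plain Python (an illustrative sketch of the usual formula, not AviSynth's exact implementation; the function name is mine). It maps blacks at 45 to 16 and whites at 220 to 235, with gamma controlling the linearity in between:

```python
# Levels(input_low, gamma, input_high, output_low, output_high) as the
# standard normalize -> gamma -> rescale curve (no coring assumed).
def levels(y, in_low, gamma, in_high, out_low, out_high):
    x = (y - in_low) / (in_high - in_low)
    x = max(0.0, min(1.0, x))  # clip values outside the input range
    return round(x ** (1.0 / gamma) * (out_high - out_low) + out_low)

print(levels(45, 45, 1.0, 220, 16, 235))   # black point -> 16
print(levels(220, 45, 1.0, 220, 16, 235))  # white point -> 235
# gamma > 1.0 lifts the mid-tones, bringing out dark detail:
print(levels(132, 45, 1.2, 220, 16, 235))
```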