Looking into luma correction a little more closely, it seems my camera's luma occupies the whole [0,255] range
So I experimented with mapping [0,255] to [16,235] in Levels(), and for comparison also included the corresponding ColorYUV() statement
I'd previously imagined that Levels() compresses chroma also, and mapping to [16,235] now makes that fairly clear:

Code:
levels = "Levels(0, 1.0, 255, 16, 235, coring=false)"
coloryuv = "ColorYUV(off_y=16, gain_y=-36)"
StackHorizontal(ColorYUV(analyze=true).Subtitle("original", align=1), \
    Eval(levels).ColorYUV(analyze=true).Subtitle(levels, align=1), \
    Eval(coloryuv).ColorYUV(analyze=true).Subtitle(coloryuv, align=1))
The AVS 2.58 documentation for Levels() seems to confirm that
Mapping [0,255] -> [16,235] instead of [16,255] -> [~16,235] as before certainly compresses luma more, but I haven't been able to spot any visible banding with this footage. ColorYUV() does a nice job of leaving chroma alone and the results are quite pleasing visually under the circumstances, so I think I'll go with that.

Quote from the documentation:
For adjusting brightness or contrast it is better to use Tweak or ColorYUV, because Levels also changes the chroma of the clip.
-
-
BTW, there is another function called YLevels that only affects Y
http://avisynth.org/mediawiki/Ylevels
What is the "look" that you're going for in this piece? Or were you planning to do more color manipulations in other programs?
IMO the contrast is low in 2 sections, the sky and the foreground (grass) - you can see this graphically in the waveform - there are 2 bands where the data is clustered; the entire tonal range isn't utilized. The black level is slightly elevated, giving that "washed out" appearance, and the saturation is a bit low. These together make me feel this is supposed to be a sad story - it feels like my trip to England where it rained every day. But maybe that is the intention? -
The few pixels down below Y=16 are only noise and overshoots. That's the reason why footroom and headroom are part of the spec. (Add Blur(1.0) before the levels check and you'll see all the pixels below 16 disappear.) Look at a mix of light and dark shots. And see if there is anything significant down there.
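jagabo's blur-then-check suggestion can be sketched as a tiny script (a sketch only; the source line is a placeholder, and Histogram's levels mode stands in for a proper scope):

```avisynth
# Sketch: blur first so single-pixel noise and overshoots below Y=16
# average away, then inspect the luma histogram for real detail
AviSource("capture.avi")   # placeholder for the actual DV source
Blur(1.0)
Histogram(mode="levels")
```

If anything significant remains below 16 after the blur, it is probably picture detail rather than noise.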
Because of the noise.
Last edited by jagabo; 14th Mar 2013 at 15:50.
-
-
For interest's sake, this is the background to the calculations. I ensured my PC display was correctly calibrated for symmetrical display, photographed a perfectly round circle and adjusted VirtualDub's screen dimensions to get the circle round again. Measuring screen dimensions showed the display aspect ratio to be 1.366 for '4:3', and 1.821 for '16:9'. I compared those against resources like http://en.wikipedia.org/wiki/Pixel_aspect_ratio. It followed that PAR4:3 = DAR/SAR = 1.366/(720/576) ~ 59:54, and PAR16:9 = 1.821/(720/576) ~ 118:81. In other words, my camera is recording in Rec.601 pixel aspect ratio.
Unfortunately most Internet resources focus on pixel aspect ratios alone, and don't bother to address the corresponding display aspect ratios. In projects like this, those values are far from academic! When combining disparate sources, it's crucial to know exactly which display aspect ratio to target, especially with anamorphic video like DV. So for the record, Rec.601 '4:3' display aspect ratio is 1.366, and Rec.601 '16:9' display aspect ratio is 1.821.
As you mention, it is possible to encode a full DVD picture with 704 (59:54 or 118:81) horizontal pixels, as also shown at the link above. But as explained elsewhere, it made more sense (for me at least) to stick with 720 pixels. -
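For what it's worth, here is how those PAR numbers would cash out if resizing the 4:3 DV to square pixels for PC preview (a sketch; the source line is a placeholder, and rounding 720 * 59/54 ≈ 786.7 down to the mod-2 value 786 is my choice):

```avisynth
# Square-pixel preview of Rec.601 4:3 DV (59:54 PAR):
#   width = 720 * 59/54 ≈ 786.7 -> 786 (rounded to mod-2)
# For 16:9 (118:81) it would be 720 * 118/81 ≈ 1048.9 -> 1048.
AviSource("capture.avi")       # placeholder, assumed 720x576
Spline36Resize(786, 576)
```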
My primary (and only) concern, really, is to prevent picture information present after capturing from getting lost in processing later. Other than that I find it easier to deliver output the way it was shot. Except if that was completely botched, in which case I pull hair out on a case-by-case basis.
IMO the contrast is low in 2 sections, the sky and the foreground (grass) - you can see this graphically in the waveform - there are 2 bands where the data is clustered, the entire tonal range isn't utilized . The black level is slightly eleveated, giving that "washed out" appearance, also the saturation is a bit low .
These together make me feel this is supposed to be a sad story - It feels like my trip to England where it rained every day . But maybe that is the intention ?
(I won't tell my wife what you said.)
Last edited by fvisagie; 15th Mar 2013 at 01:10.
-
-
Upon reflection, an additional concern is ensuring that what I get on the PC screen is what I will (approximately) get on the TV screen. For AvsPmod, will setting the display to Rec.601 and TV levels be the correct way for this DV footage?
-
LOL! Was the Honeymoon in England as well? Obviously I wasn't there on your Honeymoon with you, so I don't know what it was supposed to look like in terms of "natural". If you have some reference photographs shot on that day, you can use those as your guide if your goal was to make it look that way. I just gave my impressions of what it looked like. I would increase the highlight contrast to bring out the clouds, and the shadow contrast to bring out the foreground, paying attention to the black level (this will make it look less "washed out"), and increase the saturation a bit. There is only so far you can "push" DV footage before it falls apart.
-
-
You mean those values are the DARs of 720x576 when displayed with those PARs? Yes, ish. But you're not meant to see those extra pixels. Apart from DV and DVD, working with video that has a DAR other than exactly 4x3 or 16x9 can be problematic.
As you mention, it is possible to encode a full DVD picture with 704 (59:54 or 118:81) horizontal pixels
I don't really mind what you do - I'll never see your video, and I couldn't see the difference even if I did. But to choose the more complicated option, and then do it in a way that involves extra scaling (= picture blur), because you think it's better - when it's potentially worse - that would be a frustrating waste of time.
Cheers,
David. -
No reference photos, I'm afraid, so I'll go by subjective appearance where this needs doing.
Any hints (or available reading) for doing those in Avisynth, please?
The best bet is to use whatever setup (e.g. DVD player, TV) you're going to be viewing it on.
(PS. Our honeymoon was in Namibia. In summer it's cloudy, hot, humid and rainy in the subtropical north, yum!) -
This concern of yours really bothers me, because I don't see the problem you're referring to. Hopefully I've just confused you, and by restating my aim, and the approach by which I hope to reach it, I can clear that confusion up. If I don't manage to, please restate the problem in a way I can understand.
So let me try. Ignoring the detail of my to-be-improved workflow above, in essence my aim with it is to:
- process SD with as little getting lost as possible (taking into account your and the others' earlier inputs)
- add HD to the same workflow with as little getting lost as possible
That conversion would need to ensure the output comes out in the right destination non-square pixel aspect ratio, meaning the correct standards-based dimensions must be used. Then it merely becomes an issue of deciding whether to use 12/11 or 59/54, as you pointed out.
Hopefully this clears something up for you; otherwise, what is the big thing that I'm missing?
Lastly, as to whether to base everything on a horizontal resolution of 720 or 704, I'd measured the outputs of both (correctly processed and encoded I assure you!) on all devices I could lay my hands on. This issue ultimately boils down (in my experience at least) to the choice between correct rendering on analogue DVD outputs and imperfect rendering on digital devices @ 704 pixels (but with loss of horizontal resolution on the latter compared to 720), vs. imperfect rendering on all devices but better horizontal resolution on digital ones @ 720 pixels. Since the rendering error is in all cases at most ~2.5%, in my view that makes the decision here a subjective and personal one, also largely influenced by intended audience etc.
I'm holding thumbs that your big concern has somehow disappeared!
Last edited by fvisagie; 15th Mar 2013 at 11:18.
-
Then it merely becomes an issue of deciding whether to use 12/11 or 59/54 as you pointed out.
4/3 = 720/576 * 16/15
Most NLEs use ITU for their calculations (12:11 for 4:3 PAL), even with the full 720px width -
IMO, it's more difficult to do color work in avisynth. Some people get great results using avisynth only for color work; I'm not one of them.
For example, how would you adjust only the clouds, or only the bright areas? In your "coloryuv" example, you brought the "superwhites" down, but in order to compensate for making the foreground darker, you've brought up the black level - this reduces contrast and gives that washed out, milky appearance.
Yes, you can make selective adjustments in avisynth using various luma masks and masktools, but it's more difficult. Traditional color correction tools in NLEs work in RGB, but they have shadows/midtones/highlights adjustments - 3-way color correction. With something like RGB curves (e.g. in an NLE or vdub's gradation curves), you can "map" different areas and make non-linear changes much more easily with a GUI. Yes, you can do it in YUV with smoothcurve, but it's very difficult to get the string parameters correct. For something like contrast, there typically is a contrast "midpoint", the point from which the data gets moved symmetrically in either direction. What if you didn't want a symmetrical adjustment? That's difficult to adjust in avisynth tools; it's usually set in the middle. So if I were to increase "contrast", both ends would move, symmetrically pushing both brights and darks away - you would make the clouds clip again, and the foreground grass dark. Moreover, you can keyframe the changes in other programs (so as scene exposure and conditions change, you can compensate - very difficult to do in avisynth). Simply put, I find fine-tuned color control lacking in avisynth. Now there are some "tricks" you can use; I'll take a closer look and try to make some suggestions later on how you might do that in avisynth -
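As a rough illustration of the luma-mask approach mentioned above (a sketch only, assuming MaskTools2 is loaded; the threshold of 100 and the gain value are made up for illustration):

```avisynth
# Sketch: lift only the darker foreground by blending a brightened
# version through a luma mask (white where Y < 100, black elsewhere)
orig   = last
bright = orig.ColorYUV(gain_y=40)            # hypothetical stronger version
mask   = orig.mt_lut("x 100 < 255 0 ?")      # MaskTools2 RPN expression
mt_merge(orig, bright, mask, luma=true)
```

A soft-edged mask (e.g. blurring the mask clip) would avoid a hard seam between adjusted and unadjusted regions.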
-
After playing around a bit with various dark scenes, light ones and combined ones, and comparing various settings and filters, it seems ColorYUV(off_y=2, gain_y=-22) is a good enough baseline starting point for this camera, at least for scenes like these. The '2' comes from David's suggestion for mapping to 16, which means the gain needs to be adjusted by 2 also. I checked measurements of this adjustment on the above scenes with jagabo's Blur() and finally did a side-by-side encode of original and corrected, and on the TV the result is even more subtle than on PC, and very pleasing.
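That baseline can be eyeballed side by side against the original, for instance like this (a sketch; Histogram's levels mode stands in for a proper scope):

```avisynth
# Sketch: original vs. baseline correction, with luma histograms
src   = last
fixed = src.ColorYUV(off_y=2, gain_y=-22)
StackHorizontal(src.Histogram(mode="levels").Subtitle("original"), \
                fixed.Histogram(mode="levels").Subtitle("corrected"))
```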
edit: I neglected to mention that as suggested for RGB conversion I'll use the PC matrices for safety's sake anyway. -
The images posted earlier all have blown-away brights exceeding RGB 255. Taking all those measurements in YUV is instructive. However, TVs, monitors, projectors, etc., don't display YUV. The original source is too bright, black levels and gamma are too high, and brights are irreparably clipped at the outset.
original:
[Attachment 16809]
Levels only, YUV + mostly RGB. Needs denoising, dot crawl is more obvious. YUV isn't the only colorspace in town.
[Attachment 16810]
Last edited by sanlyn; 25th Mar 2014 at 20:08.
-
That's the impression I've been getting. But you've given me great pointers for going about this in other tools. By the time I turn to this work, hopefully I'll be able to see how my new copy of Movie Studio fares with it!
-
Levels only, YUV + mostly RGB. Needs denoising, dot crawl is more obvious. YUV isn't the only colorspace in town. Bright clouds masking sunlight (but they shouldn't be RGB 255. Go outside and look.), and the foreground is overcast/indirect light.
-
I started this correction a couple of days ago but was interrupted. Meanwhile, poisondeathray's post (#75) touched on the methods I used before I could post the results. I did this first with histograms and image filters in AfterEffects; but that's a bit unfair, as many people don't have AE or Color Finesse to play with, so I accomplished more or less the same thing in VirtualDub. Anyway, ColorYUV() and Levels() was the first step, to bring levels and chroma into a useable range.
ColorYUV(off_y=-15)
Levels(2, 0.95, 255, 10, 235, coring=false)
As pdr noted, there are some things you can do in YUV (and which often you MUST do) before going to RGB to pinpoint more specific areas that you can't do with YUV (Well, no, I shouldn't say you can't do it in YUV, because I've seen people hit some very specific areas in YUV, but how they figured it out seems to be a totally undocumented secret). Given more time I would have got into the dither() plugin to help smooth some of the "spikes" in the spectrum, but I've seen dither introduce some unwanted effects. Probably worth a try, though.
You remarked earlier that Levels() "changed some colors". True, but ColorYUV also "changes" some colors; both filters map them inside the preferred borders. Invalid chroma is as troublesome as invalid luma. The avs script reined in luma + chroma. A YUV histogram of that code reveals an abrupt clipping point at RGB 220 or so. You can keep those brights below 255, but to no avail; practically speaking the original capture has no data above a certain point.
Below: original levels (left=YUV, right=RGB). The YUV histogram clearly shows hard bright clipping on the right, anemic black levels, and a deficit of midtone values. The RGB histogram shows the same thing, but here the color clipping and poor blacks are more clearly seen.
[Attachment 16814]
Below: YUV levels adjust, Avisynth only. The YUV chart shows clipping, so bringing brights farther down offers no more detail. There is just a very slight creep into the below-RGB-16 area, but no detail down there anyway. The RGB histogram shows the obvious bright detail destruction in luma as well as chroma.
[Attachment 16815]
Below: Adjust with gradation curves. Primary adjust (left) and secondary tweak (right). Could probably get those peaks smoothed with dither() and some contrast masking. Gradation curves raised the darker parts but kept blacks intact, and darkened bright clipped areas. No obvious banding, but note that the clouds have an abrupt cutoff above the shadow areas at about RGB 128 (from the midtones on up), and there's noise in the brighter colors. I'd opt for a better capture; the original IRE blacks are too high, and the image is too bright.
[Attachment 16816]
Further RGB tweaks with ColorMill:
MIDDLE POINT - Middle Point=-3,Booster=-56, Base Shift=0
GAMMA - Red=3, Blue=3, Green=3
LEVELS - Dark=-8, Middle=0, Light=5
SATURATION - +5%
I think my posted image has the cloud shadow a bit dark, but... it was getting late.
Last edited by sanlyn; 25th Mar 2014 at 20:09.
-
If you were bringing this into Vegas Studio, the problem with the Studio version is that it doesn't have scopes (waveform, RGB histogram, parade, etc.). This is one of the main things differentiating it from the "Pro" version, and it's a severe limitation IMO. You can make adjustments by eyeballing, but scopes are almost mandatory IMO, as the human eye isn't always accurate and can be "tricked". In this example below, what color or "shade" are A and B?
There are a near infinite number of subjective "looks" you might be going for. I dislike making suggestions on color because it's so subjective, unless you have a reference, or a clear description or picture in your head of what you want. The only "objective" thing that I think everyone can agree on is bringing down the superbrights.
Either way, one thing I recommend - whatever you use - is to increase both the highlight contrast and the shadow/midtone contrast. I would bump up the brightness of the grass, along with the contrast. You said you overexposed this on purpose to compensate for the "dark" foreground - well, in reality the foreground probably isn't that dark, and in reality you probably saw the separation in the clouds, not blobs of white. This is just a limitation of the camera (low dynamic range).
In general, the more severe your adjustments to footage, the more it breaks apart and reveals compression/codec "nastiness", and the more you need to denoise and potentially degrade the footage. If you look at the example below, there are color splotches (look at the frame edges and treeline) and more noise and crap revealed by the changes made (you would probably add a luma/chroma denoiser).
In an NLE, you can do this with something like curves (adjust different regions differently), but you have to be very careful, or you will get posterization effects when doing this on 8-bit footage. Not to pick on sanlyn, but you can see an example of that in the lower grey clouds, where the separation is lost (it looks like "burnt grey").
In this example with avisynth only, there is higher contrast in the clouds (the shapes are more discernible) and in the foreground - there is more "pop" to the image because the contrast is higher in both areas. The "flatter" the image, the more washed out it looks, especially when black is elevated. But sometimes you don't want something like this - sometimes those changes will distract the viewer's focus from the important part or the story you're trying to tell (I don't know what it is here, but you get the idea). You can look at the waveform and see the difference: the 2 "bands" representing clouds and foreground are more spread out. Anyway, the avisynth "trick" here was to use HDRAGC to selectively increase contrast in the foreground/grass regions (invert was used to apply it to the brighter cloud regions). But it's hard to keyframe changes in avisynth...
Code:
levels(0, 1, 255, 0, 237, false, dither=true)
invert
hdragc(max_gain=2, coef_sat=0.75)
invert
levels(12, 1, 255, 0, 255, false, dither=true)
hdragc(max_gain=2, coef_sat=0.88)
tweak(sat=1.2, coring=false)
-
Yep, looks good. I played with some "auto" filters as well, but ran out of time. Nice work. I find AGC a bit tricky to manage (it can really blow an image to pieces if you ain't careful), but it works well here.
BTW, I saw the Adelson image you posted and many other examples of optical illusion. The "A" and the "B" squares are exactly the same grayscale value, RGB 120-120-120. One can use a pixel sampler such as CSamp or built-ins in NLEs to check pixel values.
Last edited by sanlyn; 25th Mar 2014 at 20:10.
-
Thanks guys, you've gone to a lot of trouble here and I sincerely appreciate it. Best of all, you've put it simply enough for me to follow it all!
Just checking, but one brute-force way would be to create Trim()s to apply separate treatments per scene? It would probably be very tedious, but at least as accurate as the Trim() boundaries themselves? -
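The brute-force Trim() idea might look something like this (a sketch; the frame ranges and the per-scene settings are made up for illustration):

```avisynth
# Sketch: per-scene grading by splicing Trim()ed sections
a = Trim(0, 299).ColorYUV(off_y=2, gain_y=-22)    # overcast opening
b = Trim(300, 599).Tweak(sat=1.1, coring=false)   # brighter scene
a ++ b                                            # aligned splice
```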
-
I've encountered some interesting things and hopefully you can confirm my interpretation.
As background, by doing an end-to-end test I learnt that my current basic workflow preserves both invalid YUV and RGB values. Any corrections in such a workflow would therefore be purely for esthetic purposes - no information would be lost by clipping.
After correcting luma, some 'useful' illegal values remained. When I then 'lost' these by using Rec matrices to convert to and from RGB as I would for Deshaker, there was some clearly visible (and measurable) damage, esp. in very bright and very dark areas. This didn't occur with PC matrices. Therefore, I might as well try to preserve any 'useful' illegal values that remain after correction by using PC matrices when converting for Deshaker.
Trying to put this into practice, the first thing I discovered was that it seems best to postpone correction, too, as late as possible. E.g. correcting luma after QTGMC produces visibly better results than correcting first. My interpretation is that correcting luma (by compressing the range) effectively reduces the luma "resolution". More pixels end up looking similar than before (banding), which will change their processing in subsequent filters, right?
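So the ordering that worked better can be sketched as follows (using the baseline correction figures from earlier in the thread):

```avisynth
# Deinterlace first, correct last: QTGMC sees the full luma range,
# and the range compression happens only once, at the end
QTGMC(Preset="Slower")
ColorYUV(off_y=2, gain_y=-22)   # baseline correction from earlier
```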
The second thing I discovered was more vexing. ConvertToXYZ(matrix="PC.abc") requires width to be a multiple of 4. I had intended to crop only 10 junk pixels from 720. Keeping in mind that luma has been corrected but useful illegal values remain, the choice now becomes:
- discarding more good border pixels and have fewer to work with in Deshaker and other filters, but with the benefit that useful illegal luma values are retained using PC matrices;
vs.
- using Rec matrices to keep the border pixels, but lose useful illegal luma values in the process.
If I end up cropping to 704 pixels before encoding, the first choice then seems the obvious one - there are some pixels to spare I guess? But if I want to hang on to every good pixel, the choice seems less clear - how does one measure the effect of losing more pixels vs. the effect of clipping useful luma values? There's no common baseline to measure against, the images are different sizes after different cropping, scaling for comparison would affect quality, etc?
-
-
Not quite, there are still YUV values that do not map to sRGB (out-of-gamut colors, ones that lie outside of the color cube), regardless of what you do. You probably aren't concerned with those.
Trying to put this into practice, the first thing I discovered was that it seems best to postpone correction also as late as possible. E.g. correcting luma after QTGMC produces visibly better results than correcting first. My interpretation is that correcting luma (by compressing the range) effectively reduces the luma "resolution". More pixels end up looking similar than before (banding) which will change their processing in subsequent filters, right?
The second thing I discovered was more vexing. ConvertToXYZ(matrix="PC.abc") requires width to be a multiple of 4. I had intended to crop only 10 junk pixels from 720. Keeping in mind that luma has been corrected but useful illegal values remain, the choice now becomes:
- discarding more good border pixels and have fewer to work with in Deshaker and other filters, but with the benefit that useful illegal luma values are retained using PC matrices;
vs.
- using Rec matrices to keep the border pixels, but lose useful illegal luma values in the process.
If I end up cropping to 704 pixels before encoding, the first choice then seems the obvious one - there are some pixels to spare I guess? But if I want to hang on to every good pixel, the choice seems less clear - how does one measure the effect of losing more pixels vs. the effect of clipping useful luma values? There's no common baseline to measure against, the images are different sizes after different cropping, scaling for comparison would affect quality, etc?
http://avisynth.org/mediawiki/Crop
It seems to work here with 710 width and ConvertToXYZ (whatever matrix). What error are you getting?
One thing is certain - deshaker works better if you crop things like black borders (you don't have any, just junk borders) . If you don't want to crop, another option is to use deshaker settings to ignore edge pixels for the analysis (look in the pass 1 parameters, you can enter values for left,top,right,bottom)
Personally I would keep all the pixels until the very end, and make the decision to crop (and/or add pillarbox) when encoding to your final format. Personally I would use the PC matrix and do everything in RGB (only because it's easier for color work, not for "best practices" - "best practices" would dictate that you do everything in YUV, and that means no Deshaker).
Last edited by poisondeathray; 18th Mar 2013 at 10:18.
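That PC-matrix round trip for the RGB-only steps might be sketched like this (a sketch; Deshaker itself runs in VirtualDub and is represented here only by a comment):

```avisynth
# Round trip with PC matrices both ways, so YUV values outside
# [16,235] survive the trip to RGB and back
ConvertToRGB(matrix="PC.601", interlaced=false)
# ... Deshaker / other RGB-only filters would run here ...
ConvertToYV12(matrix="PC.601", interlaced=false)
```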
-
Code:
# Starting off with 720x576 YV12 interlaced
QTGMC(Preset="Slower")
Crop(0, 0, -10, 0)
ConvertToRGB(matrix="PC.601", interlaced=false)
ConvertToRGB: Rec.709 and PC Levels support require MMX and horizontal width a multiple of 4
(original.avs, line 21)
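For what it's worth, one way around that error, if the two extra columns are expendable, is to crop to a mod-4 width (a sketch; whether those extra columns really are junk for this footage is an assumption):

```avisynth
# Cropping 12 columns instead of 10 gives 720 - 12 = 708,
# which is a multiple of 4 and satisfies ConvertToRGB's check
QTGMC(Preset="Slower")
Crop(0, 0, -12, 0)
ConvertToRGB(matrix="PC.601", interlaced=false)
```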
EDIT: PS. Thanks for the other hints.
Personally I would keep all the pixels until the very end, make that decision to crop (and/or add pillarbox) when encoding to your final format. Personally I would use PC matrix -