VideoHelp Forum




Page 3 of 5
Results 61 to 90 of 133
  1. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Originally Posted by 2Bdecided View Post
    Your camcorder is just like 99% of recent consumer camcorders: it records luma 16-255.
    Looking into luma correction a little more closely, it seems my camera's luma occupies the whole [0,255] range

    [Attachment: original analysed.png]

    So I experimented with mapping [0,255] to [16,235] in Levels(), and for comparison also included the corresponding ColorYUV() statement
    Code:
    levels      = "Levels(0,1.0,255,16,235,coring=false)"
    coloryuv    = "ColorYUV(off_y=16, gain_y=-36)"
    StackHorizontal(ColorYUV(analyze=true).Subtitle("original", align=1), \
                    Eval(levels).ColorYUV(analyze=true).Subtitle(levels, align=1), \
                    Eval(coloryuv).ColorYUV(analyze=true).Subtitle(coloryuv, align=1))
    I'd previously imagined that Levels() compresses chroma also, but now mapping to [16,235] makes that fairly clear

    [Attachment: original vs levels vs coloryuv.png]

    The AVS 2.58 documentation for Levels() seems to confirm that
    Code:
    For adjusting brightness or contrast it is better to use Tweak or ColorYUV, because Levels also changes the chroma of the clip.
    Mapping [0,255]->[16,235] instead of [16,255]->[~16,235] as before certainly compresses luma more, but I haven't been able to spot any visible banding with this footage. ColorYUV() does a nice job of leaving chroma alone and the results are quite pleasing visually under the circumstances, so I think I'll go with that.
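For anyone wanting to check the arithmetic, the two statements above should land luma in almost the same place. A quick sketch in Python (not AviSynth), assuming ColorYUV computes y' = y*(256+gain_y)/256 + off_y and that Levels() with gamma 1.0 is a straight linear remap; the helper names are hypothetical:

```python
def coloryuv_luma(y, off_y=16, gain_y=-36):
    # ColorYUV: luma is scaled by (256 + gain_y)/256, then the offset is added
    return y * (256 + gain_y) / 256 + off_y

def levels_luma(y, in_lo=0, in_hi=255, out_lo=16, out_hi=235, gamma=1.0):
    # Levels with gamma 1.0 is a straight linear remap of [in_lo, in_hi]
    t = (y - in_lo) / (in_hi - in_lo)
    return out_lo + (t ** (1.0 / gamma)) * (out_hi - out_lo)

for y in (0, 128, 255):
    print(y, round(levels_luma(y), 2), round(coloryuv_luma(y), 2))
```

Both map 0 to 16 and 255 to ~235; the only difference is ColorYUV's slope of 220/256 versus Levels' 219/255, which is well under one code value across the whole range.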
  2. BTW, there is another function called YLevels that only affects Y
    http://avisynth.org/mediawiki/Ylevels

    What is the "look" that you're going for in this piece ? or were you planning to do more color manipulations in other programs ?

    IMO the contrast is low in 2 sections, the sky and the foreground (grass) - you can see this graphically in the waveform - there are 2 bands where the data is clustered, the entire tonal range isn't utilized. The black level is slightly elevated, giving that "washed out" appearance, and the saturation is a bit low. These together make me feel this is supposed to be a sad story - it feels like my trip to England where it rained every day. But maybe that is the intention?
  3. Originally Posted by fvisagie View Post
    Looking into luma correction a little more closely, it seems my camera's luma occupies the whole [0,255] range
    The few pixels down below Y=16 are only noise and overshoots. That's the reason why footroom and headroom are part of the spec. (Add Blur(1.0) before the levels check and you'll see all the pixels below 16 disappear.) Look at a mix of light and dark shots. And see if there is anything significant down there.
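The effect jagabo describes is easy to simulate: a light blur averages an isolated undershoot with its neighbours and pulls it back above the footroom. A toy 1-D sketch in Python, using a plain 3-tap average as a crude stand-in for Blur(1.0):

```python
def blur3(row):
    # plain 3-tap average with edge clamping -- a crude stand-in for Blur(1.0)
    out = []
    for i in range(len(row)):
        a = row[max(i - 1, 0)]
        b = row[i]
        c = row[min(i + 1, len(row) - 1)]
        out.append((a + b + c) / 3)
    return out

# an isolated noise undershoot dips below Y=16 even though the scene doesn't
row = [40, 38, 5, 41, 39]
print(min(row), min(blur3(row)))
```

After the blur the minimum is back above 16, which is why the sub-16 pixels vanish from the levels check: they were single-pixel noise, not real shadow detail.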

    Originally Posted by fvisagie View Post
    Mapping [0,255]->[16,235] instead of [16,255]->[~16,235] as before certainly compresses luma more, but I haven't been able to spot any visible banding with this footage.
    Because of the noise.
    Last edited by jagabo; 14th Mar 2013 at 15:50.
  4. Member
    Originally Posted by poisondeathray View Post
    The other option is to use a 720x576p50 timeline to do the editing , reinterlacing at the end (since you've already bobbed to 50p for deshaker) , so you don't have to worry about where to cut, or internal interlaced scaling issues (I don't know what kind of project you're doing, but when you scale for whatever reason e.g. overlays, PIP, whatver, NLE's typically do poor interlaced scaling)
    That's the plan exactly, thanks.
  5. Member
    Originally Posted by 2Bdecided View Post
    if you say that 704x576 is equivalent of 4x3 (some people refer to call this ITU PAR, for obvious reasons), then 720x576 is wider than 4x3, and you need more picture to fill it (or pad it with black bars). If that's what you're doing, your calculation is correct. It goes slightly wrong when PC software video players scale the whole 720x576 to 4x3. That's why I prefer 704x576.
    For interest's sake, this is the background to the calculations. I ensured my PC display was correctly calibrated for symmetrical display, photographed a perfectly round circle and adjusted VirtualDub's screen dimensions to get the circle round again. Measuring screen dimensions showed the display aspect ratio to be 1.366 for '4:3', and 1.821 for '16:9'. I compared those against resources like http://en.wikipedia.org/wiki/Pixel_aspect_ratio. It followed that PAR4:3 = DAR/SAR = 1.366/(720/576) ~ 59:54, and PAR16:9 = 1.821/(720/576) ~ 118:81. In other words, my camera is recording in Rec.601 pixel aspect ratio.
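The PAR arithmetic above is easy to reproduce. A small Python check of PAR = DAR / SAR for the measured display aspect ratios (720x576 storage, so SAR = 1.25):

```python
def par(dar, width=720, height=576):
    # pixel aspect ratio = display aspect ratio / storage aspect ratio
    return dar / (width / height)

# measured display aspect ratios from the circle test
print(par(1.366))   # close to 59/54 = 1.0926
print(par(1.821))   # close to 118/81 = 1.4568
```

Both measured values land within measurement error of the Rec.601 PAR approximations quoted above.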

    Unfortunately most Internet resources focus on pixel aspect ratios alone, and don't bother to address the corresponding display aspect ratios. In projects like this, those values are far from academic! When combining disparate sources, it's crucial to know exactly which display aspect ratio to target, especially with anamorphic video like DV. So for the record, Rec.601 '4:3' display aspect ratio is 1.366, and Rec.601 '16:9' display aspect ratio is 1.821.

    As you mention, it is possible to encode a full DVD picture with 704 (59:54 or 118:81) horizontal pixels, as also shown at the link above. But as explained elsewhere, it made more sense (for me at least) to stick with 720 pixels.
  6. Member
    Originally Posted by poisondeathray View Post
    What is the "look" that you're going for in this piece ? or were you planning to do more color manipulations in other programs ?
    My primary (and only) concern, really, is to prevent picture information present after capture from getting lost in later processing. Other than that I find it easier to deliver output the way it was shot. Except if that was completely botched, in which case I pull hair out on a case-by-case basis.

    IMO the contrast is low in 2 sections, the sky and the foreground (grass) - you can see this graphically in the waveform - there are 2 bands where the data is clustered, the entire tonal range isn't utilized. The black level is slightly elevated, giving that "washed out" appearance, and the saturation is a bit low.
    What would you suggest to make this look "natural", if I could use that term?

    These together make me feel this is supposed to be a sad story - It feels like my trip to England where it rained every day . But maybe that is the intention ?
    Oh, that's funny, that was shot on our honeymoon trip!!! ROTFLMAO!

    (I won't tell my wife what you said.)
    Last edited by fvisagie; 15th Mar 2013 at 01:10.
  7. Member
    Originally Posted by jagabo View Post
    Add Blur(1.0) before the levels check and you'll see all the pixels below 16 disappear.
    Thanks, that's an excellent suggestion.
  8. Member
    Originally Posted by fvisagie View Post
    My primary (and only) concern really, is to prevent picture information present after capturing from getting lost in processing later. Other than that I find it easier to deliver output the way it was shot . Except if that was completely botched in which case I pull hair out on a case-by-case basis.
    Upon reflection, an additional concern is ensuring that what I get on the PC screen is what I will (approximately) get on the TV screen. For AvsPmod, will setting the display to Rec.601 and TV levels be the correct way for this DV footage?
  9. Originally Posted by fvisagie View Post

    IMO the contrast is low in 2 sections, the sky and the foreground (grass) - you can see this graphically in the waveform - there are 2 bands where the data is clustered, the entire tonal range isn't utilized. The black level is slightly elevated, giving that "washed out" appearance, and the saturation is a bit low.
    What would you suggest to make this look "natural", if I could use that term?

    These together make me feel this is supposed to be a sad story - It feels like my trip to England where it rained every day . But maybe that is the intention ?
    Oh, that's funny, that was shot on our honeymoon trip!!! ROTFLMAO!

    (I won't tell my wife what you said.)

    LOL! Was the honeymoon in England as well? Obviously I wasn't there on your honeymoon with you, so I don't know what it was supposed to look like in terms of "natural". If you have some reference photographs shot on that day, you can use those as your guide if your goal is to make it look that way. I just gave my impressions of what it looked like. I would increase the highlight contrast to bring out the clouds, and the shadow contrast to bring out the foreground, paying attention to the black level (this will make it look less "washed out"), and increase the saturation a bit. There is only so far you can "push" DV footage before it falls apart.
  10. Originally Posted by fvisagie View Post

    Upon reflection, an additional concern is ensuring that what I get on the PC screen is what I will (approximately) get on the TV screen. For AvsPmod, will setting the display to Rec.601 and TV levels be the correct way for this DV footage?

    It will give a rough approximation. The best bet is to use whatever setup (e.g. DVD player, TV) you're going to be viewing it on, as equipment can be calibrated and set up very differently.
  11. Member 2Bdecided
    Join Date
    Nov 2007
    Location
    United Kingdom
    Originally Posted by fvisagie View Post
    So for the record, Rec.601 '4:3' display aspect ratio is 1.366, and Rec.601 '16:9' display aspect ratio is 1.821.
    You mean those values are the DARs of 720x576 when displayed with those PARs? Yes, ish. But you're not meant to see those extra pixels. Apart from DV and DVD, working with video that has a DAR other than exactly 4x3 or 16x9 can be problematic.

    As you mention, it is possible to encode a full DVD picture with 704 (59:54 or 118:81) horizontal pixels
    Whoa, stop right there - you know those PAR values you quoted? They're approximations. 59:54 is 0.043% too large, and based on really screwy (yet incorrect) fractional pixel counts. The correct PAR is 1150:1053. Simply stating 704x576=4x3 implies a PAR of 12:11, which is 0.111% too small. Given that both those errors amount to less than one pixel, I choose 12:11 - or just ignore the PAR and treat 704x576 as exactly 4x3. http://forum.doom9.org/showthread.php?p=1110419#post1110419
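David's percentages check out. A quick Python verification of the two common 4:3 PAL PAR approximations against the 1150:1053 value:

```python
# percent error of the two common 4:3 PAL PAR approximations vs 1150:1053
exact = 1150 / 1053
for name, approx in [("59:54", 59 / 54), ("12:11", 12 / 11)]:
    err = (approx / exact - 1) * 100
    print(f"{name}: {err:+.3f}%")
```

Both errors are a small fraction of a percent, i.e. under one pixel across a 704-pixel width, which is the basis of the "choose whichever is simpler" argument.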


    I don't really mind what you do - I'll never see your video, and I couldn't see the difference even if I did. But to choose the more complicated option, and then do it in a way that involves extra scaling (= picture blur), because you think it's better - when it's potentially worse - that would be a frustrating waste of time.

    Cheers,
    David.
  12. Member
    No reference photos, I'm afraid, so I'll go by subjective appearance where this needs doing.

    Originally Posted by poisondeathray View Post
    I would increase the highlight contrast to bring out the clouds, shadow contrast to bring out the foreground, paying attention to the black level (this will make it look less "washed out"), increase the saturation a bit .
    Any hints (or available reading) for doing those in Avisynth, please??

    The best bet is to use whatever setup (eg. DVD player, TV) you're going to be viewing it on
    Yes, that does seem to be the most practical approach, thanks.

    (PS. Our honeymoon was in Namibia. In summer it's cloudy, hot, humid and rainy in the subtropical north, yum!)
  13. Member
    Originally Posted by 2Bdecided View Post
    I don't really mind what you do - I'll never see you video, and I couldn't see the difference even if I did. But to choose the more complicated option, and then do it in a way that involves extra scaling (= picture blur), because you think it's better - when it's potentially worse - that would be a frustrating waste of time.
    This concern of yours really bothers me, because I don't see the problem you're referring to. Hopefully I've merely confused you, and by restating my aim, and the approach by which I hope to reach it, I can clear that confusion up. If I don't manage to clear up any confusion on your part, please restate the problem in a way I can understand.

    So let me try. Ignoring the detail of my to-be-improved workflow above, in essence my aim with it is to:
    • process SD with as little getting lost as possible (taking into account your and the others' earlier inputs)
    • add HD to the same workflow with as little getting lost as possible
    The approach I took with that aim: the choices for the last item above are either to scale SD to square-pixel HD, losing some quality in the process, or to scale HD to anamorphic SD, likewise losing some quality. With DV already grainy at SD on the one hand, and with HD's higher starting resolution on the other, the latter option sounded preferable to me, hence the conversion of HD content to anamorphic SD format.

    That conversion would need to pay attention to ensuring the output comes out in the right destination non-square pixel aspect ratio, meaning the correct standards-based dimensions must be used. Then it merely becomes an issue of deciding whether to use 12/11 or 59/54 as you pointed out.

    Does this hopefully clear up something for you; otherwise, what is the big thing that I'm missing?

    Lastly, as to whether to base everything on a horizontal resolution of 720 or 704, I'd measured the outputs of both (correctly processed and encoded I assure you!) on all devices I could lay my hands on. This issue ultimately boils down (in my experience at least) to the choice between correct rendering on analogue DVD outputs and imperfect rendering on digital devices @ 704 pixels (but with loss of horizontal resolution on the latter compared to 720), vs. imperfect rendering on all devices but better horizontal resolution on digital ones @ 720 pixels. Since the rendering error is in all cases at most ~2.5%, in my view that makes the decision here a subjective and personal one, also largely influenced by intended audience etc.

    I'm holding thumbs that your big concern has somehow disappeared!
    Last edited by fvisagie; 15th Mar 2013 at 11:18.
  14. Then it merely becomes an issue of deciding whether to use 12/11 or 59/54 as you pointed out.
    It's not 59:54; the value is 16:15 if you are using non-ITU guidelines (based on the full 720px width)

    4/3 = 720/576 * 16/15

    Most NLE's use ITU for their calculations (12:11 for 4:3 PAL) , even with the full 720px width
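pdr's identity is easy to confirm exactly with rational arithmetic:

```python
from fractions import Fraction

# DAR = SAR * PAR; with the full 720-pixel width, 4:3 needs a 16:15 PAR
sar = Fraction(720, 576)        # reduces to 5/4
print(sar * Fraction(16, 15))   # 4/3
```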
  15. Originally Posted by fvisagie View Post

    Originally Posted by poisondeathray View Post
    I would increase the highlight contrast to bring out the clouds, shadow contrast to bring out the foreground, paying attention to the black level (this will make it look less "washed out"), increase the saturation a bit .
    Any hints (or available reading) for doing those in Avisynth, please??
    IMO , it's more difficult to do color work in avisynth . Some people get great results using avisynth only for color work, I'm not one of them

    For example, how would you adjust only the clouds, or only the bright areas? In your "coloryuv" example, you brought the "superwhites" down, but in order to compensate for making the foreground darker, you've brought up the black level - this reduces contrast and gives that washed out, milky appearance

    Yes, you can make selective adjustments in avisynth using various luma masks and masktools, but it's more difficult. Traditional color correction tools in NLE's work in RGB, but they have shadows/midtones/highlights adjustments - 3-way color correction. With something like RGB curves (e.g. in an NLE or vdub's gradation curves), you can "map" different areas and make non-linear changes much more easily with a GUI. Yes, you can do it in YUV with smoothcurve, but it's very difficult to get the string parameters correct.

    For something like contrast, there typically is a contrast "midpoint", the point from which the data gets moved symmetrically in either direction. What if you didn't want a symmetrical adjustment? It's difficult to adjust that in avisynth tools; it's usually set in the middle. So if I were to increase "contrast", both ends would move, symmetrically pushing both brights and darks away - you would make the clouds clip again and the foreground grass dark.

    Moreover, you can keyframe the changes in other programs (so as scene exposure and conditions change, you can compensate - very difficult to do in avisynth). Simply put, I find fine-tuned color control lacking in avisynth. Now, there are some "tricks" you can use; I'll take a closer look and try to make some suggestions later on how you might do that in avisynth
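The midpoint issue can be sketched in a few lines of Python (hypothetical toy functions, not any particular plugin): a symmetric contrast slider pushes both ends away from the midpoint, while a curve lets you apply different strengths on each side.

```python
def contrast(y, amount, midpoint=128):
    # symmetric contrast: push values away from the midpoint equally
    return min(255.0, max(0.0, midpoint + (y - midpoint) * amount))

def contrast_asym(y, lo_amount, hi_amount, midpoint=128):
    # asymmetric curve: different strength below and above the midpoint
    amount = lo_amount if y < midpoint else hi_amount
    return min(255.0, max(0.0, midpoint + (y - midpoint) * amount))

# a symmetric stretch brightens highlights AND darkens shadows together...
print(contrast(60, 1.3), contrast(200, 1.3))
# ...an asymmetric curve can add shadow contrast while leaving brights alone
print(contrast_asym(60, 1.3, 1.0), contrast_asym(200, 1.3, 1.0))
```

With the symmetric version, raising shadow contrast necessarily pushes the already-clipped clouds further up; the asymmetric version is what a curves UI gives you for free.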
  16. Originally Posted by poisondeathray View Post
    Then it merely becomes an issue of deciding whether to use 12/11 or 59/54 as you pointed out.
    It's not 59:54; the value is 16:15 if you are using non-ITU guidelines (based on the full 720px width)

    4/3 = 720/576 * 16/15

    Most NLE's use ITU for their calculations (12:11 for 4:3 PAL) , even with the full 720px width

    Whoops. My bad, sorry - ignore me.

    You guys were deciding between 704 and 702 width, not 720.

    It's supposed to be 702 width but most software rounds it to 704.
  17. Member
    Originally Posted by jagabo View Post
    You should look through a variety of shots that your camcorder puts out and adjust the levels accordingly. Don't assume Levels(0, 1.0, 255, 2, 235, coring=false) is right based on that one shot. Check what it delivers at the low end too.
    After playing around a bit with various dark scenes, light ones and combined ones, and comparing various settings and filters, it seems ColorYUV(off_y=2, gain_y=-22) is a good enough baseline starting point for this camera, at least for scenes like these. The '2' comes from David's suggested mapping instead of 16, which means the gain needs adjusting by 2 as well. I checked measurements of this adjustment on the above scenes with jagabo's Blur() and finally did a side-by-side encode of original and corrected, and on the TV the result is even more subtle than on the PC and very pleasing.
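Assuming the same ColorYUV luma formula as before (y' = y*(256+gain_y)/256 + off_y), a quick Python check confirms these settings map 0 to 2 and 255 to ~235:

```python
def coloryuv_luma(y, off_y=2, gain_y=-22):
    # ColorYUV: luma is scaled by (256 + gain_y)/256, then the offset is added
    return y * (256 + gain_y) / 256 + off_y

print(coloryuv_luma(0), round(coloryuv_luma(255), 2))
```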

    edit: I neglected to mention that as suggested for RGB conversion I'll use the PC matrices for safety's sake anyway.
  18. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    The images posted earlier all have blown-away brights exceeding RGB 255. Taking all those measurements in YUV is instructive. However, TVs, monitors, projectors, etc., don't display YUV. The original source is too bright, black levels and gamma are too high, and brights are irreparably clipped at the outset.

    original:
    Image
    [Attachment 16809 - Click to enlarge]


    Levels only, YUV + mostly RGB. Needs denoising, dot crawl is more obvious. YUV isn't the only colorspace in town.
    Image
    [Attachment 16810 - Click to enlarge]
    Last edited by sanlyn; 25th Mar 2014 at 20:08.
  19. Member
    Originally Posted by poisondeathray View Post
    IMO , it's more difficult to do color work in avisynth.
    That's the impression I've been getting. But you've given me great pointers for going about this in other tools. By the time I turn to this work, hopefully I'll be able to see how my new copy of Movie Studio fares with it!
  20. Member
    Originally Posted by sanlyn View Post
    The images posted earlier all have blown-away brights exceed ing RGB 255. ... The original source is too bright, black levels and gamma are too high, and brights are irreparably clipped at the outset.
    Levels only, YUV + mostly RGB. Needs denoising, dot crawl is more obvious. YUV isn't the only colorspace in town. Bright clouds masking sunlight (but they shouldn't be RGB 255. Go outside and look.), and the foreground is overcast/indirect light.
    I'm not following what exactly you did to get to the improved example? I know not to expect a be-all-and-end-all recipe, but it would be instructive to start off with what you did and play around with that, thanks.
  21. Banned
    Originally Posted by fvisagie View Post
    I'm not following what exactly you did to get to the improved example? I know not to expect a be-all-and-end-all recipe, but it would be instructive to start off with what you did and play around with that, thanks.
    I started this correction a couple of days ago but was interrupted. Meanwhile, poisondeathray's post (#75) touched on the methods I used before I could post the results. I did this first with histograms and image filters in AfterEffects; but that's a bit unfair, as many people don't have AE or Color Finesse to play with, so I accomplished more or less the same thing in VirtualDub. Anyway, ColorYUV() and Levels() were the first step, to bring levels and chroma into a usable range.

    ColorYUV(off_y=-15)
    Levels(2, 0.95, 255, 10, 235, coring=false)


    As pdr noted, there are some things you can do in YUV (and which often you MUST do) before going to RGB to pinpoint more specific areas that you can't do with YUV (Well, no, I shouldn't say you can't do it in YUV, because I've seen people hit some very specific areas in YUV, but how they figured it out seems to be a totally undocumented secret). Given more time I would have got into the dither() plugin to help smooth some of the "spikes" in the spectrum, but I've seen dither introduce some unwanted effects. Probably worth a try, though.

    You remarked earlier that Levels() "changed some colors". True, but ColorYUV also "changes" some colors; both filters map them inside the preferred borders. Invalid chroma is as troublesome as invalid luma. The avs script reined in luma+chroma. A YUV histogram of that code reveals an abrupt clipping point at RGB 220 or so. You can keep those brights below 255, but to no avail; practically speaking the original capture has no data above a certain point.

    Below: original levels (left=YUV, right=RGB). The YUV histogram clearly shows hard bright clipping on the right, anemic black levels, and a deficit of midtone values. The RGB histogram shows the same thing, but here the color clipping and poor blacks are more clearly seen.
    Image
    [Attachment 16814 - Click to enlarge]


    Below: YUV levels adjust, Avisynth only. The YUV chart shows clipping, so bringing brights farther down offers no more detail. There is just a very slight creep into the below-RGB-16 area, but no detail down there anyway. The RGB histogram shows the obvious bright detail destruction in luma as well as chroma.
    Image
    [Attachment 16815 - Click to enlarge]


    Below: Adjust with gradation curves. Primary adjust (left) and secondary tweak (right). Could probably get those peaks smoothed with dither() and some contrast masking. Gradation curves raised the darker parts but kept blacks intact, and darkened bright clipped areas. No obvious banding, but note that the clouds have an abrupt cutoff above the shadow areas at about RGB 128 (from the midtones on up), and there's noise in the brighter colors. I'd opt for a better capture; the original IRE blacks are too high, and the image is too bright.
    Image
    [Attachment 16816 - Click to enlarge]


    Further RGB tweaks with ColorMill:
    MIDDLE POINT - Middle Point=-3,Booster=-56, Base Shift=0
    GAMMA - Red=3, Blue=3, Green=3
    LEVELS - Dark=-8, Middle=0, Light=5
    SATURATION - +5%
    I think my posted image has the cloud shadow a bit dark, but....it was getting late.
    Last edited by sanlyn; 25th Mar 2014 at 20:09.
  22. Originally Posted by fvisagie View Post
    Originally Posted by poisondeathray View Post
    IMO , it's more difficult to do color work in avisynth.
    That's the impression I've been getting. But you've given me great pointers for going about this in other tools. By the time I turn to this work, hopefully I'll be able to see how my new copy of Movie Studio fares with it!
    If you were bringing this into Vegas Studio, the problem with the Studio version is that it doesn't have scopes (waveform, RGB histogram, parade, etc.). This is one of the main things differentiating it from the "pro" version, and it's a severe limitation IMO. You can make adjustments "by eyeballing", but scopes are almost mandatory IMO, as the human eye isn't always accurate and can be "tricked". In this example below, what color or "shade" are A and B?

    [Attachment: checkershadow_illusion4med.jpg]



    There are a near-infinite number of subjective "looks" you might be going for. I dislike making suggestions on color because it's so subjective, unless you have a reference or a clear description or picture in your head of what you want. The only "objective" thing that I think everyone can agree on is bringing down the superbrights.

    Either way, one thing I recommend - whatever you use - is to increase both the highlight contrast and the shadow/midtone contrast. I would bump up the brightness of the grass, along with the contrast. You said you overexposed this on purpose to compensate for the "dark" foreground - well, in reality the foreground probably isn't that dark, and in reality you probably saw the separation in the clouds, not blobs of white. This is just a limitation of the camera (low dynamic range).

    In general, the more severe your adjustments to footage, the more it breaks apart and reveals compression/codec "nastiness", and the more you need to denoise and potentially degrade the footage. If you look at the example below, there are color splotches (look at the frame edges, treeline) and more noise and crap revealed by the changes made (you would probably add a luma/chroma denoiser)

    In an NLE, you can do this with something like curves (adjust different regions differently), but you have to be very careful, or you will get posterization effects when doing this on 8-bit footage. Not to pick on Sanlyn, but you can see an example of that in the lower grey clouds, where the separation is lost (it looks like "burnt grey")

    In this example with avisynth only, there is higher contrast in the clouds (the shapes are more discernible) and in the foreground - there is more "pop" to the image because the contrast is higher in both areas. The "flatter" the image, the more washed out it looks, especially when black is elevated. But sometimes you don't want something like this - sometimes those changes will distract the viewer's focus from the important part or the story you're trying to tell (I don't know what it is here, but you get the idea). You can look at the waveform and see the difference: the 2 "bands" representing clouds and foreground are more spread out. Anyways, the avisynth "trick" here was to use HDRAGC to selectively increase contrast in the foreground/grass regions (invert was used to apply it to the brighter cloud regions). But it's hard to keyframe changes in avisynth...

    Code:
    levels(0,1,255,0,237,false, dither=true)
    invert
    hdragc(max_gain=2, coef_sat=0.75)
    invert
    levels(12,1,255,0,255,false, dither=true)
    hdragc(max_gain=2, coef_sat=0.88)
    tweak(sat=1.2, coring=false)
    So the generic "learning point" here is to increase the contrast in 2 regions - spread out the data more to make use of more tonal range
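The invert trick generalizes: wrapping any shadow-only operator between two inversions makes it act on the highlights instead. A toy Python sketch (the lift function is hypothetical, standing in for HDRAGC's shadow boost):

```python
def lift_shadows(y, gain=2.0, knee=100):
    # toy shadow-only boost standing in for HDRAGC: brights are left alone
    return min(255.0, y * gain) if y < knee else float(y)

def invert(y):
    return 255 - y

def lift_highlights(y):
    # sandwiching the shadow tool between inverts makes it work on brights
    return invert(lift_shadows(invert(y)))

print(lift_shadows(40))      # shadows boosted
print(lift_highlights(40))   # shadows untouched by the inverted pass
print(lift_highlights(215))  # brights now get the treatment
```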

    [Attachment: compare2.png]
  23. Banned
    Yep, looks good. I played with some "auto" filters as well, but ran out of time. Nice work. I find AGC a bit tricky to manage (it can really blow an image to pieces if you ain't careful), but it works well here.

    BTW, I saw the Adelson image you posted and many other examples of optical illusion. The "A" and "B" squares are exactly the same grayscale value, RGB 120-120-120. One can use a pixel sampler such as CSamp or the built-ins in NLE's to check pixel values.
    Last edited by sanlyn; 25th Mar 2014 at 20:10.
  24. Member
    Thanks guys, you've gone to a lot of trouble here and I sincerely appreciate it. Best of all, you've put it simply enough for me to follow it all!

    Originally Posted by poisondeathray View Post
    But it's hard to keyframe changes in avisynth...
    Just checking, but one brute-force way would be to create Trim()s to separate treatments? It would probably be very tedious but at least as accurate as the Trim()s?
  25. Originally Posted by fvisagie View Post

    Just checking, but one brute-force way would be to create Trim()s to separate treatments? It would probably be very tedious but at least as accurate as the Trim()s?

    Yes, but the problem with that is there would be no keyframe interpolation. The changes will be abrupt, not smooth. There will be jumps as your settings switch to the next set, instead of gradual transitions. You have no control.
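The difference is easy to picture: Trim()-spliced settings step, keyframes ramp. A Python sketch with hypothetical frame numbers and gain values:

```python
def stepped_gain(frame, cut_frame=100, before=1.0, after=1.4):
    # Trim()-spliced treatments: the setting jumps at the splice point
    return before if frame < cut_frame else after

def keyframed_gain(frame, start=50, end=150, before=1.0, after=1.4):
    # keyframe interpolation: the setting ramps smoothly between two keyframes
    if frame <= start:
        return before
    if frame >= end:
        return after
    t = (frame - start) / (end - start)
    return before + t * (after - before)

print(stepped_gain(99), stepped_gain(100))      # abrupt jump at the splice
print(keyframed_gain(99), keyframed_gain(100))  # tiny per-frame change
```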
  26. Member
    I've encountered some interesting things and hopefully you can confirm my interpretation.

    As background, by doing an end-to-end test I learnt that my current basic workflow preserves both invalid YUV and RGB values. Any corrections in such a workflow would therefore be purely for esthetic purposes - no information would be lost by clipping.

    After correcting luma, some 'useful' illegal values remained. When I then 'lost' these by using Rec matrices to convert to and from RGB as I would for Deshaker, there was some clearly visible (and measurable) damage, esp. in very bright and very dark areas. This didn't occur with PC matrices. Therefore, I might as well try to preserve any 'useful' illegal values that remain after correction by using PC matrices when converting for Deshaker.

    Trying to put this into practice, the first thing I discovered was that it seems best to also postpone correction as late as possible. E.g. correcting luma after QTGMC produces visibly better results than correcting first. My interpretation is that correcting luma (by compressing the range) effectively reduces the luma "resolution". More pixels end up looking similar than before (banding), which will change how subsequent filters process them, right?
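The "reduced luma resolution" intuition can be made concrete: squeezing [0,255] into [16,235] leaves only 220 distinct 8-bit codes, so previously distinct pixel values collapse onto the same value. A one-liner in Python:

```python
# squeezing [0,255] into [16,235] leaves fewer distinct 8-bit luma codes,
# so previously distinct pixels collapse onto the same value (banding risk)
compressed = {round(y * (235 - 16) / 255) + 16 for y in range(256)}
print(len(compressed))   # 220, down from 256
```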

    The second thing I discovered was more vexing. ConvertToXYZ(matrix="PC.abc") requires width to be a multiple of 4. I had intended to crop only 10 junk pixels from 720. Keeping in mind that luma has been corrected but useful illegal values remain, the choice now becomes:
    discarding more good border pixels and have fewer to work with in Deshaker and other filters, but with the benefit that useful illegal luma values are retained using PC matrices;
    vs.
    using Rec matrices to keep the border pixels, but lose useful illegal luma values in the process.
    If I end up cropping to 704 pixels before encoding, the first choice then seems the obvious one - there are some pixels to spare, I guess? But if I want to hang on to every good pixel, the choice seems less clear - how does one measure the effect of losing more pixels vs. the effect of clipping useful luma values? There's no common baseline to measure against; the images are different sizes after different cropping, scaling them for comparison would affect quality, etc.
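    The width arithmetic behind this dilemma, as a quick Python check:

```python
width = 720 - 10           # cropping the 10 junk pixels leaves 710
print(width % 2 == 0)      # True: mod-2, fine for a YV12 Crop()
print(width % 4 == 0)      # False: fails the mod-4 RGB-conversion requirement
# Nearest mod-4 options: crop 12 pixels (708) or only 8 pixels (712)
print(width - width % 4, width + (4 - width % 4) % 4)
```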
    Quote Quote  
  27. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Search Comp PM
    Originally Posted by poisondeathray View Post
    Yes, but the problem with that is there would be no keyframe interpolation. The changes will be abrupt, not smooth. There will be jumps as your settings switch to the next set, instead of changing gradually. You have no control.
    I hadn't considered that, thanks.
    Quote Quote  
  28. Originally Posted by fvisagie View Post
    I've encountered some interesting things and hopefully you can confirm my interpretation.

    As background, by doing an end-to-end test I learnt that my current basic workflow preserves both invalid YUV and RGB values. Any corrections in such a workflow would therefore be purely for esthetic purposes - no information would be lost by clipping.
    Not quite, there are still YUV values that do not map to sRGB (out-of-gamut colors that lie outside the color cube), regardless of what you do. You probably aren't concerned with those
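    A concrete example of such a value, using the standard BT.601 studio-swing equations in Python (the sample pixel is arbitrary, and the coefficients are the usual approximate ones):

```python
def yuv601_to_rgb(y, cb, cr):
    # Standard BT.601 studio-swing YCbCr -> RGB (approximate coefficients)
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    return r, g, b

# A perfectly legal studio-range pixel: black-level luma, maximum red chroma
r, g, b = yuv601_to_rgb(16, 128, 240)
print(round(r, 1), round(g, 1), round(b, 1))  # green comes out negative
```

    No level correction fixes this pixel: it simply has no sRGB representation, which is what "outside the color cube" means.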

    Trying to put this into practice, the first thing I discovered was that it seems best to postpone correction as late as possible. E.g. correcting luma after QTGMC produces visibly better results than correcting first. My interpretation is that correcting luma (by compressing the range) effectively reduces the luma "resolution". More pixels end up looking similar than before (banding), which will change how subsequent filters process them, right?
    Note this isn't necessarily always true. It depends on the situation, which filters you are using, and the source footage. If you've found it works better in this case, then go ahead



    The second thing I discovered was more vexing. ConvertToXYZ(matrix="PC.abc") requires width to be a multiple of 4. I had intended to crop only 10 junk pixels from 720. Keeping in mind that luma has been corrected but useful illegal values remain, the choice now becomes:
    discarding more good border pixels and have fewer to work with in Deshaker and other filters, but with the benefit that useful illegal luma values are retained using PC matrices;
    vs.
    using Rec matrices to keep the border pixels, but lose useful illegal luma values in the process.
    If I end up cropping to 704 pixels before encoding, the first choice then seems the obvious one - there are some pixels to spare I guess? But if I want to hang on to every good pixel, the choice seems less clear - how does one measure the effect of losing more pixels vs. the effect of clipping useful luma values? There's no common baseline to measure against, the images are different sizes after different cropping, scaling for comparison would affect quality, etc?
    Are you sure about that 10 pixels? YV12 requires mod-2 width, progressive or interlaced.
    http://avisynth.org/mediawiki/Crop

    It seems to work here with width 710 and ConvertToXYZ (whatever matrix). What error are you getting?

    One thing is certain - Deshaker works better if you crop things like black borders (you don't have any, just junk borders). If you don't want to crop, another option is to use Deshaker's settings to ignore edge pixels during analysis (look in the pass 1 parameters; you can enter values for left, top, right, bottom)

    Personally I would keep all the pixels until the very end, and make the decision to crop (and/or add a pillarbox) when encoding to your final format. Personally I would use a PC matrix and do everything in RGB (only because it's easier for color work, not for "best practices" - "best practices" would dictate that you do everything in YUV, and that means no Deshaker)
    Last edited by poisondeathray; 18th Mar 2013 at 10:18.
    Quote Quote  
  29. Member
    Join Date
    Aug 2007
    Location
    Isle of Man
    Search Comp PM
    Originally Posted by poisondeathray View Post
    It seems to work here with width 710 and ConvertToXYZ (whatever matrix). What error are you getting?
    Code:
    # Starting off with 720x576 YV12 interlaced
    QTGMC(Preset="Slower")
    Crop(0, 0, -10, 0)
    ConvertToRGB(matrix="PC.601", interlaced=false)
    which fails with:
    Code:
    ConvertToRGB: Rec.709 and PC Levels support require MMX and horizontal width a multiple of 4
    (original.avs, line 21)
    The result is the same with both interlaced and progressive content with the corresponding 'interlaced=' settings. I guess it's an AVS 2.58 vs. 2.60 thing?

    EDIT: PS. Thanks for the other hints.

    Personally I would keep all the pixels until the very end, make that decision to crop (and/or add pillarbox) when encoding to your final format. Personally I would use PC matrix
    And that's what this vexing error is preventing.
    Quote Quote  
  30. Originally Posted by fvisagie View Post
    Originally Posted by poisondeathray View Post
    It seems to work here with width 710 and ConvertToXYZ (whatever matrix). What error are you getting?
    Code:
    # Starting off with 720x576 YV12 interlaced
    QTGMC(Preset="Slower")
    Crop(0, 0, -10, 0)
    ConvertToRGB(matrix="PC.601", interlaced=false)
    ConvertToRGB: Rec.709 and PC Levels support require MMX and horizontal width a multiple of 4
    (original.avs, line 21)
    The result is the same with both interlaced and progressive content with the corresponding 'interlaced=' settings. I guess it's an AVS 2.58 vs. 2.60 thing?

    It might be 2.58 vs. 2.6. That code works fine here on your video sample... the output video is 710x576 as expected, no errors
    Quote Quote  


