VideoHelp Forum
  1. The blacks are at Y=45 and whites at Y=220.
Where can I see the Y values? Right now I move my mouse around and use the cursor position (so if the y position is 35, I compute 255-35 to know the whites are at 220).

Also, I played around with the Histogram filter in VirtualDub yesterday. It samples the whole video, not a specific frame. It's hard to make smart adjustments based on a specific frame; I have to look through a couple of frames until I find one where the whites are above 220. Can I do something similar with AviSynth?

    you use Levels(45, 1.0, 220, 16, 235)
OK. So now it makes much more sense. The 4 values are pretty easy, because they are direct outputs based on the graph. The gamma will require experimenting and seeing what works best for your eye?

    levels(12,1.0,255,0,255,coring=false)
So Sharc, I wondered why you picked 12/255? Did you perhaps catch a frame with higher whites/lower blacks? Also, it seems like Sharc is adjusting based on 0-255 and jagabo on 16-235? Last time I read about that, 0-255 was better for RGB and PC monitors/video cards, and 16-235 was more standard in the TV/film industry?
    Last edited by Okiba; 3rd Oct 2020 at 01:21.
  2. levels(12,1.0,255,0,255,coring=false)
So Sharc, I wondered why you picked 12/255? Did you perhaps catch a frame with higher whites/lower blacks? Also, it seems like Sharc is adjusting based on 0-255 and jagabo on 16-235? Last time I read about that, 0-255 was better for RGB and PC monitors/video cards, and 16-235 was more standard in the TV/film industry?
    I had a quick look at your results-italy.mp4 and found that the minimum Y was 12 using 'colorYUV(analyze=true)'. This happened very rarely though.
I may have misunderstood that you want to watch it on a PC monitor and keep it at 4:2:2 (which is not commonly supported by TVs), so I went for the PC range 0 ..... 255. The resulting histogram looked ok, but maybe I should have used the oscilloscope view instead.
    Anyway, main purpose was to draw your attention to the 'levels' filter for further experimentation, rather than recommending specific values.
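For reference, a minimal sketch of such an analysis pass (the filename and format conversion are placeholders for your own workflow):
Code:
AviSource("capture.avi")          # hypothetical lossless capture
ConvertToYV16(interlaced=true)    # match the 4:2:2 working format used in this thread
ColorYUV(analyze=true)            # overlays Min/Max and "Loose" Y statistics on each frame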
    Last edited by Sharc; 3rd Oct 2020 at 03:45.
  3. I had a quick look at your results-italy.mp4 and found that the minimum Y was 12 using 'colorYUV(analyze=true)'.
Strange. I could be wrong, but I think the VirtualDub Histogram starts coloring a bar red when it's below 16 and/or above 235. So it's strange you found a frame with 12; it shouldn't happen. That being said, you're checking the final product (the FFmpeg encode) and not the raw file. Is it possible the encoding caused that?

'colorYUV(analyze=true)'
I didn't know that trick. If it's precise, wouldn't it be easier than checking the black/white levels manually on the graph?

I may have misunderstood that you want to watch it on a PC monitor and keep it at 4:2:2 (which is not commonly supported by TVs)
I'm not really. I plan to watch it on both a PC monitor and a TV. So 16-235, not to clip things on the TV. Was that a wrong thought?

    Anyway, main purpose was to draw your attention to the 'levels' filter for further experimentation, rather than recommending specific values.
You sure did!
Originally Posted by Okiba
    I had a quick look at your results-italy.mp4 and found that the minimum Y was 12 using 'colorYUV(analyze=true)'.
Strange. I could be wrong, but I think the VirtualDub Histogram starts coloring a bar red when it's below 16 and/or above 235. So it's strange you found a frame with 12; it shouldn't happen. That being said, you're checking the final product (the FFmpeg encode) and not the raw file. Is it possible the encoding caused that?
I don't really know. There are some hints about how VDub and the Avisynth 'levels' filter correspond with each other in the Avisynth wiki. Or someone else may help.

'colorYUV(analyze=true)'
I didn't know that trick. If it's precise, wouldn't it be easier than checking the black/white levels manually on the graph?
It's up to you what you prefer. See the Avisynth wiki doc. In addition to the Min./Max. values it also returns "Loose" values which discard the 0.4% extremes; you get values around 40 for the "Loose Minimum" and around 220 for the "Loose Maximum", which are close to what jagabo stated and which may be more relevant in practice. Perhaps these "Loose" values are closer to what the VDub filter returns? I don't know.
Also keep in mind that captions like date and time will influence these values. So one may decide to crop these off for analysis, as in the sketch below.
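For instance, a sketch that runs the analysis with the caption area cropped off (the 32-pixel caption height is a hypothetical value):
Code:
# hypothetical: cut a 32 px date/time caption off the bottom before analyzing
Crop(0, 0, 0, -32)
ColorYUV(analyze=true)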

I'm not really. I plan to watch it on both a PC monitor and a TV. So 16-235, not to clip things on the TV. Was that a wrong thought?
    Stay within the TV range. It's safer. Your final encode should be 4:2:0 (YV12) though for best compliance with various playback scenarios/devices.

    If you find some time you may want to study 'ColorYUV' in more detail .....
    Last edited by Sharc; 3rd Oct 2020 at 06:24. Reason: typos
What you guys aren't understanding is that those very low black and very high white values aren't really parts of the image. They are noise and oversharpening halos. You don't need to retain those. And RGB values read off the screen in VirtualDub are not Y values. The default in VirtualDub is a rec.601 conversion (Y=16-235 to RGB=0-255 contrast expansion). What you need to look at are large areas of black and white. Note that the waveform monitor from TurnRight().Histogram().TurnLeft() is 256 pixels tall -- one pixel for each possible Y value. So you can count the pixels to get Y values:

[Attachment 55231]


    All the little peaks below 45 and over 220 are oversharpening halos and noise.

    Not every frame will have full black and full white elements. So you also have to use your judgement about whether the brightest and darkest parts of the image/shot/video (even in very large patches) are reflective of the black and white levels.
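For reference, a minimal sketch of that waveform view (source name hypothetical):
Code:
AviSource("capture.avi")            # hypothetical source
TurnRight().Histogram().TurnLeft()  # Histogram's 256-pixel luma pane becomes a waveform above the frame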
  6. ^^^
Thank you for these explanations, jagabo. I wasn't aware of the Y/pixel calibration of the waveform monitor using a pixel ruler. Now the "Loose" values of colorYUV(analyze=true), attempting to discard noise and halos, make sense.
  7. Here's an example that shows oversharpening halos somewhat like those created by VHS decks' sharpening filters:

    Code:
# create black/white vertical bars, 32 pixels thick, 256 pixels wide in total
    BlankClip(width=32, height=256)
    StackHorizontal(last, last.Invert())
    StackHorizontal(last, last, last, last)
    ConvertToYV12()
    ColorYUV(off_y=-45).ColorYUV(off_y=45) # force blacks to 45
    ColorYUV(off_y=35).ColorYUV(off_y=-35) # force whites to 220
    
    # show unfiltered bars on the left, sharpened on the right
    StackHorizontal(last, UnsharpMask(radius=3, strength=50, threshold=0))
    
    HistogramOnBottom()
    ConvertToRGB(matrix="pc.601") # so RGB values on screen reflect Y values from the video
[Attachment 55232]


The left half of the image is simply vertical bars of brightness Y=45 and Y=220. The right half is the same bars after a sharpening filter. After sharpening, the middle of the dark bars remains at 45, but as you get close to the white bars the values drop to 35, then 25, then 15. The middle of the white bars remains at 220, but as you approach a black bar the values rise to 229, 239, and 249. Those values aren't part of the original image; they were created by the sharpening filter.
  8. Your final encode should be 4:2:0 (YV12) though for best compliance with various playback scenarios/devices.
Oh really? Right now both the lossless format and the MPEG are 4:2:2. Why is 4:2:0 preferred? Which devices won't support 4:2:2, and what will happen if I try to play 4:2:2 on those?

The default in VirtualDub is a rec.601 conversion (Y=16-235 to RGB=0-255 contrast expansion)
Just to be clear here, I'm speaking about the CAPTURE histogram, not the Levels filter (see attached). All the "red" sections are below 16 and above 235? When I say I made sure everything doesn't clip, I mean that none of the videos were displaying any red bars.

    All the little peaks below 45 and over 220 are oversharpening halos and noise.
How precise should it be? I tested a couple of videos. They all range from 43 to 49 blacks, for example. Is there a big difference between those small numbers? Or can I leave it at 45 for most videos instead of fiddling with small numbers?

    So you also have to use your judgement about whether the brightest and darkest parts of the image/shot/video (even in very large patches) are reflective of the black and white levels.
So that's why it's better to do it manually frame by frame and not use something that sums all the information like the Levels filter does in VirtualDub. Because a lot of the extreme information is just noise, I should manually look for those thick sections.

So that makes me wonder. First of all, does "Levels" mean brightness/contrast settings?
And also, I will try to check a very dark scene and a very bright scene. If the values are still 45-220, is it safe to say Levels is actually a global option (for a specific setup) and I can use it in the generic QTGMC script? Up until now I was sure brightness/contrast were per-scene options.
[Attachment 55233: Histo.png]

Originally Posted by Okiba
    Your final encode should be 4:2:0 (YV12) though for best compliance with various playback scenarios/devices.
Oh really? Right now both the lossless format and the MPEG are 4:2:2. Why is 4:2:0 preferred? Which devices won't support 4:2:2, and what will happen if I try to play 4:2:2 on those?
Chroma subsampling 4:2:2 is basically better than 4:2:0 (double the vertical color resolution), but at the cost of storage space. 4:2:2 is the right format for capturing interlaced VHS with Huffyuv. Leave it at 4:2:2 as long as possible along your workflow. Some (Avisynth) filters may however still support 4:2:0 only; you would have to check the individual filters.
SW players will normally play 4:2:2 without issues.
4:2:0 is standardized for DVD, AVCHD, Blu-ray and PAL-DV, to name a few. HW players may reject 4:2:2 sources as "unsupported format" or similar. My TV also doesn't play 4:2:2 mp4 files, for example, only 4:2:0. Disc authoring SW (if this should matter) may also reject 4:2:2 footage.
So I wouldn't prefer 4:2:0 over 4:2:2, except for its broader playback compatibility and lower file size - if this should matter.
The x264 encoder supports both 4:2:2 and 4:2:0 subsampling formats; the default is 4:2:0.
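If you prefer to do that conversion in the script rather than in the encoder, a minimal sketch (assuming a progressive clip, e.g. after QTGMC):
Code:
ConvertToYV12()   # 4:2:2 -> 4:2:0 for the final encode; use interlaced=true on interlaced content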
    Last edited by Sharc; 3rd Oct 2020 at 10:19.
Originally Posted by Okiba
    Your final encode should be 4:2:0 (YV12) though for best compliance with various playback scenarios/devices.
Oh really? Right now both the lossless format and the MPEG are 4:2:2. Why is 4:2:0 preferred? Which devices won't support 4:2:2, and what will happen if I try to play 4:2:2 on those?
Pretty much all commercial distribution is 4:2:0: DVD, BD, online streaming. Computers are about the only thing that will play 4:2:2 h.264; most other devices will not. I believe more modern devices will play 4:2:2 h.265. You typically get a black screen (audio only) or an error message (no audio or video) when 4:2:2 isn't supported.

Originally Posted by Okiba
The default in VirtualDub is a rec.601 conversion (Y=16-235 to RGB=0-255 contrast expansion)
Just to be clear here, I'm speaking about the CAPTURE histogram, not the Levels filter (see attached). All the "red" sections are below 16 and above 235?
Yes, that histogram is of Y values, and the red areas are Y<16 and Y>235.

Originally Posted by Okiba
When I say I made sure everything doesn't clip, I mean that none of the videos were displaying any red bars.
That is why histograms aren't very useful: you don't know what parts of the picture are outside the range. Also, VirtualDub's capture histogram is not linear; I believe it's logarithmic. So infrequent values are exaggerated.

For example, with VHS caps the left/right black borders are typically at full black, but the active picture may have elevated black levels. If you look only at a histogram you don't know for sure if the red ends are because of the black borders or the active picture.

Originally Posted by Okiba
    All the little peaks below 45 and over 220 are oversharpening halos and noise.
How precise should it be? I tested a couple of videos. They all range from 43 to 49 blacks, for example. Is there a big difference between those small numbers? Or can I leave it at 45 for most videos instead of fiddling with small numbers?
This is where you have to use your own judgement. If one shot out of a hundred has blacks at 43 and the rest are all at 49, you'll probably want to use 49 -- unless that one shot is critical for some reason. If half the shots are at 43 and the other half at 49, you may want to go with 43. Or maybe an average of 46, depending on how important the dark detail is.

Also, since you are doing this as a batch pre-processing step, you can let some blacks fall a little below Y=16 and some whites a little above Y=235, because you can always fix them in your later processing. Or just delay all the levels/color adjustments until later.

Originally Posted by Okiba
    So you also have to use your judgement about whether the brightest and darkest parts of the image/shot/video (even in very large patches) are reflective of the black and white levels.
So that's why it's better to do it manually frame by frame and not use something that sums all the information like the Levels filter does in VirtualDub. Because a lot of the extreme information is just noise, I should manually look for those thick sections.
    Yes.

Originally Posted by Okiba
So that makes me wonder. First of all, does "Levels" mean brightness/contrast settings?
Yes, you can achieve the same adjustments with the cont and bright settings in Tweak(), or the gain_y and off_y settings in ColorYUV(). Though AviSynth's Levels() filter also does the same to the chroma (increasing or decreasing color saturation), so you would also have to use sat in Tweak(), or cont_u and cont_v in ColorYUV().
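For instance, a rough luma-only sketch of the same 45/220 stretch with chained ColorYUV() calls (gain value rounded; note the chroma is left untouched here, unlike with Levels()):
Code:
# (235-16)/(220-45) = ~1.2514; ColorYUV multiplies luma by (gain_y+256)/256
ColorYUV(off_y=-45)   # shift blacks down to 0
ColorYUV(gain_y=64)   # ~1.25x stretch
ColorYUV(off_y=16)    # shift blacks back up to 16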

Originally Posted by Okiba
And also, I will try to check a very dark scene and a very bright scene. If the values are still 45-220, is it safe to say Levels is actually a global option (for a specific setup) and I can use it in the generic QTGMC script? Up until now I was sure brightness/contrast were per-scene options.
    It's both. You want to make sure your caps fall within a reasonable range. Then, assuming it's needed and you care, adjust shot by shot for the final output.
Pretty much all commercial distribution is 4:2:0: DVD, BD, online streaming.
4:2:0 is standardized for DVD, AVCHD, Blu-ray and PAL-DV, to name a few.
Funny. I never actually tested whether the MPEGs I shared here worked on my TV. I just tried streaming them with my RPi Kodi device, and - black screen. I created new samples with -pix_fmt yuv420p, and those indeed work. So I guess I will be using 4:2:0 for the lossy MPEG. Does it affect Levels somehow?

    For example, with VHS caps the left/right black borders are typically at full black but the active picture may have elevated black levels.
To negate that, I cropped the black bars during playback, tweaked the histogram based on that, and removed the cropping for the actual capture.

    It's both.
I think that's what I will do then. I will sample a couple of other videos, see where the blacks/whites hit, and try to find a single number that fits most. I will adjust specific videos if I find the result problematic.

So here's the updated batch script:

    Code:
    SetFilterMTMODE("QTGMC", 2)
    AviSource("Loseless.avi")
    ConvertToYV16(interlaced=true) 
    AssumeTFF()
    QTGMC(Preset="Slower", EdiThreads=3)
    crop(20, 6, -20, -6)
    Levels(X, 1.0, X, 16, 235)
    ChromaShiftSP(Y=3)
    MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
    Prefetch(3)
Originally Posted by Okiba
I created new samples with -pix_fmt yuv420p, and those indeed work. So I guess I will be using 4:2:0 for the lossy MPEG. Does it affect Levels somehow?
    No.

    Originally Posted by Okiba View Post
I will sample a couple of other videos, see where the blacks/whites hit, and try to find a single number that fits most. I will adjust specific videos if I find the result problematic.
If you are going to make levels changes later, it may make more sense to do it all later rather than making a rough change in your pre-processing and finer changes later. Each time you adjust levels/saturation you get some quantization errors. On the other hand, making the rough changes before QTGMC() will help mask the quantization errors at that step.
Are the quantization errors permanent? So changing levels multiple times will make the video look worse over time?
Due to the huge number of videos, I'm probably not going to tweak all of them, just a couple of key videos later (for example, my old folks' wedding), probably when I'm also going to tweak colors.

By the way, using 45/220 makes everything a bit "too" dark. Is that because I also need to fix other stuff, like saturation?
And sure, thanks for the tip. I now do all the tweaking before QTGMC:

    Code:
    SetFilterMTMODE("QTGMC", 2)
    AviSource("Loseless.avi")
    ConvertToYV16(interlaced=true) 
    crop(20, 6, -20, -6)
    Levels(X, 1.0, X, 16, 235)
    ChromaShiftSP(Y=3)
    MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
    AssumeTFF()
    QTGMC(Preset="Slower", EdiThreads=3)
    Prefetch(3)
AviSynth looks so powerful, it's amazing really. I was sure it was just a way to use API calls on video files, but from what you shared I can see you can also write functions and use them. When I have time, I'll check it out. Maybe I can write a script that auto-selects the cropping values (up to the end of the masking bar).
Originally Posted by Okiba
Are the quantization errors permanent? So changing levels multiple times will make the video look worse over time?
You may want to add 'dither=true', like
    Code:
Levels(X, 1.0, X, 16, 235, coring=false, dither=true)
    It helps against color banding.

Maybe I can write a script that auto-selects the cropping values (up to the end of the masking bar).
For automatic border cropping you could try AutoCrop or RoboCrop:
    http://avisynth.nl/index.php/AutoCrop
    http://avisynth.nl/index.php/RoboCrop
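A minimal usage sketch (plugin path hypothetical; check the wiki pages above for the actual parameters):
Code:
LoadPlugin("RoboCrop.dll")   # hypothetical path to the plugin
AviSource("Loseless.avi")
RoboCrop()                   # samples a range of frames and crops the detected black borders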

    Edit: Hmmm, I probably misunderstood. You mean finding the 'X' values for the levels, right?
    Last edited by Sharc; 4th Oct 2020 at 02:58.
For automatic border cropping you could try AutoCrop or RoboCrop
Niceeee. No need to reinvent the wheel! Will test those soon. I wonder how it works logically though. How does it know that a perfectly black line/row is a black bar and not just a completely dark video? I guess not many videos are 0-dark for a whole line/row though.

    dither=true
Googled dither and what banding is. Sounds like there is no reason not to use dither=true. But why does Levels - which is contrast/brightness settings - have a color-related option in it?

    coring=false
The documentation mentions:

    coring was created for VirtualDub compatibility, and it remains true by default for compatibility with older scripts. In the opinion of some, you should always use coring=false if you are working directly with luma values (whether or not your input is 16d-235d).
I assume Levels changes luma values, and therefore it should be false according to those "some opinions"?
Originally Posted by Okiba
    coring=false
The documentation mentions:
    coring was created for VirtualDub compatibility, and it remains true by default for compatibility with older scripts. In the opinion of some, you should always use coring=false if you are working directly with luma values (whether or not your input is 16d-235d).
I assume Levels changes luma values, and therefore it should be false according to those "some opinions"?
I don't have a straight answer to your question, but you could visualize the effect of 'coring' side by side with a script like
    Code:
    v1=AVISource("your.avi")
    v1=v1.converttoyv16(interlaced=true).crop(20,6,-20,-6)
    v2=v1.levels(42,1.0,220,16,235,coring=true).histogram()
    v3=v1.levels(42,1.0,220,16,235,coring=false).histogram()
    out=stackhorizontal(v2,v3)
    return out
    Coring=true seems to clip the Luma and to compress the Luma range.

    Added:
You may also run jagabo's script from post #30 and change it to 'coring=true' in his 'levels_gamma' function to see what happens.
    Last edited by Sharc; 4th Oct 2020 at 05:49. Reason: Added
Generally you want to use coring=false to make sure illegal values are handled the same as legal values rather than being crushed. Note that the original Levels filter in AviSynth worked in RGB after a rec.601 conversion, so all illegal levels (Y<16, Y>235) were crushed before the levels adjustments were applied (and crushed again after). The defaults in the AviSynth filter emulate that behavior.

Regarding quantization errors, here's an extreme example:

    Code:
    Levels(45,1.0,220,127,129,coring=false)
    Levels(127,1.0,129,16,235,coring=false)
    GreyScale()
[Attachment 55243]


With the first line the entire video is converted to only three Y values: 127, 128, and 129. With the second line those three values are spread out to 16, 126, and 235. GreyScale() is used to remove the chroma to make the luma changes more obvious. All the in-between values that were lost in the first step cannot be recovered in the second step.

The dither option basically adds ordered noise to reduce this banding/posterization. It works OK for small levels adjustments. If you enable the option with the above sequence you'll see what it does.
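That is, the same sequence with dithering enabled:
Code:
Levels(45,1.0,220,127,129,coring=false,dither=true)
Levels(127,1.0,129,16,235,coring=false,dither=true)
GreyScale()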

    If you think the black level is too low after the 45/220 adjustment you can change the values to give you what you want.

    Code:
    Levels(45, 1.0, 220, 25, 235, coring=false)
    Or increase the gamma to increase shadow detail:

    Code:
    Levels(45, 1.5, 220, 16, 235, coring=false)
For automatic border cropping you could try AutoCrop or RoboCrop
Niceeee. No need to reinvent the wheel! Will test those soon. I wonder how it works logically though. How does it know that a perfectly black line/row is a black bar and not just a completely dark video? I guess not many videos are 0-dark for a whole line/row though.
It can use statistics, for example by sampling a range of frames, to make a best estimate of the black borders. One always has to be a bit cautious with automatisms, however. They can be wrong.

If you prefer automatisms you may also want to give 'AutoAdjust' and 'AutoLevels' a try. Again, check the result carefully before you start the final encoding.
    http://avisynth.nl/index.php/AutoAdjust
    http://avisynth.nl/index.php/Autolevels
For cropping I often open a script in VirtualDub2, use its Crop filter to visually get the settings, then use those values in AviSynth's Crop() filter.
Thank you for explaining that, guys. I added "coring=false" to the Levels call in the script.

    All the in-between values that were lost in the first step cannot be recovered in the second step.
But correct me if I'm wrong: in the situation here, everything below 45 is noise, and everything above 220 is also noise. So technically, we don't lose anything for a second Levels pass? Because all we did was dump sharpening halos etc., not real details?

    If you think the black level is too low after the 45/220 adjustment you can change the values to give you what you want.
It's hard to tell really, because me seeing everything a bit dark could be an untrained eye, a wrongly calibrated monitor, etc. I guess I will trust the numbers from the graph for now, as that would be the best option for a newbie.

AutoLevels seems to be early in development (2 versions, one of which is beta), but AutoAdjust looks a bit more mature. Will be interesting to check it.

For cropping I often open a script in VirtualDub2, use its Crop filter to visually get the settings, then use those values in AviSynth's Crop() filter.
I currently cut/mask overscan in VirtualDub, write the masking values down, and use them with Crop(). I have to agree the VirtualDub crop UI is much easier: you can drag the mouse around, which beats guessing pixel sizes with Crop(). The problem is the 200 videos I already captured a while ago. For those, AutoCrop might be a good idea. I will check which one supports sampling multiple frames.
Originally Posted by Okiba
AutoLevels seems to be early in development (2 versions, one of which is beta), but AutoAdjust looks a bit more mature. Will be interesting to check it.
    Out of curiosity I tried AutoLevels (v0.12Beta3) on your 'results-italy.mp4' and 'results-epocot.mp4'. The result is surprisingly good IMHO, using
    Code:
    autolevels(ignore_low=0.005,ignore_high=0.0003,autolevel=true,autogamma=false)
    I cropped the black borders off beforehand as these will affect the result.
    (Note: AutoLevels has its own parameters to ignore borders and other crud btw.)

    Anyway, one would have to do more thorough tests, including fades, changing day/night scenes and so on....
    It may not be possible to find 'one for all' settings for your videos.
    Last edited by Sharc; 4th Oct 2020 at 13:18.
Originally Posted by Okiba
    All the in-between values that were lost in the first step cannot be recovered in the second step.
But correct me if I'm wrong: in the situation here, everything below 45 is noise, and everything above 220 is also noise. So technically, we don't lose anything for a second Levels pass? Because all we did was dump sharpening halos etc., not real details?
I was simulating what would happen if you adjust levels in your QTGMC pre-filtering step, then load those pre-filtered clips into an editor and apply a final brightness and contrast adjustment. I just skipped the save/reload step, and used outrageous adjustments to make the problem visually obvious.

Say for example you apply a Y' = Y * 1.4 adjustment in one step, then later apply a Y' = Y * 1.3 adjustment in a second step. After the first step a pixel with a Y value of 1 becomes 1.4. But the video output is limited to integers, so that 1.4 is rounded down to 1. Then in the second adjustment that 1 becomes 1.3, which rounds down to 1 again. But if you had applied Y' = Y * (1.4*1.3) in a single step you get 1.82, which rounds up to 2. So doing the two adjustments in one step is more accurate than doing them in two smaller steps.
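To see the same effect on a clip, here's a side-by-side sketch using ColorYUV() (its luma multiplier is (gain_y+256)/256, so the gain values below are rounded approximations):
Code:
# two steps, rounding to integers after each, vs. one combined step
a = last.ColorYUV(gain_y=102).ColorYUV(gain_y=77)   # ~1.4x then ~1.3x
b = last.ColorYUV(gain_y=210)                       # ~1.82x in a single step
StackHorizontal(a, b).GreyScale()                   # compare the two halves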
Had some time to play with all the toys you shared.

RoboCrop is based on AutoCrop and aims to be simpler (so, for example, you don't have to tell it explicitly to take a range of frames; it does that by default). However, I had a strange issue with it: I cropped the video manually, and then with RoboCrop, and the results looked different in the AvsPmod preview. I used AvsPmod to save screenshots and they seem identical, so it might just be a preview thing. But I will do some more verification before using it by default.

I did notice something else though. While cropping manually, I noticed that the masking I did wasn't perfect. So I cropped two more pixels and ended up with uneven cropping (18 on the left and 20 on the right, instead of 20/20 like I mostly have). This broke the chroma sharpening:

MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
    Due to "Planet height must be multiple of of 2). But didn't I technically affected the width and It's still even (18 vs 20?)

    But if you had applied Y' = Y * (1.4*1.3) in a single step you get 1.82 which rounds up to 2. So doing the two adjustments in one step is more accurate than doing it in two smaller steps.
When does it become noticeable? As you said, you shared a very extreme example. Changing 40 to 16, and then in another pass moving 16 to 25 - would that be very noticeable?

I tried AutoLevels, and I was indeed unable to identify a difference between the results of the plugin and the manual Levels we set (45,230).
Originally Posted by jagabo
I was simulating what would happen if you adjust levels in your QTGMC pre-filtering step, then load those pre-filtered clips into an editor and apply a final brightness and contrast adjustment. I just skipped the save/reload step, and used outrageous adjustments to make the problem visually obvious.

Say for example you apply a Y' = Y * 1.4 adjustment in one step, then later apply a Y' = Y * 1.3 adjustment in a second step. After the first step a pixel with a Y value of 1 becomes 1.4. But the video output is limited to integers, so that 1.4 is rounded down to 1. Then in the second adjustment that 1 becomes 1.3, which rounds down to 1 again. But if you had applied Y' = Y * (1.4*1.3) in a single step you get 1.82, which rounds up to 2. So doing the two adjustments in one step is more accurate than doing them in two smaller steps.
Interesting thoughts. So if I understand it correctly, this also means that low luma values are more affected than high luma values. For Y=90, for example, both cases (i.e. one step and two steps) would result in Y'=164 (rounded integer).
Originally Posted by Okiba
I tried AutoLevels, and I was indeed unable to identify a difference between the results of the plugin and the manual Levels we set (45,230).
One of the pitfalls with 'auto' adjustments is that they may produce flicker, or tweak fade-ins/fade-outs unduly. So keep an eye on it.
Originally Posted by Sharc
Originally Posted by jagabo
I was simulating what would happen if you adjust levels in your QTGMC pre-filtering step, then load those pre-filtered clips into an editor and apply a final brightness and contrast adjustment. I just skipped the save/reload step, and used outrageous adjustments to make the problem visually obvious.

Say for example you apply a Y' = Y * 1.4 adjustment in one step, then later apply a Y' = Y * 1.3 adjustment in a second step. After the first step a pixel with a Y value of 1 becomes 1.4. But the video output is limited to integers, so that 1.4 is rounded down to 1. Then in the second adjustment that 1 becomes 1.3, which rounds down to 1 again. But if you had applied Y' = Y * (1.4*1.3) in a single step you get 1.82, which rounds up to 2. So doing the two adjustments in one step is more accurate than doing them in two smaller steps.
Interesting thoughts. So if I understand it correctly, this also means that low luma values are more affected than high luma values. For Y=90, for example, both cases (i.e. one step and two steps) would result in Y'=164 (rounded integer).
Higher Y values are affected by the same amount. 90 may become 164 both ways, just as 2 becomes 4 both ways. But 91 becomes 165 one way and 166 the other.
I think I am good. As I mentioned, I have two types of content. The first is personal family footage, to which I apply the generic script AND keep the lossless format, so if it ends up just OK and needs more tweaking, I can always redo it from the Huffyuv file (so it will be a fresh Levels modification). The other content is a couple of old cartoons/talk shows my father would like to keep. I'm probably not going to keep lossless versions of those (because they are not that important and the space needed for this whole project is already quite big). But because it's not the same setup, I'm going to adjust Levels manually, so they will be precise (as I will be checking the histogram for each).

Phew. That was a very educational thread for me. I learned a lot. Thank you Sharc, Lordsmurf, johnmeyer, poisondeathray, and especially jagabo, who answered professionally and patiently every stupid question I had.