VideoHelp Forum
Page 8 of 11
Results 211 to 240 of 328
  1. The tiny Shotcut player puts your 235 bar at 255.

    "Pot Player" is working pretty well once configured.
  2. Originally Posted by chris319 View Post
    The tiny Shotcut player puts your 235 bar at 255.
    I did it in the NLE; is the player something else? Did you set the properties to full?


    And false alarm for ffplay, it works ok too for swscale/zscale

    Code:
    ffplay -i "Y_0,16,235,255.mp4" -vf scale=in_color_matrix=bt709:in_range=full,format=rgb24
    Code:
    ffplay -i "Y_0,16,235,255.mp4" -vf zscale=matrixin=709:rangein=full,format=gbrp
  3. Everything look good now?

    I'm using full range all around. Note that the clip levels are not so wacky any more. Keep in mind that the target range is 5 - 246, not 16 - 235.

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf format=rgb24,scale=in_color_matrix=bt709:in_range=full,"lutrgb=r='clip(val,19,233)':g='clip(val,19,233)':b='clip(val,19,233)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -pix_fmt yuv422p  -ar 48000 -c:a pcm_s16le  -ac 2  -f  mxf  clipped.mxf
    Where do you see "full range" in Shotcut properties?
  4. Originally Posted by chris319 View Post
    Everything look good now?

    I'm using full range all around. Note that the clip levels are not so wacky any more. Keep in mind that the target range is 5 - 246, not 16 - 235.

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf format=rgb24,scale=in_color_matrix=bt709:in_range=full,"lutrgb=r='clip(val,19,233)':g='clip(val,19,233)':b='clip(val,19,233)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -pix_fmt yuv422p  -ar 48000 -c:a pcm_s16le  -ac 2  -f  mxf  clipped.mxf

    The target range is supposed to be Y 16-235. That's your reference black and reference white. 5-246 is only for transient under/overshoots.
    (And R,G,B 16-235 for r103, using studio range RGB. Reference black and white are still RGB 16,16,16 and 235,235,235 there; 5-246 is only for transient under/overshoots there too.)

    And it's not OK, because you're using full in, but limited out. So the reference black and white are off.

    To be consistent, I would get rid of that last -pix_fmt and replace it with format. Stick with the -vf scale,format syntax and you will reduce the chance of having problems later.

    If you wanted those clip values, it would look like this:

    Code:
    -vf scale=in_color_matrix=bt709:in_range=full,format=rgb24,"lutrgb=r='clip(val,19,233)':g='clip(val,19,233)':b='clip(val,19,233)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=full,format=yuv422p
    If you look at the code example in post #93, it's the same syntax (the clip values are different, and format=yuv420p):
    https://forum.videohelp.com/threads/395939-ffmpeg-Color-Range/page4#post2574065


    Where do you see "full range" in Shotcut properties?
    In the Properties tab: color range. Your options are Broadcast Limited (MPEG) and Full (JPEG). Choose the latter. If you click it back and forth you should see the display change.

    I'm not entirely sure what Shotcut uses for the renderer. It might be using a shared resource such as a GPU overlay, and then change to something else. So close down other video applications before doing this. It's the same with some media players: some renderers cannot be in use by multiple instances, and if a video application swaps to a different one, that can change what you see.
  5. And it's not OK, because you're using full in, but limited out. So the reference black and white are off.
    I found that if the output were full range, I had to use those wacky clip values that you saw before. In limited range the clip values are 19 and 233. So which should I do?
  6. Originally Posted by chris319 View Post
    And it's not OK, because you're using full in, but limited out. So the reference black and white are off.
    I found that if the output were full range, I had to use those wacky clip values that you saw before. In limited range the clip values are 19 and 233. So which should I do?
    Not sure, because I don't know how you arrived at those values, and more importantly I'm not sure what type of source you are starting with.

    If you test a standard colorbars video for the input, you get messed-up colors and values again with full in, limited out.

    For a normal range, standard video file, Y 16-235, using the full range in and full range out equations, clipping to RGB 16-235 is pretty conservative. If you stop there, by definition that's conservatively 100% legal for r103.

    But you will almost always get some illegal pixels back when converting to YUV, and especially when subsampling. It might be 0.1%, maybe more like 1%; it really depends on the content, the edges, and their relationships to each other. Some types of sources are very prone to it. That's where the other filters and manipulations come in: after the conversion and subsampling, you might want to apply a band filter or low-pass filter, basically something to reduce the % if it's high. The downside is they all tend to blur the picture.

    Many consumer camcorders actually record Y 16-255 black to white. The usual thing to do there is to clamp the highlights into range first. And if it's an actual full range recording (black and white levels are actually Y=0 and Y=255), you need to clamp the levels there too. Not clip (or at least clamp prior to clipping).
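    As a rough illustration of the clamp-versus-clip distinction above (a minimal sketch, not any broadcast tool's actual algorithm; function names are mine), a linear remap keeps highlight gradation where a hard clip flattens it:

```python
def clip_luma(y):
    """Hard clip: everything above 235 is flattened to 235 (detail lost)."""
    return min(max(y, 16), 235)

def compress_luma(y):
    """Linearly remap Y 16-255 onto 16-235, preserving highlight detail."""
    return round(16 + (y - 16) * (235 - 16) / (255 - 16))

# Two distinct highlight values from a camera that records Y 16-255:
print(clip_luma(250), clip_luma(255))        # both collapse to 235
print(compress_luma(250), compress_luma(255))  # stay distinct: 230 and 235
```

    With a Y 16-255 source, the remap keeps 250 and 255 distinguishable (230 vs 235), while the clip collapses both to 235.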


    The benefit of _Al_'s vapoursynth visualization is that it will calculate the %, and, also importantly, you can see where the affected pixels are. Maybe for one source 10, 240 is correct. Maybe for another, 11, 232. And you can take appropriate further action if required. If they are important areas, maybe something like highlights on actors' faces, you probably don't want to blindly clip them. People tend to notice that stuff. But if they are some edges of some background building way in the distance with shallow depth of field, probably nobody is going to notice.
  7. I don't know how you arrived at those values
    Iteratively. Run the script, look at the output in my program which checks the RGB values, adjust levels, repeat until the levels look good.

    Test patterns are easy. When you add real-world footage with ringing and overshoots, not so easy.

    As an experiment, I set the lutrgb clip levels to 16 and 235. My program reported max/min levels of 0 and 255.
    Last edited by chris319; 29th Mar 2020 at 16:37.
  8. Originally Posted by chris319 View Post

    As an experiment, I set the lutrgb clip levels to 16 and 235. My program reported max/min levels of 0 and 255.
    That's an unfiltered check. And that's normal and expected, as explained earlier. Subsampling is the main culprit that produces it. It's almost impossible not to produce some 0 and 255 values.

    Your RGB clipping stage does not catch the new pixel values, nor the new combinations generated by subsampling. Some YUV values in the middle range can combine to produce RGB 255,255,255 or negative RGB values (that get clipped to zero). Those are the out-of-gamut YUV values that don't map into the 8-bit RGB cube.
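    To illustrate how subsampling alone manufactures these combinations, here is a toy sketch (the pixel values are mine, chosen for illustration; full-range BT.709 equations assumed): two individually legal YUV pixels end up sharing averaged chroma, and one of them reconstructs with B above 255.

```python
def yuv_to_rgb_full(y, u, v):
    # Full-range BT.709 YUV -> RGB, 8-bit scale, unclipped floats
    r = y + 1.5748 * (v - 128)
    g = y - 0.18732 * (u - 128) - 0.46812 * (v - 128)
    b = y + 1.8556 * (u - 128)
    return r, g, b

px_a = (235, 128, 128)  # reference white: RGB (235, 235, 235), legal
px_b = (100, 160, 150)  # muted purple: RGB approx (134.6, 83.7, 159.4), legal

# 4:2:2-style subsampling: the two neighbours now share averaged chroma
u_avg = (px_a[1] + px_b[1]) / 2  # 144.0
v_avg = (px_a[2] + px_b[2]) / 2  # 139.0

# Pixel A reconstructed with the shared chroma: B lands near 264.7, above 255
r_a, g_a, b_a = yuv_to_rgb_full(px_a[0], u_avg, v_avg)
```

    Both inputs were legal on their own; the out-of-gamut value only appears after the chroma planes are resampled.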

    And you can't clip to 128 grey.

    Min/max by itself is not that relevant.

    You should be asking:

    1) What % of pixels are affected? Is it more than the 1% allowed? And what areas, are they vital or important?

    2) How does the checker upsample (what algorithm), and what prefilter method do they use to check?
  9. There are MILLIONS of potential out-of-gamut value combinations that a per-channel min/max clipping strategy can miss.

    For example, YUV (185,185,185) maps to RGB (1.12799, 0.578978, 1.14027) in float values, or (255,148,255) in int8, when converting to studio range RGB (applying the full range equations). That's 255 in R and B.

    Are you going to clip to 185 max? Your brightest pixel value? You can demonstrate that even lower values produce invalid combinations: YUV (180,160,180) maps to RGB (255,150,239) in the 8-bit studio RGB conversion. Still invalid for R. Min/max clipping even after subsampling and YUV conversion (maybe using lutyuv) won't pick those illegal combinations up if you do individual per-channel min/max checks. Those values are in the middle of the range.



    100% of the illegal, out-of-gamut pixels should be culled by the first RGB conversion and the clip to RGB [16,235]. Remember 5-246 are supposedly allowable for under/overshoots, so that's a conservative, narrow clipping range in studio range RGB: 100% of the under/overshoots are clipped. If you clip to anything narrower, like RGB [17,234] (assuming your reference black and white levels are correct), you start to eat away at the picture and reduce contrast, shadow and highlight detail.

    But where do these new values come from? Converting to YUV and subsampling after your RGB clipping. Subsampling resizes the U,V planes, so new values are generated. The algorithms used generally sample a radius of pixels, so depending on what pixel values are next to each other you can get more invalid combinations. Changing the sampling kernel can sometimes generate fewer invalid results.
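    Those numbers can be sanity-checked in a few lines. This is a sketch using the commonly quoted BT.709 full-range coefficients (my R float for the grey point comes out a little lower than the 1.12799 quoted above, but R and B still exceed 1.0 and clip to 255 either way):

```python
def yuv8_to_rgb_float(y, u, v):
    # Full-range BT.709, normalised to 0.0-1.0, no clipping
    yn, cb, cr = y / 255, (u - 128) / 255, (v - 128) / 255
    r = yn + 1.5748 * cr
    g = yn - 0.18732 * cb - 0.46812 * cr
    b = yn + 1.8556 * cb
    return r, g, b

r, g, b = yuv8_to_rgb_float(185, 185, 185)  # r and b both exceed 1.0
grey = tuple(min(255, max(0, round(c * 255))) for c in (r, g, b))
# grey -> (255, 148, 255): bright grey YUV clips in two channels

r2, g2, b2 = yuv8_to_rgb_float(180, 160, 180)
mid = tuple(min(255, max(0, round(c * 255))) for c in (r2, g2, b2))
# mid -> (255, 150, 239): still invalid for R, from mid-range YUV
```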
  10. I'm starting to think that we've gone as far as we can with this "poor man's legalizer" project using ffmpeg. I hope it is useful to Marco and others. What do you think, pdr?
  11. Well, I honestly don't know if these are overlapping color correction (or equalization?) problems, but here's a discussion about HDR to SDR in FFmpeg:

    Emulating MadVR's HDR to SDR tonemapping using FFMPEG

    They use:
    Code:
    -vf zscale=transfer=linear,tonemap=tonemap=clip:param=1.0:desat=2:peak=0,zscale=transfer=bt709,format=yuv420p
    So, in your opinion, what are the "definitive" parameters to use, after all?
  12. Tone-mapping from HDR to SDR is something else; there are no switches that you set right so that all of us get the same result as the original rendered on screen. It is a sort of interpretation: preference, a choice of desired outcome.
  13. When discussing "HDR" you need to distinguish between the two types of HDR: PQ and HLG.

    HLG is supposed to be compatible with SDR displays and is what will be used for broadcast. PQ has metadata and makes my head spin. HLG has a peak white (100% reflectance) of 75% (IRE) as opposed to SDR's 100%.

    Taking a test chart patch with 90% reflectance: in SDR that patch would be 100% (IRE); in HLG it is 73%.
  14. Originally Posted by poisondeathray View Post
    There are MILLIONS of potential out-of-gamut value combinations that a per-channel min/max clipping strategy can miss.

    For example, YUV (185,185,185) maps to RGB (1.12799, 0.578978, 1.14027) in float values, or (255,148,255) in int8, when converting to studio range RGB (applying the full range equations). That's 255 in R and B.

    Are you going to clip to 185 max? Your brightest pixel value? You can demonstrate that even lower values produce invalid combinations: YUV (180,160,180) maps to RGB (255,150,239) in the 8-bit studio RGB conversion. Still invalid for R. Min/max clipping even after subsampling and YUV conversion (maybe using lutyuv) won't pick those illegal combinations up if you do individual per-channel min/max checks. Those values are in the middle of the range.

    100% of the illegal, out-of-gamut pixels should be culled by the first RGB conversion and the clip to RGB [16,235]. Remember 5-246 are supposedly allowable for under/overshoots, so that's a conservative, narrow clipping range in studio range RGB: 100% of the under/overshoots are clipped. If you clip to anything narrower, like RGB [17,234] (assuming your reference black and white levels are correct), you start to eat away at the picture and reduce contrast, shadow and highlight detail.

    But where do these new values come from? Converting to YUV and subsampling after your RGB clipping. Subsampling resizes the U,V planes, so new values are generated. The algorithms used generally sample a radius of pixels, so depending on what pixel values are next to each other you can get more invalid combinations. Changing the sampling kernel can sometimes generate fewer invalid results.
    This is all true, but the r103 spec calls for R,G,B in the range of 5 - 246 and that's it. As far as I'm concerned, if R,G,B are in that range then r103 has been satisfied.

    As I have said in the past, some delivery specs call for Y to be in the range 16 - 235 and go no further. That's easier to achieve and the pictures look better. Again, YUV is what's actually transmitted, not the R,G,B components.
  15. Originally Posted by chris319 View Post

    This is all true, but the r103 spec calls for R,G,B in the range of 5 - 246 and that's it. As far as I'm concerned, if R,G,B are in that range then r103 has been satisfied.

    Sure, and that would be easy to do, if you were submitting RGB.

    What you are doing is partially invalidated because you are performing additional processing after the RGB clip stage, such as converting to YUV and subsampling. When you clipped conservatively to RGB 16-235 on your sample, your program reported a min/max of 0 and 255 for the final RGB check, right? How is that within 5-246?

    r103 allows 1% leeway for out-of-gamut errors (0 and 255 are the out-of-gamut errors). That's what you need to satisfy. You almost always produce some out-of-gamut errors (YUV values that map to R, G, or B of 0 or 255). It's almost impossible not to on normal content (mainly because of subsampling).
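    To make the 1% check concrete, here is a minimal sketch (the function name and the toy frame are mine) that reports the percentage of pixels with any channel outside the allowed range, run on the RGB frame produced by the final check conversion:

```python
def pct_out_of_gamut(frame, lo=5, hi=246):
    """Percent of pixels where any channel falls outside [lo, hi].

    frame: iterable of (r, g, b) tuples in 8-bit values.
    """
    bad = sum(1 for px in frame if any(c < lo or c > hi for c in px))
    return 100.0 * bad / len(frame)

# Toy "frame": three legal pixels plus one with clipped 255 channels.
frame = [(16, 16, 16), (235, 235, 235), (128, 128, 128), (255, 148, 255)]
share = pct_out_of_gamut(frame)  # 25.0, far over the 1% r103 allowance
```

    A real checker would run this per frame (and ideally after upsampling the chroma back to 4:4:4 first), which is essentially what the graphs in the following posts show.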
  16. Originally Posted by chris319 View Post
    This is all true, but the r103 spec calls for R,G,B in the range of 5 - 246 and that's it. As far as I'm concerned, if R,G,B are in that range then r103 has been satisfied.
    There are cases where it is not going to work. Take that lighthouse video again. Running a conversion from limited YUV to limited RGB so the values map the same (so no clipping happens), I get this graph, where yellow shows the percentage of illegal values per frame:

    Note that around frame 1500, blue values go all the way to 10%. So 10% of blue values are outside of 5-246.
    [Attachment: illegal values 5 - 546 YUV to RGB range limited.png]
  17. So I limit those values and all of the illegal values are cut off; this graph would then show 0%, all legal values. The Vapoursynth line would be:
    Code:
    clipped_rgb = core.std.Limiter(rgb_clip, 5, 246, planes=[0, 1, 2])

    Then I convert this to YUV as you would, then back to RGB to mock a studio monitor, and you'd see the second image: some illegal values are gone, but those illegal blues only went down from 10% to about 4%. Again, those YUV values were mostly legal, not all overblown parts or too dark. The lighthouse has some overblown parts, but not many compared to the illegal values that came from the grass.
    Using frame 1500: before limiting, and then that mock monitor RGB, supposedly fixed:
    [Attachment: clip_02_[1920, 1080, 0, 0]_frame_0001500.png]
    [Attachment: clip_07_[1920, 1080, 0, 0]_frame_0001500.png]
  18. So perhaps many would just blur the image to get rid of this. But that blur really shows up and makes the image not sharp any more if that's all you do. A combination of things might help: a touch of blur plus limiting blue only, I don't know.

    The point is that what works for this video, and just for some areas like around that frame 1500, might not work for another video. So avoid just blindly running the video through a low-pass filter; take some time evaluating what is needed and fix that, maybe even only in certain parts. As you can see, it is mostly legal or almost, so you'd give just that one part a treatment. But you cannot do that with a workflow like yours. It always comes back to that.

    You are looking for a general formula, but I'm afraid that does not exist. You still underestimate what it is to go directly to that frame 1500 in your app, adjust a clipping value, blur a bit, and see the result right away; give a video 15 minutes like that and you might understand what is necessary to do. With your pipe approach and strictly synchronous feedback it is not possible.

    Unless you decide to blur the heck out of it, like most probably do, or clip the heck out of it. That might partially work for overblown parts, but it should be treated with a light touch in an NLE, not with this uncompromising clipping without even seeing the result.
    Last edited by _Al_; 30th Mar 2020 at 23:32.
  19. Or another approach:
    make the graph, which gives you the troublesome areas. Then, just on that region around frame 1500, between those crossfades, I applied color balance in Vegas, just for shadows, and brought it a notch down (a single color adjustment for the whole video is nonsense, as I suggested above). Then, loading the same RGB from Vegas (a frameserver serves RGB to Vapoursynth), I suddenly got only about 3% illegal values.
    No other corrections, filters, broadcast levels etc.
    So just by knowing what is illegal (having that graph available), it was sorted right away, and then I could encode the video right away.
    Is this a solution for you, or do you think someone would do it automatically? Because I'm not sure about that auto solution. An auto solution would always look drastic in general.
    [Attachment: vegas ,lowering shadows.PNG]
  20. And for that frame 1500, coming straight from Vegas, instead of 10% illegal values you have 3.3% before any limiter was applied:
    [Attachment: clip_02_[1920, 1080, 0, 0]_frame_0001500.png]
  21. What you are doing is partially invalidated because you are performing additional processing after the RGB clip stage, such as converting to YUV, subsampling.
    Gotcha.

    I tried clipping in the rgb24 domain without subsampling. I didn't have to use outsized values to meet spec. I want to try yuv444p to disable the subsampling to prove this point even further.

    The dilemma is that clients call for 4:2:0 or 4:2:2, which are subsampled. So do we measure rgb24, or yuv444p, 4:2:0 or 4:2:2? Maybe we measure before encoding to 4:2:0 or 4:2:2?
    Last edited by chris319; 31st Mar 2020 at 15:12.
  22. Not sure "how much" better off you'd be, but it would help for sure. It is source dependent.

    You are thinking again of some formula. But take our clip: you'd need to squeeze the black levels to cut off those illegal values. Broadcast limiters might not reach those levels, so you'd need to go even further. In another scene you'd need something else, or not as much.
    Squeezing those black levels towards the middle would have a similar effect to taking the colors a notch off in all channels. But again, it shows up in the overall result. Squeezing levels as a formula might not be necessary; in our clip, for example, you might just bring some whites down, but nothing like changing the overall look while trying to fix those "RGB lower values" based on one scene.
    Last edited by _Al_; 31st Mar 2020 at 03:03.
  23. Does Vapoursynth use ffmpeg as its underpinning?
  24. Yes.
  25. Originally Posted by chris319 View Post
    Does Vapoursynth use ffmpeg as its underpinning?
    What does that mean? Is it based on it? No.
    But not strictly: it uses different source filters to load videos. I already made a post about Vapoursynth for you before, so read it again. The FFMS2 source plugin is based on ffmpeg, but there are other source plugins. Maybe some other filters here or there have some lib or code sneaked into the dll; not sure about the latest BestAudioSource. It was also explained how Vapoursynth works; these threads go round and round, it looks like. It is not ffmpeg. You would mostly use one suitable source plugin, then the built-in resizer (zimg), then a limiter, some LUTs or Expressions.
  26. OK guys, according to Gyan Doshi (an FFmpeg maintainer), here's the correct way to "legalize" my Canon HF100-generated files:

    Originally Posted by Gyan
    Ok, so your video is YUV, not RGB. And clip will squish the values beyond the clip ranges. You should remap them.

    You can use the geq filter to remap the pixel value ranges.
    Code:
    ffmpeg
     -i input
     -vf "geq=lum='(p(X,Y)-16)/(255-16)*(235-16)+16':cb='(p(X,Y)-16)/(255-16)*(240-16)+16'"
     -c:v libx264 -c:a copy out.mp4
    The geq filter rescales the input luma from 16-255 to 16-235, and the input chroma planes from 16-255 to 16-240, which is the legal range for broadcast 8-bit signals.
    Original discussion @ video.stackexchange.com
  27. Originally Posted by forart.it View Post
    OK guys, according to Gyan Doshi (an FFmpeg maintainer), here's the correct way to "legalize" my Canon HF100-generated files:

    Originally Posted by Gyan
    Ok, so your video is YUV, not RGB. And clip will squish the values beyond the clip ranges. You should remap them.

    You can use the geq filter to remap the pixel value ranges.
    Code:
    ffmpeg
     -i input
     -vf "geq=lum='(p(X,Y)-16)/(255-16)*(235-16)+16':cb='(p(X,Y)-16)/(255-16)*(240-16)+16'"
     -c:v libx264 -c:a copy out.mp4
    The geq filter rescales the input luma from 16-255 to 16-235, and the input chroma planes from 16-255 to 16-240, which is the legal range for broadcast 8-bit signals.
    Original discussion @ video.stackexchange.com

    There are some issues with this:

    If you input a greyscale video, Y = 0-255, to test:

    It turns green, and the output Y range is 1-235.
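    Gyan's remap expression can be evaluated directly to see where those numbers come from; a quick pure-Python check (the formula is copied from the post above, the helper name is mine). An input Y below 16 maps below 16, so full-range black Y=0 lands at 1; and the same expression applied to chroma drags the neutral 128 down to about 121, which plausibly explains the green cast:

```python
def remap(p, out_hi):
    # The geq expression from the post: (p - 16)/(255 - 16)*(out_hi - 16) + 16
    return round((p - 16) / (255 - 16) * (out_hi - 16) + 16)

lo = remap(0, 235)         # full-range black comes out at 1, not 16
hi = remap(255, 235)       # peak white comes out at 235, as intended
neutral = remap(128, 240)  # neutral chroma 128 shifts to 121
```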
  28. Originally Posted by poisondeathray View Post
    There are some issues with this:

    If you input a greyscale video, Y = 0-255, to test:

    It turns green, and the output Y range is 1-235.
    So it *should* be:
    Code:
     -vf "geq=lum='(p(X,Y)-16)/(((255-16)*(235-16))+16)':cb='(p(X,Y)-16)/(((255-16)*(240-16))+16)'"
    ...or not ?

    Another question: is there a way to probe the source file in order to get the exact input range?
  29. Originally Posted by forart.it View Post

    So it *should* be:
    Code:
     -vf "geq=lum='(p(X,Y)-16)/(((255-16)*(235-16))+16)':cb='(p(X,Y)-16)/(((255-16)*(240-16))+16)'"
    ...or not ?
    No, it becomes invalid YUV 0,0,0




    Another question: is there a way to probe the source file in order to get the exact input range?
    Input range of each frame in terms of min/max Y, U, V (or RGB if the input is RGB)?

    Not sure how to do that with ffmpeg. Maybe with ffprobe to print a list? Not sure of the command to do this. It's possible in other programs.