And false alarm for ffplay, it works ok too for swscale/zscale
ffplay -i "Y_0,16,235,255.mp4" -vf scale=in_color_matrix=bt709:in_range=full,format=rgb24
ffplay -i "Y_0,16,235,255.mp4" -vf zscale=matrixin=709:rangein=full,format=gbrp
Does everything look good now?
I'm using full range all around. Note that the clip levels are not so wacky any more. Keep in mind that the target range is 5 - 246, not 16 - 235.
ffmpeg -y -i "C0015.mp4" -c:v mpeg2video -r 59.94 -vb 50M -minrate 50M -maxrate 50M -q:v 0 -dc 10 -intra_vlc 1 -lmin "1*QP2LAMBDA" -qmin 1 -qmax 12 -vtag xd5b -non_linear_quant 1 -g 15 -bf 2 -profile:v 0 -level:v 2 -vf format=rgb24,scale=in_color_matrix=bt709:in_range=full,"lutrgb=r='clip(val,19,233)':g='clip(val,19,233)':b='clip(val,19,233)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited -color_primaries bt709 -color_trc bt709 -colorspace bt709 -pix_fmt yuv422p -ar 48000 -c:a pcm_s16le -ac 2 -f mxf clipped.mxf
The target range is supposed to be Y 16-235. That's your reference black and reference white. 5-246 is only for transient under/overshoots.
(And R,G,B 16-235 for r103, using studio-range RGB. Reference black and white are still RGB 16,16,16 and 235,235,235 there; 5-246 is likewise only for transient under/overshoots.)
And not ok, because you're using full in, but limited out. So the reference black and white are off
To be consistent, I would get rid of that last -pix_fmt and replace it with format. Stick with the -vf scale,format syntax and you will reduce the chance of having problems later.
If you wanted those clip values it would look like this
I'm not entirely sure what Shotcut uses for the renderer. It might be using a shared resource such as a GPU overlay and then changing to something else, so close down other video applications before doing this. It's the same with some media players: some renderers cannot be in use by multiple instances, and if a video application swaps to a different one, that can change what you see.
And not ok, because you're using full in, but limited out. So the reference black and white are off
If you test a standard colorbars video for the input, you get messed up colors and values again with full in,limited out
For a normal-range, standard video file (Y 16-235), using the full-range-in and full-range-out equations, clipping to RGB 16-235 is pretty conservative. If you stop there, by definition that's conservatively 100% legal for r103.
But you will almost always get some illegal pixels back when converting to YUV, and especially when subsampling. It might be 0.1%, maybe more like 1%; it really depends on the content, the edges, and their relationships to each other. Some types of sources are very prone to it. That's where the other filters and manipulations come in: after the conversion and subsampling, you might want to apply a band filter or pass filter, basically something to reduce the % if it's high. The downside is they all tend to blur the picture.
Many consumer camcorders actually record Y 16-255 black to white. The usual thing to do there is to clamp the highlights into range first. And if it's an actual full-range recording (black and white levels are actually Y=0 and Y=255), you need to clamp the levels there too, not clip (or at least clamp prior to clipping).
The benefit of _Al_'s VapourSynth visualization is that it will calculate the %, and, importantly, you can see where the affected pixels are. Maybe for one source 10, 240 is correct; maybe for another, 11, 232. And you can take appropriate further action if required. If they are important areas, maybe something like highlights on actors' faces, you probably don't want to blindly clip them; people tend to notice that stuff. But if they are some edges of a background building way off in the distance with shallow depth of field, probably nobody is going to notice.
I don't know how you arrived at those values
Test patterns are easy. When you add real-world footage with ringing and overshoots, not so easy.
As an experiment, I set the lutrgb clip levels to 16 and 235. My program reported max/min levels of 0 and 255.
Last edited by chris319; 29th Mar 2020 at 16:37.
Your RGB clipping stage does not catch the new pixel values, nor the new combinations generated by subsampling. Some YUV values in the middle of the range can combine to produce RGB 255,255,255 or negative RGB values (which get clipped to zero). Those are the out-of-gamut YUV values that don't map into the 8-bit RGB cube.
And you can't clip to 128 grey.
Min/max by itself is not that relevant.
You should be asking
1) What % of pixels are affected? Is it more than the 1% allowed? And what areas are affected: are they vital or important?
2) how does the upsampler check (what algorithm) and what prefilter method do they use to check?
There are MILLIONS of potential out-of-gamut value combinations that a per-channel min/max clipping strategy can miss.
For example, YUV (185,185,185) maps to RGB (1.12799, 0.578978, 1.14027) in float values, or (255, 148, 255) in int8, when converting to studio-range RGB (applying the full-range equations). That's 255 in R and B.
Are you going to clip to 185 max, your brightest pixel value? You can demonstrate that even lower values produce invalid combinations: YUV (180,160,180) maps to RGB (255,150,239) in the 8-bit studio-range RGB conversion. Still invalid for R. Min/max clipping, even after subsampling and the YUV conversion, with min/max YUV clipping (maybe using lutyuv), won't pick those illegal combinations up if you do individual per-channel min/max checks. Those values are in the middle of the range.
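A quick sketch of that arithmetic (plain Python; I've assumed the standard BT.709 limited-range equations here, since the post doesn't say exactly which matrix/range its converter applied, so the floats come out slightly different from the ones quoted, but R and B still land above 1.0 in both cases):

```python
# BT.709 limited-range YUV -> normalized RGB (1.0 = nominal white).
# The coefficients are an assumption: the thread doesn't state which
# matrix/range its converter used, so exact floats will differ a bit.
def yuv709_to_rgb(y, u, v):
    yn = (y - 16) / 219.0           # luma:   16..235 -> 0..1
    un = (u - 128) / 224.0          # chroma: 16..240 -> -0.5..0.5
    vn = (v - 128) / 224.0
    r = yn + 1.5748 * vn
    g = yn - 0.1873 * un - 0.4681 * vn
    b = yn + 1.8556 * un
    return r, g, b

r1, g1, b1 = yuv709_to_rgb(185, 185, 185)   # R and B land above 1.0
r2, g2, b2 = yuv709_to_rgb(180, 160, 180)   # R still lands above 1.0
```

Both triples sit in the middle of the legal YUV range, yet decode to R (and B) above 1.0, i.e. 255 after 8-bit clipping, which is exactly why per-channel YUV min/max checks miss them.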
100% of all illegal, out-of-gamut pixels should be culled by the first RGB conversion and the clip to RGB [16,235]. Remember, 5-246 is supposedly allowable for under/overshoots; that's a conservative narrow clipping range in studio-range RGB, because 100% of all under/overshoots are clipped. If you clip to anything narrower, like RGB [17,234] (assuming your reference black and white levels are correct), you start to eat away at the picture and reduce contrast, shadow and highlight detail.
But where do these new values come from? Converting to YUV and subsampling after your RGB clipping. Subsampling resizes the U,V planes, so new values are generated. The algorithms used generally sample a radius of pixels, so depending on what pixel values are next to each other, you can get more invalid combinations. Changing the sampling kernel can sometimes generate fewer invalid results.
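As a toy illustration of the kernel point, here's a two-pixel example (reference white next to saturated red) where an averaging chroma kernel produces an illegal combination but a point-sampling kernel happens not to. BT.709 limited-range math throughout, which is an assumption about the conversion being used:

```python
# Toy demo: the chroma "kernel" used when subsampling changes how many
# illegal combinations come out. Two pixels: reference white, saturated red.
# BT.709 limited-range constants; an assumption, as in the thread.
def rgb_to_yuv(r, g, b):                          # r,g,b in 0..1
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (16 + 219 * y,
            128 + 224 * (b - y) / 1.8556,
            128 + 224 * (r - y) / 1.5748)

def yuv_to_rgb(y, u, v):
    yn, un, vn = (y - 16) / 219.0, (u - 128) / 224.0, (v - 128) / 224.0
    return (yn + 1.5748 * vn,
            yn - 0.1873 * un - 0.4681 * vn,
            yn + 1.8556 * un)

def n_illegal(pixels, eps=1e-6):                  # pixels outside 0..1
    return sum(any(c < -eps or c > 1 + eps for c in p) for p in pixels)

white = rgb_to_yuv(1.0, 1.0, 1.0)                 # Y'=235, Cb=Cr=128
red   = rgb_to_yuv(1.0, 0.0, 0.0)

# "Point" kernel: both pixels reuse the first pixel's chroma sample.
point = [yuv_to_rgb(white[0], white[1], white[2]),
         yuv_to_rgb(red[0],   white[1], white[2])]

# "Average" kernel: both pixels share the mean chroma of the pair.
u_avg = (white[1] + red[1]) / 2
v_avg = (white[2] + red[2]) / 2
avg = [yuv_to_rgb(white[0], u_avg, v_avg),
       yuv_to_rgb(red[0],   u_avg, v_avg)]
```

With the averaging kernel, the white pixel's R decodes well above 1.0 (the red neighbour's high Cr bleeds into it), while the point kernel stays legal for this particular pair; which kernel does better depends entirely on the content.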
Well, I honestly don't know if these are overlapping color-correction (or equalization?) problems, but here's a discussion about HDR to SDR in FFmpeg:
Emulating MadVR's HDR to SDR tonemapping using FFMPEG
Tone-mapping from HDR to SDR is something else; there are no switches that you can set right so that all of us get the same result as the original rendering on screen. It is a sort of interpretation, a preference, a choice of desired outcome.
When discussing "HDR" you need to distinguish between the two types of HDR: PQ and HLG.
HLG is supposed to be compatible with SDR displays and is what will be used for broadcast. PQ has metadata and makes my head spin. HLG has a peak white (100% reflectance) at 75% (IRE), as opposed to SDR's 100%.
Taking a test chart patch with 90% reflectance: in SDR that patch would be 100% (IRE); in HLG it is 73%.
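For what it's worth, that 73% figure can be reproduced from the HLG OETF in ITU-R BT.2100, if you anchor 100% reflectance at a 75% signal level as described above. A quick sketch (anchoring via the inverse OETF is my assumption about how the figure was derived):

```python
import math

# HLG OETF constants per ITU-R BT.2100
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e):
    """Scene-linear light e (0..1) -> HLG signal level (0..1)."""
    return math.sqrt(3 * e) if e <= 1 / 12 else A * math.log(12 * e - B) + C

# Anchor 100%-reflectance white at a 75% signal level (as stated above):
# invert the OETF at 0.75, then evaluate a 90%-reflectance patch.
e_white = (math.exp((0.75 - C) / A) + B) / 12
signal_90 = hlg_oetf(0.9 * e_white)              # comes out near 0.73
```

So a 90%-reflectance patch lands at roughly a 73% signal level when diffuse white sits at 75%, matching the number quoted.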
As I have said in the past, some delivery specs call for Y to be in the range 16 - 235 and go no further. That's easier to achieve and the pictures look better. Again, YUV is what's actually transmitted, not the R,G,B components.
Sure, and that would be easy to do , if you were submitting RGB.
What you are doing is partially invalidated because you are performing additional processing after the RGB clip stage, such as converting to YUV and subsampling. When you clipped conservatively to RGB 16-235 on your sample, your program reported a min/max of 0 and 255 for the final RGB check, right? How is that within 5-246?
r103 allows 1% leeway for out-of-gamut errors (0 and 255 are the out-of-gamut errors). That's what you need to satisfy. You almost always produce some out-of-gamut errors (YUV values that map to R, G, or B of 0 or 255). It's almost impossible not to on normal content, mainly because of subsampling.
Note around frame 1500, blue values go all the way to 10%. So there are 10% of blue values outside of 5-246.
So I limit those values and all of those illegal values are cut off; this graph would then show 0%, all legal values. The VapourSynth line would be:
clipped_rgb = core.std.Limiter(rgb_clip, 5, 246, planes=[0, 1, 2])
Then convert this to YUV as you would, then back to RGB again to mock up a studio monitor, and you'd see the second image: some illegal values are gone, but those illegal blues only went down from 10% to about 4%. Again, those YUV values were mostly legal, not all overblown or too dark. Just that lighthouse has some overblown parts, but not many compared with the illegal values that came from that grass.
Using Frame 1500: Before limiting and then that monitor RGB, supposedly fixed:
So many would perhaps just blur the image to get rid of this. But that blur really shows up and makes the image not sharp anymore if, for example, that's all you do. A combination of things might help: a touch of blur plus limiting blue only, I don't know.
The point is that what works for this video, and just for some areas like around that frame 1500, might not work in another video. So avoid just blindly running the video through a low-pass filter; take some time evaluating what is needed and fix that, maybe even only in certain parts. As you can see, it is mostly legal (or almost), so you'd give just that one part a treatment. But you cannot do that with a workflow like yours. It always comes back to that.
You look for a general formula. But I'm afraid that does not exist. You still underestimate what it means to go directly to that frame 1500 with your app, change a clipping value, blur a bit, and see the result right away. If you just gave a video 15 minutes like that, you might understand what is necessary to do. With your pipe approach and its strictly synchronous feedback, that is not possible.
Unless you decide to blur the heck out of it, like most probably do, or clip the heck out of it. That might somehow partially work for overblown parts, but it should be treated with a light touch in an NLE, not with this uncompromising clipping without even seeing the result.
Last edited by _Al_; 30th Mar 2020 at 23:32.
Or another approach: making that graph, which gives you the troublesome areas, then just on that region around frame 1500, between those crossfades, I applied Vegas color balance, just for shadows, and brought it a notch down (a single color adjustment for the whole video is nonsense, as I suggested above). Then, loading the same RGB from Vegas (the frameserver serves RGB to VapourSynth), I suddenly got only about 3% illegal values.
No other corrections, filters, broadcast levels etc.
So just by knowing what is illegal (having that graph available), it was sorted right away, and then I could just encode the video.
Is this a solution for you, or do you think someone would do it automatically? Because I'm not sure about an auto solution. An auto solution would always look drastic in general.
What you are doing is partially invalidated because you are preforming additional processing after the RGB clip stage, such as converting to YUV, subsampling.
I tried clipping in the rgb24 domain without subsampling. I didn't have to use outsized values to meet spec. I want to try yuv444p to disable the subsampling, to prove this point even further.
The dilemma is that clients call for 4:2:0 or 4:2:2, which are subsampled. So do we measure rgb24, or yuv444p, 4:2:0 or 4:2:2? Maybe we measure before encoding to 4:2:0 or 4:2:2?
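To illustrate why the measurement point matters, here is a two-pixel sketch (white next to saturated red) measured once at 4:4:4 and once after 4:2:0-style chroma averaging; BT.709 limited-range math is assumed, as the thread doesn't fix the exact converter:

```python
# Where you measure matters: the same content can be 100% legal at 4:4:4
# and pick up illegal values once chroma is shared 4:2:0-style.
# BT.709 limited-range constants; an assumption about the conversion.
def rgb_to_yuv(r, g, b):                          # r,g,b in 0..1
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (16 + 219 * y,
            128 + 224 * (b - y) / 1.8556,
            128 + 224 * (r - y) / 1.5748)

def yuv_to_rgb(y, u, v):
    yn, un, vn = (y - 16) / 219.0, (u - 128) / 224.0, (v - 128) / 224.0
    return (yn + 1.5748 * vn,
            yn - 0.1873 * un - 0.4681 * vn,
            yn + 1.8556 * un)

def illegal(p, eps=1e-6):                         # outside 0..1?
    return any(c < -eps or c > 1 + eps for c in p)

pair = [(1.0, 1.0, 1.0), (1.0, 0.0, 0.0)]         # white next to red
yuv = [rgb_to_yuv(*p) for p in pair]

# Measured at 4:4:4: every pixel keeps its own chroma -> all legal.
at_444 = [illegal(yuv_to_rgb(*p)) for p in yuv]

# Measured after subsampling: both pixels share averaged chroma,
# and the white pixel decodes with R well above 1.0 -> illegal.
u_avg = (yuv[0][1] + yuv[1][1]) / 2
v_avg = (yuv[0][2] + yuv[1][2]) / 2
at_420 = [illegal(yuv_to_rgb(y, u_avg, v_avg)) for (y, _, _) in yuv]
```

So a measurement taken at rgb24 or yuv444p can pass cleanly while the delivered 4:2:0/4:2:2 file fails, which argues for measuring after the final subsampled encode.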
Last edited by chris319; 31st Mar 2020 at 15:12.
Not sure "how much" you'd be better off, but it would help for sure. It is source dependent.
You are thinking in terms of some formula again. But take our clip: you'd need to squeeze the black levels to cut off those illegal values. Broadcast limiters might not reach those levels, so you'd need to go even further. In another scene you'd need something else, or not as much.
Squeezing those black levels towards the middle would have a similar effect to taking the colors a notch down across all channels. But again, it shows up in the overall result. Squeezing the levels as a formula might not even be necessary; in our clip, for example, you'd just bring some whites down, nothing like changing the overall look, if trying to fix those "RGB lower values" based on one scene.
Last edited by _Al_; 31st Mar 2020 at 03:03.