VideoHelp Forum




Results 181 to 210 of 328
  1. Originally Posted by chris319 View Post
    IIRC I had to tweak the code because the colors were inaccurate after implementing your suggestions. So I got it to render the colors accurately.

    Now the clip levels are different and must be tweaked yet again with the combined lutrgb commands.

    How does this look? "scale" before and after lutrgb. Note that I had to go to limited range to get accurate colors/levels. I don't know if you test this code for color accuracy but I certainly do test it thoroughly. I'll test this version once I have your imprimatur.

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -pix_fmt yuv422p  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf 
     scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  "lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf



    It doesn't look correct to me. But if you think you got ok results then go with it.


    I posted all this already. It looked ok in one of the earlier posts


    I wouldn't use -pix_fmt at all, because you're not controlling the conversion. The only time you should use -pix_fmt is to define an input format, such as raw video or pipes. I would use the "format" filter whenever you're changing formats. That way you know exactly what you're getting, exactly which pixel format you're using, and more importantly when. You also know exactly how the conversion is done - which matrix, full or limited range, which equation is used - instead of guessing or letting ffmpeg apply its default 601 limited-range conversion. You should always control the conversion explicitly.

    When using -vf scale, the in or out color_matrix points in the YUV direction. So the first -vf scale should be IN, not OUT. And you're missing what format you're converting TO: you're converting to RGB24 before lutrgb. You are also missing a comma between that and lutrgb. Recall that a comma separates individual filters in a linear chain; scale is a separate filter from lutrgb. A second scale comes after lutrgb, then format=yuv422p because you want 8-bit 4:2:2. You chain them all together and separate with commas.

    The clip values look wrong to me. You're converting using limited range (recall full range equations were being used before). If you start with a "normal" range video, Y 16-235, you get RGB 0-255. If you then clip RGB to [30,223], the contrast will look washed out, with wrong black and white levels. Then you use a limited range equation to convert back to YUV, and that will result in about Y [42,208]. Recall r103 wants the black level at Y=16, white level at Y=235. Actually every spec wants Y [16,235]. And that's the reference black and white level - the actual black and white level, not some excursions or stray pixels.
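
    A rough sketch of that arithmetic, for the grey/luma axis only (this is just an illustration, assuming plain 8-bit BT.709-style limited-range scaling of Y and ignoring chroma and rounding details):

    Code:
    # limited-range round trip with the lutrgb clip values from the command above
    def yuv_limited_to_rgb(y):
        # limited-range decode: Y 16-235 maps onto RGB 0-255
        return (y - 16) * 255.0 / 219.0

    def rgb_to_yuv_limited(v):
        # limited-range encode: RGB 0-255 maps back onto Y 16-235
        return 16 + v * 219.0 / 255.0

    def clip(v, lo=30, hi=223):
        return max(lo, min(hi, v))

    for y_in in (16, 235):                            # reference black and white
        y_out = rgb_to_yuv_limited(clip(yuv_limited_to_rgb(y_in)))
        print(y_in, round(y_out))                     # 16 -> 42, 235 -> 208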



    Originally Posted by chris319 View Post
    The benefit of printing every frame is that you can identify specific scenes or areas that need attention. It's a more proper way of doing it than blind clipping or blind adjustments, so you can make proper corrections. Clipping has its place, but it's supposed to be used in conjunction with scopes and color correction.
    Then you can pay a colorist $50 per hour to fix the problems frame by frame. My solution is admittedly quick and dirty but costs $0. At least you'll have an idea of where your levels are.

    You can do it yourself in an NLE or Resolve too. Not necessarily frame by frame. Minor corrections like levels and saturation are easy to do per scene. Intermediate stuff is pretty easy to do in Resolve too with masks. Personally I wouldn't damage everything else by forcing global clip values. And those current values seem very excessive.

    And don't just look at the min/max values. Look at the picture with your eyes. You're usually allowed 1% leeway (again, illegal pixels will usually be produced from subsampling), and that 1% is also prefiltered before it's calculated. But wrong black/white levels will cause rejection on other grounds.

    That % illegal overlay and visualization script _Al_ posted is golden. That's similar to what a colorist typically uses, and what professional color correction and broadcast legalizer software produces. They show the areas that are flagged illegal and then you can do something about it.
  2. How about now? I am AFK so haven't had a chance to test this to see if it runs and if the colors are accurate. Not sure about "format=422".

    I set the clip levels interactively until the video is in the 5 - 246 range. It IS time consuming.

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf 
     scale=w=1280:h=720:format=422:out_color_matrix=bt709:in_range=limited,  "lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
  3. Originally Posted by chris319 View Post
    How about now? I am AFK so haven't had a chance to test this to see if it runs and if the colors are accurate. Not sure about "format=422".

    I set the clip levels interactively until the video is in the 5 - 246 range. It IS time consuming.

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf 
     scale=w=1280:h=720:format=422:out_color_matrix=bt709:in_range=limited,  "lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf


    You're missing the format=rgb24 prior to lutrgb, and the format=yuv422p exiting lutrgb (it's not format=422). Check some earlier posts.

    The order is important. Think of it as A to B to C to D to E... Think of what each step is doing. Scale and format go together, because the sws scale flags specified in -vf scale tell ffmpeg how to convert to that desired format. If it were an interlaced conversion, you'd need to specify that flag too for a proper interlaced YUV<=>RGB conversion.

    scale,format=rgb24,lutrgb,scale,format=yuv422p

    The first scale should use in_color_matrix, not out, because the input is YUV and you're converting to RGB prior to applying lutrgb. It "points" in the YUV direction, so you would use "in".

    Code:
    -vf scale=w=1280:h=720:in_color_matrix=bt709:in_range=limited,format=rgb24,"lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited,format=yuv422p
    But your levels will be messed up if your input video was typical, because of the excessive clip values and limited range equations. If the input video was Y=16-235, the output video will be about Y=42-208.



    Even crudely clamping levels is generally better than clipping levels, and it doesn't take long. Clipping is just throwing away data. Often that data has useful details that you can bring into legal range. Unless your footage was perfectly shot, perfectly lit, perfectly exposed, 100% every time, clipping like that is going to make the highlights one shade of "white" and reduce highlight and shadow details.

    Clipping has its place, but it has to be used judiciously. It has to be done AFTER you get the reference black and white level correct.
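
    To illustrate the difference on the grey axis (reading "clamping levels" here as a simple linear levels rescale; the values below are hypothetical and just for illustration):

    Code:
    # hard clip vs a levels-style rescale into the same target window
    def hard_clip(v, lo=30, hi=223):
        # everything outside the window collapses onto the edges
        return max(lo, min(hi, v))

    def levels_rescale(v, in_lo=0, in_hi=255, out_lo=30, out_hi=223):
        # compress the whole range into the window instead of discarding detail
        return out_lo + (v - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

    for v in (0, 10, 128, 245, 255):
        print(v, hard_clip(v), round(levels_rescale(v)))
    # 0 and 10 both clip to 30 (shadow detail gone); the rescale keeps them distinct (30 vs 38)
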
  4. Is this going to screw up the color accuracy?

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf 
     scale=w=1280:h=720:format=rgb24:in_color_matrix=bt709:in_range=limited,  "lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",format=yuv422p, scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
  5. Originally Posted by chris319 View Post
    Is this going to screw up the color accuracy?
    In what way? Those clip values with a limited range conversion definitely screw everything up.

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf 
     scale=w=1280:h=720:format=rgb24:in_color_matrix=bt709:in_range=limited,  "lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",format=yuv422p, scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
    Format has to come after scale, separated by a comma, because format is a separate filter.

    Right now you have format in the middle of scale, breaking up the scale arguments.
  6. Those clip values with a limited range conversion definitely screw everything up
    The way to test it is to temporarily remove the lutrgb commands and see if the colors are still true.
  7. Originally Posted by chris319 View Post
    Those clip values with a limited range conversion definitely screw everything up
    The way to test it is to temporarily remove the lutrgb commands and see if the colors are still true.
    You can easily test it yourself too... I'm pretty sure we did this earlier (but with full range in/out; either way you'd expect +/- 3).


    So actually doing it on some test videos: if you take out lutrgb, you get the expected +/-3 from the YUV=>RGB=>YUV trip with known colors, such as colorbars or test patterns, or normal camera video - but nothing like the +/- ~27 with those lutrgb values.

    +/-3 "looks" similar to the input with basically any input. +/-27 does not. The contrast is off as mentioned earlier, so that affects every color. That's the first thing you see before even looking at actual values. You know it's wrong right away. The black and white clipping is too drastic.

    In some productions even clipping to Y=16,235 is considered drastic. You're allowed small excursions; that's what the head and footroom are for.
  8. This is as far as I've gotten. It runs and the colors are accurate.

    I need your help in substituting "format" for "-pix_fmt".

    Code:
    ffmpeg  -y  -i "test_pattern_720.mkv"  -pix_fmt rgb24  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf scale=in_color_matrix=bt709:in_range=limited  -color_primaries bt709,"lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
  9. Originally Posted by chris319 View Post
    This is as far as I've gotten. It runs and the colors are accurate.

    I need your help in substituting "format" for "-pix_fmt".

    Code:
    ffmpeg  -y  -i "test_pattern_720.mkv"  -pix_fmt rgb24  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf scale=in_color_matrix=bt709:in_range=limited  -color_primaries bt709,"lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf



    Incompatible pixel format 'rgb24' for codec 'mpeg2video', auto-selecting format 'yuv420p'
    Maybe try -pix_fmt yuv422p instead of rgb24, since you want 4:2:2.

    Not sure what is going on there, but you have -color_primaries twice. The first instance is breaking up the filter chain before lutrgb, and everything after it, including lutrgb, is discarded:

    Code:
    -vf scale=in_color_matrix=bt709:in_range=limited  -color_primaries bt709,"lutrgb...

    If you look at the verbose log (add -v 9 -loglevel 99 -report to print out a text file), it doesn't mention lutrgb being applied





    It was correct a few pages back, when you wanted your "unity": full range equations in/out, 0-255 => 0-255. YUV 16-235 would become RGB 16-235, which is what r103 asks for - it uses studio RGB.

    Recently you've gone back to limited range equations, then clipped drastically; that's why it's not correct. Y 16-235 => RGB 0-255, so even if you clip RGB to [16,235], you cut off important data and change the contrast. Not sure why you're going in circles.
    Last edited by poisondeathray; 28th Mar 2020 at 09:15.
  10. I was getting the results I wanted with the script I posted for Marco a couple of days ago and I thought I was finished. Now you're saying it's not the right way to do things so I'm heeding your advice even though I was satisfied with the results previously.

    There was a reason I went back to limited range and it had to do with the behavior of ffmpeg. Note that I added a patch at #FFFFFF or 255 to my test pattern. I will have to futz around with full range again to refresh my memory as to why. In addition, one of our previous scripts had some color drift in it so I had to redo it to eliminate the color drift.

    Per r103 the allowable range is RGB 5 - 246. This range is wider than 16 - 235. It allows footroom and headroom for transients, overshoots and artifacts. If I were in charge, the range would be in the YUV domain and that would be that. I don't know why they chose to specify RGB in r103, but that's the hand we're dealt. The RGB components are never actually transmitted; YUV is what's transmitted. In fact some station/network specs call for Y 16 - 235.

    https://www.youtube.com/watch?v=Jo0fWmqtGBs
  11. you have -color_primaries twice. The first instance is breaking up the filter chain before lutrgb, and everything after , including lutrgb is discarded
    I eliminated the first instance of -color_primaries and it definitely altered the colors by elevating the black level. 0-191-0 became 31-191-29.

    I eliminated the second instance, leaving in the first instance, and I got error messages and the video wouldn't play. This is why one must test these things.

    With both instances of -color_primaries I get 0-191-0. Perfect!

    I am leaving in -pix_fmt because it works. Some delivery specs call for 4:2:0 OR 4:2:2.

    So this is what I wound up with. Now I have to work on the 1080i version and add interlace.

    Code:
    ffmpeg  -y  -i "test_pattern_720.mkv"  -pix_fmt rgb24  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf scale=in_color_matrix=bt709:in_range=limited  -color_primaries bt709,"lutrgb=r='clip(val,30,223)':g='clip(val,30,223)':b='clip(val,30,223)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
  12. It's "working" because lutrgb is not actually applied. That's why it's "perfect". Look at the log if you need proof. Your syntax is interrupting the filter chain, so everything after is discarded

    It's not doing what you think it is. Just take out lutrgb completely. Don't even convert to RGB. Just encode the source as-is and presto - it "works" too.

    I eliminated the first instance of -color_primaries and it definitely altered the colors by elevating the black level. 0-191-0 became 31-191-29.
    Because lutrgb is being applied now. You would expect the wrong colors with those clip values. Look at the log file if you need proof
  13. You're right. The first instance has been eliminated and the clip levels have been set to more rational values.

    Here is the code I use in my "scope" program. Does it look right to you?

    Code:
    pipeIn$ = "-i " + filename$ + " -f image2pipe  -s 1280x720  -pix_fmt yuv420p  -r 59.94  -vf scale=in_range=full:out_range=full  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -vcodec rawvideo -"
  14. Here is the latest version of the clipping program:

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -pix_fmt rgb24  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf scale=in_color_matrix=bt709:in_range=limited,"lutrgb=r='clip(val,37,212)':g='clip(val,37,212)':b='clip(val,37,212)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
    lutrgb seems to behave differently on actual video than it does on a test pattern. It does not perform a hard clip.
  15. Originally Posted by chris319 View Post
    You're right. The first instance has been eliminated and the clip levels have been set to more rational values.

    Here is the code I use in my "scope" program. Does it look right to you?

    Code:
    pipeIn$ = "-i " + filename$ + " -f image2pipe  -s 1280x720  -pix_fmt yuv420p  -r 59.94  -vf scale=in_range=full:out_range=full  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -vcodec rawvideo -"


    "Right" in what way ? I'm not sure how it's being used ... There is no output

    If -s 1280x720 and -pix_fmt are defining the input format for the pipe in, those arguments should precede the -i.

    If they are meant to be output arguments (something that alters the input), they should come after the -i. But then you have two scales - -s plus -vf scale - so you should combine those.


    Originally Posted by chris319 View Post
    Here is the latest version of the clipping program:

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -pix_fmt rgb24  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf scale=in_color_matrix=bt709:in_range=limited,"lutrgb=r='clip(val,37,212)':g='clip(val,37,212)':b='clip(val,37,212)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf

    Why -pix_fmt rgb24? It's not doing what you think it's doing...

    -pix_fmt rgb24 will be applied at the end. But it will not work, because ffmpeg will auto-insert a filter and convert to yuv420p since you selected mpeg2. If you want YUV 4:2:2, you should explicitly control the conversion. If you want YUV 4:2:0, you should explicitly control that too.

    The clip values seem excessive, for the same reasons mentioned above. You're doing a limited range conversion: Y(16-235) gets mapped to RGB(0-255). Then, clipping in RGB to 37-212, you would expect a loss of contrast and erosion of highlight and shadow detail on a legal range input clip. Errors would be roughly 30 code values or more. Y=16 "black" is now Y=48 "dark grey". Y=235 "white" is now Y=198 "bright grey" (off by 37). E.g. 75% blue YUV(28,212,120) is now about YUV(57,194,122). Significant errors in black and white level, messed up contrast and colors.
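
    For anyone who wants to check those figures, here is a rough Python sketch of the 8-bit BT.709 limited-range round trip with that clip applied. The rounding is simplified, so a real swscale pass may differ by a code value or two from the numbers quoted above:

    Code:
    # BT.709 limited-range decode -> clip RGB to [37,212] -> limited-range re-encode
    KR, KB = 0.2126, 0.0722
    KG = 1.0 - KR - KB

    def decode_limited(y, cb, cr):
        ey  = (y  - 16) / 219.0
        epb = (cb - 128) / 224.0
        epr = (cr - 128) / 224.0
        r = ey + 1.5748 * epr
        b = ey + 1.8556 * epb
        g = (ey - KR * r - KB * b) / KG
        return [min(255, max(0, round(v * 255))) for v in (r, g, b)]

    def encode_limited(r, g, b):
        r, g, b = r / 255.0, g / 255.0, b / 255.0
        ey  = KR * r + KG * g + KB * b
        epb = (b - ey) / 1.8556
        epr = (r - ey) / 1.5748
        return [round(16 + 219 * ey), round(128 + 224 * epb), round(128 + 224 * epr)]

    def clip(v, lo=37, hi=212):
        return min(hi, max(lo, v))

    for yuv in [(16, 128, 128), (235, 128, 128), (28, 212, 120)]:   # black, white, 75% blue
        rgb = [clip(v) for v in decode_limited(*yuv)]
        print(yuv, '->', encode_limited(*rgb))
    # black -> about Y=48, white -> about Y=198, 75% blue -> about (57,196,122)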

    And if you had a full range input, the right thing to do would be to use a full range conversion to RGB, or clamp (not clip) it to legal range first before converting to RGB.

    You're going in circles. You should be able to check everything yourself


    lutrgb seems to behave differently on actual video than it does on a test pattern. It does not perform a hard clip.
    Is your actual video RGB?

    lutrgb hard clips precisely, in RGB. If you need more proof, repeat this procedure again: take known RGB input values from some BMP or RGB input video, apply lutrgb, export a BMP, measure.

    If your actual video is YUV, then possibly you are not converting YUV => RGB correctly. Check each step and verify the results.

    When you convert back to YUV, and especially subsample, you generate new values. Obviously that's not RGB anymore. That's not lutrgb's fault.
  16. why -pix_fmt rgb24 ?
    Because pdr said so and I can't get "format" to work so, as I explained, I'm using -pix_fmt.

    You're missing the format=rgb24 prior to lutrgb
    If your actual video is YUV, then possibly you are not converting YUV=> RGB correctly.
    You're going in circles.
    I'm trying to implement your longwinded, multi-paragraph suggestions. This is why I asked you to rewrite the script. I make a change according to what I think you are suggesting, then it turns out to be wrong. This piecemeal back-and-forth isn't helping. Add rgb24, don't add rgb24, solve it yourself. You've got me confused.

    Agreed that the clip levels seem odd.

    If they meant to be output arguments (something that alters the input) , they should come after the -i . But then you have 2 scales , -s with -vf scale, then you should combine those
    It's a scope. It's intended to measure the video so it shouldn't alter anything.
  17. Full range vs. limited range:

    Full range: the 235-235-235 white patch becomes 255-255-255 WITH NO CLIPPING APPLIED.

    Limited range: 235-235-235 is left at 235-235-235 WITH NO CLIPPING APPLIED.

    ffmpeg is unquestionably altering the levels in full range. That's why it's in limited range.
  18. If your actual video is YUV, then possibly you are not converting YUV=> RGB correctly.
    So what is the correct way?
  19. Originally Posted by chris319 View Post
    why -pix_fmt rgb24 ?
    Because pdr said so and I can't get "format" to work so, as I explained, I'm using -pix_fmt.

    You're missing the format=rgb24 prior to lutrgb
    If your actual video is YUV, then possibly you are not converting YUV=> RGB correctly.
    You're going in circles.
    I'm trying to implement your longwinded, multi-paragraph suggestions. This is why I asked you to rewrite the script. I make a change according to what I think you are suggesting, then it turns out to be wrong. This piecemeal back-and-forth isn't helping. Add rgb24, don't add rgb24, solve it yourself. You've got me confused.

    Agreed that the clip levels seem odd.


    Would you prefer yes/no answers ?

    I try to explain WHY something happens, so people actually learn something and can think for themselves and apply it to different situations.




    1) Syntax-wise, look at post 183 again. There is an example there.

    -vf format is used whenever you are changing pixel formats. You control when and how the conversion is done. The order matters. If you wanted the output format to be 4:2:0, the last format would be format=yuv420p instead.

    If you don't understand something about ffmpeg syntax and formatting, ask now.




    2) Clip values are wrong for limited range equations.

    When you use a limited range conversion to RGB, that RGB conversion already clips Y < 16 and Y > 235:
    Y 16-235 => RGB 0-255

    Limited range conversion back to YUV:
    RGB 0-255 => Y 16-235

    Anything you clip with lutrgb in that RGB stage cuts into that Y=16-235 range when you convert back to YUV using limited range equations. That's why the contrast is so low and the levels and colors are wrong. The black level is already at Y=16, white at Y=235; every value of RGB clipping cuts into that.

    All "invalid" out of gamut values are discarded by that RGB conversion, even the ones in the middle of the range (but some come back when you convert back to YUV and subsample)

    Does that make sense ?
    So why are you doing this ?



    3) The RGB values in r103 refer to studio range RGB. That is obtained by using the full range equations, not limited range like you are using now. That's your "unity". That's what was used before in the last few pages. Not sure why you decided to change to limited range.





    Originally Posted by chris319 View Post
    Full range vs. limited range:

    Full range: the 235-235-235 white patch becomes 255-255-255 WITH NO CLIPPING APPLIED.

    Limited range: 235-235-235 is left at 235-235-235 WITH NO CLIPPING APPLIED.

    ffmpeg is unquestoinably altering the levels in full range. That's why it's in limited range.

    RGB or YUV values?

    How are you testing it ?

    You already thought this a few pages back... deja vu... and your error was demonstrated. Did you forget?

    Full range in and out preserves levels. Do you need more proof?
  20. Latest. Moved format=rgb24 to inside -vf

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf format=rgb24,scale=in_color_matrix=bt709:in_range=limited,"lutrgb=r='clip(val,37,212)':g='clip(val,37,212)':b='clip(val,37,212)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
  21. I try explain WHY something happens, so people actually learn something and can think for themselves and apply to different situations
    I appreciate your interest and effort but it's not helping. You always have a new unwritten rule about ffmpeg.

    It doesn't help that I'm implementing your suggestions piecemeal, one by one. Referring me to a post from weeks ago doesn't help, either. If I changed something it was for a reason and by now I've forgotten why.

    I play my test pattern with MPC-BE and the 235-235-235 patch comes out exactly that color, according to my color-checker/eyedropper program. The root bmp file has it as 235-235-235.

    I run this script and the 235 patch comes out 255-255-255. Full range vs limited range. Same eyedropper, same player. Now tell me why.

    Code:
    ffmpeg  -y  -i "test_pattern_720.mp4"  -c:v mpeg2video  -r 59.94  -vf format=rgb24,scale=in_color_matrix=bt709:in_range=full,scale=w=1280:h=720:out_color_matrix=bt709:out_range=full  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
    the 235-235-235 white patch becomes 255-255-255
    RGB or YUV values?
    235-235-235 WHITE can only be one of the two.
  22. Originally Posted by chris319 View Post
    Latest. Moved format=rgb24 to inside -vf

    Code:
    ffmpeg  -y  -i "C0015.mp4"  -c:v mpeg2video  -r 59.94  -vb 50M  -minrate 50M  -maxrate 50M  -q:v 0  -dc 10  -intra_vlc 1  -lmin "1*QP2LAMBDA"  -qmin 1  -qmax 12  -vtag xd5b  -non_linear_quant 1  -g 15  -bf 2  -profile:v 0  -level:v 2  -vf format=rgb24,scale=in_color_matrix=bt709:in_range=limited,"lutrgb=r='clip(val,37,212)':g='clip(val,37,212)':b='clip(val,37,212)'",scale=w=1280:h=720:out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -ar 48000 -c:a pcm_s16le  -f  mxf  clipped.mxf
    This will be wrong because of the clip values and limited range conversion (but full range conversion will give you wrong values too with those clip values)

    see explanation #2 above in post #199 if you want to understand why



    Originally Posted by chris319 View Post
    I try explain WHY something happens, so people actually learn something and can think for themselves and apply to different situations
    I appreciate your interest and effort but it's not helping. You always have a new unwritten rule about ffmpeg.

    It doesn't help that I'm implementing your suggestions piecemeal, one by one. Referring me to a post from weeks ago doesn't help, either. If I changed something it was for a reason and by now I've forgotten why.

    You can go back and check. The full explanations are there, code examples too.

    Should I retype everything? Or just refer to the same post ?

    I can't help you with your forgetfulness






    I play my test pattern with MPC-BE and the 235-235-235 patch comes out exactly that color, according to my color-checker/eyedropper program. The root bmp file has it as 235-235-235.

    I run this script and the 235 patch comes out 255-255-255. Full range vs limited range. Same eyedropper, same player. Now tell me why.

    235-235-235 WHITE can only be one of the two.
    Right, but you're using a YUV video. The 235-235-235 patch isn't 235-235-235 anymore, because it's a YUV video now. So if it was an RGB BMP to start with, how did you convert to YUV?

    And how is the player converting YUV to RGB ?

    E.g. a normal (limited) range YUV to RGB conversion will convert YUV 235,128,128 to RGB 255,255,255.

    But a full range YUV to RGB conversion will convert YUV 235,128,128 to RGB 235,235,235. That's your unity.
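
    A two-line check of that difference (just a sketch, assuming plain 8-bit scaling; the chroma here is neutral so only the luma term matters):

    Code:
    y = 235                                    # the grey patch, Cb = Cr = 128

    limited_rgb = (y - 16) * 255.0 / 219.0     # "normal"/limited decode -> 255.0
    full_rgb    = float(y)                     # full range decode -> 235.0

    print(round(limited_rgb), round(full_rgb)) # prints: 255 235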






    Demo -

    "Y_0,16,235,255.mp4"
    This is a 1280x720p24.0 4:2:0 h264 clip attached below. It's unflagged.

    There are 4 bars: Y=0, Y=16, Y=235, Y=255. In a normal media player you should only see 2 bars, because of the limited range conversion to RGB.

    If you use a YUV picker, you can verify this, or use a waveform:

    Code:
    ffplay -i "Y_0,16,235,255.mp4" -vf waveform=g=orange:o=0.5
    [Attachment 52509: waveform screenshot]



    Full range conversion to RGB, full range conversion back to YUV 4:2:0, i.e. full range equations in and out:

    Code:
    ffmpeg -i "Y_0,16,235,255.mp4" -vf scale=in_color_matrix=bt709:in_range=full,format=rgb24,scale=out_color_matrix=bt709:out_range=full,format=yuv420p -c:v libx264 -crf 18 -an full_in_out.mp4
    [Attachment 52510: waveform screenshot]



    Code:
    ffplay -i "full_in_out.mp4" -vf waveform=g=orange:o=0.5

    And the Y levels are the same
    [Attached file: Y_0,16,235,255.mp4]
    To make that vapoursynth install work, try deleting all the registry entries connected with it and installing it again.
    Not having real-time feedback is quite a problem, besides getting the ffmpeg commands correct.

    I tried to get a graph for a video showing illegal values for R, G or B and for all channels combined. Not sure how fast it is - maybe it could be faster in C - but vapoursynth does most of the work, and it uses 4 threads that all work independently (analyzing R, G, B and all channels combined). On my obsolete PC it runs at about real time, but I have a junk PC. Previewing with the previous code to see the masks is perhaps better; this is just an idea. It uses the matplotlib library to put the graphs on screen.

    This is what I got for the lighthouse video mentioned before. I trimmed 10 frames from the beginning and 5 frames from the end because they gave some false readings and screwed up the graph axis scales:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from concurrent import futures
    import matplotlib.pyplot as plt       #pip install matplotlib
    
    core.max_cache_size = 1000  # or do not  limit RAM if having enough, default is 4096 which is  4GB
    source_path=r'F:/_lighthouse_lossless.mp4'
    clip = core.lsmas.LibavSMASHSource(source_path)
    clip = clip[10:-5]              #trimmed first 10 frames and last 5
    
    #MIN and MAX set always as 8bit values, even if using higher bits for RGB
    illegal_MIN = 5
    illegal_MAX = 246
    
    #YUV into RGB , kernel can be: 'Point', 'Spline36', 'Lanczos', 'Bilinear', 'Bicubic'
    RGB_RESIZE = 'Point'
    
    #format=vs.RGB24 or vs.RGB30,vs.RGB48,vs.RGBS
    #or other resize arguments
    RGB_RESIZE_ARGS = dict(matrix_in_s = '709', format = vs.RGB24, range_in_s = 'limited')
    
    
    
    #------------------------ END OF USER INPUT ---------------------------------
             
    _RGB_RESIZE = getattr(core.resize, RGB_RESIZE)
    
    '''original YUV to RGB'''
    rgb_clip = _RGB_RESIZE(clip, **RGB_RESIZE_ARGS)
    
    if rgb_clip.format.sample_type == vs.INTEGER:
        '''255 for 8bit, 1023 for 10bit etc'''
        SIZE=2**rgb_clip.format.bits_per_sample-1
        illegal_min=illegal_MIN*(SIZE+1)/256
        illegal_max=illegal_MAX*(SIZE+1)/256
    else:
        '''float values for RGBS'''
        SIZE=1
        illegal_min=illegal_MIN/255.0
        illegal_max=illegal_MAX/255.0
    
    def analyze_channel(plane, min=0, max=None, max_value=None):     
        mask = core.std.Expr(clips=plane, expr=[f'x {min} < x {max} > or {max_value} 0 ?'])\
                   .std.PlaneStats(prop='PlaneStats')
        return [mask.get_frame(frame).props['PlaneStatsAverage']*100 for frame in range(len(plane))] 
    
    def analyze_all(planes, min=0, max=None, max_value=None):
        expr =  [f'x {min} < y {min} < or z {min} < or x {max} > or y {max} > or z {max} > or {max_value} 0 ?']
        mask = core.std.Expr( clips=planes, expr=expr)\
                   .std.PlaneStats(prop='PlaneStats')
        return [mask.get_frame(frame).props['PlaneStatsAverage']*100 for frame in range(len(clip))]
    
    x_axes    = list(range(0, len(clip)))
    planes    = [core.std.ShufflePlanes(rgb_clip, planes=p,  colorfamily=vs.GRAY)  for p in [0,1,2]]
    
    with futures.ThreadPoolExecutor(max_workers=4) as exe:
        jobs = [exe.submit(analyze_channel, plane, illegal_min, illegal_max, SIZE) for plane in planes]
        jobs.append(exe.submit(analyze_all, planes, illegal_min, illegal_max, SIZE))
        y_axis = [job.result() for job in jobs]
        
    plt.title("RGB illegal values")
    plt.xlabel("Frames")
    plt.ylabel("Illegal values in % for frame")
    [plt.scatter(x_axes, y_axis[index], s=10, c=color) for index, color in enumerate(['red','green','blue','yellow'])]
    plt.show()
    Last edited by _Al_; 28th Mar 2020 at 23:40.
    The lighthouse video gave me this.

    The R, G, B channels are plotted in their own colors, and the combined "all channels" trace is yellow in the graph.

    The first image is for the whole video, about 2 minutes in length; the second image is a detail after zooming in on the graph.
    [Attachment 52511: plot.PNG - graph for the whole video]
    [Attachment 52512: plot_detail.PNG - zoomed detail]
  25. To make that vapoursynth work , try to delete all registry connected with it
    I hope you're joking. I've never heard of having to delete registry values and reinstalling a program just to get it to run. If that technique fails then I will have wasted my time screwing around with the registry. Not gonna go there.

    How do you account for the red splotches I see on the lighthouse?
    Last edited by chris319; 28th Mar 2020 at 23:51.
    OK, so screwing around for two years with ffmpeg lines to pull together just a couple of video formats is OK then? Not really sure who is joking here, actually.

    The red thingy in the video marks illegal values for those actual pixels, by your definition.

    Those graphs show them in yellow as a percentage, and the separate channels are plotted in their own colors - red, green and blue - so for example those last graphs show that only illegal blue values are giving trouble. Evaluating that, maybe reducing just the blue channel would work instead of all of them, but I'm not sure how it would render back to YUV.
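
    Something like this could test that idea (a hypothetical sketch reusing the conventions from the script above, untested - it clamps only the B plane in RGB and converts back to YUV):

    Code:
    import vapoursynth as vs
    from vapoursynth import core

    clip = core.lsmas.LibavSMASHSource(r'F:/_lighthouse_lossless.mp4')
    rgb  = core.resize.Point(clip, format=vs.RGB24, matrix_in_s='709', range_in_s='limited')

    # clamp only the B plane to the legal window; empty expressions copy R and G untouched
    rgb  = core.std.Expr(rgb, expr=['', '', 'x 5 max 246 min'])

    # back to 8-bit 4:2:2 YUV with the same matrix/range as the decode
    out  = core.resize.Point(rgb, format=vs.YUV422P8, matrix_s='709', range_s='limited')
    out.set_output()
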
    Last edited by _Al_; 28th Mar 2020 at 23:51.
  27. Originally Posted by chris319 View Post

    PDR: your code fails.

    Which code ?
    How does it "fail" ?
  28. Well f*** me.

    The crux of my problems was the video player, which is essential to my eyedropper program. VLC, MPC-BE and WMP were changing the video levels and making 235-235-235 into 255-255-255. I can't find any way to defeat this behavior. The problem seems to be with the players, not ffmpeg. This messes with my level measurements.

    So I wrote a little video player in PureBasic and sure enough, it doesn't stretch 235 to 255.

    Thanks for making that test file, pdr.

    Anybody know of a player that will play back the true full range without screwing around with the video levels?
  29. "Pot Player" can be configured to not force 235 to 255.

    VLC and MPC-BE are headed for the garbage can where they belong.

    libx265 reduces 255 to 235 in full range unless you use yuvj420p.

    mpeg2video, huffyuv, ffv1 and libx264 are OK.
    Last edited by chris319; 29th Mar 2020 at 05:45.
  30. Originally Posted by chris319 View Post
    Well f*** me.

    The crux of my problems was the video player, which is essential to my eyedropper program. VLC, MPC-BE and WMP were changing the video levels and making 235-235-235 into 255-255-255. I can't find any way to defeat this behavior. The problem seems to be with the players, not ffmpeg. This messes with my level measurements.

    So I wrote a little video player in PureBasic and sure enough, it doesn't stretch 235 to 255.

    Thanks for making that test file, pdr.

    Anybody know of a player that will play back the true full range without screwing around with the video levels?

    Nothing is being "forced" by any of the players; they are just doing the standard conversion by default. If a video is unflagged or misflagged, you can't blame the player

    You should be able to control the conversion in any decent player. It depends on where the RGB conversion happens - sometimes it's the GPU, sometimes the renderer.

    For example, in MPC-HC you can add an output shader 0-255 to 16-235. It should be the same in MPC-BE.


    1) You can use mpv or one of its derivatives. It's an accurate player, including for 10-bit formats. There are many mpv projects and forks and it's highly customizable.

    Code:
    mpv input.ext --video-output-levels=limited
    In this case you want the output levels set to "limited", because it then applies a studio range RGB conversion (RGB 16-235 white to black, instead of computer RGB, 0-255 white to black). It's also called limited range RGB in some places.

    If you're going to be using this scenario frequently, you can make a batch file, maybe saved to a handy location like the desktop, where you can drop a video onto it to play (save a text file, then rename the .txt extension to .bat):

    Code:
    mpv %1 --video-output-levels=limited


    2) ffplay. You can control the RGB conversion using -vf scale or zscale - but there seems to be an accuracy issue, more than +/-3. With greyscale it should be perfect (+/- 0). Both swscale and zscale (zimg) look to be affected, but zimg in vapoursynth works (+/- 0), and that's the same library as zscale, so it suggests an ffmpeg/ffplay implementation bug. I'm looking into whether it affects current versions only, etc.


    3) Shotcut. Not necessarily a great "player". Assign the color range to "full" for the clip in the properties panel. On that test clip, accuracy is slightly off (+/-1). With greyscale values it should be perfect (+/- 0).

    4) avisynth/AvsPmod, or vapoursynth/vsedit - you have full control over the method of RGB conversion for display and playback, with a built-in color picker. But you can't get vapoursynth to work... These are very useful tools to have for AV work. Even if you rely primarily on other tools, it's always nice to have several tools to check and verify results.
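
    For 4), a minimal vapoursynth example of that controlled conversion (just a sketch, assuming the same LSMASHSource loader as _Al_'s script and using the test clip attached above - load it in vsedit and read values with the built-in color picker):

    Code:
    import vapoursynth as vs
    from vapoursynth import core

    clip = core.lsmas.LibavSMASHSource(r'Y_0,16,235,255.mp4')

    # full range equations in, so YUV 235,128,128 previews as studio RGB 235,235,235
    rgb = core.resize.Point(clip, format=vs.RGB24, matrix_in_s='709', range_in_s='full')
    rgb.set_output()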


