-
At your suggestion, I ran my script with range=full and the RGB clipped at 246, just as expected. Oddly, it had been clipping at 255, but now it clips at 246. Was I doing something wrong before? It works now.
The RP 219 colors are all +/- 2 of 191, which is within tolerance as far as I'm concerned, unless you write some code that eliminates the floating-point error. The 0 is clipped to 5 per my command.
I've added GOP 12 and BF 0. Does that sound right?
Last edited by chris319; 17th Feb 2020 at 00:32.
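A small pure-Python sketch (my own numbers, assuming plain 8-bit rounding, not code from this thread) of where that +/- wiggle comes from: the RGB-to-YUV-to-RGB round trip quantizes twice, so gray values can shift by a code either way.

```python
# Sketch: 8-bit round trip through limited-range luma (16-235).
# Two roundings compound, so a gray can come back off by one code.

def gray_to_limited_y(v):
    # computer RGB gray (0-255) -> limited-range luma (16-235)
    return round(16 + v * 219 / 255)

def limited_y_to_gray(y):
    # limited-range luma -> computer RGB gray
    return round((y - 16) * 255 / 219)

worst = max(abs(limited_y_to_gray(gray_to_limited_y(v)) - v)
            for v in range(256))
print(worst)                       # worst-case round-trip error, in codes
print(gray_to_limited_y(191))     # 75% white: 191 maps to 180
```

Floating-point error on top of this rounding is what nudges bar readings a code or two off nominal.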
-
Should be bf 2. I think the vtag is the wrong match; it's for 1080i50. I think "xd59" is the one for 720p60, but it might not matter for MXF; I think it only matters for MOV-wrapped XDCAM HD422. It doesn't hurt to have the correct tag, though. There should be a buffer setting for VBV; I'm not sure what it is. When you use that command it says "auto choosing VBV buffer size of 746 bytes"; I'm pretty sure that's wrong. Audio specs are important too: you need at least 2 mono 48 kHz 24-bit LPCM tracks.
ffmbc passed everything for XDCAM HD422 submissions in the past, for many people. There was a preset. I'm not sure if there are any issues with the ffmpeg variant, or if there are other switches you need. It might be worthwhile to look up the code and translate it over if you don't want to use ffmbc directly.
What "0"? Typical bars shouldn't have any at any stage (subsampling can generate zeros, but typical bars won't show it, because of the way they're arranged), unless you started with Y=0. That's what full range in/out means. I think you're either using the wrong equation to convert, or to check, or both. Use full range equations for everything. Or it's that -pix_fmt doing something funky.
It should be +/- whatever of RGB 180,16,16 now, not 191, because you're using studio range RGB (full range equations), i.e. the final YUV to RGB check should be using it too.
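A quick sketch of that mapping (my own helper, not from the thread): studio/limited-range RGB compresses computer RGB 0-255 into 16-235, so computer 75% red 191,0,0 lands at about 180,16,16.

```python
# Sketch: computer RGB (0-255) -> studio/limited-range RGB (16-235).
# 75% red reads 191,0,0 on a computer system, ~180,16,16 in studio range.

def computer_to_studio(c):
    return round(16 + c * 219 / 255)

print([computer_to_studio(c) for c in (191, 0, 0)])   # [180, 16, 16]
```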
I would explicitly control the conversions by telling ffmpeg the scale matrix, range, and "format" in the linear filter chain. -pix_fmt can cause problems; it will perform limited range 601 unless otherwise specified. As mentioned earlier, I got different results when the file was flagged 709 vs. unflagged when using -pix_fmt instead of placing format explicitly in the correct location in the chain.
When I ran your earlier command line it said "incompatible pixel format rgb24 for codec mpeg2, auto selecting yuv420p", so -pix_fmt was being applied at the end. It only picked yuv420p because you chose mpeg2. In the last one you have two -pix_fmt options. I would avoid it altogether. Use format in the filter chain so you know exactly where and how it's being converted. I would only use -pix_fmt as an input option (before the -i) for rawvideo pipes.
As I said before, the scale arguments you fill in point in the YUV direction. So for YUV to RGB you use the "in" range and matrix; for RGB to YUV you use the "out" range and matrix.
To be clear, this is a full range 709 conversion to RGB, then the lutrgb clipping, then a full range 709 conversion back to YUV:
Code:
-vf scale=in_range=full:in_color_matrix=bt709,format=rgb24,lutrgb='r=clip(val,6,245)',lutrgb='g=clip(val,6,245)',lutrgb='b=clip(val,6,245)',scale=out_color_matrix=bt709:out_range=full,format=yuv420p
Next is subsampling. Y and RGB can be 16-235 (even excursions clipped, so quite conservative) and perfect in YUV444P8: 100% perfect per R 103. Yet as soon as you subsample you can generate illegal pixels. This is easy to prove and demonstrate. It's the combination and proximity of various pixels that causes it, and you can miss it with pure min/max clipping.
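A toy demonstration of that claim (my own example, using standard BT.709 full-range constants, not code from the thread): two perfectly legal studio-range pixels, white next to saturated blue, end up sharing one averaged chroma sample after 4:2:2-style subsampling, and the reconstructed blue channel lands far above 246.

```python
# Sketch (hypothetical pixel values, full-range BT.709 constants):
# legal pixels become illegal purely from chroma subsampling.

KR, KG, KB = 0.2126, 0.7152, 0.0722   # BT.709 luma coefficients

def rgb_to_yuv_full(r, g, b):
    y = KR * r + KG * g + KB * b
    u = 128 + (b - y) / 1.8556
    v = 128 + (r - y) / 1.5748
    return y, u, v

def yuv_to_rgb_full(y, u, v):
    r = y + 1.5748 * (v - 128)
    b = y + 1.8556 * (u - 128)
    g = (y - KR * r - KB * b) / KG
    return r, g, b

white = rgb_to_yuv_full(235, 235, 235)   # legal studio white
blue  = rgb_to_yuv_full(16, 16, 235)     # legal studio blue
# 4:2:2-style subsampling: the pair shares one averaged chroma sample
u_avg = (white[1] + blue[1]) / 2
v_avg = (white[2] + blue[2]) / 2
r, g, b = yuv_to_rgb_full(white[0], u_avg, v_avg)
print(round(b))   # far above 246: illegal after subsampling
```

Real resamplers use longer kernels with ringing, which makes it worse than this plain average.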
You can partially deal with this using a filter, or at least reduce the %. The R 103 document mentions quarter-band horizontal and half-band vertical filters and their coefficients. I think you can apply a convolution filter. I played with this in vapoursynth and it does help... I'm not sure if it's exactly the same numbers, but essentially the net effect is a blurring of details. The problem is that applying the filter, converting back to YUV, then subsampling yet again produces new pixel values, potentially some illegal, yet again. It's a vicious cycle, and you can't blur it to a bloody mess.
The QCer actually applies a filter when checking too. I know a guy; this is all he does, and he owes me some favours, so I'll try to get more info. They typically don't use R 103 at their facility, but I know they have a module/setting for R 103 and R 103 strict, and both use a prefilter. They also have variables like % flagged (i.e. it's adjustable before a warning or rejection is given, and I'm pretty sure it's >1% by default).
The key point is what % is flagged as illegal, and what the acceptable limits are for that submission scenario. But I don't know of a good way of dealing with this besides using dedicated software/hardware. There are broadcast checkers that can estimate or flag % or display illegal pixels as a colored overlay, but I don't know of any free/open source ones that estimate % amounts. If you are aware you are above a certain % then at least you can do something about it.
Last edited by poisondeathray; 17th Feb 2020 at 01:52.
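A minimal illustration of why a prefilter helps (my own toy kernel, not the actual R 103 quarter-band/half-band coefficients): a simple [1,2,1]/4 low-pass on a chroma row softens the step at a hard edge, which is the overshoot-prone situation that subsampling turns into illegal pixels.

```python
# Sketch (illustrative only): a [1,2,1]/4 low-pass on one chroma row.
# The hard edge becomes a gentler ramp between the same two levels.

def lowpass_121(row):
    padded = [row[0]] + list(row) + [row[-1]]   # replicate edges
    return [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4
            for i in range(1, len(padded) - 1)]

u = [128, 128, 238, 238]                    # hard chroma edge
smoothed = lowpass_121(u)
step = lambda r: max(abs(a - b) for a, b in zip(r, r[1:]))
print(step(u), step(smoothed))              # the transition is halved
```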
-
What is a "normal HD vtag"? For the formats you want, these are the vtags:
Code:
xd5d  XDCAM HD422 1080p24 CBR
xd5e  XDCAM HD422 1080p25 CBR
xd5f  XDCAM HD422 1080p30 CBR
xdv6  XDCAM HD 1080p24 VBR
xdv7  XDCAM HD 1080p25 VBR
xdv8  XDCAM HD 1080p30 VBR
xd55  XDCAM HD422 720p25 CBR
xd59  XDCAM HD422 720p60 CBR
-
How does this look?
Code:
ffmpeg -y -i "D:\videos\trains\c0015.MP4" -vf scale=in_range=full:in_color_matrix=bt709,format=rgb24,lutrgb='r=clip(val,6,246)',lutrgb='g=clip(val,6,246)',lutrgb='b=clip(val,6,246)',scale=out_color_matrix=bt709:out_range=full,format=yuv420p -s 1280x720 -r 59.94 -vb 50M -minrate 50M -maxrate 50M -dc 10 -intra_vlc 1 -lmin "1*QP2LAMBDA" -qmin 1 -qmax 12 -vtag xd59 -non_linear_quant 1 -g 12 -bf 2 -profile:v 0 -level:v 2 -acodec pcm_s16le -f mxf clipped.mxf
-
-
What is the -s 1280x720 for? If the input was already 1280x720 you don't need it. And if you did need it, you should include it in the prior -vf scale filter chain with the range and matrix: -vf scale=w=1280:h=720:etc.... Or, if you wanted the intermediate RGB filtering done at the original resolution, whatever it was, then include w=1280:h=720 in the output scale of that filter chain (before the last "format", when exiting RGB).
The last format should be yuv422p (not yuv420p) for xdcamhd422
There are a bunch of XDCAM HD422 checks too. I don't know how strict you're trying to be. You can get warnings or rejections for things like wrong essence, buffer underflow, incorrect XML, really a zillion little things you never thought about.
See this discussion for the bufsize and initial occupancy:
https://groups.google.com/forum/#!topic/ffmbc-discuss/P6r5fsjGq9Y
https://github.com/bcoudurier/FFmbc/blob/ffmbc/ffmbc.c
Code:
} else if (!strcmp(arg, "xdcamhd422")) {
    opt_codec("vcodec", "mpeg2video");
    opt_codec("acodec", "pcm_s16le");
    opt_default("flags2", "+ivlc+non_linear_q");
    opt_default("bf", "2");
    opt_default("g", norm == NTSC ? "15" : "12");
    opt_frame_pix_fmt("pix_fmt", "yuv422p");
    opt_qscale("qscale", "1");
    opt_default("qmin", "1");
    opt_default("b", "50000000");
    opt_default("maxrate", "50000000");
    opt_default("minrate", "50000000");
    opt_default("bufsize", "17825792");
    opt_default("rc_init_occupancy", "17825792");
    opt_default("sc_threshold", "1000000000");
    intra_dc_precision = 10;
    opt_default("color_primaries", "bt709");
    opt_default("color_transfer", "bt709");
    opt_default("color_matrix", "bt709");
-
How's this?
Code:
D:\Programs\ffmpeg\BroadcastVideo\ffmpeg -y -i "D:\videos\trains\c0015.MP4" -vf scale=w=1280:h=720:in_range=full:in_color_matrix=bt709,format=rgb24,lutrgb='r=clip(val,6,246)',lutrgb='g=clip(val,6,246)',lutrgb='b=clip(val,6,246)',scale=out_color_matrix=bt709:out_range=full,format=yuv422p -r 59.94 -vb 50M -minrate 50M -maxrate 50M -dc 10 -intra_vlc 1 -lmin "1*QP2LAMBDA" -qmin 1 -qmax 12 -vtag xd59 -non_linear_quant 1 -g 12 -bf 2 -profile:v 0 -level:v 2 -acodec pcm_s16le -f mxf clipped.mxf
With your code, the whites have crept back up to 255 in "full" range. With the code I posted, the whites are clipped at 245.
Last edited by chris319; 17th Feb 2020 at 10:58.
-
This code clips the whites at 245 and gives me 4:2:2:
Code:
D:\Programs\ffmpeg\BroadcastVideo\ffmpeg -y -i "D:\videos\trains\c0015.MP4" -vf scale=in_range=full:in_color_matrix=bt709,format=rgb24,lutrgb='r=clip(val,6,246)',lutrgb='g=clip(val,6,246)',lutrgb='b=clip(val,6,246)',scale=out_color_matrix=bt709:out_range=full,format=yuv422p -s 1280x720 -r 59.94 -vb 50M -minrate 50M -maxrate 50M -dc 10 -intra_vlc 1 -lmin "1*QP2LAMBDA" -qmin 1 -qmax 12 -vtag xd59 -non_linear_quant 1 -g 12 -bf 2 -profile:v 0 -level:v 2 -f mxf clipped.mxf
-
RGB or YUV ?
What video? Bars ? or some random video ?
What stage ? The very last ?
How are you checking? Computer RGB, or studio RGB range ?
If you're using a computer monitor and color picker, you're probably using computer RGB .
The last RGB check has to be done as r103 - studio RGB range too.
A random video can generate 0 and 255, because of the subsampling and upsampling before the final RGB check. Easy to demonstrate and prove this.
But typical bars with 75% colors and 0,100% white/black should not.
-
All measurements are RGB as this is the R 103 spec.
I'm using a color picker and MPC-BE to check. There is an object in the shot which reads RGB 255 out of the camera, and 255 using your code with the size moved into the prior -vf scale filter chain with the range and matrix (-vf scale=w=1280:h=720:etc....).
The last RGB check has to be done as r103 - studio RGB range too.
I'm using actual camera footage with an object which reads RGB 255-255-255 untreated. When I apply my code it's RGB 246-246-246. Good enough for me and R 103 compliant.
-
You're not measuring it correctly.
i.e. MPC-BE is not using the correct RGB. It's converting back to RGB for display using the standard Rec 709 conversion, not the studio RGB levels required for R 103.
What you "see" is just an RGB-converted representation of the underlying YUV video.
If you're using a media player with a normal setup, it will take that file and use the normal 709 conversion for computer RGB, because you're on a computer 0-255 system. So a big white bar of Y=235 will become RGB 255,255,255.
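A one-function sketch of that display conversion (my own helper name, gray pixels only): the limited-range equation stretches Y 16-235 out to RGB 0-255, so legal white Y=235 reads 255 in a color picker.

```python
# Sketch: what a normal media player does to a gray pixel for display.
# Limited-range ("studio swing") Y 16-235 is stretched to RGB 0-255.

def limited_gray_to_rgb(y):
    g = round((y - 16) * 255 / 219)
    return max(0, min(255, g))   # clamp over/undershoots

print(limited_gray_to_rgb(235))   # 255: legal white shows as full white
print(limited_gray_to_rgb(16))    # 0:   legal black shows as full black
```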
If you have a secondary display you can set that up for RGB 16-235, but you need to do a bunch of things to set it up properly, like setting the GPU drivers to output 16-235.
Another way is to preview it like this with ffplay, then use a normal color picker like you are now
Code:
ffplay -i xdcamhd422.mxf -vf scale=in_color_matrix=bt709:in_range=full,format=rgb24
(you can also do these types of manipulations and preview them in avspmod, vsedit. And check YUV levels directly)
I already mentioned this on page 1
-
I did a vapoursynth test. I tried to follow all that was said to clip those RGB values the right way, and made some black & white masks to see the results. Basically, if there are lots of illegal values in the original YUV, like a blown-out sky, they will still show at the end; some highs of R, G or B still come out from the middle of the range. Even if the YUV original is nice, it will still end up with illegal values, like edges where some R, G or B channel alone jumps out.
Code:
import vapoursynth as vs
from vapoursynth import core
import numpy as np
import cv2

clip = core.lsmas.LibavSMASHSource(r'G:/video.mp4')
MIN = 5
MAX = 246

def restrict_rgb_frame(n, f):
    img = np.dstack([np.array(f.get_read_array(i), copy=False) for i in range(3)])
    img = np.clip(img, a_min=MIN, a_max=MAX)
    vs_frame = f.copy()
    [np.copyto(np.asarray(vs_frame.get_write_array(i)), img[:, :, i]) for i in range(3)]
    return vs_frame

def get_mask_illegal_values(n, f):
    img = np.dstack([np.array(f.get_read_array(i), copy=False) for i in range(3)])
    mask = cv2.inRange(img, (MIN, MIN, MIN), (MAX, MAX, MAX))
    mask = cv2.bitwise_not(mask)  # mask is one plane only, grayscale
    vs_frame = f.copy()
    [np.copyto(np.asarray(vs_frame.get_write_array(i)), mask[:, :]) for i in range(3)]
    return vs_frame

'''original YUV to RGB, using YUV as full range to get studio RGB'''
rgb_clip = core.resize.Point(clip, matrix_in_s='709', format=vs.RGB24, range_in_s='full')
'''making a black&white mask to show illegal values before limiter'''
mask = core.std.ModifyFrame(rgb_clip, rgb_clip, get_mask_illegal_values)
'''limiting RGB, there are no illegal values now'''
clipped_rgb = core.std.ModifyFrame(rgb_clip, rgb_clip, restrict_rgb_frame)
'''making a mask to show illegal values (should be all black, no illegal values now)'''
mask_clipped = core.std.ModifyFrame(clipped_rgb, clipped_rgb, get_mask_illegal_values)
'''RGB back to final YUV for delivery'''
clipped_yuv = core.resize.Point(clipped_rgb, matrix_s='709', format=vs.YUV420P8, range_s='full')
'''changing that YUV back to RGB to mock a studio monitor'''
rgb_clip_monitor = core.resize.Point(clipped_yuv, matrix_in_s='709', format=vs.RGB24, range_in_s='full')
'''making a black&white mask to show illegal values'''
mask_monitor = core.std.ModifyFrame(rgb_clip_monitor, rgb_clip_monitor, get_mask_illegal_values)

clip.set_output(0)
rgb_clip.set_output(1)
clipped_rgb.set_output(2)
mask.set_output(3)
clipped_yuv.set_output(4)
rgb_clip_monitor.set_output(5)
mask_monitor.set_output(6)
-
Nice _Al_ ,
I wonder if there is a way to estimate the % or # of pixels affected in a frame? Maybe binarize the results (0 or 1), then calculate that against 100% black? Or is there some python function that can already do this?
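A pure-Python sketch of that estimate (a stand-in for the numpy version that appears later in the thread, with made-up pixel values): count pixels where any channel falls outside the chosen limits.

```python
# Sketch: percentage of pixels with any channel outside [lo, hi].

def percent_illegal(pixels, lo=5, hi=246):
    bad = sum(1 for (r, g, b) in pixels
              if not (lo <= r <= hi and lo <= g <= hi and lo <= b <= hi))
    return 100.0 * bad / len(pixels)

frame = [(180, 16, 16), (255, 0, 0), (128, 128, 128), (246, 246, 246)]
print(percent_illegal(frame))   # 25.0: one of four pixels is illegal
```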
It's nice to visualize the affected areas too, so you can maybe do something about them, e.g. clip to a stricter range and see how that affects it, or apply a filter like GeneralConvolution in avs or Convolution in vpy, decrease saturation, etc... and you can use the same masks to apply the filter.
A trip down memory lane: recall the U,V ranges for valid RGB and the animated GIF. So many values, even in the middle of the range, are illegal.
https://forum.doom9.org/showthread.php?t=154731
If you "cull" all the bad values by converting to RGB (and even clipping to a stricter range), many illegal values come back as soon as you subsample.
-
I'll try using ffplay when I'm back at my regular computer.
What about the -t option? If I use -t 10 I would expect it to play for 10 seconds, then hopefully it would pause and display a freeze frame. Is that how it works? Or, I can manually pause playback with the space bar or the "p" key.
Code:
ffplay -i myvideo.mxf -t 10 -vf scale=in_color_matrix=bt709:in_range=full,format=rgb24
Al: what qualifies as "illegal" in your tests?
Last edited by chris319; 17th Feb 2020 at 14:36.
-
Yes -t should work, and spacebar for pause
I have checked mpc-be using my color-bar video. If the bar is, say, RGB 0-191-0 in the bmp image, the reading I get from mpc-be is likewise 0-191-0. I'm not expecting a great deal of difference using ffplay but I'll try it.
There's your answer.
A BMP is already RGB. Not YUV.
You're using computer RGB (0-255) in MPC-BE for your YUV to RGB conversion for display. Recall the 2 RGB systems: computer RGB and studio range RGB (limited range RGB). On a studio range RGB system such as R 103, 75% red is 180,16,16. On a computer RGB system, 75% red is 191,0,0. In both cases the YUV values are the same.
-
@ _Al_
1) I think it would be more flexible to set the mask_illegal_values min/max limits separately from the restrict_rgb_frame clipping min/max values. e.g. you might want to clip to a different min/max range, or set different illegal limits. You might have clip_min, clip_max, and illegal_min, illegal_max, or something like that.
2) More of a general question, but what is an easy way to define and use a parameter variable in python?
For example, I would like to change the resize algorithm (or any command that I can replace with a variable name) via a global variable, instead of having "Point" hardcoded. So instead of having to manually change Point to Bicubic 3 times (or however many instances), just do it once in the parameter list.
So in the list of parameter variables you could change something like
clip_min = 16
clip_max = 235
illegal_min = 5
illegal_max = 246
Resize = "Bicubic"
-
"Illegal" here means any R, G or B value outside the range (5,246) for that particular RGB video. That shows as a white pixel. But that is just what I did; it could be different, per channel etc.
Here is how to show what percentage of pixels are white. The function that makes those masks is just changed, plus one global variable that counts beforehand how many pixels the video actually has.
Code:
NUM_PIXELS = clip.width * clip.height

def get_mask_illegal_values(n, f):
    img = np.dstack([np.array(f.get_read_array(i), copy=False) for i in range(3)])
    mask = cv2.inRange(img, (MIN, MIN, MIN), (MAX, MAX, MAX))
    mask = cv2.bitwise_not(mask)
    ####this would make an all-white grayscale to test it, it would show 100%
    ##mask = np.zeros((clip.height, clip.width), np.uint8)
    ##mask[:] = (255)
    ####an all-black grayscale to test it would show 0 %
    ##mask = np.zeros((clip.height, clip.width), np.uint8)
    white_pixel_total = np.sum(mask == 255)
    percent = round(white_pixel_total/NUM_PIXELS*100, 2)
    mask = np.dstack([mask, mask, mask])
    text = f'{percent} % of illegal values R,G or B is below {MIN} or above {MAX}'
    mask = cv2.putText(mask, text, (50, clip.height-80), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0), 3, cv2.LINE_AA)
    vs_frame = f.copy()
    [np.copyto(np.asarray(vs_frame.get_write_array(i)), mask[:, :, i]) for i in range(3)]
    return vs_frame
Most parts of the video are far below 0.1 percent.
Last edited by _Al_; 17th Feb 2020 at 15:56.
-
poisondeathray, I will look into it later,
To define the resize method it has to be done by getting an attribute, because the kernel is not an argument but an attribute of 'resize'.
Example defining the resize method:
Code:
kernel = 'Point'  # or 'Bicubic' etc.
Code:
_resize = getattr(core.resize, kernel)
clip = _resize(clip, format=vs.RGB24)
which replaces the hardcoded:
Code:
clip = core.resize.Point(clip, format=vs.RGB24)
-
Actually, planestats average calculated on mask_monitor already gives the % affected, doesn't it? (Or not as a %, but expressed as 0-1.0.) Since it's already binarized, 100% of pixels flagged as illegal returns an average of 1.0, 0% affected returns 0.0, and some white dots in between give some value in between. (It also lists channel min/max values, but those refer to the mask only, so it's always 0 and 1.)
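A toy check of that idea (pure Python, a 4-pixel list standing in for an 8-bit mask plane; note vapoursynth's PlaneStatsAverage is already normalized to 0-1.0): the plane average of a binarized 0/255 mask, divided by 255, is exactly the fraction of flagged pixels.

```python
# Sketch: plane average of a binary 8-bit mask -> percent flagged.

mask = [255, 0, 0, 0]          # one illegal pixel out of four
avg = sum(mask) / len(mask)    # raw 8-bit plane average
print(avg / 255 * 100)         # 25.0 percent flagged
```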
I'm pretty sure this is OK for separating out the min/max for clipping vs. illegal; can you double-check when you get a chance? Thanks.
Code:
clip_MIN = 16
clip_MAX = 235
illegal_MIN = 5
illegal_MAX = 246

def restrict_rgb_frame(n, f):
    img = np.dstack([np.array(f.get_read_array(i), copy=False) for i in range(3)])
    img = np.clip(img, a_min=clip_MIN, a_max=clip_MAX)
    vs_frame = f.copy()
    [np.copyto(np.asarray(vs_frame.get_write_array(i)), img[:, :, i]) for i in range(3)]
    return vs_frame

def get_mask_illegal_values(n, f):
    img = np.dstack([np.array(f.get_read_array(i), copy=False) for i in range(3)])
    mask = cv2.inRange(img, (illegal_MIN, illegal_MIN, illegal_MIN), (illegal_MAX, illegal_MAX, illegal_MAX))
    mask = cv2.bitwise_not(mask)  # mask is one plane only, grayscale
    vs_frame = f.copy()
    [np.copyto(np.asarray(vs_frame.get_write_array(i)), mask[:, :]) for i in range(3)]
    return vs_frame
Last edited by poisondeathray; 17th Feb 2020 at 16:06.
-
This gives rise to the question: when I encode the bmp, do I want to use full range or limited range to end up with standard RP 219 bars in the encoded mp4? I think I've been using limited range (16 - 235).
On a studio range RGB system such as r103, 75% red is 180,16,16 . On a computer RGB system , 75% red is 191,0,0 .
However, jagabo's calculations, which match SMPTE RP 219 exactly, have red at 191-0-0. So RP 219 bars are full range RGB 0 - 255?
The video R 103 "expects" is limited-range 16 - 235?
Have I got that right?
I see that you have specified "full" when using ffplay. So how is this different from MPC-BE?
-
I'll be back, out of time today. Those PlaneStats give min, max and average values, and the average is in floating point, I thought. I don't think it is possible to get the % of out-of-range values, but I could be wrong.
EDIT: I see, in that specific case of 0 or 1 it should give the %; I will compare those values.
Last edited by _Al_; 17th Feb 2020 at 16:21.
-
Again, it doesn't matter. You end up with the same YUV values either way (or very close, it turns out you need that levels workaround if starting with 180,16,16)
If you start with RGB 180,16,16, you use the full range equation
If you start with RGB 191,0,0, you use the standard, limited range equation
But for people working on computers, you usually use RGB 0-255, because your display is set up for RGB 0-255.
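A numeric sketch of "it doesn't matter" (my own helpers, standard BT.709 constants): starting from studio 180,16,16 with the full-range equation, or computer 191,0,0 with the limited-range equation, lands on nearly the same YUV triple, matching the "very close" caveat above.

```python
# Sketch: the two routes to YUV for 75% red differ by a couple of codes.

KR, KG, KB = 0.2126, 0.7152, 0.0722   # BT.709 luma coefficients

def rgb_to_yuv_full(r, g, b):
    y = KR * r + KG * g + KB * b
    return (round(y),
            round(128 + (b - y) / 1.8556),
            round(128 + (r - y) / 1.5748))

def rgb_to_yuv_limited(r, g, b):
    rn, gn, bn = r / 255, g / 255, b / 255
    y = KR * rn + KG * gn + KB * bn
    return (round(16 + 219 * y),
            round(128 + 224 * (bn - y) / 1.8556),
            round(128 + 224 * (rn - y) / 1.5748))

print(rgb_to_yuv_full(180, 16, 16))    # (51, 109, 210)
print(rgb_to_yuv_limited(191, 0, 0))   # (51, 109, 212)
```

The small V difference is the rounding/levels mismatch the workaround compensates for.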
On a studio range RGB system such as r103, 75% red is 180,16,16 . On a computer RGB system , 75% red is 191,0,0 .
However, jagabo's calculations, which match SMPTE RP 219 exactly, have red at 191-0-0. So RP 219 bars are full range RGB 0 - 255?
If you convert YUV bars to RGB with the full range equation, 709 (resulting in limited range RGB, or studio range RGB), you get RGB 180,16,16.
The video R 103 "expects" is limited-range 16 - 235?
Have I got that right?
In Studio range RGB (or limited range RGB), black to white is RGB 16-235
In computer RGB, Black to white is RGB 0-255 .
It's using "limited" by default, i.e. the "regular" computer RGB conversion: limited range 709 to convert YUV to RGB for display. So that results in RGB 0-255. For bars you would get 191,0,0 (or +/- 1 or 2 for rounding etc...).
-
min/max aren't relevant, because they will always be 0 or 255 on the mask layer for 8-bit, 0 or 1023 for 10-bit, etc...
I'm pretty sure it's just *100, and it's linear, not some weird log function. It's just a decimal range 0-1.0: 0 is 100% black, 1.0 is 100% white. Stuff in between is intermediate; 50% affected would be 0.5.
-
Yes, this is the specific case of 0 or 1 only. I fixed my answer; I will check it.
Then, if trying to make it vs only, that mask would have to be ready beforehand in vs as well.
-
If you start with RGB 191,0,0, you use the standard, limited range equation
It's using "limited" by default. Or the "regular" computer RGB conversion. ie. Limited range 709 to convert YUV to RGB for display. So that results in RGB 0-255.
In studio range RGB (or limited range RGB), black to white is RGB 16-235.
-
Yes, this specific case. It's just to estimate what % of pixels are "illegal" as defined by the limits set, whether you set them to 5,246 or to some other values you want to examine. Some places list a certain % that's allowable (and not just for R 103; I mean in general, it's useful for many other scenarios too).
The mask is already there from mask_monitor
I just added
Code:
mask_monitor = core.std.PlaneStats(mask_monitor, plane=0, prop='PlaneStatsR').std.PlaneStats(mask_monitor, plane=1, prop='PlaneStatsG').std.PlaneStats(mask_monitor, plane=2, prop='PlaneStatsB').text.FrameProps()
So this is quite a handy function .
You can adjust the min/max clipping, maybe to 8 and 245, and show how many white spots disappear or become "legal". You can apply a low pass or convolution and watch the % of illegal pixels drop.
I'm still validating/testing it, but it seems to match other tools and what planestats is saying. I'm trying to look for problems and issues for you to fix, but it's pretty solid right now. It's almost like a pro tool where hot spots are identified with a zebra or colored overlay. In fact, you can modify this to display a colored, maybe 25%, overlay quite easily too.
And this shows what experience is saying: when you clip RGB 6-246, you get 0% illegal when you do everything the same but use YUV444. It's the subsampling step that is the culprit generating illegal pixels. On normal videos, 100% legal can become some % illegal as soon as you subsample. It really depends on the distribution of pixels, their relationships and edges. You can even clip to strict RGB 16-235; on some videos the subsampling will bring back some % illegal. It's almost unavoidable.
-
No, I already tried to explain this... I know the terminology is confusing.
There is a difference between limited and full range equations, and limited and full range data levels.
"Limited range video" means the black point is supposed to be at Y=16 and the white point at Y=235. You can have Y 0-255 data levels, but if the black point is Y=16 and white is Y=235, that's still called "limited range video", just with over/undershoots.
"full range video" means black point is supposed to be Y=0, white point Y=255
Similarly, studio range RGB (or limited range RGB) refers to the black and white points of RGB 16,16,16 and 235,235,235. You can still have RGB values <16 or >235: undershoots and overshoots.
And computer range RGB (or full range RGB) refers to the black and white point of RGB 0,0,0 , 255,255,255
The limited range equation refers to Y 16-235 converting to/from RGB 0-255 (Y 0-15 and 236-255 are discarded). It's called the "limited" range equation because it takes the limited range of Y 16-235. So if you started with normal YUV video, with Y 16-235, you need to apply the limited range equation to get full range RGB data levels (or computer range RGB, where black is 0,0,0 and white is 255,255,255). This is why the ffmpeg syntax is -vf out_range=limited or tv, in_range=limited or tv. And that is why it "points" in the YUV direction: the "limited" refers to the YUV side. The "limited" for ffmpeg -vf scale is referring to the equation used.
http://avisynth.nl/index.php/Color_conversions
yuv [16,235] <-> rgb [0,255] (0 <= [r,g,b] <= 255, 16 <= y <= 235, 16 <= [u,v] <= 240)
Similarly, the full range equation refers to Y 0-255 converting to/from RGB 0-255. So if you had Y 16-235, you would get RGB 16-235. If you had YUV 39,128,128, you would get RGB 39,39,39. That's your "unity" from way back in the beginning. It's called the full range equation because it takes the full range of Y 0-255 instead of 16-235. Similarly, ffmpeg -vf scale has in/out ranges of "full" or "pc" for the full range equations.
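A two-line sketch of that difference for neutral grays, U=V=128 (my own helper names): the full-range equation passes gray through unchanged, while the limited-range equation stretches it.

```python
# Sketch: full vs. limited range equation, gray pixels only.

def limited_eq(y):   # Y 16-235 -> RGB 0-255
    return round((y - 16) * 255 / 219)

def full_eq(y):      # Y 0-255 -> RGB 0-255: gray is "unity"
    return y

print(full_eq(39))       # 39:  YUV 39,128,128 -> RGB 39,39,39
print(limited_eq(235))   # 255: same white point, very different reading
```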
http://avisynth.nl/index.php/Color_conversions
yuv [0,255] <-> rgb [0,255] (0 <= [r,g,b] <= 255, 0 <= y <= 255, 0 <= [u,v] <= 255)
-
It's called "limited" range equation because it takes the limited range of Y 16-235.
So when encoding my color-bar bmp I use range=limited, correct?