VideoHelp Forum
  1. Originally Posted by chris319 View Post
    This code works:

    Code:
    rf.f = yf + 2*(vf-128)*(1-#Kr)
    gf.f = yf - 2*(uf-128)*(1-#Kb)*#Kb/#Kg - 2*(vf-128)*(1-#Kr)*#Kr/#Kg 
    bf.f = yf + 2*(uf-128)*(1-#Kb)
    
    If rf.f > 255: rf = 255:EndIf
    If gf.f > 255: gf = 255:EndIf
    If bf.f > 255: bf = 255:EndIf
    And this code doesn't, despite your claim of having tested it.

    Code:
    Pr = v / 255: Pb = u / 255: Y / 255
    
    Rf.f=Yf+Pr*2*(1-#Kr)
    Gf.f=Yf-(2*Pb*(1-#Kb)*(#Kb/(1-#Kr-#Kb)))-(2*Pr*(1-#Kr)*(#Kr/(1-#Kr-#Kb))) 
    Bf.f=Yf+Pb*2*(1-#Kb)
    
    rf * 255
    gf * 255
    bf * 255
    This is almost the same code (only Kg is replaced by explicit use of Kr and Kb; Kg = 1 - Kr - Kb).
    You must subtract the offset (255/2 = 127.5) from Cb and Cr.
    Clamping (at 0 and at 255) may be required due to quantization errors.

    So the pseudocode should be like this:

    Code:
    Pr = (v - 127.5) / 255: Pb = (u - 127.5) / 255: Y = y / 255
    
    Rf.f=Y+Pr*2*(1-#Kr)
    Gf.f=Y-(2*Pb*(1-#Kb)*(#Kb/(1-#Kr-#Kb)))-(2*Pr*(1-#Kr)*(#Kr/(1-#Kr-#Kb))) 
    Bf.f=Y+Pb*2*(1-#Kb)
    
    If Rf.f > 1.0: Rf = 1.0:EndIf
    If Gf.f > 1.0: Gf = 1.0:EndIf
    If Bf.f > 1.0: Bf = 1.0:EndIf
    
    If Rf.f < 0.0: Rf = 0:EndIf
    If Gf.f < 0.0: Gf = 0:EndIf
    If Bf.f < 0.0: Bf = 0:EndIf
    
    Rf = Rf * 255
    Gf = Gf * 255
    Bf = Bf * 255
    You should check the data types (I have no clue - does Rf.f mean float and Rf mean integer?) and also define the Kr and Kb constants.
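
    For concreteness, the same math as a runnable Python sketch (a hypothetical helper, not the code above; it assumes full-range 8-bit input and the BT.709 constants Kr = 0.2126, Kb = 0.0722):

    Code:
    # Full-range BT.709 YCbCr -> RGB for one pixel, with the chroma offset
    # subtracted and the result clamped to absorb quantization over/undershoots.
    def ycbcr_to_rgb(y, u, v, Kr=0.2126, Kb=0.0722):
        Kg = 1.0 - Kr - Kb
        yf = y / 255.0
        pb = (u - 127.5) / 255.0   # remove the chroma offset first
        pr = (v - 127.5) / 255.0
        r = yf + 2.0 * pr * (1.0 - Kr)
        g = yf - 2.0 * pb * (1.0 - Kb) * Kb / Kg - 2.0 * pr * (1.0 - Kr) * Kr / Kg
        b = yf + 2.0 * pb * (1.0 - Kb)
        clamp = lambda x: min(max(x, 0.0), 1.0)
        return tuple(round(clamp(c) * 255.0) for c in (r, g, b))

    print(ycbcr_to_rgb(128, 128, 128))  # -> (129, 128, 129): mid gray, off by 1 from rounding the 127.5 offset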
  2. Originally Posted by chris319 View Post
    And this code doesn't, despite your claim of having tested it.

    Code:
    Pr = v / 255: Pb = u / 255: Y / 255
    
    Rf.f=Yf+Pr*2*(1-#Kr)
    Gf.f=Yf-(2*Pb*(1-#Kb)*(#Kb/(1-#Kr-#Kb)))-(2*Pr*(1-#Kr)*(#Kr/(1-#Kr-#Kb))) 
    Bf.f=Yf+Pb*2*(1-#Kb)
    
    rf * 255
    gf * 255
    bf * 255
    Because you normalized U and V incorrectly.
  3. I am using this code with ffmpeg:

    Code:
    ffmpeg -y -i "infile.mp4"  -c:v libx264  -pix_fmt yuv420p  -crf 0      -vf zscale=in_range=full:out_range=full,eq=brightness=-0.05,lutyuv=y='clip(val*1.19,0,255)',unsharp=luma_msize_x=7:luma_msize_y=7:chroma_msize_x=7:chroma_msize_y=7:luma_amount=-0.7:chroma_amount=-0.7   -x264opts colorprim=bt709:transfer=bt709:colormatrix=bt709:force-cfr   -c:a copy  outfile.mp4
    It works but MediaInfo says "Color range: Limited".

    ???
  4. Originally Posted by chris319 View Post
    I am using this code with ffmpeg:

    Code:
    ffmpeg -y -i "infile.mp4"  -c:v libx264  -pix_fmt yuv420p  -crf 0      -vf zscale=in_range=full:out_range=full,eq=brightness=-0.05,lutyuv=y='clip(val*1.19,0,255)',unsharp=luma_msize_x=7:luma_msize_y=7:chroma_msize_x=7:chroma_msize_y=7:luma_amount=-0.7:chroma_amount=-0.7   -x264opts colorprim=bt709:transfer=bt709:colormatrix=bt709:force-cfr   -c:a copy  outfile.mp4
    It works but MediaInfo says "Color range: Limited".

    ???

    You can use -x264opts fullrange=on, or just add it to your existing -x264opts command. It's just a "flag" or "label"; nothing is actually converted here (unlike the RGB to full-range YUV conversion earlier, where the actual data was changed).

    I don't think you need zscale there, because it's not doing anything

    Code:
    -x264opts fullrange=on:colorprim=bt709:transfer=bt709:colormatrix=bt709:force-cfr

    But what is your infile.mp4? Is it actually full range? Or limited range with over/undershoots?

    Because the presence of the flag can mess things up on the receiving end if it's not actually full range.
  5. I don't think you need zscale there, because it's not doing anything
    zscale comes from code you posted:

    https://forum.videohelp.com/threads/390628-Inaccurate-YUV-RGB-Conversion/page2#post2533099

    The original is from the camcorder which MediaInfo says is Limited. However,

    use -x264opts fullrange=on, or just add it to your existing -x264opts command.
    makes it "Full" in MediaInfo, so that's a step forward. Is there any other unnecessary code here?

    Code:
    ffmpeg -y -i "infile.mp4"  -c:v libx264  -pix_fmt yuv422p  -crf 0      -vf zscale=in_range=full:out_range=full,eq=brightness=-0.05,lutyuv=y='clip(val*1.19,0,255)',unsharp=luma_msize_x=7:luma_msize_y=7:chroma_msize_x=7:chroma_msize_y=7:luma_amount=-0.7:chroma_amount=-0.7   -x264opts fullrange=on:colorprim=bt709:transfer=bt709:colormatrix=bt709:force-cfr   -c:a copy  outfile.mp4
    I had to omit

    Code:
    zscale=matrix=709
    because ffmpeg was complaining of no path between color spaces or some such.
  6. Originally Posted by chris319 View Post
    I don't think you need zscale there, because it's not doing anything
    zscale comes from code you posted:

    https://forum.videohelp.com/threads/390628-Inaccurate-YUV-RGB-Conversion/page2#post2533099
    That's for a different conversion: full-range RGB bitmap to YUV, instead of a limited-range conversion.

    Presumably your MP4 is YUV? YUV to YUV is a different type of conversion (obviously), but that's the difference here.


    The original is from the camcorder which MediaInfo says is Limited. However,

    use -x264opts fullrange=on, or just add it to your existing -x264opts command.
    makes it "Full" in MediaInfo, so that's a step forward.
    Why is "full" desired here ? Just curious . It looks like you're multiply values with lutyuv, is it for some testing scenario ?



    I had to omit

    Code:
    zscale=matrix=709
    because ffmpeg was complaining of no path between color spaces or some such.

    "Matrix" is only applicable for colorspace conversions eg. RGB<=>YUV conversions (or other color model). Since YUV => YUV is going to the same thing, "matrix" is irrelevant . It was included in the earlier example, because it was going from RGB=>YUV (so instead of 601, 2020,etc... )
  7. It gets complicated. The ultimate objective is to make camcorder video EBU r 103 compliant.

    https://tech.ebu.ch/docs/r/r103.pdf

    The camcorder video is 4:2:0 YUV at 1080p. I then convert it to RGB24 720p because it is MUCH easier to control the individual R, G, B channels in the RGB domain. So we are making a round trip: YUV -> RGB -> YUV. The camcorder puts out levels from 16 - 255. So far I've been working in limited range but plan to experiment with full range to see how ffmpeg behaves. Distributors do not accept 1080p.

    In the U.S. we are still using MPEG-2 for over-the-air video. The 6 MHz RF channel handles 19.38 Mbps, but this is usually divided between a main HD channel and several SD subchannels at as low as, say, 3 Mbps. There are really two standards: a "contribution" standard, generally at 50 Mbps, which is then scaled down to a lower bit rate for broadcast, depending on how each station has divided its 19.38 Mbps. Each station resamples from the "contribution" standard for eventual over-the-air broadcast.

    In any event, there must NOT be Y, R, G or B information at 0 or 255 as these values are strictly for sync.

    The video is then converted back to YUV 4:2:2 because that is what distributors want. At this stage I alter the "lift" and "gain" using brightness and lutyuv. This is all checked on a special waveform monitor which reads out the minimum and maximum levels for each R, G and B channel. It's been difficult to achieve but has worked out nicely. If you have a better way of doing it I'd love to hear about it.

    I found it difficult to bring the R, G and B channels into spec in the YUV domain. You wind up sucking too much saturation out of the picture.
    Last edited by chris319; 8th Nov 2018 at 05:21.
  8. Originally Posted by chris319 View Post
    In any event, there must NOT be Y, R, G or B information at 0 or 255 as these values are strictly for sync.
    Clamp between 1 and 254 (as SAV and EAV use 0 and 255). Aside from that, I'm not sure any broadcaster uses internal infrastructure that transmits RGB - I assume 0 and 255 are not your problem, as RGB is present only at the display.

    Originally Posted by chris319 View Post
    The video is then converted back to YUV 4:2:2 because that is what distributors want. At this stage I alter the "lift" and "gain" using brightness and lutyuv. This is all checked on a special waveform monitor which reads out the minimum and maximum levels for each R, G and B channel. It's been difficult to achieve but has worked out nicely. If you have a better way of doing it I'd love to hear about it.
    Honestly, I don't get why you struggle with things that have no (or only marginal) impact when you can simply CLAMP your signal to meet the requirements.

    Originally Posted by chris319 View Post
    I found it difficult to bring the R, G and B channels into spec in the YUV domain. You wind up sucking too much saturation out of the picture.
    RGB to YCbCr to RGB should involve no quality loss (except quantization errors) - YCbCr is an artificial (but very useful) colour space, as there are no sensors or displays able to capture or display YCbCr natively.
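
    As a quick numeric check of that claim, a sketch (hypothetical helpers, full-range BT.709) of one RGB -> YCbCr -> RGB round trip with 8-bit rounding at each step:

    Code:
    Kr, Kb = 0.2126, 0.0722
    Kg = 1.0 - Kr - Kb

    def rgb_to_ycbcr(r, g, b):
        y = Kr * r + Kg * g + Kb * b
        cb = (b - y) / (2.0 * (1.0 - Kb)) + 127.5
        cr = (r - y) / (2.0 * (1.0 - Kr)) + 127.5
        return tuple(round(c) for c in (y, cb, cr))  # 8-bit quantization happens here

    def ycbcr_to_rgb(y, cb, cr):
        r = y + 2.0 * (1.0 - Kr) * (cr - 127.5)
        b = y + 2.0 * (1.0 - Kb) * (cb - 127.5)
        g = (y - Kr * r - Kb * b) / Kg   # invert the luma equation for green
        return tuple(round(c) for c in (r, g, b))

    print(ycbcr_to_rgb(*rgb_to_ycbcr(200, 30, 90)))  # (200, 29, 89): off by ~1 per channel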
    Last edited by pandy; 9th Nov 2018 at 06:49.
  9. Clamp between 1 and 254 (as SAV and EAV use 0 and 255)
    How is this done, Pandy? What is the ffmpeg command for it?

    I already have this code:
    Code:
    lutrgb=r='clip(val,17,231)',lutrgb=g='clip(val,17,231)',lutrgb=b='clip(val,17,231)'
    I also set gamma in this process but omitted it from my post.

    not sure if any broadcaster uses internal infrastructure to transmit RGB
    They don't. That's why I convert back to YUV.

    I assume 0 and 255 are not your problem, as they are present only at the display.
    Broadcasters will REJECT program material that is not up to their specs, i.e. they will not broadcast it. So it is my problem.
    Last edited by chris319; 8th Nov 2018 at 08:51.
  10. Originally Posted by chris319 View Post
    Distributors do not accept 1080p.
    Because there is no 1080p59.94 broadcast in North America; it's 1080i29.97 or 720p59.94, and stations prefer one or the other.

    If it's 1080p content it must be converted properly. If it's progressive "23.976p" content it must be telecined ("23.976p in 29.97i" with 3:2 cadence) for 1080i, or given 3:2 frame repeats for 720p59.94. If it's 59.94p content it must be converted to 29.97i for 1080i (true interlaced content); they usually want it low-passed too (to prevent line twittering). 29.97p content is just frame-doubled to 59.94 for 720p, or encoded interlaced as 29.97i ("29.97p in 29.97i", where each field represents the same moment in time).


    The video is then converted back to YUV 4:2:2 because that is what distributors want. At this stage I alter the "lift" and "gain" using brightness and lutyuv. This is all checked on a special waveform monitor which reads out the minimum and maximum levels for each R, G and B channel. It's been difficult to achieve but has worked out nicely. If you have a better way of doing it I'd love to hear about it.

    I found it difficult to bring the R, G and B channels into spec in the YUV domain. You wind up sucking too much saturation out of the picture.

    There are other levels you need to worry about too, not just min/max values: "illegal" broadcast colors. Both YUV values that don't "map" to a "legal" RGB value, and things like illegal saturation values.

    The "better" way is using a NLE for this , that's what everyone uses . There are filters that do that automatically (broadcast legal filters) . But that is "ugly brute force", no finesse . That's for quick turn around, don't care too much, just get it in spec. But it's a good way to double check things in spec before export

    But before that, you usually need to make color corrections and adjustments too - fix highlights, fix shadows, correct skin tones, etc., i.e. subjective manipulations. Nobody ever has perfect lighting and conditions with a perfect camera response - not even in a studio-lit, perfectly controlled environment with expensive studio equipment and cameras. Color timing, scene matching. You need to keyframe adjustments to do it properly - changes over time. You need to mask / roto / power-window to control certain areas. You need to be able to "see" things and get feedback. Those types of manipulations are nearly impossible in ffmpeg - you need some sort of GUI. ffmpeg is very useful, but the wrong tool for this (or at least it should be used alongside other proper tools). So I'd start looking at using the right tools for the job.

    For example, >90% of consumer-grade cameras have usable data in the Y 235-255 range: usable highlights that are recoverable. You don't want to just clip those - it will look ugly and get rejected in the subjective category because of abrupt changes instead of a gradual falloff.
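
    To make "gradual falloff" concrete, a sketch contrasting a hard clip with a simple soft-knee rolloff (an illustrative curve, not what any particular broadcast-legal filter does):

    Code:
    import numpy as np

    # Hard clip: everything above 235 flattens to 235, so 235-255 detail is lost.
    def hard_clip(y):
        return np.minimum(y, 235.0)

    # Soft knee: values below the knee pass through; above it, highlights are
    # compressed smoothly toward 235 so gradation in the 235-255 range survives.
    def soft_rolloff(y, knee=200.0, ymax=235.0):
        over = np.maximum(y - knee, 0.0)
        return np.minimum(y, knee) + (ymax - knee) * (1.0 - np.exp(-over / (ymax - knee)))

    y = np.array([180.0, 220.0, 255.0])
    print(hard_clip(y))     # [180. 220. 235.]
    print(soft_rolloff(y))  # [180. ~215.2 ~227.7] - highlights keep their separation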
    Last edited by poisondeathray; 8th Nov 2018 at 09:37.
  11. Originally Posted by chris319 View Post
    Clamp between 1 and 254 (as SAV and EAV use 0 and 255)
    How is this done, Pandy? What is the ffmpeg command for it?
    https://ffmpeg.org/ffmpeg-filters.html#limiter

    Originally Posted by chris319 View Post
    I already have this code:
    Code:
    lutrgb=r='clip(val,17,231)',lutrgb=g='clip(val,17,231)',lutrgb=b='clip(val,17,231)'
    I also set gamma in this process but omitted it from my post.

    not sure if any broadcaster uses internal infrastructure to transmit RGB
    They don't. That's why I convert back to YUV.
    Well, nowadays they very frequently use 4:4:4, so the link is capable of passing RGB data anyway, and people care less these days - you see this often when BFF material is encoded as TFF.

    Originally Posted by chris319 View Post
    I assume 0 and 255 are not your problem, as they are present only at the display.
    Broadcasters will REJECT program material that is not up to their specs, i.e. they will not broadcast it. So it is my problem.
    Saw your bug report about this, and it seems you already have advice on how to solve the issue. Seems you've been chasing this issue for a long time...

    The rule is simple: broadcast equipment (more precisely, the integrated circuits of the serializers and deserializers) by default clamps all video data between 1 and 254 (i.e. 4 and 1016 in 10 bit), as the remaining codes are used for synchronization - this is how the hardware is designed internally. Of course I fully understand the broadcaster requirements and the good practice of not relying on the HW design - that is a very wise approach.

    You can always use 8-bit data (i.e. between 0 and 255) and apply a lower gain (i.e. contrast 252.5/255), then add an offset of 1 (brightness 1.5/255); this will prevent 0 and 255 from occurring. But I would suggest using limiter to clamp the video between the EBU/broadcaster recommended values - IMHO it is better to hard-clamp occasional under- and overshoots than to rescale the whole quantization space, as that leads to unavoidable quality loss: requantizing an already quantized signal with limited accuracy is lossy, 8 bits is not enough without dithering, and the dithering will be removed by the encoder anyway.
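
    A small sketch of the two approaches on an 8-bit plane (hypothetical helper names; the clamp is effectively what -vf limiter, or the serializer hardware, does):

    Code:
    import numpy as np

    def rescale_to_avoid_0_255(plane):
        # gain 252.5/255 plus a small offset: every code value gets requantized
        return np.round(plane.astype(np.float64) * (252.5 / 255.0) + 1.25).astype(np.uint8)

    def clamp_1_254(plane):
        # only the occasional 0s and 255s are touched; everything else is intact
        return np.clip(plane, 1, 254)

    plane = np.array([0, 1, 128, 254, 255], dtype=np.uint8)
    print(rescale_to_avoid_0_255(plane))  # [  1   2 128 253 254] - all codes moved
    print(clamp_1_254(plane))             # [  1   1 128 254 254] - only the extremes moved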
  12. If it's progressive "23.976p" content
    As I explained:
    The camcorder video is 4:2:0 YUV at 1080p.
    For example, >90% of consumer grade cameras have usable data in the Y 235-255 range. Usable highlights that are recoverable. You don't want to just clip those - it will look ugly and get rejected in the subjective category because of abrupt changes instead of gradual fall off
    You can do this by simply reducing the video gain, but then you're changing the level of the whole picture, and this doesn't work in a dark scene unless you want to hand-grade every individual shot, which I don't.

    In SDR video anything over 90% reflectance is considered out of bounds (90% reflectance = 100 IRE = digital 235). You have to exert control either by using the camera iris or an ND filter or by choice of subject matter.
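
    In code, that bookkeeping is just the usual limited-range 8-bit mapping (a sketch assuming 0 IRE -> code 16 and 100 IRE -> code 235):

    Code:
    def ire_to_code(ire):
        # limited-range 8-bit: 0 IRE maps to code 16, 100 IRE to code 235
        return round(16 + ire / 100.0 * 219)

    print(ire_to_code(100))  # 235 - the 90% reflectance point mentioned above
    print(ire_to_code(43))   # 110 - roughly where an 18% gray card should sit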

    This all changes with HDR and depends on whether you are using HLG or PQ. With HLG 100% reflectance is 75 IRE (out of 100). 90% reflectance is 73 IRE. I don't even want to think about PQ.

    There are other levels you need to worry about too , not just min/max values - "illegal" broadcast colors . Both YUV values that don't "map" to a "legal" RGB value, and things like illegal saturation values
    EBU r 103 only concerns itself with Y, R, G and B but yes, I keep an eye on those, too. r 103 also suggests low-pass filtering.
  13. Originally Posted by chris319 View Post
    If it's progressive "23.976p" content
    As I explained:
    The camcorder video is 4:2:0 YUV at 1080p.

    "1080p" isn't really an accurate description or explanation - you need to attach the frame rate . "1080p" can commonly refer to 1080p23.976, 1080p24.0, 1080p29.97, 1080p30.0, 1080p59.94, 1080p60.0 . eg. Many stations broadcast movies 1080p23.976; but not as pN native, it's telecined (23.976p in 29.97i) . It's IVTCed by the display hardware, and you still get full progressive frames returned as 1080p23.976



    You can do this by simply reducing the video gain but then you're changing all of the picture level and this doesn't work in a dark scene unless you want to hand-grade every individual shot, which I don't.
    Well, that's your decision, but those types of basic adjustments are what people do every day. Professionals for sure. But even non-professionals like YouTubers with their own channels do this. It's a minimum expectation.

    The video QC'er who eventually gets it does more than just check tech specs like min/max, audio dB, etc.; there are subjective assessment checklists. Some places are very strict, others not so much...

    You'll get a QC sheet back saying why it was rejected

    It will be something like
    1.1 PQ
    a) excessive highlight compression
    b) highlight and shadow hard clipping
    c) variable skin tones
    d) fluctuating levels
    Some video clips in the camera, and no amount of grading after the fact can fix that. You can't unclip a signal. That's where a camera's iris and ND filter come in. As I say, you have to exert some control over what you shoot. It usually clips at 255, an inconvenient value, rather than at 235 or 246 (r 103). Some high-end cameras let you set clip levels, but not a consumer camcorder. YouTubers may grade their videos, but they don't have to comply with r 103 or ATSC specs, so I doubt they go to such elaborate lengths.

    We haven't broached the subject of gray scale/transfer curve, a whole separate discussion. That is where ffmpeg and a scope come in handy.

    I don't use Mac but I understand FCP 7 used to have a dynamite legalizer. No more with FCP X. The software legalizers cost $$$.
    Yes, definitely do as much as you can with lighting, camera settings and equipment. That's the most important by far. There is only so much you can do in post. You hear "fix it in post" all the time, but it's never as good as getting it right, or close to right, in the first place.

    I'm just suggesting not to ignore the other things on the list - technical submission specs are just one category for QC

    Sorry, but the brutally honest truth: not wanting to make adjustments by scene - especially if this is your own work - is just plain lazy. It doesn't come across as professional. Don't you want it to look as good as it can be? If some high-school YouTuber can spend a few minutes editing, color correcting and matching shots, don't you think something destined for broadcast should get similar attention? You could argue it should get more care and attention - because in addition to color correcting and grading each shot, you also have to comply with specs.

    ffmpeg is very useful, but it's just one tool in the tool belt. I'm suggesting you complement ffmpeg with proper/ideal tools for the tasks at hand. It's not very good at editing or at making levels and color corrections, because you can't keyframe or change things on the fly. Free tools have come far - e.g. free versions of DaVinci Resolve, Fusion, HitFilm, Blender, Natron. There are many free NLEs too for editing and basic color correction, and many of these tools actually use ffmpeg in the backend. Resolve deserves extra mention because it is an exceptionally good tool right now for editing and color work. It used to be (and still is) the gold standard for cinema color work. The free version doesn't have all the bells and whistles, but it's still very powerful. It used to cost about $5-10K, and they now offer it for free, including Fusion. As a free alternative to a "traditional" broadcast NLE, you can't beat it.
    don't you think something destined for broadcast should get similar attention?
    If it were destined for broadcast or even YouTube, yes. The stuff I'm doing is so simple it doesn't warrant hand grading. I have to run it through ffmpeg anyway to adjust the transfer curve, so why not see if I can make it broadcast spec while I'm at it? It sure as heck isn't going to be broadcast so it's squarely in the "experimental" category (and has turned out to be a lot of work).
  17. Originally Posted by chris319 View Post
    don't you think something destined for broadcast should get similar attention?
    If it were destined for broadcast or even YouTube, yes. The stuff I'm doing is so simple it doesn't warrant hand grading. I have to run it through ffmpeg anyway to adjust the transfer curve, so why not see if I can make it broadcast spec while I'm at it? It sure as heck isn't going to be broadcast so it's squarely in the "experimental" category (and has turned out to be a lot of work).
    Understood, you're doing some tests/experiments right now. But the assumption is that eventually you'll be submitting a larger project (otherwise why would you be doing all this).

    Earlier you asked if there was a "better way". You were converting to RGB because some manipulations were easier, making adjustments to lift, gain, etc., but having difficulty with saturation back in YUV.

    I'm suggesting the easier/better way is to use a broadcast NLE or similar tool and use all the scopes, including the vectorscope and histogram. That's what is used in broadcast and cinema; that's what they are designed to do. Faster, better, more intuitive interface. I recommend exploring some of those tools and adding them to your workflow. There are many tutorial series on YT and similar sites to help people familiarize themselves with the GUI and the various workflow variations. Everyone does things slightly differently or has preferences, and there are various pros/cons and "gotchas" with each type of workflow. But nobody in their right mind would do color work with ffmpeg only; it's far too limited in this regard. Not in 2018, with the pile of free tools like Resolve readily available - there really is no excuse. E.g. you can't even do a simple curves adjustment without a lot of work going back and forth and using other programs anyway. You will end up taking 100x more time trying to get results that are clearly worse. ffmpeg is not mutually exclusive with other tools - you can still use it for various parts of the workflow. It's just that certain tools are much better at certain tasks.



    I have to run it through ffmpeg anyway to adjust the transfer curve
    You've mentioned that twice now: "transfer curve". What did you mean by that exactly? Or did you mean it in a more generic sense?

    Because "Transfer" has a specific, reserved meaning in terms of the "transfer characteristics" or "transfer function" used in production work, especially in post production work for things like linearization
    If I were in the business of delivering shows or commercials to broadcast, or features to Netflix, I would get a Mac and trick it out with FCP and Scopebox and Eyeheight, which would cost well over $2,000 U.S., and I would get a much better monitor. But for a hobbyist who likes to shoot train videos, and who worked for decades with big broadcast TV cameras costing well over $50,000 U.S. back when all the adjustments were in hardware, it isn't worth the investment. As you observed, I'm just a casual hobbyist who knows how to program. Using the packaged solutions you suggest gives me nothing to do and no pride of workmanship, besides costing a pretty penny. That said, I haven't ruled out getting Premiere Elements for $100 U.S.

    You helped me with my scope program — I found the weak link in that chain — and it works wonderfully now, reporting out-of-spec conditions.

    "transfer curve" . What did you mean by that exactly ? or did you mean that in a more generic sense ?
    I'm surprised you have to ask. It is the thing controlled by the "gamma" parameter - a more precise way of saying "gamma". To adjust it properly, the camera needs to be pointed at an 18% card with a 90% patch while you watch the output on a properly calibrated scope. It also helps to have a "cavity black" reference.

    https://www.bhphotovideo.com/c/product/813250-REG/Kodak_1277144_Gray_Card_Plus_9x12.html?sts=pi
  19. Originally Posted by chris319 View Post
    Using the packaged solutions you suggest gives me nothing to do and no pride of workmanship, besides costing a pretty penny. That said, I haven't ruled out getting Premiere Elements for $100 U.S.
    It doesn't cost anything for Resolve. And I suggested a bunch of other free packages.

    Don't bother with Elements. It's really a stripped-down version of Premiere. It doesn't have multicam or, more importantly, scopes (no waveform, no histogram, etc.), for example. $100 for a piece of turd.

    Resolve is much better, free, and has decent editing capabilities.

    I really don't know how companies can justify selling those packages anymore in 2018, unless they offer something significantly better or more features

    Premiere Pro is the bigger brother of Elements and a more complete and better video editor (but not free), but Resolve is better for color work for most people (some people don't like it for whatever reason).



    "transfer curve" . What did you mean by that exactly ? or did you mean that in a more generic sense ?
    I'm surprised you have to ask. It is the thing controlled by the "gamma" parameter, a more precise way of saying "gamma".

    Yes, I have to ask, because people often use these terms loosely but really mean something else. If you had used the term "gamma", that would indicate something everyone can understand quite clearly.

    As soon as you use a term like "transfer curve", red flags go off, because it implies you're applying a specific transfer function. That does affect gamma, but not in the way you're thinking of making manipulations. There are equations that "map" things back and forth, just like the colormatrix equations above - for example 709 <=> linear, SMPTE ST 2084 <=> 709, various log <=> linear, etc.
    If you like to push things your way for nothing, even as a hobby, why don't you use modules that are already written for video processing? Vapoursynth together with Vapoursynth Editor is just what you need. FFmpeg is not ideal for this at all without some sort of GUI. You are trying to sort out basics, and it is not easy - heck, the pros cannot even agree on a correct YUV to RGB formula, because there is none that is 100%; it is a representation. But there are filters that have already sorted this out as well as can be done. Or add OpenCV, for example, if you want visual feedback while building your own GUI: you just read the Vapoursynth memoryview, change it into a numpy array of planes, and OpenCV can put it on screen. It goes on screen in RGB anyway, so that is your visual, but the actual filter works in YUV space, so you stay in YUV space. That visual RGB gets clipped (0-255), but it is just the visual.

    You can easily read YCbCr values for pixels from that array, or RGB if you need them, etc. It is very fast in code, even in Python, when working with those arrays of planes. Those color conversions have been designed in detail in Vapoursynth - choosing the matrix coefficients, the range, and so on.

    An example is better than a hundred words. To change YUV to RGB, you use a single line:
    RGB = vs.core.resize.Point(YUV_video, format=vs.RGB24, matrix_in_s="709")
    and it is going to work with 4:2:0 8-bit, 4:2:0 10-bit, etc.

    As for that illegal RGB clipping with our camcorders: Magix Vegas users recommend, after loading camcorder clips, converting them to 16-235 (they call it the Computer RGB to Studio RGB filter) because of those illegal values, then working on the project with filters and color corrections, and bringing it back at the end.
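
    A rough sketch of that preview idea (hypothetical script; assumes a recent VapourSynth (API4) plus the L-SMASH Works source plugin, numpy and opencv-python):

    Code:
    import vapoursynth as vs
    import numpy as np
    import cv2

    core = vs.core
    yuv = core.lsmas.LWLibavSource("infile.mp4")   # the clip stays YUV end to end
    rgb = core.resize.Point(yuv, format=vs.RGB24, matrix_in_s="709")

    frame = rgb.get_frame(0)
    planes = [np.asarray(frame[p]) for p in range(3)]  # R, G, B planes as arrays
    bgr = np.stack(planes[::-1], axis=-1)              # OpenCV expects BGR order

    cv2.imshow("preview - RGB is only the visual; filtering stays in YUV", bgr)
    cv2.waitKey(0)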
    Last edited by _Al_; 8th Nov 2018 at 22:17.
  21. Thanks for the advice on Premiere Elements. I use a program called Shotcut. It is very much a work in progress and a very ambitious endeavor. It cost $0 U.S. I have Premiere Pro in the back of my mind but I don't do $20 worth of editing every month.

    Here is the camcorder I use now:

    https://www.bhphotovideo.com/c/product/1211905-REG/sony_fdrax53_b_fdr_ax53_4k_ultra_hd.html

    Here is a handy spreadsheet I made. It works well for SDR but I don't have HLG down yet. As you can see, an 18% reflectance gray card should be at 43 IRE in SDR or "0 stops".

    https://docs.google.com/spreadsheets/d/1wGZd5_OEPSE08CC_RpP5X_hTxOgapWw7pihrKeFOWls/edit#gid=0

    It was Art Adams who turned me on to the use of an 18% gray card. I used to use something like this:

    https://www.markertek.com/product/ac-gse-9/accu-chart-9-step-eia-type-grey-scale-chart

    Here is the scope currently (needs some fine tuning).

    [Attachment 47125: screenshot of the scope]
  22. I'll look into Vapoursynth.

    The problem I'm having with ffmpeg is that it's not very precise. For example, given

    lutrgb=r='clip(val,17,231)'

    ffmpeg clips the red channel at 223 or 241 instead of 231, depending on whether you are working in full or TV range, so you spend a lot of time fussing with it until you get the value you're after. In addition, there is interaction between the controls. For example, raising the "brightness" (lift/pedestal) necessitates a change in level. Ironically, this is the correct way of doing it.

    In addition, 235 for whites is the BT.709 spec but r 103 makes allowance for overshoots and its "preferred" maximum is 246, so there is some wiggle room.

    This is all in the RGB domain. It gets even more complicated going back and forth between YUV and RGB.

    It would be nice if all this could be done interactively while watching a scope. Anyone want to collaborate?
  23. Originally Posted by chris319 View Post

    The problem I'm having with ffmpeg is that it's not very precise. For example, given

    lutrgb=r='clip(val,17,231)'

    ffmpeg clips the red channel at 223 or 241 instead of 231 depending on whether you are working in full or tv range, so you spend a lot of time fussing with it until you get the value you're after.
    Are you sure? I've never used ffmpeg's lutrgb, because clipping is typically never done in RGB, but there might be a bug that needs reporting (I doubt it).

    If you use lutrgb to clip, then you should be using RGB as the immediate input, and you should be reading the direct RGB output if you are evaluating the filter itself. It should clip R to [17,231] there accurately. It's on you to do the prior RGB conversion correctly; the lut filter is just doing the math. Just as it's on you to do whatever conversions come after properly, like the YUV conversion.

    Technically, there is no "full" vs. "limited" RGB. RGB is always 0,0,0 - 255,255,255. "Limited" vs. "full" refers to what the levels are in the YUV part.
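
    For reference, the standard 8-bit luma scaling between the two YUV ranges (a sketch; chroma uses 224 in place of 219) - the kind of conversion that will shift a clip point if it happens after the lut:

    Code:
    def full_to_limited(v):
        return v * 219.0 / 255.0 + 16.0    # 0..255 -> 16..235

    def limited_to_full(v):
        return (v - 16.0) * 255.0 / 219.0  # 16..235 -> 0..255

    print(full_to_limited(231))   # ~214.4
    print(limited_to_full(231))   # ~250.3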

    Another option is -vf limiter. And both vapoursynth and avisynth have clipping methods that for sure work and are accurate. Very useful tools for some things - but not what I would call GUI-friendly or interactive. Sure, you can get previews and scopes, but it's much slower because you're typing and entering values. There are no sliders, dials, or real GUI features. AvsPmod does have programmable sliders, but the GUI is nothing compared to dedicated software like Resolve or Premiere.


    In addition, there is interaction between the controls. For example, raising the "brightness" (lift/pedestal) necessitates a change in level. Ironically, this is the correct way of doing it.
    This is expected behaviour. Once you play with an NLE that has scopes, all this fussing back and forth gets 100x better. You will wonder why you went down this path in the first place. The interactive feedback with preview and scopes is crucial for making color/levels manipulations. The software is not supposed to make the user struggle and pull their hair out.

    Also, the order in which you do things matters. If you apply lift then clip, that's different from clipping then applying lift.
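
    A tiny numeric illustration (made-up values) of that order dependence:

    Code:
    import numpy as np

    plane = np.array([10.0, 120.0, 230.0, 250.0])
    lift = 10.0

    print(np.clip(plane + lift, 17, 231))  # lift then clip: [ 20. 130. 231. 231.]
    print(np.clip(plane, 17, 231) + lift)  # clip then lift: [ 27. 130. 240. 241.]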



    This is all in the RGB domain. It gets even more complicated going back and forth between YUV, RGB and back.
    Yes, especially if there are additional problems in the conversions, beyond the expected errors and such.

    Clipping in general should be applied absolutely last. It's typically only done in YUV because that's usually the end delivery format. You do all the manipulations you want manually using scopes, including grading, and then clip only to kill the stray out-of-range pixels you might have missed. Clipping is damaging and usually gives worse results than correcting gracefully and manually. If you do clip, you want to massage the values so they are very, very close before you clip, so you do the least damage. For that reason, the "auto" broadcast filters are rarely used for high-quality productions. They get the job done quickly and accurately, but often with subjectively less than ideal results.


    It would be nice if all this could be done interactively while watching a scope. Anyone want to collaborate?
    Not with ffmpeg. Wrong tool. Some options are listed above, free ones too.

    It looks like Shotcut scopes are still on the to-do list? But it looks promising - active development.
    I'm starting to prototype an interactive proc amp/scope using my existing code.

    The big bugaboo will be not RGB, not YUV, not clipping. Audio. The nice thing about the way I've been doing it is I haven't really had to worry about audio; just copy it over.

    If Vapoursynth can read, modify and write pixels, copy the audio over from the source file, and offer a Python UI for adjusting parameters, that would be nice. I'm at the dead-nuts bottom of the learning curve with Vapoursynth and Python.
  25. Originally Posted by chris319 View Post
    I'm starting to prototype and interactive proc amp/scope using my existing code.
    Does the proc amp apply the changes too, or is it just a preview? Sounds like an interesting project; maybe you can write code for Shotcut to help them progress.



    The big bugaboo will be not RGB, not YUV, not clipping. Audio. The nice thing about the way I've been doing it is I haven't really had to worry about audio; just copy it over.

    If Vapoursynth can read, modify and write pixels, copy the audio over from the source file and have a python UI for adjusting parameters, this would be nice. I'm at the dead-nuts bottom of the learning curve with Vapoursynth and python.
    vapoursynth officially has no audio support. There is a plugin that offers very rudimentary support for passing samples.

    Avisynth audio support is much better - you can do edits, manipulations and filters, and there are multiple kinds of audio support, etc.

    But if you are just copying audio, you can use ffmpeg. You can vspipe the video stream and pass the audio stream directly. There is actually a vpy input patch for ffmpeg, but it's not compiled into or distributed with common ffmpeg builds.

    I'm a Python newbie, but I've learned enough basic Python to use vapoursynth because it's really that useful. It is one of the truly incredible pieces of free software, up there with avisynth, Blender, Natron, Resolve, and ffmpeg/libav.
    What I need to be able to do is read in a file, video and audio both; read, modify and write the video pixels; and write both video and audio back to the output file. I have C language and PureBasic code that uses ffmpeg to read in the video frames. Can avisynth do all those things with audio support? All I need to do is copy the audio from the source file to the output file.

    Or use ffmpeg to copy the audio from the original file and leave the video alone?
    Last edited by chris319; 9th Nov 2018 at 01:38.
    My 2 cents - I complain about ffmpeg in terms of color control: I struggle with unpredictable behaviour, by which I mean behaviour that is hard to explain even when I try to control the signal flow explicitly. I think the main issue with ffmpeg is its architecture and the fact that it was born as a tool for software people, written by software people. My advice: explicitly take control of the quantization range away from ffmpeg and remove at least this one unwanted variable from your unequal fight.

    Try adding '-color_range 2' in the fashion below - at least it should force ffmpeg to stay in full quantization range and not automatically follow the flag in your source. Switching between the two ranges will be on your head, but at least ffmpeg should behave more predictably. Even ffmpeg developers complain about the way ffmpeg is designed, particularly libswscale, but I don't think anyone will risk touching this part of the architecture... IMHO libswscale should be rewritten from zero with control over its complete behaviour exposed (like preventing automatically and silently invoked quantization-range or color-space changes). Perhaps I don't know how to do some things correctly - I've tried everything and still face similar issues (very similar to yours, only instead of 8 bits per component I have 4 bits, i.e. 12 bits per pixel, and as you can imagine, losing 1 bit out of 4 hurts more than losing 1 bit out of 8).

    Code:
    ffmpeg.exe -y -stats -color_range 2 -report -hide_banner -i "%1" -color_range 2
    Below is an excerpt from the ffmpeg help - some time ago the ffmpeg developers added new options to control these basic properties of the data. I see they are not very well known, thus my advice: control those things not only in libswscale but also explicitly in the commands you feed to ffmpeg.

    Code:
      -color_primaries   <int>        ED.V..... color primaries (from 1 to INT_MAX) (default unknown)
         bt709                        ED.V..... BT.709
         unknown                      ED.V..... Unspecified
         bt470m                       ED.V..... BT.470 M
         bt470bg                      ED.V..... BT.470 BG
         smpte170m                    ED.V..... SMPTE 170 M
         smpte240m                    ED.V..... SMPTE 240 M
         film                         ED.V..... Film
         bt2020                       ED.V..... BT.2020
         smpte428                     ED.V..... SMPTE 428-1
         smpte428_1                   ED.V..... SMPTE 428-1
         smpte431                     ED.V..... SMPTE 431-2
         smpte432                     ED.V..... SMPTE 422-1
         jedec-p22                    ED.V..... JEDEC P22
         unspecified                  ED.V..... Unspecified
      -color_trc         <int>        ED.V..... color transfer characteristics (from 1 to INT_MAX) (default unknown)
         bt709                        ED.V..... BT.709
         unknown                      ED.V..... Unspecified
         gamma22                      ED.V..... BT.470 M
         gamma28                      ED.V..... BT.470 BG
         smpte170m                    ED.V..... SMPTE 170 M
         smpte240m                    ED.V..... SMPTE 240 M
         linear                       ED.V..... Linear
         log100                       ED.V..... Log
         log316                       ED.V..... Log square root
         iec61966-2-4                 ED.V..... IEC 61966-2-4
         bt1361e                      ED.V..... BT.1361
         iec61966-2-1                 ED.V..... IEC 61966-2-1
         bt2020-10                    ED.V..... BT.2020 - 10 bit
         bt2020-12                    ED.V..... BT.2020 - 12 bit
         smpte2084                    ED.V..... SMPTE 2084
         smpte428                     ED.V..... SMPTE 428-1
         arib-std-b67                 ED.V..... ARIB STD-B67
         unspecified                  ED.V..... Unspecified
         log                          ED.V..... Log
         log_sqrt                     ED.V..... Log square root
         iec61966_2_4                 ED.V..... IEC 61966-2-4
         bt1361                       ED.V..... BT.1361
         iec61966_2_1                 ED.V..... IEC 61966-2-1
         bt2020_10bit                 ED.V..... BT.2020 - 10 bit
         bt2020_12bit                 ED.V..... BT.2020 - 12 bit
         smpte428_1                   ED.V..... SMPTE 428-1
      -colorspace        <int>        ED.V..... color space (from 0 to INT_MAX) (default unknown)
         rgb                          ED.V..... RGB
         bt709                        ED.V..... BT.709
         unknown                      ED.V..... Unspecified
         fcc                          ED.V..... FCC
         bt470bg                      ED.V..... BT.470 BG
         smpte170m                    ED.V..... SMPTE 170 M
         smpte240m                    ED.V..... SMPTE 240 M
         ycgco                        ED.V..... YCGCO
         bt2020nc                     ED.V..... BT.2020 NCL
         bt2020c                      ED.V..... BT.2020 CL
         smpte2085                    ED.V..... SMPTE 2085
         unspecified                  ED.V..... Unspecified
         ycocg                        ED.V..... YCGCO
         bt2020_ncl                   ED.V..... BT.2020 NCL
         bt2020_cl                    ED.V..... BT.2020 CL
      -color_range       <int>        ED.V..... color range (from 0 to INT_MAX) (default unknown)
         unknown                      ED.V..... Unspecified
         tv                           ED.V..... MPEG (219*2^(n-8))
         pc                           ED.V..... JPEG (2^n-1)
         unspecified                  ED.V..... Unspecified
         mpeg                         ED.V..... MPEG (219*2^(n-8))
         jpeg                         ED.V..... JPEG (2^n-1)
  28. FFmpeg has scopes too, and you could basically write a GUI for it if you cared.
  29. Originally Posted by chris319 View Post
    What I need to be able to do is read in a file, video and audio both, read, modify and write the video pixels, and write both video and audio back to the output file. I have C language and PureBasic code that uses ffmpeg to read in the video frames. Can avisynth do all those things with audio support. All I need to do is copy the audio from source file to output file.

    Or use ffmpeg to copy the audio from the original file and leave the video alone?

    If you were going to use avisynth or vapoursynth, you would use their input filters to read the video and audio. I guess you could use your C/PureBasic ffmpeg code, but you would have to adapt it to their API. Some avisynth plugins use C. But those types of source filters already exist, so there's no reason to re-invent the wheel.

    avisynth/vapoursynth do not write out formats directly (with some exceptions for image formats); they typically pipe to something like ffmpeg / x264 / x265 or whatever encoder.



    ffmpeg can take multiple inputs (you can feed it a vapoursynth script or avs script, or pipe in from other sources). So you can feed it your video stream and audio stream, or any number of additional streams, using -map, and copy whichever audio stream(s) you want. That's another reason to use audio copy - the original audio format is retained instead of being decoded and recompressed if it was already compressed.

    If you're using standardized formats like camcorder, XDCAM, ProRes, etc., it will not be a problem. Some sources can be a problem when you use different source filters for audio and video, or when you have certain types of video (open GOP) cut a certain way, or VFR video - because what ffmpeg "sees" as the number of frames might differ from what another source filter sees. There are different ways to handle things like leading B-frames, and they are not always consistent. There are various workarounds and established ways to do things.

    But if you were making cuts/edits, then presumably you need the audio too. In that case it's better to include the audio while making the edits.
  30. Easier than I thought to copy audio from one file to another:

    Code:
    ffmpeg  -y  -i C0058.mp4 -i C0060.mp4 -map 0:v -c copy  -map 1:a  -c copy  out.mp4
    So one file would have the audio and the other would have the processed video.


