I need help writing a proper "lift" control, not "offset", like in DaVinci Resolve.
The whites would be clamped while the blacks would ride up and down. I'm having trouble wrapping my head around this.
-
-
Did you mean "lift" similar to how Resolve applies it?
"Clamped" implies "squishing". Resolve's behaviour is more like "holding" the upper values where they are.
Something similar to this behaviour?
[Attachment 47142]
This demo shows "output_low" increasing in a typical "levels" filter (input_low would go the other way, down, with positive increments). It's the same as the Photoshop or GIMP levels filter. Almost all programs have one: VirtualDub, Avisynth, Vapoursynth. Curiously enough, ffmpeg doesn't. I don't know why, because it's one of the most basic filters, but you can look at the code from one of the other programs.
There are other ways to achieve similar results (curves, luma masks), but "levels" is the most common way to do this sort of manipulation -
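As a rough illustration of that mapping (my own Python sketch, not code from any of the tools mentioned): a levels-style "output_low" raises the black point while the white point stays put.

```python
# Levels-style remap on 0..255 floats: input 0 rises to out_low,
# input 255 stays at out_high (255 by default), everything between scales.
def levels_output_low(v, out_low, out_high=255.0):
    return out_low + v * (out_high - out_low) / 255.0
```

E.g. with out_low=50, black (0) becomes 50 while white (255) stays at 255 - the "holding" behaviour described above.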
Last edited by jagabo; 10th Nov 2018 at 16:55.
-
Keep in mind that if you brighten Y, colors will appear less saturated in the dark areas, so you need to adjust the chroma too.
-
-
This gets me closer to the effect I had in mind. All floats:
Code:
lift = 50
factor = 1 - (lift / 255)
rf = (rf + lift) * factor
gf = (gf + lift) * factor
bf = (bf + lift) * factor
-
An AviSynth variation of my earlier unit function:
Code:
function Lift(clip v, int lift) {
    v.ColorYUV(gain_y=-lift, off_y=lift)
}
Code:
function LiftRGB(clip v, int lift) {
    factor = (255.0 - lift) / 255.0
    RGBAdjust(v, r=factor, g=factor, b=factor, rb=lift, gb=lift, bb=lift)
}
Last edited by jagabo; 13th Nov 2018 at 10:21. Reason: simplified Lift()
-
That does not sound good: YUV > RGB > YUV. Our camcorders come up with illegal values, you cannot avoid it, even using ND filters; the dynamics are not great.
RGB is necessary for a frame to be previewed on screen, but you should stay in YUV while filtering. I'd rather make filtering and previewing two different things. Maybe that is what you do, making RGB only for the preview; I'm not sure, and that is what I am missing in this thread.
Last edited by _Al_; 10th Nov 2018 at 11:11.
-
I'm working directly with RGB samples outside of Avisynth or Vapoursynth.
Your previous code made the whole picture darker and didn't clamp the whites.
This works:
Code:
liftFactor = (255.0 - lift) / 255.0
rf = (rf + lift) * liftFactor
gf = (gf + lift) * liftFactor
bf = (bf + lift) * liftFactor
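A quick numeric check of the two possible orderings (my own Python sketch, not from this thread): adding the lift before multiplying lets white sag slightly below 255, whereas multiplying first and then adding the lift (as a levels filter's output_low does) holds white exactly.

```python
# Compare the two lift orderings on a pure-white (255.0) sample, lift = 50.
lift = 50.0

add_then_scale = (255.0 + lift) * (255.0 - lift) / 255.0  # formula quoted above
scale_then_add = 255.0 * (255.0 - lift) / 255.0 + lift    # levels-style

# add_then_scale is (255^2 - lift^2)/255, so white sags by lift^2/255 (~9.8 here)
# scale_then_add is exactly 255.0: white is held
```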
Last edited by chris319; 10th Nov 2018 at 15:02.
-
-
Nor does it help that you're probably delivering 8-bit YUV 4:2:2. Something 100.0% broadcast safe / EBU R103 compliant in RGB can produce areas out of gamut through that conversion.
Partially from the 8-bit conversion, rounding errors and lossy compression, but especially because of the subsampling step - particularly around lines, edges and graphics. It's easy to demonstrate this for yourself; certain kernels create more "illegal" broadcast values than others. The point is, if you "fix" them at the RGB stage (I'm guessing that's what you were experimenting with earlier with the lutrgb clipping), you will miss the out-of-gamut errors you just introduced when converting to 8-bit YUV 4:2:2.
EBU R103 is just a recommendation. Sometimes broadcasters will say "based on EBU R103" but clarify specific parameters to meet their requirements, maybe more restrictive, maybe more relaxed. Where a broadcaster allows the 1% active-picture-area out-of-gamut allowance, that is a tremendous amount of wiggle room.
I agree for the most part, and staying in the original YUV format is "best practice", avoiding additional losses - but many people find RGB color manipulations more intuitive to use, myself included. You could argue it's not a significant loss if done properly, and you can increase precision by using higher bit depths. And by the time a final format is encoded there are usually going to be lossy compression and rounding differences anyway (although maybe not if it's an archival project).
On the topic of (in)accurate YUV => RGB conversions, YUV values that don't "map" to RGB, or discarded negative RGB values: you could also argue that many of the camera's recorded YUV values were invalid in the first place. They are the result of the camera raw => debayer to RGB => YUV conversion and the subsampling to 8-bit 4:2:0 in the recorded format. You can demonstrate this on cameras that can simultaneously record raw, or that can bypass the lower-quality onboard compression and subsampling with an external recorder, and compare against the onboard recording.
If someone is really worried about RGB conversion, Vapoursynth is one of the very few tools that can facilitate a truly lossless YUV <=> RGB round trip in 32-bit float - not just "in theory", but in actual production workflows. You can export physical files (EXR) for import/export in other float-capable programs such as AE, Nuke or Natron for other manipulations, then convert back. The EXR float format retains everything, including the negative values; it's the most commonly used intermediate for higher-end visual effects and CG post-production. -
if you "fix" them at the RGB stage (I'm guessing that's what you were experimenting with earlier with the lutrgb clipping), you will miss the out of gamut errors you just introduced when converting to 8bit YUV 4:2:2 .
Meanwhile, progress is being made on the proc amp/scope project. The lift and offset/pedestal controls are working, and I spent some time today on the user interface. The hang-up now is talking to ffmpeg so this video can be exported to a file. Not sure if that can ever be made to work. -
I am able to read frames from a video into my program and that works well, but am having a tough time writing video back out. Below is the ffmpeg code I am using. Anyone see any problems with it?
Code:
ffmpeg -i filename$ -f image2pipe -s 1280x720 -vcodec rawvideo -pix_fmt rgb24 -

ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4
The idea for this comes from here:
https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/
My video processor works well but for the inability to write video back out. -
In Windows it would look something like this; you need to specify the output pipe and its format:
Code:
ffmpeg -i filename$ -f image2pipe -s 1280x720 -vcodec rawvideo -pix_fmt rgb24 -f rawvideo - | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -q:v 5 -vcodec mpeg4 -an output.mp4
-
How can I have the input file name as an output argument? When writing the file, the input is a buffer of pixels, not another file; I have already read in the file.
Code:ffmpeg -i filename$
Code:FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
Last edited by chris319; 12th Nov 2018 at 11:57.
-
Windows pipes don't convey filename information, so typically that information would be passed through DOS scripting at the stage where ffmpeg reads the physical file.
The ~n modifier in the output filename (%%~nF for loop variable F) would reuse the input filename in an ffmpeg FOR-loop batch script.
I don't know how to do it with your program, but possibly you could wrap it all in a DOS FOR loop: read the physical file => pipe to your program => final pipe to ffmpeg to write the physical file -
you could wrap it all in a dos For loop by: reading the physical file => pipe to your program => final piping to ffmpeg to write physical file
I could load in the entire file at once but that takes up a lot of memory.
If you're looking for something to do, and you want to download the free demo version of PureBasic, I could send you the source code.
I've got SDL2 reading the file and drawing a little preview screen in C, but no widgets for making adjustments and no scope yet.
I also tried PureBasic under Linux but something went wrong. I'll have to try it again and report back.
Or I could try Avisynth or Vapoursynth. Keep in mind that it would need control widgets, written in Python? Do I want Avisynth or Vapoursynth, or should I continue trying to get PureBasic to work? I fear that Python will be slow at processing an entire bitmap pixel by pixel. PureBasic is compiled, not interpreted, and is pretty fast considering all the work it has to do; that helps when you're trying to make adjustments interactively. -
I'm not a programmer - so I probably can't help you with PureBasic or anything like that .
At the moment, Windows/PureBasic creates a file, and the file seems about the right size; there is probably video data in it, but it cannot be read by a player or by MediaInfo. -
I use Python3 with Vapoursynth on an older PC with 4 GB RAM and an older i5 650 @ 3.20 GHz, which is quite slow, and:
for example, loading 4k 10-bit video onto the screen from Vapoursynth memoryviews of the plane arrays, playing it frame by frame, I get 8 fps,
which does not differ much from playing the video in mpv, for example; the only difference is that mpv skips frames and keeps time.
Or, for example, a full-HD 50 fps M2TS file loaded via ffms2 plays at 40 fps. One catch is indexing in Vapoursynth: accessing frames can take a while, especially if you randomly jump between frames far apart; it might take a while to respond.
So as you can see, not bad at all. I was quite surprised how fast things can go. The way it works, Vapoursynth lets you read a memoryview of a frame, you put it into arrays, and those arrays seem to be lightning fast for processing.
The GUI is your choice. I use Qt5, or rather PyQt5, made for Python3. You can use OpenCV, but I'm not sure how nice your GUI would look. Also, with a GUI that has lots of controls, you need to thread the heck out of some parts of the code so things process independently - for example when processing frame by frame while playing - or the GUI becomes unresponsive. PyQt5, or Qt5 in general, makes it easy to communicate across threads; basically you can instantly make objects in one thread appear in another thread. -
My first decision is Vapoursynth or Avisynth. What would be the pros and cons?
Does Avisynth support a gui through Python or some other? I have not used either program.
Real-time playback is not necessary. I would probably have the user freeze the video/pause playback while making adjustments.
Will Vapoursynth do filters my way, i.e. the "lift" control jagabo came up with? Other than that I need to control gain, gamma, pedestal and knee (clip). Gain simply multiplies all RGB values by n - straightforward. Pedestal adds n to all RGB values, lift was explained a few posts back, gamma is gamma, and knee acts as a clip, hard or soft - I haven't worked this out yet.
All three RGB channels would be adjusted simultaneously. It would not be a full-blown color corrector like Resolve. -
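For what it's worth, the whole chain described above fits in a few lines. A hedged Python sketch (the stage ordering and the exact lift/knee formulas here are my own assumptions, not anything from this thread):

```python
# One channel of a proc-amp style transfer on 0..255 floats:
# gain multiplies, pedestal adds, lift raises blacks while holding white,
# gamma bends the curve, and knee here is just a hard clip.
def proc_amp(v, gain=1.0, pedestal=0.0, lift=0.0, gamma=1.0, knee=255.0):
    v = v * gain + pedestal                       # gain, then pedestal
    v = v * (255.0 - lift) / 255.0 + lift         # lift: 0 -> lift, 255 -> 255
    v = 255.0 * (max(v, 0.0) / 255.0) ** (1.0 / gamma)  # gamma correction
    return min(max(v, 0.0), knee)                 # hard knee = clip
```

All three RGB channels would simply be passed through the same function with the same settings.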
jagabo's ColorYUV filter is listed here so I'm sure you can do the same thing.
Avisynth has workarounds for more-than-8-bit video, but Vapoursynth is simply a modern tool for working with 8, 10 and 16 bit video. It has a simple filter called resize that can change not only resolution but also color space, matrix and range, and it seems to do it right. I went with the 64-bit version (and 64-bit Python as well), working only with 64-bit applications, not looking back at 32-bit apps.
Vapoursynth does not support audio like Avisynth does; there are 32-bit and 64-bit versions, and it has a filter that is supposed to pass audio through, but I have not gotten into it yet. There are more scripts for Avisynth because it has been around forever. There is a built-in plugin to import an avs script into a Vapoursynth script, as long as you use matching 64- or 32-bit versions; I never tried it, and for the work you do you probably would not need it. Many Vapoursynth filters (functions) were written for, or ported from, Avisynth. It takes some time to get oriented in all the parameters for those filters/functions; often there is only the Avisynth documentation, and you have to interpret it for Vapoursynth. The folks porting plugins don't care much about explanations, assuming everyone knows them from Avisynth, but as you can see, when starting with Vapoursynth that doesn't help. It is much easier if you are a programmer. New releases are published and discussed here on a regular basis. There is also a way to get most of the plugins (functions, modules) together with Python, all portable: it is called FATPACK. And there is a plugin manager to update plugins; otherwise you'd need to fish for them all over the web, mostly GitHub, hoping binaries are released and not just source code.
There is no built-in GUI for Avisynth or Vapoursynth. Avisynth has an API, and Vapoursynth has its own API too, but if you write in Python you don't need one: you are in the script yourself, because a Vapoursynth script is a Python script. You can write whatever you want in Python and work with Vapoursynth along the way in the same script. For a GUI module you need to choose one that already exists, whatever the language. Python has its own GUI modules; I myself was choosing between tkinter and Qt, and there are more available.
Python is an OOP language, so you can import and fetch a VideoNode (that is what Vapoursynth calls a loaded video, as an object type), pass a VideoNode into functions, and move it between your functions. I think PureBasic is not that OOP. As long as there is a filter for it, you can do the filtering, and you also have the raw arrays of planes available - just raw YUV or RGB data - so you can write any filter you want in pure Python, but then you need to register the frame properties correctly, as I understand it; I never actually went into that. You can also use that data directly just for previewing. Existing filters are mostly properties-aware, but not 100% guaranteed, because it depends on whether the source plugin that loads the video file into Vapoursynth can detect those properties in the first place; SAR info and field order are not 100% detectable by all source plugins. Frame properties can be fetched from the frame object, and video properties can be queried from the VideoNode for previewing. To get video properties into variables you can use other Python modules such as MediaInfoDLL3 (fetching MediaInfoDLL.py); you import it as "import MediaInfoDLL3", and you also need the MediaInfo DLL (from the MediaInfo web site, placed in the same folder as the py file). Another option is ffprobe.exe with a Python script. There are plenty of other modules, but they do not fetch as many properties as those two, or I do not know about them. Or you might not need them at all, because you are always handling video with the same properties coming from your camcorder, so you already know them.
Special mention belongs to the numpy module; it is needed for preview and for the opencv module, as it can merge those arrays of planes for a Qt GUI or for opencv. Not sure how tkinter pans out here, what it needs for preview, or whether it is capable of it. Opencv can preview as well; it is a really powerful module. People use it a lot for video capture and other video work, but Vapoursynth is more powerful at that one single thing: handling video. Not sure how easy it is to build all the GUI controls you need in opencv.
Last edited by _Al_; 12th Nov 2018 at 22:13.
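As a toy illustration of the plane-merging step mentioned above (my own sketch; real code would use numpy for speed): separate R, G and B plane buffers interleaved into the packed RGB24 layout a GUI preview widget expects.

```python
# Interleave per-plane byte buffers (R, G, B) into packed RGB24 pixels.
# Pure Python for clarity; numpy dstack()/tobytes() does this far faster.
def pack_rgb24(r_plane, g_plane, b_plane):
    out = bytearray()
    for r, g, b in zip(r_plane, g_plane, b_plane):
        out += bytes((r, g, b))
    return bytes(out)
```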
-
Adding to _Al_'s comments -
It's not necessarily either/or. There is a tremendous amount of overlap in terms of function, but they are also a bit complementary. For the manipulations you mentioned, either can do them, but you should think ahead: some things are handled better or faster in one or the other, and some plugins or features might be exclusive. Vapoursynth is more inclusive in that you can load some avs plugins, and import whole avs scripts into vpy scripts, but you can only load some vpy scripts into avs scripts. There is also avfs, which can generate a virtual file (the script becomes a virtual video +/- audio), so there are ways to get either into almost any program without encoding a large physical file.
Neither has a "real GUI" (what I would call a real GUI - something like Resolve). They are both script-driven, but you can preview them in various GUIs and media players. That requires a lot of back and forth: edit script, refresh. They are definitely not as responsive as real GUI-based programs, nor can you easily do things like make changes over time (keyframes). More limited in some ways, but more powerful in others.
Avisynth does have a very basic script editor + GUI in AvsPmod, and sliders can be added (levels has them already because it's a built-in function). Once you move a slider the preview is auto-refreshed, but it is laggy compared to a real GUI with instant feedback.
http://www.avisynth.nl/users/qwerpoi/UserSliders.html
Will Vapoursynth do filters my way, i.e. the "lift" control jagabo came up with? Other than that I need to control gain, gamma, pedestal and knee (clip). Gain simply multiplies all RGB values by n - straightforward. Pedestal adds n to all RGB values, lift was explained a few posts back, gamma is gamma, and knee acts as a clip, hard or soft - I haven't worked this out yet.
All three RGB channels would be adjusted simultaneously. It would not be a full-blown color corrector like Resolve.
Either can do it .
jagabo's lift function behaves the same as the levels() "output low" parameter; "output high" does it from the other end. Gamma is also part of the levels function. You have the option to adjust each channel, or to work in YUV or RGB. Other options such as dithering and coring (really clipping) are also in the levels function, plus the ability to work at other bit depths (10, 16 and 32-bit float being the most common for RGB).
But you can define your own functions within an avs or vpy script - e.g. if you copy & paste jagabo's function (or Import it), that function is now available to use in that avs script, and you can rename or modify it any way you want. For Vapoursynth it's very similar, and you can import Python modules, which opens a lot of doors, especially with various cutting-edge research projects; almost all of them use Python on GitHub, so porting to Vapoursynth is a lot easier.
Avisynth's RGBAdjust can multiply by a factor (what you're calling "gain"), or adjust bias (add an offset value +/- to each pixel of any or all channels - what you're calling "pedestal"). RGBAdjust isn't ported to Vapoursynth, but you can do those with a LUT (similar to what you've been doing with ffmpeg's lutrgb), or any manipulation based on math (e.g. maybe for some reason you wanted to divide by 2, or whatever).
"Knee" is a bit tricky, because it depends on how you define the knee behavior exactly: is it linear afterwards, what is the slope, how curved or rounded, etc. E.g. what if you wanted a slight S-curve after the knee point?
That brings up the next point: the single most powerful levels-manipulation filter, if you had to choose only one, would probably be curves. It can do everything that RGB levels and RGBAdjust can, but much more; the power is in the ease of non-linear mapping of input/output ranges. But you'd need a real GUI to use it properly. There are plugins to import a GIMP or Photoshop curve and apply it in the script, but that's a lot of back and forth; a direct real GUI is a lot nicer -
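Since the knee behavior is left open above, here is one possible definition as a hedged Python sketch (the linear-reduced-slope form is my own assumption; a soft knee could equally be an S-curve):

```python
# One possible knee: identity below the knee point, then a reduced slope
# above it so highlights compress toward white instead of clipping abruptly.
def knee(v, point=200.0, slope=0.25, white=255.0):
    if v <= point:
        return v
    return min(white, point + (v - point) * slope)
```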
OK, it works now! I got the PureBasic version to work under Windows. The trick is that you have to go one frame at a time - read frame -> process -> write frame - until it's done the entire video.
I will try Vapoursynth when I get a 10-bit camcorder. The lack of audio is not a problem because it is easy to import the audio from the source file.
-
Now things have changed.
I had all this working perfectly. Now it turns out another program I'm dealing with must have LIMITED-RANGE YUV. My test files had to be re-encoded as limited range, and this broke the YUV -> RGB conversion: my test green is R-G-B 16-180-16, and it now comes out as 28-171-26.
Here is the ffmpeg code being used to encode the test bmp:
Code:ffmpeg -y -loop 1 -t 10 -r 59.94 -s 1280x720 -i raster.bmp -vf zscale=matrix=709:range=limited,format=yuv420p -c:v libx264 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -crf 1 -an raster.mp4
Code:
yf = y
uf = u
vf = v
Pr = (vf - 128) / 255
Pb = (uf - 128) / 255
Yf = yf / 255
Rf = Yf + Pr*2*(1-#Kr)
Gf = Yf - (2*Pb*(1-#Kb)*(#Kb/(1-#Kr-#Kb))) - (2*Pr*(1-#Kr)*(#Kr/(1-#Kr-#Kb)))
Bf = Yf + Pb*2*(1-#Kb)
-
Is your source 0-255? 1-254? 16-235? IMHO you should specify the quantization range explicitly for ffmpeg, for both input and output.
Code:
-color_range <int>   ED.V..... color range (from 0 to INT_MAX) (default unknown)
   unknown           ED.V..... Unspecified
   tv                ED.V..... MPEG (219*2^(n-8))
   pc                ED.V..... JPEG (2^n-1)
   unspecified       ED.V..... Unspecified
   mpeg              ED.V..... MPEG (219*2^(n-8))
   jpeg              ED.V..... JPEG (2^n-1)
-
The whites top out at 255. There may be undershoots below 16, but there is very little information there.
What exactly are we looking at below? How can we rework your code to work with limited range? In the end, the mp4 file MUST be flagged as limited range.
Code:
-color_range <int>   ED.V..... color range (from 0 to INT_MAX) (default unknown)
   unknown           ED.V..... Unspecified
   tv                ED.V..... MPEG (219*2^(n-8))
   pc                ED.V..... JPEG (2^n-1)
   unspecified       ED.V..... Unspecified
   mpeg              ED.V..... MPEG (219*2^(n-8))
   jpeg              ED.V..... JPEG (2^n-1)
-
There is this:
https://docs.microsoft.com/en-us/windows/desktop/medfound/recommended-8-bit-yuv-format...-yuv-to-rgb888
Which gives us this:
Code:
R = clip( round( 1.164383 * C + 1.596027 * E ) )
G = clip( round( 1.164383 * C - (0.391762 * D) - (0.812968 * E) ) )
B = clip( round( 1.164383 * C + 2.017232 * D ) )
Studio video RGB is the preferred RGB definition for video in Windows, while computer RGB is the preferred RGB definition for non-video applications. In either form of RGB, the chromaticity coordinates are as specified in ITU-R BT.709 for the definition of the RGB color primaries. The (x,y) coordinates of R, G, and B are (0.64, 0.33), (0.30, 0.60), and (0.15, 0.06), respectively. Reference white is D65 with coordinates (0.3127, 0.3290). Nominal gamma is 1/0.45 (approximately 2.2), with precise gamma defined in detail in ITU-R BT.709.
Last edited by chris319; 27th Nov 2018 at 15:17.
-
Your original post has the limited-range YUV to RGB conversion. You just needed to add the bounds check.
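Putting that together - a hedged Python sketch of the limited-range conversion with the bounds check (the 219/224 normalization replaces the /255 of the full-range version; note also that the coefficients quoted from the Microsoft page above are the BT.601 matrix, so for BT.709 material the Kr/Kb constants below apply instead):

```python
# Limited-range BT.709 YCbCr (Y 16-235, CbCr 16-240) -> 8-bit RGB with clip.
KR, KB = 0.2126, 0.0722      # BT.709 luma coefficients
KG = 1.0 - KR - KB

def yuv709_limited_to_rgb(y, u, v):
    Yf = (y - 16) / 219.0    # limited-range luma normalization (not /255)
    Pb = (u - 128) / 224.0   # limited-range chroma normalization
    Pr = (v - 128) / 224.0
    Rf = Yf + 2.0 * (1.0 - KR) * Pr
    Bf = Yf + 2.0 * (1.0 - KB) * Pb
    Gf = (Yf - KR * Rf - KB * Bf) / KG
    clip = lambda c: min(255, max(0, round(c * 255.0)))  # the bounds check
    return clip(Rf), clip(Gf), clip(Bf)
```

Reference black (16,128,128) maps to (0,0,0) and reference white (235,128,128) to (255,255,255); undershoots and overshoots outside 16-235 are clipped rather than wrapped.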