VideoHelp Forum

  1. I need help writing a proper "lift" control, not "offset", like in DaVinci Resolve.

    The whites would be clamped while the blacks would ride up and down. I'm having trouble wrapping my head around this.
  2. Originally Posted by chris319 View Post
    I need help writing a proper "lift" control, not "offset", like in DaVinci Resolve.

    The whites would be clamped while the blacks would ride up and down. I'm having trouble wrapping my head around this.

    Did you mean "lift" similar to how resolve applies it ?

    "Clamped" implies "squishing" . Resolve behaviour is more like "holding" the upper values as they are.

    Something similar to this behaviour ?
    Image
    [Attachment 47142 - Click to enlarge]


    This demo shows "output_low" increasing in a typical "levels" filter (with positive increments, input_low would go the other way, down). It's the same as the Photoshop or GIMP levels filter. Almost all programs have one: VirtualDub, AviSynth, VapourSynth. Curiously enough, ffmpeg doesn't. I don't know why, because it's one of the most basic filters. But you can look at the code from one of the other programs.

    There are other ways to achieve similar results (curves, luma masks), but "levels" is the most common way to do this sort of manipulation.
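    If you want to roll your own, the core of a levels filter is just a linear remap plus a gamma step. A minimal Python sketch (the variable names are mine, 8-bit values assumed):

    Code:
    def levels(x, in_low, in_high, gamma, out_low, out_high):
        # normalize into 0..1, apply gamma, rescale into the output range
        t = (x - in_low) / float(in_high - in_low)
        t = max(0.0, min(1.0, t)) ** (1.0 / gamma)
        return t * (out_high - out_low) + out_low

    # the demo above is output_low rising, which lifts the blacks:
    # levels(0, 0, 255, 1.0, 50, 255) -> 50.0
    # levels(255, 0, 255, 1.0, 50, 255) -> 255.0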
  3. Originally Posted by chris319 View Post
    I need help writing a proper "lift" control, not "offset", like in DaVinci Resolve.

    The whites would be clamped while the blacks would ride up and down. I'm having trouble wrapping my head around this.
    With unit Y and lift (0.0 to 1.0):

    Code:
    Y' = (Y - 1.0) * (1.0 - lift) + 1.0
    <edit>
    Actually that equation is more complex than it needs to be. It can be simplified to

    Code:
    Y' = Y * (1.0 - lift) + lift
    </edit>
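    A quick sanity check that the two forms agree (a minimal Python sketch, function names mine): both pin Y = 1.0 in place and move Y = 0.0 up to lift.

    Code:
    def lift_long(y, lift):  return (y - 1.0) * (1.0 - lift) + 1.0
    def lift_short(y, lift): return y * (1.0 - lift) + lift

    for y in (0.0, 0.5, 1.0):
        assert abs(lift_long(y, 0.25) - lift_short(y, 0.25)) < 1e-12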
    Last edited by jagabo; 10th Nov 2018 at 17:55.
  4. Thanks, jagabo.
  5. Keep in mind, if you brighten Y, colors will get less saturated in the dark areas. So you need to adjust the chroma too.
  6. Originally Posted by jagabo View Post
    Keep in mind, if you brighten Y colors will get less saturated in the dark areas. So you need to adjust the chroma too.
    For this project I am converting to RGB and acting on all three at once.

    What I might do is convert back to YUV when the sound is married to the picture.
  7. This gets me closer to the effect I had in mind. All floats:

    Code:
    lift = 50
    
    factor = 1 - (lift / 255)
    rf = (rf + lift) * factor
    gf = (gf + lift) * factor
    bf = (bf + lift) * factor
  8. An AviSynth variation of my earlier unit function:

    Code:
    function Lift(clip v, int lift)
    {
        v.ColorYUV(gain_y=-lift, off_y=lift)
    }
    For RGB:

    Code:
    function LiftRGB(clip v, int lift)
    {
        factor = (255.0 - lift) /  255.0
        RGBAdjust(v, r=factor, g=factor, b=factor, rb=lift, gb=lift, bb=lift)
    }
    Last edited by jagabo; 13th Nov 2018 at 11:21. Reason: simplified Lift()
  9. That does not sound good, YUV > RGB > YUV. Our camcorders come up with illegal values; you cannot avoid it, even using ND filters, and the dynamics are not great.

    RGB is necessary to render a frame for preview on screen, but you should stay in YUV when filtering. I'd rather make filtering and previewing two different things. Maybe that is what you do, making RGB only for preview; I'm not sure exactly, that is what I am missing in this thread.
    Last edited by _Al_; 10th Nov 2018 at 12:11.
  10. Originally Posted by jagabo View Post
    An AviSynth variation of my earlier unit function:

    Code:
    function Lift(clip v, int lift)
    {
        # optional compensation for rounding errors
        lift = int(float(lift) * 1.00782)
        v.Invert().ColorYUV(gain_y = -lift).Invert()
    }
    For RGB:

    Code:
    function LiftRGB(clip v, int lift)
    {
        factor = (255.0 - lift) /  255.0
        RGBAdjust(v, r=factor, g=factor, b=factor, rb=lift, gb=lift, bb=lift)
    }
    I'm working directly with RGB samples outside of Avisynth or Vapoursynth.

    Your previous code made the whole picture darker and didn't clamp the whites.

    This works:

    Code:
    liftFactor = (255.0 - lift) / 255.0
    so I use it like this:

    Code:
    rf = (rf + lift) * liftFactor
    gf = (gf + lift) * liftFactor
    bf = (bf + lift) * liftFactor
    Note that liftFactor is an INT.
    Last edited by chris319; 10th Nov 2018 at 16:02.
  11. Originally Posted by _Al_ View Post
    That does not sound good, YUV > RGB > YUV. Our camcorders come up with illegal values; you cannot avoid it, even using ND filters, and the dynamics are not great.

    RGB is necessary to render a frame for preview on screen, but you should stay in YUV when filtering. I'd rather make filtering and previewing two different things. Maybe that is what you do, making RGB only for preview; I'm not sure exactly, that is what I am missing in this thread.
    It doesn't help that the EBU R103 spec is written for RGB.
  12. Originally Posted by chris319 View Post
    Originally Posted by _Al_ View Post
    That does not sound good, YUV > RGB > YUV. Our camcorders come up with illegal values; you cannot avoid it, even using ND filters, and the dynamics are not great.

    RGB is necessary to render a frame for preview on screen, but you should stay in YUV when filtering. I'd rather make filtering and previewing two different things. Maybe that is what you do, making RGB only for preview; I'm not sure exactly, that is what I am missing in this thread.
    It doesn't help that the EBU R103 spec is written for RGB.

    Nor does it help that you're probably delivering 8-bit YUV 4:2:2. Something 100.0% broadcast safe / EBU R103 compliant in RGB can produce areas out of gamut through that conversion.

    Partially from the 8-bit conversion, rounding errors and lossy compression, but especially because of that subsampling step, particularly around lines, edges and graphics. It's easy to demonstrate this for yourself; certain kernels create more "illegal" broadcast values than others. The point is that if you "fix" them at the RGB stage (I'm guessing that's what you were experimenting with earlier with the lutrgb clipping), you will miss the out-of-gamut errors you just introduced when converting to 8-bit YUV 4:2:2.

    EBU R103 is just a recommendation. Sometimes a broadcaster will say "based on EBU R103" but clarify specific parameters to meet their requirements, maybe more restrictive, maybe more relaxed. Where a broadcaster allows the 1% active-picture-area out-of-gamut allowance, that is a tremendous amount of wiggle room.


    Originally Posted by _Al_ View Post
    That does not sound good, YUV > RGB > YUV. Our camcorders come up with illegal values; you cannot avoid it, even using ND filters, and the dynamics are not great.

    RGB is necessary to render a frame for preview on screen, but you should stay in YUV when filtering. I'd rather make filtering and previewing two different things. Maybe that is what you do, making RGB only for preview; I'm not sure exactly, that is what I am missing in this thread.
    I agree for the most part, and staying with the original YUV format is "best practices", avoiding additional losses. But many people find RGB color manipulations more intuitive to use, myself included. You could argue it's not a significant loss if done properly, and you can increase precision by using higher bit depths. And by the time a final format is encoded, there are usually going to be lossy compression and rounding differences anyway (although maybe not if it was some archival project).

    On the topic of (in)accurate YUV => RGB conversions, YUV values that don't "map" to RGB, or discarded negative RGB values: you could also argue that many of the camera's recorded YUV values were invalid in the first place. They are the result of the camera raw => debayer to RGB => YUV conversion and the subsampling to 8-bit 4:2:0 in the recorded format. You can demonstrate this on cameras where you can simultaneously record raw, or bypass the lower-quality onboard compression and subsampling with an external recorder, and compare to the onboard recording.

    If someone is really worried about RGB conversion, vapoursynth is one of the very few tools that can facilitate truly lossless YUV <=> RGB roundtrip conversions in 32-bit float; not just "in theory", but in actual production workflows. You can export physical files (EXR) for import/export in other float-capable programs such as AE, Nuke, Natron, etc. for other manipulations, then convert back. The EXR float format retains everything, including the negative values. It's the most commonly used intermediate for higher-end visual effects and CG post production.
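    For illustration, the float conversion in vapoursynth looks roughly like this (a minimal sketch; the ffms2 source, file name and Rec.709 matrix are assumptions for the example):

    Code:
    import vapoursynth as vs
    core = vs.core

    clip = core.ffms2.Source('input.mp4')              # assumed source
    # YUV => 32-bit float RGB (RGBS); the matrix must match the source
    rgb = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')
    # ... float RGB manipulations here, nothing gets clipped ...
    back = core.resize.Bicubic(rgb, format=clip.format.id, matrix_s='709')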
  13. if you "fix" them at the RGB stage (I'm guessing that's what you were experimenting with earlier with the lutrgb clipping), you will miss the out of gamut errors you just introduced when converting to 8bit YUV 4:2:2 .
    Yup. It turns out to be a lot of back-and-forth, checking, adjusting and re-encoding, very time consuming and a PITA.

    Meanwhile, progress is being made on the proc amp/scope project. The lift and offset/pedestal controls are working. Spent some time today on the user interface. The hangup now is talking to ffmpeg so this video can be exported to a file. Not sure if that can ever be made to work.
  14. I am able to read frames from a video into my program and that works well, but am having a tough time writing video back out. Below is the ffmpeg code I am using. Anyone see any problems with it?

    Code:
    ffmpeg  -i  filename$  -f image2pipe  -s 1280x720  -vcodec rawvideo  -pix_fmt rgb24   -
    
    ffmpeg  -y  -f rawvideo -vcodec rawvideo  -pix_fmt rgb24  -s 1280x720  -r 25 -i -  -f mp4  -q:v 5  -an  -vcodec mpeg4  output.mp4
    I am able to read & write frames using gcc on Linux Mint, but no joy using PureBasic on Windows 10. However, I can process a video file using PureBasic under Windows 10 provided it is a straight command to process a file, i.e. doesn't read into a buffer and write the buffer contents back out, so I know my program is talking to ffmpeg. MediaInfo shows no video is contained in the file. I am writing out x frames of video using a "for" loop but am not sure I am terminating the process correctly.

    The idea for this comes from here:

    https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/

    My video processor works well but for the inability to write video back out.
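    For reference, here is the loop I'm attempting, expressed as a Python/subprocess sketch (file names and sizes are placeholders); I suspect the termination step, closing the encoder's input so ffmpeg can finalize the file, is the part I'm getting wrong:

    Code:
    import subprocess

    W, H = 1280, 720
    frame_bytes = W * H * 3          # rgb24: 3 bytes per pixel

    decode = subprocess.Popen(
        ['ffmpeg', '-i', 'input.mp4', '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-'],
        stdout=subprocess.PIPE)
    encode = subprocess.Popen(
        ['ffmpeg', '-y', '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-s', '1280x720',
         '-r', '25', '-i', '-', '-an', '-vcodec', 'mpeg4', '-q:v', '5', 'output.mp4'],
        stdin=subprocess.PIPE)

    while True:
        raw = decode.stdout.read(frame_bytes)    # exactly one frame, or EOF
        if len(raw) < frame_bytes:
            break
        # ... process the rgb24 buffer here ...
        encode.stdin.write(raw)

    encode.stdin.close()     # signal EOF so ffmpeg finalizes the mp4
    encode.wait()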
  15. Originally Posted by chris319 View Post
    I am able to read frames from a video into my program and that works well, but am having a tough time writing video back out. Below is the ffmpeg code I am using. Anyone see any problems with it?

    Code:
    ffmpeg  -i  filename$  -f image2pipe  -s 1280x720  -vcodec rawvideo  -pix_fmt rgb24   -
    
    ffmpeg  -y  -f rawvideo -vcodec rawvideo  -pix_fmt rgb24  -s 1280x720  -r 25 -i -  -f mp4  -q:v 5  -an  -vcodec mpeg4  output.mp4
    I am able to read & write frames using gcc on Linux Mint, but no joy using PureBasic on Windows 10. However, I can process a video file using PureBasic under Windows 10 provided it is a straight command to process a file, i.e. doesn't read into a buffer and write the buffer contents back out, so I know my program is talking to ffmpeg. MediaInfo shows no video is contained in the file. I am writing out x frames of video using a "for" loop but am not sure I am terminating the process correctly.

    The idea for this comes from here:

    https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/

    My video processor works well but for the inability to write video back out.



    In Windows it would look something like this; you need to specify the out pipe and the format:

    Code:
    ffmpeg  -i  filename$  -f image2pipe  -s 1280x720  -vcodec rawvideo  -pix_fmt rgb24  -f rawvideo - | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720  -r 25 -i - -q:v 5 -vcodec mpeg4 -an output.mp4
    Another potential problem (not with the writing) is -vcodec mpeg4: that is MPEG-4 ASP, so it will require yuv420p. By default, ffmpeg will use a Rec.601 conversion for that rgb24 => yuv420p.
  16. How can I have the input file name as an output argument? When writing the file, the input is a buffer of pixels, not another file. I have already read in the file.

    Code:
    ffmpeg  -i  filename$
    Here is how Ted does it:

    Code:
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
    Note the "-i -" argument.
    Last edited by chris319; 12th Nov 2018 at 12:57.
  17. Originally Posted by chris319 View Post
    How can I have the input file name as an output argument? When writing the file, the input is a buffer of pixels, not another file. I have already read in the file.

    Code:
    ffmpeg  -i  filename$
    Here is how Ted does it:

    Code:
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
    Note the "-i -" argument.



    Windows pipes don't convey filename information, so typically that information would be passed through DOS scripting at the stage where ffmpeg reads the physical file.

    In a batch For loop, ~n expands the loop variable to the input file name without its extension (e.g. %%~nf for loop variable %%f), so you can reuse the input name in the output filename.

    I don't know how to do it with your program, but possibly you could wrap it all in a DOS For loop: read the physical file => pipe to your program => final pipe to ffmpeg to write the physical file.
  18. possibly you could wrap it all in a DOS For loop: read the physical file => pipe to your program => final pipe to ffmpeg to write the physical file
    I thought that was what I was doing but on a frame-by-frame basis. At the moment, Windows/PureBasic creates a file and the file seems about the right size; there is probably video data in it but the video data cannot be read by a player or by MediaInfo.

    I could load in the entire file at once but that takes up a lot of memory.

    If you're looking for something to do and you want to download the free demo version of PureBasic, I could send you the source code.

    I've got SDL2 reading the file and drawing a little preview screen in C, but no widgets for making adjustments and no scope yet.

    I also tried PureBasic under Linux but something went wrong. I'll have to try it again and report back.

    Or I could try Avisynth or Vapoursynth; keep in mind that it needs control widgets, presumably written in Python? Do I want Avisynth or Vapoursynth, or should I continue trying to get PureBasic to work? I fear that Python will be slow at processing an entire bitmap pixel by pixel. PureBasic is compiled, not interpreted, and is pretty fast considering all the work it has to do. That helps when you're trying to make adjustments interactively.
  19. I'm not a programmer, so I probably can't help you with PureBasic or anything like that.


    At the moment, Windows/PureBasic creates a file and the file seems about the right size; there is probably video data in it but the video data cannot be read by a player or by MediaInfo.
    There should be clues in the log or console output from the final ffmpeg write stage, especially if it's writing a physical file out OK and the filesize looks decent. Maybe start by posting that.
  20. I use Python 3 with Vapoursynth on an older PC with 4 GB RAM and an older i5 650 @ 3.20 GHz, which is quite slow, and:

    for example, I have 4K 10-bit video loading from Vapoursynth memoryviews of plane arrays onto the screen, playing it frame by frame, and it gets 8 fps on screen;
    that does not differ much from playing the video in mpv, for example; the only difference is that mpv skips frames and keeps time.
    Or, for example, a full-HD 50 fps M2TS file loaded by ffms2 plays at 40 fps. One catch is indexing in Vapoursynth: it can take a while to access frames, especially if you randomly jump between frames far apart; it might take a while to respond.

    So as you can see, not bad at all. I was quite surprised how fast things can go. The way it works, Vapoursynth lets you read a memoryview of a frame; you put it into arrays, and those arrays seem to be lightning fast for processing.

    The GUI is your choice. I use Qt5, or rather PyQt5, for Python 3. You can use opencv, but I'm not sure how nice your GUI would look. Also, when using a GUI with lots of controls, you need to thread the heck out of some parts of the code so things are processed independently (for example if you process frame by frame while playing), to keep the GUI responsive. PyQt5, or Qt in general, makes it easy to communicate across threads; you can basically make objects in one thread appear instantly in another thread.
  21. My first decision is Vapoursynth or Avisynth. What would be the pros and cons?

    Does Avisynth support a GUI through Python or some other language? I have not used either program.

    Real-time playback is not necessary. I would probably have the user freeze the video/pause playback while making adjustments.

    Will Vapoursynth do filters my way, i.e. the "lift" control jagabo came up with? Other than that I need to control gain, gamma, pedestal and knee (clip). Gain simply multiplies all RGB values by n; straightforward. Pedestal adds n to all RGB values, lift was explained a few posts back, gamma is gamma, and knee acts as a clip, hard or soft; I haven't worked this out yet.

    All three RGB channels would be adjusted simultaneously. It would not be a full-blown color corrector like Resolve. Putting the controls into code, each one is a single line of math on the pixel array, as in the sketch below.
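    A minimal numpy-style sketch of what I have in mind (the names, the 0..255 float range and the hard-knee clip are just my assumptions):

    Code:
    import numpy as np

    def procamp(rgb, gain=1.0, pedestal=0.0, lift=0.0, gamma=1.0, knee=255.0):
        # rgb: float array; all three channels adjusted together
        out = rgb * gain                                  # gain: multiply by n
        out = out + pedestal                              # pedestal: add n
        out = out * (255.0 - lift) / 255.0 + lift         # lift: blacks up, white held
        out = 255.0 * (out.clip(0.0) / 255.0) ** (1.0 / gamma)  # gamma (>1 brightens)
        return out.clip(0.0, knee)                        # knee as a hard clip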
  22. jagabo's ColorYUV filter is listed here, so I'm sure you can do the same thing.

    Avisynth has workarounds for more-than-8-bit video, but Vapoursynth is simply a modern tool for working with 8-, 10- and 16-bit video. It has a simple filter called resize that can change not only resolution but also color space, matrix and range, and it seems to do it right. I went with the 64-bit version (and 64-bit Python as well), working only with 64-bit applications, not looking back to 32-bit apps.

    Vapoursynth does not support audio like Avisynth does; there are 32-bit and 64-bit versions, and it has some filter that is supposed to pass audio through, but I have not got into it yet. There are more scripts for Avisynth because it has been around forever. There is a built-in plugin to import an avs script into a Vapoursynth script, as long as you use the same 64- or 32-bit versions; I never tried that, though, and for the work you do you would perhaps not need it. Many Vapoursynth filters (functions) were written for or ported to Vapoursynth from Avisynth. It takes some time to orient yourself in all those filter/function parameters; a lot of the time there is only Avisynth documentation and you have to interpret it for Vapoursynth. Folks porting plugins do not care much about explanations, thinking everyone knows them from Avisynth, but as you can see, when starting with Vapoursynth that does not help. It is much easier if you are a programmer. It is published and discussed here on a regular basis. There is a way to get most of the plugins (functions, modules) together with Python, all portable, and use it: it is called FATPACK. There is also a plugin manager to update plugins, because otherwise you'd need to fish for them all over the web, mostly github, hoping there are binaries released and not just source code.

    There is no built-in GUI for Avisynth or Vapoursynth. Avisynth has an API, and Vapoursynth has its API too, but if you write in Python you don't need any: you are in the script yourself. A Vapoursynth script is a Python script, so you can write whatever you want in Python, and along the way you work with Vapoursynth in the same script. As for a GUI module, you need to choose one that already exists, no matter what language you use. Python has its GUI modules; I myself was choosing between tkinter and Qt. There are more modules available.

    Python is an OOP language, so you can import and fetch a VideoNode (that's what Vapoursynth calls a loaded video as an object type), load a VideoNode into functions and move it between your functions. I think PureBasic is not that much OOP. As long as there is a filter for it, you can do some filtering. You have raw arrays of planes available, just raw YUV or RGB data, so you can come up with any filter you want using just Python, but then you need to register the frame properties correctly; that is how I understand it, but I never actually went into that. You might use that data directly just for previewing. If you use existing filters, they are mostly properties-aware, but that is not guaranteed 100%, because it depends on whether the source plugin that loads the video file into Vapoursynth can detect the properties in the first place. SAR info is not 100% detectable by all source plugins, and field order as well. Frame properties you can fetch using this; for video properties or previewing you can use this. To catch video properties into variables you can use other Python modules like MediaInfoDLL3 (just fetching MediaInfoDLL.py): you import it into Python as "import MediaInfoDLL3", and you also need the mediainfo DLL (from the mediainfo web site, put in the same folder as the py file). Another thing you can use is ffprobe.exe and a Python script. There are plenty of other modules, but they do not fetch as many properties as those two, or I do not know about them. Or you might not need them at all, because you are always handling video with the same properties coming from your camcorder, so you already know them.

    Special mention belongs to the numpy module; it is needed for previewing and for the opencv plugin, and it can merge those arrays of planes for a Qt GUI, or for opencv. I'm not sure how tkinter pans out here, what it needs for preview or whether it is capable of it. Opencv can preview as well; it is really a powerful module. People use it a lot for video capturing and other video work, but Vapoursynth is more powerful in that single thing: handling video. I'm not sure how opencv would handle programming all of the GUI controls you need. A rough sketch of getting a frame's planes into arrays follows below.
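    For reference, reading a frame's planes into numpy arrays looks roughly like this (a sketch; recent Vapoursynth builds expose planes via frame[i], older ones used frame.get_read_array(i), and the ffms2 source and file name are assumptions):

    Code:
    import vapoursynth as vs
    import numpy as np
    core = vs.core

    clip = core.ffms2.Source('input.mp4')     # assumed source plugin / file
    frame = clip.get_frame(0)                 # indexing happens on first access
    # one 2-D array per plane (Y, U, V or R, G, B)
    planes = [np.asarray(frame[i]) for i in range(frame.format.num_planes)]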
    Last edited by _Al_; 12th Nov 2018 at 23:13.
  23. Originally Posted by chris319 View Post
    My first decision is Vapoursynth or Avisynth. What would be the pros and cons?
    Adding to _Al_' s comments -

    It's not necessarily either/or. There is a tremendous amount of overlap in terms of function, but they are also a bit complementary. For the manipulations you mentioned, either can do them, but you should think ahead: some things are handled better or faster in one or the other, and some plugins or features might be exclusive. Vapoursynth is more inclusive in that you can load some avs plugins and import whole avs scripts into vpy scripts, but you can only load some vpy scripts into avs scripts. There is also avfs, which can generate a virtual file (the script becomes a virtual video +/- audio), so there are ways to get either into almost any program without encoding a large physical file.


    Originally Posted by chris319 View Post
    Does Avisynth support a gui through Python or some other? I have not used either program.
    Neither has a "real GUI" (what I would call a "real GUI" - something like Resolve) . They are both script driven, but you can preview them in various GUI's and media players . But that requires a lot of back and forth, edit script, refresh. They are definitely not as responsive as real GUI based programs, nor can you do stuff like make changes over time (keyframe) very easily. More limited in some ways, but more powerful in others.

    Avisynth does have a very basic script editor + GUI in AvsPmod, and sliders can be added (Levels has them already because it's a built-in function). Once you move a slider the preview is auto-refreshed, but it is laggy compared to a real GUI, which allows instant feedback:
    http://www.avisynth.nl/users/qwerpoi/UserSliders.html

    Originally Posted by chris319 View Post
    Will Vapoursynth do filters my way, i.e. the "lift" control jagabo came up with? Other than that I need to control gain, gamma, pedestal and knee (clip). Gain simply multiplies all RGB values by n; straightforward. Pedestal adds n to all RGB values, lift was explained a few posts back, gamma is gamma, and knee acts as a clip, hard or soft; I haven't worked this out yet.

    All three RGB channels would be adjusted simultaneously. It would not be a full-blown color corrector like Resolve.

    Either can do it.

    jagabo's lift function behaves the same as the levels() "output low" parameter; "output high" would do the same from the other end. Gamma is also part of the levels function. You have the option to adjust each channel, or to work in YUV or RGB. Other options like dithering and coring (really clipping) are also in the levels function, along with the ability to work at other bit depths (10, 16 and 32/float would be the most common for RGB).
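    For instance, jagabo's 8-bit Lift() maps directly onto vapoursynth's built-in levels (a sketch; the ffms2 source and an 8-bit YUV input are assumptions):

    Code:
    import vapoursynth as vs
    core = vs.core

    clip = core.ffms2.Source('input.mp4')    # assumed 8-bit YUV source
    # raise output low to 'lift' on the luma plane while holding the top end
    lift = 30
    clip = core.std.Levels(clip, min_in=0, max_in=255, gamma=1.0,
                           min_out=lift, max_out=255, planes=0)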

    But you can define your own functions within an avs or vpy script; e.g. if you copy & paste jagabo's function (or Import it), that function is then available to use in that avs script, and you can rename or modify it any way you want. For vapoursynth it's very similar, and you can import python modules, which opens a lot of doors, especially with various cutting-edge research projects... almost all of them use python on github, so porting to vapoursynth is a lot easier.

    Avisynth with RGBAdjust can multiply by a factor (what you're calling "gain"), or you can adjust bias, adding an offset value +/- to each pixel of any or all channels (what you're calling "pedestal"). RGBAdjust isn't ported to vapoursynth, but you can do those with a Lut (similar to what you've been doing with ffmpeg's lutrgb), or any manipulation based on math (e.g. maybe for some reason you wanted to divide by 2, or whatever).
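    A Lut version of the lift in vapoursynth might look like this (a sketch; the source, the RGB24 format and the Rec.709 matrix are assumptions):

    Code:
    import vapoursynth as vs
    core = vs.core

    clip = core.ffms2.Source('input.mp4')
    rgb = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s='709')

    lift = 30
    # per-pixel lift on all three channels: y = x * (255 - lift) / 255 + lift
    rgb = core.std.Lut(rgb, planes=[0, 1, 2],
                       function=lambda x: (x * (255 - lift)) // 255 + lift)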

    "knee" is a bit tricky, because it depends on how you define the knee behavior exactly - is it linear after , what is the slope, how curved or rounded etc... e.g. what if you wanted a slight s-curve after the knee point ?

    That brings up the next point: the single most powerful levels-manipulation filter, if you had to choose only one, would probably be curves. You can do everything that RGB levels and RGBAdjust can, but much more; the power is in the ease of non-linear mapping of input/output ranges. But you'd need a real GUI to use it properly. There are plugins to import a GIMP or Photoshop curve and apply it in the script, but that's a lot of back and forth. A direct real GUI is a lot nicer.
  24. OK, it works now! I got the PureBasic version to work under Windows. The trick was to go one frame at a time, read frame -> process -> write frame, until the entire video is done.

    I will try Vapoursynth when I get a 10-bit camcorder. The lack of audio is not a problem because it is easy to import the audio from the source file.


