VideoHelp Forum




  1. Originally Posted by chris319 View Post
    The whites top out at 255. There may be undershoots below 16 but there is very little information there.

    What exactly are we looking at below? How can we rework your code to work with limited range? In the end the mp4 file MUST be flagged as limited range.

    Code:
    -color_range       <int>        ED.V..... color range (from 0 to INT_MAX) (default unknown)
         unknown                      ED.V..... Unspecified
         tv                           ED.V..... MPEG (219*2^(n-8))
         pc                           ED.V..... JPEG (2^n-1)
         unspecified                  ED.V..... Unspecified
         mpeg                         ED.V..... MPEG (219*2^(n-8))
         jpeg                         ED.V..... JPEG (2^n-1)

    All those formulas assume you are working with normalized values, i.e. YPbPr (Y within 0..1, Pb/Pr within ±0.5) or RGB (within 0..1). Limited-range quantization squishes this range and shifts it up by an offset, so to restore normalized values you must perform the reverse operation: subtract the offset (pedestal), then divide by the scaling coefficient. For the Y signal, or for RGB with limited quantization range, 0 maps to 16 and 1 maps to 235, so the scaling coefficient is 235 - 16 = 219. These details are important.

    Whenever I'm not sure how ffmpeg will behave, I force ffmpeg to operate in the full quantization range - you need to separate the actual quantization range from the signalled quantization range.
    Code:
    ffmpeg -color_range 2 -i file -color_range 2
    This should prevent ffmpeg from converting between limited and full quantization range - all data will be treated as full quantization range, so you are responsible for proper signalling; just use the codec syntax to control this aspect.
    Clamping is mandatory, as RGB and YCbCr are only partially overlapping colour spaces (YCbCr is an artificial colour space).
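    A minimal C sketch of the de-quantization described above, assuming the standard 8-bit limited-range offsets (luma 16..235, scale 219; chroma 16..240 centered on 128, scale 224):

```c
/* De-quantize 8-bit limited-range Y'CbCr samples to normalized values
   (Y in 0..1, Cb/Cr in -0.5..+0.5) and re-quantize with rounding. */
double luma_to_norm(int y8)      { return (y8 - 16) / 219.0; }
int    norm_to_luma(double yn)   { return (int)(yn * 219.0 + 16.0 + 0.5); }
double chroma_to_norm(int c8)    { return (c8 - 128) / 224.0; }
int    norm_to_chroma(double cn) { return (int)(cn * 224.0 + 128.0 + 0.5); }
```

    Note that code values below 16 (undershoots) come back as negative normalized values, which is why the clamp mentioned above is still needed after any processing.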

  2. Yep, that did it, thank you.

    I confined r, g and b to 1 - 254 because 0 and 255 are reserved for sync.

    Code:
    rf = (255/219)*yf + (255/112)*vf*(1-#Kr) - (255*16/219 + 255*128/112*(1-#Kr)) 
    gf = (255/219)*yf - (255/112)*uf*(1-#Kb)*#Kb/#Kg - (255/112)*vf*(1-#Kr)*#Kr/#Kg - (255*16/219 - 255/112*128*(1-#Kb)*#Kb/#Kg - 255/112*128*(1-#Kr)*#Kr/#Kg)
    bf = (255/219)*yf + (255/112)*uf*(1-#Kb) - (255*16/219 + 255*128/112*(1-#Kb))
    If rf > 254: rf = 254:EndIf: If gf > 254: gf = 254:EndIf: If bf > 254: bf = 254:EndIf
    If rf < 1: rf = 1:EndIf: If gf < 1: gf = 1:EndIf: If bf < 1: bf = 1:EndIf
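    As a sanity check, the red-channel formula above can be transcribed to C and evaluated at the reference points (black Y=16 and white Y=235, both with neutral chroma 128). BT.709's Kr is assumed here, since the thread's #Kr/#Kb constants aren't shown:

```c
/* Red channel of the limited-range Y'CbCr -> full-range R'G'B' formula
   above, plus the 1..254 clamp. KR = 0.2126 is the BT.709 value (assumed). */
#define KR 0.2126

double red_from_ycbcr(double yf, double vf)
{
    return (255.0/219.0)*yf + (255.0/112.0)*vf*(1.0 - KR)
         - (255.0*16.0/219.0 + 255.0*128.0/112.0*(1.0 - KR));
}

/* Confine the result to 1..254, as in the PureBasic code. */
double clamp_1_254(double x) { return x < 1.0 ? 1.0 : (x > 254.0 ? 254.0 : x); }
```

    Black should land on 0 and white on 255 before clamping; if either reference point misses, an offset or coefficient is wrong.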
  3. Originally Posted by chris319 View Post
    I confined r, g and b to 1 - 254 because 0 and 255 are reserved for sync.
    That applies only to YCbCr, and only on broadcast equipment using SDI or a similar link - and as I wrote earlier, because of the SAV and EAV codes they are clamped to 1..254 internally by design anyway. In any case, when chroma is within the limited quantization range, anything lower than 16 or higher than 240 is unnatural, probably artificially generated, and can't be properly represented in the RGB colour space (pure red and pure blue reach 240, pure yellow and pure cyan reach 16) - IMHO they can be safely clamped.
  4. Originally Posted by chris319 View Post
    Yep, that did it, thank you.

    I confined r, g and b to 1 - 254 because 0 and 255 are reserved for sync.

    Code:
    rf = (255/219)*yf + (255/112)*vf*(1-#Kr) - (255*16/219 + 255*128/112*(1-#Kr)) 
    gf = (255/219)*yf - (255/112)*uf*(1-#Kb)*#Kb/#Kg - (255/112)*vf*(1-#Kr)*#Kr/#Kg - (255*16/219 - 255/112*128*(1-#Kb)*#Kb/#Kg - 255/112*128*(1-#Kr)*#Kr/#Kg)
    bf = (255/219)*yf + (255/112)*uf*(1-#Kb) - (255*16/219 + 255*128/112*(1-#Kb))
    If rf > 254: rf = 254:EndIf: If gf > 254: gf = 254:EndIf: If bf > 254: bf = 254:EndIf
    If rf < 1: rf = 1:EndIf: If gf < 1: gf = 1:EndIf: If bf < 1: bf = 1:EndIf
    Since you're working with floating point, another way to approach it is to convert your limited-range YUV to full-range YUV (a simple offset and scaling), then use the full-range equations.

    Real production software (and hardware) doesn't use floating point, of course; it's too slow. It's done with scaled integers.
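    A sketch of what that scaled-integer style can look like - expanding 8-bit limited-range luma to full range using a 16.16 fixed-point factor instead of a float multiply (the constant and rounding choices here are illustrative, not taken from any particular product):

```c
#include <stdint.h>

/* 255/219 as a 16.16 fixed-point constant (~1.1644). */
static const int32_t SCALE = (int32_t)((255.0 / 219.0) * 65536 + 0.5);

/* Expand 8-bit limited-range luma (16..235) to full range (0..255)
   using only integer math: scale, round, shift, clamp. */
int expand_luma(int y8)
{
    int32_t v = ((int32_t)(y8 - 16) * SCALE + 32768) >> 16;
    if (v < 0)   v = 0;     /* clamp undershoots below 16 */
    if (v > 255) v = 255;   /* clamp overshoots above 235 */
    return (int)v;
}
```

    The +32768 before the shift rounds to nearest instead of truncating, which keeps the integer result within one code value of the floating-point version.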
  5. Thanks for the help, guys, but my program isn't working well.

    I have two instances of ffmpeg running, one for input and one for output. The basic idea is to read input file -> modify -> write output file. For some reason only alternate frames are being correctly modified. This results in a horrible flicker.

    https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/

    The idea was to be able to make adjustments interactively and avoid some of the quirks of ffmpeg. That part works. It's kind of useless if the user can't export the video.

    I know about vapoursynth but it uses ffmpeg as I understand it.
    Last edited by chris319; 30th Nov 2018 at 00:57.
  6. Originally Posted by jagabo View Post
    Real production software (and hardware) doesn't used floating point, of course, it's too slow.
    Higher-end visual effects production (including film / TV / internet (Netflix/Amazon)), compositing, and CG typically use only floating point. Some programs (like the industry-standard Nuke) actually cannot use anything else for intermediate calculations or processes. Everything is converted to linear-light 32-bit float, and all operations work in float.



    Originally Posted by chris319 View Post
    I have two instances of ffmpeg running, one for input and one for output. The basic idea is to read -> modify -> write. For some reason only alternate frames are being correctly modified. This results in a horrible flicker.

    Double-check your frame rates: the actual source frame rate, and the -r parameter for the 1st and 2nd ffmpeg instances.

    Or did you mean this is occurring "live" as you're making adjustments? If it's not a mismatched-framerate issue, it might be that your filters are too slow for realtime processing on your current HW and you're dropping frames.
    Last edited by poisondeathray; 30th Nov 2018 at 01:13.
  7. Everything is explicitly set to 59.94.

    The user makes his adjustments on a single, frozen frame, then clicks a button which begins the export process and he is no longer able to adjust the video.

    If it's not a mismatched framerate issue, it might be your filters are too slow for realtime processing for your current HW, and you're dropping frames.
    That may well be the case. It's not real time but must keep up with ffmpeg.

    There are 1280 x 720 pixels to deal with. If I have it do only rows 0 to 360 there is no flicker but only for the upper half of the picture. At any rate, this may be an insoluble problem which puts the kibosh on the project.

    The alternative is to run the file through ffmpeg, look at the file on a scope, go back and make adjustments, run ffmpeg again, and back and forth until the levels are right — very time consuming and far from interactive.
  8. Originally Posted by chris319 View Post
    Everything is explicitly set to 59.94
    It should be 60000.0/1001.0 i.e. 59.94005994005994005994005994006... those numbers are important...

    Perhaps you should use some timestamps to keep frame order?
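    The gap between the rounded and exact rates accumulates, which is why those digits matter - a small C helper to compute how many frames 59.94 drifts away from the exact 60000/1001 over a given duration:

```c
/* Frames of drift between a clock running at a rounded 59.94 fps and one
   running at the exact 60000/1001 fps, after `seconds` of playback. */
double drift_frames(double seconds)
{
    return seconds * (60000.0 / 1001.0 - 59.94);
}
```

    Over an hour that is only about 0.22 frames, but over a long capture the two clocks eventually disagree by whole frames - exactly the kind of mismatch that shows up as dropped or duplicated frames.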
  9. Originally Posted by poisondeathray View Post
    Originally Posted by jagabo View Post
    Real production software (and hardware) doesn't used floating point, of course, it's too slow.
    Higher-end visual effects production (including film / TV / internet (Netflix/Amazon)), compositing, and CG typically use only floating point.
    Sorry. Guess I'm behind the times.
  10. Originally Posted by chris319 View Post
    There are 1280 x 720 pixels to deal with. If I have it do only rows 0 to 360 there is no flicker but only for the upper half of the picture.
    So did you mean you applied filter(s) to only the top half, and only the top half "flickered"? Bottom half OK?

    If you read => no filters => write, does it still occur? (i.e. just pipe straight through to write)
    If you read => modify (but with nothing actually modified: apply the filters but set them to "zero" or whatever value makes them a no-op) => write, does it still occur?

    In a linear ffmpeg filter chain, a "slow" filter acts as a bottleneck, but the final exported/encoded file should be OK - you just get slower processing. So I'm wondering if something about the live aspect is introducing the problem.

    What is the pattern? You said "alternate frame" - is it exactly every 2nd frame, or was that just a rough description? Maybe upload a sample of an exported file to examine.
  11. So did you mean you applied filter(s) to only top half, and only top half "flickered" ? Bottom half ok ?
    Yep.

    If you read => no filters => write , does it still occur ? (ie just pipe to write)
    Well here's the tricky part. If the image is unmodified between frame writes, the video could be flickering between two identical images and you won't see it flickering. You don't see the flicker unless the image is modified between frame writes.

    I'm wondering if something about the live aspect is introducing the problem
    Interesting thought, but at this point in the program it is not waiting for a window event, e.g. a mouse click, etc.

    What is the pattern ? You said "alternate frame" - Is it exactly every 2nd frame, or was that just a rough description ?
    The latter, but it is logical as it reads/writes one frame at a time.

    I tried compiling and running Ted Burke's C code under Linux and still see some weirdness. Remember what I said about it flickering between identical images and you don't see the flicker.
  12. It should be 60000.0/1001.0 i.e. 59.94005994005994005994005994006... those numbers are important...
    That won't affect the execution speed of my code. My Sony camcorder has a frame rate of 59.94. I can't control that.
    What about the specific filters? What is being used? Do you have logic in some of them? e.g. many "auto" leveling and legalization filters can introduce flicker, because values are adjusted per frame instead of across a temporal average or range.

    Try a static application of a filter with a constant value - maybe a LUT to +20 all values or something. Nothing else: no other filters, no clipping or legalization applied. This will help rule out whether it's something in your filters or something about the pipe process.
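    Inside the frame loop of the pipe program, that static test could be as simple as a constant offset with a clamp - the same mapping on every frame, so the filter itself cannot flicker (the +20 value is just an example):

```c
/* Add a constant 20 to one 8-bit component, clamping at 255. */
unsigned char add20(unsigned char v)
{
    int t = v + 20;
    return (unsigned char)(t > 255 ? 255 : t);
}

/* Apply the constant offset to every byte of an RGB24 frame buffer. */
void add20_frame(unsigned char *buf, long nbytes)
{
    for (long i = 0; i < nbytes; ++i)
        buf[i] = add20(buf[i]);
}
```

    If this still flickers in the output, the problem is in the pipe/buffering, not in the filtering logic.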
  14. Originally Posted by chris319 View Post
    it reads/writes one frame at a time.
    Synchronously or asynchronously? If async you need to double (or triple) buffer.
  15. Originally Posted by chris319 View Post
    That won't affect the execution speed of my code. My Sony camcorder has a frame rate of 59.94. I can't control that.
    This is not about processing speed but about jitter, drift and related issues. How are you verifying the reported framerate? Commonly 59.94 works as an abbreviation or shortcut for 60000/1001 rather than a hard 59.94.
  16. It reads, modifies, then writes. There is no mechanism I know of that enables synchronization with ffmpeg.

    The flickering occurs even if I reduce the filter to the simplest commands.
    Last edited by chris319; 30th Nov 2018 at 14:12.
  17. Originally Posted by chris319 View Post
    It reads, modifies, then writes. There is no mechanism I know of that enables synchronization with ffmpeg.
    I'm not sure you understand. For example, with asynchronous writes control is returned to the caller (your program) before the write has completed. If you are using a single frame buffer the next read into that buffer may overwrite some of your processed data before the write has completed. With synchronous writes control won't return to your program until the write operation has completed.

    I don't know exactly what you are doing (live camcorder to the screen? file to file? something else?) and I've never used the ffmpeg libs, but I suspect they support both sync and async reads and writes.
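    A toy C sketch of the double-buffered arrangement described above: the reader fills one buffer while the previously filled one is handed to the writer, so an in-flight write is never overwritten by the next read. demo_read/demo_write are stand-ins for the real pipe I/O (hypothetical names, here just a 5-frame counter):

```c
#define FRAME_BYTES 16              /* tiny frames, just for the demo */

static unsigned char buf[2][FRAME_BYTES];
static int frames_left = 5;         /* demo source: 5 frames then EOF */
static int writes_done = 0;

static int  demo_read(unsigned char *dst)  { (void)dst; return frames_left-- > 0; }
static void demo_write(unsigned char *src) { (void)src; ++writes_done; }

/* Alternate between the two buffers; returns the number of frames pumped. */
int pump(int (*read_frame)(unsigned char *), void (*write_frame)(unsigned char *))
{
    int cur = 0, n = 0;
    while (read_frame(buf[cur]))   /* fill one buffer...                  */
    {
        write_frame(buf[cur]);     /* ...hand it to the writer...         */
        cur ^= 1;                  /* ...and read the next into the other */
        ++n;
    }
    return n;
}
```

    With truly asynchronous writes the swap alone isn't sufficient - you would also need to wait for the previous write on a buffer to complete before reusing it - but this is the basic shape.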
  18. I tried double buffering. I set up a buffer for reading and another one for writing and no joy. I'll go back and work with it some more.

    It's all file based. It reads an input file and writes to another output file. You can make video adjustments interactively on a frozen frame, then click a button and ffmpeg begins importing/exporting.

    This is at the core of the program:

    https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/
    I built the teapot program (Pelles C) but had to make a few small modifications. popen() and pclose() had to be changed to _popen() and _pclose(). The read mode for the input pipe had to be changed from "r" to "rb", and the write mode for the output pipe had to be changed from "w" to "wb". These allow for binary input and output -- otherwise non-text characters aren't passed properly.

    fread() and fwrite() in C are synchronous so those shouldn't be a problem. But just as a test, try adding a fflush(pipeout); right after the fwrite. See if that makes any difference.
  20. Could you copy and paste your source code?

    I implemented Ted's code using GCC on Ubuntu in a virtual machine.

    I'm adroit at doing the GUI for this program in PureBasic but I'm clueless about writing a GUI in C or C++. Maybe Lazarus? I don't know if PureBasic has the equivalents of the changes you made, i.e. whether its reading and writing functions are synchronous.

    Thank you.
  21. Originally Posted by chris319 View Post
    Could you copy and paste your source code?
    It's identical to his except for the minor changes I mentioned. No GUI:

    Code:
    //
    // Video processing example using FFmpeg
    // Written by Ted Burke - last updated 12-2-2017
    //
     
    #include <stdio.h>
     
    // Video resolution
    #define W 1280
    #define H 720
     
    // Allocate a buffer to store one frame
    unsigned char frame[H][W][3] = {0};
     
    int main(int argc, char **argv)
    {
        int x, y, count;
         
        // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
        FILE *pipein = _popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "rb");
        FILE *pipeout = _popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "wb");
         
        // Process video frames
        while(1)
        {
            // Read a frame from the input pipe into the buffer
            count = fread(frame, 1, H*W*3, pipein);
             
            // If we didn't get a frame of video, we're probably at the end
            if (count != H*W*3) break;
             
            // Process this frame
            for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
            {
                // Invert each colour component in every pixel
                frame[y][x][0] = 255 - frame[y][x][0]; // red
                frame[y][x][1] = 255 - frame[y][x][1]; // green
                frame[y][x][2] = 255 - frame[y][x][2]; // blue
            }
             
            // Write this frame to the output pipe
            fwrite(frame, 1, H*W*3, pipeout);
        }
         
        // Flush and close input and output pipes
        fflush(pipein);
        _pclose(pipein);
        fflush(pipeout);
        _pclose(pipeout);
    }
    Since you are creating a video file, try writing the frame number (no other processing) on each frame before writing it. Open the resulting video and verify you're getting each frame once, in order, with no skipped frames.
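    Lacking a font renderer, one way to stamp the frame number is to paint a white bar whose length encodes the counter into the top-left corner of each RGB24 frame - stepping through the output then shows whether frames arrive once and in order. W matches the teapot code; the bar geometry is arbitrary:

```c
#define W 1280                      /* frame width, as in the teapot code */

/* Bar length in pixels for a given frame number; wraps every 100 frames. */
int bar_len(int frame_no) { return ((frame_no % 100) + 1) * 4; }

/* Paint an 8-pixel-tall white bar into the top-left of an RGB24 frame. */
void stamp_frame(unsigned char frame[][W][3], int frame_no)
{
    int len = bar_len(frame_no);
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < len; ++x)
            for (int c = 0; c < 3; ++c)
                frame[y][x][c] = 255;
}
```

    Calling stamp_frame(frame, count++) just before the fwrite in the loop is enough: a dropped or repeated frame shows up as the bar skipping or freezing instead of growing steadily.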
  22. I tried using this in PureBasic in various arrangements:

    https://www.purebasic.com/documentation/process/availableprogramoutput.html

    One problem is that the program stalls when it reaches the end of the file because the above function returns 0 and my program thinks it needs to wait for some data. I also tried setting up a loop that would process x-number of frames.

    I also went back to my double-buffered scheme. No success so far. It's nice and clean if the read buffer is written to disk, which is cheating - the write buffer is the one that's supposed to be written to disk. Also, if the program is flickering between two identical images, you won't see the flicker. This is all in progressive scan.

    Maybe I need to use a callback for this? I'm in over my head here.
  23. Originally Posted by chris319 View Post
    One problem is that the program stalls when it reaches the end of the file because the above function returns 0 and my program thinks it needs to wait for some data.
    Why is that a problem? Does it return 0 at other times? If not, use a structure like this:

    Code:
    # open pipes
    
        while (AvailableProgramOutput(inPipe))
        {
            ReadProgramData(inPipe)
            ModifyData()
            WriteProgramData(outPipe)
        }
    
    # close pipes
  24. I think it's a problem because the output file isn't being properly closed.
  25. If you have to kill the program it's not surprising the output file is corrupt -- depending on the container. As a test try outputting a transport stream. Those are designed to withstand abuse like that.

    Why can't you use a structure like I outlined above? It should avoid your problem you're having with the program locking up.
  26. Originally Posted by jagabo View Post
    If you have to kill the program it's not surprising the output file is corrupt -- depending on the container. As a test try outputting a transport stream. Those are designed to withstand abuse like that.

    Why can't you use a structure like I outlined above? It should avoid your problem you're having with the program locking up.
    I will.
    Last edited by chris319; 2nd Dec 2018 at 04:42.
  27. Why can't you use a structure like I outlined above?
    Because it causes ffmpeg to stop abruptly. Apparently ReadProgramData() sometimes returns 0 and this terminates the loop.
  28. Originally Posted by chris319 View Post
    Why can't you use a structure like I outlined above?
    Because it causes ffmpeg to stop abruptly. Apparently ReadProgramData() sometimes returns 0 and this terminates the loop.
    That's unfortunate. Another possibility is to call AvailableProgramOutput(inPipe) repeatedly with a timeout. If it hasn't responded in, say, one second, you can assume no more data is coming and shut the program down gracefully.
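    A sketch of that timeout idea in C: poll an "is data available?" function and give up after a quiet period instead of blocking forever. available() and msleep() are hypothetical stand-ins for AvailableProgramOutput() and a millisecond sleep; the stubs simulate a source that becomes ready after a few polls:

```c
/* Poll until data is available or timeout_ms of quiet has elapsed.
   Returns 1 if data is ready, 0 on timeout (assume EOF and shut down). */
int wait_for_data(int (*available)(void), void (*msleep)(int),
                  int timeout_ms, int poll_ms)
{
    int waited = 0;
    while (!available())
    {
        if (waited >= timeout_ms)
            return 0;              /* quiet too long: give up gracefully */
        msleep(poll_ms);
        waited += poll_ms;
    }
    return 1;                      /* data is ready to read */
}

/* Demo stubs: source ready on the 4th poll, and one that never is. */
static int  calls = 0;
static int  avail_after_3(void) { return ++calls > 3; }
static int  never(void)         { return 0; }
static void no_sleep(int ms)    { (void)ms; }
```

    The read loop then becomes "while (wait_for_data(...)) { read; modify; write; }", exiting cleanly so the output pipe gets closed properly.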
    Quote Quote  
  29. Another thought is to use a for loop for a specific number of frames.