VideoHelp Forum
  1. This article will give you an idea of what I'm trying to do:

    https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/

I'm trying to read video frames into a bitmap and write the bitmap out to a Prores file. I've got it working with mp4 on a Linux VM but am having trouble writing the frames out to Prores.

    Opening an input pipe works fine:

    Code:
    FILE *pipein = popen("ffmpeg -i KodakChart.MP4  -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    The problem seems to lie in the output pipe:
    Code:
    FILE *pipeout = popen("ffmpeg  -y  -i KodakChart.MP4 -c:v prores -profile:v 3 -c:a copy  output.mov", "w");
The above command gives me a playable Prores file with audio, but it merely re-encodes the video from the original file and bypasses the bitmap.

    Here is the code that writes the bitmap named "frame":
    Code:
    fwrite(frame, 1, (H*W*3)/2, pipeout);
    I have tried the following without success:

    Code:
    //FILE *pipeout = popen("ffmpeg  -y           -f rawvideo  -vcodec rawvideo                -c:v prores -profile:v 3 -c:a copy  output.mov", "w");
    Code:
    //FILE *pipeout = popen("ffmpeg  -y  -f rawvideo  -vcodec rawvideo -pix_fmt yuv422p  -c:v prores -profile:v 3 -c:a copy  output.mov", "w");
    Any help is appreciated. Thank you.
  2. Why would you want to do this? What are you trying to do exactly?

    There is potential for problems with pixel format conversions; i.e. whatever the source is - 8-bit RGB bitmap, 10-bit 4:2:2 YUV prores... matrix issues, rounding errors.

    Why is it necessary to read from the physical bmp for this? Are you doing something to the bmp sequence in another program? Why not read from the pipe before the bmp and specify multiple outputs (e.g. if you wanted to include the RGB conversion before the prores write)? You could use filter_complex with two splits, for example.
  3. Why is it necessary to read from the physical bmp for this? Are you doing something to the bmp sequence in another program?
    I'm modifying the pixels elsewhere in the same program.

    Ouch! Forgot about the 8- to 10-bit aspect of it.
  4. Originally Posted by chris319
    Why is it necessary to read from the physical bmp for this? Are you doing something to the bmp sequence in another program?
    I'm modifying the pixels elsewhere in the same program.

    Ouch! Forgot about the 8- to 10-bit aspect of it.

    If you're performing operations in other programs, then there is a timing issue here as well; You can't access a bmp in the other program until it's written out, and presumably you don't want to read the bmp sequence until it's been modified in the other program (and the modified version written out by the other program)

    So you might as well just read a bmp sequence separately later (assuming you don't care about the pixel format, bit depth, matrix conversions), i.e. just separate the two stages.
  5. I'm modifying the pixels elsewhere in the same program.
    As I said previously, the bitmap modifications are taking place in the same program, after the bitmap is read in and before it is written out, just like in the article I linked to in the O.P.
  6. Originally Posted by chris319
    I'm modifying the pixels elsewhere in the same program.
    As I said previously, the bitmap modifications are taking place in the same program, after the bitmap is read in and before it is written out, just like in the article I linked to in the O.P.

    What modifications? If you're doing exactly the same modification, i.e. channel inversion in 8-bit RGB, why not just use an ffmpeg filter? (I know this doesn't address your original question but it seems much easier)
  7. I know this doesn't address your original question but it seems much easier
    I realize that, but there is a reason I'm doing it this way which I don't care to divulge.
  8. For the output-to-prores pipe, you have -i KodakChart.MP4, which means take the original video as input.

    Usually stdin is just -

    so

    Code:
    -i -
    Raw video pipes need all their characteristics specified: pixel format, dimensions, fps. A receiving rawvideo pipe would look something like

    Code:
    ffmpeg -f rawvideo -s 1920x1080 -pix_fmt rgb24 -r 24  -i - .....
    Adjust your dimensions and fps to whatever yours are; also, your pixel format might be bgr24.
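    To make that concrete, here is a minimal C sketch of assembling that receiving command (the helper name and the output filename are illustrative, not from this thread); the essential change from the original command is `-i -`, which makes ffmpeg read raw frames from stdin instead of re-opening the source file:

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <assert.h>

    /* Build the receiving ffmpeg command for a raw RGB pipe.
       The key part is "-i -": read raw frames from stdin. */
    static void build_rawvideo_cmd(char *buf, size_t len,
                                   int w, int h, double fps, const char *out)
    {
        snprintf(buf, len,
                 "ffmpeg -y -f rawvideo -s %dx%d -pix_fmt rgb24 -r %g -i - "
                 "-c:v prores -profile:v 3 %s",
                 w, h, fps, out);
    }
    ```

    The resulting string is what you would hand to popen(cmd, "w"), as in the original code. Note there is no -c:a copy here: a raw video pipe carries no audio stream to copy, so bringing the audio back in is a separate step.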
  9. Here is the complete code. It is strictly experimental and may not even work. It disregards the 8-bit/10-bit difference.

    The idea is to clip off all luminance pixels > 235 to 235.
    Code:
    if (lum[y*W+x] > 235) lum[y*W+x] = 235;
    Code:
    // Video processing example using FFmpeg
    // Written by Ted Burke – last updated 12-2-2017
    // Now works in YUV space
    // Outputs to prores
    // To compile: gcc Ted2.c -o Ted2
    // Note: make sure .MP4 file extension matches case
     
    #include <stdio.h>
     
    // Video resolution
    #define W 1280
    #define H 720
     
    // Allocate a buffer to store one frame
    unsigned char frame[((H)*(W)*3)/2];
     
    int main(void)
    {
    int x, y, count;
    
        // Create a pointer for each component's chunk within the frame
        // Note that the size of the Y chunk is W*H, but the size of both
        // the U and V chunks is (W/2)*(H/2). i.e. the resolution is halved
        // in the vertical and horizontal directions for U and V.
    
    unsigned char *lum;//, *u, *v;
    unsigned char *u, *v;
    
        lum = frame;
        u = frame + H*W;
        v = u + (H*W/4);
     
    FILE *pipein = popen("ffmpeg -i KodakChart.MP4  -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    
    FILE *pipeout = popen("ffmpeg  -y     -f rawvideo   -s 1280x720   -pix_fmt rgb24       -vcodec rawvideo    -r 59.94     -i KodakChart.MP4 -c:v prores -profile:v 3 -c:a copy  output.mov", "w");
    
    // Process video frames
    while(1)
        {
            // Read a frame from the input pipe into the buffer
            // Note that the full frame size (in bytes) for yuv420p
            // is (W*H*3)/2. i.e. 1.5 bytes per pixel. This is due
            // to the U and V components being stored at lower resolution.
            count = fread(frame, 1, (H*W*3)/2, pipein);
             
            // If we didn’t get a frame of video, we’re probably at the end
            if (count != (H*W*3)/2) break;
     
    // Process this frame
    for (y=0 ; y<H ; y++)
            {
                for (x=0 ; x<W ; x++)
                {
    
    if (lum[y*W+x] > 235) lum[y*W+x] = 235;
    
                }
            }
     
      // Write this frame to the output pipe
            fwrite(frame, 1, (H*W*3)/2, pipeout);
        }
     
        // Flush and close input and output pipes
        fflush(pipein);
        pclose(pipein);
        fflush(pipeout);
        pclose(pipeout);
    }
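    One thing worth flagging in the code above: the input pipe requests -pix_fmt rgb24 (which delivers W*H*3 bytes per frame), while the buffer, the fread size, and the Y/U/V pointers all assume yuv420p (W*H*3/2 bytes). Here is a self-contained sketch of the yuv420p arithmetic and the luma clip, runnable without ffmpeg (the synthetic frame is made-up test data, not from the thread):

    ```c
    #include <assert.h>
    #include <string.h>

    #define W 1280
    #define H 720
    #define YUV420_FRAME ((W*H*3)/2)  /* Y plane (W*H) + quarter-size U and V */
    #define RGB24_FRAME  (W*H*3)      /* what "-pix_fmt rgb24" actually delivers */

    static unsigned char frame[YUV420_FRAME];

    /* Clip every luma sample to broadcast white (235). For yuv420p the
       Y plane is simply the first W*H bytes of the frame buffer. */
    static void clip_luma(unsigned char *f)
    {
        for (int i = 0; i < W*H; i++)
            if (f[i] > 235) f[i] = 235;
    }
    ```

    In the full program the fix would be to either request -pix_fmt yuv420p on the input pipe (matching this layout) or size everything for rgb24; mixing the two means each fread grabs only part of a frame.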
  10. Not sure about Linux, but for Windows piping you'd also have to map the rawvideo pipe, and map which stream the audio is coming from.

    Another way to clip would be to use -vf lutrgb or lutyuv. The filter also works at >8-bit depths now.
  11. Another way to clip would be to use -vf lutrgb or lutyuv . The filter also works at >8 bit depths now
    Good idea.

    I've tried that and it doesn't work very well. It doesn't hard clip at the specified values. ffmpeg has issues with video levels. Also, when you re-encode to, let's say, H.264, it puts the artifacts >235 back in.
  12. lutyuv and limiter filters do clip at specified values; perhaps you do not know how to use filters.

    Non-lossless encoding can produce various artifacts; you cannot avoid that.
  13. If you wish to limit pixels like that, use -vf scale=in_range=pc:out_range=tv
    Otherwise, the command will do nothing.
    This compresses the video levels, which I do not want to do.

    If you just want to clamp pixels, use lut or limiter filter.
    I want to clip them without altering the levels, which lutyuv seems to have trouble doing.
    Last edited by chris319; 14th May 2018 at 14:45.
  14. Originally Posted by chris319
    Perhaps you do not know how to correctly measure video levels.
    I assure you, richardpl does know how to measure video levels. He is an ffmpeg developer.

    lutyuv does work, I've verified this in the past for both 8bit and 10bit



    If there are problems, then document the issues or post a detailed bug report so they can be fixed. Rounding errors are expected for some operations, and lossy codecs can produce excursions as well.
  15. If there are problems, then document the issues or post a detailed bug report so they can be fixed.
    Already have.
  16. Originally Posted by chris319
    If there are problems, then document the issues or post a detailed bug report so they can be fixed.
    Already have.
    Was it the invalidated ticket, or did you post another?
  17. Have a look at the scope shot below. Here is the code used to generate it:

    Code:
    ffmpeg -i stairstep.mp4 -y -vf lutyuv=y='clip(val,16,235)' outfile.mp4
    I don't see any clipping. I see video (green traces) above and below 16 and 235. Is there something wrong with my code? Shouldn't it clip at 16 and 235?

    http://www.chrisnology.info/videos/Cliptest.jpg
  18. The TV levels filter works as expected. It compresses the video levels to the range 16 - 235, thus altering the levels.

    I'm looking to clip the video without altering the levels.

    Code:
    bin\ffmpeg -i stairstep.mp4 -y -vf "scale=in_range=pc:out_range=tv" -c:a copy outfile.mp4
    http://www.chrisnology.info/videos/TVLevels.jpg

    The problem is that these ringing artifacts pop up when you encode the video to a deliverable format, be it H.264, H.265 or even prores. They can be tamed to an extent by applying some "unsharpening" and removing some of the high-frequency content.
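    The difference between the two approaches can be sketched with a couple of lines of integer arithmetic (a simplification of what the filters do internally, not their actual code): range compression remaps every sample into 16-235, while a hard clip leaves in-range samples untouched:

    ```c
    #include <assert.h>

    /* Roughly what "scale=in_range=pc:out_range=tv" does: remap EVERY
       full-range value into 16-235 (rounding simplified here). */
    static int compress_tv(int v) { return 16 + (v * 219 + 127) / 255; }

    /* Roughly what "lutyuv=y='clip(val,16,235)'" does: touch ONLY values
       outside 16-235, leaving everything in between alone. */
    static int clip_tv(int v) { return v < 16 ? 16 : (v > 235 ? 235 : v); }
    ```

    This is why the scale command "compresses the levels": mid-gray moves under compression, which clipping never does.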
  19. Originally Posted by chris319
    Have a look at the scope shot below. Here is the code used to generate it:

    Code:
    ffmpeg -i stairstep.mp4 -y -vf lutyuv=y='clip(val,16,235)' outfile.mp4
    I don't see any clipping. I see video (green traces) above and below 16 and 235. Is there something wrong with my code? Shouldn't it clip at 16 and 235?

    http://www.chrisnology.info/videos/Cliptest.jpg
    Possible explanations: an issue with your source video, or testing method, or scopes? It definitely works here.

    I can check your video if you want to upload it




    Originally Posted by chris319

    I'm looking to clip the video without altering the levels.
    Understood; then you want to use lutyuv or limiter

    The problem is that these ringing artifacts pop up when you encode the video to a deliverable format, be it h.264, h.265 or even prores. They can be tamed to an extent by applying some "unsharpening" and removing some of the high-frequency content.
    You can use lossless H.264 or H.265, or other lossless formats - but they are typically not used for mezzanine or intermediate deliverables. The final end delivery format will have some of those lossy issues anyway.
  20. 8bit YUV full range test pattern before , and lutyuv clipping after
    [Attachment: lutyuv.jpg]
  21. If you didn't know, ffmpeg has its own waveform monitor: http://trac.ffmpeg.org/wiki/WaveformMonitor
  22. an issue with your source video, or testing method, or scopes ?
    Here it is with no filtering at all:

    Code:
    ffmpeg -i stairstep.mp4 -y outfile.mp4
    http://www.chrisnology.info/videos/Nofilter.jpg

    Nearly perfect reproduction of the video levels. No problem with source video, testing method or scope.

    So the problem is either with my ffmpeg filter syntax or with ffmpeg itself.
  23. Originally Posted by chris319
    an issue with your source video, or testing method, or scopes ?
    Here it is with no filtering at all:

    Code:
    ffmpeg -i stairstep.mp4 -y outfile.mp4
    http://www.chrisnology.info/videos/Nofilter.jpg

    Nearly perfect reproduction of the video levels. No problem with source video, testing method or scope.

    So the problem is either with my ffmpeg filter syntax or with ffmpeg itself.


    Please upload your video. Likely it's either a problem with your video, or you're not actually measuring YUV, but RGB in a standard range conversion
  24. Here is the code used to create the video from a bmp.

    Code:
    bin\ffmpeg -y  -loop 1 -t 10  -i stairstep.bmp  -pix_fmt yuv420p  -crf 17  -c:v libx264  -vf scale=out_color_matrix=bt709  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -r 59.94  -c:a copy  test_pattern.mp4
    Please upload your video. Likely it's either a problem with your video, or you're not actually measuring YUV, but RGB in a standard range conversion
    You miss the point. Near-perfect reproduction with no lutyuv filter. It just doesn't clip.

    Please post the code used for your clipping and point out any differences between your code and mine.

    http://www.chrisnology.info/videos/outfile.mp4

    The scope is beyond reproach. It has been thoroughly and rigorously calibrated and tested with Matlab and it passes muster. Again, it perfectly reproduces the gray scale.
  25. Originally Posted by chris319
    Here is the code used to create the video from a bmp.

    Code:
    bin\ffmpeg -y  -loop 1 -t 10  -i stairstep.bmp  -pix_fmt yuv420p  -crf 17  -c:v libx264  -vf scale=out_color_matrix=bt709  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -r 59.94  -c:a copy  test_pattern.mp4
    Please upload your video. Likely it's either a problem with your video, or you're not actually measuring YUV, but RGB in a standard range conversion
    You miss the point. Near-perfect reproduction with no lutyuv filter. It just doesn't clip.

    Please post the code used for your clipping and point out any differences between your code and mine.

    http://www.chrisnology.info/videos/outfile.mp4

    The scope is beyond reproach. It has been thoroughly and rigorously calibrated and tested with Matlab and it passes muster. Again, it perfectly reproduces the gray scale.


    The reason is that there is nothing to clip. It's not a full-range video. Your RGB-to-YUV conversion for the test video uses limited range (RGB 0,0,0-255,255,255 gets "mapped" to Y 16-235, CbCr 16-240).

    If nothing happens after lutyuv clipping, the most likely explanation is that your video has nothing to clip - it is limited-range Y (16-235), and you are measuring standard-range converted RGB values. That is exactly what is happening here.

    Technically, a Y' waveform is supposed to measure Y values, not their RGB-converted values.

    If you look at a real Y' waveform, there are no values <16 or >235 in your video.
  26. First problem: the ffmpeg scope is confining the levels to 16 - 235. That's not what's in the file. The stairstep ranges from 0 - 255.

    http://www.chrisnology.info/videos/ffmpegscope.jpg

    Here is the PureBasic code used to generate the stairstep. Now you've got all the evidence.

    Code:
    Global screenWd, screenHt
    
    Procedure DrawGrayScale()
          StartDrawing(ImageOutput(1))
               Box(0,0,screenWd,ScreenHt, 0)
    
            Steps = 16
            X = 0
            Lum.f = 0
            LumStep.f = 255 / (Steps-1)
            XStep = ScreenWd/Steps
            For ct = 1 To Steps
              Box(X,0,XStep,ScreenHt, RGB(Lum,Lum,Lum))
              Lum + LumStep
              X   + XStep
            Next
            StopDrawing()
    EndProcedure  
    
    CreateImage(1,1280,720)
    
    ExamineDesktops()
          ScreenWd = DesktopWidth(0):ScreenHt = DesktopHeight(0)
          OpenWindow(0,0,0, ScreenWd,ScreenHt, #Null$,#PB_Window_BorderLess)
          AddKeyboardShortcut(n, #PB_Shortcut_Escape, #PB_Shortcut_Escape) ;"ESCAPE" KEY TO QUIT PROGRAM
          DrawGrayScale(): ShowCursor_(#False)
    
    SaveImage(1,"stairstep.bmp")
    StartDrawing(WindowOutput(0))
    DrawImage(ImageID(1),0,0)
    StopDrawing()
    
        Repeat
          event = WaitWindowEvent()
          If event = #PB_Event_Repaint
            DrawGrayScale()
            
          ElseIf event = #PB_Event_Menu ;KEYBOARD INPUT
            menuItem = EventMenu()
           
            Select menuItem
               
              Case #PB_Shortcut_Escape
                Break
               
            EndSelect
           
          EndIf
        ForEver
        End
  27. Clearly you don't understand how the lum. component of a video signal is derived.
  28. Your source video is limited range; it's already standard range, with nothing to clip. (Just look at the command line you used: it converts RGB to YUV with limited range. That's already proof enough.)

    Your scope actually measures RGB values if it shows the 0 and 255 values.

    You can verify this with multiple methods, not necessarily ffmpeg (I like using other methods to verify). For example, you can verify this with a broadcast NLE or hardware SDI out.

    I am 100% certain of these facts.
  29. Code:
    Steps = 16
            X = 0
            Lum.f = 0
            LumStep.f = 255 / (Steps-1)
            XStep = ScreenWd/Steps
            For ct = 1 To Steps
              Box(X,0,XStep,ScreenHt, RGB(Lum,Lum,Lum))
              Lum + LumStep
              X   + XStep
            Next
    The stairstep is 0 to 255. Here's the source code.

    Code:
    ffmpeg -y  -loop 1 -t 10  -i stairstep.bmp  -pix_fmt yuv420p  -crf 17  -c:v libx264  -vf scale=out_color_matrix=bt709  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -r 59.94  -c:a copy  test_pattern.mp4
    Code:
     it converts RGB to YUV with limited range
    Where do you see a filter that would limit levels to 16-235? It's not there.

    Your scope actually measures RGB values if it shows the 0 and 255 values.
    No, it calculates lum. from RGB. I hope you know that YUV can range from 0 to 255.

    Now view the video and count the steps (there are 16 steps, 17 decimal values apart). Now take an eyedropper program or use ColorZilla to measure the colors. They range all the way from 0 to 255.

    Your ffmpeg scope is hiding everything outside the range of 16 - 235, but you're resistant to the notion that it's basically lying to you.

    Where do you think the lum. component comes from? It's calculated from the R, G and B components. Kr, Kg and Kb are the lum. coefficients. I'm surprised you don't already know this.

    Code:
    Y = R * Kr + G * Kg + B * Kb
    I will scour the docs and try to find out if it's possible to get the scope to show the 0-255 range.
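    For reference, the limited-range Rec.709 conversion under discussion can be sketched in a few lines of C (coefficients from Rec.709; the rounding is a simplification): full-range RGB is scaled into Y 16-235 during the conversion itself, which is why peak white lands at 235 before any filter runs:

    ```c
    #include <assert.h>
    #include <math.h>

    /* Rec.709 luma from 8-bit RGB with a limited ("standard") range
       conversion:  Y' = 16 + 219 * (Kr*R + Kg*G + Kb*B) / 255  */
    static int y709_limited(int r, int g, int b)
    {
        double y = 0.2126*r + 0.7152*g + 0.0722*b;
        return (int)lround(16.0 + 219.0 * y / 255.0);
    }
    ```

    A full-range conversion would drop the 16/219 scaling and use lround(y) directly, which is what maps RGB 0-255 onto Y 0-255.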
  30. Originally Posted by chris319

    The stairstep is 0 to 255. Here's the source code.
    Your RGB test pattern is probably OK. That's not what the concern is. The concern is the RGB=>YUV conversion, and your scopes

    Code:
    ffmpeg -y  -loop 1 -t 10  -i stairstep.bmp  -pix_fmt yuv420p  -crf 17  -c:v libx264  -vf scale=out_color_matrix=bt709  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -r 59.94  -c:a copy  test_pattern.mp4
    Code:
     it converts RGB to YUV with limited range
    where do you see a filter that would limit levels to 16 - 235? It's not there.
    I never said there was a filter. I said that commandline is for a limited (standard) range conversion, not a full range conversion

    Read up on ITU Rec 709 conversion. If I recall, you coded some RGB<=>YUV functions. Might be a good idea to revisit
    https://www.itu.int/rec/R-REC-BT.709/en
    https://en.wikipedia.org/wiki/Rec._709

    Yes, you started with a nice RGB test pattern, 0-255. But your RGB conversion to YUV gets "mapped" to Y 16-235, CbCr 16-240. So you have no values Y<16 or Y>235. To map RGB 0-255 to YUV 0-255 you have to use a "full range" conversion.



    Your scope actually measures RGB values if it shows the 0 and 255 values.
    No, it calculates lum. from RGB. I hope you know that YUV can range from 0 to 255.
    This tells me it's measuring RGB values. Your source file only started with Y 16-235. If it's reading 0-255, it's wrong. Likely it's doing the Rec conversion back to RGB and reading that, i.e. the reverse transform of YUV back to RGB, not the actual YUV values.





    Your ffmpeg scope is hiding everything outside the range of 16 - 235, but you're resistant to the notion that it's basically lying to you.
    I'm not resistant, because I wasn't using an ffmpeg scope. I use multiple tools to confirm things. Look at the screenshot I posted. That is a broadcast NLE scope. You can confirm with other professional programs or an SDI hardware scope - they all say the same thing.

    This is your outfile.mp4 source. Notice it's limited range: there are no values >100 or <0 IRE. Compare to the full-range one I posted above before applying lutyuv. Values are present <0 IRE and >100 IRE (which correspond to Y=16 and Y=235 in 8-bit).
    [Attachment: outfile.jpg]

    But in this case the ffmpeg waveform is correct too (it's not always; some formats have decoding and levels issues, or certain flags can confuse the issue). This is a true full-range YUV video, with lutyuv clipping applied. Your outfile.mp4 shows no values <0 or >100. It's limited range.
    [Attachment: ffmpeg fullrange vs lutyuv clip.jpg]



    Again, I'm 100% certain about this.


