VideoHelp Forum
  1. Here is the documentation for the ffmpeg waveform monitor:

    https://ffmpeg.org/ffmpeg-filters.html#waveform

    The closest thing I could find that might set the range is this:

    scale, s
    Set scale used for displaying graticule.

    ‘digital’
    Note that in all of your scope shots there are no graticule lines for digital 0 or 255.

your RGB conversion to YUV gets "mapped" to Y 16-235, CbCr 16-240
    Again, show me in the ffmpeg docs where "TV range" is the default. Unless you do that it's getting pretty tiresome arguing with someone who can't see what's in front of their face. If you play the video I uploaded for you then you will see 16 bars ranging in value from 0 to 255. For the second time, use an eyedropper program or Colorzilla program to measure the values.

    I don't have ffmpeg on this computer but later on I'll be able to check this and try to get it to display digital 0 - 255.

    that commandline is for a limited (standard) range conversion, not a full range conversion
    Show me in the docs where "TV range" is the default, which is what you're contending.

    Read up on ITU Rec 709 conversion. If I recall, you coded some RGB<=>YUV functions. Might be a good idea to revisit
I've been all over the BT.709 spec. You should recognize that as the equation for luma: Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'. It is for RGB -> Y. Look at the spec and see for yourself. Page 4, section 3, item 3.2. https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.709-6-201506-I!!PDF-E.pdf
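For reference, that equation in code form (a minimal sketch of the item 3.2 formula, full-range non-linear R'G'B' in, no quantization offsets; the function name is made up):

Code:
// Rec. 709 luma from non-linear R'G'B', per R-REC-BT.709 item 3.2
#include <stdio.h>

static double luma709(double r, double g, double b)
{
    // E'_Y = 0.2126 E'_R + 0.7152 E'_G + 0.0722 E'_B
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

int main(void)
{
    printf("white:    %.1f\n", luma709(255, 255, 255)); // 255.0 (coefficients sum to 1)
    printf("mid grey: %.1f\n", luma709(128, 128, 128)); // 128.0
    return 0;
}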
  2. Originally Posted by chris319 View Post

    Note that in all of your scope shots there are no graticule lines for digital 0 or 255.
    Look at the broadcast waveform IRE 0, IRE 100

Just use some common sense: step back and think about why clipping is not working. There are no values to clip!



your RGB conversion to YUV gets "mapped" to Y 16-235, CbCr 16-240
    Again, show me in the ffmpeg docs where "TV range" is the default. Unless you do that it's getting pretty tiresome arguing with someone who can't see what's in front of their face. If you play the video I uploaded for you then you will see 16 bars ranging in value from 0 to 255. For the second time, use an eyedropper program or Colorzilla program to measure the values.
Normal, standard range is the default. That's why it's called "standard". Show me where "full range" is the default. You have to specify full range to get full range.

Hint: when you use an eyedropper, are you looking at RGB values or YUV values? When you "play back" a YUV video, are you looking at YUV values? No - it gets converted to RGB for display.

When you "see" something, it's been converted to RGB for display. There are 4 common ways that can happen: Rec 601, Rec 709, PC 601, PC 709. The "PC" versions are full range. If it gets converted back to RGB with the wrong matrix, you get the wrong appearance (wrong colors if 601 vs. 709 is mixed up, wrong levels if PC vs. Rec is).

A waveform monitor looks at Y values, not RGB values (or at least it's supposed to).
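To make that concrete, here is a minimal C sketch (the function names are mine, not from any library) of what a greyscale pixel (Cb=Cr=128) decodes to. A limited-range ("Rec") decode stretches Y 16-235 to RGB 0-255, which is why an eyedropper on the display can read 0 and 255 even though the YUV data never leaves 16-235:

Code:
// Sketch: greyscale Y' -> R'G'B' under limited vs. full range decode
#include <stdio.h>

static int rec_decode(int y)   // limited range: Y 16-235 -> RGB 0-255
{
    int v = (int)((y - 16) * 255.0 / 219.0 + 0.5);
    return v < 0 ? 0 : v > 255 ? 255 : v;   // out-of-range Y clamps
}

static int pc_decode(int y)    // full range: identity for a grey pixel
{
    return y;
}

int main(void)
{
    int y;
    for (y = 0; y <= 255; y += 16)
        printf("Y=%3d  rec->RGB=%3d  pc->RGB=%3d\n",
               y, rec_decode(y), pc_decode(y));
    return 0;
}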




    I'm 100% certain. Not 99%.
  3. Originally Posted by chris319 View Post
    Note that in all of your scope shots there are no graticule lines for digital 0 or 255.
IRE 0 to 100 is the same thing as Y = 16 to 235. You can change the "look" if you want.

This is a YC waveform (shown here without the "C", since it's a greyscale pattern)

    outfile.mp4
[Image: y_waveform_8bit_outfile.jpg]

    true full range example
[Image: y_waveform_8bit_fullrange.jpg]

Obviously, if I push the "clamp signal" button, nothing will happen to outfile.mp4, because it's limited range. There are no values below Y=16 or above Y=235. That is why lutyuv is doing nothing when you set it to clip <16, >235.

Note that "previews" are not always accurate, because they are RGB-converted representations of the YUV data. And there are many processes, settings, and color profiles that can affect how a preview is rendered. But the waveform is supposed to be measuring the YUV data directly. It's one step closer to the true values of what you're measuring.
4. Member Cornucopia
    Aw, snap!

pdr & richardpl for the win. I concur. The ffmpeg conversion script is using bt709, with limited range by default.
Plus, this Colorzilla app reads the reconverted RGB screen, as was pointed out, NOT any kind of YUV. Go to the site: it shows RGB values and HSV values.

    Scott
  5. Back at my home computer.

Here is a C-language program written by Ted Burke and modified by me. It examines the Y component of the YUV samples and keeps track of the maximum and minimum luma values.

    On outfile.mp4 it reports a maximum of 255 and a minimum of 0.

    This proves pretty conclusively that the Y values are not confined to the range 16 - 235 and that the ffmpeg scope is inaccurate.

    If you don't believe the results, you have the source code and outfile.mp4. Compile and run it yourself.

    Code:
    // Video processing example using FFmpeg
    // Written by Ted Burke – last updated 12-2-2017
    // Now works in YUV space
    // To compile: gcc Ted2.c -o Ted2
    // Note: make sure .MP4 file extension matches case
     
    #include <stdio.h>
     
    // Video resolution
    #define W 1280
    #define H 720
     
    // Allocate a buffer to store one frame
    unsigned char frame[((H)*(W)*3)/2];
     
    int main(void)
    {
    int x, y, count;
    int maxLum = 0; int minLum = 255;
    
        // Create a pointer for each component's chunk within the frame
        // Note that the size of the Y chunk is W*H, but the size of both
        // the U and V chunks is (W/2)*(H/2). i.e. the resolution is halved
        // in the vertical and horizontal directions for U and V.
    
    unsigned char *lum;//, *u, *v;
    //unsigned char *u, *v;
    
    lum = frame;
    //    u = frame + H*W;
    //    v = u + (H*W/4);
     
    // NOTE: rgb24 frames are W*H*3 bytes (3 bytes per pixel), so the
    // (W*H*3)/2 reads below mismatch the pipe's actual frame size, and
    // the buffer holds interleaved RGB, not a Y plane.
    FILE *pipein = popen("ffmpeg -i outfile.mp4  -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    
    // Process video frames
    while(1)
        {
            // Read a frame from the input pipe into the buffer
            // Note that the full frame size (in bytes) for yuv420p
            // is (W*H*3)/2. i.e. 1.5 bytes per pixel. This is due
            // to the U and V components being stored at lower resolution.
            count = fread(frame, 1, (H*W*3)/2, pipein);
             
            // If we didn’t get a frame of video, we’re probably at the end
            if (count != (H*W*3)/2) break;
     
    // Process this frame
    for (y=0 ; y<H ; y++)
            {
                for (x=0 ; x<W ; x++)
                {
    
    if (lum[y*W+x] > maxLum) maxLum = (lum[y*W+x]);
    if (lum[y*W+x] < minLum) minLum = (lum[y*W+x]);
    
                }
            }
         }
    
    printf("Maximum lum: %d\n",maxLum);
    printf("Minimum lum: %d\n",minLum);
     
        // Flush and close input pipe
        fflush(pipein);
        pclose(pipein);
    }
6. Member (Croatia)
Look at the source code of the waveform scope: values are nowhere clipped. Whatever you are doing, you are doing it wrong.
  7. Here is the command line I'm using to launch the scope. You tell me what's wrong with it.

    Your scope is being fed video levels from 0 to 255 but is not displaying them as such.

    Code:
    ffplay stairstep.mp4 -vf waveform=filter=flat:scale=digital:graticule=green:flags=numbers+dots  -vf "scale=out_range=pc"
8. Member (Croatia)
Only one of the two -vf options is used; the other one is always ignored.
The scale filter is crap because of old libswscale; use something else like zscale.
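For example, an untested sketch of the same idea with everything chained in a single -vf (whether you actually want the range expansion is a separate question):

Code:
ffplay stairstep.mp4 -vf "scale=in_range=tv:out_range=pc,waveform=filter=flat:scale=digital:graticule=green:flags=numbers+dots"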
Again, the problem is stairstep.mp4 has Y = 16-235 levels. Your commandline never specified full range, so it doesn't produce full range. You're essentially "mapping" RGB 0-255 to Y 16-235 (standard range) instead of RGB 0-255 to Y 0-255 (full range). Thus there is nothing to clip with lutyuv.




    Here is a demo, all files are included in the zip

    Here is a greyscale test pattern image 0-255 RGB "greyscale.png"

Here are 2 videos; one uses your commandline, which produces standard range (you need to specify full range to get full range).

    chris219 standard range
    Code:
    ffmpeg -r 24 -i greyscale.png -pix_fmt yuv420p  -crf 17  -c:v libx264  -vf scale=out_color_matrix=bt709  -color_primaries bt709  -color_trc bt709  -colorspace bt709 -an chris219.mp4
    fullrange
    Code:
    ffmpeg -r 24 -i greyscale.png -crf 17  -c:v libx264  -vf scale=in_range=pc:out_range=pc,format=yuv420p -x264opts colorprim=bt709:transfer=bt709:colormatrix=bt709:fullrange=on -an fullrange.mp4


Notice the metadata is correct too, indicating full range. The metadata does not affect the actual video bitstream levels, but it's more proper to use correct signaling. The receiving application might use it to convert back to RGB for display properly. Recall I said there were many factors affecting the preview; this is one of them. You can have full range video "labelled" as limited, or vice versa. They are just "labels" and can be wrong. "Don't judge a book by its cover."

    chris219
    Code:
    Color range                              : Limited
    Color primaries                          : BT.709
    Transfer characteristics                 : BT.709
    Matrix coefficients                      : BT.709
    full range
    Code:
    Color range                              : Full
    Color primaries                          : BT.709
    Transfer characteristics                 : BT.709
    Matrix coefficients                      : BT.709

    Here is the ffmpeg scope.

    Code:
    ffplay chris219.mp4 -vf waveform=filter=lowpass:scale=ire:graticule=green:flags=numbers+dots
    
    ffplay fullrange.mp4 -vf waveform=filter=lowpass:scale=ire:graticule=green:flags=numbers+dots

    chris219
[Image: chris219.jpg]

    full range
[Image: fullrange.jpg]

And you can verify the levels in professional applications. They all say the same thing.
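As a further cross-check (a sketch, assuming your build includes the signalstats filter), you can have ffprobe print the per-frame Y min/max straight from the YUV data, with no RGB step involved:

Code:
ffprobe -f lavfi -i "movie=fullrange.mp4,signalstats" -show_entries frame_tags=lavfi.signalstats.YMIN,lavfi.signalstats.YMAX -of csv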
The problem with your copied and modified code from Ted Burke is you're converting a YUV video to an RGB intermediate using limited range. Essentially you're doing the reverse Rec transform, not a full range one.

Essentially it's Y 16-235 => RGB 0-255. So you don't even "see" Y<16 or Y>235, the "superdark" or "superbright" full range values.

    Code:
    FILE *pipein = popen("ffmpeg -i outfile.mp4  -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
You shouldn't be converting to RGB as an intermediate to read YUV values. It's bad practice and an unnecessary step: it's less accurate, there are rounding errors, and you're prone to misinterpretations.
  11. Congratulations! It works now that you've added the full-range filter. You were right; the default is TV range (16 - 235) after all.

This code works, too. Note that it does not specify an in_range, only an out_range:

    Code:
    bin\ffmpeg -y  -r 24 -i stairstep.bmp -crf 17  -c:v libx264  -vf scale=out_range=pc,format=yuv420p -x264opts colorprim=bt709:transfer=bt709:colormatrix=bt709:fullrange=on -an fullrange.mp4
    There is another caveat: with the stairstep signal, the top and bottom steps get lost in the window borders, so it is necessary to expand the window vertically or maximize it to see those steps.

[Image: stairstep.jpg]
There is actually a problem with the one I posted. I forgot to specify out_color_matrix, so by default it would use 601 for the actual conversion but be flagged 709. (You won't "see" this on a greyscale pattern, but just for completeness' sake.)

    It should have been this
    Code:
    -vf scale=in_range=pc:out_range=pc:out_color_matrix=bt709,format=yuv420p
The -x264opts colorprim, transfer, and colormatrix flags are just metadata; they don't do any actual conversion. But range options can alter the actual video data: if the in range and out range are different, the data is altered (scaled). So be careful.

You don't need in_range in this case, because it's RGB input (RGB is always "full" range, 0-255). I make a habit of specifying in and out range, because sometimes ffmpeg "reads" a YUV input as a certain range. I don't want any mixups.
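As an aside on "labels" (a sketch, assuming a build with the h264_metadata bitstream filter; the output name is made up), you can rewrite just the range flag without touching the pixel data at all, e.g. (mis)labelling chris219.mp4 as full range:

Code:
ffmpeg -i chris219.mp4 -c copy -bsf:v h264_metadata=video_full_range_flag=1 mislabelled.mp4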

If you're still planning on using the piped RGB/BMP approach for various operations, be very, very careful. There are many "gotchas" in ffmpeg. It autoscales and reads flags, so sometimes you get the wrong reading if source files are incorrectly flagged. And the behaviour has changed in some versions. I can think of several scenarios right now where you will get the wrong readings.

You shouldn't have to use that for clipping; as you can now see, lutyuv works correctly. If it doesn't, or you find a situation where there is a problem, then submit a proper report.
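For example (a quick sketch on the full-range file from before), a clip at 16/235 should now visibly flatten the top and bottom of the staircase on the scope:

Code:
ffplay fullrange.mp4 -vf "lutyuv=y='clip(val,16,235)',waveform=filter=lowpass:scale=ire:graticule=green:flags=numbers+dots"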
  13. If I clip off the ringing artifacts, ffmpeg puts them right back in when I re-encode to h.264 for delivery as an mp4 file, so you're chasing your tail to try to eliminate them. The best way to deal with them is to roll off some of the high-frequency information.
  14. Originally Posted by chris319 View Post
    If I clip off the ringing artifacts, ffmpeg puts them right back in when I re-encode to h.264 for delivery as an mp4 file, so you're chasing your tail to try to eliminate them. The best way to deal with them is to roll off some of the high-frequency information.
I guess it depends on what your exact deliverables are, but you're generally allowed to have small excursions; that's what the "headroom" and "footroom" are for. There is an allowance in most broadcast specs for most regions. Excursions even occur in mastering formats, let alone lower quality deliverables. They are actually desired in some formats for a more gradual roll-off in highlights and shadows. Many displays don't actually clip those deep darks and brights but display them nicely.
Yes and no. EBU R103 specifies digital 5-246. That leaves 16-235 for actual video, and 5-15 and 236-246 for artifacts. The BBC adheres to R103 and could reject programmes submitted to them which are not in compliance.

    In any case, video data at 0 or 255 is strictly verboten because they are used for sync.

    https://www.google.com/search?client=firefox-b-1&ei=Bkr7WprCO4XW_wS6qL4Y&q=ebu+r103&oq....0.3HgY5W-JGTE
What kind of ringing artifacts are you getting?

If you're clipping Y to 16-235 and still getting excursions of more than 11 code values from, presumably, lossy encoding alone, something is definitely wrong. That's not a "small excursion".
  17. After clipping, the video must be re-encoded to H.264. I think that's what puts the ringing back in.

    If the whites are at 235, there is a small amount of ringing above that. It's not a problem if the ringing does not exceed 246.

    My camcorder puts ringing into the video. I'll post something much later.
  18. Here is a scope shot which will give you an idea of the ringing so you don't have to wait for hours.

    My custom scope shows the r103 limit of digital 246 or 105 IRE for luminance. This shot would not pose a problem AFAIK. Any artifacts over digital 246 are a potential problem.

    In order to achieve this, the sharpness must be cranked way down using ffmpeg's "unsharp mask". This has the unfortunate effect of sucking much of the detail out of the picture.

    The scope graticule has been revised since this shot was taken.

    http://www.chrisnology.info/videos/KodakChart.jpg
  19. Originally Posted by chris319 View Post
    Here is a scope shot which will give you an idea of the ringing so you don't have to wait for hours.

    My custom scope shows the r103 limit of digital 246 or 105 IRE for luminance. This shot would not pose a problem AFAIK. Any artifacts over digital 246 are a potential problem.

    In order to achieve this, the sharpness must be cranked way down using ffmpeg's "unsharp mask". This has the unfortunate effect of sucking much of the detail out of the picture.

    The scope graticule has been revised since this shot was taken.

    http://www.chrisnology.info/videos/KodakChart.jpg


That's sort of a different thing: you're applying a manipulation, a filter. So it's not entirely because of lossy encoding, which was the assumption.

If you have a legal signal and do some manipulation, of course it can go out of range. You can apply clipping afterwards, but I'm guessing that's probably not the look you want to end up with.

There are several variations on unsharp mask (not sure which one ffmpeg uses). Some that use LCE (local contrast enhancement) are very extreme even at low strengths, so it's not surprising you could go way over (and under), even from a hard-clipped 16-235. Increasing contrast, at its simplest definition in terms of video, is just increasing the difference between high and low values. High values get higher, low values get lower.

But there are other options besides "unsharp mask" sharpening that use different algorithms, different approaches to "sharpening". They don't all affect contrast as drastically. There is also a category of limited sharpening algorithms which have options to clamp overshoots, undershoots, and edges.

You mentioned reducing high frequency information, or applying a low-pass filter; that's another option.

They all have certain tradeoffs, and it depends on what "look" you are going for, too. If you want some options, you'll have to provide more detailed info, sample clips, etc.
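As one crude approximation of "limited sharpening" in ffmpeg alone (a sketch; the strengths and file names are placeholders, not a recommendation), you can sharpen and then clamp the overshoots:

Code:
ffmpeg -i in.mp4 -vf "unsharp=luma_msize_x=7:luma_msize_y=7:luma_amount=1.0,lutyuv=y='clip(val,16,235)'" -c:v libx264 -crf 17 out.mp4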
  20. This is after some high-frequency rolloff (unsharpening). There was much more ringing coming straight out of the camera.

    The idea is to get rid of the artifacts without reducing the overall level of the picture. Unsharpening is the only way I've found so far to do this. Do you have any other ideas?

    The good thing is that the artifacts are invisible. However, technically it won't pass QC if the artifacts are too high.

    Typically they "legalize" video with a hardware legalizer:

    https://mixinglight.com/color-tutorial/legalizing-video-harris-dl860-part3/
  21. Originally Posted by chris319 View Post
    This is after some high-frequency rolloff (unsharpening). There was much more ringing coming straight out of the camera.

    The idea is to get rid of the artifacts without reducing the overall level of the picture. Unsharpening is the only way I've found so far to do this. Do you have any other ideas?

    The good thing is that the artifacts are invisible. However, technically it won't pass QC if the artifacts are too high.

    Typically they "legalize" video with a hardware legalizer:

    https://mixinglight.com/color-tutorial/legalizing-video-harris-dl860-part3/


I don't know what exactly you're talking about, what artifacts exactly. You're going to have to post some videos, more information, or a better description.

In video, "ringing" is typically a term reserved for oversharpening halos. They are the high contrast edges, made worse by sharpening. They literally look like rings or halos.

A hardware or software legalizer does many things, but sometimes it's not a good way of tackling some problems. For example, sometimes they clip values, sometimes they apply a linear shift. Often that's not ideal; it's more of a shortcut or time saver. Sometimes highlights look terrible after passing through a legalizer: you get splotches or banded transitions.
  22. How about level-dependent smoothing, i.e. only smooth a pixel if it exceeds a certain level?

    https://pdfs.semanticscholar.org/0b35/fc528841c08eb8ebb0394f1b2d579ae76c8a.pdf

    https://trac.ffmpeg.org/wiki/Postprocessing
  23. Originally Posted by chris319 View Post
    How about level-dependent smoothing, i.e. only smooth a pixel if it exceeds a certain level?
Sure, if you meant Y level, this is usually done with luma masks.

Essentially it's applying filters differentially, dependent on Y level or range.

For example, Y 180-230 might get filter A at strength 50%, Y 120-179 might get filter B at strength 20%, etc.

A lot of this can be done by compositing too, because that's what luma masking really is when you boil it down.

There are different categories; it doesn't have to be by "Y" level. It can be other parameters, like, say, saturation or hue range, etc.

If you're doing something like this, it's much harder to do in ffmpeg alone. You'd typically use avisynth/vapoursynth, some NLE, or a compositing tool like AE, Fusion, etc.
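That said, ffmpeg can approximate a simple luma mask with maskedmerge. A rough sketch (the threshold, blur strength, and file names are arbitrary): blur only where Y > 180, keep the original everywhere else:

Code:
ffmpeg -i in.mp4 -filter_complex "[0:v]split=3[a][b][c];[b]gblur=sigma=2[soft];[c]lutyuv=y='if(gt(val,180),255,0)':u=0:v=0[mask];[a][soft][mask]maskedmerge" out.mp4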

  24. Ok, I didn't see this before replying

This is something entirely different. It's probably the wrong link; it has nothing to do with what you want.
It looks like the ringing is being caused by contrasty edges, so I need something that would smooth them out. It would have to look at neighboring pixels. I could try this with Ted Burke's code, but I'm not sure how to handle the audio. I also have Matlab.

    If pixel > 235 and neighbor < darker shade, then blur them.

    I tried to get avisynth working last year but was never able to. Can it do pixel-by-pixel manipulation?
Essentially it's Y 16-235 => RGB 0-255. So you don't even "see" Y<16 or Y>235, the "superdark" or "superbright" full range values

    So fix this to be YUV?

    Code:
    FILE *pipein = popen("ffmpeg -i outfile.mp4  -f image2pipe -vcodec rawvideo -pix_fmt yuv24 -", "r");
  27. Originally Posted by chris319 View Post
    It looks like the ringing is being caused by contrasty edges, so something that would smooth them out. It would have to look at neighboring pixels. I could try this with Ted Burke's code, but I'm not sure how to handle the audio. I also have Matlab.

    If pixel > 235 and neighbor < darker shade, then blur them.

    I tried to get avisynth working last year but was never able to. Can it do pixel-by-pixel manipulation?
    Originally Posted by chris319 View Post
Essentially it's Y 16-235 => RGB 0-255. So you don't even "see" Y<16 or Y>235, the "superdark" or "superbright" full range values
    So fix this to be YUV?

    Code:
    FILE *pipein = popen("ffmpeg -i outfile.mp4  -f image2pipe -vcodec rawvideo -pix_fmt yuv24 -", "r");





If you can't describe it accurately, post a video sample and someone will look at it. Maybe a new thread with a descriptive title is a better idea. As you can imagine, you wouldn't be the first trying to deal with video problems; there are specific filters and workflows designed for specific types of artifacts and problems. I'm guessing it's some type of general noise and exposure problem, along with sharpened edges. A video is worth 10,000 words.

You can probably get avisynth to do almost any type of manipulation. There are probably filters and scripts already made for whatever your problem is. But if you wanted to, you can piece together any math operation with masktools2 and it will evaluate all pixels and apply whatever you specify. It's really a variation on a LUT. You can use ffmpeg's LUT filters too, but masktools2 has more defined accessory functions and operators.

A potential problem and massive headache with that Ted Burke code is the RGB step, especially if it's automated without user entry or manipulation. -pix_fmt rgb24 uses swscale for the RGB conversion, Rec601 by default, but ffmpeg behaviour now is partially modulated by flags. E.g. if you had unflagged, or flagged limited range, video with overshoots (e.g. 90% of regular consumer video is actually 16-255), it will clip usable values (the vast majority have usable signal you can rescue).

A full range example works only if it's flagged correctly as full, i.e. full range YUV gets converted to RGB using full range. But if it was full range and unflagged, you would get the wrong, standard range, conversion to RGB. Moreover, sometimes you have wrong flags, like standard range video flagged full (e.g. somebody messes up somewhere, or some software just automatically tags, or some video is resized but the colormatrix isn't adjusted for). If ffmpeg (or any program) pays attention to those wrong flags, you get incorrect results.

If you don't control exactly what the in/out ranges and matrices are for the RGB<=>YUV conversions, you're going to get burned at least some of the time. In that respect, you can see why some programmers choose to ignore flags in their software. It's even worse now because of Rec2020 and the various HDR flags and metadata; there are about a dozen new ones, and it's becoming more mainstream.
I don't know why you're making such a big deal about YUV -> RGB conversions. Recall that I asked about making Ted Burke's code all YUV. If that can happen, then conversion to/from RGB is moot.
  29. You can use a yuv4mpegpipe (preferred) or rawvideo pipe

    -f yuv4mpegpipe or -f rawvideo

yuv4mpegpipe sends metadata such as frame rate, resolution, and pixel type, so it is always preferred for YUV. Rawvideo requires you to enter that data yourself.
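In the C program, that would look roughly like this (a sketch, not tested against your files: the Y4M stream begins with one "YUV4MPEG2 ..." header line, and each frame is preceded by a "FRAME" line, both of which must be consumed before reading the plane data):

Code:
// Sketch: read a yuv4mpegpipe stream instead of piping raw RGB
#include <stdio.h>
#include <string.h>

#define W 1280
#define H 720

unsigned char frame[(H*W*3)/2];   // yuv420p: Y plane, then quarter-size U and V

int main(void)
{
    char line[256];
    int i, maxLum = 0, minLum = 255;

    FILE *pipein = popen("ffmpeg -i outfile.mp4 -f yuv4mpegpipe -pix_fmt yuv420p -", "r");
    if (!pipein || !fgets(line, sizeof(line), pipein)) return 1; // "YUV4MPEG2 ..." header

    while (fgets(line, sizeof(line), pipein) && !strncmp(line, "FRAME", 5))
    {
        if (fread(frame, 1, sizeof(frame), pipein) != sizeof(frame)) break;
        for (i = 0; i < H*W; i++)   // scan the Y plane only
        {
            if (frame[i] > maxLum) maxLum = frame[i];
            if (frame[i] < minLum) minLum = frame[i];
        }
    }
    printf("Maximum lum: %d\nMinimum lum: %d\n", maxLum, minLum);
    pclose(pipein);
    return 0;
}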
Something like this? Note that everything is yuv420p.

    Code:
    // Video processing example using FFmpeg
    // Written by Ted Burke – last updated 12-2-2017
    // Now works in YUV space
// Pipes the result to a second ffmpeg instance for encoding to MP4
    // To compile: gcc Ted2.c -o Ted2
    // Note: make sure .MP4 file extension matches case
     
    #include <stdio.h>
     
    // Video resolution
    #define W 1280
    #define H 720
     
    // Allocate a buffer to store one frame
    unsigned char frame[((H)*(W)*3)/2];
     
    int main(void)
    {
    int x, y, count;
     //unsigned char whiteclip;
    
        // Create a pointer for each component's chunk within the frame
        // Note that the size of the Y chunk is W*H, but the size of both
        // the U and V chunks is (W/2)*(H/2). i.e. the resolution is halved
        // in the vertical and horizontal directions for U and V.
    
    unsigned char *lum;//, *u, *v;
    unsigned char *u, *v;
    
        lum = frame;
        u = frame + H*W;
        v = u + (H*W/4);
     
    FILE *pipein = popen("ffmpeg -i KodakChart.MP4  -f image2pipe -vcodec rawvideo   -pix_fmt yuv420p  -", "r");
    //FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p  -s 1280x720 -r 59.94  -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
    FILE *pipeout = popen("ffmpeg -y  -f rawvideo    -pix_fmt yuv420p  -s 1280x720 -r 59.94  -i - -f mp4  -q:v 1 -an  output.mp4", "w");
    
    // Process video frames
    while(1)
        {
            // Read a frame from the input pipe into the buffer
            // Note that the full frame size (in bytes) for yuv420p
            // is (W*H*3)/2. i.e. 1.5 bytes per pixel. This is due
            // to the U and V components being stored at lower resolution.
            count = fread(frame, 1, (H*W*3)/2, pipein);
             
            // If we didn’t get a frame of video, we’re probably at the end
            if (count != (H*W*3)/2) break;
     
    // Process this frame
    for (y=1 ; y<H - 1; y++)
            {
                for (x=1 ; x<W - 1 ; x++)
                {
    
    if (lum[y*W+x] > 235  && lum[(y+1)*W+x] <= 235)
    {
    lum[y*W+x] = (lum[y*W+x] + lum[(y+1)*W+x]) / 2;
    }
    else if (lum[y*W+x] > 235  && lum[(y-1)*W+x] <= 235)
    {
    lum[y*W+x] = (lum[y*W+x] + lum[(y-1)*W+x]) / 2;
    }
    else if (lum[y*W+x] > 235  && lum[(y)*W+x+1] <= 235)
    {
    lum[y*W+x] = (lum[y*W+x] + lum[(y)*W+x+1]) / 2;
    }
    else if (lum[y*W+x] > 235  && lum[(y)*W+x-1] <= 235)
    {
lum[y*W+x] = (lum[y*W+x] + lum[y*W+x-1]) / 2;   // average with the left neighbor, matching the test above
    }
                }
            }
     
      // Write this frame to the output pipe
            fwrite(frame, 1, (H*W*3)/2, pipeout);
        }
     
        // Flush and close input and output pipes
        fflush(pipein);
        pclose(pipein);
        fflush(pipeout);
        pclose(pipeout);
    }