VideoHelp Forum

  1. A potential problem is if KodakChart.mp4 (or any input) is full range and flagged full range: using -pix_fmt yuv420p will compress (not clip) the full range down to limited range. For example, many DSLR videos are full range, flagged full range (and will be read as yuvj420p by ffmpeg)
  2. A workaround if using a rawvideo pipe: don't include the -pix_fmt yuv420p argument on the input side of the pipe, to avoid the full-to-limited-range conversion; only specify -pix_fmt in the receiving application or pipe.

    Something like this. It should work for normal (limited) range video too, or full range unflagged:
    Code:
    ffmpeg -i input.ext -f rawvideo - | ffmpeg -f rawvideo -pix_fmt yuv420p -s (width)x(height) -r (frame rate) -i - ...
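
As a way to check whether an input is actually flagged full range before deciding on the pipe arguments (a sketch; `input.ext` is a placeholder for your file), ffprobe can print the stream's pixel format and range flag:

```shell
# Print the first video stream's pixel format and range flag.
# color_range=pc (or pix_fmt=yuvj420p) indicates full range; tv is limited.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt,color_range \
  -of default=noprint_wrappers=1 input.ext
```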
  3. YUV420p has been removed from the input pipe; thank you for the suggestion.

    My idea for a smoothing filter isn't working out very well. The best thing I've found so far is ffmpeg's "unsharp mask".
  4. Member Cornucopia
    Join Date: Oct 2001
    Location: Deep in the Heart of Texas
    A smoothing filter uses blur or other averaging techniques to make edges less contrasty.

    "Unsharp mask" is, contrary to what its name suggests, NOT a smoothing filter but a sharpening filter. It makes edges MORE contrasty.

    "I don't think it means what you think it means" - Inigo Montoya, from The Princess Bride.

    Scott
  5. ffmpeg nomenclature calls it "unsharp".

    Shotcut has a filter called "blur" and they ain't kidding. It tames the ringing all right but fuzzes up the picture something awful.
    Last edited by chris319; 17th May 2018 at 09:40.
  6. Unsharp can blur if you set it to negative values.
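
That blur-by-negative-amount can be sketched with ffmpeg's unsharp filter (the matrix size and strength here are illustrative guesses, not recommendations; file names are placeholders):

```shell
# A negative luma_amount makes unsharp blur instead of sharpen;
# tune the 7x7 matrix and -1.0 strength to taste.
ffmpeg -i input.mp4 -vf "unsharp=luma_msize_x=7:luma_msize_y=7:luma_amount=-1.0" \
  -c:a copy output.mp4
```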

    Are you really referring to "ringing" artifacts and halos? They occur around edges, and if you can just reduce the in-camera sharpening, that's probably the best way to go.

    A generalized blur will just turn the picture to mush. You would usually want a dehalo/deringing filter instead. These are a category of filters that use line masks to limit the effect to high-contrast lines or edges.

    e.g. something like this - you can see in the previews how it targets the edges without blurring everything:
    https://github.com/IFeelBloated/Vine



    If those aren't the artifacts you're describing, and you want suggestions, then post a video sample
  7. But if it's caused by a filter or operation you applied (e.g. you applied a sharpen filter in post), then there are better ways to sharpen, with limits to prevent excessive ringing in the first place. You don't want to create artifacts only to have to remove them later. And if it's partially due to sharpened noise or edge artifacts, you usually want to apply a denoiser (or an edge denoiser) first - the order of operations matters.
  8. Here is what I presume are ringing artifacts. They make little difference to the picture quality, but all of the excursions above digital 246 would cause it to fail QC.

    This is straight out of the camcorder. The blacks would have to be pulled up to >= 16.

    Unsharp with a negative value tames those excursions but also sucks the detail out of the picture.

    http://www.chrisnology.info/videos/NormalScope.jpg
  9. Post an actual video clip, directly from the camera, otherwise unprocessed
  10. I wanted you to see the scope which reveals the overshoots.

    Here is the video:

    http://www.chrisnology.info/videos/Velvet%20Sharp%20Lo.MP4
  11. Originally Posted by chris319
    I wanted you to see the scope which reveals the overshoots.

    Here is the video:

    http://www.chrisnology.info/videos/Velvet%20Sharp%20Lo.MP4

    There is nothing to worry about. Lots of room

    I think there is still a problem with your scope reading, or maybe you didn't update it yet. That "white" patch is clearly below Y=235 (it's around Y=223 with a YUV picker), but reads ~100 IRE in your screenshot

    [Attachment 45663 - ffplay_Velvet Sharp Lo.jpg]

    [Attachment 45664 - premiere_Velvet Sharp Lo.jpg]



    But in an analyzer, there are a few stray pixels with a max of 242 in some frames. So in the more general case - you have to decide what is important "signal" and what is "noise"

    When you denoise something, the signal usually gets much cleaner. That's when you want to look at the scopes (in conjunction with your manipulations), because if you have to make adjustments, you can do so with feedback on how much to adjust

    If it's just noise in the overshoots, like it is here, you can just clip it - that won't adversely affect the picture. But if it's actual wanted, useful signal (important elements of the picture), then you probably want to adjust levels using filters; you don't want to clip in that scenario
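
The clip option can be sketched with ffmpeg's lutyuv filter (the values are the nominal 8-bit studio limits; file names are placeholders). It hard-limits luma while passing chroma through, which is appropriate only when, as above, the overshoots are noise:

```shell
# Hard-clip luma to the 16-235 studio range; chroma is left unchanged.
ffmpeg -i input.mp4 -vf "lutyuv=y='clip(val,16,235)'" -c:a copy clipped.mp4
```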




    It depends on whom you are submitting it to, and for what program - but submissions get rejected for other things as well before they even reach QC. For example, there are usually minimum spec requirements for cameras and formats.
    Last edited by poisondeathray; 17th May 2018 at 15:15.
  12. Figure this one out. Same file, the one I sent you. Same color picker on both players.

    Playback in VLC:

    White patch Y= 223, the same as you get

    Playback in Windows Media Player:

    White patch Y = 241
  13. Originally Posted by chris319
    Figure this one out. Same file, the one I sent you. Same color picker on both players.

    Playback in VLC:

    White patch Y= 223, the same as you get

    Playback in Windows Media Player:

    White patch Y = 241

    It usually means a playback configuration problem.

    Usually it's either a GPU setting or a renderer configuration issue; but WMP relies on the system DirectShow configuration (which can differ between client computers), so it's predisposed to DirectShow filter issues on top of the other problem areas, whereas a player like VLC is not DirectShow-based, so it's more consistent (even if it has known problems, at least it's consistent, whereas WMP behaviour can be completely different between, say, a Win7 install on your basement computer and a Win10 install in your office)

    Also, make sure you check with only one video player instance open at a time, otherwise the other player will fall back to a different renderer. For example, if one player is configured to use, say, the overlay mixer, another instance cannot use it and might use EVR, VMR9, Haali, or madVR instead...

    Renderers have a major effect on what you "see", because they also influence how YUV is converted to RGB.



    I'm 100% sure that patch is Y~223 (ok, there are some areas that are 224, but it's definitely not as high as 241)
  14. I had been swapping graphics drivers to get DaVinci Resolve to work, and the graphics card setting wound up at TV levels (16-235). Resolve still doesn't work, but that's another matter.

    Now here's what I get:

    Using my eyedropper program:

    VLC: Y = 240 - 241

    WMP: Y = 240 - 241

    My scope program: 240 - 241

    Using ffplay scope: Y < 235

    I don't know about this ffplay scope. I think there's something going on behind our backs.

    I have another grayscale file and everything checks out on my scope program. It tells you what the values are supposed to be:

    http://www.chrisnology.info/videos/grayscale.mp4

    Now try this line with ffplay:

    Code:
    ffplay velvetsharplo.mp4  -vf waveform=filter=flat+acolor:intensity=1:envelope=peak+instant:scale=digital:graticule=green:flags=numbers+dots
  15. Originally Posted by chris319
    I had been swapping graphics drivers to get DaVinci Resolve to work, and the graphics card setting wound up at TV levels (16-235). Resolve still doesn't work, but that's another matter.

    Now here's what I get:

    Using my eyedropper program:

    VLC: Y = 240 - 241

    WMP: Y = 240 - 241

    My scope program: 240 - 241

    Using ffplay scope: Y < 235

    I don't know about this ffplay scope. I think there's something going on behind our backs.
    Are you referring to the "white" patch or the stray pixels?


    If you want it to pass the levels portion of QC at any TV station, any web multimedia company, any post-production facility - I would adjust your scopes

    If you don't like ffmpeg, don't use it. It's known to have quirks and bugs, like any program, but it's very useful for many things too.

    I'm using several professional tools to check (not referring to ffmpeg). You know, the same tools that will be checking your submissions and rejecting them for wrong black and white levels

    I don't know about the ffmpeg scope either; it can be quirky and has problems on some types of files. But here, on that file "Velvet Sharp Lo.MP4" with the default settings, it's correct.

    When I say I'm 100% certain, that really means 100%. If I'm only 95% certain, I will say so
  16. I was referring to the white patch, not the overshoots.

    What professional tools are you using to check this video?

    Calibrating a scope and the calculations involved are not nuclear physics. You simply inject a signal of known value and see what the scope displays. I've done this dozens of times over. Simply inject, say, digital 235 into your scope and see if it shows up on the 235 line. Not nuclear physics.

    Lum. is easily calculated as Y = (R*Kr + G*Kg + B*Kb). My scope filters nothing so you see everything, warts and all. I can vouch for the accuracy of my scope because I wrote it. I'm not sure what's in these other scopes.

    ffplay can handle simple signals with no chroma such as our stairstep signal. My scope agrees with ffplay. In addition, I have generated test signals using matlab. Again, total agreement. So I'm 1,000% confident of the accuracy of my scope.
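
That full-range form of the calculation can be verified in one line of awk. The R, G, B values below are the white-patch numbers quoted later in this thread, and the exercise assumes the picker's RGB readings are themselves trustworthy:

```shell
# BT.709 luma from R'G'B' using full-range math (no 16-235 offset or scale).
awk 'BEGIN { R = 252; G = 239; B = 232;
             printf "%.0f\n", 0.2126*R + 0.7152*G + 0.0722*B }'
# prints 241
```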
  17. If whatever you are doing is reading the "white" patch as Y=240-241, it's wrong.

    Perhaps your math is correct, the code is correct, and the scope is correct under certain conditions - but somewhere else in your workflow there are other errors or procedural problems? Maybe it's reading the wrong thing? Perhaps there are other conversions you're not aware of? Maybe there is an assumption somewhere that is wrong, leading you to the wrong conclusions? (Just check earlier in this thread if you need a reminder... you seemed to be pretty confident about a number of things... we know how that turned out.) Maybe some other "gotchas" you're not aware of?
  18. Member
    Join Date: Apr 2018
    Location: Croatia
    Sorry, but you are wrong here. Several approaches - waveform (set intensity to 1), oscilloscope (set the probe at the start of the white patch), or signalstats - will report that the max Y is higher than 235.
    Code:
    ffmpeg -i ~/Downloads/Velvet\ Sharp\ Lo.MP4 -vf signalstats,metadata=print:key=lavfi.signalstats.YMAX -f null -
  19. If whatever you are doing is reading the "white" patch as Y=240-241, it's wrong.
    When I run richardpl's code I see a lot of YMAX around 239 - 241.
    Last edited by chris319; 18th May 2018 at 06:53.
  20. I downloaded the Instant Eyedropper color picker and measured the R,G,B values of the white patch from VLC.

    You already know the equation: Y = (R*Kr + G * Kg + B * Kb). These are the BT.709 lum. coefficients.

    Y comes out to 241.

    Code:
    R = 252        G = 239        B = 232
    Kr = 0.2126    Kg = 0.7152    Kb = 0.0722
    53.5752        170.9328       16.7504
    Y = 241
  21. Originally Posted by chris319
    If whatever you are doing is reading the "white" patch as Y=240-241, it's wrong.
    When I run richardpl's code I see a lot of YMAX around 239 - 241.

    A lot, or a few stray pixels?

    I mentioned there were some frames with a few stray pixels at 242. That is YMAX. But most of the patch is 223. That's why the majority of the line is at 223, definitely < 100 IRE. It's like that in a QC checker too. Your scope shows the line representing the white patch much higher



    Originally Posted by chris319
    I downloaded the Instant Eyedropper color picker and measured the R,G,B values of the white patch from VLC.

    You already know the equation: Y = (R*Kr + G * Kg + B * Kb). These are the BT.709 lum. coefficients.

    Y comes out to 241.

    Code:
    R = 252        G = 239        B = 232
    Kr = 0.2126    Kg = 0.7152    Kb = 0.0722
    53.5752        170.9328       16.7504
    Y = 241

    I don't doubt that the math is correct.

    But you are assuming that whatever VLC is doing in producing the RGB values is correct, and that whatever your drivers and setup are doing is correct, then working backwards.

    Do you think that is a valid assumption? Do you think VLC is accurate for color work or QC work?
  22. I'm using several professional tools to check (not referring to ffmpeg). You know, the same tools that will be checking your submissions and rejecting them for wrong black and white levels
    You never answered my question about what these several "professional tools" are, so please do so. And how do you know those tools aren't lying to you? All I've seen you post so far is a scope shot which looks heavily filtered and is from ffplay, the ffmpeg scope.

    But you are assuming that whatever VLC is doing in producing the RGB values is correct, and that whatever your drivers and setup are doing is correct, then working backwards.

    Do you think that is a valid assumption? Do you think VLC is accurate for color work or QC work?
    That's why I checked it against WMP, bypassing VLC and ffmpeg entirely, the results of which I have already posted.

    Did you run the ffplay scope code I posted? There is plenty of stuff above 235, but ffplay renders it in dark blue, which is very hard to see against black - it's there, though. ffplay lacks a graticule line above 235 (at, say, 255), so it is very difficult to judge the amplitude; it doesn't have a graticule line at 246, the limit under EBU R103, either. Here is that code in case you missed it. Please run it.

    Code:
    ffplay velvetsharplo.mp4  -vf waveform=filter=flat:intensity=1:envelope=peak+instant:scale=digital:graticule=green:flags=numbers+dots
    The way these color pickers work is that they follow the mouse pointer. In Instant Eyedropper I can specify a window which it uses to average several pixel values, to address your concern about stray pixels. So you drag your mouse pointer around until you get some clean samples.
    Last edited by chris319; 18th May 2018 at 13:21.
  23. Originally Posted by chris319
    I'm using several professional tools to check (not referring to ffmpeg). You know, the same tools that will be checking your submissions and rejecting them for wrong black and white levels
    You never answered my question about what these several "professional tools" are, so please do so. And how do you know those tools aren't lying to you?

    All I've seen you post so far is a scope shot which looks heavily filtered and is from ffplay, the ffmpeg scope.

    The second screenshot is from Premiere Pro. I mentioned a broadcast NLE earlier. It's also labelled in the file name. (BTW, the BBC has transitioned to using this as their main editing tool.)

    Several of the other tools are Vegas Pro, After Effects, and VideoQC; I have access to several more, including hardware scopes. Open-source/free tools agree here too (VapourSynth, AviSynth). But there is no reason to keep checking after a few, because they all agree

    That's how I know. They all say the same thing. I've dealt with video for a long time. I've done this many times before. This isn't my first rodeo. So I'm 100% certain.
  24. I have downloaded and installed Video QC. Already I see issues.

    Must get ready for work; will have more later.
  25. I have an idea how to investigate this, but it's going to take a while and won't be an overnight solution.
  26. I think I know what the problem might be. I'm reading pixels after they've been processed by the Windows graphics infrastructure, IOW, I'm reading RGB pixels off the screen like a color picker would. My math for converting them to YUV is correct, but the RGB pixels have already been processed and gawd knows what has been done to them.

    I need to read the YUV samples directly from the file, before any processing takes place. I could repurpose Ted Burke's code to do this. Then I can make a scope that has graticules for the EBU R103 limits and one at digital 111 for 18% gray, and add "stops" values.
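
Reading the Y plane straight out of the decoded file can also be sketched with an ffmpeg rawvideo pipe rather than custom demuxing code (the 1920x1080 frame size is an assumption and must match the actual file; the input name is a placeholder). In planar 4:2:0 output, the first width*height bytes of each frame are the Y samples:

```shell
# Decode one frame to planar YUV 4:2:0, keep only the luma plane
# (the first width*height bytes), and print the maximum Y value found.
ffmpeg -v error -i input.mp4 -frames:v 1 -f rawvideo -pix_fmt yuv420p - \
  | head -c $((1920*1080)) \
  | od -v -An -tu1 \
  | tr -s ' ' '\n' | sort -n | tail -n 1
```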
  27. There is a picker in vd2 showing both the original YCbCr and the converted RGB.
    Btw, grayscale.mp4 has a little chroma error
    [Attachment 45678 - Click to enlarge]
  28. Thanks to pdr pushing and pushing, I have rewritten my scope program to read the Y values directly out of the file, bypassing the OS graphics infrastructure. Now the scope tells a much different story. So the error was creeping in through the OS graphics infrastructure, i.e. the system that decodes YUV and delivers pixels to the screen.
  29. I don't know if this has been said, but I think it is worth mentioning. In terms of colorspaces and gamut, the RGB colorspace fits entirely within the Y'CbCr colorspace, which is to say there is no combination of RGB values that will produce illegal or out-of-gamut Y'CbCr values. This is not to say rounding problems don't exist, but that is a separate issue from illegal values. However, the reverse cannot be said for Y'CbCr signals converted to RGB, and since OSes operate in RGB space, displaying out-of-gamut values is not possible. Additionally, while the matrix conversion of a single pixel is trivial, doing it for an entire image at 24+ fps is computationally demanding. Therefore OSes are notorious for being adequate but disastrous for exacting workflows. This is why broadcast signal chains exist. I haven't studied your C code, but I am almost certain your math is not correct, only because I have never seen a post on VH with the correct conversion.
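
The containment claim can be sanity-checked against the studio-range BT.709 forward equation, Y' = 16 + 219*(Kr*R' + Kg*G' + Kb*B')/255 (this particular formula is my addition, not from the post above): since the coefficients sum to 1, even full RGB white lands exactly on the legal ceiling.

```shell
# Studio-range BT.709: RGB white (255,255,255) maps to Y'=235,
# the top of the legal luma range - it cannot overshoot.
awk 'BEGIN { R = 255; G = 255; B = 255;
             printf "%.0f\n", 16 + 219*(0.2126*R + 0.7152*G + 0.0722*B)/255 }'
# prints 235
```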
  30. As I said in the post before yours:

    I have rewritten my scope program to read the Y values directly out of the file, bypassing the OS graphics infrastructure
    So the issue of RGB <-> YUV is moot now.

    This is the case for mp4 files. The scope code was originally written for webcams, and all I had to work with was a buffer full of RGB samples, as the webcam driver wasn't converting to YUV.


