VideoHelp Forum
  1. Could someone explain how the pixels are packed in 10-bit 4:2:0 as output by ffmpeg's yuv420p10le?

    Below is the documentation from Microsoft for something called P010. I'm not even 100% sure that is the same thing ffmpeg puts out when using yuv420p10le.

    https://docs.microsoft.com/en-us/windows/win32/medfound/10-bit-and-16-bit-yuv-video-fo...ts#420-formats

    It says the video data is contained in the 10 MSBs of a 16-bit word, one word for each channel. Is that correct? To use one, you would simply shift off the 6 LSBs.
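
    For illustration, here is my reading of that layout in Python (the example bytes are made up):
    Code:
    import struct
    
    # P010 per the Microsoft doc: the value sits in the 10 MSBs of a 16-bit
    # little-endian word, so shift off the 6 LSBs to recover the sample
    word = struct.unpack('<H', b'\x00\xeb')[0]   # example word 0xEB00
    sample = word >> 6                           # -> 940, a 10-bit value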

    Thank you.
  2. Most likely it's planar, so three planes one after another: Y at full resolution, then U and V each at width/2 x height/2 for 4:2:0.
  3. Right, but how are the pixels arranged? Are they still the top 10 bits of a 16-bit word?
  4. If it's planar, then each 10-bit sample is stored as 2 bytes, so the stride is 2 bytes * width and the size of a plane is stride * height. So for Y it's 2 bytes * w * h, and likewise for U and V with w/2 and h/2. You need a pointer into memory and then read concrete values.
    In VapourSynth, when you have a plane in an array, a_plane[y][x] just returns a value. Same as in numpy. Actually those two could be the same in memory; they are the same. No pointer needed; there is a function that returns a memory view.
    If it is packed it's very similar, like YUY2, packed as YU,YV...; then you figure out whether you have U or V (odd or even) and adjust it to get Y, U and V.
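
    For example, a rough sketch of that arithmetic in Python, assuming a raw yuv420p10le frame with no padding (the frame size is made up):
    Code:
    w, h = 1280, 720                  # example frame size
    stride_y  = w * 2                 # 2 bytes per 10-bit sample
    stride_uv = (w // 2) * 2
    y_size  = stride_y * h            # Y plane size in bytes
    uv_size = stride_uv * (h // 2)    # U (or V) plane size in bytes
    
    # byte offsets of sample (x, y) within one raw planar frame
    def y_offset(x, y):  return y * stride_y + x * 2
    def u_offset(x, y):  return y_size + y * stride_uv + x * 2
    def v_offset(x, y):  return y_size + uv_size + y * stride_uv + x * 2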
  5. In this link: https://stackoverflow.com/questions/8349352/how-to-encode-grayscale-video-streams-with-ffmpeg, there is an ffmpeg format table, and for PIX_FMT_YUV420P10LE it says planar, little endian.
    Last edited by _Al_; 26th Feb 2020 at 19:41.
  6. _Al_ - thanks, but you're talking about how the pixels are addressed. My question is: once you have a 16-bit word, which bits within that word contain the video data? According to Microsoft they are the 10 most significant bits. Is that true of ffmpeg's yuv420p10le?
  7. If you have a 16-bit word, my impression is that you pull the first 8 bits as one value, v = pack & 0xFF, then shift by 8 bits and get another value, v = (pack >> 8) & 0xFF. This is for a two-byte pack. Or the values are reversed if it is the other endianness; not sure which one is which now, little or big.

    Oh I see, that is for packs with 8-bit values and you want 10-bit, so I'm not sure now. Anyway, if using VapourSynth it can be avoided, getting values just by their coordinates.
    Last edited by _Al_; 26th Feb 2020 at 21:00.
  8. If you have a 16-bit word, my impression is that you pull the first 8 bits as one value, v = pack & 0xFF, then shift by 8 bits and get another value, v = (pack >> 8) & 0xFF.
    For a 10-bit pixel? No.
  9. FFmpeg normally stores 9-, 10-, 12- and 14-bit values as-is (not shifted): the maximum 10-bit value is 1023.
    The only exceptions are special packed formats like P010 (not sure if it is implemented in FFmpeg).

    (yuv420p10le is not P010)
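
    A quick way to check (a sketch, assuming a raw yuv420p10le dump on disk, e.g. written with ffmpeg -f rawvideo; the file name is made up):
    Code:
    import struct
    
    with open('frame.yuv', 'rb') as f:   # raw yuv420p10le dump
        data = f.read()
    
    # each sample is a little-endian 16-bit word with the value in the low 10 bits
    samples = struct.unpack('<%dH' % (len(data) // 2), data)
    print(max(samples))                  # stays <= 1023 for valid 10-bit data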
  10. Originally Posted by shekh View Post
    FFmpeg normally stores 9-, 10-, 12- and 14-bit values as-is (not shifted): the maximum 10-bit value is 1023.
    The only exceptions are special packed formats like P010 (not sure if it is implemented in FFmpeg).

    (yuv420p10le is not P010)
    So this means 16 x 3 = 48 bits for Y, U and V?

    Do you have some sample code you could post? Thanks.
    Last edited by chris319; 28th Feb 2020 at 03:09.
  11. I've got it drawing in black and white. I have to shift the 10-bit values right by 2 to convert them to 8-bit. My hardware doesn't support 10-bit color, so it has to be converted.
    Last edited by chris319; 28th Feb 2020 at 05:27.
  12. What system, app, module etc. are you using to draw RGB on screen?
  13. Originally Posted by _Al_ View Post
    What system, app, module etc. are you using to draw RGB on screen?
    In the end, I'm drawing 8-bit RGB to the screen. I want to at least be able to decode pixels in a 10-bit file, but they must be converted to 8 bits because that's all my hardware can handle. I'm already successfully drawing 8-bit pixels which are read directly from YUV and converted to RGB.

    Converting 10 bits to 8 is a simple matter of losing the 2 LSBs.
    Last edited by chris319; 28th Feb 2020 at 15:51.
  14. ...wasting the whole point of 10bit. Dither, my friend.

    Scott
  15. Originally Posted by Cornucopia View Post
    ...wasting the whole point of 10bit. Dither, my friend.

    Scott
    When your hardware only supports 8 bits it's what you have to do.

    Buy me a 10-bit monitor and I can use all 10 bits.
  16. No. If you have 10 bits but only get to use 8 of them, there are multiple ways to get there.
    Truncating, as you suggest, is quickest and simplest, but also statistically least true to the original, and the ugliest.
    Rounding is better.
    Dither + rounding is best, but it takes the most computation, and since it trades error for noise it needs to be tailored to use noise that is most compatible/least disruptive. But the results are often worth it.
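
    For instance, a numpy sketch of the three options on a made-up 10-bit plane (the flat rectangular noise here is the simplest possible dither, not a tuned one):
    Code:
    import numpy as np
    
    rng = np.random.default_rng(0)
    y10 = rng.integers(0, 1024, size=(720, 1280), dtype=np.uint16)  # fake 10-bit plane
    
    # 1) truncation: drop the 2 LSBs (fastest, most banding)
    trunc = (y10 >> 2).astype(np.uint8)
    
    # 2) rounding: add half an output LSB before the shift
    rnd = np.clip((y10.astype(np.int32) + 2) >> 2, 0, 255).astype(np.uint8)
    
    # 3) dither + rounding: add noise spanning the discarded bits, then quantize
    noise = rng.integers(0, 4, size=y10.shape)
    dith = np.clip((y10.astype(np.int32) + noise) >> 2, 0, 255).astype(np.uint8)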

    Can't afford to get myself a 10bit monitor, much less anyone else.

    Scott
  17. What's your solution for addressing the 10-bit U and V pixels in yuv420p10le?
  18. If you are referring to the bitdepth reduction, I'd say do the exact same thing: dither.
    If you are referring to the process of color subsampling and any dependent processes, I'd say the point resize method is closest to the way subsampling works in practice, so even though it may be counterintuitive, it should be the preferred method of up/downscaling (see the sketch below).
    If you are referring to something else, I'm not sure what you're getting at.
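
    To make "point" concrete, a numpy sketch with a made-up half-size chroma plane:
    Code:
    import numpy as np
    
    u_half = np.arange(6, dtype=np.uint16).reshape(2, 3)   # toy 4:2:0 chroma plane
    
    # point (nearest-neighbor) upsampling: each chroma sample covers a 2x2 block
    u_full = u_half.repeat(2, axis=0).repeat(2, axis=1)    # back to luma resolution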

    Scott
  19. If you are referring to something else, I'm not sure what you're getting at.
    The crux of this thread is addressing the U and V pixels; in other words, how to fetch those samples out of memory.
  20. What put it in memory? That'll determine what order it's in.
    Regardless, there are only a finite number of possibilities, so it wouldn't take much to try all the permutations.

    Scott
  21. The O.P. states plainly that it comes from an ffmpeg file:

    as output by ffmpeg's yuv420p10le
    Rather than try all the permutations, I could ask someone who knows the answer.
  22. Or, you could do what most people do and look at the source code.

    Scott
  23. Originally Posted by Cornucopia View Post
    Or, you could do what most people do and look at the source code.

    Scott
    Tell me which ffmpeg file to look in and I'll look there.
  24. chris319 - this is done on many platforms, including the ffmpeg that you use; I'm having a hard time understanding why you want to do it manually.

    If you really want to do it yourself, start with reading values. You came here a while ago with that Ted Burke code to change RGB values; you can modify that. You can see it is all in a loop where the RGB values are changed for each plane. This time you have YUV planes where U and V are halved, and there are 2 bytes per value. When using Python/VapourSynth and wanting to know a YUV value, I'd use:
    Code:
    # planar YUV only; 'x' marks a value that could not be read
    def get_pixel_values(clip, frame, x, y):
        try:
            fr = clip.get_frame(frame)
            planes = [fr.get_read_array(i) for i in range(clip.format.num_planes)]
        except:
            return 'x', 'x', 'x'
        try:    Y = planes[0][y, x]
        except: Y = 'x'
        ys = y >> clip.format.subsampling_h          # chroma planes are reduced if subsampled
        xs = x >> clip.format.subsampling_w
        try:    U = planes[1][ys, xs]
        except: U = 'x'
        try:    V = planes[2][ys, xs]
        except: V = 'x'
        return Y, U, V
    I don't know what equations would follow to get RGB; this just gets the YUV values.
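
    For instance (assuming clip is whatever your source filter returned, e.g. a YUV420P10 clip):
    Code:
    # hypothetical usage: YUV triplet at pixel (x=100, y=50) of frame 0
    y, u, v = get_pixel_values(clip, 0, 100, 50)
    print(y, u, v)   # 10-bit values in the 0..1023 range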
  25. It occurs to me that the YUV-to-RGB code will have to change. I don't know if there is anything more to it than multiplying the constants by 4 for 10 bits, everything except the luma coefficients Kr, Kg and Kb. So ...

    Code:
    rf = (255/219)*yf + (255/112)*vf*(1-Kr) - (255*16/219 + 255*128/112*(1-Kr))
    
    gf = (255/219)*yf - (255/112)*uf*(1-Kb)*Kb/Kg - (255/112)*vf*(1-Kr)*Kr/Kg - (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg)
    
    bf = (255/219)*yf + (255/112)*uf*(1-Kb) - (255*16/219 + 255*128/112*(1-Kb))
    Becomes

    Code:
    rf = (1020/876)*yf + (1020/448)*vf*(1-Kr) - (1020*64/876 + 1020*512/448*(1-Kr))
    
    gf = (1020/876)*yf - (1020/448)*uf*(1-Kb)*Kb/Kg - (1020/448)*vf*(1-Kr)*Kr/Kg - (1020*64/876 - 1020/448*512*(1-Kb)*Kb/Kg - 1020/448*512*(1-Kr)*Kr/Kg)
    
    bf = (1020/876)*yf + (1020/448)*uf*(1-Kb) - (1020*64/876 + 1020*512/448*(1-Kb))
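
    As a sanity check, here are those 10-bit equations as a Python function (a sketch; BT.709 coefficients are assumed and the function name is made up):
    Code:
    Kr, Kb = 0.2126, 0.0722   # BT.709
    Kg = 1 - Kr - Kb
    
    def yuv10_to_rgb10(yf, uf, vf):
        rf = (1020/876)*yf + (1020/448)*vf*(1-Kr) - (1020*64/876 + 1020*512/448*(1-Kr))
        gf = (1020/876)*yf - (1020/448)*uf*(1-Kb)*Kb/Kg - (1020/448)*vf*(1-Kr)*Kr/Kg \
             - (1020*64/876 - 1020/448*512*(1-Kb)*Kb/Kg - 1020/448*512*(1-Kr)*Kr/Kg)
        bf = (1020/876)*yf + (1020/448)*uf*(1-Kb) - (1020*64/876 + 1020*512/448*(1-Kb))
        clamp = lambda c: max(0, min(1020, round(c)))
        return clamp(rf), clamp(gf), clamp(bf)
    
    print(yuv10_to_rgb10(940, 512, 512))   # 10-bit limited-range white -> (1020, 1020, 1020)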
  26. I'm having a hard time understanding why you want to do it manually
    Black boxes such as VapourSynth and ffmpeg are not the right solution for every situation. At the end of the day, even your black boxes need low-level code to do their magic. As you know, I've already got it working in 8 bits. I'm having trouble with 10-bit video, which is a bit underdocumented. Someone here said ffmpeg's yuv420p10le is not the same as the P010 documented here: https://docs.microsoft.com/en-us/windows/win32/medfound/10-bit-and-16-bit-yuv-video-fo...ts#420-formats

    The code is not that complicated if you have good documentation to go by.
  27. Originally Posted by chris319 View Post
    I'm having a hard time understanding why you want to do it manually
    Black boxes such as VapourSynth and ffmpeg are not the right solution for every situation. At the end of the day, even your black boxes need low-level code to do their magic. As you know, I've already got it working in 8 bits. I'm having trouble with 10-bit video, which is a bit underdocumented. Someone here said ffmpeg's yuv420p10le is not the same as the P010 documented here: https://docs.microsoft.com/en-us/windows/win32/medfound/10-bit-and-16-bit-yuv-video-fo...ts#420-formats

    The code is not that complicated if you have good documentation to go by.
    In post #11 you wrote you've got it drawing, so it is unclear what you're having trouble with. IMO the planar layout is trivial and does not need extensive documentation.
    Do you mean you have the YUV pixels already but can't convert them to RGB? Why not use the FFmpeg API to do the conversion?
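
    E.g., without touching the C API you can have ffmpeg itself deliver ready-made 8-bit RGB over a pipe (a sketch; the path and frame size are placeholders):
    Code:
    import subprocess
    
    w, h = 1280, 720
    cmd = ['ffmpeg', '-i', 'input.mp4',
           '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-']
    pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    frame = pipe.stdout.read(w * h * 3)   # one packed 8-bit RGB frame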
  28. I have it working perfectly for YUV -> RGB color in 8 bits. In 10 bits I am able to draw a black-and-white image, but am having trouble addressing the U and V samples.
  29. Originally Posted by chris319 View Post
    I'm having a hard time understanding why you want to do it manually
    Black boxes such as VapourSynth and ffmpeg are not the right solution for every situation. At the end of the day, even your black boxes need low-level code to do their magic.
    You are severely mistaken; you can manipulate pixels the same way you do now, but it is really ridiculous. poisondeathray laid down your YUV-to-RGB conversion in VapourSynth; I simply set it as full range, but he used levels. Did you try those lines? Definitely go that way: you are heading into an abyss of looking for new equations for every single format or situation. And doing it pixel by pixel, not on some sort of array, is kind of insane if you ask me.

    To demonstrate what you are trying to do, now with a 10-bit YUV pipe from ffmpeg, doing the same as you do, loading it into an array, but in Python, as I do not use C:
    Code:
    import vapoursynth as vs
    core = vs.core
    import subprocess
    import ctypes
    
    source_path = r'C:/10bitYUV.mp4'
    yuv_placeholder = core.std.BlankClip(width=1280, height=720, format=vs.YUV420P10)
    ffmpeg = r'D:\path\ffmpeg.exe'
    
    w = yuv_placeholder.width
    h = yuv_placeholder.height
    Ysize   = w * h * 2
    UVsize  = w * h//2         #YUV420 10bit, 2bytes per sample
    YUVsize = Ysize + 2*UVsize #YUV420 10bit, 2bytes per sample
    
    command = [ ffmpeg,
                '-i', source_path,
                '-vcodec', 'rawvideo',
                '-pix_fmt', 'yuv420p10le',  
                '-f', 'rawvideo', '-']
    
    pipe = subprocess.Popen(command,
                            stdout = subprocess.PIPE,
                            bufsize=YUVsize,
                            )
    
    def frame_from_pipe(n,f):
        vs_frame = f.copy()
        try:
            for plane, size in enumerate([Ysize, UVsize, UVsize]):
                ctypes.memmove(vs_frame.get_write_ptr(plane), pipe.stdout.read(size),  size)
            pipe.stdout.flush()
        except Exception as e:
            raise ValueError(repr(e))
        else:    
            return vs_frame
    
    try:
        yuv_clip = core.std.ModifyFrame(yuv_placeholder, yuv_placeholder, frame_from_pipe)
    except ValueError as e:
        pipe.terminate()
        print(e)
    
    yuv_clip.set_output()
    just to get the pipe from your ffmpeg into memory, and then into some sort of array so it can be addressed pixel by pixel. The above example gets it into a VapourSynth array/frame; that is all that code does. Then comes converting it to RGB manually, as you request. That can be done using Expressions in VapourSynth, but because expressions use reverse Polish notation it would take a while to figure out and write down, so I passed. You can load that pipe into numpy and do it there as well (see the sketch below). Then do the algebra en masse for the whole frame, not in a loop over X with another loop over Y inside it. That is why the numpy array is here: to do the calculations for the whole frame at once. I have no idea how that would be done manually right now, because there are one-line opencv calls (using numpy) that do exactly that; you can dig into their manuals and source code to see what they use.
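
    That numpy route, sketched (it reuses pipe, YUVsize, w and h from the code above and assumes no padding):
    Code:
    import numpy as np
    
    raw = pipe.stdout.read(YUVsize)                  # one raw yuv420p10le frame
    frame = np.frombuffer(raw, dtype='<u2')          # little-endian uint16 samples
    Y = frame[:w*h].reshape(h, w)
    U = frame[w*h : w*h + (w//2)*(h//2)].reshape(h//2, w//2)
    V = frame[w*h + (w//2)*(h//2):].reshape(h//2, w//2)
    # values are 0..1023; the whole-frame RGB algebra can now run on these arrays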

    To replace all that insanity above, and WITH the conversion, just use a couple of lines instead, with a proper loading source plugin:
    Code:
    clip = core.lsmas.LibavSMASHSource(r'C:/10bitYUV.mp4')
    rgb_clip = core.resize.Point(clip, format=vs.RGB24, matrix_in_s='709', range_in_s='full')
    or what pdr posted:
    Code:
    clip = ...   # source filter here; RGB 75% of 16-235 source
    clip2 = core.resize.Point(clip, format=vs.RGBS)
    clip2 = core.std.Levels(clip2, min_in=0.063, max_in=0.92, gamma=1, min_out=0, max_out=1, planes=[0,1,2])
    YUV_clip = ...   # convert back to some YUV
    something that would work for you; I do not know why you do not want to go that way. I remember you insisted on getting certain RGB values after converting colorbars, which I basically got, so I'm not sure where the problem is. I'm sure you'd get help with VapourSynth or numpy, and you also have tools with a GUI, like vsedit, ready to read your values right away.

    Also, I might add: running that code above shows the YUV piped from ffmpeg is planar, as you can see, loading the whole Y plane first, then U and V.
    Last edited by _Al_; 4th Mar 2020 at 21:46.
  30. doing it pixel by pixel, not on some sort of array, is kind of insane if you ask me.
    I'm using ffmpeg to read frames into an array. See Ted Burke's code; that's exactly what it does.

    As an interpreted language, Python would be too slow for pixel-by-pixel.

    There are three projects: the first is to limit RGB to EBU R 103 limits. That uses ffmpeg and is finished. The second is to determine the maximum and minimum RGB values to calibrate the second program and to make sure the video is R 103 compliant. That is finished for 8 bits only. The third is to read 10-bit files and to adapt the second program to 10 bits.


