VideoHelp Forum




  1. I have a video with this info:

    Format : MKV
    Codec : HEVC (h.265)
    Bit depth : 10 bits

    Using ffmpeg, I am able to get 8-bit or 16-bit image frames from this video, but I cannot find a way to get 10-bit images from it.
    I would appreciate any help; any output image format would be OK.
    Last edited by namdvt; 25th Aug 2017 at 06:15.
  2. I don't know of a 10-bit image format that can be created in ffmpeg. Most formats jump straight from 8-bit to 16-bit.
    https://en.wikipedia.org/wiki/Comparison_of_graphics_file_formats#Technical_details

    Sample command line for 10bit in, 16bit out:
    Code:
    ffmpeg -i ColorBarsHD2-10bit-x264-crf22.mkv -pix_fmt yuv444p10le test-%d.png
  3. The most common 10-bit image format is dpx / cineon (and it's only "common" in post production)

    ffmpeg doesn't support writing/export of dpx, but you might be able to pipe ffmpeg to imagemagick to write, or maybe vapoursynth's imagemagick export plugin

    One potential problem with dpx is that there are a variety of interpretation options for black/white points and gamma curves. dpx officially supports YUV and RGB, but only the RGB variant is widely supported, so you're going to have to specify how you're converting YUV to RGB (the same applies if you're using a 16-bit format like PNG, assuming the source was YUV; and if it's a different bit depth, whether or not you're dithering, and which algorithm)

    EDIT: yes, the vpy imagemagick writer works. It generates a valid dpx with valid 10-bit values (0-1023), tested in AE
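To make the "10-bit image format" idea concrete, here's a toy Python sketch of how 10-bit RGB is commonly stored in dpx: three 10-bit components filled into one 32-bit word with two spare bits. The component order and the position of the padding bits assumed below depend on the dpx header's packing and endianness fields, so treat this layout as an illustration, not the spec:

```python
# Toy sketch of "filled" 10-bit RGB packing into a 32-bit word, as dpx
# commonly stores it. Assumed layout: R in bits 31-22, G in bits 21-12,
# B in bits 11-2, two padding bits at the bottom - check the dpx header
# (packing / endianness fields) before relying on this exact order.

def pack_rgb10(r, g, b):
    return (r << 22) | (g << 12) | (b << 2)

def unpack_rgb10(word):
    return (word >> 22) & 1023, (word >> 12) & 1023, (word >> 2) & 1023

# The round trip is exact for the full 0-1023 range of each component:
assert unpack_rgb10(pack_rgb10(0, 512, 1023)) == (0, 512, 1023)
```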
    Last edited by poisondeathray; 25th Aug 2017 at 10:26.
  4. Raw should work:

    Code:
    @set OUTDIR="pic_out"
    @if not exist %OUTDIR% (mkdir %OUTDIR%)
    @ffmpeg -hide_banner -color_range 2 -i "%1" -an -sn -dn -c:v v410 -pix_fmt yuv444p10le -f image2 -vsync 0 "%OUTDIR%\%~n1_%%06d.yuv"
  5. Originally Posted by poisondeathray View Post
    The most common 10bit image format is dpx / cineon [...]
    I cannot find a way to convert the video to dpx. Could I first use ffmpeg to get 16-bit images, and then convert them to 10-bit dpx images using imagemagick? I'm afraid that with this approach the quality of the output images is not guaranteed.
  6. Originally Posted by namdvt View Post

    I cannot find a way to convert the video to dpx. Could I first use ffmpeg to get 16-bit images, and then convert them to 10-bit dpx images using imagemagick? I'm afraid that with this approach the quality of the output images is not guaranteed.

    What is the background information - why are you doing this?

    Converting to 16-bit depth is a problem: the original values won't be kept, because of dithering; and even if you disable dithering, the rounding will not ensure the same values

    Unless you do it in a way that stores 10 bits in 16 (ie. the original 0-1023 values "padded" within 0-65535, not "scaled" to 0-65535). Not sure how to "convince" ffmpeg to do it that way
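To illustrate the "padded" vs "scaled" distinction with toy Python (not ffmpeg's internals): padding keeps the original code values recoverable by a plain shift, while scaling survives a round trip only if both directions round consistently and nothing dithers.

```python
# Toy sketch of "padded" vs "scaled" 10-bit values in a 16-bit container.

def pad(v10):       # 0..1023 -> multiples of 64; exact inverse is >> 6
    return v10 << 6

def unpad(v16):
    return v16 >> 6

def scale(v10, rounded=True):    # 0..1023 -> 0..65535
    x = v10 * 65535 / 1023
    return round(x) if rounded else int(x)

def unscale(v16, rounded=True):  # 0..65535 -> 0..1023
    x = v16 * 1023 / 65535
    return round(x) if rounded else int(x)

# Padding round-trips exactly, by construction:
assert all(unpad(pad(v)) == v for v in range(1024))

# Scaling round-trips exactly only when both directions round;
# truncation (or any dithering) already breaks it:
assert all(unscale(scale(v)) == v for v in range(1024))
assert unscale(scale(1, rounded=False), rounded=False) != 1
```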




    Pandy's method is actually the most accurate if your source is YUV, but technically it's not an "image" format, or at least not a standard one. Converting a 10-bit YUV source to a 10-bit RGB image format like DPX is lossy at 10 bits, whereas raw YUV is just the decoded image: at the same bit depth and chroma subsampling it is lossless (but you have to specify the correct pixel format)



    vapoursynth can export dpx as mentioned above, but the ImageMagick plugin writes a dummy alpha channel, and there is no switch to use RGB instead of RGBA. Regardless, it's a valid 10-bit dpx (I checked with the 10bpc color picker in After Effects)

    ffmpeg might be able to pipe 10-bit to IM, but there is more control over the methods/algorithms in vapoursynth. For example, if you started with yuv420p10, you need to decide which matrix to use for the conversion to 10-bit RGB, and which scaling algorithm (since 4:2:0 has subsampled U and V planes). You can also double-check the values in the vapoursynth editor using a color picker: either the original 10-bit Y,U,V values (directly decoded from the MKV) or the converted 10-bit R,G,B values. The RGB values correspond to the values in the output dpx in AE, so I know it's valid. But some programs might not "like" the alpha channel in DPX
  7. Sorry, I am a newbie in this field. I have just installed Python and vapoursynth and tried to export dpx files from the video, but I couldn't. Could you please show me how to get the dpx files?
  8. Originally Posted by poisondeathray View Post
    Unless you do it in a way that uses 10bit in 16bit (ie. original 0-1023 values "padded" within 0-65535; not "scaled" to 0-65535). Not sure how to "convince" ffmpeg to do it that way
    Scaling from 10-bit to 16-bit is normally not a problem and can be assumed lossless; also, I'm quite sure no dithering is applied in such a conversion.
  9. Originally Posted by shekh View Post
    Scaling from 10-bit to 16-bit is normally not a problem and can be assumed lossless; also, I'm quite sure no dithering is applied in such a conversion.

    You can't just "assume". It's certainly "good enough" for most purposes, but it's not lossless unless done properly. You have to specify the method for going up, and back down to 10-bit; many things can go wrong on the trip. If you just do it "normally", it's definitely not lossless in ffmpeg. I've shown this for a v210 round trip to 16bpc a while back in another thread.

    You can disable dithering with sws_dither=none in ffmpeg, but another issue is the chroma subsampling: 4:2:0 up to full-color 4:4:4 and back down to 4:2:0 is not lossless unless you use a nearest-neighbor algorithm with centered sample interpolation.
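A toy Python sketch of that last point, assuming simple co-sited 2x2 chroma for illustration: nearest neighbor just duplicates each chroma sample on the way up and discards the duplicates on the way down, so the round trip is exact.

```python
# Toy illustration: 4:2:0 -> 4:4:4 by duplicating each chroma sample
# (nearest neighbor), then 4:4:4 -> 4:2:0 by discarding the duplicates.
# The round trip returns the original plane exactly.

def up_nn(plane):       # duplicate each sample 2x horizontally and vertically
    wide = [[v for v in row for _ in (0, 1)] for row in plane]
    return [row for row in wide for _ in (0, 1)]

def down_nn(plane):     # keep the top-left sample of every 2x2 block
    return [row[::2] for row in plane[::2]]

chroma = [[512, 600], [100, 940]]          # a tiny 10-bit U plane
assert down_nn(up_nn(chroma)) == chroma    # lossless round trip

# Any interpolating filter (bilinear, bicubic, ...) would alter samples
# on the way up, and the way back down would not restore the originals.
```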




    Sorry, I am a newbie in this field. I have just installed Python and vapoursynth and tried to export dpx files from the video, but I couldn't. Could you please show me how to get the dpx files?

    Can you answer the background questions? There might be other, better ways to do what you want



    It will look like this

    Code:
    import vapoursynth as vs
    core = vs.get_core()
    clip = core.lsmas.LWLibavSource(r'PATH\INPUT.mkv')
    clip = core.resize.Bicubic(clip, format=vs.RGB30)
    clip = core.imwrif.Write(clip, "DPX", "OUTPUT_%05d.dpx", firstnum=0, dither=False)
    clip.set_output()
    %05d is the number of placeholder digits, so the output will look like the list below. If you wanted more digits, e.g. 6, it would be "%06d"
    OUTPUT_00000.dpx
    OUTPUT_00001.dpx
    OUTPUT_00002.dpx
    .
    .

    Rec. 709 is used for the RGB conversion unless you specify a different matrix. I used Bicubic in this example, but you can specify a different algorithm.
  10. Originally Posted by poisondeathray View Post
    Can you answer the background questions? There might be other, better ways to do what you want
    Thank you for all your help. I'm currently developing an algorithm for 10-bit image enhancement. I want to extract the 10-bit frames of a given 10-bit video so that, after applying the enhancement algorithm to each frame, the enhanced image sequence can be used to create an enhanced video (just for testing).
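For that workflow you may not need an image format at all. Here's a hedged sketch: pipe raw yuv420p10le frames out of ffmpeg, process them in Python, and pipe them back into ffmpeg for encoding. The command lines and the enhance.py name are illustrative (adjust size/rate to the real stream), and the parsing assumes what yuv420p10le means: planar Y, U, V, one little-endian 16-bit word per sample holding 0-1023, with U and V at half resolution in both dimensions.

```python
# Hedged sketch: work on raw yuv420p10le frames piped through ffmpeg.
# Illustrative command lines ("enhance.py" is a hypothetical name):
#
#   ffmpeg -i input.mkv -f rawvideo -pix_fmt yuv420p10le pipe:1 | python enhance.py
#   ffmpeg -f rawvideo -pix_fmt yuv420p10le -s 1920x1080 -r 24 -i pipe:0 out.mkv

import array

def frame_size(w, h):
    # bytes per frame: 2 bytes per sample; each chroma plane is quarter size
    return 2 * (w * h + 2 * (w // 2) * (h // 2))

def split_frame(buf, w, h):
    """Split one raw yuv420p10le frame into Y, U, V sample arrays."""
    samples = array.array('H')   # native 16-bit; matches "le" on little-endian CPUs
    samples.frombytes(buf)
    y_len = w * h
    c_len = (w // 2) * (h // 2)
    return (samples[:y_len],
            samples[y_len:y_len + c_len],
            samples[y_len + c_len:])
```

Reading frame_size(w, h) bytes at a time from stdin gives one decoded frame per chunk, with the original 10-bit values untouched.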
  11. But what pixel format and colorspace / color model are your source(s), and what do you want the images in? Or what pixel format and colorspace does your algorithm work in?

    For example, if you had a YUV420P10 source, converting to an RGB30 image format like dpx will be lossy if you intend to round-trip it back to the original YUV420P10. It's more ideal if you can work in the native source pixel format and colorspace. And it's not just rounding errors: a "standard" range conversion to 10-bit RGB will clip values Y < 64, Y > 940 and UV < 64, UV > 960. "Clip" as in "discard".
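A toy Python sketch of that clipping (an illustrative formula for a 10-bit limited-to-full-range luma conversion, not any particular tool's implementation): everything below code 64 or above 940 collapses onto the ends of the scale and can't be recovered.

```python
# Toy sketch of the clipping in a "standard" limited-to-full range
# conversion for 10-bit luma: 64 maps to 0 and 940 maps to 1023, so
# undershoots (< 64) and overshoots (> 940) are discarded.

def limited_to_full_y(y):
    x = round((y - 64) * 1023 / (940 - 64))
    return max(0, min(1023, x))    # the clip that discards over/undershoots

assert limited_to_full_y(64) == 0
assert limited_to_full_y(940) == 1023
assert limited_to_full_y(32) == limited_to_full_y(0) == 0   # undershoots gone
```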
  12. Use the V210 codec.
    Code:
    ffmpeg -i myinput.mkv -c:v v210 -pix_fmt yuv422p10le -c:a copy myoutput.mov
  13. Originally Posted by poisondeathray View Post
    But what pixel format and colorspace / color model are your source(s) and do you want the images in ? [...]
    The video colorspace is yuv420p10le. Would you have a suggestion?
  14. Originally Posted by namdvt View Post
    The video colorspace is yuv420p10le. Would you have a suggestion?


    Do you really need an "image"? As in an "image sequence"?

    I have no idea what your "enhancements" entail, or what requirements you have for this "enhancement". Do you need RGB, or can you work in YUV? Or raw YUV video, for that matter?

    Best practice is to keep the same format. So if you start with yuv420p10le, then ideally you stay there, or at least in the YUV colorspace. If you stay in YUV, you avoid the YUV<=>RGB losses. If you stay with 4:2:0, you avoid the chroma up/downsampling losses.

    But sometimes you have a process or filter that only works in some other colorspace, like RGB; then it's unavoidable. In that case, processing at a higher bit depth is usually better than the same bit depth. As a general rule you need at least 2 extra bits to avoid rounding-error losses, and you have to process the chroma up/down properly by duplicating samples (up) or discarding those same samples (down) with nearest-neighbor interpolation

    Or maybe it really doesn't matter for this particular goal? If the round trip changes a few pixel values by +/- 1, does it matter here?
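As a rough numeric check of the "+2 bits" rule, here is a toy Python sketch using BT.709 full-range coefficients with float math, no dithering, and no chroma subsampling (not any particular tool's pipeline): quantizing the intermediate RGB at 16 bits leaves enough headroom for rounding to recover the original 10-bit YUV, while at the same 10 bits recovery is not guaranteed, and out-of-gamut values are clipped and lost at any bit depth.

```python
# Toy YUV<->RGB round trip with a quantized RGB intermediate.
# BT.709 constants; all values normalized to [0,1] (Y) and [-0.5,0.5] (U,V).

KR, KB = 0.2126, 0.0722
KG = 1 - KR - KB

def yuv_to_rgb(y, u, v):
    r = y + 2 * (1 - KR) * v
    b = y + 2 * (1 - KB) * u
    g = (y - KR * r - KB * b) / KG
    return r, g, b

def rgb_to_yuv(r, g, b):
    y = KR * r + KG * g + KB * b
    return y, (b - y) / (2 * (1 - KB)), (r - y) / (2 * (1 - KR))

def roundtrip(y10, u10, v10, bits):
    """10-bit YUV -> RGB quantized at `bits` -> back to 10-bit YUV."""
    scale = (1 << bits) - 1
    rgb = yuv_to_rgb(y10 / 1023, u10 / 1023 - 0.5, v10 / 1023 - 0.5)
    q = [min(max(round(c * scale), 0), scale) / scale for c in rgb]  # quantize + clip
    y, u, v = rgb_to_yuv(*q)
    return round(y * 1023), round((u + 0.5) * 1023), round((v + 0.5) * 1023)

sample = (500, 520, 530)
assert roundtrip(*sample, 16) == sample          # wide intermediate: recovered

# Out-of-RGB-gamut YUV is clipped, so it's lost at any intermediate depth:
assert roundtrip(100, 900, 900, 16) != (100, 900, 900)
```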
  15. Originally Posted by poisondeathray View Post
    Or maybe it really doesn't matter for this particular goal? If the round trip changes a few pixel values by +/- 1, does it matter here? [...]
    I'm working in RGB, and I think a difference of a few pixel values from rounding doesn't matter...
  16. Originally Posted by namdvt View Post
    I'm working in RGB, and I think a difference of a few pixel values from rounding doesn't matter...
    Then why not use 16-bit RGB? If you still need an image format, then TIFF or PNG, or EXR if processing in linear. 16-bit RGB is more standard than 10-bit RGB (many more programs will accept it), and the conversion back to 10-bit YUV will have higher precision (and can be lossless if done properly); 10-bit YUV to 10-bit RGB and back to 10-bit YUV can never be lossless. That's one of the reasons dpx was so common in post production for movies, at least in the past: it's +2 bits over the final distribution formats, which were all 8-bit. Now that 10-bit has become more common, even in consumer formats, the 12-bit intermediate is still uncommon; most people make the jump straight to 16-bit.

    But if your program/process finds it "easier" to deal with 10-bit values, that might be a consideration. Another consideration might be file size: dpx is "only" 10-bit but uncompressed, while PNG and EXR have compression options. Uncompressed usually processes faster in the absence of transfer I/O bottlenecks (compressed formats need to be decompressed or decoded)


