VideoHelp Forum
  1. I've been using Handbrake to compress my 1080p Blu-ray rips and it works great, but I have 4K HDR rips as well. I asked on the Handbrake forum if it supports this content, as I knew it didn't a long time ago, but maybe it had been updated. It doesn't. A lot of people were recommending StaxRip, but some people said XMedia Recode. I know that StaxRip does support this content, but I couldn't find anything on XMedia Recode. I'm not as well versed with XMedia Recode as I am with Handbrake, but I have used it before, so it would be preferable to StaxRip. Can anyone tell me definitively whether XMedia Recode has a true 10-bit pipeline and can pass through HDR metadata, leaving the encoded video identical to the source, just at a lower bitrate? I don't want any HDR-to-SDR conversion or BT.2020-to-BT.709 conversion or anything. If it does support it, is there anything I need to do to get it working?
  2. I've converted 10-bit HDR using VidCoder with the HEVC/Main 10 profile, and the results play and look fine in HDR on my TV; they're still 10-bit according to MediaInfo.

    However, I don't know what VidCoder is doing under the covers regarding the "10-bit pipeline".
    Last edited by davexnet; 11th Jun 2020 at 19:25.
  3. I'm not certain about XMedia Recode's 10-bit capability. Handbrake makes 10-bit videos, but only after converting to 8 bits first. I can find no evidence that XMedia Recode is any different. I've scoured the net, and the developer appears to be somewhat uncommunicative.
  4. You can test it with a 10-bit gradient. If there is an 8-bit step, you will get gaps in the data, such as 0,4,8 instead of 0,1,2,3,4,5,6,7,8, which of course results in more banding, less accuracy and worse compression.

    Handbrake/VidCoder definitely use an 8-bit intermediate stage; this is well documented. When you test, it dithers that intermediate stage (dither functions like noise to "hide" the banding), so it doesn't look as bad on a gradient as an undithered conversion, but the file size balloons because of the dithering.

    XMedia Recode can preserve 10-bit if you set it up correctly, but I see no way of preserving the HDR metadata automatically. I think some GUIs like StaxRip can.


    e.g.
    0-1023 gradient, original, produced at CRF 1: 5 KB
    x264 CLI, 10-bit, 2nd generation, CRF 1: 5 KB (output PSNR actually lossless @ CRF 1)
    XMedia Recode, 10-bit x264, 2nd generation, CRF 1 (same settings): 5 KB
    Handbrake "10bit", 2nd generation, x264, CRF 1: 70 KB
    Handbrake "10bit", 2nd generation, x265, CRF 1: 76 KB

    Handbrake's output is >10x the size, mainly because of the dithering, and there are gaps in the data (0,4,8) when you examine it. Note that on "real" content the "ballooning" in size won't be so drastic, but it's still suboptimal if you start with 10-bit content.
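
    If you want to check for the gaps numerically rather than by eye, here's a minimal sketch (assuming python with numpy and ffmpeg on the path; the filename is a placeholder):
    Code:
    # Decode the first frame to raw 10-bit little-endian YUV and list the
    # distinct luma values. A clean 10-bit encode of a 0-1023 ramp keeps
    # consecutive values (0,1,2,...); an 8-bit intermediate leaves gaps (0,4,8,...).
    import subprocess
    import numpy as np

    W, H = 2048, 1080   # size of the test ramp
    raw = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", "encoded_ramp.mp4",
         "-frames:v", "1", "-pix_fmt", "yuv420p10le", "-f", "rawvideo", "-"],
        stdout=subprocess.PIPE, check=True).stdout

    luma = np.frombuffer(raw, dtype="<u2")[:W * H]   # Y plane comes first
    values = np.unique(luma)
    print(len(values), "distinct luma values")
    print(values[:16])   # 0,1,2,3,... is good; 0,4,8,... means an 8-bit step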
  5. I recently tried out Ripbot264 to do 10-bit HEVC HDR passthrough (PQ and HLG); I didn't find it confusing and it seemed to work OK.

    I didn't do the 10-bit gradient test: how do you generate the gradient, with avisynth?
  6. Originally Posted by butterw View Post

    I didn't do the 10-bit gradient test: how do you generate the gradient, with avisynth?

    I recommend using vapoursynth, because vsedit can read bit depths higher than 8 with the color picker. If you use AvsPmod, the color picker will only read down-converted 8-bit values. Avisynth's waveform (Histogram) only displays down-converted 8-bit values too.

    Internally AVS+ can use higher bit depths, but how do you plan on examining the data? One workaround is other programs like ffmpeg -vf waveform (supports 8/10/12-bit waveforms).

    But to answer the question for avisynth: you can create a 10-bit RGB gradient, 0-1023, in an image editor (Photoshop or similar), then use "full range" equations to convert to YUV (the PC matrix in avisynth).

    In vapoursynth, you can use Python functions to generate the gradient directly, at different bit depths.
  7. OK, I can imagine that the difficulty may in fact be analyzing the result rather than generating a gradient. Could you still please post your 10-bit gradient test video here? It's easy enough, even for beginners, to measure file size and check for 10-bit output in MediaInfo.
    Also, why not use the encoder's lossless mode (x265 --lossless, x264 crf=0)?
  8. Originally Posted by butterw View Post
    OK, I can imagine that the difficulty may in fact be analyzing the result rather than generating a gradient. Could you still please post your 10-bit gradient test video here? It's easy enough, even for beginners, to measure file size and check for 10-bit output in MediaInfo.
    Also, why not use the encoder's lossless mode (x265 --lossless, x264 crf=0)?
    I run end-to-end tests for various workflows (e.g. before I start a project). "Lossless mode" isn't as widely supported in other programs, NLEs etc. And for a simple gradient, CRF 1 is effectively lossless (you can re-encode with CRF 1 and PSNR is infinity).
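
    If you want to verify that yourself, a quick sketch using ffmpeg's psnr filter ("inf" in the summary printed at the end means the encode is lossless; the first filename is a placeholder):
    Code:
    # Compare an encode against the attached ramp; the psnr filter prints its
    # summary (y/u/v/average PSNR) to stderr when the run finishes.
    import subprocess

    subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", "reencoded_ramp.mp4",
         "-i", "orig_crf1unflagged.mp4",
         "-lavfi", "[0:v][1:v]psnr", "-f", "null", "-"],
        check=True)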

    Be careful about looking at the file size only, especially when it's just 1 frame at ~5 KB: container overhead can be that large (check the elementary stream if you're doing that). The encoding settings can also change the size (for example, XMedia Recode didn't use the same settings as the default CLI, and the file was >2x the size).
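
    To take the container out of the equation, you can copy the video stream out raw and compare those sizes instead. A sketch (the input name is a placeholder; the ramp is AVC so the raw format is h264, use "-f hevc" for x265 output):
    Code:
    # Strip the MP4/MKV container and report the size of the bare video stream.
    import os
    import subprocess

    subprocess.run(
        ["ffmpeg", "-v", "error", "-y", "-i", "reencoded_ramp.mp4",
         "-an", "-c:v", "copy", "-f", "h264", "ramp_raw.264"],
        check=True)
    print(os.path.getsize("ramp_raw.264"), "bytes of elementary stream")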

    A program that does not apply dither (or maybe you can disable it in Handbrake) would still have gaps in the data and banding, but a smaller file size than with dither. For example, if I convert to 8-bit in avs or vpy without dither, then encode, it's about the same size (~5 KB), but has gaps in the data (shown below in a waveform).
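
    In vapoursynth that round trip looks something like this (just a sketch; the BlankClip is only a placeholder so the script runs on its own, substitute the real 10-bit gradient clip from the scripts later in this thread):
    Code:
    import vapoursynth as vs
    from vapoursynth import core

    # Placeholder source; replace with the actual 10-bit gradient clip.
    ramp = core.std.BlankClip(width=1024, height=720, format=vs.YUV420P10, length=1)

    # dither_type="none": small encode, but the 10-bit values land on every 4th
    # step (gaps). "error_diffusion" hides the banding as noise and balloons the encode.
    down8 = ramp.resize.Point(format=vs.YUV420P8, dither_type="none")
    #down8 = ramp.resize.Point(format=vs.YUV420P8, dither_type="error_diffusion")

    up10 = down8.resize.Point(format=vs.YUV420P10)
    up10.set_output()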

    "orig_crf1unflagged.mp4" attached is a 2048x1080 "double wide" ramp pattern yuv420p10 AVC , 1 frame, each value is repeated, 0,0,1,1,2,2... so 1024x2 = 2048 . It's left unflagged (because range flag can cause some programs to compress the range)



    I didn't upload all the waveform screenshots, but if you look at a 10-bit waveform (it's 1024 pixels "tall"; each value 0-1023 is represented), you can see the original looks perfect, the Handbrake version has vertical gaps (missing values) and is noisy (dither), and the 8-bit down + 10-bit up version has vertical gaps but is clean.

    e.g.
    ffmpeg -i input.ext -vf waveform -frames:v 1 whatever.png
    [Attached thumbnails: orig10bit.png, hb_x264_10bit_crf1_2ndgen.png, down8bit,up10bit_x264_10bit_crf1_2ndgen.png]
    [Attached file: orig_crf1unflagged.mp4]
  9. A script to make poisondeathray's gradient, for any bit depth, could go like this in vapoursynth; I just added panning for the image:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    import numpy as np
    import functools
    #from view import Preview
    
    WIDTH = 640
    HEIGHT = 360
    color1 = (0,0,0)
    color2 = (1,1,1)
    format_out = vs.YUV420P10
    
    
    Img1 = core.std.BlankClip(width=WIDTH, height=HEIGHT, format=vs.RGBS, color=color1, length=1)
    numpyImg1 = np.dstack([np.asarray(Img1.get_frame(0).get_read_array(i)) for i in range(3)])
    Img2 = core.std.BlankClip(width=WIDTH, height=HEIGHT, format=vs.RGBS, color=color2, length=1)
    numpyImg2 = np.dstack([np.asarray(Img2.get_frame(0).get_read_array(i)) for i in range(3)])
    
    #create gradient in floating point
    c = np.linspace(0, 1, WIDTH)[None,:, None]
    gradient = numpyImg1 + (numpyImg2 - numpyImg1) * c
    
    def get_vsFrame(n, f, npArray):
        vsFrame = f.copy()
        [np.copyto(np.asarray(vsFrame.get_write_array(i)), npArray[:, :, i])    for i in range(3)]
        return vsFrame
    
    clip_placeholder = core.std.BlankClip(width=WIDTH, height=HEIGHT, format= vs.RGBS, length=1)  
    clip = core.std.ModifyFrame(clip_placeholder, clip_placeholder, functools.partial(get_vsFrame, npArray=gradient))
    clip = clip * WIDTH
    
    def pan(n, clip):
        if n: 
            cut1 = clip.std.CropAbs(width=WIDTH-n, height=HEIGHT, left=n, top=0)
            cut2 = clip.std.CropAbs(width=n, height=HEIGHT, left=0, top=0)
            return core.std.StackHorizontal([cut1,cut2])
        else:
            return clip
       
    clip_out = core.std.FrameEval(clip, functools.partial(pan, clip=clip))
    clip_out = clip_out.resize.Bilinear(matrix_s='709',format=format_out)
    clip_out = clip_out.std.AssumeFPS(fpsnum=60000, fpsden=1001)
    
    #Preview([clip_out])
    clip_out.set_output()
    The gradient itself is built in numpy; that could be avoided with something that already exists as a function, but I don't know of any at the moment.
    Last edited by _Al_; 30th Jul 2020 at 21:10.
  10. ^ Yes I got that ramp function from _Al_

    https://forum.doom9.org/showthread.php?p=1885013

    vapoursynth is nicer for this, not just to make the gradient: you can run a color picker in vsedit and it will read out the values, so you can "see" the gaps easily from going down to 8-bit (0-255).
  11. I got rid of that vs helper function from KotoriCANOE (which I was learning from) that deals with numpy; it's down to basics now, and it's always better to see things at their basics.
  12. Thanks, guys.
    I've come across this post (about the Hybrid encoder) which shows screenshot results of a similar gradient test (an extreme levels function has been applied to the result to highlight what is happening):
    https://forum.blackmagicdesign.com/viewtopic.php?f=3&t=109259#p613300


    For true 10-bit pass-through, you need:
    - a 10-bit capable ffmpeg/x265;
    - avisynth/vapoursynth filters that work in 10-bit;
    - an encoding app that uses 10-bit or better internal precision;
    - for HDR pass-through, the app also needs to parse the relevant info and pass it on to x265 (e.g. PQ transfer, mastering metadata, MaxCLL; a rough sketch of that last part is shown below).
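
    As a sketch of what that last point means in practice (not what any particular GUI actually runs): the app reads the HDR10 values from the source (ffprobe/MediaInfo will show them) and hands them to x265. The mastering-display and MaxCLL numbers below are placeholders:
    Code:
    # Pass HDR10 signalling to x265 through ffmpeg. Values shown are
    # placeholders; use the ones reported for your source.
    import subprocess

    x265_params = ":".join([
        "hdr10=1",                # write the HDR10 SEI (older x265 builds call this hdr=1)
        "repeat-headers=1",
        "colorprim=bt2020",
        "transfer=smpte2084",     # PQ
        "colormatrix=bt2020nc",
        "master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)",
        "max-cll=1000,400",
    ])

    subprocess.run([
        "ffmpeg", "-i", "source_hdr.mkv",
        "-c:v", "libx265", "-pix_fmt", "yuv420p10le",   # keep 10-bit
        "-crf", "18", "-x265-params", x265_params,
        "-c:a", "copy", "out_hdr.mkv",
    ], check=True)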
  13. Don't mean to thread jack, but I have been thinking about this lately, let's assume that you have 10-bit source, a true 10-bit processing pipeline and a 10-bit encoder, does it really make any difference from an 8-bit encode, if you don't have a 10-bit monitor, cables, video card, and drivers?

    I know that the argument is that 10-bit encodes give smoother gradients even when displayed on an 8-bit monitor, but that leads to the question of how is the 10-bit signal mapped to an 8-bit display?
    Last edited by sophisticles; 31st Jul 2020 at 11:44.
  14. SDR 10-bit HEVC encodes: the consensus is that yes, they're noticeably better than an 8-bit encode at the same bitrate (less banding, more efficient encoding), even on an 8-bit monitor! My own experience with 720p/1080p encodes played in mpc-hc confirms this: no disadvantage to 10-bit encoding except more limited hardware decoding support.

    As to why, it can only come down to two things: the gains from a more efficiently encoded 10-bit YUV image, and the fact that those gains are not offset by display on an 8-bit-only RGB monitor.
    Last edited by butterw; 31st Jul 2020 at 13:05.
  15. Originally Posted by sophisticles View Post
    Don't mean to thread jack, but I have been thinking about this lately, let's assume that you have 10-bit source, a true 10-bit processing pipeline and a 10-bit encoder, does it really make any difference from an 8-bit encode, if you don't have a 10-bit monitor, cables, video card, and drivers?

    I know that the argument is that 10-bit encodes give smoother gradients even when displayed on an 8-bit monitor, but that leads to the question of how is the 10-bit signal mapped to an 8-bit display?

    You asked this before, from an 8-bit YUV source; recall the lighthouse thread? NVEnc demonstrated the 10-bit vs. 8-bit benefit too:
    https://forum.videohelp.com/threads/394569-I-there-any-benefit-to-encoding-to-10-bits


    It's "mapped" in the sense that 10-bit YUV is converted to accurate 8-bit sRGB colors. YUV is a larger color model, and many 8-bit YUV values "map" to the same 8-bit RGB value (colors are duplicated, which contributes to banding). +2 YUV bits means no errors (assuming it's done properly; some pipelines, like current browsers, go through an 8-bit YUV intermediate, which obviously negates the benefit).

    jagabo has explained this nicely before
    https://forum.videohelp.com/threads/381298-RGB-to-YUV-to-RGB#post2467087
  16. Originally Posted by poisondeathray View Post
    Originally Posted by sophisticles View Post
    Don't mean to thread jack, but I have been thinking about this lately, let's assume that you have 10-bit source, a true 10-bit processing pipeline and a 10-bit encoder, does it really make any difference from an 8-bit encode, if you don't have a 10-bit monitor, cables, video card, and drivers?

    I know that the argument is that 10-bit encodes give smoother gradients even when displayed on an 8-bit monitor, but that leads to the question of how is the 10-bit signal mapped to an 8-bit display?

    You asked this before, from an 8-bit YUV source; recall the lighthouse thread? NVEnc demonstrated the 10-bit vs. 8-bit benefit too:
    https://forum.videohelp.com/threads/394569-I-there-any-benefit-to-encoding-to-10-bits


    It's "mapped" in the sense that 10-bit YUV is converted to accurate 8-bit sRGB colors. YUV is a larger color model, and many 8-bit YUV values "map" to the same 8-bit RGB value (colors are duplicated, which contributes to banding). +2 YUV bits means no errors (assuming it's done properly; some pipelines, like current browsers, go through an 8-bit YUV intermediate, which obviously negates the benefit).

    jagabo has explained this nicely before
    https://forum.videohelp.com/threads/381298-RGB-to-YUV-to-RGB#post2467087
    Yeah, I have read through that, but I am not entirely convinced, for a few reasons:

    1) If I do a test encode with 8-bit x264 and 10-bit x264, then I definitely see an improvement.

    2) Same holds true with either NVENC HEVC or QSV HEVC, to the point where I would say you should almost always use 10-bit if you are planning on using x264, NVENC or QSV.

    3) x265 is a different story: to my eyes, on some encodes x265 10-bit looks worse than 8-bit, and 12-bit looks way worse with certain encodes. Which makes me wonder how 12-bit is being mapped to an 8-bit monitor.

    I guess the issue I am having is that 8-bit color means there are 256 shades for each of the red, green and blue primary colors, which means that with an 8-bit hardware/software stack (monitor, video card, cables, drivers, content) each channel of a pixel can be one of 256 shades and that's it.

    But 10-bit color means there are 1024 shades for each of the red, green and blue primary colors; so if you have 10-bit content, drivers, cables and video card, but the monitor can only display 256 shades per channel, then you have to have a way to map the 1024 shades to 256 shades, and in the process of doing so you effectively lose any benefit of the extra shades of color, or so I would think. With 12-bit the problem becomes bigger.

    So how exactly are you getting "smoother gradients"? If anything, you should be getting coarser gradients as you try to shoehorn 1024 shades of color into 256 possible values.
    Last edited by sophisticles; 1st Aug 2020 at 13:02.
  17. Originally Posted by sophisticles View Post


    So how exactly are you getting "smoother gradients"? If anything, you should be getting coarser gradients as you try to shoehorn 1024 shades of color into 256 possible values.

    If you were converting 10-bit RGB to 8-bit RGB, yes, that's correct. Or 10-bit YUV to 8-bit YUV. That's entirely true, but it's not what is being discussed. The video is in YUV, and it's being converted to 8-bit RGB for display on a monitor. This is just a colorspace/color-model conversion discussion; note that's a separate topic from the encoding efficiency gained by using 10 bits. When you go from 8-bit YUV to 8-bit RGB, you have errors of +/- 3 per channel (compared to the original 8-bit RGB values). 10-bit YUV is +/- 0 (i.e. perfect, in the absence of lossy encoding etc.), so if you start with a perfect gradient, you end with a perfect gradient, no banding.

    3) x265 is a different story: to my eyes, on some encodes x265 10-bit looks worse than 8-bit, and 12-bit looks way worse with certain encodes. Which makes me wonder how 12-bit is being mapped to an 8-bit monitor.
    For encoding, there is a point at lower bitrate ranges where the "cost" of encoding 10 bits outweighs the efficiency gains from the higher precision. But at typical, not severely bitrate-starved ranges, 10-bit is almost always better if done correctly, for any encoder or format. (The other "negatives" of 10-bit encoding are slower encoding, higher decoding demands, and less compatibility with some hardware/devices and even some software.)

    10-bit is enough for YUV to reproduce the original 8-bit RGB colors perfectly (if done correctly). 12-bit from an 8-bit source is too, but it's almost always worse in terms of encoding efficiency: the cost of encoding the extra bits does not outweigh the gains. But 12-bit YUV is "enough" for 10-bit RGB; that's where 12-bit is used, for 10-bit RGB display. It's the +2 bits rule.
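
    If you want to see the +2 bits rule in numbers, here's a rough sketch (plain BT.709 full-range maths in numpy, no subsampling or compression; the exact worst-case error depends on range and matrix, but the pattern is the point):
    Code:
    # Round-trip random 8-bit RGB colors through an N-bit YUV intermediate and
    # measure the worst per-channel error against the original 8-bit values.
    import numpy as np

    def rgb_to_yuv(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        u = (b - y) / 1.8556
        v = (r - y) / 1.5748
        return np.stack([y, u, v], axis=-1)

    def yuv_to_rgb(yuv):
        y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
        r = y + 1.5748 * v
        b = y + 1.8556 * u
        g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
        return np.stack([r, g, b], axis=-1)

    rng = np.random.default_rng(0)
    rgb8 = rng.integers(0, 256, size=(200000, 3))   # random 8-bit RGB colors
    rgb = rgb8 / 255.0

    for bits in (8, 10):
        scale = 2 ** bits - 1
        yuv_q = np.round(rgb_to_yuv(rgb) * scale) / scale          # quantise YUV to N bits
        back = np.clip(np.round(yuv_to_rgb(yuv_q) * 255), 0, 255)  # back to 8-bit RGB
        print(bits, "bit YUV intermediate, worst RGB channel error:",
              int(np.abs(back - rgb8).max()))
    # Expected result: the 8-bit intermediate shows nonzero errors (merged colors),
    # while the 10-bit intermediate comes back exact.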
  18. I said I didn't know of any functions for gradients in vapoursynth, but I found something relatively new: core.draw.Draw(), sort of the same as mt_lutspa for Avisynth.
    It can create black-and-white gradient masks at any integer bit depth from 8 to 16 (not float); for example, our simple gradient from black (left) to white (right):
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    
    WIDTH  = 1920
    HEIGHT = 1080
    DEPTH  = 8        #bit depth mask as integer 8, 9, 10, 11, ..., to 16 
    
    
    format = core.register_format(vs.ColorFamily.GRAY, vs.INTEGER, DEPTH, 0, 0).id
    MAX = 2**DEPTH - 1
    blank = core.std.BlankClip(width=WIDTH, height=HEIGHT, format=format, length=1)
    gradient = core.draw.Draw(blank, [f'x {MAX/WIDTH} *'])
    gradient.set_output()
    Last edited by _Al_; 2nd Aug 2020 at 15:46.
  19. Gradient from the above script:
    [Attached thumbnail: Capture.JPG]
  20. @ _Al_

    For the script in post #9, it should be range_s="full" when converting to YUV, for the full-range gradient.


    That Draw function is based on mt_lutspa, so I was thinking AVS+ should be able to do this too:
    http://avisynth.nl/index.php/MaskTools2/mt_lutspa

    10-bit gradient, 0-1023 (1024 width, 1024 values):
    Code:
    blankclip(width=1024, height=720, pixel_type="YUV420P10")
    mt_lutspa(mode="relative", yexpr="x 1024 *", u=-128, v=-128)
    Verifying in vsedit (using AVISource to load the .avs in vapoursynth), the pixel values are correct. I would have expected to set u,v to -512 for 10-bit, but that gives an actual value of zero; "-128" works (giving an actual value of 512). It's a shame AvsPmod is limited to an 8-bit preview.
    Last edited by poisondeathray; 2nd Aug 2020 at 16:49.
  21. In vapoursynth, U and V are set in that same expression:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    
    WIDTH  = 1920
    HEIGHT = 1080
    DEPTH  = 10        #bit depth mask as integer 8, 9, 10, 11, ..., to 16 
    
    format = core.register_format(vs.ColorFamily.YUV , vs.INTEGER, DEPTH, 1, 1).id  #420 subsampling
    MAX = 2**DEPTH
    blank = core.std.BlankClip(width=WIDTH, height=HEIGHT, format=format, length=1)
    gradient = core.draw.Draw(blank, [f'x {MAX/WIDTH} *',f'{MAX/2}',f'{MAX/2}']) #x goes from 0 to WIDTH
    gradient.set_output()


