VideoHelp Forum




  1. I was not familiar with this until just a couple of days ago. I understand it is used in H.264.

    https://wiki.multimedia.cx/index.php/YCoCg
  2. Member
    Sounds like someone optimized the "tilt angles" between two colorspace cubes...
  3. Originally Posted by chris319 View Post
    I was not familiar with this until just a couple of days ago. I understand it is used in H.264.

    https://wiki.multimedia.cx/index.php/YCoCg
    It can be used in any standardized, well-defined video codec - it works fine from MPEG-1 upward; I have tested this.

    Some scientific papers claim that it provides better de-correlation of the chrominance signal from the luminance signal than YCbCr, and it can also be faster to compute than YCbCr (especially when a dedicated HW color space converter is not present).
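
    To illustrate the "faster to compute" point, here is a minimal C sketch of the lifting form usually called YCoCg-R (the exactly reversible variant used for lossless coding in H.264): only adds, subtracts and shifts, no multiplies. Note that Co and Cg come out as 9-bit signed values.
    Code:
    #include <stdio.h>
    
    /* YCoCg-R: lifting form of the YCoCg transform.
       Only adds, subtracts and shifts - no multiplies, no floating point.
       Co and Cg span -255..+255, i.e. they need 9 bits.
       (>> on a negative value is the usual arithmetic shift on common
       compilers; the round trip only needs it to be consistent.) */
    static void rgb_to_ycocg_r(int r, int g, int b, int *y, int *co, int *cg)
    {
        int t;
        *co = r - b;
        t   = b + (*co >> 1);
        *cg = g - t;
        *y  = t + (*cg >> 1);
    }
    
    static void ycocg_r_to_rgb(int y, int co, int cg, int *r, int *g, int *b)
    {
        int t = y - (cg >> 1);
        *g = cg + t;
        *b = t - (co >> 1);
        *r = *b + co;
    }
    
    int main(void)
    {
        int y, co, cg, r, g, b;
        rgb_to_ycocg_r(200, 30, 100, &y, &co, &cg);
        ycocg_r_to_rgb(y, co, cg, &r, &g, &b);
        printf("Y=%d Co=%d Cg=%d -> R=%d G=%d B=%d\n", y, co, cg, r, g, b);
        return 0;
    }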
  4. Trouble is, it requires 9 bits of chroma for a transparent (lossless) round trip. To be fair, you'd have to compare it to a hypothetical 9-bit YCbCr.

    I played around (a lot) and got the published transfer functions to work in Avisynth (because reasons);
    the round trip conversion is close but not exact, due to lack of precision in the intermediate Co & Cg.
    (They have to be divided by 2 to fit in the 0-255 range; hence, 9 bits are needed for full precision)
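
    (For example, Co = R-B can be anything from -255 to +255 - 511 possible values - so it doesn't fit in 8 bits; storing (R-B)/2+128 throws away the least significant bit, which is where the small round-trip error comes from.)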

    Here (.ods, 18KB) is the spreadsheet I used to work out the equations; and the AviSynth script is below.
    Code:
    ## MaskTools required
    LoadPlugin(pathBase + "MaskTools2\masktools2.dll")
    
    Colorbars
    ConvertToRGB32(matrix="PC.601")
    
    org=Last
    #return Last
    
    R  = ShowRed  ("YV12")
    G  = ShowGreen("YV12")
    B  = ShowBlue ("YV12")
    
    # (YCoCg equations reworked to obtain values that fall within 0-255 range)
    
    # Co = (R-B)/2+128
    # t  = B+(Co-128)/2
    # Cg = (G-t)/2+128
    # Y  = t+(Cg-128)
    Co = mt_lutxy(R, B,  chroma="-128", yexpr=mt_polish("(x-y)/2+128"))
    t  = mt_lutxy(B, Co, chroma="-128", yexpr=mt_polish("x+(y-128)/2"))
    Cg = mt_lutxy(G, t,  chroma="-128", yexpr=mt_polish("(x-y)/2+128"))
    Y  = mt_lutxy(t, Cg, chroma="-128", yexpr=mt_polish("x+(y-128)"))
    
    YToUV(Co, Cg, Y)
    #return Last
    
    ####
    Y  = ConvertToY8
    Co = UtoY8
    Cg = VtoY8 
    
    # t  = Y-(Cg-128)
    # G  = 2*(Cg-128)+t
    # B  = t-(Co-128)/2
    # R  = B+2*(Co-128)
    t  = mt_lutxy(Y,  Cg, chroma="-128", yexpr=mt_polish("x-(y-128)"))
    G2 = mt_lutxy(Cg, t,  chroma="-128", yexpr=mt_polish("2*(x-128)+y"))
    B2 = mt_lutxy(t,  Co, chroma="-128", yexpr=mt_polish("x-(y-128)/2"))
    R2 = mt_lutxy(B2, Co, chroma="-128", yexpr=mt_polish("x+2*(y-128)"))
    
    MergeRGB(R2, G2, B2)
    return Interleave(org, Last)
    (posted very late - errors likely)
  5. Member
    Of course: you have two color space cubes which intersect each other at a certain angle, and even if this angle is convenient for binary calculation, a lossless 8-bit to 8-bit projection is not possible as long as the two color spaces are not identical (trivial transpositions ignored here).

    I am curious:

    Would support for this color space be interesting for AviSynth(+)? Would an intermediate YUV processing in YCoCg and a late conversion (just before return) to Rec.601/Rec.709 (like with ColorMatrix, in case an encoder does not support this space directly) be an advantage?

    I see there is already a plugin: ConvertToYCgCo; but that doesn't look like native support yet. The order of the chroma difference components appears inconsistent here.
    Last edited by LigH.de; 24th Feb 2017 at 01:49.
  6. There is a true 24-bit version of this by David Cary. I will post the code in the Programming section when I get a working C example.

    So far it passes muster quite nicely. The values that come out are the same ones that go in, spanning the range from 0 through 255. You wind up with three signed bytes representing Y, Co and Cg. The R, G and B values must be UNsigned. I also have a demo written in PureBasic which makes pretty pictures if anyone is interested.

    I would think this would be of interest to avisynth folks. As it is error free, you could re-encode the RGB it returns to 601, 709, 2020 or whatever.

    Semantically it is error free, not lossless.
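
    To give a rough idea of how an 8-bit-in / 8-bit-out round trip can be error free (this is just my own minimal sketch of the general technique, not necessarily the code I'll be posting), the lifting steps can be done with wraparound (mod-256) arithmetic on unsigned bytes:
    Code:
    #include <stdint.h>
    #include <stdio.h>
    
    /* Illustrative only: YCoCg-style lifting done modulo 256, so every
       plane stays 8 bits and the round trip is still exact. */
    static void fwd(uint8_t r, uint8_t g, uint8_t b,
                    uint8_t *y, uint8_t *co, uint8_t *cg)
    {
        uint8_t t;
        *co = (uint8_t)(r - b);            /* wraps mod 256 */
        t   = (uint8_t)(b + (*co >> 1));
        *cg = (uint8_t)(g - t);
        *y  = (uint8_t)(t + (*cg >> 1));
    }
    
    static void inv(uint8_t y, uint8_t co, uint8_t cg,
                    uint8_t *r, uint8_t *g, uint8_t *b)
    {
        uint8_t t = (uint8_t)(y - (cg >> 1));
        *g = (uint8_t)(cg + t);
        *b = (uint8_t)(t - (co >> 1));
        *r = (uint8_t)(*b + co);
    }
    
    int main(void)
    {
        /* exhaustive round trip over all 2^24 RGB triples */
        long errors = 0;
        for (int r = 0; r < 256; r++)
            for (int g = 0; g < 256; g++)
                for (int b = 0; b < 256; b++) {
                    uint8_t y, co, cg, r2, g2, b2;
                    fwd((uint8_t)r, (uint8_t)g, (uint8_t)b, &y, &co, &cg);
                    inv(y, co, cg, &r2, &g2, &b2);
                    if (r2 != r || g2 != g || b2 != b) errors++;
                }
        printf("round-trip errors: %ld\n", errors);   /* prints 0 */
        return 0;
    }
    The decorrelation gets worse wherever a value wraps around, but every one of the 2^24 RGB triples comes back exactly.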
  7. I don't think it would be useful for avisynth folks in real life, because you still need a format that supports YUV (or Y'CbCr). You still need YUV in the end for 99.999% of distribution formats (real physical files). Sure, if more formats supported YCgCo then it might be useful, but that's not going to happen. x264 supports it (a specific H.264 encoder implementation), but common software and hardware decoders do not necessarily support it.

    If you were starting in RGB (+/- alpha) but needed YUV filters (one of the usual reasons for doing the RGB to YUV transform in the first place), and then had to round trip back to RGB, there is already a truly lossless workaround in avisynth, working in 8 bits per channel (only), by treating each R, G, B channel as a Y plane

    eg.
    Code:
    main=WhateverSource()
    
    main.showred("YV12")
    ##Your FILTERS HERE
    r=last
    
    main.showblue("YV12")
    ##Your FILTERS HERE
    b=last
    
    main.showgreen("YV12")
    ##Your FILTERS HERE
    g=last
    
    main.showalpha("Y8")
    a=last
    
    mergeargb(a,r,g,b)
    Where this is useful is encoding to lossless storage formats; YCgCo is more efficient to store than RGB (i.e. RGB stored as RGB)



    Otherwise there is no reason to convert to anything in the first place. You don't convert "just for fun" .
  8. Ideally all color encoding would be error free. The challenge is getting widespread adoption of an improved technology.

    I would love to build this into ffmpeg and VLC but wouldn't know where to begin.

    I have posted the code on the Programming board. That version gives a 24-bit payload: 3 x 8-bit bytes for Y, Co and Cg.
  9. Originally Posted by chris319 View Post
    Ideally all color encoding would be error free. The challenge is getting widespread adoption of an improved technology.

    I would love to build this into ffmpeg and VLC but wouldn't know where to begin.

    I have posted the code on the Programming board. That version gives a 24-bit payload: 3 x 8-bit bytes for Y, Co and Cg.


    Lots of things would be ideal but lower on the list of priorities. Ideally there would be no chroma subsampling either, and everything would be >10bits/channel.

    But the reality is all about $$. Existing standards, existing infrastructure, bandwidth, cost, compatibility and legacy issues. Large companies with motives and patent pools dictate what gets "supported" and what is less likely to be supported. It's an insurmountable challenge.

    The YCgCo conversion is already implemented in ffmpeg in the -vf colorspace filter, and also in the zimg library (I don't think ffmpeg's zscale filter, which is based on the zimg library, has all the conversions implemented, but vapoursynth's code based on zimg does have YCgCo).
  10. YUV (or YCbCr) is accurate to +/- 1 if properly implemented, and that figure is for one encoding pass. If resampled, then all bets are off. I don't know how many times YouTube, Vimeo, etc. resample video, but the color error has to be cumulative with every resampling.

    YCoCg is incompatible with everything out there. There is probably too much inertia to change anything, but this proves that error-free color conversion can be done. If you want to duplicate video internally, and distribution or transmission are not concerns, this would give you error-free results.
    Last edited by chris319; 24th Feb 2017 at 23:14.
  11. Originally Posted by chris319 View Post
    YUV (or YCbCr) is accurate to +/- 1 if properly implemented, and that figure is for one encoding pass. If resampled, then all bets are off.
    Right, and that's "good enough" in 99.999% of end-user cases. You're going to incur more losses through lossy compression anyway. That's where all the money and R&D are going - into improving compression. Saving money is a tangible benefit.

    YCoCg is incompatible with everything out there. There is probably too much inertia to change anything, but this does prove it can be done. If you want to duplicate video internally, and distribution or transmission is not a concern, this would give you error-free results.
    It's not completely incompatible with everything... it's been implemented in x264 for quite some time as a VUI option (but nobody uses it); and it's actually in the official ITU H.265 spec, but nobody is going to use it there either. Rec.2020 is what is being pushed - all the newer TVs, devices, etc. already have support for it.

    If you want to duplicate internally, you already have error-free, mathematically lossless workflows using lossless RGB codecs (some of them don't store the data as RGB, because of compression inefficiencies), including temporal compression.
  12. It's not completely incompatible with everything... it's been implemented in x264 for quite some time as a VUI option
    I should have been more specific. I was referring to 24-bit or 3 x 8 YCoCg, or YCoCg24. Is that part of H.264 or H.265 or is it a higher bit count? One implementation of YCoCg adds a ninth bit to the Co and Cg samples, making it a 26-bit system.
    Last edited by chris319; 25th Feb 2017 at 00:23.
  13. Originally Posted by chris319 View Post
    It's not completely incompatible with everything... it's been implemented in x264 for quite some time as a VUI option
    I should have been more specific. I was referring to 24-bit or 3 x 8 YCoCg, or YCoCg24. Is that part of H.264 or H.265 or is it a higher bit count?
    I don't know; you can download the ITU paper and take a look.

    But for true lossless encoding modes I suspect higher bit depths, because the transforms in other lossless codecs that don't store RGB as RGB definitely use higher depth internally - the coded difference is usually +2 per channel. But those are truly lossless codecs. The "original" YCgCo paper did too. The "prototypical" one is FFV1, which uses the JPEG2000 RCT for the lossless YCbCr/RGB transform. FFV1 is battle-tested and proven in real use for >10 years. Originally it only offered intra modes, but it can be used with temporal compression.
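
    For reference, here is a small C sketch of that JPEG2000 RCT as I understand it (double-check against the actual spec): exactly reversible, but the two difference channels span -255..+255, i.e. they need 9 bits.
    Code:
    #include <stdio.h>
    
    /* JPEG2000 reversible colour transform (RCT). */
    static int floor_div4(int v) { return v >= 0 ? v / 4 : -((-v + 3) / 4); }
    
    static void rct_fwd(int r, int g, int b, int *y, int *u, int *v)
    {
        *y = floor_div4(r + 2 * g + b);    /* 8-bit range  */
        *u = b - g;                        /* 9-bit signed */
        *v = r - g;                        /* 9-bit signed */
    }
    
    static void rct_inv(int y, int u, int v, int *r, int *g, int *b)
    {
        *g = y - floor_div4(u + v);
        *r = v + *g;
        *b = u + *g;
    }
    
    int main(void)
    {
        int y, u, v, r, g, b;
        rct_fwd(12, 200, 45, &y, &u, &v);
        rct_inv(y, u, v, &r, &g, &b);
        printf("R=%d G=%d B=%d\n", r, g, b);   /* prints 12 200 45 */
        return 0;
    }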

    Now, if you could make some sort of business case, you can save x% bandwidth using YCgCo with so and so codec etc, instead of JPEG2000-RCT or XYZ transform etc...... then it might be more persuasive, or if speed was y% faster etc...
  14. I know you were originally only talking about the colorspace/model conversions, but you need to look at the bigger picture too - more "bits" isn't always "worse". You'd think it "costs" more in terms of storage (it certainly does in uncompressed terms), but for lossy compression it turns out that 10-bit H.264 is actually more efficient than 8-bit in most cases. i.e. you can achieve a higher objective PSNR or SSIM score, or higher subjective quality, at a lower bitrate (file size) with 10-bit encoding, even with an 8-bit source file. You have higher accuracy and less truncation of motion estimation vectors. It's not just some theoretical thing either - this has been proven in real use over the last few years.
  15. In addition to file size you also have to think about bandwidth in transmission applications such as satellite, broadcast and even YouTube/vimeo streaming, especially when using RF spectrum.
  16. Originally Posted by chris319 View Post
    Ideally all color encoding would be error free. The challenge is getting widespread adoption of an improved technology.

    I would love to build this into ffmpeg and VLC but wouldn't know where to begin.

    I have posted the code on the Programming board. That version gives a 24-bit payload: 3 x 8-bit bytes for Y, Co and Cg.

    This topic reminds me of an earlier discussion: https://forum.doom9.org/showthread.php?t=113798

    ffmpeg supports YCoCg quite nicely, except for zscale, where YCoCg is not recognized as a valid colorspace.
  17. ffmpeg supports YCoCg quite nicely, except for zscale, where YCoCg is not recognized as a valid colorspace.
    To make sure we're talking about the same thing, does ffmpeg support YCoCg24 as I have presented it in the Programming section and in an earlier post in this thread? Some YCoCg implementations add a ninth bit to the chroma samples; you might call that YCoCg26, for 26 bits. I have presented YCoCg24, consisting of 3 x 8-bit samples as the name implies. It does not add extra bits.
  18. Originally Posted by chris319 View Post
    ffmpeg supports YCoCg quite nicely, except for zscale, where YCoCg is not recognized as a valid colorspace.
    To make sure we're talking about the same thing, does ffmpeg support YCoCg24 as I have presented it in the Programming section and in an earlier post in this thread? Some YCoCg implementations add a ninth bit to the chroma samples; you might call that YCoCg26, for 26 bits. I have presented YCoCg24, consisting of 3 x 8-bit samples as the name implies. It does not add extra bits.
    It seems that ffmpeg has implemented YCgCo in the way covered by https://www.itu.int/rec/T-REC-H.264-201610-I/en (page 396).
  19. Originally Posted by chris319 View Post
    In addition to file size you also have to think about bandwidth in transmission applications such as satellite, broadcast and even YouTube/vimeo streaming, especially when using RF spectrum.

    That is file size. You transmit actual data - that takes bits. That's why we're talking about compression: bandwidth and money. If Netflix or the BBC can send the same-quality show with half the bits (thus half the file size), that is huge - and you're hired. That's why this topic is so much lower on the list. The appreciable benefit is negligible compared to even a minor advance in compression technology.

    Just because YCgCo support is outlined in the ITU specs, it doesn't mean it's implemented end to end, or even properly. Many features are outlined in specs, but only a few actually make it into hardware decoders and devices. That's what the comments on infrastructure were referring to. The switching costs are high - you'd have to upgrade all the decoding chips in cable boxes and set-top boxes, up and downstream; all the portable devices, phones, TVs, hardware. Browsers and streaming applications would need to be modified. It would cost less to fly a few manned missions to Mars. But if you can make a business case, some cost/benefit analysis... people will listen.

    Even on a smaller scale - if you can demonstrate that this modified YCgCo implementation has some benefit in a real-world scenario over some existing intermediate workflow, people would use it. I would use it. People care about actual results; everything else is academic. For example, if you could replace, say, FFV1's portion of the transform routine (that is only a portion of what is computed in compression) and demonstrate some benefit (in terms of computational speed or compression ratio), at least a handful of people would use it for intermediate workflows. But for the general-use, lossy case, it's not going to catch on. Just another footnote in history.
  20. There is a paper, "Reversible Colour Spaces without Increased Bit Depth and Their Adaptive Selection" by Tilo Strutz and Alexander Leipnitz (2015).

    Basically, 24-bit RGB data to 24-bit YUV data - a "24-bit RCT".

    Here is the abstract
    Abstract
    The efficient compression of colour images requires a processing step exploiting the correlation
    between the colour components. This is typically realised using a colour transformation. In lossless
    compression systems, the reversible colour transformation increases the bit-depth for chrominance
    components from eight to nine bits per pixels. This can be avoided by using modulo arithmetic, while
    keeping the property of reversibility. This paper investigates the impact of these modulo operations on
    the compression performance, compares different processing structures, and proposes a new adaptive
    selection of suitable colour spaces. It is shown that (i) the limitation of the bit depth generally leads to
    lower compression performance, (ii) the drop in performance depends on the processing structure used,
    and (iii) the average performance can be improved by the proposed adaptive selection of the colour
    space.
    I don't know which ones they tested etc., but like any bad student I just skip to the conclusion: "(i) the limitation of the bit depth generally leads to lower compression performance". I'm always looking for something better, so I just tune out when I see something like that.
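
    Presumably that's the modulo trade-off: the wraparound keeps the transform reversible in 8 bits, but values just either side of the wrap point end up numerically far apart in the stored plane. e.g. in a near-neutral area R-B hovers around 0, so a mod-256 Co plane flips between values near 0 and values near 255, and that spurious "texture" is what hurts compression.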
  21. If you can demonstrate that this modified YCgCo implementation has some benefit in real world scenario over some existing intermediate workflow, people would use it. I would use it.
    I'm not pushing YCoCg24; I'm merely presenting an interesting technological development. I don't have to demonstrate anything to you. If all you want to do is argue, you'll have to find someone else to argue with.
  22. Originally Posted by chris319 View Post
    If you can demonstrate that this modified YCgCo implementation has some benefit in real world scenario over some existing intermediate workflow, people would use it. I would use it.
    I'm not pushing YCoCg24; I'm merely presenting an interesting technological development. I don't have to demonstrate anything to you. If all you want to do is argue, you'll have to find someone else to argue with.

    Yes I know you're not pushing it

    Relax , I'm not "arguing" anything.

    I'm merely being realistic and discussing possible reasons why something like this might not be used... why the original YCgCo hasn't been used despite being around for over a decade. I'm viewing this as: does this variation have any real benefit in the big picture? Can I use it personally to some benefit, even in some limited scenarios? That's all.

    You said "The challenge is getting widespread adoption of an improved technology." - That's why I responded the way I did: the way you make people listen and adopt something is to make a solid case, a cost/benefit analysis. Is it really "improved" in all aspects, when you look at everything? And I don't necessarily mean "YOU", chris319, I mean in the general sense. No, you don't have to do anything LOL.

    If this can be made into something useful that is awesome. Seriously. I'm all for progress.

    Cheers
  23. Member
    Originally Posted by poisondeathray View Post
    I don't think it would be useful for avisynth folks in real life, because you still need a format that supports YUV (or Y'CbCr). You still need YUV in the end for 99.999% of distribution formats (real physical files). Sure, if more formats supported YCgCo then it might be useful, but that's not going to happen.
    I doubt that. YCgCo is (if I am not completely wrong) just another case of YUV - just a matter of matrix coefficients, similar to Rec.601 vs. Rec.709, defining the extents and tilt of the YUV color space cube and how much of the RGB cube it covers, and at which positions the two intersect. Most AviSynth functions (including those provided by plugins) that operate on any YUV configuration won't care about the coefficients and just rely on the YUV color model as such - except for those which are specifically made for colorimetry conversions.
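
    If I remember the equations correctly, the usual (non-lifting) YCgCo analysis rows are Y = R/4 + G/2 + B/4, Co = R/2 - B/2, Cg = -R/4 + G/2 - B/4 - the same kind of 3x3 matrix as the Rec.601/709 coefficients, only with dyadic values.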

    The existence of ManualColorMatrix apparently confirms my opinion. Convert YCgCo to RGB using:

    Code:
    ManualColorMatrix(2, 1.0, -1.0, 1.0, 1.0, 1.0, 0.0, 1.0, -1.0, -1.0, 0.0, -128.0, 256.0)
    I wonder if anyone is able to display the intersections of YUV Rec.601, Rec.709, and YCgCo, in relation to RGB, each as two intersecting space cubes...
    Last edited by LigH.de; 25th Feb 2017 at 17:02.
  24. YCgCo is (if I am not completely wrong) just another case of YUV, just a matter of matrix coefficients, similar to Rec.601 vs. Rec.709
    There are no coefficients (and there is no floating point) in YCoCg24.
  25. Member
    Did you ever learn vector maths? Do you even understand which coefficients people mean when they talk about the conversion between color spaces in theory? Software implementations don't matter here; the computer stays off.
  26. Originally Posted by LigH.de View Post
    Originally Posted by poisondeathray View Post
    I don't think it would be useful for avisynth folks in real life, because you still need a format that supports YUV (or Y'CbCr). You still need YUV in the end for 99.999% of distribution formats (real physical files). Sure, if more formats supported YCgCo then it might be useful, but that's not going to happen.
    I doubt that.

    What are you doubting exactly ? That it would be useful in avisynth , that you need YUV in the majority of distribution formats, or that you need support ?


    How does this benefit you in avisynth or how are you using it ? What were your reasons for converting in the first place?

    Support is needed because the reverse transform needs to be applied on the receiving end. So this means, at minimum, metadata flagging and decoder support. If there is no support, how does the decoder hardware or software know to apply the correct reverse transform?



    YCgCo is (if I am not completely wrong) just another case of YUV,
    That's what it is - the point of the original color model was a reversible transform. It was presented as a superior alternative to RGB<=>YCbCr. The selling points were faster conversions, complete reversibility and improved compression performance. But the original required more bits. The variant here is the same bit depth.





    The existence of ManualColorMatrix apparently confirms my opinion. Convert YCgCo to RGB using:

    Code:
    ManualColorMatrix(2, 1.0, -1.0, 1.0, 1.0, 1.0, 0.0, 1.0, -1.0, -1.0, 0.0, -128.0, 256.0)

    Did you try to verify this and apply the reverse transform? In 8 bits?
  27. the original required more bits. The variant here is the same bit depth.
    Right. The original YCoCg added a ninth bit to each of the chrominance samples. The result was 26 bits total.

    The version presented herein, YCoCg24, uses 24 bits as 3 x 8-bit bytes.
  28. Member
    Originally Posted by poisondeathray View Post
    What are you doubting exactly ?
    Your remark sounded to me like you wanted to express that YCgCo is remarkably different from YUV. But to me, YUV is just a concept comparable to Left/Right encoding vs. Mid/Side encoding of audio: Luminance as a (weighted) sum of channels, and Chrominances as differences between neutral luminance and a color in hue (angle) and saturation (radius). In the HSL model, you would indeed report radius and angle.

    If you wanted to point at a difference between YCbCr and YCgCo instead, then I can agree with you: the space tilt is indeed different. Therefore we have different matrix coefficients for YCbCr Rec.601, YCbCr Rec.709, and YCgCo. But all of them are special cases of the same color model: YUV. And AviSynth doesn't care much about the specific coefficients; in most cases, it only separates RGB from YUV. You have to tell the encoder which colorimetry to assume if it doesn't guess right from e.g. the resolution, just as you may already do when processing Rec.601 vs. Rec.709 in an up- or downscale.

    Originally Posted by poisondeathray View Post
    How does this benefit you in avisynth or how are you using it ? What were your reasons for converting in the first place?
    Most of all, little loss of precision when you have material originally in RGB space. Like CG render movies. If your original is already in a YCbCr space, converting to YCgCo is just as lossy as a conversion between RGB and YCbCr.

    Originally Posted by poisondeathray View Post
    Support is needed, because the reverse transform needs to be applied on the recieving end . So this means at minimum, metadata flagging and decoder support . If there is no support, how does the decoder hardware or software know to apply the correct reverse transform ?
    Very true. This will be one of the main disadvantages. Regarding playback on a PC, madVR supports it.

    Originally Posted by poisondeathray View Post
    Did you try verify this, and apply the reverse transform ? In 8 bits ?
    Not yet. So far I have trusted the doom9 forum threads here, and I just have no material yet to test it with. My week and even my weekends are currently filled by my job. So ... good night (GN8), it's already after midnight here.

    P.S.: YCgCo with only 24 bit won't be lossless, that's rather obvious to me. But the regular loss of only 1 bit of precision in the chrominances may be easier to handle than the rather irregular loss due to floating-point rounding in the cases of Rec.601 and Rec.709 - always assuming that your original material is in RGB space.
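
    (For example, with Co stored as (R-B)/2+128, the inputs R-B = 4 and R-B = 5 both become 130, so exactly the least significant bit of the difference is lost - a fixed, predictable error - whereas a Rec.601/709 round trip rounds three weighted sums per pixel and the error depends on all three channels.)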

    P.P.S.: I noticed that the order of the chrominance parameters is inconsistent. The algebra in Wikipedia prefers Cg before Co, and so do x264 and x265.
    Last edited by LigH.de; 25th Feb 2017 at 18:04.
  29. Originally Posted by LigH.de View Post


    Most of all, little loss of precision when you have material originally in RGB space. Like CG render movies. If your original is already in a YCbCr space, converting to YCgCo is just as lossy as a conversion between RGB and YCbCr.

    Exactly! And from my perspective that is what I am interested in.

    You NEED a reason to convert in the first place; you don't just convert things for fun - otherwise you would keep the original RGB. One common reason is a compatible distribution format (YCbCr). The second common reason is compression or file size - CG renders are massive. The third common reason is some filtering or process that is incompatible with the current color model.

    So I want to know how much benefit there is. The residuals from the calculated differences are supposed to be smaller, and thus compress better. I would like to see some numbers, some tests. Quantify what you are gaining by using one of these YCgCo workflows - either the original higher bit depth or this proposed error-free equal-depth method - compared to existing lossless or lossy workflows. So some measures of the difference in calculation speed, some compression/quality measures. (And no, nobody has to do anything; I'm just saying it would be nice to see these numbers.)



    Originally Posted by LigH.de View Post
    P.S.: YCgCo with only 24 bit won't be lossless, that's rather obvious to me. But the regular loss of only 1 bit precision in chrominances may be easier to handle than the rather irregular loss due to floating point rounding in the cases of Rec.601 and Rec.709; always assuming that your original material is in RGB space.
    For the 8bit/channel <=> 8bit/channel proposal he said "error free" , not lossless. He was clear on that distinction

    But I would like to know how much better it is. I need to see some actual test cases, not just math models.
    Last edited by poisondeathray; 25th Feb 2017 at 18:59.
  30. For the 8bit/channel <=> 8bit/channel proposal he said "error free" , not lossless. He was clear on that distinction
    I'm glad you caught that.

    The results I'm getting from my round-trip testing program are red error = 0, green error = 0 and blue error = 0. The source code is at the following link. All you have to do is compile and run it.

    https://forum.videohelp.com/threads/382655-Error-Free-YCoCg24-Encoding

    There is a more elaborate program written in PureBasic which loads a .bmp file, encodes and decodes it in 4:4:4, and displays the decoded picture. Let me know if interested.

    The only benefit I can vouch for is error-free color. This may have some importance when video undergoes multiple generations of resampling a la YouTube, Vimeo, etc.

    I will leave it to someone else to profile the code to see how many CPU cycles it uses compared to conventional encoding. As posted now, it could be a wee bit more efficient.
    Last edited by chris319; 25th Feb 2017 at 20:23.


