I've read comments that you should transcode 8-bit 4:2:0 camcorder footage (e.g. HDV, AVCHD) to ProRes (Mac) or Cineform - which are 10-bit 4:2:2 - for keying and color correction, because the footage "stands up better", "can be pushed farther", etc.
My question is how is this possible?
Is it "magically" increasing the color information, in terms of both color depth (8-bit vs. 10-bit) and chroma subsampling (4:2:0 vs. 4:2:2), from an 8-bit-per-channel 4:2:0 source? Are they interpolating values, or just padding them? And what about errors in the "guessing"?
When you import your native 8-bit 4:2:0 footage into your NLE or other software, isn't it usually decompressed to RGB 4:4:4 internally anyway?
Even if you are working in a 10-bit project, wouldn't you need a 10bpc-capable monitor like an HP DreamColor to "reap" the benefits? Or do the benefits hold even though you can't "see" them?
It's true that when I work in a 16bpc or 32bpc project (instead of 8bpc) in After Effects, for example, color gradients become smoother and there is less banding - this is noticeable. But what happens when you encode to the final 4:2:0 delivery format, e.g. Blu-ray or DVD? Isn't that extra color information downsampled away again? Banding reappears. So what's the advantage of the digital intermediate?
I don't see the benefit of that workflow - or am I missing something?
Thanks for any insight!
Results 1 to 18 of 18
-
I would generally agree that using more spatial chroma resolution (4:2:2 vs 4:2:0) and greater intensity resolution (10-bit vs 8-bit) can give you better results. Even starting with 8-bit 4:2:0, upsampling to 10-bit 4:2:2 can improve results if you're filtering - even if your final result is going to be 8-bit 4:2:0 again. The more bits you have while working, the more accurate the final result will be. It also gives you the ability to randomize the last bit or two to reduce posterization problems.
Even on an 8-bit display you can see a minor difference in smoothness (less banding) when working with 10-bit data. I'll see if I can come up with some examples: filtering 6-bit data natively, vs. 6-bit data upsampled to 8 bits, filtered, then downsampled back to 6 bits. I'm pretty sure the differences will be visible. -
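In that spirit, here is a rough sketch of that 6-bit experiment (my own toy numbers, not jagabo's actual test). The "filter" is just a gain of 0.5 followed by a gain of 2.0 - mathematically a no-op, but not once you round to integers between the two passes:

```python
# Hypothetical sketch: a gain of 0.5 then a gain of 2.0 applied to 6-bit
# values (0..63), once rounding back to 6-bit integers between the two
# passes, and once after first upsampling to 8 bits (multiply by 4).

def native_6bit(v):
    half = round(v * 0.5)            # intermediate stored back as 6-bit
    return min(63, half * 2)

def via_8bit(v):
    v8 = v * 4                       # 6-bit -> 8-bit
    half = round(v8 * 0.5)           # intermediate kept at 8-bit precision
    return min(63, (half * 2) // 4)  # back down to 6 bits only at the end

errors_native = sum(1 for v in range(64) if native_6bit(v) != v)
errors_via8 = sum(1 for v in range(64) if via_8bit(v) != v)
print(errors_native, errors_via8)  # -> 31 0
```

Processed natively, nearly every odd level collapses onto an even one (posterization); processed with the 8-bit intermediate, every level survives the round trip - even though the output is 6-bit again.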
In my experience, the extra bit depth is beneficial in terms of color detail: more accurate colors bring out the image. I'm speaking from my own experience working with laserdisc captures. I have many capture cards and have run them through their paces over the years with my laserdisc player, and I can tell you a card with 10-bit internal processing beats most 8-bit ones. My previous 8-bit laserdisc captures can't compare to the 10-bit ones.
Here are some examples of 10-bit captures of laserdisc to illustrate, but I don't have the bandwidth to post new (8-bit) captures, sorry.
--> https://forum.videohelp.com/topic362850.html#1929699
If you deal with a 10-bit source - assuming it was captured that way - the color depth and detail will be greater, but that will largely depend on the source being captured, i.e. laserdisc. My capture card had nothing but plusses: a great 2D comb filter for composite, 10-bit processing, and since the laserdisc output originated as composite-encoded data, everything was basically transparent in this case. I suppose it's similar to a degree with 8-bit to 10-bit image conversions, algorithmically speaking, and the better the "decoder", the smaller the gradient error you mentioned. Star Trek: Voyager's opening theme is a perfect example of that error. In fact, it's still prevalent in the DVD series; the last episode I saw it in was s5e6, "Timeless", where the Voyager passes through the sun's flare. Hmm, that would be a great challenge to fix graphically in Adobe Photoshop as an exercise. I think I'll etch that onto my someday to-do list.
Still, I think the level of this error could be reduced, but the exact idea escapes me at the moment. Most of this error is user error, imho - that Voyager example could have been fixed, no problem. I've seen many "gradient" examples (MPEG-2 sources) where the transitions were smooth.
-vhelp 5229 -
Thanks for the replies guys,
I can definitely see the difference in the project as it's being edited, but once it's converted back to 8-bit 4:2:0 (YV12), I don't see how the difference carries into the final output (i.e. you see banding again).
I'm having difficulty with the concept of ending up with something better than what you started with... That principle is entrenched in video, and we preach it here every day about re-encoding lossy formats: at best you keep the same quality, never gain any. How is this different? Not to mention that Cineform and ProRes are themselves lossy formats...
vhelp - I don't know anything about laserdisc, but capturing at a higher bit depth (and consequently bit rate) makes the capture less lossy, right? That much I can understand; e.g. an RGB 4:4:4 capture has no chroma subsampling vs. a YV12 4:2:0 capture format, so it represents the original signal better. But how can it be "better" than the original?
My understanding is that RGB24 is 8 bits per channel, right? 8 red, 8 green, 8 blue? (And RGB32 uses the other 8 for alpha.)
If I converted my 8-bit 4:2:0 footage to 8-bit 4:4:4 uncompressed RGB24, how would that stack up against a 10-bit 4:2:2 format like Cineform? Would it even make a difference, since internally most NLEs convert to RGB anyway (I think)?
Can you have a 10-bit uncompressed format? 10 red, 10 green, 10 blue?
Still a bit confused... -
Originally Posted by poisondeathray
Digital intermediate codecs in general recompress GOP-based formats into individually compressed frames, for easier editing and less generation loss (after the initial conversion).
Originally Posted by poisondeathray
http://techblog.cineform.com/?p=1280
-
Thanks for reply edDV,
I understand why it's used as a digital intermediate - quicker editing and robustness over generations - but I'm more interested in the mechanics, the theory behind the purported benefit for keying and color correction. I'm not convinced it's beneficial for those reasons.
And once your 10-bit, 16-bit or 32-bit project is converted to the final 8-bit 4:2:0 distribution format, wouldn't those purported benefits be lost anyway?
They interpolate new values and/or add noise to randomize the added bit depth.
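For what that means concretely, here's a tiny sketch (my own illustration, not any particular codec's actual algorithm) of the two options for filling the two new low bits when going from 8-bit to 10-bit:

```python
import random
random.seed(0)

# An 8-bit value promoted to 10 bits: padding the low bits with zeros keeps
# the exact value (but gradients keep their 8-bit-sized steps), while
# randomizing the new bits dithers the step pattern away.
v8 = 128
padded = v8 << 2                               # 512: exact padding
randomized = (v8 << 2) | random.randint(0, 3)  # 512..515: new bits are noise
print(padded, randomized)
```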
If you compare the 8-bit 4:2:0 original to a Cineform or ProRes intermediate, the latter two are lossy. Sure, they carry 4:2:2 chroma, but it's derived from the 4:2:0 original, and the 10-bit data is derived from the 8-bit original. I can't see how that makes it "better" than the original?
Originally Posted by jagabo
The more bits you have while working the more accurate the final result will be
Even when interpolated? Isn't there more room for errors, and again when you downsample? Aren't fewer lossy conversions better? How can you have more (accurate) information than what you started with (I don't mean noise or padding)? -
Originally Posted by poisondeathray
Digital intermediates use intraframe compression to keep the bit rate within the range of a single HDD, even when scanning or scrubbing.
-
Yes, as edDV said, its real benefit comes while doing non-keyframe editing, COMPOSITING/MIXING, and filtering & FX - especially if you work with complex combinations of layers (with alpha masks) and color-mixing calculations.
If you start with 8-bit 4:2:0 and do only keyframe (cuts) editing onto an 8-bit 4:2:0 final master, you won't really notice ANY difference vs. bumping up.
BUT
If you do the hard stuff mentioned above WHILE staying in 8-bit 4:2:0, you're basically going down 2, 3, 4 or more generations, quality-wise, and compounding the already inherent banding.
So bumping up TEMPORARILY bypasses the EXPECTED loss that is the byproduct of that kind of manipulation. It works the same way for audio, too.
Scott -
Thanks for replying cornucopia,
Originally Posted by Cornucopia
I'm not following you - if everything is the same original generation, how do you incur more losses than at final export? I'm not talking about multiple generations here. Or am I misunderstanding something?
I understand how using Cineform would be better over multiple generations vs. an 8-bit 4:2:0 codec, but that isn't what we're discussing. When I do need multiple generations, I usually use lossless formats.
Please, I'm only concerned with the purported benefits for color correction/keying of a digital intermediate vs. the original, on the current generation - not with the other things like ease of editing, or higher bit depth and less chroma subsampling being better over subsequent generations, which is pretty clear... -
The key concept for an NLE is that all computation is deferred to the final encode pass. The 8-bit YCbCr frames are converted once to 4:4:4 RGB, processed, and then converted back to the export format. While in RGB space, bit depths expand with each multiply, but everything gets rounded back to 8 bits for export. Block-based MPEG will show more block-edge-oriented rounding errors vs. the more randomized Cineform wavelet pixels.
8-bit NLEs don't perform as well for serial processes, where intermediate effects are stored as 8-bit and then recycled as the source for another level of processing: the 8-bit rounding errors build up with each pass. Broadcast and film production flows are often sequential like this, so 10-, 12- or 14-bit source and process flows are used.
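To put a number on that build-up, here's a toy serial process of my own invention (not edDV's example): repeatedly rescaling between full range and studio range (16-235), with the intermediate rounded to 8-bit integers each pass, collapses levels that a higher-precision intermediate would preserve:

```python
# Hypothetical serial process: full-range <-> studio-range rescale with the
# intermediate rounded to 8-bit integers on every pass.
def to_studio(v):
    return round(v * 219 / 255) + 16

def to_full(v):
    return round((v - 16) * 255 / 219)

levels = list(range(256))
for _ in range(3):                     # three "generations" of processing
    levels = [to_full(to_studio(v)) for v in levels]
print(len(set(levels)))                # fewer than 256 distinct levels survive
```

A single pass already squeezes 256 input levels into at most 220 studio-range codes, so some distinct inputs become indistinguishable; each further pass can only make it worse, never better.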
-
Originally Posted by poisondeathray
As for keying, there may be edge-blend advantages from 10-bit upconversion. Keep in mind that consumer digital luminance is encoded 16-235, which is only 220 levels. MPEG block errors within 220 levels can be seen.
10-bit upconversion would interpolate into the 64-940 range, i.e. 877 levels of gray. This would presumably be done in combination with a deblocking filter.
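The arithmetic, for reference (8-bit studio-range luma scaled to 10 bits by multiplying by 4 - a sketch of the range math, not any specific codec's conversion):

```python
# 8-bit studio range 16..235 lands on 10-bit 64..940. Between any two
# adjacent 8-bit levels there are now three unused codes that interpolation
# (and dithering) can fill with genuinely intermediate shades.
lo, hi = 16 * 4, 235 * 4
print(lo, hi, hi - lo + 1)   # 64 940 877
print(17 * 4 - 16 * 4 - 1)   # 3 spare codes between adjacent 8-bit levels
```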
-
Chroma upsampling increases the number of chroma samples, but usually does so by interpolation, or BLENDING. While that's usually BAD in terms of overall resolution/quality (mainly in the luminance channel), for THIS PURPOSE it has the benefit of SMOOTHING the transitions and making the keyhole less blocky/jaggy.
You could just do a BLUR on the chroma channel without the uprezzing/upsampling, but then instead of the chroma looking SMOOTHED, it would just look BLURRED and of LOWER resolution.
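A one-dimensional sketch of what Scott describes (the values are invented for illustration): doubling a row of chroma samples by simple duplication vs. linear interpolation, across a hard edge such as a green-screen boundary.

```python
# A hard chroma edge, e.g. at a green-screen boundary.
row = [10, 10, 200, 200]

# Nearest-neighbour doubling: the edge stays a hard one-sample step.
nearest = [v for v in row for _ in range(2)]

# Linear interpolation: the edge becomes a ramp the keyer can feather.
linear = []
for a, b in zip(row, row[1:]):
    linear += [a, (a + b) // 2]
linear += [row[-1], row[-1]]

print(nearest)  # [10, 10, 10, 10, 200, 200, 200, 200]
print(linear)   # [10, 10, 10, 105, 200, 200, 200, 200]
```

The interpolated value 105 is exactly the kind of transitional sample that lets a key fall off gradually instead of snapping from "keyed" to "not keyed" at a blocky chroma boundary.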
Does that answer your specific question about keying?
Scott -
Originally Posted by edDV
Let me see if I can make this clear with a hypothetical example. It isn't necessarily a realistic situation, but I need something simple that still has enough data to show clear results. Let's say we have an 8-bit grayscale image and we want to perform an 8x enlargement. We'll only look at three pixels in one dimension, for simplicity. Here are the three pixels in our original image:
Code:
5 6 7

Enlarging 8x with simple linear interpolation, truncated to 8-bit integers, gives hard steps - banding:
Code:
5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 7

Now let's convert our original image to a 10 bit colorspace by multiplying all pixels by 4:
Code:
20 24 28

The same 8x interpolation now produces a much smoother gradient, because the extra bits can hold intermediate values:
Code:
20 20 21 21 22 22 23 23 24 24 25 25 26 26 27 27 28

Next, dither by alternately adding and subtracting 1:
Code:
21 19 22 20 23 21 24 22 25 23 26 24 27 25 28 26 29

Finally, divide by 4 (truncating) to get back to 8 bits:
Code:
5 4 5 5 5 5 6 5 6 5 6 6 6 6 7 6 7

Compare that to the pure 8-bit result above: the long flat runs are broken up, so the gradient looks smoother on screen.
There is still some banding in the final image. This could have been reduced further by adding and subtracting 2 or 3 instead of 1 at the 10-bit stage. And instead of simply alternating, you could use a more sophisticated dithering pattern, or randomized numbers.
Now if you consider that calculations with real image data are 2 dimensional (width, height), or 3 dimensional (width, height, time), and that filtering may be more sophisticated than our simple linear interpolation, you can see that having more bits to work with can give you better results -- even if you convert back to 8 bits for final output. -
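The walk-through above can be reproduced line for line in a few lines of Python (linear interpolation with truncating integer division, plus the alternating ±1 dither), for anyone who wants to play with the numbers:

```python
def enlarge_8x(pixels):
    """8x enlargement by linear interpolation, truncating to integers."""
    out = []
    for a, b in zip(pixels, pixels[1:]):
        for i in range(8):
            out.append((a * (8 - i) + b * i) // 8)
    out.append(pixels[-1])
    return out

src = [5, 6, 7]                     # the three original 8-bit pixels
print(enlarge_8x(src))              # hard steps: eight 5s, eight 6s, then 7

ten = [p * 4 for p in src]          # promote to 10 bits: 20 24 28
smooth = enlarge_8x(ten)            # 20 20 21 21 22 22 ... 27 27 28
dithered = [v + (1 if i % 2 == 0 else -1) for i, v in enumerate(smooth)]
back = [v // 4 for v in dithered]   # truncate back down to 8 bits
print(back)                         # 5 4 5 5 5 5 6 5 6 5 6 6 6 6 7 6 7
```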
Originally Posted by Cornucopia
But what about color correction and banding? My understanding is that banding has almost entirely to do with bit depth, rather than chroma subsampling?
Higher bit depth definitely helps, at least at the editing stage. I see it all the time: switch to 16bpc or 32bpc in After Effects and PRESTO, all the banding and color transitions are smoothed over instantly. But this doesn't hold up well when you export to an 8-bit end format (i.e. banding reappears, unless you use "tricks" like dithering and noise). Would a ProRes or Cineform intermediate perform "better" in this case?
(Now I'm going to try to digest jagabo's message, might take me awhile)
Thanks a lot for your explanations and help guys, much appreciated -
Originally Posted by jagabo
That example makes it perfectly clear to me now.
In After Effects you can use 16-bit or 32-bit per channel projects. As I mentioned earlier, banding is visibly eliminated the moment you switch, even on an 8-bit LCD display. (Of course the benefits don't entirely translate to the 8-bit export, but I can see from jagabo's numbers how it would still give better results.) So what benefit would a Cineform or ProRes intermediate have over the native footage in this case? (Let's disregard the other advantages like ease of editing, etc...)