Hello, in Sony Vegas Pro 13 Project Properties window, there are three options about pixel format.
1) 8-bit
2) 32-bit floating point (video levels)
3) 32-bit floating point (full range)
Well, as far as I know, 8-bit and 32-bit basically refer to the number of colors, like this:
8-bit color = 256 colors (2^8)
32-bit color = 4,294,967,296 colors (2^32)
But the pixel format is a more complicated thing than just the number of colors. More bits give more color precision, but I have a question: what is the difference between these two options:
32-bit floating point (video levels)
32-bit floating point (full range)
What do "video levels" and "full range" mean? Also, I know that some effects/transitions/media generators don't support the 32-bit format. So I just can't use them when I am working in 32-bit format, right?
I know that using just 8-bit is very good and quite enough for general video production, and I know that the 32-bit format is a headache for a non-professional user. But I have a question: in what situations is the 32-bit format better than 8-bit? For example, I know that 32-bit supports alpha (transparency) channel rendering and 8-bit doesn't, is that right?
Thank you.......
-
http://en.wikipedia.org/wiki/Rec._601
For each 8 bit luminance sample, the nominal value to represent black is 16 and the value for white is 235. Eight-bit code values between 1 and 15 provide footroom, and can be used to accommodate transient signal content such as filter undershoots. Similarly, code values 236 through 254 provide headroom, and can be used to accommodate transient signal content such as filter overshoots.
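To make the Rec.601 quote above concrete, here is a small sketch of the scaling between the two ranges. The helper names are made up for illustration; this is just the linear mapping implied by black = 16 and white = 235, not anything Vegas-specific.

```python
# Illustration of Rec.601 "studio swing": full-range 8-bit luma (0-255)
# maps into the narrower video-levels range (16-235), and back.

def full_to_video(y_full: int) -> int:
    """Map a full-range 8-bit luma value (0-255) to video levels (16-235)."""
    return round(16 + y_full * (235 - 16) / 255)

def video_to_full(y_video: int) -> int:
    """Map a video-levels luma value (16-235) back to full range (0-255)."""
    return round((y_video - 16) * 255 / (235 - 16))

print(full_to_video(0))    # black -> 16
print(full_to_video(255))  # white -> 235
print(video_to_full(235))  # -> 255
```

Values below 16 or above 235 in a video-levels signal are the footroom/headroom the quote mentions.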
I don't use Vegas but I'm pretty sure the bit depth is per color channel. 8 bits per channel (red, green, blue) gives you ~16 million different colors. But when working with integers you get rounding errors. I.e., you can't have 125.3 so it gets rounded down to 125, or 125.8 gets rounded up to 126. That's not critical if you're using a single filter but the errors can add up when you're using multiple filters.
The 32-bit floating point options use floating point values (again, one for each color channel) while working within Vegas. So 125, 125.3, and 125.300001 are all maintained as different values. This gives more precise results while working in Vegas. In the end the floating point values are converted back to integers for final output (for most formats). The disadvantage is 4 times as much memory usage in Vegas.
Last edited by jagabo; 24th Sep 2014 at 21:50.
-
Yes.
"8" = 2^8 values each for R, G, and B, or 2^24 combinations in total. What most users would call 24-bit color.
"32" = a 32-bit float each for R, G, and B, or 96 bits per RGB pixel (but not all of it is integer precision; the bits are split between sign, exponent, and mantissa).
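The per-channel arithmetic above can be spelled out in a few lines. This is just the counting, not anything specific to Vegas, and the 2^96 figure overstates distinct colors because float bit patterns are not all unique values.

```python
# Bit depth is per channel; an RGB pixel has three channels.

bits_per_channel_8 = 8
total_bits_8 = 3 * bits_per_channel_8    # 24 bits per RGB pixel
colors_8 = 2 ** total_bits_8             # 16,777,216 -- "24-bit color"

bits_per_channel_32 = 32                 # a 32-bit float per channel
total_bits_32 = 3 * bits_per_channel_32  # 96 bits per RGB pixel
# An IEEE-754 single splits those 32 bits as 1 sign + 8 exponent + 23 mantissa,
# so the raw bit count does not translate directly into distinct integer levels.

print(colors_8)       # 16777216
print(total_bits_32)  # 96
```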
There's also a difference in whether processing is done linearly or with gamma correction.
Here is an article for v9,v10. It should give you more detail, even if things have slightly changed with newer versions (doubtful, but possible): http://www.glennchan.info/articles/vegas/v8color/vegas-9-levels.htm#levelsIn32bit
Scott