# Sony Vegas pixel format

1. I've done some research on pixel format 8 and 32 bit. I notice a colour difference, but I've realised that 32 bit takes much longer to process. Is there a quality difference between those two?
2. 8 usually means 8 bits per channel. Video normally has 3 or 4 channels (3 channels is 24 bit: RGB or YUV; 4 channels is 32 bit: RGBA or YUVA). So 32 bit and 8 bit are usually the same thing. There is such a thing as true 8 bit colour (single channel, 256-colour palette), but that is hardly ever used these days.
3. So is 8 bit recommended?
4. Originally Posted by jseo13579
I've done some research on pixel format 8 and 32 bit. I notice a colour difference, but I've realised that 32 bit takes much longer to process. Is there a quality difference between those two?
it's colour depth, not pixel format; the label is misleading. 8 bit means integer math, and 32 means it is calculating in floating point. That takes longer, but the calculations are more exact.
5. Originally Posted by _Al_
Originally Posted by jseo13579
I've done some research on pixel format 8 and 32 bit. I notice a colour difference, but I've realised that 32 bit takes much longer to process. Is there a quality difference between those two?
it's colour depth, not pixel format; the label is misleading. 8 bit means integer math, and 32 means it is calculating in floating point. That takes longer, but the calculations are more exact.
Oops, my reply was misleading in this case (I don't use Vegas).

Calculations with 32 bit floats are slower but more precise, especially when performing multiple operations. Floats are less likely to result in banding (posterization). Suppose you have an operation like (100 / 3) * 3. This is of course a null operation: as real numbers, dividing by 3 and then multiplying by 3 returns the original value (100). But on a computer it's different. In floating point, 100.0 / 3.0 gives you 33.333..., and multiplying that by 3.0 gives you 100.000... (or maybe 99.999...). As integers, 100 / 3 results in 33 (the ".333..." is lost), and multiplying that by 3 gives a final result of 99. So working in floating point gives a result closer to the real result of 100.
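The null-operation example above is easy to check yourself; a minimal Python sketch (nothing Vegas-specific, just the arithmetic):

```python
# Integer path: // truncates, so the ".333..." is lost in the division.
int_result = (100 // 3) * 3
print(int_result)  # 99

# Floating-point path: the fraction is kept, so the result lands
# very close to (though not necessarily exactly) 100.
float_result = (100.0 / 3.0) * 3.0
print(float_result)
```

The same kind of truncation happens at every step of an 8-bit effects chain, which is why errors accumulate and can show up as banding.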
6. I don't understand your point. Anyway, does 32 bit give better quality?
7. Yes, if you use effects like color corrections, changing levels and especially if you have a chain of those.

If you just load video, cut it, and then export it, I don't think it matters much what you have selected (8 bit integer or 32 bit floating point).
But I'm not sure how Vegas works internally: whether, with no effects (filters) applied, it uses the original clip's YUV values for export, or RGB values. Vegas might make at least one YUV-to-RGB-to-YUV round trip, and setting 32 bit might help there.
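To illustrate why such a round trip matters (this is a hedged sketch, not Vegas's actual internals), here is an 8-bit RGB -> YUV -> RGB round trip using the standard BT.601 full-range coefficients; rounding to integers at each step is enough to shift pixel values slightly:

```python
def rgb_to_yuv_8bit(r, g, b):
    # BT.601 full-range RGB -> YUV, rounded to 8-bit integers.
    y = round(0.299 * r + 0.587 * g + 0.114 * b)
    u = round(-0.14713 * r - 0.28886 * g + 0.436 * b + 128)
    v = round(0.615 * r - 0.51499 * g - 0.10001 * b + 128)
    return y, u, v

def yuv_to_rgb_8bit(y, u, v):
    # Inverse conversion, again rounded to 8-bit integers.
    r = round(y + 1.13983 * (v - 128))
    g = round(y - 0.39465 * (u - 128) - 0.58060 * (v - 128))
    b = round(y + 2.03211 * (u - 128))
    return r, g, b

rgb = (30, 200, 70)
print(yuv_to_rgb_8bit(*rgb_to_yuv_8bit(*rgb)))  # (29, 200, 69): not the input
```

In a 32-bit float pipeline the intermediate values would not be rounded to integers, so the round trip would come back essentially exact.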
8. I've recently found out that 32 bit does alter colour in a way that makes the video look sharper, but as far as I know I can use a colour curves filter to make the display in 8 bit look exactly the same as in 32 bit. However, I found 32 bit takes much longer than 8 bit. Anyway, thanks for your tips; your information has helped significantly.