In a comment on this question, someone suggested that camera sensors typically only output 12-14 bits of data. I was surprised, because that would mean 24-bit color is only useful for photo manipulation (where the added bits reduce the noise that accumulates from repeatedly interpolating intermediate values across multiple manipulations).

Does anyone know enough about camera sensors to answer the 12-14 bit claim authoritatively? If so, what are typical encodings?

My apologies to Itai and Guffa, as I thought all three answers were very interesting, and thanks to mattdm and Matt Grum for the comments following Guffa's answer. I wish I could have selected all of them.
–
John Robertson May 16 '11 at 17:37

4 Answers

The photosites of a digital sensor are actually analog devices. They don't really have a bit depth at all. However, in order to form a digital image, an analog-to-digital converter (A/D converter) samples the analog signal at a given bit depth. This is normally advertised in the specs of a camera — for example, the Nikon D300 has a 14-bit A/D converter.
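
To make the A/D step concrete, here's a minimal sketch in Python of what quantization at a given bit depth does. The 14-bit depth matches the D300 example above; the function and the sample voltage are purely illustrative, not any camera's actual pipeline:

```python
def quantize(voltage, bits=14):
    """Round a normalized analog level (0.0-1.0) to one of 2**bits codes."""
    levels = 2 ** bits                    # 14 bits -> 16384 discrete codes
    code = round(voltage * (levels - 1))  # scale and round to the nearest code
    return max(0, min(levels - 1, code))  # clamp to the valid range

print(quantize(0.5))     # 8192: mid-scale at 14 bits
print(quantize(0.5, 8))  # 128: the same signal keeps far less detail at 8 bits
```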

But keep in mind that this bit depth is per channel, whereas 24-bit color usually means 8 bits per channel. Some file formats (and working spaces) use 16 bits per channel instead, for 48 bits total, and some use even more than that.
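
The per-channel vs. per-pixel arithmetic is easy to tabulate (this just restates the numbers above in code):

```python
channels = 3  # R, G, B
for per_channel in (8, 14, 16):
    per_pixel = per_channel * channels
    print(f"{per_channel:>2}-bit/channel -> {per_pixel}-bit color, "
          f"{2 ** per_pixel:,} possible values")
#  8-bit/channel -> 24-bit color, 16,777,216 possible values
# 14-bit/channel -> 42-bit color, 4,398,046,511,104 possible values
# 16-bit/channel -> 48-bit color, 281,474,976,710,656 possible values
```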

This extra per-channel precision is partly there so the added headroom can reduce accumulated rounding errors (as you note in your question), but it's also because human vision isn't linear, and so the color spaces we use tend not to be either. Switching from a linear to a "gamma compressed" curve is a lossy operation (see one of the several questions about raw files), so having more bits simply means less loss, which is better if you change your mind about exposure/curves and no longer have access to the RAW file.
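
Here's a rough sketch of why the extra bits matter: round-trip a linear value through a gamma curve at 8 and at 16 bits and compare the quantization error. The plain 2.2 power law is a stand-in assumption; real transfer curves like sRGB's are piecewise, but behave similarly:

```python
GAMMA = 2.2  # simple power-law stand-in for a real transfer curve

def roundtrip_error(linear, bits):
    """Error after encoding, quantizing, and decoding a linear value."""
    levels = 2 ** bits - 1
    encoded = linear ** (1 / GAMMA)               # linear -> gamma-compressed
    quantized = round(encoded * levels) / levels  # store at 'bits' precision
    decoded = quantized ** GAMMA                  # back to linear
    return abs(decoded - linear)

for bits in (8, 16):
    worst = max(roundtrip_error(x / 1000, bits) for x in range(1001))
    print(f"{bits}-bit worst-case round-trip error: {worst:.1e}")
# More bits means a smaller error, so repeated edits accumulate less damage.
```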

When this data is converted to RGB, two of the three color components for each pixel are interpolated from the information in surrounding pixels. A pixel holding green information, for example, has two neighboring pixels holding red data and two holding blue data, which are used to create its RGB value.

So, 14 bits per pixel of RAW data produces 42 bits per pixel of RGB data. Of course, the interpolated data is less accurate, but you usually process it down to 24-bit RGB anyway.
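
Here's a toy sketch of that interpolation for a single green photosite in a Bayer pattern. The raw values are made up, and real raw converters use much smarter, edge-aware algorithms, but it shows where the two interpolated channels come from:

```python
# A 3x3 patch of made-up 14-bit raw values centered on a green photosite.
# Bayer layout of this patch:
#     G B G
#     R G R   <- the center pixel (row 1, col 1) is the green photosite
#     G B G
raw = [
    [8000, 3100, 8100],
    [5200, 8050, 5400],
    [7900, 3000, 8200],
]

g = raw[1][1]                     # measured directly at this photosite
r = (raw[1][0] + raw[1][2]) // 2  # average of the two red neighbors
b = (raw[0][1] + raw[2][1]) // 2  # average of the two blue neighbors

print(f"RGB at this pixel, 14 bits per channel: ({r}, {g}, {b})")
# -> (5300, 8050, 3050): one measured channel plus two interpolated ones,
#    i.e. 3 x 14 = 42 bits per pixel, as noted above.
```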

Each RGB pixel is created from a weighted average of potentially many pixels, so you can't just multiply by 3 (or 4) to determine how many bits of colour data you get. If you want to talk about bits of colour information, then experimentally you get about 22 bits with a typical DSLR.
–
Matt Grum May 6 '11 at 18:13


@Matt Grum: Yeah. The simple number tells you how many bits of data you have, but you end up with far less actual information.
–
mattdm May 6 '11 at 19:28

Be careful not to confuse per-pixel bit-depth and per-component bit-depth.

The output of digital sensors is almost always between 10 and 14 bits per component on a linear scale. That would give between 30-bit color (about 1 billion values) and 42-bit color (about 4 trillion values) per pixel.

The site DXOMark measures this using a normalized scale (explained in their white paper) and publishes per-pixel bit depths that account for noise, which tends to destroy the lower-order bits. Based on their findings, full-frame DSLRs can reach 24.7 bits per pixel, while medium-format cameras reach 26 bits. For cropped-sensor cameras, 23.8 bits is the current maximum.
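
The intuition behind "noise destroys the lower-order bits" can be sketched with a back-of-the-envelope calculation. This is only an illustration of the idea, not DXOMark's actual normalized method (see their white paper for that), and the numbers are invented:

```python
import math

def effective_bits(adc_bits, noise_in_levels):
    """Rough usable bits per channel once noise swamps the low-order bits."""
    total_levels = 2 ** adc_bits
    distinguishable = total_levels / noise_in_levels  # levels you can tell apart
    return math.log2(distinguishable)

# A hypothetical 14-bit channel whose noise spans about 8 raw levels:
print(f"{effective_bits(14, 8):.1f} usable bits per channel")  # -> 11.0
```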

Current DSLR cameras top out at 14-bit output. Some medium format cameras claim 16-bit output, but it's been argued by various folks (such as the ASMP's dpBestflow site) that the increase in bit depth from 14 to 16 bits doesn't actually produce a better image.