A couple of recent reviews (here and particularly here) have shown dynamic ranges from a camera's RAW files which exceed the 12-bit resolution of the camera's analog-to-digital converter. How might that work?

Well, both my assertion that this is what is happening and the explanation that follows are guesses, but here goes. For a grey test subject, such as one would use for a DR measurement, the red, green and blue photosites have different quantum efficiencies, so it's entirely likely that when, say, the green sites have reached full-well capacity there's still room for more electrons in the blue and/or red sites. A clever RAW converter might use the extra headroom from those adjacent, differently coloured sites to reconstruct a luminance signal in excess of the theoretical 12 EV available from a single pixel.
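To make the guess concrete, here's a toy sketch of that idea: if the green channel has clipped but red and/or blue still have headroom, extrapolate what green "would have been" from the unclipped channels and the camera's known neutral-grey channel ratios. All the numbers (full scale, the `NEUTRAL_RATIO` values) are made up for illustration; a real converter would use calibrated white-balance data and do this per-neighbourhood, not per-pixel.

```python
FULL_SCALE = 4095  # 12-bit ADC

# Hypothetical response of each channel to a neutral grey subject,
# relative to green (i.e. inverted white-balance multipliers).
NEUTRAL_RATIO = {"r": 0.55, "g": 1.00, "b": 0.70}

def reconstruct_green(r, g, b):
    """If green is clipped, estimate it from whichever of R/B still has headroom."""
    if g < FULL_SCALE:
        return g  # green is a valid reading, nothing to do
    estimates = []
    if r < FULL_SCALE:
        estimates.append(r / NEUTRAL_RATIO["r"])
    if b < FULL_SCALE:
        estimates.append(b / NEUTRAL_RATIO["b"])
    if not estimates:
        return FULL_SCALE  # every channel clipped: the highlight really is gone
    return sum(estimates) / len(estimates)

print(reconstruct_green(2000, 3000, 2500))  # green unclipped -> 3000
print(reconstruct_green(3300, 4095, 4000))  # green clipped -> estimate above full scale
```

The second call returns a value well above 4095, which is exactly the "extra" dynamic range the reviews appear to be measuring.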

Is this cheating? In a way I suppose one should say "yes", but in the real world swapping a little spatial resolution to recover detail from otherwise blown highlights seems like a very good trade-off, provided the RAW converter reverts to more conventional behaviour when pixels haven't reached full-well capacity. I would also argue that it doesn't invalidate comparative tests between cameras, provided the test conditions and the RAW converter are the same and one can assume that said converter uses the same algorithm to extrapolate dynamic range for both cameras.

There is a potential "gotcha": when a camera is tested it can be so new that the only RAW converter available is the proprietary one that ships with it. In such cases the red flag is "converted to TIFF" before the dynamic-range measurement software is deployed, as that implies the test is more likely measuring the supplied software than the best-case result achievable from the hardware once more capable third-party RAW converters become available.

The first test I linked to above did use the same software, so the comparison between the two cameras is fair. I strongly suspect that the second test used different RAW converters, which raises a huge question mark in my mind about its validity.

Just to be clear, I'm not arguing that dynamic range should equal bit depth, but I am arguing that for a single isolated pixel it shouldn't be possible for dynamic range to exceed bit depth. Enjoy your lunch (pizza?) - I'm about to climb on the elliptical trainer!

Update: I guess I'm also assuming a linear response curve from the read-out amplifiers. If the response curve isn't linear but follows some well-documented (gamma?) curve, then one could extend the DR, but I still think trying to compare DR from different cameras using different RAW converters is very dangerous territory.
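A quick back-of-the-envelope on why a nonlinear curve changes the arithmetic: with linear 12-bit encoding the smallest nonzero level is one code step, so the encodable range is about log2(4095) ≈ 12 EV. If the codes instead follow a pure power-law (gamma) curve, code 1 maps back to a much smaller linear level, and the same 12 bits span far more EV. The gamma value of 2.2 here is just an illustrative assumption, not any particular camera's curve.

```python
import math

FULL = 4095  # 12-bit full scale

# Linear encoding: smallest distinguishable nonzero level is one code step.
linear_ev = math.log2(FULL / 1)

# Gamma encoding (assume code = FULL * linear**(1/2.2) for illustration):
# code 1 maps back to a linear level of (1/FULL)**2.2.
gamma_ev = math.log2(1 / (1 / FULL) ** 2.2)

print(round(linear_ev, 1))  # ~12.0 EV
print(round(gamma_ev, 1))   # ~26.4 EV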

Update 2: Now that I'm 323 kcal in deficit I think I've earned some lunch as well.

Now that my brain energy has been refilled, I realise I may have given my previous answer with a bit too much confidence. Looking at the details, it always comes back to what exactly you define as dynamic range.

For acquisition, we're looking at the input dynamic range, or what the sensor can detect. The links earlier also include JPEG results, which only have meaning after processing; I'd call that output dynamic range.

For photos, these aren't measured at an individual pixel, since you need more than one to form a picture. One of my earlier thoughts was essentially extending resolution beyond the nominal bit depth by (over)sampling. This actually requires random noise to work; it would not work with no noise or with fixed-pattern noise. At very low levels the sample step size is significant relative to the signal, resulting in quantisation noise. If you sample multiple times and take an average (low-pass filtering), the result tends towards the real level. For audio this is usually done temporally, but for a digital image you could do it spatially. This assumes you apply noise reduction afterwards to remove the residual high-frequency noise.