Yes, I agree, but it is pretty much what I say in the article. There is a comparison of two exposures of a color checker which I presume were taken under identical conditions.

My understanding is that color is either accurate or pleasant. Both of the images I tested were oversaturated (technically speaking) and I reduced saturation on both to get close to correct saturation. The article says: "The measured data above actually indicates that the D800E is better in reproducing a color checker card under a given set of conditions. The main difference between the Hassy and the D800E was that the Hassy image processed in LR4.2 was significantly oversaturated. When processing in LR4.2 I pulled back 13 units of saturation on the Hassy and 4 units on the Nikon. Delta E is about half on the Nikon."

I tried to profile the CC-shots I used but they were both slightly overexposed.

Erik, I think the weakest part of your great article is the section on color accuracy. There is so much that can influence the results, especially in the RAW processor. Every manufacturer imposes their idea of good color on a camera. I guess the best test would be to see how close you could get the cameras to a target by profiling, and then see where the cameras differ from each other. And color accuracy really has two criteria: how accurate it is in absolute terms and in relative terms. You can have very high absolute accuracy and still really bad-looking (unnatural) color.

Which is why I was careful to preface my comment with the disclaimer "Under the assumptions that S/N per pixel is dominated by the number of photons collected ..."

As you point out, when readout noise is significant, there is a penalty for more numerous smaller pixels, which have a noise component which does not scale with pixel size or collected photons.

I think we agree, which is why I said that it doesn't matter much for photographic applications with decent exposure. It still matters slightly in the dark areas of a properly exposed picture, which is why Nikon is essentially castrating them from a signal processing point of view. It did of course matter more when small pixels (say 15000 / 25000 FWC) were suffering from higher read noise (say 15)...

What tickles me a bit is when instead of saying "for a whole lot of complex reasons it ends up not mattering much in practice and the result is roughly equivalent" one says "this is precisely so as demonstrated there".

Very minor issue, I concede, and in the context of photographic education, your approach probably beats mine.

Under the assumptions that S/N per pixel is dominated by the number of photons collected, and that the number of photons collected per pixel is proportional to the pixel area, a sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels. The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.

This is not exactly correct, since if one bins 4 pixels into one pixel post capture via software, the binned superpixel will have 4 read noise contributions, whereas a larger pixel with 4x the area would have only one read noise contribution. Software binning is the mechanism underlying the DxO screen vs pixel data. Hardware binning is widely used with monochrome scientific CCDs (see here), but the process is considerably more complex for Bayer array sensors, and as far as I know, hardware binning with Bayer sensors is only available with the Phase One Sensor+ technology (see here; click on the P+ tutorial).
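The read-noise penalty of software binning is easy to put into numbers: shot noise adds in quadrature across the four sub-pixels either way, but the binned result carries four read-noise contributions instead of one. A minimal sketch, with an illustrative signal level and a 12 e- read-noise figure (both made-up numbers, not measurements of any particular sensor):

```python
import math

def total_noise(signal, read_noise, n_readouts):
    """Shot noise plus n_readouts read-noise terms, added in quadrature."""
    return math.sqrt(signal + n_readouts * read_noise**2)

signal = 4000   # photoelectrons over the binned area (illustrative)

# With zero read noise, 4 binned readouts and 1 big pixel are identical --
# the idealized equal-performance case discussed earlier in the thread.
assert total_noise(signal, 0.0, 4) == total_noise(signal, 0.0, 1)

# With 12 e- read noise, software binning of 4 readouts is noisier
# than a single large pixel of 4x the area:
print(total_noise(signal, 12.0, 4))   # ~67.6 e-
print(total_noise(signal, 12.0, 1))   # ~64.4 e-
```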

While a large sensor does collect more photoelectrons, one should remember that the SNR contribution from shot noise increases as the square root of the number of photons collected. Doubling the sensor area (as in going from an APS sized sensor to a full frame 35 mm sensor) will improve the SNR by a factor of only 1.4. Newer technology CMOS sensors (such as in the Nikon D7000) can compete quite well with older full frame sensors. The same considerations apply to MFDBs. As Erik has pointed out, the MFDBs are hampered by their high read noise which limits their dynamic range. However, their SNR in the midtones (where read noise does not contribute significantly to the SNR) is quite good.
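The square-root scaling is worth working through with concrete numbers (the photon counts here are illustrative, not measured):

```python
import math

photons_aps = 10_000              # collected by an APS-sized sensor (illustrative)
photons_ff = 2 * photons_aps      # full frame: twice the area, twice the photons

# Shot-noise-limited SNR is N / sqrt(N) = sqrt(N).
snr_aps = photons_aps / math.sqrt(photons_aps)   # = 100
snr_ff = photons_ff / math.sqrt(photons_ff)      # ~ 141

print(snr_ff / snr_aps)   # ~1.41, i.e. sqrt(2), for a 2x area increase
```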

For ultimate image quality few well informed observers would deny that MFDBs are the way to go, but the price to performance ratio is quite steep.

I guess that my findings agree pretty well with Bill's conclusion. MFDB has a small advantage regarding shot noise in highlights and midtones. I would suggest that this may be hard to illustrate with images, because all the modern sensors are pretty good in this area.

I would expect that MFDBs would respond better to sharpening compared to DSLRs because I would expect them to have less shot noise. MFDBs are normally not OLP-filtered and they normally don't have microlenses, which may also reduce the need for sharpening.

You really need to look at the whole package. I'm pretty sure that MFDBs have an advantage in the resolution/MTF/microcontrast area. On the other hand I suspect that the DR advantage of MF is by and large a myth. Color reproduction and midtone tonality, I don't know.

Had a brief look. I would argue that such a technically detailed article needs references. For example, you claim "Readout noise for CCDs used in MFD digital is about 12 electron charges". Why should this be the case? How can I see that?

Some things are rather confusing, e.g. "If we assume that we have a full frame sensor of 24x36 mm and compare it with a MF sensor of 24x48 mm size the later[sp?] one will have twice the area, so it will collect about the same number of photons". Equal intensity assumed, twice the area gives twice the number of photons.

If we make an image of an evenly illuminated surface there will be a statistical variation on the pixels. If we presume the data numbers correspond to electron charges we would know that about 68% of the pixels would be within 1090 and 1158 electron charges if the mean was 1124. But a single pixel doesn't have noise.
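A quick simulation of an evenly lit patch shows this spread. At these counts a Gaussian with sigma = sqrt(mean) is a good stand-in for the underlying Poisson photon statistics:

```python
import math
import random

random.seed(0)
mean = 1124
sigma = math.sqrt(mean)   # shot-noise standard deviation, about 33.5 e-

# Each pixel of the evenly lit patch gets an independent noisy count.
pixels = [random.gauss(mean, sigma) for _ in range(100_000)]

within = sum(1 for p in pixels if abs(p - mean) <= sigma)
print(within / len(pixels))   # close to 0.68, the one-sigma fraction
```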

Single pixel noise is a useful concept and does make sense. For a group of similar pixels with similar illumination, we agree that noise causes statistical variation in the response across the different pixels. Similarly for a single pixel, I think we can agree that noise will cause statistical variation in the response of that pixel over time or over repeated measurements. It is not a great conceptual leap to say that a single pixel has a true value and a measurement error. The true value is the average light intensity, while the error is the statistical fluctuation expected for that light intensity. For a single image the best estimate of the true value is the measured value, but many noise reduction techniques rely on capturing multiple images to provide an improved estimate of the true value.
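The multi-image idea in the last sentence is just averaging: the uncertainty of the mean of n frames shrinks as sqrt(n). A sketch with made-up numbers (the 1124 e- signal is illustrative):

```python
import math
import random

random.seed(1)
true_value = 1124                # the pixel's "true" mean signal (illustrative)
sigma = math.sqrt(true_value)    # per-frame shot noise, ~33.5 e-
n_frames = 16

# "Expose" the same pixel n_frames times and average the readings.
readings = [random.gauss(true_value, sigma) for _ in range(n_frames)]
estimate = sum(readings) / n_frames

# The uncertainty of the average is sigma / sqrt(n_frames):
sigma_of_mean = sigma / math.sqrt(n_frames)   # ~8.4 e-, a 4x improvement
print(estimate, sigma_of_mean)
```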

Instead of saying simply "this pixel has a measured value of 1124" it would be more informative to say "this pixel has a measured value of 1124 and a statistical uncertainty of 33". In scientific publications, when measurement results are reported, it is rather common to see statements like "pixel value = 1124 +- 33". If there are multiple sources of error, it is even better to say something like "pixel value = 1124 +- 33 (statistical) +- 12 (readout noise)".

Yes, I absolutely agree with that reasoning. On the other hand, you are still sampling photons, so you are essentially saying that the pixel varies over time. That is pretty similar to the statistical variation of a simultaneous sampling over a number of pixels. You still need several samples to see a variation.

My point, mostly, is that we never do anything useful with a single pixel. We always use a large number of pixels and there will be a statistical variation.

You are right about the readout noise. It would be significant at 1124 electron charges on older sensors. On the latest CMOS sensors readout noise seems to be around 3 electron charges. I also think that you would add the noise sources in quadrature, so shot noise, which would be around 34 charges, would dominate.
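Adding the two noise sources in quadrature with the figures mentioned in the thread (about 34 e- shot noise at this signal level, 12 e- read noise for older CCDs, 3 e- for newer CMOS) shows how little the read noise contributes here:

```python
import math

shot = math.sqrt(1124)   # ~33.5 e- shot noise at 1124 photoelectrons

for label, read in [("older CCD", 12.0), ("newer CMOS", 3.0)]:
    # Independent noise sources add in quadrature.
    total = math.sqrt(shot**2 + read**2)
    print(label, round(total, 1))   # shot noise dominates in both cases
```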

Same as EricV says and I agree. My point is more like that a pixel is pretty meaningless without a context. With multiple exposures you add a temporal context, but I still think that very few of us would enjoy a single pixel movie, although the pixel would have both shot noise and readout noise. A couple of millions of those pixels on the other hand give a nice image.

It would be feasible to build a sensor that has binary pixels, either black or white. If there were enough of those pixels the sensor would form a good image. As far as I know, such sensor designs have been proposed.
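A toy model of such a binary sensor, under the assumption that each tiny one-bit pixel simply reports whether it caught at least one photon; pooling many of them recovers a usable tonal value (the function name and jot counts are my own illustrative choices):

```python
import math
import random

random.seed(2)

def binary_super_pixel(mean_photons_per_jot, n_jots=10_000):
    """Pool n_jots one-bit pixels; each fires if it caught >= 1 photon.
    For Poisson arrivals, P(fire) = 1 - exp(-mean_photons_per_jot)."""
    p_fire = 1 - math.exp(-mean_photons_per_jot)
    fires = sum(1 for _ in range(n_jots) if random.random() < p_fire)
    return fires / n_jots

# Dim vs bright light gives clearly separated tonal values.
print(binary_super_pixel(0.1))   # ~0.10
print(binary_super_pixel(1.0))   # ~0.63
```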

Totally agree with you Bernard. The amount of light reaching the sensor will be a function of the gathering power of the lens, the internal transmission losses and the percentage of the image circle that actually falls on the sensor. Perhaps an interesting test would be to compare various 35mm lenses with various MFD lenses in a test rig on both types of sensors so that these variables can be eliminated.

I shoot with both MFD (H4D-60) and 35mm (D800E). Both are fine instruments and very often I could use either camera for a job. There are situations however where the ease of use and portability of the Nikon make it my tool of choice and situations where the Hasselblad is my preferred option - usually in the studio. They are both very very good.

However, from a subjective point of view, I like what the combination of the Hasselblad back and lenses and their Phocus software produces in a large format print.