Regarding the readout noise, I have chosen 12 electrons/pixel because I wanted to use an optimistic value; I believe the 15-20 range is more probable for MFDBs. A few lines below, an example is given that is based on data from sensorgen; I will include a reference in the next revision.

Had a brief look. I would argue that such a technically detailed article needs references. For example, you claim "Readout noise for CCDs used in MFD digital is about 12 electron charges". Why should this be the case? How can I see that?

Some things are rather confusing, e.g. "If we assume that we have a full frame sensor of 24x36 mm and compare it with a MF sensor of 24x48 mm size the later[sp?] one will have twice the area, so it will collect about the same number of photons". Equal intensity assumed, twice the area gives twice the number of photons.

Ultimately, all other things being equal, a bigger sensing area always wins.

But the hard fact is that the technology for smaller sensors (dSLR/P&S/cell phones) is nowadays consistently not the same as that for bigger sensors in MF(DB), so we can't consider all other things equal and argue from sensor area alone (while it wins in some areas, in certain others it doesn't).

My take is that Doug's observation is a valid one. It is about the whole package. I have great respect for Doug and I think that he has made great contributions to these forums.

My intention with the article is to put things in some perspective. Now, we can have different perspectives, depending on experience. I just try to present some facts, keeping bias to a minimum.

Let us just take an example. There is something called thermal noise. It is my understanding that Phase One raw files contain info about ambient temperature, and perhaps also sensor temperature. Phase can use that information to selectively reduce noise in the darks. Unfair advantage to Phase? Probably! Do other vendors do similar things? Probably!

Regarding the number of photons collected, the only factor that really matters is the number of photons collected. Smaller pixels would collect fewer photons, but there would be more pixels. It matters very little whether you collect 24 000 000 x 1 000 photons or 6 000 000 x 4 000 photons; you still end up with 24 000 000 000 photons. Were you to print the image at 8x10" at 360 PPI, you would end up with about 2 300 photons/pixel in both cases.
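The arithmetic above can be checked with a short script (a sketch; the photon counts and the 8x10" print at 360 PPI are the figures from the paragraph above):

```python
# Total photons are the same whether collected by many small or few large pixels.
small_pixels = 24_000_000 * 1_000   # 24 MP sensor, 1 000 photons per pixel
large_pixels = 6_000_000 * 4_000    # 6 MP sensor, 4 000 photons per pixel
assert small_pixels == large_pixels == 24_000_000_000

# Photons per print pixel for an 8x10" print at 360 PPI.
print_pixels = (8 * 360) * (10 * 360)           # 10 368 000 print pixels
photons_per_print_pixel = small_pixels / print_pixels
print(round(photons_per_print_pixel))           # roughly 2 300 photons per print pixel
```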

So if you cut an image in half, you have half the number of photons? But so what. What is visible is still the same. (Let's ignore the fact that prints and files don't have photons.)

And don't you care about the well capacity and how many photons get in that well? This is what signal is after all. A pixel with a small signal is still a pixel with a small signal--what is around it does not change it into a pixel with more of a signal.

The pixel is a luminance/color data point. It has no spatial information beyond its position in the array. That luminance/color value is directly related to how many photon strikes it receives. The pixels around it are unrelated. S/N of the pixel is important.

RFPhotography

Isn't it true that both the pixel well capacity and the size of the sensor matter? The pixel well capacity plays a larger part in the dynamic range. The size of the sensor plays a larger part in the total number of photons captured and thus the overall signal to noise ratio. So, Eric, in your example, while the two sensors would capture the same number of photons and could have the same S/N as a result, the sensor with the larger pixels should exhibit a better overall dynamic range. Correct?

Deeejjjaaa is also correct that not everything is equal between the different formats. As you point out, Eric, the noise characteristics of CCD and CMOS sensors are different, so a direct comparison is somewhat difficult. Technology such as back-side illumination is also advantageous, allowing more light to be captured (a) in each pixel and (b) in total.

Yes, a larger pixel would have a small advantage in DR. The reason that this is often not the case is that the sensor chips with the highest resolution often use a more advanced chip technology, leading to lower readout noise.
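As a rough illustration of that trade-off (engineering dynamic range taken as full-well capacity over read noise, in stops; the well capacities and read-noise figures below are made-up round numbers, not measurements of any real sensor):

```python
import math

def dr_stops(full_well, read_noise):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well / read_noise)

# Hypothetical large pixel: big well, but older readout with more noise.
print(round(dr_stops(60_000, 15), 1))   # about 12 stops
# Hypothetical smaller pixel on a more advanced chip: smaller well, lower read noise.
print(round(dr_stops(30_000, 4), 1))    # slightly better, despite the smaller well
```

The point being that lower readout noise on a newer chip can offset, or even outweigh, the larger pixel's well-capacity advantage.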

Back side illumination has many advantages, but I don't think it has been implemented in DSLRs yet.

And I think this is the point deeejjjaaa was making.

Not sure about DSLRs. Certainly P&S cameras and cell phones. Maybe some mirrorless type cameras, not sure. But this gets back to the point about how different technologies can impact the final analysis.

Regarding the number of photons collected, the only factor that really matters is the number of photons collected. Smaller pixels would collect fewer photons, but there would be more pixels. It matters very little whether you collect 24 000 000 x 1 000 photons or 6 000 000 x 4 000 photons; you still end up with 24 000 000 000 photons. Were you to print the image at 8x10" at 360 PPI, you would end up with about 2 300 photons/pixel in both cases. Once a print scale is fixed, photons/pixel in the sensor is irrelevant.

I agree completely. My point was that your article does not make this clear, since the discussion is almost all about photons per pixel. Adding the content above to your article in progress would be a great improvement.

I'd suggest that there are some advantages to back-illuminated CMOS, like better fill factor and fewer lens-cast effects, but a change of technology would have little effect on the analysis. It would primarily give better low-light performance, and that is not included in the article. I may add some info about these issues.

A pixel is not just a luminance data point at a particular location, it also represents a certain subject size. One way to think of this is by taking the angular coverage of the lens and dividing by the number of pixels covered. Another way to think of this is by considering how many pixels are occupied by a physical object in the scene. By representing a physical object (say a patch of uniform gray sky) with more pixels, I get inherently better signal/noise in the final printed image. It is not just noise per pixel that matters.

Under the assumptions that S/N per pixel is dominated by the number of photons collected, and that the number of photons collected per pixel is proportional to the pixel area, a sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels. The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.
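Under those assumptions (photon shot noise only, so noise is the square root of the signal), the compensation can be made explicit; the 1 000 and 4 000 photon figures echo the example earlier in the thread:

```python
import math

# One large pixel collecting 4 000 photons: shot-noise-limited S/N.
big_snr = 4_000 / math.sqrt(4_000)

# Four small pixels of 1 000 photons each, summed (binned) in the output image.
binned_signal = 4 * 1_000
binned_noise = math.sqrt(4 * 1_000)   # variances add: 4 x 1 000 photons of shot noise
binned_snr = binned_signal / binned_noise

print(round(big_snr, 2), round(binned_snr, 2))
# With shot noise alone, the two are identical: the extra pixels compensate exactly.
```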

When your pixel reads 1124, it simply means that on a certain sensor you can expect the actual incoming data that produced that pixel value was (for example) between 1114 and 1134. The neighbouring pixel, which could be measuring the same data (say, a uniform background), could read 1113 +/- 10, and another, slightly less sensitive one, 983 +/- 11. You'd then see a "noisy" band of three pixels in place of the uniform patch you were hoping for.

If you want to state it another way, there is a margin of uncertainty on that single value, and that uncertain part of the signal is the noise in the signal.

Quote

2) I'm not stretching. If you crop the image it will be enlarged more. So each pixel in the print will "see" fewer photons, and noise will increase.

If, in the simplest method, the pixel is doubled, that doubling doesn't change the SNR. You are simply increasing the area that represents the 1124 (+/-10) measurement, neither changing its value nor changing the error margin that occurred when it was captured.

Quote

A sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels. The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.

In practice, this works roughly for digital cameras (as demonstrated in Emil Martinec's paper).

Now, consider the following cameras that happen to have 1 unit of read noise.

So you see that this is not "precisely compensated" although the distribution of your signal will indeed be centered on the same 15-16-17.

It doesn't matter too much in photography because you are almost always using a decent exposure and working with significantly larger numbers than the units above. So you could simply say that you'll ignore the issue because it is negligible (do the math with 60 000 units and 4 times 15 000 units and 10 units of read noise, for example), but redefining noise or demonstrating an equality that doesn't exist will not satisfy everyone.
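Doing the suggested math (one big pixel of 60 000 units versus four pixels of 15 000 units each, every pixel carrying 10 units of read noise; shot noise taken as the square root of the signal, noise sources added in quadrature):

```python
import math

READ_NOISE = 10.0

# One large pixel: 60 000 units of signal, a single dose of read noise.
big_signal = 60_000
big_snr = big_signal / math.sqrt(big_signal + READ_NOISE**2)

# Four small pixels of 15 000 units each, summed: four doses of read noise.
small_signal = 4 * 15_000
small_snr = small_signal / math.sqrt(4 * (15_000 + READ_NOISE**2))

print(round(big_snr, 1), round(small_snr, 1))
# The two are very close but not identical: the binned pixels pay the read
# noise four times, so the compensation is only approximate.
```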

The noise (uncertainty on the captured signal) doesn't change after the capture if you can store the data reliably.

One last thing: the "scientific" definition of noise and the "photographic" perception of noise are two different things. Shooting a deep dark sky background should be noisy simply because the sky background is not uniform (there's also the variability in the rate of light-unit arrivals, i.e. Poisson statistics, but let's not get into that), yet a photographer will find the totally uniform black background produced by the Nikon blackbox less noisy.