Had a brief look. I would argue that such a technically detailed article needs references. For example, you claim "Readout noise for CCDs used in MFD digital is about 12 electron charges". Why should this be the case? How can I see that?

Some things are rather confusing, e.g. "If we assume that we have a full frame sensor of 24x36 mm and compare it with a MF sensor of 24x48 mm size the later[sp?] one will have twice the area, so it will collect about the same number of photons". Assuming equal intensity, twice the area gives twice the number of photons.

I think you've got potentially a really good article, Eric. I didn't get past the 2nd page, however.

Why?

A few things.

First, there appears to be an error in the statement about sensor area in the 2nd graf of page 2. You say a 24x48 sensor will have twice the area of a 24x36 sensor. Is that correct? 24x36=864 and 24x48=1152. That's only a 33% increase in sensor area. If my math is wrong, please let me know. I think the statement ".... the later one will have twice the area, so it will collect about the same number of photons." is confusing. I know what you mean, but a lot of people may not.
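For what it's worth, the area arithmetic can be checked in a couple of lines of Python:

```python
# Sensor areas from the article's example, in mm^2
area_ff = 24 * 36   # full frame: 864 mm^2
area_mf = 24 * 48   # 24x48 medium format: 1152 mm^2

# The larger sensor has 1.33x the area -- a 33% increase, not 2x
increase = area_mf / area_ff - 1
print(f"{increase:.0%}")  # -> 33%
```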

Additionally, there are some, what I'll call, leaps of logic that someone who doesn't know about all this would find difficult to follow. I think even someone with some knowledge may find some of it difficult to follow.

I agree with Fips that the statement of the sensor having a read noise of 12 for a CCD needs support. You also state further down the page that read noise is 15. I think the SNR 12.5% needs some explanation. What does the 12.5% represent?

The statement "That noise is kept down when increasing ISO by applying preamplification before Analogue Digital Conversion." I think also needs clarity. People without a background in the science of electronics won't understand what this means.

Bernard, are you saying that there would be, or that there is, a significant difference in the transmission of light between different lenses all else being equal?

Right. And that was the point of my question. If the lens put in front of the sensor impacts the number of photons hitting the sensor doesn't that have to result from it transmitting a significantly different number of photons of light?

That's one part of it, but you of course have before that the size of the front element of the lens relative to its length, also defined as its aperture and the aperture being used, right?

Cheers, Bernard

True. But that's what I meant with my 'all else being equal' comment. Equal exposure in both cases. That should mean equal light being transmitted. Ignoring differences in aperture to get the same DOF on both formats, I'm just talking basic exposure.

If I remember right, the term we're looking for is "flux". Given equal flux density, a larger sensor will absorb more photons.
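To sketch that in code (the flux density number here is made up purely for illustration), photons collected scale linearly with sensor area at equal flux density:

```python
# Hypothetical photon flux density at the focal plane during the exposure
flux_density = 1_000_000   # photons per mm^2 (illustrative, not a measurement)

photons_ff = flux_density * 24 * 36   # 24x36 mm sensor
photons_mf = flux_density * 24 * 48   # 24x48 mm sensor

# At equal flux density, photon count is simply proportional to area
print(photons_mf / photons_ff)  # -> 1.333...
```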

Well, I have no idea what 'flux density' is. Is it in any way related to the flux capacitor from "Back to the Future"?

But yes, I understand a larger sensor will capture more photons. That's not in question. Nor is it, I don't believe, related to the issue that Bernard raised and that I'm trying to get clarification on.

Yes, light transmission differs depending on systems, number of lenses, coatings, etc., and that plays a role in the endless discussions amateur astronomers have about their well-characterized sensors (compared to digital camera black boxes in most cases). But I think that for photographic purposes it is an acceptable approach to assume, in thought experiments, that all systems are exposed optimally, in other words fully exploit the linear zone of their sensors (even if they don't provide access to that level of raw data). Now I of course agree that if we go into sub-optimal low-light exposures, long exposures, etc., that assumption may not hold. Ultimately, all other things being equal, a bigger sensing area always wins. The main basis for our ongoing discussions is that all other things are never equal in this field/market, and that by the time we have come to some kind of consensual evaluation, the target has moved as fast as a DSLR depreciates...

Erik, I think the weakest part of your great article is the section on color accuracy. There is so much that can influence the results, especially in the RAW processor. Every manufacturer imposes their idea of good color into a camera. I guess the best test would be how close could you get the cameras to a target by profiling and then see where the cameras differ from each other. And color accuracy has really two criteria, how accurate it is in absolute and relative terms. You can have very high absolute accuracy and really bad looking (unnatural) color.
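On quantifying absolute accuracy: the standard tool is a delta-E metric against measured target patches. A minimal sketch (the Lab values here are invented for illustration):

```python
import math

# Hypothetical L*a*b* values: a reference patch and the camera's rendering of it
target = (52.0, 18.0, -10.0)
measured = (50.5, 20.0, -8.5)

# CIE76 delta-E is just Euclidean distance in Lab space
delta_e = math.dist(target, measured)
print(f"dE76 = {delta_e:.2f}")
```

Averaging delta-E over a full target gives a number for absolute accuracy, but as you say, it tells you nothing about whether the residual errors look natural.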

i don't know who your intended audience is but i reckon what you wrote is the cat's pyjamas. i read all of it from start to end and recommend that others who do not have a phd yet but want to be able to speak at dinner parties as if they do should do the same. hopefully anybody who disagrees will make their own attempt to do as you have done; that would help create a useful stream of readable literature on this subject instead of the usual tobacco-industry marketing hype. of course what you are saying may be all smoke and mirrors, but that is your right and much better than hiding behind a zillion pages of scientific gobbledygook followed by 2 zillion pages of references that is for (yawn) academic journals, not internet essays, right?

"It is possible that better results would been achieved by Capture One, but I'm pretty sure the comparison is pretty relevant regarding raw image data."

Based on knowing you here from the forum I assume it's a well-researched, well-intended piece, written with minimal bias. But if the discussion is centered on the "raw image data" then I lose interest pretty quickly.

Photographers show end results, not raw pixels. If using a manufacturer's software (free to use with their backs) to process the images makes the end result better (based on whatever criteria are important to the photographer), then that software should be considered an essential element of any analysis of the "myths or facts" of what makes the camera system sing.

Especially when a strong majority of Phase users use Capture One to process their Phase files (and many, maybe even most, use it to process their dSLR files as well).

There is a lot of good information here, but it is presented in a somewhat disjointed fashion, like you were just writing down good stuff as you thought of it. I think the article needs a second draft, editing the content to make it well organized and coherent. Maybe it would help to write an outline to define the structure of the article? It also needs a lot more detailed explanation if you want it to make sense to a non-expert audience.

Some notes on your "collecting more photons" section:

"If we assume that we have a full frame sensor of 24x36 mm and compare it with a MF sensor of 24x48 mm size the later one will have twice the area, so it will collect about the same number of photons." I have no idea what you mean here. Apart from the math error, don't you mean to say that the larger sensor will collect more photons? Or are you assuming something about relative pixel sizes and talking about photons per pixel?

Throughout the article, you never make a proper distinction between sensor size and pixel size. All of your dynamic range discussion is at the pixel level. Does the larger sensor have an advantage only because it provides larger pixels? (The answer is no, but I don't think I could conclude that from anything in your article.)

This is a technical article, so you should get the details right. For example, one of your conclusions is "A larger sensor will collect more photons and therefore have less shot noise." This is incorrect (as was shown in the earlier discussion). A correct statement would be "A larger pixel will collect more photons at a given gray level, providing a better signal/noise ratio".
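The shot-noise statement is easy to verify numerically: photon arrival is Poisson, so the noise is sqrt(N) and the SNR improves as sqrt(N). A quick sketch with illustrative photon counts:

```python
import math

# Photons collected at the same gray level (illustrative numbers)
small_pixel = 10_000
large_pixel = 40_000   # 4x the area, so 4x the photons at equal exposure

# Poisson shot noise: sigma = sqrt(N), hence SNR = N / sqrt(N) = sqrt(N)
for n in (small_pixel, large_pixel):
    print(f"N={n}: noise={math.sqrt(n):.0f}, SNR={math.sqrt(n):.0f}")
```

So quadrupling the photons per pixel only doubles the SNR, which is why per-pixel comparisons can mislead.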

From the discussion up to this point, I do not think you are entitled to replace "pixel" with "sensor" in the preceding statement, although it seems important to be able to do so. That reasoning comes next in your article, but only as a passing comment in an example.

Your discussion about pixels per printed image size comes late in the article but deserves more prominence. This is precisely why a large sensor has an advantage over a small sensor (irrespective of pixel size). It is not "software binning". It has a larger effect on the darks (where S/N is poor) than on the grays (where S/N is good).

Please note that the first line of the article says: "Note: This is an article in progress".

There are some typing mistakes, like the one you have pointed out. I'll correct errors when they are found.

Regarding photon collection, the only factor that really matters is the total number of photons collected. Smaller pixels would each collect fewer photons, but there would be more pixels. It matters very little whether you collect 24 000 000 x 1000 photons or 6 000 000 x 4000 photons; you still end up with 24 000 000 000 photons. If you printed the image at 8x10" at 360 PPI you would end up with about 2300 photons/pixel in both cases.

Once a print scale is fixed, photons/pixel in the sensor is irrelevant.
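That accounting, written out in code (using the example numbers above):

```python
# Same total sensor area, different pixel counts (the article's example)
photons_many_small = 24_000_000 * 1_000   # 24 MP at 1000 photons/pixel
photons_few_large  = 6_000_000 * 4_000    # 6 MP at 4000 photons/pixel
assert photons_many_small == photons_few_large == 24_000_000_000

# Downsampled to an 8x10" print at 360 PPI, both land in the same place
print_pixels = (8 * 360) * (10 * 360)     # 10,368,000 pixels in the print
print(round(photons_many_small / print_pixels))  # roughly 2300 either way
```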

There is some relevance to pixel size regarding DR but none at all with regard to shot noise.

Look at the included figures: the first one shows the effect of "Sensor+" on DR; there is a small improvement in DR above ISO 400, where Sensor+ is engaged (reducing resolution to 20 MP). The second figure shows that "Sensor+" has very little effect on tonal range, which is dominated by shot noise.
