Bart, I have a similar noise-based way of deciding when to stop turning up the ISO that has nothing to do with unity gain, although my targets are much simpler; just one D65 patch.

Hi Jim,

I prefer your current presentation over the above quickie that I did some 4 years ago. Despite a larger number of variables in my test (non-uniformity of the CC patches, light/exposure, and the Raw converter), total noise also approaches a more constant build-up at or above unity gain, but it also shows an overall increase of the noise level. Your presentation allows one to find a sweet spot (if there is one) in the S/N ratio, which is very useful for astrophotography, because the overall noise, which may be a distraction (and problematic for sharpening) in regular photography, can be reduced quite dramatically by averaging multiple exposures.

I've read your blog, and am pleased to see the progress you've made in finding a relatively easy-to-interpret graphical form to express the complexities underneath. I'm also pleased with the attention given to characterizing/reducing methodological errors with respect to sensor temperature and shutter speed variations.

The only improvement I can currently think of is an elimination (or at least a determination) of the influence of non-uniformity of the light source and lens vignetting, and of sensel defects (PRNU, sensor dust, dead or hot pixels), in the central crop area used for analysis. The only way would be to do a check with subtracted image pairs (and Stdev/Sqrt(2)), or (slightly less accurate but still informative) a comparison between the two Green-filtered sensel sub-images. It may reveal that your current results do not vary much from an even more normalized data set, but dust and PRNU do keep lurking around the corner, waiting to strike.
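
As a sketch of what the image-pair check buys you, here is a toy simulation on synthetic data (the signal level, PRNU fraction, and read noise are assumed values for illustration, not measurements from any real raw file):

```python
import random
import statistics

random.seed(42)

N = 10_000        # pixels in the crop
MEAN = 1000.0     # mean signal in ADU (assumed)
PRNU = 0.02       # 2% fixed-pattern variation (assumed)
READ = 3.0        # read noise in ADU (assumed)

# The fixed pattern (PRNU, dust, vignetting) is identical in every exposure
pattern = [MEAN * (1 + random.gauss(0, PRNU)) for _ in range(N)]

def expose():
    """One simulated exposure: shot noise (~sqrt of signal) plus read noise."""
    return [p + random.gauss(0, p ** 0.5) + random.gauss(0, READ) for p in pattern]

a, b = expose(), expose()

# A single frame's stdev is inflated by the fixed pattern...
naive_sigma = statistics.pstdev(a)

# ...while subtracting the pair cancels it; divide by sqrt(2) because the
# temporal noise of two frames adds in quadrature
pair_sigma = statistics.pstdev([x - y for x, y in zip(a, b)]) / 2 ** 0.5

print(round(naive_sigma, 1), round(pair_sigma, 1))
```

The pair-subtracted estimate lands near the true temporal noise, while the single-frame estimate is biased upward by the fixed pattern, which is exactly the upward bias described above.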

I find, e.g., that there is a (negligibly) small systematic difference in the mean sensel values from the G1 and G2 sensels of my 1Ds3 camera: the G1 sensels on average give up to a 1.605 ADU higher response than the G2 sensels. It's nice to know that this will usually not bias the noise statistics in a noticeable way, but still there is a difference. It also allows one to reveal shortcomings in isolated Green-filtered sensels in the area under investigation, which may influence small-crop-area results (see attachment, with seriously boosted contrast, which reveals a slightly less responsive G1 sensel).

Quote

I first do a series of exposures of the target at various ISOs, using only the central 200x200 pixels, and holding the digital values constant, in this case 5 stops below clipping for the green channel. That means every time I increase the ISO by a stop, I have to stop down a stop. (Because I'm compensating for the read noise, which varies with shutter speed, I hold the shutter speed constant.)

There is a slight (nitpicking) concern here. By varying the aperture you do avoid effects from dark current and potentially from Raw converters that are not really giving the same type of Raw data at all exposure times. Unfortunately, at the same time you introduce a variable in light uniformity over the selected area due to vignetting. You can minimize that effect by taking only a small crop at the center of the image (assuming the light fall-off is symmetrical). But the smaller the crop becomes, the more a few outliers will influence the statistics. Another potential source of non-uniformity is that there may be a slight asymmetry in the aperture that the (partially sticky) blades of the iris leave open, and there may be a less than perfectly linear progression in the amount of light that's let through (narrower apertures also require longer to close, hopefully to a repeatable final position). Another issue is that at apertures wider than approx. f/3.5, the analog gain of some cameras seems to be increased ...!

Quote

I do have one concern with this test, however. I've noticed that SNR holds up remarkably well as resolution decreases, if the noise is big enough.

I'm not sure which resolution you have in mind, so I can't comment on that.

All in all, a very useful exercise and a helpful presentation, and (as usual) it teaches us a lot about our tools. Do take my remarks as encouragement and not as criticism, because I know how much time it takes to do these things right.

I don't know if Iliah Borg happens to stumble across this exchange, but I would love it if a G1 vs. G2 metric could be added to RawDigger: just a number giving the mean of the G1-G2 differences in a selected area, and the standard deviation of the result divided by Sqrt(2). A large non-zero mean indicates a calibration issue if measured on a uniform patch, and a smaller standard deviation than that of the G1 or G2 sensels suggests the presence of PRNU, or of uneven lighting, that injects an upward bias in the noise statistics.
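
A sketch of that proposed metric on synthetic data (the signal level, the ~1.6 ADU offset, and the lighting gradient are assumed values chosen for illustration):

```python
import random
import statistics

random.seed(7)

N = 5000
SIGNAL = 800.0     # mean green level in ADU (assumed)
G1_OFFSET = 1.6    # calibration offset, like the ~1.6 ADU seen on the 1Ds3
GRADIENT = 0.05    # 5% uneven lighting across the patch (assumed)

# Uneven lighting affects both green planes equally at the same locations
light = [SIGNAL * (1 + GRADIENT * (i / N - 0.5)) for i in range(N)]

def plane(offset):
    """One green sub-image: shared illumination + offset + shot noise."""
    return [l + offset + random.gauss(0, l ** 0.5) for l in light]

g1, g2 = plane(G1_OFFSET), plane(0.0)

diffs = [x - y for x, y in zip(g1, g2)]
mean_diff = statistics.fmean(diffs)                 # calibration-offset estimate
diff_sigma = statistics.pstdev(diffs) / 2 ** 0.5    # lighting-free noise estimate
g1_sigma = statistics.pstdev(g1)                    # inflated by uneven lighting

print(round(mean_diff, 2), round(diff_sigma, 2), round(g1_sigma, 2))
```

The difference-based standard deviation comes out smaller than the single-plane one, flagging the lighting-induced upward bias, and the non-zero mean recovers the calibration offset.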

Jim, what I believe you have done is replicated the 18% SNR curves at DxO - in your case at around 3.2% (-5 stops) of full scale.

Hi Jack,

Yes, but with an arrangement that's more useful for the topic at hand, and with a few additional pieces of information.

Quote

The highest point on the curve represents the cleanest spot of just photon noise, as unpolluted as possible by read noise or PRNU, which can be seen here as the red and green patches at the tip and tail end of the (unstraightened) curves, where they drop just like in yours.

Correct, but that may well coincide with the Unity Gain level, the underlying hypothesis of the usefulness of knowing that metric. Perhaps it's easier to determine the Unity Gain than the optimal SNR trade-off. Nevertheless, it's probably more important to know the real optimal photon noise versus camera noise trade-off point, as far as it's controlled by the ISO setting, although it may be different at various levels of exposure. However, do note that the overall noise level at a certain ISO setting may be objectionable from an aesthetic point of view, even though the objective SNR is optimal. It's still a trade-off, and collecting more photons will always give better quality.

Bart, upon further thought I now believe that my question about the 5DIII or in fact most Canon DSLRs is not useful for the issue at hand, since I understand that they employ a two stage analog amplifier design, complicating things unnecessarily.

Hi Jack,

I agree, but it is still very relevant for Canon owners and for future camera models that may emerge.

Quote

Let's stick to single amplifier designs for simplicity. What do you think about my comment to Jim above?

Does the area cleanest of camera-induced noise on an SNR curve correspond to unity gain?

Jack,

For the RX-1 it happens to, at least with the input five stops down from clipping, but for the NEX-7, it does not:

By the way, you wondered what would happen if the stimulus was a little darker. You can get a sense of that by looking at the blue curve which averages 312/467 = 2/3 of a stop down from the green, and the red curve, which averages 183/457 = 1 1/3 stop down from the green. Not as far down as you wanted to go, but it's some more information.
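
For anyone wanting to check those figures, the stops-down values follow directly from the ratios of the mean raw levels:

```python
import math

def stops_down(level, reference):
    """How many stops a mean raw level sits below a reference level."""
    return math.log2(reference / level)

blue = stops_down(312, 467)   # blue curve vs. the green curve, ~2/3 stop
red = stops_down(183, 457)    # red curve vs. the green curve, ~1 1/3 stops

print(round(blue, 2), round(red, 2))
```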

Also by the way, I believe the reason the ISO 6400 points jump up like they do is because there is some noise reduction that the camera does at that ISO that you can't turn off.

After taking many shots by various means, I have given up on the Grail of measuring the ISOug of a Foveon-based camera.

Ted,

I'm sorry the method didn't work with the Foveon-based cameras. I thank you for your patience and for doing a great job of researching the difficulties. I will note your findings on my blog to save others the frustration.

The only improvement I can currently think of is an elimination (or at least a determination) of the influence of non-uniformity of the light source and lens vignetting, and of sensel defects (PRNU, sensor dust, dead or hot pixels), in the central crop area used for analysis. The only way would be to do a check with subtracted image pairs (and Stdev/Sqrt(2)), or (slightly less accurate but still informative) a comparison between the two Green-filtered sensel sub-images. It may reveal that your current results do not vary much from an even more normalized data set, but dust and PRNU do keep lurking around the corner, waiting to strike.

Bart, I now recognize this as a deficiency. I plan to do what I can to fix it, but it will take me at least a few days. Right now I don't have a good way to do the calculations, but help is on the way. Thanks for your help with all this.

There is a slight (nitpicking) concern here. By varying the aperture you do avoid effects from dark current and potentially from Raw converters that are not really giving the same type of Raw data at all exposure times. Unfortunately, at the same time you introduce a variable in light uniformity over the selected area due to vignetting. You can minimize that effect by taking only a small crop at the center of the image (assuming the light fall-off is symmetrical). But the smaller the crop becomes, the more a few outliers will influence the statistics. Another potential source of non-uniformity is that there may be a slight asymmetry in the aperture that the (partially sticky) blades of the iris leave open, and there may be a less than perfectly linear progression in the amount of light that's let through (narrower apertures also require longer to close, hopefully to a repeatable final position).

You raise some concerns that I never thought of, and one that I've considered and corrected for. I take a 200x200-pixel area in the center of the image as my region of interest. That gives me 10,000 pixels in each color plane, which seems to be a good compromise. If there are aperture-shape effects or differential vignetting, I don't correct for them. I could perform some measurements to see what's going on, though.

I did consider the possibility that the aperture varies from precisely the set f-stop, and, on the two cameras that open their diaphragms between exposures, that there's exposure-to-exposure variation. I didn't go into that on my blog or here because I thought it was a pretty nerdy detail, but I will now. It's the source of the word "corrected" on some of my charts.

For the entire ISO series, 16 exposures times the number of ISO settings tested, I compute the average count for the red, green, and blue channels. For every exposure, I compute a correction factor by dividing that exposure's mean count by the series average for its channel. I divide each raw count by the correction factor for that exposure to get a corrected count. I divide each raw standard deviation by the square root of the correction factor for that exposure to get a corrected standard deviation. I use the corrected counts and the corrected standard deviations for the SNR calculations.

Basically, I'm making small corrections for the SNRs based upon how the measured exposure errors would affect a photon-noise-limited (perfect) camera. I think of it as kind of a model-mediated one-step Newton's method.
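
A minimal sketch of that correction on toy numbers (the per-exposure means and standard deviations below are invented for illustration, not Jim's actual measurements):

```python
import math

# Toy per-exposure data for one channel: (raw mean count, raw stdev), with a
# few percent of shutter/aperture drift between nominally identical exposures
exposures = [(1000.0, 32.0), (980.0, 31.7), (1030.0, 32.5), (990.0, 31.9)]

channel_avg = sum(m for m, _ in exposures) / len(exposures)

corrected = []
for mean, sigma in exposures:
    factor = mean / channel_avg              # per-exposure correction factor
    corr_mean = mean / factor                # lands every exposure on the average
    corr_sigma = sigma / math.sqrt(factor)   # photon noise scales as sqrt(signal)
    corrected.append((corr_mean, corr_sigma))

print(corrected)
```

The sqrt in the standard-deviation correction is the "photon-noise-limited camera" assumption: for pure shot noise, sigma goes as the square root of the signal, so a small exposure error perturbs the noise by the square root of the same factor.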

Oops! I didn't know that. At least it won't affect the M9 results, since the camera can't figure out where the ISO is set.

Hi Jim,

It's not a huge effect (<1/3 of a stop) as far as my camera is concerned, but these small variables add up. Besides, we never know what other cameras do or will do in the future, so it's best to either detect it (calibrate for the aperture used, per camera model) or avoid it. Maybe the small errors due to shutter time are relatively smaller and more gradual (= predictable) after all?

For the RX-1 it happens to, at least with the input five stops down from clipping, but for the NEX-7, it does not:

Yes, the RX-1 does appear to have a read noise sweet spot around ISO 800, and I calculate its Unity Gain to be around there, based on a 13-bit ADC:

The D800e's read noise minimum doesn't correlate at all with its Unity-Gain, though, and neither do some of the other Exmor Nikons I've looked at.

Quote

By the way, you wondered what would happen if the stimulus was a little darker. You can get a sense of that by looking at the blue curve which averages 312/467 = 2/3 of a stop down from the green, and the red curve, which averages 183/457 = 1 1/3 stop down from the green. Not as far down as you wanted to go, but it's some more information.

Good idea.

Quote

Also by the way, I believe the reason the ISO 6400 points jump up like they do is because there is some noise reduction that the camera does at that ISO that you can't turn off.

I agree.

Jack

PS Jim, could you post somewhere the NEF with the 1-2 electron image? I'd be interested to play with it a bit if at all possible.

Yes, but with an arrangement that's more useful for the topic at hand, and with a few additional pieces of information.

Indeed.

Quote

Nevertheless, it's probably more important to know the real optimal Photon noise versus camera noise trade-off point, as far as it's controlled by the ISO setting, although it may be different at various level of exposure.

You mean in case of non-linearities? What could those be in a practical situation?

Quote

However, do note that the overall noise level at a certain ISO setting may be objectionable from an aesthetical point of view, even though the objective SNR is optimal. It's still a trade-off, and collecting more photons will always give better quality.

Noted. As far as I am concerned the exercise is really useful to determine ISO for maximum information (IQ) captured, assuming a maxed-out, fixed exposure.

Yes, the RX-1 does appear to have a read noise sweet spot around ISO 800, and I calculate its Unity Gain to be around there, based on a 13-bit ADC.

Jack, at the light levels that I'm using, the read noise is a very small contributor to the total noise, especially since, being uncorrelated with the shot noise and PRNU, it adds as the square root of the sum of the squares. That calculation renders relatively small numbers inconsequential. Two noise sources a factor of ten apart produce a result that's only 0.5% more than the larger one.
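
A quick sketch of the quadrature arithmetic behind that 0.5% figure:

```python
import math

def total_noise(*sigmas):
    """Uncorrelated noise sources add as the square root of the sum of squares."""
    return math.sqrt(sum(s * s for s in sigmas))

shot, read = 10.0, 1.0            # two sources a factor of ten apart
combined = total_noise(shot, read)
excess = combined / shot - 1.0    # fractional increase over the larger source

print(round(combined, 3), round(100 * excess, 2))   # ~10.05, ~0.5%
```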

Something seems wrong here: the M9 sensor should have roughly the same 60,000-electron full well capacity as other Kodak full-frame-type sensors with 6.8 micron pixels, like the KAF-31600 (http://www.truesenseimaging.com/all/download/file?fid=11.62), so 30,000 seems too low. Also, the other numbers seem too high, and it would be puzzling for full-frame-type CCDs to have lower rather than higher full well capacity, since the main virtue of full-frame-type sensors is using almost all of the sensor area for storing photo-electrons, whereas CMOS sensors use some space for the three or more processing transistors per photo-site.

Sorry it's taken me so long to research this. I can't find anything wrong with my numbers. I did look at Sensorgen. They put the M9 FWC at 30976 while I put it at 28670. I don't think that's so far off. They say the D4 FWC is 117813 and I say it's 111404. Again, considering the crudeness of my testing, I don't think that's way off. I've got the D800E at 47510 and they've got it at 44972. If I'm way off, so are they. Anybody want to comment on their methods or accuracy?

BTW, both Sensorgen and I are guilty of using more decimal places for these electron counts than can possibly be significant. My high school Physics teacher would be all over me. Mea culpa. I have no defense, other than it's fun to think in terms of individual electrons.

Also, sensors like these with microlenses are known to have quantum efficiency of around 40% or better with color filter arrays and higher without: about 80%. That is, about 2.5 photons per photo-electron with a CFA, under 2 photons per electron without. So 6.2 photons per photo-electron is way too high.

Also, understand that it is not a matter of the sensor counting up to some number of photons and then scoring one electron; it is instead a probabilistic thing. For example, when a sensor has 80% quantum efficiency with no CFA in place, it means that each photon has an 80% chance of causing an electron to be deposited in the well, a 20% chance of going undetected.
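
That probabilistic view can be simulated in a few lines (an assumed 80% QE with no CFA, synthetic trials rather than real photon counts):

```python
import random

random.seed(1)

QE = 0.80            # assumed quantum efficiency with no CFA
PHOTONS = 100_000

# Each arriving photon independently has an 80% chance of depositing an
# electron in the well, and a 20% chance of going undetected
electrons = sum(1 for _ in range(PHOTONS) if random.random() < QE)

print(electrons / PHOTONS)   # close to 0.80
```

No threshold is involved: even a single photon has an 80% chance of producing an electron, and the 80% figure only emerges as an average over many photons.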

My methods don't worry about how many photons it takes to get an electron, although I'm open to extending the procedure (and my camera modeling) to that.

Also, sensors like these with microlenses are known to have quantum efficiency of around 40% or better with color filter arrays and higher without: about 80%. That is, about 2.5 photons per photo-electron with a CFA, under 2 photons per electron without. So 6.2 photons per photo-electron is way too high.

I had missed this originally, BJL. I think that you are correct in general, but perhaps a little exploration of the subject would be useful. What we typically refer to as QE in photography, often specifying it as a single percentage, is better referred to as Effective Absolute Quantum Efficiency (or AQE). Absolute Quantum Efficiency is in fact a function made up of the product of three major components: transmittance (T) of light above a detector (CFA and other filters), the effective Fill Factor (FF), and the Charge Collection Efficiency (n, or simply QE) of the photodiode/detector/site.

T and n are wavelength dependent, so AQE is too. T is wavelength dependent in part because we only want one of three RGB color ranges to make it through to the photosite, so we design color filter arrays with the appropriate wavelength-dependent filtering action.

Charge Collection Efficiency (n) is wavelength dependent because a photon carries a wavelength-dependent amount of energy. More energy, more electrons according to the Responsivity of the photodetector: the vertical axis in the following image represents the number of electrons generated at a given exposure for a silicon photodiode without filters, the horizontal axis is the wavelength of the incoming light:

Note how in this case light of wavelength 550 nm generates more than twice as many electrons (current) as light at 400 nm. If this is incorrect, I would be grateful if someone could explain why.

If we put together T, FF and n we get Absolute Quantum Efficiency like so:

So for instance if we were to illuminate that sensor behind an IR filter with light of D50 spectrum, I understand that a photosite would only get the integral of the respective R, G or B curve under which it is sitting - or about 13% of the total number of arriving photons. Current sensors are only slightly better. The D800e for instance averages out at an Effective Absolute Quantum Efficiency of just shy of 16% - requiring 6+ photons to hit the sensor before the energy necessary to release one electron is achieved. You can read the Effective AQE of the RX-1 in the table that I posted earlier in this page.
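
To make the arithmetic concrete, here is a toy decomposition along the lines described above; the three factor values are assumptions chosen purely so the product lands near the quoted ~16%, not measurements of any particular camera:

```python
# Toy decomposition of Effective Absolute Quantum Efficiency; the three factor
# values here are illustrative assumptions, not measured ones
T = 0.40    # average CFA + filter-stack transmittance for one color channel
FF = 0.80   # effective fill factor (with microlenses)
n = 0.50    # charge collection efficiency averaged over the passband

aqe = T * FF * n                   # AQE = T * FF * n
photons_per_electron = 1 / aqe

print(round(aqe, 2), round(photons_per_electron, 2))   # 0.16 and 6.25
```

A 16% Effective AQE corresponds to 1/0.16 = 6.25 photons arriving, on average, per photo-electron collected, which is where the "6+ photons" figure comes from.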

Quote

P. S. the name "unity gain" is a bit unfortunate, as it perpetuates the myth that there is "no amplification" at some natural exposure index level, and "amplification" at all higher EI settings. Instead with various dimensional conversions from photo-electron counts (charges) to currents to voltages to numerical ADC levels, the idea of "unamplified" or "gain of one" is physically meaningless. I suppose the idea of "one ADC level per detected photon" can be useful, as an upper limit on the level of amplification that can help with image quality, SNR and such.

I would tend to agree, but I am intrigued by the possibilities nonetheless.

Note how in this case light of wavelength 550 nm generates more than twice as many electrons (current) as light at 400 nm. If this is incorrect, I would be grateful if someone could explain why.

Jack

It's almost correct except for the units, Jack. An electron has units of charge (well, energy really). Current has units of charge per unit time.

Quote

The D800e for instance averages out at an Effective Absolute Quantum Efficiency of just shy of 16% - requiring 6+ photons to hit the sensor before the energy necessary to release one electron is achieved.

Are we really sure that, for a given QE, there is some threshold value of photon count below which free electrons cannot be produced?

I would have thought the QE factor would be applicable to any number of photons, such that there is a probability that even one photon could produce an output. By this I mean that, over many successive tries (say 100 tries with a QE of 16%), one electron would be produced in 16% of those tries (at some confidence level, depending on the number of tries). Putting that another way: if 1 photon arrives at the D800e sensor, the probability of it producing an electron is 0.16 (16%).

I would be interested to know the difference between "Effective Absolute Quantum Efficiency" and "Absolute Quantum Efficiency".

It's almost correct except for the units, Jack. An electron has units of charge (well, energy really). Current has units of charge per unit time.

Ted, I'm not sure that the camera engineers look at it this way, but we can get the dimensions correct if we think of the electrons forming in the well over a period of time as a kind of current, although it's not flowing past a point. If we have a 100,000-electron FWC, and the well gets 62,400 electrons during a 1/100 second exposure, that's one picoamp.

We can get a kind of quantum efficiency if we divide the photographic current (a term I just made up), measured in electrons/sec, by the photon flux, measured in photons/sec. If the time denominator is the same in both, it's the same number as just dividing electrons by photons.
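
The picoamp figure above checks out as straightforward arithmetic (a quick sketch, using the elementary charge of 1.602e-19 C):

```python
# The worked example above as arithmetic: 62,400 electrons collected during a
# 1/100 s exposure, expressed as an average "photographic current"
E_CHARGE = 1.602e-19        # coulombs per electron (elementary charge)

electrons = 62_400
exposure_s = 1 / 100

current_a = electrons * E_CHARGE / exposure_s
print(current_a)            # just under 1e-12 A, i.e. about one picoamp
```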

Ted, I'm not sure that the camera engineers look at it this way, but we can get the dimensions correct if we think of the electrons forming in the well over a period of time as a kind of current, although it's not flowing past a point. If we have a 100,000-electron FWC, and the well gets 62,400 electrons during a 1/100 second exposure, that's one picoamp.

Jim

Hello Jim,

The literature poses the whole concept of using photodiodes in a camera sensor as based on charge collection, as opposed to their use in solar panels where current is the prime consideration - hence Jack's graph units of A/W where W is a radiometric measurement as opposed to photometric.

It is not correct to assign 1 pA to the well example above. Yes, it can be argued of course that 1 pA is the average current that would flow in order to charge the capacitance by some value, but the true capacitor charge is a simple count of electrons collected during the exposure period (ignoring leakage, etc.). The example value of 1 pA becomes invalid if, for example during an exposure of some seconds in low light, there were some flashes of lightning!

It is not correct to assign 1 pA to the well example above. Yes, it can be argued of course that 1 pA is the average current that would flow in order to charge the capacitance by some value, but the true capacitor charge is a simple count of electrons collected during the exposure period (ignoring leakage, etc.). The example value of 1 pA becomes invalid if, for example during an exposure of some seconds in low light, there were some flashes of lightning!

Ted,

Well, it was a thought. If current is not a useful concept in photography, why all the talk about "dark current"?

Well, it was a thought. If current is not a useful concept in photography, why all the talk about "dark current"?

Jim

Yes, dark current is camera-ese for what is more generally known as the leakage current of a reverse-biased diode, photo-sensitive or not. It is a function of the reverse voltage and the absolute temperature of the junction. In other words, "dark current" is not restricted to camera sensors; the big fat diodes in your car alternator have it too.

In many ways, the passage of electrons can be regarded as "current", with the complication that current has time^-1 in its units. The analogy in mechanical engineering is the difference between "work done" (lbf-ft, joules) and power (HP, watts).

In many ways, the passage of electrons can be regarded as "current", with the complication that current has time^-1 in its units. The analogy in mechanical engineering is the difference between "work done" (lbf-ft, joules) and power (HP, watts).

Hi Ted,

Jim did a good job of outlining the way I've seen it in the literature, even for camera sensors. The confusion arises when we mix radiometric quantities (flux, power, etc.) and photometric quantities (illuminance, exposure, etc.). The units are different, but they describe exactly the same physical processes. For instance, I find it useful to think of Exposure as a certain number of photons incident on an area during the exposure time. But to calculate that number one has to do a few backward somersaults with both sets of units (maybe worth a separate thread).

Illuminance (in lux) provides a certain number of photons per second which get converted into e-/second by the photodiode. So while a photographic sensor is exposed to a certain illuminance, a current of e-/s is indeed generated within the integrating photodiode, which holds the resulting total charge so that it can be read by the downstream circuitry. The responsivity diagrams of a solar vs a photographic photodiode look very similar. What changes are the slopes, which are related to the material used and charge collection efficiency.