In a previous post, I showed how to determine the read noise for the D800e using RawDigger and obtained an average read noise of about 1.05 ADU. To determine the read noise in e-, one needs to know the camera gain, which is expressed in e-/ADU. One way to determine the camera gain is to plot the variance of subtracted pairs of duplicate frames (the subtraction removes fixed-pattern noise, leaving shot noise and read noise) against the data number, as outlined by Christian Buil. The gain is equal to the reciprocal of the slope, and the read noise is equal to the square root of the intercept. I obtained data for the D800e as shown below. The data are for ISO 100.
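For readers who want to try this, here is a minimal sketch of the variance-vs-signal (photon transfer) method in Python. The gain, read noise, frame size, and signal levels below are invented simulation inputs, not the measurements from this post:

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 3.46        # e-/ADU, assumed "true" value for the simulation
true_read_noise = 4.1   # e-, assumed

def flat_pair(mean_adu, n=200_000):
    """Simulate a pair of identical flat frames at a given mean level (ADU)."""
    mean_e = mean_adu * true_gain
    frame = lambda: (rng.poisson(mean_e, n)
                     + rng.normal(0, true_read_noise, n)) / true_gain
    return frame(), frame()

means, variances = [], []
for level in [100, 200, 400, 800, 1600, 3200]:
    a, b = flat_pair(level)
    means.append((a.mean() + b.mean()) / 2)
    # Var(a - b) is twice the per-frame variance; in real data the
    # subtraction also removes fixed-pattern noise.
    variances.append((a - b).var() / 2)

slope, intercept = np.polyfit(means, variances, 1)
gain = 1 / slope                        # e-/ADU
read_noise_adu = np.sqrt(intercept)     # in ADU; multiply by gain for e-
print(f"gain {gain:.2f} e-/ADU, read noise {read_noise_adu * gain:.1f} e-")
```

The intercept is the noisiest part of the fit, which is why the thread's later advice about restricting the fitting range matters.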

The slope is 0.2887, so the gain is 1/0.2887 or 3.46 e-/ADU. The read noise is sqrt(17.026) or 4.13 e-. The previously determined read noise was on average 1.05 ADU, so the read noise in electrons is 1.05*3.46 = 3.64 e-. Since the sensor saturates at 15735 ADU, the full well would be 15735*3.46 or 54503 e-. The Sensorgen data for the D800 at base ISO give a read noise of 2.7 e- and a full well of 44972 e-. The engineering DR would be log2(54503/4.13) or 13.7 stops, in reasonable agreement with the DXO value of 13.24 stops.

As a check, Bill Claff's data for the D800e list the read noise at ISO 100 as 4.167 e-, in essential agreement with my value of 4.13. I conclude that the Sensorgen data for the D800 at base ISO are not accurate. They derive their values by reverse engineering the DXO data, and their methods have been questioned. Comments are welcome.

One way to determine the camera gain is to plot the variance of subtracted pairs of duplicate frames (the subtraction removes fixed-pattern noise, leaving shot noise and read noise) against the data number, as outlined by Christian Buil.

Hi Bill,

Yes, that's the common method of determining the actual situation for one's specific sensor array.

Quote

The gain is equal to the reciprocal of the slope ...

Correct, with the small precaution that the lower-exposure pairs will show slightly elevated noise levels due to the proportion of read noise added to the photon shot noise. Also, one should try to avoid any clipping. During exposure of the highest-exposure pair, watch out for clipping of the upper tail of the Poisson noise distribution (your data may have incurred some tail clipping), and when subtracting the lower-exposure pairs it can be necessary to add an offset so that results below zero are not truncated. When all these precautions are applied, a reasonably good linear regression fit should be possible.
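Bart's warning about truncation when subtracting frames can be demonstrated numerically. This is a toy example with arbitrary levels and noise, not real raw data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two "identical" frames: their difference should be zero-mean Gaussian noise.
a = rng.normal(1000, 10, 100_000)
b = rng.normal(1000, 10, 100_000)

# Unsigned raw arithmetic clips negative differences to zero,
# biasing the measured standard deviation downward.
clipped = np.clip(a - b, 0, None)
# Adding an offset before subtracting preserves the negative tail.
with_offset = (a + 2000) - b

print(f"clipped SD {clipped.std():.1f}, with offset {with_offset.std():.1f}")
```

The true SD of the difference is sqrt(2) x 10, about 14.1; the clipped version underestimates it substantially.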

Quote

and the read noise is equal to the square root of the intercept.

I prefer to use a pair of 1/8000 second exposures (least possible amount of thermal noise (dark count) accumulation), with the bodycap on (instead of a lens with potential electronic interference and light leakage), and the viewfinder blocked (to avoid stray light from entering the mirror box). That will produce a slightly more accurate read noise only value than the intercept of a linear regression which may be influenced by some of the other noise averages. Some sensors have light shielded areas which can be used as no exposure areas, but I would even then try to use the shortest possible exposure time to reduce any thermal accumulation effects.
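Bart's bias-pair approach boils down to dividing the SD of the difference of two dark frames by sqrt(2). A sketch with simulated frames (the 2.0 ADU read noise and the fixed-pattern level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
read_noise = 2.0                        # ADU, assumed true value
fixed_pattern = rng.normal(0, 1.5, n)   # shared by both frames

dark1 = fixed_pattern + rng.normal(0, read_noise, n)
dark2 = fixed_pattern + rng.normal(0, read_noise, n)

# The fixed pattern cancels in the difference; what remains is
# two frames' worth of read noise.
rn_est = (dark1 - dark2).std() / np.sqrt(2)
print(f"estimated read noise: {rn_est:.2f} ADU")
```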

Quote

I obtained data for the D800e as shown below. The data are for ISO 100.

The slope is 0.2887, so the gain is 1/0.2887 or 3.46 e-/ADU. The read noise is sqrt(17.026) or 4.13 e-. The previously determined read noise was on average 1.05 ADU, so the read noise in electrons is 1.05*3.46 = 3.64 e-. Since the sensor saturates at 15735 ADU, the full well would be 15735*3.46 or 54503 e-.

That's an amazing capacity for such small sensel areas (~2289 photons per square micron), great technology. Together with the low noise floor that really helps the DR.

Quote

The Sensorgen data for the D800 at base ISO give a read noise of 2.7 e- and a full well of 44972 e-.

They're both lower than your observed values, maybe due to their DxO derived nature.

Quote

The engineering DR would be log2(54503/4.13) or 13.7 stops, in reasonable agreement with the DXO value of 13.24 stops.

My tests also show that the DxO conclusions (for 'screen' DR) are in general close to my own observations (like the determination I made for my 1Ds3 in a similar fashion for gain and read noise, as you did).

Quote

As a check, Bill Claff's data for the D800e list the read noise at ISO 100 as 4.167 e-, in essential agreement with my value of 4.13. I conclude that the Sensorgen data for the D800 at base ISO are not accurate. They derive their values by reverse engineering the DXO data, and their methods have been questioned. Comments are welcome.

In general, I prefer well executed empirical data experiments over derived/converted data adopted from other sources.

I prefer to use a pair of 1/8000 second exposures (least possible amount of thermal noise (dark count) accumulation), with the bodycap on (instead of a lens with potential electronic interference and light leakage), and the viewfinder blocked (to avoid stray light from entering the mirror box). That will produce a slightly more accurate read noise only value than the intercept of a linear regression which may be influenced by some of the other noise averages. Some sensors have light shielded areas which can be used as no exposure areas, but I would even then try to use the shortest possible exposure time to reduce any thermal accumulation effects.

Bart, thanks for the useful information and hints. Unfortunately, the lens-cap method does not work with Nikon cameras, since they add no offset to the data and the read noise is clipped. The histogram of such a frame is shown below, and it appears approximately half-Gaussian:

One can plot the data number vs the exposure time and attempt to extrapolate to zero exposure. One such attempt is shown below. The lowest exposure above zero is the point at which the read noise begins to clip as indicated by minimum values of zero in the data. The extrapolated value is about 5 16-bit ADUs or 1.25 14-bit ADUs.
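The extrapolation described above is a straight-line fit of mean DN against exposure time, read at t = 0. The times and DN values below are fabricated so that the intercept lands near the ~5 ADU (16-bit) figure mentioned; they are not the actual data:

```python
import numpy as np

times = np.array([0.5, 1.0, 2.0, 4.0])            # seconds (hypothetical)
mean_dn = np.array([105.0, 205.0, 405.0, 805.0])  # 16-bit ADU (hypothetical)

# Fit mean level vs time; the intercept estimates the zero-exposure level.
slope, intercept = np.polyfit(times, mean_dn, 1)
print(f"zero-exposure level: {intercept:.1f} 16-bit ADU "
      f"= {intercept / 4:.2f} 14-bit ADU")
```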

That's an amazing capacity for such small sensel areas (~2289 photons per square micron), great technology. Together with the low noise floor that really helps the DR.

Yes, that is rather amazing, since the larger pixels of the D3 have a full well of 65,568 e- as determined by Peter Facey. This is 929 e- per square micron using the nominal pixel size, but the actual sensing area would be less than 8.4 x 8.4 microns (the pixel pitch of the camera).

Note: this portion of the post was revised at 5:32 pm CST on 2/21/2013.

As a further check, I plotted the number of electrons as a function of 14 bit data number with the following results. The number of electrons was calculated by squaring the signal to noise ratio, which yields the number of electrons directly without having to calculate a gain. The shot noise, which was obtained by subtracting out the read noise in quadrature, was used for the noise. Applying the regression equation gives a value of 53248 e- at the saturation point of 15785 ADU. From the number of electrons and the data number, one can calculate the gain, which seems to vary with the signal.
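The SNR-squared calculation reduces to a couple of lines. The signal and noise values here are hypothetical, chosen only to show the mechanics, not taken from the measurements:

```python
import math

signal_adu = 1000.0      # mean signal (hypothetical)
total_noise_adu = 17.3   # measured SD (hypothetical)
read_noise_adu = 1.2     # previously measured read noise

# Remove read noise in quadrature to isolate the shot noise.
shot_noise = math.sqrt(total_noise_adu**2 - read_noise_adu**2)
# Squaring the SNR gives the electron count directly.
electrons = (signal_adu / shot_noise) ** 2
gain = electrons / signal_adu  # e-/ADU falls out for free
print(f"{electrons:.0f} e-, implied gain {gain:.2f} e-/ADU")
```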

As a further check, I plotted the number of electrons as a function of 14-bit data number with the following results. The number of electrons was calculated by squaring the shot noise, which was obtained by subtracting out the read noise in quadrature. Applying the regression equation gives a value of 53248 e- at the saturation point of 15785 ADU.

Bill

Bill, thanks for your work.

At which ISO did you make your calculations?

At low ISOs (high pixel saturation), Pixel Response Non-Uniformity strongly affects the resulting SNR. It's not enough to subtract read noise to solve for electrons; you have to subtract PRNU as well.

Usually PRNU is a bit less than 1% (typical values are 0.3%-0.7%), so for 50,000 electrons and 0.5% PRNU there is a 250 e- stdev to add (in quadrature) to the photon shot noise of sqrt(50000) = 223 e- ...
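Ilias's numbers check out; in quadrature the two contributions combine like this:

```python
import math

n_e = 50_000                 # electrons collected
shot = math.sqrt(n_e)        # photon shot noise, ~223.6 e-
prnu = 0.005 * n_e           # 0.5% PRNU -> 250 e-

# Independent noise sources add in quadrature.
combined = math.sqrt(shot**2 + prnu**2)
print(f"shot {shot:.0f} e-, PRNU {prnu:.0f} e-, combined {combined:.0f} e-")
```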

At low ISOs (high pixel saturation), Pixel Response Non-Uniformity strongly affects the resulting SNR. It's not enough to subtract read noise to solve for electrons; you have to subtract PRNU as well.

Usually PRNU is a bit less than 1% (typical values are 0.3%-0.7%), so for 50,000 electrons and 0.5% PRNU there is a 250 e- stdev to add (in quadrature) to the photon shot noise of sqrt(50000) = 223 e- ...

Ilias,

You are welcome. The calculations were for the base ISO of 100. PRNU (pixel response non-uniformity) was eliminated by subtracting two identical frames taken with the same exposure and measuring the standard deviation of the result. That SD is for two frames, and the SD for an individual frame is obtained by dividing it by sqrt(2). This method is outlined by Christian Buil.

As an example, one pair of frames gave pixel values of 52347.8 and 52274.1 with SDs of 302.7 and 300.1 respectively, giving an average of 301.4 ADUs. An offset of 2000 was added to one image to prevent negative values with the subtraction, and the SD of the subtracted images was 341.95 ADU for two images and 241.8 ADU for a single image. The 241.8 ADU represents shot noise and read noise, PRNU having been eliminated by the subtraction. One could obtain the shot noise by subtracting the read noise of 1.2 in quadrature.

The total noise was 301.4 ADU (the average of the two SDs), and its components are shot noise, read noise, dark-frame (thermal) noise, and PRNU. Thermal noise at the exposure time used here is negligible. The PRNU can be obtained by subtracting 241.8 in quadrature, giving a result of 179.9 ADU. The percent PRNU is 179.9/52311 or 0.34%. This is a very good value.
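The arithmetic in this worked example can be reproduced directly from the quoted figures:

```python
import math

mean1, mean2 = 52347.8, 52274.1    # pixel values of the pair (from the post)
sd1, sd2 = 302.7, 300.1            # per-frame SDs (from the post)

sd_total = (sd1 + sd2) / 2         # ~301.4 ADU total noise
sd_diff = 341.95                   # SD of the subtracted pair (from the post)
sd_single = sd_diff / math.sqrt(2) # shot + read noise for a single frame

# PRNU is what remains after removing shot+read noise in quadrature.
prnu = math.sqrt(sd_total**2 - sd_single**2)
prnu_pct = 100 * prnu / ((mean1 + mean2) / 2)
print(f"single-frame noise {sd_single:.1f} ADU, "
      f"PRNU {prnu:.1f} ADU ({prnu_pct:.2f}%)")
```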

Bart, Thanks for useful information and hints. Unfortunately, the lens cap method does not work with Nikon cameras since they add no offset to the data and the read noise is clipped.

Yes, but you can use the light-shielded (masked) sensels at the edge of the sensor array; apparently they do still have an offset that keeps the lower tail of the read-noise distribution from clipping. That would also give the best results when a pair of Raws with the shortest possible exposure time is used.

Comparing that versus the read noise of longer-exposure pairs will also allow one to judge how fast the dark current adds thermal noise as exposure times approach or exceed 1 second. In my experience with other cameras it's a minor influence at exposure times shorter than 1/8 second, but it is something that e.g. astrophotographers do run into. They not only subtract dark frames matched to the actual exposure times, but also black frames (read noise only) taken at the shortest possible exposure time (which is problematic for Nikon Raws).

Quote

As a further check, I plotted the number of electrons as a function of 14 bit data number with the following results. The number of electrons was calculated by squaring the shot noise, which was obtained by subtracting out the read noise in quadrature. Applying the regression equation gives a value of 53248 e- at the saturation point of 15785 ADU.

You are welcome. The calculations were for the base ISO of 100. PRNU (pixel response non-uniformity) was eliminated by subtracting two identical frames taken with the same exposure and measuring the standard deviation of the result. That SD is for two frames, and the SD for an individual frame is obtained by dividing it by sqrt(2). This method is outlined by Christian Buil.

As an example, one pair of frames gave pixel values of 52347.8 and 52274.1 with SDs of 302.7 and 300.1 respectively, giving an average of 301.4 ADUs. An offset of 2000 was added to one image to prevent negative values with the subtraction, and the SD of the subtracted images was 341.95 ADU for two images and 241.8 ADU for a single image. The 241.8 ADU represents shot noise and read noise, PRNU having been eliminated by the subtraction. One could obtain the shot noise by subtracting the read noise of 1.2 in quadrature.

The total noise was 301.4 ADU (the average of the two SDs), and its components are shot noise, read noise, dark-frame (thermal) noise, and PRNU. Thermal noise at the exposure time used here is negligible. The PRNU can be obtained by subtracting 241.8 in quadrature, giving a result of 179.9 ADU. The percent PRNU is 179.9/52311 or 0.34%. This is a very good value.

Regards,

Bill

Thanks for answering in detail. I have to apologize, because you had already explained it in your first post but I had not read it properly, and I wondered how on earth the result looked OK!

Are 52347 & 52274 in 16-bit? If so, you measured at about 1/3 below saturation. I am curious: did you have any indication of nonlinear data (or data perhaps linearized after the sensel read), as is suspected by Iliah Borg? Can you upload raw histograms or the raw files?

Thanks for answering in detail. I have to apologize, because you had already explained it in your first post but I had not read it properly, and I wondered how on earth the result looked OK!

Are 52347 & 52274 in 16-bit? If so, you measured at about 1/3 below saturation. I am curious: did you have any indication of nonlinear data (or data perhaps linearized after the sensel read), as is suspected by Iliah Borg? Can you upload raw histograms or the raw files?

Yes, those numbers are in 16-bit notation, which is used by ImagesPlus. To get the 14-bit value, one divides by 4. I have only the raw data as written to the file and have no idea how to determine whether the data were linearized before being written to the card. How does Iliah check for this?

The numbers are quite linear up to clipping as shown in this graph:

The raw data are shown here in tabular form. Exposures above 1 sec start to show clipping, as indicated by the reduced standard deviations, and were excluded from analysis, so the highest nonclipped raw data number is around 52300 in 16-bit notation, or 13075 in 14-bit notation. I don't know which histograms and raw files you are interested in.

Here are histograms at the upper range where clipping begins:

You raise some interesting points and I could make some of the raw files available to you via YouSendit. Since they are quite large, one has to be a bit selective. Let me know.

Yes, but you can use the light-shielded (masked) sensels at the edge of the sensor array; apparently they do still have an offset that keeps the lower tail of the read-noise distribution from clipping. That would also give the best results when a pair of Raws with the shortest possible exposure time is used.

Bart,

Yes, the masked pixels do have an offset applied and the read noise is not clipped. From the RawDigger histograms, the offset is about 600. Here is a RawDigger histogram of the masked pixels in an exposure with the lens cap on:

Comparing that versus the read noise of longer-exposure pairs will also allow one to judge how fast the dark current adds thermal noise as exposure times approach or exceed 1 second. In my experience with other cameras it's a minor influence at exposure times shorter than 1/8 second, but it is something that e.g. astrophotographers do run into. They not only subtract dark frames matched to the actual exposure times, but also black frames (read noise only) taken at the shortest possible exposure time (which is problematic for Nikon Raws).

Yes, I tried some dark frames at 30 sec, 1 min, and 5 min, and the results were nonsense. Dark-frame subtraction was off (long-exposure NR in Nikon parlance), but Nikon uses HPS (hot pixel suppression) for exposures of 1/4 sec and longer with the D800, and this cannot be turned off by the user, as discussed in this thread on DPReview. Marianne Oelund did an interesting analysis.

The Sensorgen data for the D800 at base ISO give a read noise of 2.7 e- and a full well of 44972 e-. The engineering DR would be log2(54503/4.13) or 13.7 stops, in reasonable agreement with the DXO value of 13.24 stops.

As a check, Bill Claff's data for the D800e lists the read noise at ISO 100 as 4.167, in essential agreement with my value of 4.13.

Hey Bill,

I agree that Sensorgen's RN data often look funny at base ISO. FWIW, I derive data graphically from DxO's full SNR curves. I have looked at the D800, for which I get an RN of 4.23 e-, FWC of 47587 e-, and gain of 3.02 e-/ADU (using your white point DN) at camera ISO 100. Bill Claff gets 4.18 e- and a gain of 2.98, close enough I think. Your gain looks different because the 'e' performs slightly differently.

Cheers, Jack

PS: I notice that your variance vs DN graph is a bit stilted. I have found it useful to exclude the bottom and the top of the curve from the fitting algorithm in order to get better values. For Nikon cameras at ISO 100 the best 'shot noise only' range seems to be somewhere around 0.5%-5% of full scale, with 1-2% the sweet spot.

the gain is 1/0.2887 or 3.46 e-/ADU. The read noise is sqrt(17.026) or 4.13 e-. The previously determined read noise was on average 1.05 ADU, so the read noise in electrons is 1.05*3.46 = 3.64 e-. Since the sensor saturates at 15735 ADU, the full well would be 15735*3.46 or 54503 e-.

Again FWIW I quickly looked at the full SNR DxO data at base ISO for the D800e.

I agree that Sensorgen's RN data often look funny at base ISO. FWIW, I derive data graphically from DxO's full SNR curves. I have looked at the D800, for which I get an RN of 4.23 e-, FWC of 47587 e-, and gain of 3.02 e-/ADU (using your white point DN) at camera ISO 100. Bill Claff gets 4.18 e- and a gain of 2.98, close enough I think. Your gain looks different because the 'e' performs slightly differently.

Jack,

Where do you get Bill Claff's read noise of 4.18 e- and gain of 2.98 e-/ADU?

In his read-noise-by-data-number chart he reports that the read noise for the D800e at ISO 100 is 1.261 14-bit ADUs, and in his read-noise-in-electrons chart he lists the read noise at base ISO as 4.167 e-. This implies a gain of 4.167/1.261 = 3.30 e-/DN.

PS: I notice that your variance vs DN graph is a bit stilted. I have found it useful to exclude the bottom and the top of the curve from the fitting algorithm in order to get better values. For Nikon cameras at ISO 100 the best 'shot noise only' range seems to be somewhere around 0.5%-5% of full scale, with 1-2% the sweet spot.

Thanks for the tip. I did note that in my data but didn't know what to make of it. For shot-only noise, why do you use such low values? 1-2% would result in ADU values of only 158-316. In this range, read noise begins to contribute significantly to the total noise, and it would seem advisable to have a higher value to anchor the regression line.

Note that I revised a portion of my previous post regarding the direct calculation of the number of electrons as a function of the data number. Each calculation is independent and does not rely on calculating a gain.

For shot-only noise, why do you use such low values? 1-2% would result in ADU values of only 158-316. In this range, read noise begins to contribute significantly to the total noise, and it would seem advisable to have a higher value to anchor the regression line.

You can use those values directly, but I prefer to draw a tangent at the appropriate slope (30db/2^10x) and read the full-scale SNR to determine FWC - it's less arbitrary. The 0.5%-5% range is typically the portion of the curve where the tangent sits best at base ISO for Nikon DSLRs. See below for an example based on the D5200:

Bill, those are D800 figures. The 'e' figures were in the post that followed.

You can use those values directly, but I prefer to draw a tangent at the appropriate slope (30db/2^10x) and read the full-scale SNR to determine FWC - it's less arbitrary. The 0.5%-5% range is typically the portion of the curve where the tangent sits best at base ISO for Nikon DSLRs. See below for an example based on the D5200:

Jack

Jack,

Now I see your method. I was calculating the shot noise by eliminating PRNU with duplicate images and subtracting the read noise in quadrature. The adjustment for read noise is significant only at low signal levels. Your method does seem to work amazingly well, but it is nice to verify it with primary data.

Now I see your method. I was calculating the shot noise by eliminating PRNU with duplicate images and subtracting the read noise in quadrature. The adjustment for read noise is significant only at low signal levels. Your method does seem to work amazingly well, but it is nice to verify it with primary data.

Right. I'd be interested in what gain and read noise you would get if you fit the curve to your original data in the 200-1000DN range only.

Right. I'd be interested in what gain and read noise you would get if you fit the curve to your original data in the 200-1000DN range only.

Jack

Jack,

The graph for the DN in that range does not work well, since the intercept for the read noise is way off. I think that is due to insufficient anchoring at the high end.

IMHO, perhaps the best method using my approach would be similar to that of Peter Facey, who calculated the number of electrons by squaring the signal-to-noise ratio, the latter being calculated after subtracting the read noise in quadrature from the measured noise. This gives the number of electrons directly, without having to plot the data and calculate a slope. This is also the method used by Roger Clark.

Here are my results. For gain, I would use the figures in the mid range of the exposures. What do you think?

Since the forum software limits the scale of the chart, I have also attached it to see what happens.

IMHO, perhaps the best method using my approach would be similar to that of Peter Facey, who calculated the number of electrons by squaring the signal-to-noise ratio, the latter being calculated after subtracting the read noise in quadrature from the measured noise. This gives the number of electrons directly, without having to plot the data and calculate a slope. This is also the method used by Roger Clark.

Here are my results. For gain, I would use the figures in the mid range of the exposures. What do you think?

I think those are very neat results. Note how close the SD and SDfor1 values are in the sweet spot around 1-2% of full scale: that tells us that the noise there is entirely random, with no PRNU. And RN is still proportionately small, so we can assume that virtually all noise is shot noise. Extrapolating from, say, 14-bit DN 211: FWC = (211/8.1)^2*15785/211 = 50764 e-. Trying a little higher at 14-bit DN 334: FWC = (334/10.25)^2*15785/334 = 50181 e-. Going all the way up to DN 13087 gives a FWC of 56550 e-.
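Jack's extrapolation can be expressed as a tiny function. The full scale of 15785 DN is taken from the thread; the logic (electrons via SNR squared, then linear scaling to saturation) is my reading of his arithmetic:

```python
def fwc_from_point(dn, noise_dn, full_scale=15785):
    """Project full-well capacity from one (signal, noise) measurement."""
    electrons = (dn / noise_dn) ** 2    # N_e = SNR^2 in the shot-noise regime
    return electrons * full_scale / dn  # scale linearly up to saturation

print(round(fwc_from_point(211, 8.1)))    # ~50764 e-, matching the post
print(round(fwc_from_point(334, 10.25)))  # ~50181 e-, matching the post
```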

From each data pair far away from read noise you can also easily calculate PRNU: For instance at DN 13087, PRNU = sqrt(302.7^2 - 241.8^2) / 52345 = 0.35% - not bad at all, and better than the regular D800.

As for the curve you kindly produced for me, it should show virtually no read noise because it was chosen in an interval where RN supposedly had no effect. So I am glad it shows a negligible read noise of 0.6e-. As for the gain of 3.22...

If I were to look at your data without knowing what it is, I would be tempted to treat anything above 14-bit DN 5300 as somehow influenced by other variables. I would then (perhaps incorrectly) guess the correct gain to be around 3.22 for DNs below that level. I am not sure why it (and the FWC) is so different from the values above that level.