[...] The answer is no, you cannot reduce the noise through use of a lower in-camera ISO. The noise at fixed exposure will at best be about the same. The advantage of a lower ISO, if present, is not in the arena of noise but rather in added highlight headroom.

Hi Emil,

I beg to differ (slightly), based on the following experiment (don't we love empirical evidence...) with my 1Ds3.

I did a test by determining the standard deviation (luminosity) of the 6 grayscale patches of a Macbeth Mini ColorChecker: Raw converting in Capture One, and using Photoshop for the statistical readout of exactly the same cropped pixels between conversions. I made sure that the different exposures gave almost exactly the same RGB output levels for the corresponding patches after Raw conversion. The push was performed in postprocessing only, and to match the RGB values I didn't rely on the exposure slider alone but also used a little WB where needed (to keep the luminosity standard deviation stable). All noise reduction settings were set to zero. Caveat: we do not know if the ISO setting influences the Raw conversion in other ways, but Capture One seems pretty well behaved.

Noise standard deviation of a 50x49 pixel area of each patch.

The patches named 1 through 6 in my table are formally called patches 19 (white) through 24 (black) in CC speak.

As can be seen, the ISO 400 ('unity') gain has lower noise than the ISO 800 group, and the ISO 800 group has lower noise than the ISO 1600 group. Each group has a 'correct' exposure at the main ISO setting, and the lower ISOs in that group were underexposed by one or two EV, effectively resulting in the same number of photons in each exposure for that group.

Within the ISO 800 group, there is little difference between the normal exposure and the 1 EV underexposed plus 1 EV pushed (in Raw conversion) setting, but the 'pushed' settings will have more highlight clipping latitude (a stop headroom).

Within the ISO 1600 group there is also little difference, although the ISO 800 pushed 1 stop is slightly better than the rest, and again it has 1 stop overexposure headroom. Even ISO 400 pushed 2 stops is a bit better than the ISO 1600 gain setting. The differences within each group are actually very difficult to see, but they are measurable.

IMHO the 1Ds3 doesn't perform very well above ISO 1600 (for my type of use), so I didn't test higher ISO settings, but I expect e.g. the 5D2 to show similar performance at ISO 3200.

It all boils down to the amplification of read noise by increasing the gain for these camera models. When there are fewer and fewer photons in the actual exposure at higher and higher ISO gain settings, the amplified read noise becomes a larger contribution in the mix of noise sources. So for my specific camera model, I do not set the ISO above 800 if I can instead underexpose by one or two stops and push in Raw conversion.
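The reasoning above can be sketched numerically. A minimal Python model, with purely hypothetical input-referred read-noise figures (in electrons) for a camera whose amplified read noise grows with ISO gain:

```python
import math

def snr(signal_e, read_noise_e):
    """SNR with shot noise (sqrt of signal) and read noise added in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# Hypothetical input-referred read noise (electrons) per ISO setting;
# the real values depend entirely on the camera model.
read_noise = {400: 5.0, 800: 7.0, 1600: 12.0}

signal = 400  # same photon count in every exposure (same shutter and f-stop)

for iso, rn in read_noise.items():
    print(f"ISO {iso}: SNR = {snr(signal, rn):.1f}")
```

With the photon count fixed, the only difference between the settings is the read-noise term, so a lower ISO pushed in Raw conversion comes out slightly ahead whenever its input-referred read noise is lower.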

What happens if you measure the raw data directly, rather than filtering it through the converter (e.g. by sending it through dcraw -T -4 -D -v)? I am a bit suspicious that either (a) the number of photons is not the same between shots; or, more likely, (b) the converter is doing something without telling you. The first three squares (1-3 in your counting, 19-21 on the CC chart) should be entirely dominated by photon noise, and yet your results show consistent differences (I think we need to regard 400+2EV, square number two in your chart, as an anomaly).

What happens if you measure the raw data directly, rather than filtering it through the converter (eg by sending it through dcraw -T -4 -D -v)?

It's an older experiment, so I'll have to search for the files when I have some spare time. I'm not too fond of doing noise analysis on single frames, though; I prefer to really cancel camera-specific influences by subtracting 2 frames, which I didn't take for this complete workflow test.

Quote

I am a bit suspicious that either (a) the number of photons is not the same between shots; or more likely (b) the converter is doing something without telling you.

If anything is happening that's not already in the Raw data, it would be the Raw converter (which would be a bit of a surprise with Capture One). I made exposures of the CC by bracketing in 1/3rd stops and matching the closest exposures, in order to reduce the potential effect of shutter inaccuracies, which would directly influence the photon shot noise. So the exposures are accurate to within 1/3rd (probably even 1/6th) of a stop. The Raw converter (using a linear tonecurve) was in this case deliberately included in the equation, because I was testing a workflow solution for someone using Capture One when faced with poor lighting conditions (stage performance with dance movement). No matter how nice the sensor statistics, it's the final result after demosaicing that counts in practice. Final noise reduction was not part of the workflow test (although an option afterwards), and thus set to zero in the Raw converter.

Quote

the first three squares (1-3 in your counting, 19-21 on the CC chart) should be entirely dominated by photon noise, and yet your results show consistent differences (I think we need to regard 400+2EV, square number two in your chart, as an anomaly).

Outliers, however unlikely, are still possible in a probability distribution, apparently. I have no explanation for it.

The experiment is easy to repeat though (for someone with enough time to do it, I'm a bit swamped now), even for those without Raw analysis tools. So I invite others to try it as well.

The Camera to Print & Screen videos demonstrate the benefits of ETTR very clearly. I have also read Michael's earlier articles on this and I understand the reason why this works. The videos also point out the obvious fact that lower ISO settings also improve S/N ratios. Now, by exposing to the right, I am also using a slower shutter speed. If instead of exposing to the right I used the same slower shutter speed but at a lower ISO setting, I would also benefit from lower noise. My question is this: which result would give me the better image - the ETTR image at the higher speed, or the "middle" exposure at a lower ISO, using the same shutter speed and f-stop?

There have been many very good and very technical responses. Here's one that's a bit less technical.

ETTR is only relevant when your shutter speed does not need to be short. The purpose of ETTR is to gather as much light as possible, and thus have the shutter open as long as possible without saturating the sensor. Base ISO allows the most photons to be gathered before clipping, so that is the setting that should be used.

However, you often end up with too long a shutter speed if doing ETTR at base ISO. The workflow is then to use as long a shutter speed as the scene situation allows, and then use as low an ISO as possible to get a good "middle" exposure (an exposure that gives properly exposed in-camera JPEGs). The logic here is that there is not much difference between ISO 800 ETTR and ISO 400 in the middle when the same shutter speed and f-stop are used, since you're capturing the same number of photons.

A perfect sensor would actually not need an ISO setting at all; it would just deliver the exact number of photons captured at each sensel. Today's sensors, however, need analog amplification of the sensel signals: the higher the ISO, the higher the amplification (and the more noise). How this behaves at high ISO differs a bit between cameras and sensors, so there is no safe answer whether ISO 800 ETTR or ISO 400 in the middle will give better image quality; you would have to test for your camera. The difference is likely small, though, so I think the rule of not using shorter shutter speeds than necessary, combined with a low ISO setting but not necessarily pushed all the way to ETTR, is a good one. If you are the optimizing kind of guy, you may want to do your own tests with your equipment to find out the best strategy.
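A rough numerical illustration of why the difference tends to be small (the photon count and read-noise values below are hypothetical, not measured from any specific camera):

```python
import math

# Same shutter speed and f-stop => same photon count, regardless of ISO.
photons = 2000.0  # hypothetical photoelectrons in a midtone patch

# Hypothetical input-referred read noise (electrons) at the two settings.
rn_iso400, rn_iso800 = 6.0, 5.0

snr400 = photons / math.sqrt(photons + rn_iso400 ** 2)
snr800 = photons / math.sqrt(photons + rn_iso800 ** 2)

# Shot noise here is ~45 e-, dwarfing either read noise, so the two
# settings differ by only a fraction of a percent.
print(f"{snr400:.1f} vs {snr800:.1f}")
```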

The experiment is easy to repeat though (for someone with enough time to do it, I'm a bit swamped now), even for those without Raw analysis tools. So I invite others to try it as well.

Cheers, Bart

I have recently become interested in this problem and, although I am certainly not in the same league as you and Dr. Martin, I hope you don't mind if I accept your invitation and join the game.

I started by making a few preliminary analyses on a single exposure using different methods. The results are somewhat puzzling to me and I hope you can give me some suggestions. The exposure was of a daylight lit gray card filling the field of a 105mm lens on a D700. Exposure was F/8 @ 1/1000, which was about 2 stops below the metered value.

1. Raw file opened in ACR using default values and then opened in PSCS5. The histogram shows a mean value of 85.7 with a S.D. = 1.6

2. The raw file was converted with dcraw (-v -d -r 1 1 1 1 -T -4). When the resulting tiff file was opened in PSCS5 the histogram shows a mean = 12.36 and a S.D. = 2.9

3. Using IRIS software, the raw file was converted to a .pic version of a CFA file. The color components were separated with the command CFA2RGB and the following data were extracted. Looking only at the green channel: mean = 959; S.D. = 46.4

Allowing for the fact that the scale of ADU in PS is 0-255 and that in the IRIS files and Rawnalyze it is 0-16383, I am still confused by the large differences in magnitude between the statistics read in PSCS5 and the other programs. I am heartened by the reasonably good agreement between the IRIS and Rawnalyze statistics. I am not surprised by the larger SD read from the IRIS green channel alone when compared with the SD computed from the difference between 2 windows. I do wonder how Rawnalyze arrived at a similar SD.

So my questions to the experts: Do my methods make any sense? Are my results in the proper ballpark? Any suggestions appreciated.

I started by making a few preliminary analyses on a single exposure using different methods. The results are somewhat puzzling to me and I hope you can give me some suggestions. The exposure was of a daylight lit gray card filling the field of a 105mm lens on a D700. Exposure was F/8 @ 1/1000, which was about 2 stops below the metered value.

1. Raw file opened in ACR using default values and then opened in PSCS5. The histogram shows a mean value of 85.7 with a S.D. = 1.6

ACR conversion opened in PS will have gamma applied, as well as a tone curve etc., unless you have zeroed out the controls in ACR. With an sRGB gamma of 2.4 that brings 985/16383 up to about 79/255, close to what you are observing. The tone curve could account for the rest. Similarly, SNR has been increased by a factor of about 2.5 relative to the IRIS/Rawnalyze values, presumably also because gamma raises the value of S and compresses the N.
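The gamma arithmetic above can be checked directly, using a pure 2.4 power curve as a simplification of the actual piecewise sRGB transfer function:

```python
# Map the linear raw mean through a pure 2.4 gamma to the 8-bit scale.
raw_mean, raw_max = 985, 16383
out_8bit = (raw_mean / raw_max) ** (1 / 2.4) * 255
print(round(out_8bit))  # ~79, matching the value observed in PS

# For small noise, out = in**(1/g) gives d(out)/out = (1/g) * d(in)/in,
# so a gamma of g boosts the measured SNR by roughly the factor g (~2.4x),
# consistent with the ~2.5x increase noted above.
```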

Quote

2. The raw file was converted with dcraw (-v -d -r 1 1 1 1 -T -4). When the resulting tiff file was opened in PSCS5 the histogram shows a mean = 12.36 and a S.D. = 2.9

dcraw -d will give the raw data; without separating the color planes the std dev is not meaningful.

Quote

3. Using IRIS software, the raw file was converted to a .pic version of a CFA file. The color components were separated with the command CFA2RGB and the following data were extracted. Looking only at the green channel: mean = 959; S.D. = 46.4

If you look at the entire color plane then there is a chance that vignetting affects the result; best to choose reasonable size patches, say 100x100 or so.

ACR conversion opened in PS will have gamma applied, as well as a tone curve etc., unless you have zeroed out the controls in ACR. With an sRGB gamma of 2.4 that brings 985/16383 up to about 79/255, close to what you are observing. The tone curve could account for the rest. Similarly, SNR has been increased by a factor of about 2.5 relative to the IRIS/Rawnalyze values, presumably also because gamma raises the value of S and compresses the N.

Yes, I had a hunch that gamma conversion was responsible for most of the problem.

Quote

You can have IRIS separate the channels, then subtract one green channel from the other. Many effects such as vignetting and other sources of signal variation such as uneven lighting cancel out.

I recall reading one of your posts, some time ago, that you did just that. Can you tell me where to find that message? I was unable to find the proper instruction in IRIS to obtain the two green channels as separate files. Can you enlighten me?

Many thanks for your comments. I shall continue my experiments and bore you all with the results.

I recall reading one of your posts, some time ago, that you did just that. Can you tell me where to find that message? I was unable to find the proper instruction in IRIS to obtain the two green channels as separate files. Can you enlighten me?

The IRIS command

split_cfa c1 c2 c3 c4

will split the raw data array into four separate color channels, assuming a Bayer type pattern. You can find them as c1.fit etc in whatever directory is the default for IRIS (which you can find by pulling up the preferences dialog). You can load each color plane in turn via

load c1

and so on. Find the two channels that are closest in value; these will be the two green planes. Suppose it is c1 and c3. Load c1 and then execute the command

sub c3 1000

(here 1000 is a fixed amount that is added so that the average is not zero; make it whatever you want). IRIS can then compute average and std dev of a patch -- just drag the mouse with left click to select a window, and right click to select 'Statistics' to get the mean and std dev for the selection.


Thanks. Worked like a charm. Results: S.D. = 22.7, corrected × 0.7 = 16.2. This agrees well with the result of subtracting two windows from the green channel as described above.

I am still puzzled that the results from Rawnalyze come so close without the benefit of subtracting two images.

dcraw -d will give the raw data; without separating the color planes the std dev is not meaningful.

That's the reason I don't use DCRaw for this type of analysis. Besides, I wonder if the -d is perhaps better replaced by -D, which does no scaling at all. Likewise, it might be useful to add the -k 0 parameter as a safeguard against losing the blackpoint offset (maybe it defaults to zero, but it wouldn't hurt to make sure).

Quote

If you look at the entire color plane then there is a chance that vignetting affects the result; best to choose reasonable size patches, say 100x100 or so.

It indeed helps to take a smallish area from the center of the image and use an aperture of f/5.6 or f/8 as that minimizes the vignetting influence. To avoid image detail from interfering, a slight defocus can be used when shooting test images. I also try to select an area that has no hot sensels, or dust bunnies (which tend to pile up in corners if the sensor is not cleaned to perfection).

Find the two channels that are closest in value; these will be the two green planes. Suppose it is c1 and c3. Load c1 and then execute the command

sub c3 1000

(here 1000 is a fixed amount that is added so that the average is not zero; make it whatever you want).

I use a minimum amount of 1024 to accommodate the blackpoint offset of Canon cameras when doing a Blackframe (read noise) analysis. When doing S/N analysis at higher ISOs I use 4000. In any case, I follow the subtraction command with a stat command and check for minimum>0 and maximum<clipping level, just to make sure that the sigma/standard deviation as reported is not based on clipped noise.

My workflow is based on pre-cropped image segments, so the stat command is adequate for obtaining the standard deviation at the same time as doing the boundary checking.

2. The raw file was converted with dcraw (-v -d -r 1 1 1 1 -T -4). When the resulting tiff file was opened in PSCS5 the histogram shows a mean = 12.36 and a S.D. = 2.9

As Emil points out, dcraw's -d doesn't separate the colour cells of the Bayer pattern, but there is a very easy and simple way to achieve that in PS: apply a nearest-neighbour resize to 50% (that will pick one pixel from each 2x2 Bayer cell).

Now we are free to calculate mean and StDev over that individual RAW channel (not sure about PS's precision on this though, I prefer to use Rawnalyze).

To obtain the four RGGB components just add one pixel line up/down and/or left/right before resizing.
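The same "every second pixel" trick can be sketched outside PS as well; `cfa` below is a hypothetical 4x4 array of raw sensel values standing in for a real CFA image:

```python
# Extract one Bayer plane by taking every second pixel in each direction,
# the same idea as the 50% nearest-neighbour resize described above.
cfa = [
    [10, 20, 11, 21],
    [30, 40, 31, 41],
    [12, 22, 13, 23],
    [32, 42, 33, 43],
]

def plane(cfa, row_off, col_off):
    """One of the four RGGB planes: offsets (0,0), (0,1), (1,0) or (1,1)."""
    return [row[col_off::2] for row in cfa[row_off::2]]

print(plane(cfa, 0, 0))  # [[10, 11], [12, 13]]
```

Shifting the offsets by one row and/or column, as described above, yields the other three planes.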

It indeed helps to take a smallish area from the center of the image and use an aperture of f/5.6 or f/8 as that minimizes the vignetting influence. To avoid image detail from interfering, a slight defocus can be used when shooting test images. I also try to select an area that has no hot sensels, or dust bunnies (which tend to pile up in corners if the sensor is not cleaned to perfection).

Cheers, Bart

I assume you are referring to the method of using two identical exposures. Would you also recommend taking a window from the center of the image when one is taking the difference of the two green channels from the same exposure?

I assume you are referring to the method of using two identical exposures. Would you also recommend taking a window from the center of the image when one is taking the difference of the two green channels from the same exposure?

It applies to any noise measurement based on some area average. When there is a slope in average brightness, the standard deviation will increase even if there is no noise. So for a truly accurate noise measurement, one strives to measure a uniform surface that's as evenly lit as possible. The larger the area being analysed, the larger the chance that a brightness slope becomes significant enough to influence the outcome.

Basing the analysis on subtracting an aligned exposure pair will remove any residual slope (pattern noise) from the equation, and leave us with random noise. Subtracting the G1 and G2 green-filtered sensel values of a single image from each other is a close enough substitute for two separate exposures.

You have to be careful with comparing G1 and G2 (and I would not recommend it) because they can diverge due to flare, crosstalk, and other issues. Some cameras correct for this in-camera before writing out raw data, and others do not. For consistency I advise comparing like-colors only, or averaging all greens and comparing two separate images.

I assume you are referring to the method of using two identical exposures. Would you also recommend taking a window from the center of the image when one is taking the difference of the two green channels from the same exposure?

And thanks to all for tolerating the questions of a tyro!

Subtracting two identical images removes PRNU (pixel response nonuniformity) noise, as well as variations in the target, dust on the sensor, and other fixed pattern noise. If you measure the standard deviation of a uniform flat field, the result will be considerably higher than the shot noise due to PRNU. PRNU increases in direct proportion to luminosity, while shot noise increases as the square root of the luminosity. With most cameras, PRNU is the most prominent source of noise in the highlights, but the signal-to-noise ratio there is high enough that the noise is not perceived.

The difference of two flat fields at high luminosity is almost entirely shot noise. One can determine the signal to noise ratio (SNR), and the SNR squared equals the number of electrons collected. Such measurements enable calculation of the full well capacity of the sensor. When performing such measurements, one must avoid clipping of the highlights, since this decreases the noise (a fully clipped image has a standard deviation of zero).
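The different scaling of the two noise sources, and the full-well estimate from SNR², can be illustrated numerically (the 0.5% PRNU figure and the measured SNR below are assumed purely for illustration):

```python
import math

prnu_fraction = 0.005  # hypothetical 0.5% pixel response nonuniformity

# Shot noise grows as sqrt(signal); PRNU grows linearly with signal,
# which is why PRNU dominates in the highlights.
for signal_e in (100, 1000, 10000, 40000):
    shot = math.sqrt(signal_e)
    prnu = prnu_fraction * signal_e
    print(f"{signal_e:6d} e-: shot {shot:6.1f}, PRNU {prnu:6.1f}")
# With these numbers, PRNU overtakes shot noise at 40,000 e-.

# For a shot-noise-limited flat-field difference, SNR**2 = electrons:
snr = 200.0  # hypothetical SNR measured near saturation
full_well_estimate = snr ** 2  # 40,000 e-
```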

I use a minimum amount of 1024 to accommodate the blackpoint offset of Canon cameras when doing a Blackframe (read noise) analysis. When doing S/N analysis at higher ISOs I use 4000. In any case, I follow the subtraction command with a stat command and check for minimum>0 and maximum<clipping level, just to make sure that the sigma/standard deviation as reported is not based on clipped noise.

My workflow is based on pre-cropped image segments, so the stat command is adequate for obtaining the standard deviation at the same time as doing the boundary checking.

Cheers, Bart

First, I fail to understand the procedure for doing a Blackframe (read noise) analysis. If the Canon camera adds an offset of 1024 to all data, I would think that one needs to subtract 1024 from the data; particularly the mean since this is the significant datum in regard to read noise, and the S.D. is not of any importance. From what I have read (and perhaps misunderstood), in doing a black-frame (offset) analysis, one takes several images and computes the average of the means, or alternatively, the median value. Further it seems that one would want to include pixels whose value was 1024 (or 0 after adjustment) when calculating the mean noise.

When doing a S/N analysis, I can understand that one wishes to exclude pixels outside the range of 0 to clipping level from influencing the S.D. However, since the S.D. (and the other statistics) are computed first and the offset added after the fact, the addition of an offset should not affect the S.D. It also eludes me how one recognizes the inclusion of clipped noise by examination of the stats. For example, if the image used as the subtrahend contains (clipped) pixels at level 0 while the minimum pixel in the minuend is at level 20, the minimum level returned by the stat command on the difference image will be 20 (or 4020 if one uses an offset of 4000).

Clearly I must be missing something here. Would much appreciate your help.

First, I fail to understand the procedure for doing a Blackframe (read noise) analysis.

Hi Mike,

No problem, I'll explain. A Blackframe is supposed to be black because it received no exposure. However, when we analyse it there is noise, produced by the camera electronics. When we take precautions to eliminate as many noise sources as possible (e.g. thermal noise doubles for approx. each 6 degrees Celsius rise), we can assume that the remaining noise is unavoidable and linked to the action of reading out the sensor data, hence the name "Read noise".

A Blackframe is typically produced by setting the camera to its shortest possible exposure time (to counteract thermal noise build-up), using a body cap instead of a lens (to avoid light leaks, electronic noise from the lens, and camera gain adjustments at certain apertures), and covering the eyepiece of the viewfinder (to avoid light leaking into the mirror box through the back).

The signal that is still recorded is the lowest signal possible and is usually random with a Gaussian distribution. It changes with the ISO (gain) setting. It is not the same as a Darkframe, which is produced with a much longer (>1 sec. typically) exposure time, as used for Darkframe subtraction. By comparing a Darkframe and a Blackframe one can quantify the (mostly thermal) contribution.

Quote

If the Canon camera adds an offset of 1024 to all data, I would think that one needs to subtract 1024 from the data; particularly the mean since this is the significant datum in regard to read noise, and the S.D. is not of any importance.

The offset in most Canon cameras is part of the ADC quantization, so it is not added afterwards. That's why the noise has a Gaussian distribution centered at (usually) ADU 1024. There are also values below 1024 because of the Readnoise.

Quote

From what I have read (and perhaps misunderstood), in doing a black-frame (offset) analysis, one takes several images and computes the average of the means, or alternatively, the median value.

What you describe is a Darkframe (not Blackframe) noise reduction technique, commonly used in astrophotography, where long exposure times are needed to collect enough photons to record faint signals. This is also why Canon cameras are often used in astrophotography: the Readnoise averages down predictably over multiple frames, which may reveal faint signals.

Quote

When doing a S/N analysis, I can understand that one wishes to exclude pixels outside the range of 0 to clipping level from influencing the S.D. However, since the S.D. (and the other statistics) are computed first and the offset added after the fact, the addition of an offset should not affect the S.D. It also eludes me how one recognizes the inclusion of clipped noise by examination of the stats.

You probably figured it out after the above explanation, but to make sure... When you subtract 2 noisy data sets with a mean value of e.g. 1024, there is a 50% chance that a pixel has a value of 1024 or less, and an equal chance of it being 1024 or higher. When we subtract a higher data value from a lower one we would get a negative number, which cannot be encoded in an unsigned integer calculation, and thus results in a clipped noise distribution.

Therefore we add an offset to both datasets, which only changes the mean value but not the SD around that mean, so the result of the subtraction can be statistically evaluated. My choice of 1024 is not a must; one can use any number that avoids the risk of integer value clipping. That's why I use the IRIS stat command after the subtraction: to check that there are no values that resulted in (probably clipped) zero despite the offset (which could also indicate an ADC problem). If the minimum is zero, I redo the subtraction with a higher offset (for light exposure frames), but for Blackframes this is usually not needed (especially at lower ISO gain settings).
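The whole subtract-with-offset procedure can be simulated in a few lines of Python; the two synthetic Gaussian patches below stand in for the green planes (the per-frame SD of 8 ADU is arbitrary):

```python
import math
import random
import statistics

random.seed(1)
# Two synthetic "green plane" patches: same mean (1024), independent noise.
g1 = [random.gauss(1024, 8) for _ in range(10000)]
g2 = [random.gauss(1024, 8) for _ in range(10000)]

offset = 1000  # keeps the difference safely above zero
diff = [a - b + offset for a, b in zip(g1, g2)]

# Subtraction adds the variances, so divide the SD by sqrt(2)
# to recover the per-frame noise (~8 here); the offset shifts the
# mean but leaves the SD untouched.
sd = statistics.stdev(diff) / math.sqrt(2)
print(f"mean {statistics.mean(diff):.0f}, per-frame SD {sd:.1f}")
assert min(diff) > 0  # the minimum>0 boundary check described above
```

This is also where the ×0.7 correction mentioned earlier in the thread comes from: 1/√2 ≈ 0.707.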