Since the signal-to-noise ratio varies as the square root of exposure, doubling the exposure will improve S:N only by a factor of 1.4. With current sensors, this may not make much difference.

You know, Bill, this applies only to photon noise, which is the least important in cases of underexposure, i.e. in those cases where noise is really an issue. Read noise, which is almost constant regardless of exposure, is the limiting factor in the deep shadows and hence determines DR. There, SNR improves by a factor of 2 when exposure is doubled.
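The two regimes can be sketched with the standard simplified noise model, SNR = S / sqrt(S + r^2), where S is the signal in electrons and r is the read noise in electrons; the read noise value below is an assumption for illustration only:

```python
import math

def snr(signal_e, read_noise_e):
    """SNR with shot noise sqrt(signal) and a constant read noise,
    added in quadrature. Both arguments are in electrons."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

READ = 5.0  # assumed read noise in electrons (RMS)

# Highlights (shot-noise limited): doubling exposure gains ~sqrt(2) = 1.41x
print(snr(20000, READ) / snr(10000, READ))

# Deep shadows (read-noise limited): doubling exposure gains close to 2x
print(snr(10, READ) / snr(5, READ))
```

The same doubling of exposure buys very little up in the highlights but nearly a full factor of 2 in the read-noise-limited shadows.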

Very deep shadows in the dark window of this high DR scene, allowing 1 stop headroom (left, indicated as +2EV) and extreme ETTR (right, indicated as +3EV):


RAW histograms:

The +2EV and +3EV indications are the exposure compensations applied over the spot metering on the white wall. The few clipped G pixels in the extreme ETTR had no visible effect at all in the final result.

PS: BTW this reminds me I owe you an email; I cannot think of any solution for the XP/Histogrammar issue, sorry.

You know, Bill, this applies only to photon noise, which is the least important in cases of underexposure, i.e. in those cases where noise is really an issue. Read noise, which is almost constant regardless of exposure, is the limiting factor in the deep shadows and hence determines DR. There, SNR improves by a factor of 2 when exposure is doubled.

Very deep shadows in the dark window of this high DR scene, allowing 1 stop headroom (left, indicated as +2EV) and extreme ETTR (right, indicated as +3EV):

An excellent point, Guillermo, and a very good illustration. Read noise does predominate in the deep shadows, limiting dynamic range. However, current DSLR sensors have low read noise, so for most of the range of the sensor, shot noise predominates. This is shown in Figure 12 of Emil's paper. The slope of the SNR plot is 0.5 down to about 6 stops below clipping, reflecting the effect of shot noise. With less exposure, the slope increases to unity, reflecting the effect of read noise. Like you, I am a proponent of ETTR, but one should not carry it too far. With my Nikon D200, underexposure is a killer, but with my D3, with much better read noise and a larger pixel size, maximum ETTR is less crucial.

current DSLR sensors have low read noise, so for most of the range of the sensor, shot noise predominates. This is shown in Figure 12 of Emil's paper. The slope of the SNR plot is 0.5 down to about 6 stops below clipping.

Correct, Bill, but looking at the SNR values I wouldn't consider 6 stops below clipping the critical zone in which to worry about noise. Even on my noisy 350D you can struggle and still succeed in finding texture 8 stops below clipping, so I prefer to go to lower RAW exposures to find the limit.

This is a Canon 20D plot generated from real read-noise and photon measurements by Emil. When SNR drops to 2EV (12dB, a good photographic criterion for still being able to recognize textures), we are already more than 8 stops below clipping, and in that zone the slope is clearly almost 1.0:

(6dB/EV = slope 1.0.) If you look at Emil's measurements, they are always parallel lines in the areas before and after the toe.
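As a sanity check on the 6dB/EV figure, one can compute the slope of SNR in dB per EV in both regimes with the same simplified shot-plus-read-noise model (the read noise value is an assumption):

```python
import math

def snr_db(signal_e, read_e=10.0):
    """SNR in dB for a given signal in electrons; shot noise and an
    assumed constant read noise (10 e- RMS) added in quadrature."""
    snr = signal_e / math.sqrt(signal_e + read_e ** 2)
    return 20 * math.log10(snr)

# Slope in dB/EV = the SNR gain (in dB) from one extra stop of exposure
print(snr_db(200000) - snr_db(100000))  # shot-noise limited: ~3 dB/EV (slope 0.5)
print(snr_db(2) - snr_db(1))            # read-noise limited: ~6 dB/EV (slope 1.0)
```

The slope is ~3 dB/EV high up (slope 0.5 on the plot) and approaches 6 dB/EV (slope 1.0) deep in the read-noise-limited shadows.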

Gabor, the data came from your Excel file! They were numbers, not curves.

Guillermo,

The graphs I presented don't go beyond 80% noise (SNR=1.25), and even that is too far. There is a problem in principle with the measurement of the noise: as soon as black clipping occurs, i.e. some of the pixel values are zero - or below zero in Canon files - the black-level-corrected standard deviation is lower than the real one. The clipping starts at about -9.5 EV with the 5D2 at ISO 100.

Example from the 5D2 ISO 100 file: at -11 EV, the standard deviation is 6.3 absolute, but only 5.6 after BL correction.

Thus the numbers at the dark end are useless. I did not "publish" the uncorrected numbers, because they are not comparable to other cameras.
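The bias described above can be reproduced with a toy simulation: Gaussian read noise around a very low mean, with negative values clipped to zero, yields a measured standard deviation below the true one. All values here are illustrative, not from any real camera:

```python
import random
import statistics

random.seed(1)
TRUE_SIGMA = 6.3   # true read noise, in raw units
MEAN = 2.0         # very deep shadow: mean signal just above black

# Simulate pixel values; negative values get clipped to zero, as happens
# once the file cannot store values below the black level.
samples = [random.gauss(MEAN, TRUE_SIGMA) for _ in range(100_000)]
clipped = [max(v, 0.0) for v in samples]

print(statistics.stdev(samples))   # ~6.3: the true noise
print(statistics.stdev(clipped))   # clearly lower: the measurement bias
```

Once a significant fraction of the distribution is clipped away, the standard deviation computed from the file underestimates the real noise, which is why the numbers at the dark end become useless.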

Now, specific examples from the 5D2 ISO 100 sample:

-EV     StDev   Noise %
9.27    6.74    28.3
8.27    7.11    14.9
7.23    8.06    8.23

The step from -9.27 EV to -8.27 EV in fact halves the noise ratio, as you stated - but from -8.27 to -7.23 the reduction is less.

The limit of clipping at ISO 1600 is about -8 EV. Samples not darker than this limit show

-EV     StDev   Noise %
8.07    17.8    32.6
7.09    22.3    20.8
5.97    13.0    30.5

This is far from being doubled by a 1-stop difference in exposure. Note that the amount of captured light here is 1/16 of that with ISO 100 at the same pixel value.

I don't know the reason for the difference from the "ideal" result; I know nothing about the hardware, but I think the "read noise" has several sources, and I guess not all of them are constant.

Buil's protocols are the correct ones from the point of view of maximizing SNR, but they are hardly practical for standard photography (who wants to subtract master biases made of 20+ frames from a shot of a building at night?).

Maximizing SNR involves, at the most basic level, minimizing noise whatever its origin is, and getting as many photons as one can for a given exposure. In other words, this means exposing as far to the "right" as one can without overexposing, and staying in the sensor's linear response zone if photometric measurements are to be made - probably of little practical interest for photography... except when testing to the limit for the sake of it.

The behaviour of cameras changes somewhat as the sensor heats up (dramatically as far as the thermal noise component is concerned). It may again seem like a non-issue for photography, but if one does extensive testing consisting of almost continuous shooting, live-view usage, etc., the sensor will be warmer after a while if there is no cooldown period between shots and basic camera use.

Camera manufacturers use lots of tricks to minimize noise; this is why Nikon cameras, with their "artificially" low noise, are not suitable, or at least not the best choice, for astrophotography. The Nikon strategy could very well be the best for everyday photography, but it also means that the ideal ETTR strategy will vary with the camera model, just as the ideal ISO (as far as maximizing SNR is concerned) varies between cameras of the same brand.

And beyond deep pixel peeping, the factor that matters most, in my observations and for my purpose, is heat. The difference in image quality between a warm sunny afternoon and a colder morning is striking. Keeping the camera in cold storage, if practical, beats ETTR by a wide margin. But of course, one can try to do both.

I don't understand French. As far as I can interpret that paper, it does not explain why the noise classified as "read noise" is not constant within a single shot.

Quote

And some more data here

Again, no explanation. However, honestly, if someone were to explain the hardware reason, I could only repeat it anyway.

On the other hand, I have an observation about one of the points, "quantum efficiency": it cannot be measured without removing the color filters; and if that could be accomplished at all, the microlenses would be eliminated too, making the result irrelevant.

Another observation: besides heat, the individual camera copy counts for a lot. If I were interested in cleanness to the degree that astrophotographers are, I would make raw shots with several cameras under identical circumstances and pick the one with the lowest read noise (measured on the masked pixels).

On the other hand, I have an observation about one of the points, "quantum efficiency": it cannot be measured without removing the color filters; and if that could be accomplished at all, the microlenses would be eliminated too, making the result irrelevant.

QE can definitely be measured with the Bayer matrix in place. Why couldn't it be? All you need to do is count the incoming photons, either from calibrated sources or known sources (see the link), and then count the number of electrons. For example

There is nothing special about a Bayer matrix that would prevent comparing the amount of incoming photons to the amount of generated electrons. There is even the notion of geometric QE which, for a given surface area, takes into account that 50% of that area has a specific G QE, 25% the R QE and 25% the B QE.
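The geometric QE mentioned above is just an area-weighted average over the Bayer mosaic; a minimal sketch with made-up per-channel QE values:

```python
# Hypothetical per-channel quantum efficiencies (illustrative values only)
QE_R, QE_G, QE_B = 0.30, 0.40, 0.25

# In a Bayer mosaic, half the pixels are green, a quarter red, a quarter blue
geometric_qe = 0.50 * QE_G + 0.25 * QE_R + 0.25 * QE_B
print(geometric_qe)   # ~0.3375
```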

As far as the read noise variation with ISO is concerned, the Martinec paper addresses the issue in English.

QE can definitely be measured with the Bayer matrix in place. Why couldn't it be?

It has nothing to do with the Bayer arrangement but with the microfilter and with the color filter. How on earth do you calculate with the number of photons when you don't know the proportion passing through the filter and directed into the well?

Quote

All you need to do is count the incoming photons, either from calibrated sources or known sources (see the link), and then count the number of electrons. For example

I find Clark's calculations childish.

Quote

Some manufacturers, for example Kodak, provide QE numbers

I know and I regard that as QE.

Quote

As far as the read noise variation with ISO is concerned, the Martinec paper addresses the issue in English

I am looking for an explanation of the variation in read noise at a given ISO; even more, in a single shot.

I would like to thank everyone who replied to my OP, especially GLuijk whose early reply answered my question. One further question though. I can see 2 possible ways of increasing the exposure. I use a Canon 5D and 350D.

1. Shoot Aperture or Shutter priority with exposure compensation set to +2. Spot meter from the lightest area of the subject. This seems to me to limit the EV to +2. Lock exposure and focus, and shoot.
2. Shoot Manual with spot metering and then dial in the required EV. This does not limit one to +2.

Is there a preferred way of doing it used by you all?

Peter

PS: The replies by the end became very technical and left me dizzy. LOL

I would like to thank everyone who replied to my OP, especially GLuijk whose early reply answered my question. One further question though. I can see 2 possible ways of increasing the exposure. I use a Canon 5D and 350D.

1. Shoot Aperture or Shutter priority with exposure compensation set to +2. Spot meter from the lightest area of the subject. This seems to me to limit the EV to +2. Lock exposure and focus, and shoot.
2. Shoot Manual with spot metering and then dial in the required EV. This does not limit one to +2.

Is there a preferred way of doing it used by you all?

Peter

PS: The replies by the end became very technical and left me dizzy. LOL

To use ETTR effectively, you have to know the headroom that the camera allows for the highlights. For example, if you expose so that the highlights are just short of clipping according to the histogram and the camera allows 0.5 EV of headroom for the highlights, the highlights in the raw file will be 0.5 EV short of clipping and you have not exposed fully to the right. You have to conduct tests using the camera JPEG processor or the raw converter and compare the results to those obtained by a program which shows the raw data (Iris, DCRaw, etc.). See this article on Libraw for an example.

To implement ETTR, the purist would use spot metering to take a reading from the highlights, use manual exposure, and then give 2-3 EV additional exposure (depending on previous tests) to bring the highlights up to the desired value just short of clipping. A common value for the exposure increment is +2.5 EV. If you use aperture priority or shutter priority, you would judge exposure by the histogram (after performing the above mentioned tests) and use exposure compensation to set the highlights. In this case, a fixed compensation is not appropriate. For example, if you are shooting snow scenes you would have to use a larger value than for a normal scene, since the camera will render the snow as mid gray. If you use evaluative exposure (matrix with Nikon), the camera has already added a compensation and the compensation you dial in will be added to the automatic compensation.
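The arithmetic behind the spot-metering approach is simple; here is a sketch in which the headroom figure is an assumption that must be established by testing your own camera:

```python
# Spot metering places the metered highlight at middle gray. The distance
# from middle gray to raw clipping is camera-specific; 3 EV is assumed here.
headroom_ev = 3.0        # assumed highlight headroom above the spot reading
safety_margin_ev = 0.5   # leave a little room for metering errors

compensation_ev = headroom_ev - safety_margin_ev
print(compensation_ev)   # 2.5 -> dial in +2.5 EV over the highlight spot reading

# The resulting raw level of the highlights, as a fraction of clipping:
fraction_of_clipping = 2 ** (-safety_margin_ev)
print(fraction_of_clipping)   # ~0.71 of full scale
```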

> It has nothing to do with the Bayer arrangement but with the microfilter and with the color filter.

On one side you have incoming photons, on the other side you have outgoing electrons.

> How on earth do you calculate with the number of photons, when you don't know the proportion passing through the filter and directed in the well?

Well, with a calibrated source, possibly a star, you merely take into account the number of photons you measure vs. the ones that arrive. There are a few ways to do that, but it won't be necessary to get into details...

> I know and I regard that as QE.

Great! Then you have just discovered the solution to your question above.

- Take an arbitrary source.
- Take shot(s) with that Kodak chip. Get rid of noise (through standard calibration).
- Take shot(s) with the camera you want to measure. Get rid of noise as above.
- Compare the two recorded signals.

That's it, there you go, you have relative QE.
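The relative-QE recipe above amounts to a signal ratio; here is a minimal sketch with made-up numbers. It assumes the same source, exposure, optics and pixel area for both chips, and signals already calibrated for bias/dark:

```python
REFERENCE_QE = 0.35         # published QE of the reference (e.g. Kodak) chip
signal_reference = 12000.0  # mean signal recorded by the reference chip (e-)
signal_camera = 9000.0      # mean signal recorded by the camera under test (e-)

relative_qe = signal_camera / signal_reference
absolute_qe = relative_qe * REFERENCE_QE
print(relative_qe)   # 0.75
print(absolute_qe)   # ~0.2625
```

If the reference chip's QE is known, the relative figure converts directly into an absolute one.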

> I am looking for an explanation of the variation in read noise at a given ISO; even more, in a single shot.

OK - that's an easy one too.

There are two components to read noise:

- Noise during analog-to-digital conversion: this conversion is not perfectly repeatable. There is a statistical uncertainty, not necessarily Gaussian. (See Merline - Howell, 1996.)
- Noise introduced by the electronic chain itself (for example the size of the amplifier, its sensitivity and, most importantly, its temperature). In a CCD, simply reading a line will raise the temperature of the amplifier. In a CMOS sensor, the problem is a bit different because each pixel's sensing well has its own amplifier. But this introduces another source of non-uniform response, as those amplifiers do not behave identically. In a way, you have hot and cold amplifiers just as you have hot and cold pixels. It also introduces leakage currents. Etc., etc. BTW, you can't read/convert/work as fast as you want, because this increases heat significantly, which is why there are now multiple-channel electronics.

2. Shoot Manual with spot metering and then dial in the required EV. This does not limit one to +2.

I strongly recommend shooting in M mode for ETTR, where you can overexpose as much as you want.

I have the same cameras as you (350D and 5D). Bill (bjanes) explains clearly the way to do ETTR; to find out more about how much headroom these cameras allow from light metering to saturation, have a look at this article (ignore the Spanish text if you don't understand it; the sample images can easily be interpreted, MEDICIÓN PUNTUAL=SPOT METERING): ETTR WITH SPOT METERING. You can use spot metering on the 5D, which allows up to +3EV over the spot reading if only the highlights enter the metering circle. Some +2.5EV, as Bill suggests, is probably the best tradeoff between quality and safety.

The problem with checking in the field whether your ETTR succeeded is that camera displays are not RAW-oriented but JPEG-oriented, which amounts to an overexposed rendering of the RAW file, i.e. the camera will report pessimistic information about how you exposed. To learn more on how to make your camera display (histograms and highlight clipping) closer to the real RAW condition: UNIWB. MAKE CAMERA DISPLAY RELIABLE. You will find there files to implement UniWB on both your cameras.

Taking pictures was never so easy as with digital photography... they said.

Taking pictures was never so easy as with digital photography... they said.

I sense a certain sarcasm here, Guillermo. However, I believe it really is true. Taking pictures has never been so easy. I paid A$850 for my first 1GB Microdrive for my Canon D60. One can now buy a 32GB compact flash card for very much less.

Modern DSLRs now take 5 or more frames per second. If anyone has any doubt about their ability to get a good ETTR exposure, then bracket all shots. Problem solved.

Modern DSLRs now take 5 or more frames per second. If anyone has any doubt about their ability to get a good ETTR exposure, then bracket all shots. Problem solved.

My crappy Canon DSLR supports only three shots in a bracket; this is a shame, particularly because the shots can be made in a small fraction of a second with MLU and live view combined. Reasonable camera software could even automatically delete the files of those shots in the bracket that are definitively worthless compared to the others.

I strongly recommend shooting in M mode for ETTR, where you can overexpose as much as you want.

I have the same cameras as you (350D and 5D). Bill (bjanes) explains clearly the way to do ETTR; to find out more about how much headroom these cameras allow from light metering to saturation, have a look at this article (ignore the Spanish text if you don't understand it; the sample images can easily be interpreted, MEDICIÓN PUNTUAL=SPOT METERING): ETTR WITH SPOT METERING. You can use spot metering on the 5D, which allows up to +3EV over the spot reading if only the highlights enter the metering circle. Some +2.5EV, as Bill suggests, is probably the best tradeoff between quality and safety.

Guillermo,

A few points for discussion. I don't understand Spanish, but the illustrations are more or less self-explanatory. The histograms shown by Histogrammar are excellent, but the program does not work on my current XP machine due to problems with the graphics driver. As far as I know, one still needs to use DCRaw or something similar to process the raw file. Personally, I prefer Iris, since it has a graphical interface. Another option is to use Rawnalize (by Gabor), which can show the raw histogram directly from the raw file and has a graphical interface.

Quote from: GLuijk

The problem with checking in the field whether your ETTR succeeded is that camera displays are not RAW-oriented but JPEG-oriented, which amounts to an overexposed rendering of the RAW file, i.e. the camera will report pessimistic information about how you exposed. To learn more on how to make your camera display (histograms and highlight clipping) closer to the real RAW condition: UNIWB. MAKE CAMERA DISPLAY RELIABLE. You will find there files to implement UniWB on both your cameras.

Since the red and blue multipliers are greater than unity for daylight and most other illumination, one can use the RGB histograms on most of the more advanced cameras to check for clipping. If the green channel is truly clipped, one must reduce exposure. If the scene contains strong reds or blues and clipping in these channels is present, this could be due to the multiplier being greater than unity or to true clipping at the sensor. One should then use UniWB. I keep normal white balance in one custom profile on the camera and UniWB in another.
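The multiplier effect described above can be sketched numerically; the WB multipliers and raw values below are made up for illustration:

```python
WB = {"R": 2.0, "G": 1.0, "B": 1.4}       # assumed daylight WB multipliers
CLIP = 4095                                # 12-bit raw white point

raw = {"R": 2500, "G": 3900, "B": 3000}   # raw values, none actually clipped

# The in-camera histogram is built after white balance has been applied
displayed = {ch: min(round(raw[ch] * WB[ch]), CLIP) for ch in raw}
print(displayed)  # R and B reach 4095 on the histogram despite unclipped raw data
```

The red and blue channels hit the histogram's clip point purely because of the multipliers, which is exactly the ambiguity UniWB removes.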

Saturation clipping can also occur if the camera is rendering into a small color space such as sRGB. I always set my camera to aRGB, which is the widest space on Nikon cameras. Does Canon offer a wider space for JPEG? A low saturation setting on the camera can reduce saturation clipping.

If the camera allows considerable headroom and applies an S curve to the data, a strong S curve with a high contrast setting can also cause clipping in the histogram. Many photographers set the camera to low contrast.

In addition to all that, another thing I find problematic about UniWB is that output profiling involves a weighted mix of all three channels' data. So if the output G is a function of R, G and B, how much can we trust the camera's clipping indication? If the output G is not clipped, is that because it is really not clipped in the RAW, or because the influence of R and B made it appear non-clipped after the linear matrix combinations done for the camera colour profile conversion?

Anyway, I like to use it, not because of its accuracy but because it's more stable (you know better what to expect, as opposed to selecting a particular WB). BTW, in my cameras the multipliers are always >=1, so the R and B channels can clip due to WB (in fact, with ETTR'ed RAW files they usually do if UniWB is not set).