If someone has already discovered this technique and posted a reference to it here, I apologize, and will remove the post if it contributes nothing further. I've done searches and haven't found anything, but I may have missed something.

Anyway, here goes. On the RawDigger site, there's a technique for computing Unity Gain ISO. It is basically a search over several exposures made at different camera ISO settings for the setting that, with a flat, relatively bright (but not saturated) compact target rectangle, produces a standard deviation in the raw values of a chosen channel equal to the square root of the mean raw value in that channel.

I thought there ought to be a way to do the same thing without a search. I applied some algebra to the problem and came up with the following algorithm: Set your camera to some middling ISO; call that value ISOtest. Point your camera at a featureless target. Defocus a bit to make sure you don't have any detail. Expose so that the target is about Zone VI, or a count of about 4000 for a 14-bit ADC. If you have a 12-bit ADC in your camera, try for a count of 1000. Bring the resultant image into RawDigger, select a 200x200 pixel area, and read the mean and standard deviation for each color plane. For each plane, call the mean Sadc and the standard deviation Nadc. The Unity Gain ISO is ISOtest*Sadc/(Nadc^2). Average all three color channels for the Unity Gain ISO of the camera.
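For anyone who wants to play with the arithmetic, here is a minimal sketch of the calculation in Python. The per-channel means and standard deviations below are made-up example numbers, not real RawDigger measurements; in practice you would read them off a flat 200x200 patch as described above.

```python
# Sketch of the no-search Unity Gain ISO calculation described above.
# Sadc = mean raw value of a flat patch, Nadc = its standard deviation.

def unity_gain_iso(iso_test, s_adc, n_adc):
    """Unity Gain ISO = ISOtest * Sadc / Nadc^2."""
    return iso_test * s_adc / (n_adc ** 2)

# Invented example: ISOtest = 800, per-channel (mean, sd) from a flat patch.
channels = {"R": (3900.0, 55.0), "G": (4100.0, 57.0), "B": (3800.0, 54.0)}

per_channel = {c: unity_gain_iso(800, s, n) for c, (s, n) in channels.items()}
camera_ugi = sum(per_channel.values()) / len(per_channel)  # average of 3 channels
```

The formula follows from the shot-noise model: in electrons, the variance equals the signal, so the e-/DN gain at ISOtest is Sadc/Nadc^2, and scaling the ISO by that gain brings the gain to unity.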

I tried the algorithm out on a Nikon D4 over a range of ISOtest values, making 16 exposures for each ISOtest value and plotting the mean Unity Gain ISOs, the mean plus two standard deviations, and the mean minus two standard deviations.

The result looks like this:

All of the Unity Gain ISOs are within about a third of a stop, so the accuracy is probably good enough to make this a useful measurement; I don't know why I'd want to know the Unity Gain ISO to greater accuracy than that. There is some systematic variation. Some of it may be due to the fact that, for the measurements at ISO 100, 200, and 400, the camera is below the Unity Gain ISO and the statistics of the image may be affected enough to skew the results. I'll be doing some simulation to see if that's a reasonable explanation.

I've done tests at other exposure levels (mean raw values) and the results are only weakly dependent on exposure. I've done similar tests on the following cameras: Nikon D800E, Leica M9, Sony NEX-7, and Sony RX-1, and, with the marginal exception of the Leica, all the results for each camera model cluster within a third of a stop of each other.

The math I used to derive the above equation and the Nikon D4 results are here. The results for the other cameras are here.

I welcome discussion on what might be the source of the systematic variations, which indicate that the simple model I used is incomplete. In the case of the Sony cameras, the raw files are compressed in a way that reduces the resolution in the lighter values. That might be a possible source. I think the value of understanding the systematic variation is to better understand the internal makeup of the cameras, since the test appears to be sufficiently accurate even with this variation.

It turns out that you can take the same data and compute full-well capacity, if you assume that the well fills as the ADC output approaches full-scale at base ISO. The full-well capacity should be proportional to photosite area, all else being equal. With the CCD-based Leica M9 in the mix, all else is not equal:
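Under the stated assumption, the same measurement yields full-well capacity: the e-/DN gain at ISOtest is Sadc/Nadc^2, gain scales inversely with ISO, and multiplying the base-ISO gain by the ADC full-scale count gives electrons at clipping. A sketch with invented numbers (the function name and values are mine, not from the original derivation):

```python
# Sketch of the full-well estimate, assuming the well fills as the ADC
# reaches full scale at base ISO.

def full_well_capacity(iso_test, iso_base, s_adc, n_adc, adc_full_scale):
    gain_at_test = s_adc / n_adc ** 2               # electrons per DN at ISOtest
    gain_at_base = gain_at_test * iso_test / iso_base  # gain scales inversely with ISO
    return gain_at_base * adc_full_scale            # electrons at ADC full scale

# Invented example: 14-bit ADC (full scale 16383), measured at ISO 800.
fwc = full_well_capacity(800, 100, 4000.0, 60.0, 16383)
```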

What is the purpose of knowing the Unity Gain ISO? What practical purpose does it have? Similarly what practical purpose does knowing the full well capacity have? Are we able to use the information to determine exposure on the fly, in the field?

What is the purpose of knowing the Unity Gain ISO? What practical purpose does it have?

There is little to be gained in increasing the camera ISO setting much beyond the Unity Gain ISO. All you're doing is losing headroom, and you're not reducing the noise in the raw file. You're better off letting the histogram slide towards the left and cranking up the Exposure control in Lightroom or ACR. That's assuming you can see the playback image in the camera LCD (derived from the raw preview JPEG, and therefore affected by the camera ISO setting) well enough to do all the chimping you want to do.

Similarly what practical purpose does knowing the full well capacity have? Are we able to use the information to determine exposure on the fly, in the field?

The full-well capacity is a pretty darned good indicator of the dynamic range of the camera. It's a nice thing to know when you're trying to decide what camera to buy, or what camera to use for a particular job.

Once you've purchased the camera and are using it, you might use the dynamic range of the camera to determine when you need to use HDR, averaging, or similar techniques to get more shadow detail. You can't do that directly from the full-well capacity, but you could take the log base 2 of the full-well capacity, and subtract 4 to 7 stops (some people say you need 100 electrons for photographic quality, and that's a tad under two to the seventh) to account for the signal-to-noise ratio (SNR) you want in the shadows, and what's left would be the approximate difference, in stops, between the highlights and the shadows-with-detail (Zone II or III).
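The rule of thumb in the paragraph above can be written out in a couple of lines. The numbers here are illustrative, not measurements; 100 electrons is the "photographic quality" shadow floor mentioned above, which is just under 7 stops.

```python
import math

# log2 of the full-well capacity, minus the stops reserved for acceptable
# shadow SNR (the text suggests a 4-7 stop allowance), gives the approximate
# usable highlight-to-shadow range in stops.

def usable_stops(full_well_electrons, shadow_allowance_stops=7):
    return math.log2(full_well_electrons) - shadow_allowance_stops

# Example: ~100,000 e- full well, 100 e- shadow floor (log2(100) ~ 6.6 stops).
stops = usable_stops(100_000, math.log2(100))
```

With those example numbers you get roughly ten stops between the highlights and shadows-with-detail.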

Here's the graph with a log base 2 vertical axis to make it easy for you to do the math in your head:

This ignores dark noise, read noise, and other things that affect the shadows but not the light tones. It also ignores resolution, and you can decrease noise in an image by rezzing it down. In practice, I've found the D4 and the D800 to give similar noise performance at similar resolutions. If we compute the dynamic range by averaging the photosites to get to 12 megapixels for each camera, we see that, except for the M9, the size of the sensor pretty much determines the dynamic range:
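The effect of averaging down to a common resolution is easy to quantify: averaging k pixels cuts random noise by sqrt(k), which buys 0.5*log2(k) stops of per-pixel dynamic range. A sketch, with the pixel counts as examples:

```python
import math

# Averaging k pixels reduces random noise by sqrt(k), adding
# 0.5 * log2(k) stops of per-pixel dynamic range.

def dr_gain_from_downsampling(native_mp, target_mp):
    k = native_mp / target_mp          # pixels averaged per output pixel
    return 0.5 * math.log2(k)

# A D800-like 36 MP sensor averaged down to 12 MP:
extra_stops = dr_gain_from_downsampling(36, 12)
```

So the 36 MP sensor picks up about 0.8 stop at 12 MP, which is why the comparison has to be made at matched resolution.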

There is little to be gained in increasing the camera ISO setting much beyond the Unity Gain ISO. All you're doing is losing headroom, and you're not reducing the noise in the raw file. You're better off letting the histogram slide towards the left and cranking up the Exposure control in Lightroom or ACR.

OK, that makes sense.

Quote

That's assuming you can see the playback image in the camera LCD (derived from the raw preview JPEG, and therefore affected by the camera ISO setting) well enough to do all the chimping you want to do.

Jim

Very rarely chimp.

Quote

The full-well capacity is a pretty darned good indicator of the dynamic range of the camera.

True. Although with advances in technology I think it's a less valuable indicator.

Quote

It's a nice thing to know when you're trying to decide what camera to buy, or what camera to use for a particular job.

Less enthralled by that concept. We were using slide film with, maybe, a 6 stop brightness range for decades and felt it 'did the job' quite nicely. Any camera that has that or more should be suitable.

Quote

Once you've purchased the camera and are using it, you might use the dynamic range of the camera to determine when you need to use HDR, averaging, or similar techniques to get more shadow detail. You can't do that directly from the full-well capacity, but you could take the log base 2 of the full-well capacity, and subtract 4 to 7 stops (some people say you need 100 electrons for photographic quality, and that's a tad under two to the seventh) to account for the signal-to-noise ratio (SNR) you want in the shadows, and what's left would be the approximate difference, in stops, between the highlights and the shadows-with-detail (Zone II or III).

Here's the graph with a log base 2 vertical axis to make it easy for you to do the math in your head:

Why bother with all of that? Why can't sites like DxO or other 'credible' review sites be used for that information? DxO, for example, suggests the D800 has a drange of just over 13 stops at ISO 100 when the noise floor is SNR=1. Knowing that SNR=1 isn't a practical limit, why can't I simply subtract 2 or 3 stops from the DxO number and consider that the practical brightness range of the sensor? All that aside, one still needs to know the brightness range of the scene/subject being shot or all the math is moot.

I thought there ought to be a way to do the same thing without a search. I applied some algebra to the problem and came up with the following algorithm: Set your camera to some middling ISO; call that value ISOtest. Point your camera at a featureless target. Defocus a bit to make sure you don't have any detail. Expose so that the target is about Zone VI, or a count of about 4000 for a 14-bit ADC. If you have a 12-bit ADC in your camera, try for a count of 1000. Bring the resultant image into RawDigger, select a 200x200 pixel area, and read the mean and standard deviation for each color plane. For each plane, call the mean Sadc and the standard deviation Nadc. The Unity Gain ISO is ISOtest*Sadc/(Nadc^2). Average all three color channels for the Unity Gain ISO of the camera.

I tried the algorithm out on a Nikon D4 over a range of ISOtest values, making 16 exposures for each ISOtest value and plotting the mean Unity Gain ISOs, the mean plus two standard deviations, and the mean minus two standard deviations.

I welcome discussion on what might be the source of the systematic variations, which indicate that the simple model I used is incomplete. In the case of the Sony cameras, the raw files are compressed in a way that reduces the resolution in the lighter values. That might be a possible source. I think the value of understanding the systematic variation is to better understand the internal makeup of the cameras, since the test appears to be sufficiently accurate even with this variation.

Jim,

A truly excellent piece of work. I would suggest using a somewhat lower DN (data number) than a 14 bit value of 4000. At high DNs, PRNU (pixel response non-uniformity) is increasingly prominent. And at low DNs, read noise becomes significant.

I did an analysis of my D800e using Roger Clark's methodology with ImagesPlus, which uses 16 bit DNs. One can convert to 14 bit DNs by dividing by four. The observed standard deviation is almost entirely the sum in quadrature of the shot noise, PRNU, and read noise. PRNU can be eliminated by subtracting two identical exposures and determining the SD for the subtracted image. This is the noise for two images, and the SD for 1 image is obtained by dividing by sqrt(2). The results are shown below with exposures giving 14 bit DNs of 4000 and 1000 and 16 bit DNs of 16000 and 4000 highlighted in yellow.

At the 16 bit DN of about 16000, the SD is contaminated by PRNU. The observed SD is 156 and the corrected SD is 144. At a 16 bit DN of around 4000, the observed SD is 73.8 and the corrected SD is 72.3.
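The frame-subtraction trick described above is straightforward to demonstrate with synthetic data: PRNU is a fixed per-pixel pattern, so it cancels in the difference of two identical exposures, and dividing the difference's SD by sqrt(2) recovers the single-frame random noise. The noise and PRNU levels below are invented for illustration.

```python
import numpy as np

# PRNU is a fixed per-pixel gain pattern; subtracting two identical
# exposures cancels it, leaving only random (shot + read) noise.
# The difference contains two frames' worth of random noise, so its
# SD is divided by sqrt(2) to get the single-frame figure.

rng = np.random.default_rng(0)
signal = 4000.0
prnu = 1.0 + 0.005 * rng.standard_normal((400, 400))       # fixed 0.5% pattern
frame1 = signal * prnu + rng.normal(0.0, 63.0, (400, 400))  # random noise, frame 1
frame2 = signal * prnu + rng.normal(0.0, 63.0, (400, 400))  # same pattern, frame 2

sd_single = frame1.std()                              # contaminated by PRNU
sd_corrected = (frame1 - frame2).std() / np.sqrt(2)   # PRNU removed
```

With these numbers the single-frame SD comes out a few DN higher than the corrected SD, mirroring the 156-vs-144 gap reported above.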

Thanks. Nice article. Takes a technical concept and explains it such that an engineering degree isn't needed to decipher it. Too rare an occurrence. See the response directly above for the typical bit of bafflegarb.

One question. Why only the G channel? Why not the entire signal of the entire sensor?

Why bother with all of that? Why can't sites like DxO or other 'credible' review sites be used for that information? DxO, for example, suggests the D800 has a drange of just over 13 stops at ISO 100 when the noise floor is SNR=1. Knowing that SNR=1 isn't a practical limit, why can't I simply subtract 2 or 3 stops from the DxO number and consider that the practical brightness range of the sensor? All that aside, one still needs to know the brightness range of the scene/subject being shot or all the math is moot.

Bob,

There are many ways to approach the technical side of photography. Some people just ignore it, and are perfectly happy with their iPhones and P&S cameras with auto-everything and tiny sensors. Other people believe that the more you understand about your tools the better you can use them, and that deep understanding comes through experimentation. I'm mostly in that camp. Those are the extremes, and it sounds like you are somewhere in between. That's great. If it works for you, keep at it. I'll applaud.

If you'll bear with me, I'll take this opportunity to expand on what I get out of testing and calibration. I'll concentrate on cameras, but, for me, the attitude that I want to do as much as possible for myself extends throughout the photographic process. For example, I make my own printer profiles. It's expensive in equipment and time, and the results may or may not be better as judged by someone other than me, but I like taking control, and I like the fact that I can tweak the profile to do exactly what I want it to do.

Have you ever taught a workshop or course on a subject you know well? I have, and every time I'm surprised at how much I learn about the subject that I thought I knew cold. Trying to come up with simple explanations for complicated things makes me understand the complicated things more deeply. Student questions sometimes come out of left field; approaching the subject from a direction I'd never considered makes me dig deep and come up with a new way to think about the subject.

Testing's like that for me. Developing the test makes me think harder about what I'm testing for than just reading about it on the DxO site. When I go to do the testing, things never go exactly the way I thought they'd go, and I learn something from that. One of the things that I often do is perform tests many times and collect statistics on the results. I don't see that on test web sites very often. Having access to the statistics lets me figure out the accuracy, and even the statistical importance, of a result.

Sure, there are people who spend all their time testing and never make good pictures. They're not new; some of the Zone System acolytes did the same thing. I call it the sharpening pencils syndrome. But there are others who use their deep knowledge of the technology to make better pictures.

Bob, I'm not trying to convince you to go with my approach. You've got something that works the way you want it to work, and that's perfect for you; I'm just trying to help you understand where I'm coming from.

I would suggest using a somewhat lower DN (data number) than a 14 bit value of 4000. At high DNs, PRNU (pixel response non-uniformity) is increasingly prominent. And at low DNs, read noise becomes significant.

Bill,

Thanks for the tip, and for the explanation. I will do some further testing and report. I have enough data collected to do most of the work stemming from your post on previously-made exposures (a good thing when I have to make so many exposures under identical conditions to understand the statistics), although most of my data is for dark noise and at Zones III and VI. I notice that DxO does their noise testing at Zone V. Your results suggest that Zone IV is better.

There are many ways to approach the technical side of photography. Some people just ignore it, and are perfectly happy with their iPhones and P&S cameras with auto-everything and tiny sensors. Other people believe that the more you understand about your tools the better you can use them, and that deep understanding comes through experimentation. I'm mostly in that camp. Those are the extremes, and it sounds like you are somewhere in between. That's great. If it works for you, keep at it. I'll applaud.

No, I'm all for testing and experimentation. It becomes a matter of time and necessity. Also perhaps a matter of convenience. If I can rely on the experience of a source that's dedicated to certain types of testing then it becomes a matter of 'why reinvent the wheel'. There's also the point of diminishing returns that has to be considered as well. I don't think some do that. Some feel that getting the nth to the 12th power degree of accuracy is important. And it may be to them but it has zero practical relevance.

Quote

Have you ever taught a workshop or course on a subject you know well? I have, and every time I'm surprised at how much I learn about the subject that I thought I knew cold. Trying to come up with simple explanations for complicated things makes me understand the complicated things more deeply. Student questions sometimes come out of left field; approaching the subject from a direction I'd never considered makes me dig deep and come up with a new way to think about the subject.

Many times. And I absolutely agree with what you're saying. It's a great aspect of the interchange between you and the people you're teaching. Authoring a book is much the same. I've had a number of useful and interesting comments from people who've read mine.

Quote

Testing's like that for me. Developing the test makes me think harder about what I'm testing for than just reading about it on the DxO site. When I go to do the testing, things never go exactly the way I thought they'd go, and I learn something from that. One of the things that I often do is perform tests many times and collect statistics on the results. I don't see that on test web sites very often. Having access to the statistics lets me figure out the accuracy, and even the statistical importance, of a result.

Sure, there are people who spend all their time testing and never make good pictures. They're not new; some of the Zone System acolytes did the same thing. I call it the sharpening pencils syndrome. But there are others who use their deep knowledge of the technology to make better pictures.

I understand.

Quote

Bob, I'm not trying to convince you to go with my approach. You've got something that works the way you want it to work, and that's perfect for you; I'm just trying to help you understand where I'm coming from.

Jim

And I'm not trying to criticise your approach. I'm trying to understand what you feel you gain from it. I'm trying to learn. I do that by asking questions and challenging points of view. I've learned some interesting things from this thread and the article at RAWDigger. So thanks.

One question. Why only the G channel? Why not the entire signal of the entire sensor?

What is the entire signal of the entire sensor? The fact is that you have 4 channels (typically). Sometimes the difference between them is just because of the CFA, but sometimes the manufacturer can indeed do different things for different sensels based on where under the CFA they are located, etc. So you can average the 4 channels, or use the strongest one (like G1/G2) for typical daylight. The RawDigger website has a forum; you can ask the authors directly, and they are quite prompt if a good question is asked.

The signal for the entire sensor would be the sum of the 4 channels, would it not?

Thinking about this testing a bit more, I'm wondering how it relates to the concept of the 'ISO-less' sensor that people talked about, mostly, when the D7000/K5D came out. The thought there was that it made no sense to increase ISO because of the essentially equal drop in drange for each 1 stop increase in ISO and that simply underexposing and pushing in the RAW converter would serve the same purpose. The D800 sensor behaves much the same way yet it appears that it does make sense to increase ISO up to a point. Is there conflict between the two schools of thought?

Why only the G channel? Why not the entire signal of the entire sensor?

Bob,

I thought that was a good question, so I looked at all three channels, plus their average, which I call the "white" channel for the purposes of the following graph. The fat lines are the means. The skinny ones are the +/- two standard deviations. The red, green, and blue channels are colored appropriately. The average or white channel is colored black. I only averaged one of the green channels, and ignored the other.

It looks to me as if it doesn't make a whole lot of difference which channel you pick. Some of the differences may be due to the color of the target, which was D65, more or less. With the D4, as with most cameras, that gives you a higher value in the green channels than in the red or blue ones.

Thinking about this testing a bit more, I'm wondering how it relates to the concept of the 'ISO-less' sensor that people talked about, mostly, when the D7000/K5D came out. The thought there was that it made no sense to increase ISO because of the essentially equal drop in drange for each 1 stop increase in ISO and that simply underexposing and pushing in the RAW converter would serve the same purpose. The D800 sensor behaves much the same way yet it appears that it does make sense to increase ISO up to a point. Is there conflict between the two schools of thought?

I don't think so. Above the Unity Gain ISO (or maybe a stop above that to make allowances for imperfect analog-to-digital converters), the camera is effectively ISO-less, and the ISO dial serves mainly for amusement of the photographer and getting the preview image to be bright enough to use for chimping.

Thanks. Nice article. Takes a technical concept and explains it such that an engineering degree isn't needed to decipher it. Too rare an occurrence. See the response directly above for the typical bit of bafflegarb.

One question. Why only the G channel? Why not the entire signal of the entire sensor?

What is the purpose of knowing the Unity Gain ISO? What practical purpose does it have? Similarly what practical purpose does knowing the full well capacity have? Are we able to use the information to determine exposure on the fly, in the field?

The "Real World" card has been played

The well-respected and knowledgeable Roger Clark seems to feel it has a practical purpose, at least in the comparison of camera performance:

I don't think so. Above the Unity Gain ISO (or maybe a stop above that to make allowances for imperfect analog-to-digital converters), the camera is effectively ISO-less, and the ISO dial serves mainly for amusement of the photographer and getting the preview image to be bright enough to use for chimping.

Jim

I think that makes sense, and your results seem to be consistent with what DxOMark shows on their Dynamic Range plot for the D4.

The plot for all 3 (or 4) colour channels is interesting too. I think illuminant type could play a part there. Perhaps, even if the illuminant is rated for a certain colour temperature, if it isn't full-spectrum that could make a difference.