The plot for all 3 (or 4) colour channels is interesting too. I think illuminant type could play a part there. Perhaps, even if the illuminant is rated for a certain colour temperature, its not being full-spectrum could make a difference.

Bob,

Yes, but it's almost certainly a second- or third-order effect. In my testing, the calculated Unity Gain ISO is only weakly sensitive to the histogram mean. Since all the underlying sensor cells are statistically the same no matter what color filter in the CFA is over them, the effect of various illuminant spectra is to change the relative histogram means of the various color channels.

By the way, figuring out the details of this has dramatically changed the way I work in dim light. I use manual exposure almost all the time where I usually used aperture-preferred with adjustments to the exposure compensation dial. I use the histogram mainly to figure out how much noise I'm going to have to deal with in the file. I leave the exposure the same for many pictures when before I would be constantly changing it. It feels very different to me.

And the problem with that is.....? I wasn't saying that the concept of unity gain ISO isn't important. But there are a lot of things that get studied in a lab that have little to no practical application (at least at this time). Most of us aren't lab rats, however. Most of us are 'in the field' photographers, so it makes complete sense to understand how a given lab or theoretical test plays out in practical use. As has been laid out over the course of this discussion, there definitely are practical implications. And the findings that Jim has laid out and explained are consistent with other sources considered reliable, which simply lends increased credibility to both. There's also something to be said for being able to explain a technical or theoretical construct and make it more widely understood. Some are unable to do that because they don't really understand the underlying technicalities themselves. Others won't do that because they think it gives them some measure of superiority over others; it makes them feel elite or special.

By the way, figuring out the details of this has dramatically changed the way I work in dim light. I use manual exposure almost all the time where I usually used aperture-preferred with adjustments to the exposure compensation dial. I use the histogram mainly to figure out how much noise I'm going to have to deal with in the file. I leave the exposure the same for many pictures when before I would be constantly changing it. It feels very different to me.

Jim

I can definitely appreciate that. It's not something I run into often, but it's something I'm looking at more closely now with a D800, because it has an ISO/DRange curve shape that's very much linear, like the D7000's.

My response was to your earlier post:

Quote

What is the purpose of knowing the Unity Gain ISO? What practical purpose does it have? Similarly what practical purpose does knowing the full well capacity have? Are we able to use the information to determine exposure on the fly, in the field?

The problem with that, if you must know, is:

A gentleman puts a lot of research and work into a post, which he publishes on what I thought was a forum where there is at least some slight interest in matters technical. Your quote immediately above decries the usefulness of the OP with a series of demeaning rhetorical questions. Your post offered nothing other than the negative implication that the OP is of no practical use, and thereby effectively dismissed it as useless.

The "Real World" card was played in the last line " . . to determine exposure on the fly, in the field . . "

Perhaps the problem wasn't what was said, it was how it was said. And, I must confess to a certain sensitivity in the area of this topic. I once calculated the saturation-based "native" ISO of a Sigma DSLR sensor, posted it with formulae and quotes from the ISO standard on a Sigma forum, thinking it might be of interest, and was promptly beaten up for being too technical.

Ted

I suggest that your past experience is clouding your perception of the current situation. I know full well what your initial response was in reference to. My questions were not rhetorical, nor demeaning. I didn't say his work was of no practical value. I asked what it all meant in practical terms. Because, as I noted, how or if theoretical constructs impact practical photography is of relevance. Through the course of the thread, we've found out how Jim is using the information for practical purposes in his 'real world' photography. How can that be anything but a positive? I also indicated that I've been thinking about the related concept of the ISO-less sensor because I now have a camera that exhibits that trait. As far as playing the 'real world' card, you still haven't answered my question of why that is unimportant. Do you disagree with my reasons why it is important?

I'll ask you the same question: What practical application does your analysis of the native ISO of a Sigma sensor have? How can photographers use the information in the field to advantage?

but the processing by the camera's hardware (including on-chip/off-chip ADCs) and firmware might not be the same for each channel

I'd never considered that possibility. Do cameras really have differential processing of the three (well, four) color planes of a raw file? If so, how can you find out if that's the case in a given camera? There's one case I have heard of, and that's multiplicative scaling of the values in a given color plane after the ADC, with the evidence being missing codes in that channel. Compression like Sony uses could, I suppose, cover up that evidence, or at least muddy the waters. I don't think such scaling would have much effect on the test results of the algorithm I'm proposing, unless the exposure gets really low, but I haven't done any testing, since I don't have a camera that I know does that kind of thing.
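If you want to hunt for that kind of post-ADC scaling yourself, one black-box check is to histogram a single raw channel and look for integer codes that never occur. Here's a sketch in pure simulation; in practice you'd feed it a raw color plane extracted with a tool like RawDigger or the rawpy package (the scaling factor below is made up for illustration):

```python
import numpy as np

def missing_codes(channel, lo=None, hi=None):
    """Return raw codes that never occur between the channel's min and max.

    Multiplicative scaling after the ADC stretches the code spacing, so
    some integer values become unreachable and show up as histogram gaps.
    """
    channel = np.asarray(channel).ravel()
    lo = channel.min() if lo is None else lo
    hi = channel.max() if hi is None else hi
    counts = np.bincount(channel - lo, minlength=hi - lo + 1)
    return [lo + i for i, c in enumerate(counts) if c == 0]

# Simulate a channel scaled by 1.25 after digitization: some codes are
# never a rounding target, so they never appear in the file.
rng = np.random.default_rng(0)
raw = rng.integers(100, 200, size=100_000)
scaled = np.round(raw * 1.25).astype(int)
gaps = missing_codes(scaled)
print(len(gaps) > 0)   # True: the gaps betray the post-ADC scaling
```

An unscaled channel at reasonable exposure should show essentially no gaps, so a run of evenly spaced missing codes is the tell.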

As far as playing the 'real world' card, you still haven't answered my question of why that is unimportant. Do you disagree with my reasons why it is important?

I guess not. I just didn't like those non-rhetorical, non-demeaning questions, when it comes right down to it. Should have kept my opinion to myself, and I'll say no more about it if you won't.

I'll ask you the same question: What practical application does your analysis of the native ISO of a Sigma sensor have? How can photographers use the information in the field to advantage?

I can answer without really going OT. I will assume that we are familiar with ISO 12232:2006. It gives several legal methods of determining a value to be shown or selected on a camera. Some of the methods allow the camera [manufacturer] considerable latitude in these values, provided the word "equivalent" appears somewhere in the product, perhaps buried deep in the manual. And some manufacturers are said to take liberties even with that, according to fora elsewhere. On the camera itself, of course, we will only see 'ISO', unless anyone knows of a camera which says 'SOS' or 'REI' right there on the LCD or knob or whatever. Thus we are lulled into thinking that a setting of 100 gives exactly the same exposure for any camera (all other things being equal).

It is possible to calculate or, for that matter, test as per the OP to determine a sensor's saturation-based ISO value (Ssat), independently of what the camera says. If you then know (up front) how much over or under your camera is, you can dial in exposure compensation such that the camera 'tells the truth', which would be of practical benefit to those seeking, for example, to ETTR. Or to those whose images come out less exposed or more exposed than they would like. Or to those who want to push the envelope a bit, hoping to claw back some highlight details in post.

I would say that is both useful and practical to know that a camera has an Ssat of, say 130, when the LCD 'ISO' says 100. Feel free to disagree.
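For what it's worth, the correction Ted describes is just a log ratio. A quick sketch with his 130-versus-100 numbers (the helper function is mine, not his):

```python
import math

def iso_error_in_stops(marked_iso, measured_ssat):
    """How far the marked ISO sits from the measured saturation-based
    speed, in stops. Positive means the sensor saturates later than the
    marking implies, i.e. there is headroom you can claw back."""
    return math.log2(measured_ssat / marked_iso)

# A camera whose LCD says ISO 100 but whose measured Ssat is 130:
comp = iso_error_in_stops(100, 130)
print(round(comp, 2))  # about +0.38 stop of headroom available for ETTR
```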

I would say that you're doing something that people have done for decades. That is, essentially, verifying the accuracy of the exposure system in the camera? How are you isolating ISO from the other components of the exposure? What you're effectively doing, it seems, is verifying the accuracy of the metering system. Or are you using a hand held meter to determine exposure? If the latter, then back to the earlier question of how you're separating ISO from shutter speed and aperture. How do your figures compare with DxO's? And how would you account for any differences between your numbers and theirs? How are you determining that 'saturation-based ISO'? What is the methodology? While I understand the practical relevance of what you're doing, it still raises many questions. And dressing it up in pseudo-buzzwords like Ssat, or even 'saturation-based ISO', as opposed to something like 'the point at which overexposure occurs' is going to turn some people off. This is back to the point I made yesterday. There is something to be said for being able to explain technical constructs in a way that they are more widely understood. The more people who understand, the better it is for everyone, no? The more people who understand, the more it can foster discussion. Can you explain your methodology in such a way?

What you're effectively doing, it seems, is verifying the accuracy of the metering system. Or are you using a hand held meter to determine exposure? If the latter, then back to the earlier question of how you're separating ISO from shutter speed and aperture.

No, in most cases you are verifying the rating of the sensor. In assigning an ISO to the sensor, manufacturers allow widely differing amounts of headroom for the highlights, and this is the major variable. Light meters are fairly well standardized according to ISO 2720:1974. My personal experience is with Nikon, and their meters are usually spot on. Otherwise, the use of a hand held meter would give different results from the built in meter. These considerations are discussed in depth in articles by Doug Kerr on his web site.

How do your figures compare with DxO's? And how would you account for any differences between your numbers and theirs? How are you determining that 'saturation-based ISO'? What is the methodology? While I understand the practical relevance of what you're doing, it still raises many questions. And dressing it up in pseudo-buzzwords like Ssat, or even 'saturation-based ISO', as opposed to something like 'the point at which overexposure occurs' is going to turn some people off. This is back to the point I made yesterday. There is something to be said for being able to explain technical constructs in a way that they are more widely understood. The more people who understand, the better it is for everyone, no? The more people who understand, the more it can foster discussion. Can you explain your methodology in such a way?

The saturation-based speed is Ssat = 78·A² / (q·Lsat·t), where A is the aperture, q is a constant, Lsat is the luminance in cd/m² required for saturation, and t is the integration time (shutter speed). On modern digital cameras the apertures and shutter speeds are quite accurate. Most photographers do not have a photometer to measure the luminance, but as mentioned above, the camera meters are usually within spec and the meter reading can be used with confidence. An exposure according to the saturation standard should yield approximately 12.7% sensor saturation, and this is easy to verify using RawDigger or a similar tool. Why complicate things and throw up all your disclaimers?
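For anyone who wants to plug in their own numbers, here is a small sketch of that formula. The f/8, 1/100 s, and 6000 cd/m² figures below are purely illustrative; q = 0.65 is the commonly quoted nominal value from the standard:

```python
def saturation_speed(aperture, shutter_s, l_sat, q=0.65):
    """ISO 12232 saturation-based speed: Ssat = 78 * A**2 / (q * Lsat * t).

    aperture  -- f-number A
    shutter_s -- integration time t in seconds
    l_sat     -- luminance in cd/m^2 that just saturates the sensor
    q         -- lens/geometry factor (0.65 is the nominal value)
    """
    return 78 * aperture**2 / (q * l_sat * shutter_s)

# Illustrative numbers: f/8, 1/100 s, and a sensor that just saturates
# at a luminance of 6000 cd/m^2 under those settings:
print(round(saturation_speed(8, 1 / 100, 6000)))  # an Ssat of 128
```

Note the direction of the dependencies: a stopped-down aperture or shorter shutter raises the luminance needed to saturate, so the computed speed rises with A² and falls with t.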

No, in most cases you are verifying the rating of the sensor. In assigning an ISO to the sensor, manufacturers allow widely differing amounts of headroom for the highlights, and this is the major variable. Light meters are fairly well standardized according to ISO 2720:1974. My personal experience is with Nikon, and their meters are usually spot on. Otherwise, the use of a hand held meter would give different results from the built in meter. These considerations are discussed in depth in articles by Doug Kerr on his web site.

This can happen. It's not uncommon. You automatically assume that people know who Doug Kerr is? This is more of the same obfuscation. But just so we're all sure, I'm guessing this is the Doug Kerr you're referring to. This is an excerpt from a comment he made on another forum a number of years ago: "I shoot mostly in P, beacuse [sic] in most cases I can't imagine what I know about scene brightness metering and setting exposure parameters that 300 engineers in Japan don't." Not a source of information I'd put a great deal of confidence in.

where A is the aperture, q is a constant, Lsat is the luminance in cd/m2 required for saturation, and t is the integration time (shutter speed). On modern digital cameras the apertures and shutter speeds are quite accurate.

Actually not as accurate as you might think. 'Aperture flicker' is a big annoyance to timelapse shooters, and it can make a significant difference from shot to shot in a clip. And yes, I can go and read the articles on the DxO site. But from his explanation I have no idea whether Ted is using the same methodology or not. So I asked. 'q' is a 'constant'? What is it? What constant? What's the figure? It really isn't a true 'constant' though, is it? It's going to vary from lens to lens, right? Where do the T and v values come from in determining q? If I ask why '78', or where that number comes from, you'll decry that I'm throwing up roadblocks, when in actual fact that's not the case at all. Understanding how a formula is derived, and where its inputs come from, is as important as just plugging numbers into it.

Quote

Most photographers do not have a photometer to measure the luminance, but as mentioned above, the camera meters are usually within spec and the meter reading can be used with confidence. An exposure according to the saturation standard should yield approximately 12.7% sensor saturation, and this is easy to verify using RawDigger or a similar tool. Why complicate things and throw up all your disclaimers?

Regards,

Bill

I'm not complicating things. I'm trying to simplify, get past the bafflegarb and get to the heart of the matter. That you see my questions as 'disclaimers' is your problem.

Actually not as accurate as you might think. 'Aperture flicker' is a big annoyance to timelapse shooters and it can make a significant difference from shot to shot in a clip.

Do you have any data on this? I do have data for the Nikon D800e using various shutter speeds and a constant aperture of f/8. The coefficient of variation is 1.0%, indicating a high degree of reproducibility for both the aperture and the shutter speed.

And yes, I can go and read the articles on the DxO site. But from his explanation I have no idea whether Ted is using the same methodology or not. So I asked. 'q' is a 'constant'? What is it? What constant? What's the figure? It really isn't a true 'constant' though, is it? It's going to vary from lens to lens, right? Where do the T and v values come from in determining q? If I ask why '78'? Or where that number comes from you'll decry that I'm throwing up roadblocks. When in actual fact that's not the case at all. Understanding how a formula is derived or where the inputs come from is as important as just plugging numbers into it.

It would be good if you did a bit of reading before asking questions about data that are readily available and understood by photographers with a technical bent. The derivation of 78 is addressed, along with other issues, in an excellent article on Wikipedia. The factors in determining q are also discussed, and they do include an assumed T value for the lens. With TTL metering, the T factor is taken into account. For practical photography, one is really interested in the total system response, which includes the lens, sensor, light meter, exposure mechanism of the camera, and the rendering software.

Do you have any data on this? I do have data for the Nikon D800e using various shutter speeds and a constant aperture of f/8. The coefficient of variation is 1.0, indicating a high degree of reproducibility for both the aperture and the shutter speed.

I, and many others who do timelapse, have ample empirical evidence. That empirical evidence is supported by the methods used to eliminate the problem. Photography isn't all about data; more importantly, it's about results.

Quote

It would be good if you did a bit of reading before asking questions about data that are readily available and understood by photographers with a technical bent. The derivation of 78 is addressed along with other issues in an excellent post on Wikipedia. The factors in determining q are also discussed and they do include an assumed T value for the lens. With TTL metering, the T factor is taken into account. For practical photography, one is really interested in the total system response that includes the lens, sensor, light meter, exposure mechanism of the camera, and the rendering software.

Regards,

Bill

Why? I read something someone posts that raises questions in my mind. What is so wrong about asking questions? What are you afraid of? Why are you staunchly opposed to questions?

Thank you for making my point. The lexicon comes from a highly technical document. It's intended to be read by people with a highly technical background. Here's the thing: not everyone has that highly technical background. Read what I noted earlier about the (rare) ability to translate technical constructs into more easily understood text. If you had done that, perhaps people wouldn't have responded so harshly, according to you.

At a 16-bit DN of about 16000, the SD is contaminated by PRNU. The observed SD is 156 and the corrected SD is 144. At a 16-bit DN of around 4000, the observed SD is 73.8 and the corrected SD is 72.3.

Bill,

I started down the road of creating a model in Excel to help me sort this out. It got messy fast, because a few hundred pixels in a simulated test image is not enough to have stable expected values (I'm using the term in the mathematical sense). I've decided that what I should do is create a camera model using a real programming language. I've decided on Matlab. If I do it right, I should be able to extend it to aliasing and demosaicing studies. This is not going to be an afternoon job, so be patient.

One thing I could use is information that would help me model pixel response non-uniformity. I'd be grateful for any pointers you might give me.

Jim,

The best answer I can give for determining PRNU is that PRNU is proportional to the signal, so it increases along with shot noise as exposure increases. Unless you are doing exposures over 1 second or so, you can probably ignore thermal noise. Once you have determined the shot noise from duplicate exposures, PRNU can be estimated by subtracting in quadrature the shot noise and read noise (which is not significant at higher exposures) from the total noise. Have you seen the excellent treatise on noise by Emil Martinec? If not, it is worth reading. I presume that you have read Roger Clark's post on sensor analysis for the Canon 1DM2. It also has good information.
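That quadrature recipe can be sketched like this, with simulated flat frames standing in for real duplicate raw exposures. The 0.5% PRNU and 40,000 e- signal are made-up numbers, and read noise is left out, since at this exposure it is insignificant anyway:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two duplicate flat-field exposures: mean signal in electrons, a fixed
# per-pixel PRNU gain pattern, plus shot noise that is fresh each frame.
mean_e = 40_000.0
prnu = 1.0 + 0.005 * rng.normal(size=(400, 400))   # 0.5% PRNU, frozen pattern
frame_a = rng.poisson(mean_e, prnu.shape) * prnu
frame_b = rng.poisson(mean_e, prnu.shape) * prnu

# Temporal (shot) noise: the PRNU pattern cancels in the difference of
# duplicates, and the difference of two frames carries sqrt(2) x the noise.
sigma_temporal = (frame_a - frame_b).std() / np.sqrt(2)

# Spatial noise of a single frame includes PRNU; subtract in quadrature.
sigma_total = frame_a.std()
sigma_prnu = np.sqrt(sigma_total**2 - sigma_temporal**2)

print(round(sigma_prnu / mean_e, 3))   # recovers roughly the 0.5% we put in
```

With real raws you would also subtract the read noise (measured from dark frames) in quadrature, as Bill describes.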

Again, thanks for taking the trouble to perform and write up your excellent investigations and let us know what you find in future tests.

The full-well capacity is a pretty darned good indicator of the dynamic range of the camera. It's a nice thing to know when you're trying to decide what camera to buy, or what camera to use for a particular job.

Once you've purchased the camera and are using it, you might use the dynamic range of the camera to determine when you need to use HDR, averaging, or similar techniques to get more shadow detail. You can't do that directly from the full-well capacity, but you could take the log base 2 of the full-well capacity, and subtract 4 to 7 stops (some people say you need 100 electrons for photographic quality, and that's a tad under two to the seventh) to account for the signal-to-noise ratio (SNR) you want in the shadows, and what's left would be the approximate difference, in stops, between the highlights and the shadows-with-detail (Zone II or III).
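In code form, that back-of-the-envelope arithmetic looks like this (the 100-electron shadow floor is the figure quoted above; the full-well number is illustrative):

```python
import math

def usable_stops(full_well_e, shadow_floor_e=100):
    """Approximate stops between saturation and the dimmest acceptable
    tone, taking the shadow floor as a minimum electron count (100 e- is
    the 'photographic quality' figure, a tad under 2**7)."""
    return math.log2(full_well_e) - math.log2(shadow_floor_e)

# A 100,000 e- full well with a 100 e- shadow floor:
print(round(usable_stops(100_000), 1))   # about 10.0 stops
```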

Here's the graph with a log base 2 vertical axis to make it easy for you to do the math in your head:

This ignores dark noise, read noise, and other things that affect the shadows but not the light tones. It also ignores resolution, and you can decrease noise in an image by rezzing it down. In practice, I've found the D4 and the D800 to give similar noise performance at similar resolutions. If we compute the dynamic range by averaging the photosites to get to 12 megapixels for each camera, we see that, except for the M9, the size of the sensor pretty much determines the dynamic range:
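The rezzing-down effect is easy to quantify if you assume the averaged pixels carry independent random noise: averaging k pixels cuts the noise by the square root of k, which is worth half a stop of dynamic range per doubling of the averaging. A sketch (the helper is mine, not Jim's):

```python
import math

def dr_gain_stops(native_mp, target_mp):
    """Dynamic-range gain from averaging down to a lower resolution:
    averaging k pixels cuts random noise by sqrt(k), i.e. 0.5*log2(k) stops."""
    return 0.5 * math.log2(native_mp / target_mp)

# Normalizing a 36 MP file to 12 MP:
print(round(dr_gain_stops(36, 12), 2))  # about 0.79 stop
```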

Have you seen the excellent treatise on noise by Emil Martinec? If not, it is worth reading.

I have seen it, but I initially thought that I could do a black-box analysis. I now see that's not going to work very well, and I need to peek beneath the covers. My worry is that, as I find out more about the underlying processes, the model will need to have a bunch of special cases (look at the way the M9 results are outliers), and that it will turn into something as Byzantine as pre-Copernican astronomy, to mix a few metaphors.

If somebody has attempted this before me, and they have a class hierarchy that they're happy with, it would save me from reinventing the wheel. I would hope that classes and methods should in this case be substantially independent of programming language (assuming OOP).

I created a camera model in Matlab. I modeled pixel response non-uniformity (PRNU) by creating a matrix the same size as the image, populating that matrix with a normal distribution centered around unity, and, for each simulated exposure, multiplying the electron count (including photon noise) by the matrix. Then I made 100 simulated exposures, took the central 200x200 pixels, and computed the Unity Gain ISO using the algorithm in the OP. I did that for test ISOs from 100 to 6400, and for exposures from Zone VII (ADC value of 8000) to Zone III (ADC value of 500). Here's what I got:

Here are the parameters of the camera that I simulated, which was modeled loosely on the Nikon D4. Full well of 100,000 electrons. Base ISO 100. Fourteen bit ADC. (those three parameters taken together work out to a UG ISO of 610.39.) PRNU standard deviation of 0.002. Quantum efficiency of one. I have not yet modeled read or dark noise, which will probably keep the Zone III line from looking as good as it does in this set of curves.
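For anyone who'd rather follow along in Python than Matlab, here is a rough sketch of a simulation along the lines Jim describes. The parameters follow his list; the quantization details are my guess at the implementation, not his code:

```python
import numpy as np

rng = np.random.default_rng(42)

FULL_WELL = 100_000          # electrons
BASE_ISO = 100
BITS = 14                    # fourteen-bit ADC
PRNU_SIGMA = 0.002
QE = 1.0                     # quantum efficiency

# Fixed PRNU pattern: a per-pixel gain, normally distributed around unity.
shape = (200, 200)
prnu = 1.0 + PRNU_SIGMA * rng.normal(size=shape)

def simulate_exposure(mean_electrons, iso):
    """One simulated raw capture: photon (Poisson) noise, then the frozen
    PRNU gain, then ISO amplification and quantization by the ADC."""
    electrons = rng.poisson(QE * mean_electrons, shape) * prnu
    gain = (2**BITS / FULL_WELL) * (iso / BASE_ISO)   # DN per electron
    return np.clip(np.round(electrons * gain), 0, 2**BITS - 1)

dn = simulate_exposure(mean_electrons=20_000, iso=100)
print(round(dn.mean()))   # mean DN, about a fifth of full scale here
```

Repeating `simulate_exposure` over the ISO and exposure grid and feeding the frames to the OP's algorithm reproduces the kind of sweep described above.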

So, you were right about cutting down the exposure when you're using this algorithm to compute Unity Gain ISO.