The only way to discuss sensors objectively is to reference actual measurements. From the outcry of the D800 fanatics, one would think it actually had a really crappy sensor, given the way they demand images be normalized in size as the only form of objective comparison a sensor study could possibly endure. Ironically, the D800, D600, and D7000 all still outperform Canon sensors when you only look at the objective sensor measurements, so the inane debate about Print DR and its subjectivity is really just that...inane.

So you would compare noise energy at two different frequencies as if they were the same frequency???

Yes, if that is how you want to put it. To put it another way, I want to compare the true RAW, native output of two cameras, untainted by software algorithms. Software algorithms introduce a level of abstraction in the results, and algorithms have the potential to introduce bias into the results. I am also not interested, really, in "comparing noise energy". Noise is only one aspect of IQ, and generally speaking a minor aspect of IQ. Normalization is NOT a necessary process in order to compare sensors. Ironically, even in the ABSENCE of normalization, Sony Exmor sensors STILL come out on top...and THAT is what matters to me. That tells me that Canon's latest sensors, despite their significant pixel size advantage, do indeed need to have their read noise and pattern noise issues addressed...I didn't need some bias-introducing unknown software scaling algorithm and a quirky scoring system to tell me that in an exaggerated fashion. It was clear as day regardless of whether you "normalize noise frequencies".

You can go on and on page after post about this but you are just not getting this right at all.

You can tell me...and Neuro, and dozens of other intelligent people on these forums...that same thing till the cows come home. Just because you, LTRL, insist that we aren't getting something doesn't make it true. We ALL know exactly where you are coming from, and why you insist on normalization. To put it simply: we all reject your opinion just as much as you reject ours. And that opinion is that there is more to sensors and sensor comparison than doing so on a noise-normal baseline. There is far more information and knowledge to be gleaned if you examine the RAW, native hardware results for true measurements in addition to normalized results, especially if you do not unduly weight the normalized results over direct measurements. One has to recognize that normalization introduces some level of bias regardless of how it is done, and that it is not the single supreme test that tells you everything about a sensor (particularly the way DXO does it, which explicitly overweights "Print" results in its scoring system).

To put it another way, I want to compare the true RAW, native output of two cameras, untainted by software algorithms. Software algorithms introduce a level of abstraction in the results, and algorithms have the potential to introduce bias into the results.

Unfortunately there is not really any such thing as a 'true measure'. Probably the closest would be simply to list every single sensel value - all 20+ million of them - for a range of different inputs. There would be no algorithmic tainting, but equally no way to make sense of the numbers, let alone to make meaningful comparisons. And people would still complain that the original input scenes that were photographed were wrong somehow.

Like it or not, software algorithms are inherent in photography. Every time that you compress a JPEG you are using a proprietary and usually not-publicly-defined algorithm. Just the process of displaying the image on a monitor or rendering it on a printer involves hundreds if not thousands of such algorithms.

The figures that sites like DXO publish are fairly standard measures in engineering and use very standard and well understood algorithms. Although the final scores are obviously subjective, the actual measurement graphs are quite objective and use reasonably well defined measurements and algorithms - certainly compared to anything you see from Canon or Adobe.
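Those "standard measures" really are basic engineering. Per-pixel ("Screen") DR, for instance, is essentially log2(saturation level / noise floor). A minimal sketch of that arithmetic, assuming made-up full-well and read-noise electron counts (these numbers are illustrative, not measured values for any real camera):

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Per-pixel engineering DR in stops: log2(saturation / noise floor)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative electron counts only -- assumed numbers, not real measurements:
low_read_noise = engineering_dr_stops(full_well_e=44000, read_noise_e=3.0)
high_read_noise = engineering_dr_stops(full_well_e=68000, read_noise_e=33.0)

print(round(low_read_noise, 1))   # 13.8 stops
print(round(high_read_noise, 1))  # 11.0 stops
```

Note how the smaller full well with much lower read noise still yields the higher DR, which is the general shape of the Exmor-vs-Canon comparison in this thread.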

Of course a single noise figure does not give the entire story about a sensor (banding, for example), and DXO likely measure/analyse a whole lot more as part of developing their RAW converters. Thankfully, they do not publish these. Imagine the debates if they published noise spectra for sensors to quantify banding, for example.

Neuro is a very knowledgeable guy, and you're probably pretty bright too. And comparing noise at different resolutions makes no sense at all.

We're not comparing noise, though. IMAGE quality...not NOISE quality...IMAGE quality. An image constitutes far more than noise. And, for that matter, one cannot observe more than 10 stops at most, and 8 stops on average, of DR on a computer screen anyway. Trying to say one can objectively "see" more DR in a D800 image scaled down to 8mp vs. at its native 36.3mp is naive. If you have a computer analyze the information contained in an image, and it tells you "Well, yes sir, your 8mp image has an additional 1.2 stops of DR!", that may make you feel better, but it doesn't change how you OBSERVE the quality of the image on your screen. If you're the type that likes bragging rights...well, you would definitely have those too, as a single, subjective, scalar quantity to beat all your rivals over the head with when you and your buddies get together and start debating who has the better 8mp image.

I would also strongly offer that too much shadow pushing generally results in some funky-looking results. There are dozens of extreme shadow lift examples posted on DPReview forums, as well as elsewhere. The only ones that look good are those that only perform modest shadow lifting. When you see three, four, six stop shadow lifts the results tend to look like one of those poorly tone-mapped HDR images from the earlier days of PhotoMatix, with bad tonality and poor color fidelity in the shadows.

Additionally, on the notion of shadow pushing. Having extra DR in post is really about that...pushing shadows around without running into unsightly noise. First, I won't deny Canon has unsightly read noise at ISO 100 (and to some degree ISO 200). They need to fix it, no question. My arguments have always been about DXO's Print DR ratings though, and the notion that you gain unbounded amounts of DR by downscaling. In the case of the D800, you can supposedly gain more DR than would be allowed by a 14-bit ADC's quantization. Let's assume we can, and we scale a 36.3mp D800 image down to 8mp and save as a TIFF.

The moment you moved from RAW to TIFF, you lost the vast majority of the exposure-pushing power you had. Ever tried to push around a TIFF even by a moderate amount, let alone four or six stops? The results are far from pretty. You MAY, mathematically speaking, gain additional DR by downscaling...but it won't be as usable as the native hardware DR you had in the original RAW. Pushing around RGB triples rather than scalar native pixel values imposes significant limits. Mathematics aside, realistically, regardless of how much you normalize Gaussian (photon shot) noise (which I think needs to be called out as distinct from read noise, which exhibits in very different ways and may not "normalize" the same way), downscaling changes the entire nature of your photo's pixel structure. The results might look less noisy in the shadows, and the remaining detail might be cleaner and sharper, but you lost some key things in the process. The photo is no longer RAW, you no longer have all that extreme post-processing latitude you once had, and you lost a CONSIDERABLE amount of original full-size detail.

Taken out of context, reading DXO's results, one might think the D800 was capable of 14.4 stops of DR straight out of the box at native size, and that all of that extra DR would be clearly observable on screen and usable during post-processing...and that everything is as blissful and perfect as it can get. I just want to put things into the proper context so we can have an objective discussion about sensor performance in the real world. The only way I know how to do that is to refer to direct measurements of a sensor's actual hardware capabilities, regardless of how much noise there might be.

Out of the box, the D800 is capable of 13.2 stops of DR without clipping highlights or blocking shadows; that DR would give a user more leeway to push exposure around at native size in an ISO 100/200 RAW file than, say, a 5D III. Ironically, despite the fact that I've "unfairly" compared the D800 to the 5D III at a hardware level...the D800 still wins. Noise frequencies notwithstanding.

Although the final scores are obviously subjective, the actual measurement graphs are quite objective and use reasonably well defined measurements and algorithms - certainly compared to anything you see from Canon or Adobe.

You just hit the nail on the head, though. The "actual measurement graphs" (Screen Statistics) are indeed quite objective, and that has always been my point. I'll happily use the DXO Screen DR measurement to compare the hardware capabilities of sensors. The argument against my point is that DXO's measurements are useless as a mechanism for comparing sensors because they were not taken from normalized images. I believe that notion is fundamentally wrong.

Here's a different test for you -- sample it at 10MP, create another image at 40MP. Then compute the "blackpoint" (SNR = 0db) of the two images.

Now I take it that we agree that the images both have the same "true" dynamic range, but I put it to you that the measured blackpoint on the 40MP will be at a higher luminance level, and therefore the measured per-pixel dynamic range will be lower on the 40MP image.
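That proposed test can be run as a toy simulation. A sketch under strong assumptions (pure uncorrelated Gaussian read noise on a black frame; real read noise has pattern components that will not average this cleanly): block-average a "40MP" black frame down to "10MP" and compare the noise floors, which is where the SNR = 0 dB blackpoint sits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-signal ("black") patch with Gaussian read noise, standing in for
# the 40MP capture. Units are arbitrary; std of 4.0 is an assumed figure.
read_noise = 4.0
black_40 = rng.normal(0.0, read_noise, size=(2000, 2000))

# Crude 4:1 downsample by 2x2 block averaging -- the "10MP" version.
black_10 = black_40.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

# SNR = 0 dB means signal equals the noise floor, so the noise std IS the
# blackpoint in linear units.
bp_40 = black_40.std()
bp_10 = black_10.std()

print(bp_40)  # close to 4.0
print(bp_10)  # close to 2.0: averaging 4 pixels halves the std
print(np.log2(bp_40 / bp_10))  # close to 1.0 stop of extra measured DR
```

Under these assumptions the 10MP version measures a blackpoint roughly one stop lower, exactly the per-pixel DR difference being argued about.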

The second test is irrelevant because measuring the black point is not the same as measuring the floor of useful photographic detail. Assuming that it is leads to flawed extrapolations, such as DxO's 'normalized' DR graphs, or 9-stop scanned LF Velvia.

It's completely relevant, because it goes to the heart of the question, should you or should you not normalize ?

Suppose you do as you suggest, and scan at some resolution (say 40 megapixels), and then downsample to 10 megapixels. Just to make this simple, let's assume that it's some kind of test image like these transmission wedges you're so fond of.

Then you, dtaylor, are responsible for assigning a dynamic range score to both images (the 10 megapixel image and the 40 megapixel image).

Should the two images receive the same score or not ?

If you think the two images should receive the same score, you need to normalize.

As for what point the "floor" is for "useful photographic detail", that's a whole other can of worms. Suffice it to say, you haven't built any kind of case for why using the black point is not valid.

You just hit the nail on the head, though. The "actual measurement graphs" (Screen Statistics) are indeed quite objective, and that has always been my point. I'll happily use the DXO Screen DR measurement to compare the hardware capabilities of sensors. The argument against my point is that DXO's measurements are useless as a mechanism for comparing sensors because they were not taken from normalized images. I believe that notion is fundamentally wrong.

I think you're having trouble understanding some really fundamental concepts here, such as "objective", "hardware", and "measurement".

Let me pose a question -- suppose hypothetically, you have a 40mpx sensor and a 10 mpx sensor. The 10mpx sensor has a higher dynamic range per pixel. You could reasonably ask the question, if I downsampled (traded resolution for dynamic range), would I have more dynamic range in the 40mpx image ?

Yes, if that is how you want to put it. To put it another way, I want to compare the true RAW, native output of two cameras, untainted by software algorithms. Software algorithms introduce a level of abstraction in the results, and algorithms have the potential to introduce bias into the results.

This is not about using an elaborate "software algorithm". It is simple averaging, or statistics.

Quote

I am also not interested, really, in "comparing noise energy". Noise is only one aspect of IQ, and generally speaking a minor aspect of IQ.

DxO's dynamic range screen score is defined in terms of signal to noise ratio in the shadows. So it's not a "minor aspect" when we are discussing dynamic range.

Quote

You can tell me...and Neuro, and dozens of other intelligent people on these forums...that same thing till the cows come home. Just because you, LTRL, insist that we aren't getting something doesn't make it true.

We've regressed a bit apparently. Didn't we agree just a few posts back that you could increase dynamic range by downsampling, because it reduces the level of noise and therefore affects the blackpoint ?

As for Neuro, I discussed this extensively with him in this forum and while he pointed out that you can't really get more dynamic range than the number of bits in the ADC (this is far from obvious by the way), he has acknowledged that you can increase dynamic range by downsampling.

I think it's a little dangerous citing the "conventional wisdom" as if it were canon (with a little 'c'). Until quite recently, I'd watch everyone here pile on with the DxO bashing, and no-one really questioned this. The fact that it's been the conventional wisdom here for a long time doesn't really substantiate it.

Quote

And that opinion is that there is more to sensors and sensor comparison than doing so on a noise-normal baseline.

"Screen" dynamic range is also done using a "noise normal" baseline of sorts -- it's just that a different baseline is used (SNR = 0db per pixel, without adjusting for resolution)

Quote

One has to recognize that normalization introduces some level of bias regardless of how it is done,

You're really on the wrong side of your own argument for two (or more) reasons. One is that the "IMAGE" does not consist of a single pixel. The other is that NOISE is what defines the baseline (lower end of) DxO's dynamic range (both the screen and print scores).

So it's nonsense to pretend that you can have a discussion about the relative merits of screen vs print DR and ignore noise.

Quote

Trying to say one can objectively "see" more DR in a D800 image scaled down to 8mp vs. at its native 36.3mp is naive

You're on the wrong side of this argument too. Suppose you have to assign a "score" to the downsampled image and the original image. Do you think both images should get the same score ? If so, you should normalize. If not, you shouldn't normalize.

The overall impression of dynamic range won't change (after all, you were going to view the two images at the same size anyway), but the per pixel dynamic range changes considerably.

Quote

If you have a computer analyze the information contained in an image, and it tells you "Well, yes sir, your 8mp image has an additional 1.2 stops of DR!", that may make you feel better, but it doesn't change how you OBSERVE the quality of the image on your screen.

Well that's the thing -- if you don't normalize, you will find that sensors with lower resolution have more "dynamic range" as defined by saturation point / black point, even though when you view the two images at the same size (not 100% crops) on your screen, they appear to have comparable dynamic range.

As it's been explained, the choice of 8mpx as a "target" is arbitrary -- the point is to get everyone on the same playing field. The difference in dynamic range scores for two sensors does not change when you normalize to a different resolution. I also made the point that even though quantization puts a limit on your ability to get a darker blackpoint, it doesn't stop you making improvements at 5db, 10db, etc, so even if you hit the quantization limit, you do get an increase in usable dynamic range.
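The arithmetic behind that normalization can be made concrete. A sketch, assuming perfectly uncorrelated per-pixel noise (which real pattern noise is not): averaging k pixels cuts noise by sqrt(k), i.e. adds 0.5*log2(k) stops.

```python
import math

def print_dr_from_screen(screen_dr_stops, native_mp, target_mp=8.0):
    """Normalized ("Print"-style) DR, assuming uncorrelated per-pixel noise:
    averaging k pixels reduces noise by sqrt(k), i.e. +0.5*log2(k) stops."""
    return screen_dr_stops + 0.5 * math.log2(native_mp / target_mp)

# D800-like figures from this thread: 13.2 stops per-pixel DR at 36.3MP.
print(round(print_dr_from_screen(13.2, 36.3), 1))  # 14.3 -- close to the quoted 14.4
```

The point about the target being arbitrary falls out of the formula: changing `target_mp` shifts every sensor's normalized score by the same amount, so the differences between sensors are unchanged.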

You just hit the nail on the head, though. The "actual measurement graphs" (Screen Statistics) are indeed quite objective, and that has always been my point. I'll happily use the DXO Screen DR measurement to compare the hardware capabilities of sensors. The argument against my point is that DXO's measurements are useless as a mechanism for comparing sensors because they were not taken from normalized images. I believe that notion is fundamentally wrong.

I think you're having trouble understanding some really fundamental concepts here, such as "objective", "hardware", and "measurement".

Let me pose a question -- suppose hypothetically, you have a 40mpx sensor and a 10 mpx sensor. The 10mpx sensor has a higher dynamic range per pixel. You could reasonably ask the question, if I downsampled (traded resolution for dynamic range), would I have more dynamic range in the 40mpx image ?

You are asking a different question than I am asking, which may be where the problem lies. I am not interested in how much dynamic range the "image" has, at native size or downscaled, where image in this case is a digitized two-dimensional matrix of RGB pixels. Images are virtual constructs, and they can be manipulated in near-infinite ways with software, trading detail for DR or the other way around, removing noise with deconvolution, etc.

The question I am asking is, what is the "sensor" capable of? At what point is shadow detail completely overpowered by the electronic noise in the circuit, and at what point do my whites start clipping? In the physical hardware? The sensor itself isn't scalable...you can't halve or double its resolution or pixel pitch...it is a fixed construct. If you pointed the D800, the physical device, at a test scene containing something meaningful...a person, a landscape, whatever...with 14.4 stops of DR, it will, according to DXO's own Screen DR results, fail to capture all of the DR in that scene. The sensor is capable of 13.2 stops, so 1.2 stops worth of DR are going to be lost somewhere: either entirely to noise, or entirely to clipped highlights, or in some ratio between the two.

Let's assume, for the sake of discussion, that you take a photo anyway. Let's say you expose to preserve the highlights, right up to the limit (so the brightest swatch in your test scene is exactly at maximum saturation.) You've lost a lot of dynamic range, but not to "noise" in general...that is too broad a concept; there are a variety of types of noise. So let's be specific: you've lost a lot of dynamic range to the electronic noise present in the sensor's readout circuit, which interferes with shadow detail. When you actually expose, and convert the charge readings at each pixel via the ADC, you permanently lose some of the information that might potentially be recoverable from within that read noise, and diminish the rest. To be exact, about 0.4 stops (the 14.4 of our scene minus the 14.0 limit imposed by the ADC) worth of DR can be lost forever when the ADC digitizes the image, and 0.8 stops (the 14.0 clipping limit minus the 13.2 stops of DR the sensor is actually capable of) worth of DR will effectively be indiscernible from electronic noise...because each pixel is digitized as either pure read noise, a signal too low to discern from read noise, or a signal strong enough to differentiate it (even if only to a minuscule degree) from electronic noise.

You probably won't lose ALL of that 1.2 stops to noise, but you'll lose most of it, especially if your electronic noise is as low as it is in the D800. The remainder, where you still have a very low signal that might be just electronic noise or might be actual signal information...well, you could never really know for sure which it was (at least in the case of a signal floor of 0db...in a Canon sensor that uses a bias offset, you probably could discern a fair bit of image detail that was below the bias offset, and effectively within the range of its FPN and HVBN). Even if you convert the output RAW to TIFF and scale that TIFF image down to a quarter its original size...you still aren't gaining back that information; it was digitized (i.e. hard coded, permanently registered, whatever you want to call it) as either useful information representing your scene or non-useful information that might be noise or might be scene detail. (As a matter of fact, you're losing a lot more information than you originally lost to noise if you scale an image down that much, and while your black point might approach closer to zero, pixels that constitute the lowest decibels of your signal won't be any more meaningful than they were before.)

Perhaps the argument just hasn't been made properly. It may be that dtaylor put it into better words than I, although I am pretty sure I've used the same terms and concepts he has in the past. Let me try to state it in different terms that might be more meaningful.

Dynamic Range as a simple score is generally meaningless. DR that might be gained in the process of downscaling an image, at least to me, still feels rather meaningless...I understand what you (elflord) are saying when you state "you can still gain at 5db, 10db, etc."...but that is in relation to general noise caused by the random physical nature of light, and applies at all levels to all cameras. (Read noise, however, exists only in the shadows, and exhibits in a different way than photon noise, so the same simplistic averaging rules you apply to photon noise may not apply to less random forms of electronic noise.)

As a tool to gauge how much detail you WILL NOT LOSE to ELECTRONIC NOISE (vs. photon shot noise) if you expose a scene of known dynamic range with a sensor of known dynamic range, assuming you expose to maximize the retention of detail from the deep shadows to the brightest highlights...I believe such a DR measurement is very meaningful. That is what I refer to as Hardware DR, or what DXO calls Screen DR. It may be more accurately termed ADC DR, since it is really the ADC that imposes the limiting factor. Hardware DR tells you that even though your computer screen can only display 8 stops worth of DR when rendering your photos, if you exposed properly, you could push around about 5.2 additional stops worth of useful, detail-retaining information (assuming a D800), and "recover" information that otherwise might simply look like pure black or pure white on your screen.

Either way...once your otherwise fluid and easily redistributable image signal on the sensor hits the ADC, the non-discrete or effectively "analog" signal is quantized. Potentially "recoverable" detail (i.e. had you changed exposure via shutter speed or aperture) well below the noise floor is permanently lost as the electronic noise in each pixel is permanently recorded, and no amount of post-processing will recover what you lost (although if what you call a gain in DR is simply making black pixels blacker and/or white pixels whiter, regardless of whether doing so actually increases the amount of meaningful information those pixels contain...I guess that's something...)

Well, either you understand that, or you don't. Either way, these conversations (in multiple threads) have gotten well out of hand, and I don't want to keep contributing to that. So I'm out.

After elflord's excellent description - for which I thank him again - I can accept the increase in DR when downsampling.

I will also accept that Nikon has greater DR, but we have to think of it either as a 36Mpixel camera with 13.2 stops DR or as an 8Mpixel camera with 14.4 stops DR (which makes it practically a different camera; otherwise someone would downsample to, what, 2Mpixels? Wouldn't that increase DR to say ... 15 stops?)

It is NOT a 36Mpixel camera with 14.4 stops DR at max resolution (we cannot have our cake and eat it!) So if we want to think of the D800 for what it mainly is (a 36Mpixel camera) we have to think of it as a 13.2-stop DR camera. (This does not reduce the DR capabilities of the D800, of course).

P.S Too much trouble and fights. What is a 1.2 stops difference in DR between forum members?
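The 2Mpixel question above has a back-of-envelope answer, assuming the usual sqrt-of-pixel-count averaging rule (which ignores pattern noise and any ADC ceiling, both disputed in this thread):

```python
import math

# Extending the sqrt(k) averaging rule down to 2MP. Assumption: noise is
# uncorrelated, which banding/pattern noise in real sensors is not.
screen_dr = 13.2  # stops at the native 36.3MP, figure quoted in the thread
gain = 0.5 * math.log2(36.3 / 2.0)
print(round(screen_dr + gain, 1))  # 15.3 -- so "say ... 15 stops" is about right
```

So under that rule the guess of ~15 stops at 2Mpixels is roughly what the arithmetic gives.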
I had fun reading all these replies. At least it's an education of some form, though I can't really see the effects of these things in my usual photography. I must not be harnessing my camera enough. I need to shoot more. Let's shoot more.

After elflord's excellent description - for which I thank him again - I can accept the increase in DR when downsampling.

I will also accept that Nikon has greater DR, but we have to think of it either as a 36Mpixel camera with 13.2 stops DR or as an 8Mpixel camera with 14.4 stops DR (which makes it practically a different camera; otherwise someone would downsample to, what, 2Mpixels? Wouldn't that increase DR to say ... 15 stops?)

It is NOT a 36Mpixel camera with 14.4 stops DR at max resolution (we cannot have the cake and eat it!) So if we want to think of D800 for what it is mainly (a 36Mpixel camera) we have to think of it as a 13.2 stop DR camera. (This does not reduce the DR capabilities of D800 of course).

P.S Too much trouble and fights. What is a 1.2 stops difference in DR between forum members?

The intuition is about right, though it's really best to think of it as 13.2/36mpx, and then think of 14.4/8mpx as where you'd go if you could extend the DR/resolution tradeoff all the way down to 8mpx. It's useful when you want to compare two cameras with different resolutions.

The full story is a bit thornier than that. You can't push the black point more than 14 stops lower than the saturation point, but you do keep getting cleaner shadows as you downsample (e.g. down to, or even lower than, 8mpx), which means more usable DR.
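A simplified model of that cap can make the disagreement visible. This sketch treats the 14-bit limit as a hard floor on the blackpoint and deliberately ignores the continuing shadow-cleanup gains just described; note that DXO's published 14.4 for the D800 exceeds this cap, which is exactly the point under debate in this thread.

```python
import math

def capped_normalized_dr(screen_dr, native_mp, target_mp, adc_bits=14):
    """sqrt(k) averaging gain from downsampling, capped by the ADC:
    the blackpoint can't sit more than adc_bits stops below saturation.
    A simplified model of one position in the thread, not DXO's method."""
    dr = screen_dr + 0.5 * math.log2(native_mp / target_mp)
    return min(dr, adc_bits)

print(round(capped_normalized_dr(13.2, 36.3, 8.0), 1))   # 14.0 -- gain hits the cap
print(round(capped_normalized_dr(13.2, 36.3, 36.3), 1))  # 13.2 -- no downsampling
```

Under this model the D800's normalized figure saturates at 14.0 rather than 14.4, which is where the "can you exceed the ADC's bit depth" argument begins.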