I think this is a critical point, and possibly the root of the contention among the "DXO DR naysayers." As someone who prints a lot myself, perhaps I can offer some insight. Assuming you print at native resolution, printing does not average the original amount of information into something less. .../cut/...

Printing is NEVER a pixel > pixel matter: the screening of the print may be at 2400dpi even though you're printing at 300dpi. Otherwise you would need 16 million differently colored inks to get a full hue/tone presentation. Since you "only" have between four and twelve possible inks (we have 10-pigment printers, with two neutral densities) - and in some head models maybe two different ink spot sizes - you need a variation of [16x10^6 / 8] over area, dithered, to get full tone resolution. I set the divisor to eight as an average of the number of individual inks available in a modern printer. How the screening process (dither or a complete image RIP) is done determines how well the printer handles detail per mm - or in the equivalent measurement set, MTF per dpi.
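The screening arithmetic above can be sketched in a few lines (the 2400 dpi / 300 dpi and eight-ink figures are taken from the text; everything else is illustrative):

```python
# Rough halftone-screening arithmetic, using the figures from the post.
screen_dpi = 2400   # droplet grid of the print head
image_dpi = 300     # pixel density the image is printed at
inks = 8            # rough average count of distinct inks, per the text

dots_per_pixel = (screen_dpi // image_dpi) ** 2   # an 8x8 droplet cell per pixel
print(dots_per_pixel)                             # 64 droplet positions per pixel

# Tones the dither must approximate, per the [16x10^6 / 8] estimate:
tones_needed = 16_000_000 / inks
print(tones_needed)   # 2,000,000 - far more than one cell can encode, which is
                      # why the variation must be spread "over area" by dithering.
```

The point the numbers make: a single 8x8 cell cannot come close to two million tones, so the screening has to trade spatial resolution over neighbouring cells for tonal resolution, exactly as described.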

There are other problems with DXO calling their rated DR "Print DR", though. Assuming you are using a godly form of paper, such as Innova FibaPrint Gloss, which has a dMax of over 2.7, you might be able to get 7 stops or so from a print. Your average fine art print paper has a dMax ranging from around 1.3-1.5, up to 1.75 or so for some of the more recent higher-end fine art papers. That gets you maybe 5-6 stops of DR.
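For reference, optical density (a log10 quantity) converts to stops (log2) as dMax / log10(2). A quick sketch; note these are theoretical ceilings, and real prints land a stop or two lower, consistent with the figures quoted above:

```python
import math

def density_to_stops(dmax):
    """Convert a paper's maximum optical density (log10 contrast) to stops (log2)."""
    return dmax / math.log10(2)

for dmax in (1.3, 1.5, 1.75, 2.7):
    print(f"dMax {dmax}: {density_to_stops(dmax):.1f} stops (theoretical ceiling)")
```

So a dMax 1.3-1.75 paper tops out around 4.3-5.8 stops on paper (matching the "maybe 5-6 stops" above), and even a dMax 2.7 paper's ~9-stop ceiling shrinks to the quoted ~7 stops once viewing conditions are accounted for.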

Don't make the mistake of mixing up tone resolution with DR. They are never the same in any practical application. Example:

A certain measurement has a DR of 10:1 (say a measurement range of 1 to 10) and a resolution of "1" in that range. Ten discrete steps can be clearly differentiated in the original.

A certain presentation type has a linear output range of 10-20. That gives a DR of only "2", since 20 is only two times as much as 10. But the presentation still has ten discrete levels very clearly distinguishable from each other, meaning that the tone resolution hasn't changed. On visual inspection, you haven't limited the measurement DR, just shifted the base point (and lowered the detail MTF, of course).
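The distinction can be checked numerically; a sketch mirroring the 1..10 vs 10..20 example above:

```python
# DR is a ratio of extremes; tone resolution is a count of distinguishable steps.
original = list(range(1, 11))                          # 1..10: ten steps
dr_orig = max(original) / min(original)                # DR = 10:1
shifted = [10 + (v - 1) * (10 / 9) for v in original]  # mapped linearly into 10..20
dr_shift = max(shifted) / min(shifted)                 # DR = 2:1
steps = len(set(shifted))                              # still 10 distinct levels

print(dr_orig, dr_shift, steps)   # DR collapsed from 10 to 2, steps unchanged
```

The DR dropped by a factor of five while the tone resolution (ten distinguishable levels) survived intact, which is the whole point of the example.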

So that takes us back to the definition of DR. I'm happy to accept that DXO has a purely mathematical interpretation of DR, the ratio between white point (maximum saturation) and black point (noise floor). Again, though, I am not sure it is a useful or realistic definition of what dynamic range is. When one thinks about the value of dynamic range in digital photography, the first thing that usually comes to mind is the ability to recover useful detail from deep shadows. I say from the shadows, as I think any photographer who uses digital knows that it is critical to preserve the highlights, as once they are clipped, detail is well and truly gone.

Realistic? Who knows - at least I can tell a whole lot about a camera and the resulting images, and what you can DO with the camera, just by knowing the DR and some other base performance figures. You would get the same answer from any other competent machine vision specialist or optoelectronics engineer.

Practical? Since a camera sensor signal is linear you can move around in it as you want; internal contrast will be constant. This means that I can expose (photometric exposure) maybe one full stop less with a camera with good DR, giving me more "practically usable latitude" in both highlight and shadow. This is not a very difficult PP operation - I put the exposure compensation at +0.5 in the raw converter and make the highlight tone curve a little less harsh in the cutoff knee.

If the DR is good, I can shorten my shutter speeds or get more DoF (stop down) at low ISOs - without losing any image quality compared to a low-DR camera used at a longer shutter speed or shorter DoF! More DoF and/or shorter shutter speeds are in most situations something very practical, wouldn't you say?

The dispute on record here, if I may define it according to my own views as well as what I've read from other DXO DR naysayers, is this:

What value does DXO PrintDR (the mathematically derived ratio between white point (maximum saturation, FWC) and black point (electronic noise floor)) have in a real-world context?

From the standpoint of simply moving the black point in a downsampled image, the only thing that occurs is shadows become darker. One LOSES information during the process of downsampling, so the primary benefit of having additional DR in the hardware no longer applies. In the context of viewing images on a computer screen, primarily done via the web, having a deeper black point might be valuable. Computer screens generally support a much deeper black point than actual prints on paper (particularly prints on high quality fine art paper), although none actually support 14 stops of DR regardless, and the average consumer screen is only 6-bit, so roughly the same DR as a print.

Firstly - most cheap laptop (and cheap TN) screens use 6-bit panels with 240Hz time-scale (delta/sigma) dither to get 8 bits of tone resolution. None of my devices (except maybe my phone) use lower than 8 discrete bits, and both my TV and my computer screens are true 8-bit >> 10-bit time-dithered.

Secondly - this tone resolution is quantized in a gamma-corrected space, usually around gamma 2.0. If you look at the sRGB gamma, the step between the first 14 (of 256) levels is 1/13 of the bit value. This means the linear DR of 8-bit sRGB in the ideal application is 13 x 255 = 1:3315, or about 11.5 linear bits/Ev. A well calibrated HD-TV will follow ITU-R BT709 and present a step of 4.5 in the lower part of the gamma curve, giving a linear DR of 4.5 x (235-16) = 1:985, or slightly less than 10 bits/Ev.

8-bit sRGB as a format standard has almost the same DR per pixel as a 1Dx (!) - but in a nonlinear tone mapping. That's the difference.
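The two figures above can be reproduced numerically. A sketch; 12.92 is the exact sRGB linear-segment slope that the text rounds to 13:

```python
import math

# sRGB: the linear segment near black has slope 12.92, so the smallest code step
# corresponds to a linear value of about 1/(12.92 * 255) of full scale.
srgb_ratio = 12.92 * 255            # ~3295:1 linear contrast
print(f"sRGB:   {math.log2(srgb_ratio):.2f} linear stops")

# ITU-R BT.709 video: slope 4.5 near black, legal range 16..235 (219 steps).
bt709_ratio = 4.5 * (235 - 16)      # ~985:1 linear contrast
print(f"BT.709: {math.log2(bt709_ratio):.2f} linear stops")
```

Both land where the post says: roughly 11.7 and 9.9 stops respectively, so an 8-bit gamma-encoded container does span a surprisingly wide linear range.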

When it comes to real print, assuming one is printing at native size or upsampled, original detail is preserved or slightly softened, but none of it is lost due to downsampling. Regardless, assuming one even does significantly downsample a D800 image so they can print at 8x10", even printed on the highest dMax papers on the market with the brightest L* rating, you're going to get HALF the DR you should supposedly be getting from DXO's 14.4 stop Print DR rating. .../LONG cut/... At best, DXO's downsampled DR rating should probably be called Web DR. It is not detail-preserving Photographic DR, as upon downsampling you lose detail. It is definitely not Print DR, since a print is inherently more about color richness and gamut than white-to-black point dynamic range. The depth of blacks sometimes matters in a print; however, the deeper your black point in print, the harder it tends to be to actually discern fine shadow detail. .../cut/...

Again, you're comparing DR and tone resolution as if they were the same thing. And the comparison definitely does not seem coherent with how noise in different tone levels (brightness zones?) is perceived in a real print.

Photography is - when broken down to practical discrete steps - a series of [input DR + tone resolution] to [output DR + tone resolution] translations. As long as the combined DR+TR of the receiving end is larger than that of the sending end, the transformation can be lossless.

1) 3D object space (reality... :-) ) is projected onto a focal plane (sensor or film) through a lens, where you lose DR due to diffraction, haze and flare. Tone resolution is still infinite, limited only by quantum light physics.

2) The image space (the projection) is translated into electrical and then digital signals (the sensor and sensor electronics). DR is lost due to noise issues; TR is lost due to noise and quantization issues.

3) The linear readout has to be tone-mapped into a standardized color space, often a gamma-corrected 8-bit RGB space. Here the tone mapping and tone curves determine how much DR you lose - if you lose any DR at all. In a camera phone or a cheap compact, sRGB actually has a much greater DR than what the input can provide! Tone resolution is (often) limited to 1:255 (8 bits).

4) The standardized image format has to be rasterized to make it printable. DR is limited by the paper white and ink black densities; tone resolution is limited only by the rasterization scheme.

So, it's quite easy to make a 12-bit-DR deep shadow detail show up as easily recognizable, noise-free detail even in a 7-bit (Ev) presentation DR.
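A minimal sketch of why that works, assuming a plain power-law gamma of 2.2 and a 7-bit output (both figures are mine, for illustration): the gamma curve spends its output codes disproportionately on the shadows, so a step twelve stops below clipping still lands on distinct codes.

```python
def encode(linear, bits=7, gamma=2.2):
    """Quantize a 0..1 linear value into a gamma-encoded integer code."""
    return round((2 ** bits - 1) * linear ** (1 / gamma))

a = encode(1 / 4096)   # a tone ~12 stops below clipping
b = encode(2 / 4096)   # one stop brighter
print(a, b)            # distinct codes - the shadow step survives the 7-bit container
```

With a linear 7-bit encoding both of those tones would quantize to code 0; the nonlinear mapping is what keeps them apart.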

So, what is the value of DXO Print DR? Realistically, practically, physically... what do I actually gain by downsampling my full-detail RAW into a smaller-sized TIFF? For that matter, what value does DXO Print DR have if I save as a compressed JPEG for viewing on the web? Are we really just talking about a DXO weighted score, and nothing more? If so, should it really be called Dynamic Range, or is there a better term DXO could use that wouldn't come off as some kind of sketchy maneuvering (real or simply perceived) of their results in favor of a major monetary contributor?

Trying to redefine a metric that has been used practically, for very many practical reasons in very many practical circumstances, by thousands upon thousands of signal processing engineers, sensor developers, imaging software developers (including the guys over at the Canon DPP development center) and imaging logic circuit developers is NOT in any way productive - I'd say it's very counterproductive. Especially since the connection between the measurement value and images in reality is so easy to show.

What would help a lot for most people is to understand what DR is when used and put in the context it is MEANT to be used in. It plays ONE very important part in the most basic breakdown of the individual parameters that are universally used to measure or determine signal quality - and signal quality is the basis for image quality assessment.

The camera's total raw DR has a large and noticeable effect on how the complete chain from object space (reality) to print can be realized. A camera with good DR has the (optional!) ability to show a lot more shadow detail (without adding noise!) in the final result, even if the paper/ink combination is pretty poor.

Is that the assertion? Or is the assertion that a 36 MP sensor with 13.2 bits of DR at 36 MP should be described as having 36 MP of resolution and 13.2 bits of DR?

Camera 1 has 36 megapixels and 11 stops of dynamic range per pixel; camera 2 has 20 megapixels and 12 stops of dynamic range per pixel.

Which camera has more dynamic range if I display or print their images at the same size? If I downsample the 36mpx image to 20mpx, I will get more than 11 stops of dynamic range, but do I get more than 12?
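Under the usual idealized assumption (uncorrelated, white pixel noise), the theoretical gain from averaging pixels works out as half the log2 of the pixel-count ratio:

```python
import math

def downsample_dr_gain(mp_from, mp_to):
    """Theoretical DR gain in stops from averaging mp_from megapixels down to mp_to,
    assuming uncorrelated (white) pixel noise."""
    return 0.5 * math.log2(mp_from / mp_to)

gain = downsample_dr_gain(36, 20)
print(f"{gain:.2f} stops")   # ~0.42: 11 + 0.42 stays below 12, so on this
                             # idealized model camera 2 still leads at equal size
```

As the discussion below points out, real files do not fully realize this gain, because the demosaic has already correlated neighbouring pixels.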

I think this is a pretty valid question, provided you intend to view the whole image on print or screen, as opposed to just viewing 100% crops.

This is a complex question, since what DxO basically fails to include in their DR-vs-resolution compensation (the "print" view option) is that no Bayer-based image ever contains equal noise energy all the way up to 1/f. The necessary interpolation stage, often called the "demosaic" stage (remember, two out of the three channels in each pixel have had to be estimated after the capture...), effectively filters the higher noise frequencies out, and tends towards zero at 1/f.

In layman's terms, you could describe this high-frequency filter as: "The noise - or average pixel difference - is stronger in power when you compare two pixels a few pixels apart from each other than when you compare two pixels right next to each other."

The end result of this is that the first ~30% of downsampling - down to 70% of original scale, that is - does not lower image noise power [over the image width] by any significant degree. There wasn't much noise energy in the frequency band we've filtered away, so what we've basically done is condense the image information.
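This effect can be illustrated with a toy 1-D simulation (my assumptions: white Gaussian noise, a 3-tap box filter standing in for the demosaic smoothing, and naive block-average downsampling):

```python
import numpy as np

rng = np.random.default_rng(0)
white = rng.standard_normal(200_000)
# Crude stand-in for demosaic interpolation: a 3-tap moving average.
demosaiced = np.convolve(white, np.ones(3) / 3, mode="same")

def block_average(x, k):
    """Downsample by averaging k adjacent samples (a naive resampler)."""
    n = len(x) // k * k
    return x[:n].reshape(-1, k).mean(axis=1)

ratios = {name: sig.std() / block_average(sig, 2).std()
          for name, sig in [("white", white), ("low-passed", demosaiced)]}
for name, r in ratios.items():
    print(f"{name} noise: std falls by {r:.2f}x after a 2x downsample")
# White noise drops by the full sqrt(2) ~ 1.41x; the pre-filtered noise only by
# ~1.10x, because most of its energy already sat below the band that was removed.
```

The already-smoothed noise barely responds to the downsample, which is exactly the "first ~30% buys you little" behaviour described above.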

But then there's also a more subtle effect. The human eye does not react as strongly to fine-grained noise as it does to coarse-grained noise. This does mean that when you MEASURE the average pixel noise power, it might not have been lowered by any appreciable amount - but when you LOOK at the downsampled image you will perceive the image as less noisy anyway.

So - downsampling 36MP to 20MP would theoretically give you an added:

sqrt(36/20) = 1.34 linear scale
log(1.34)/log(2) = 0.42 Ev, or "bits", of DR

But you wouldn't get 0.42Ev in a real, converted image - you would get maybe 0.1Ev and a much tighter (less objectionable) noise pattern.

From this point on downwards, however, the noise spectrum could be said to be sufficiently close to a linear 1/f behavior, and you would get the full noise-power-lowering effect in practice too. So, continuing down in size would yield the full theoretical gain of log(sqrt(linear scale))/log(2). Together with a lowered resolution per image frame, of course... :-)

OTOH, if one were trying to compare things for real, one might use an advanced, adaptive NR algorithm rather than a quick downsample. That might counterbalance some - or maybe even more than all - of the loss you mention, where the first 30% of downsampling doesn't help so much because the debayer has already done some of the work, etc. So in the end, perhaps just using the full theoretical amount would give a relatively realistic estimate; I'm not quite sure how it all balances out.

Aaaaaaand, on the topic: I can't wait to see if they can overcome their Canon "hate" for once. I think we all know Nikon's new cameras are great, but their numbers are a little off.

Do you SERIOUSLY mean to suggest that a software company like DxO that is entirely dependent on being completely unbiased in order to sell their software 'hates' Canon, when a large part of their customer base are Canon users?

No, they're NOT reviewers; that's a byproduct of lab-testing sensors and lenses to add their data to the database - and a way to attract interest in their software.

For goodness sake, what would they achieve by giving Canon a raw deal? Nothing at all. I bet the opposite is true - that they sincerely wish Canon would produce better sensors so they (DxO) wouldn't have to hear this nonsense about being biased all the time. The last thing they want is to scare off a whole lot of potential customers for their software.

Believe me, your money smells just as good to them as Nikon owners' money does.


I think this is a critical point, and possibly the root of the contention among the "DXO DR naysayers." As someone who prints a lot myself, perhaps I can offer some insight. Assuming you print at native resolution, printing does not average the original amount of information into something less. .../cut/...

Printing is NEVER a pixel > pixel matter: the screening of the print may be at 2400dpi even though you're printing at 300dpi. Otherwise you would need 16 million differently colored inks to get a full hue/tone presentation. Since you "only" have between four and twelve possible inks (we have 10-pigment printers, with two neutral densities) - and in some head models maybe two different ink spot sizes - you need a variation of [16x10^6 / 8] over area, dithered, to get full tone resolution. I set the divisor to eight as an average of the number of individual inks available in a modern printer. How the screening process (dither or a complete image RIP) is done determines how well the printer handles detail per mm - or in the equivalent measurement set, MTF per dpi.

I fully understand how print works. I've been printing for many years, I calibrate my own papers, etc. Don't confuse PPI and DPI. Dots per inch (DPI) in a print is not necessarily the same as pixels per inch (PPI). In your normal inkjet print, printers are usually 2400x1200 or 2880x1440, depending on the brand. That is the number of discrete ink droplets per inch; it is usually a constant (some printers allow you to change DPI), and it has little to do with the print resolution other than possibly having a ratio with the PPI. One can choose to print at a variety of "resolutions", or "print pixel densities". Technically speaking one could print at any PPI, although it is best to print at one that evenly divides the highest native resolution. In the case of Epson, that would be anything that cleanly divides 720, and for the rest, anything that cleanly divides 600. Thus we get 720/600ppi, 360/300ppi, 180/150ppi, and possibly 90/75ppi for those rare gargantuan prints at 60" plus.
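The "cleanly divides" rule can be enumerated with a tiny hypothetical helper (the 720/600 native figures and the ~75ppi floor come from the post; the function itself is mine):

```python
def clean_ppis(native, floor=75):
    """All whole-number divisors of the native driver resolution down to `floor` ppi."""
    return [native // d for d in range(1, native // floor + 1) if native % d == 0]

print(clean_ppis(720))   # Epson-style driver
print(clean_ppis(600))   # most other brands
```

This also surfaces a few intermediate options (240, 144ppi and so on) beyond the halving series quoted above; whether those are practical depends on the driver.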

Thanks to dithering or the RIP, the total number of dots per pixel printed, and the placement of dots of each color within each pixel, can amount to a HUGE volume of colors. "Dots" need not be placed purely side by side; they can overlap in different colors as necessary to create a tremendous range of color and tonality, largely limited only by the type of paper (which dictates ink/black density and white point). It should also be noted that the human eye cannot actually differentiate 16 million colors. Most scientific estimates put the number of "colors" at around 2-3 million. Our eyes are much more sensitive to tonality - the grades of shades - which is also not necessarily the same thing as color. Tonality in print is more dependent on the paper than on the inks used or dots placed. Gamut, the range of color (as well as maximum potential black density), is more dependent on the inks used.

In terms of PPI, pixel size in print can indeed be translated to/from pixel size on screen. So long as you know the pixel densities of both, there is a clear translation factor. My screen is 103ppi, which means I have to zoom images down to around 33% of their original size to get a rough idea of how all that detail will look in print. Zooming will NEVER tell the whole picture, though, since zooming or scaling on a computer works by averaging information. A print DOES NOT average, at least not the way I print. I can print one of my 7D photos without any scaling at all on a 13x19" page with a small border, at 300ppi, with the printed area itself covering 17.28x11.52". The print contains exactly the same information as my 100%, uncropped, native image straight out of the camera. That print simply stores the information more densely. A 13x19" print is comfortably viewed (at full visual acuity) within a few feet. My point about print is that it is not scaling... it is the same original information that came out of the camera (plus any PP), just represented in a denser manner.

There are other problems with DXO calling their rated DR "Print DR", though. Assuming you are using a godly form of paper, such as Innova FibaPrint Gloss, which has a dMax of over 2.7, you might be able to get 7 stops or so from a print. Your average fine art print paper has a dMax ranging from around 1.3-1.5, up to 1.75 or so for some of the more recent higher-end fine art papers. That gets you maybe 5-6 stops of DR.

Don't make the mistake of mixing up tone resolution with DR. They are never the same in any practical application. Example:

A certain measurement has a DR of 10:1 (say a measurement range of 1 to 10) and a resolution of "1" in that range. Ten discrete steps can be clearly differentiated in the original.

A certain presentation type has a linear output range of 10-20. That gives a DR of only "2", since 20 is only two times as much as 10. But the presentation still has ten discrete levels very clearly distinguishable from each other, meaning that the tone resolution hasn't changed. On visual inspection, you haven't limited the measurement DR, just shifted the base point (and lowered the detail MTF, of course).

I do not believe I have made the mistake of confusing resolution with DR. I've never made any such argument. The point I have been trying to make is that the DR gain indicated by Print DR is explicitly dependent upon a TRADE for something else, in this case detail. The net result is really nil, as you're potentially gaining more DR (at least DR as DXO defines it) at the loss of potentially significant amounts of detail. My argument has been that DXO does not make this fact clear in the way they score cameras, which is rather misleading.


So that takes us back to the definition of DR. I'm happy to accept that DXO has a purely mathematical interpretation of DR, the ratio between white point (maximum saturation) and black point (noise floor). Again, though, I am not sure it is a useful or realistic definition of what dynamic range is. When one thinks about the value of dynamic range in digital photography, the first thing that usually comes to mind is the ability to recover useful detail from deep shadows. I say from the shadows, as I think any photographer who uses digital knows that it is critical to preserve the highlights, as once they are clipped, detail is well and truly gone.

Realistic? Who knows - at least I can tell a whole lot about a camera and the resulting images, and what you can DO with the camera, just by knowing the DR and some other base performance figures. You would get the same answer from any other competent machine vision specialist or optoelectronics engineer.

Practical? Since a camera sensor signal is linear you can move around in it as you want; internal contrast will be constant. This means that I can expose (photometric exposure) maybe one full stop less with a camera with good DR, giving me more "practically usable latitude" in both highlight and shadow. This is not a very difficult PP operation - I put the exposure compensation at +0.5 in the raw converter and make the highlight tone curve a little less harsh in the cutoff knee.

If the DR is good, I can shorten my shutter speeds or get more DoF (stop down) at low ISOs - without losing any image quality compared to a low-DR camera used at a longer shutter speed or shorter DoF! More DoF and/or shorter shutter speeds are in most situations something very practical, wouldn't you say?

Here is where my argument comes in. There is too much conflation of what is possible with an analog signal on a sensor and what is possible with a digital image in post. I FULLY AGREE that dynamic range in terms of a linear analog signal on a sensor is, for lack of a better word, "fluid". You can adjust exposure up or down, and shift the signal anywhere within the dynamic range of the sensor. That's WHY it is called dynamic range. With the fine gradation of discrete levels measured in electrons - potentially tens of thousands of electrons per pixel - you effectively have fine-grained, near-infinite control over that signal. If we didn't adjust everything in stops, you could fine-tune an exposure quite precisely in-camera.

I disagree that you have the same kind of unlimited, lossless control over the digital signal represented in an image, RAW or TIFF. For one, you are working with quantized, discrete data. Second, exposure latitude is not infinite, even with a RAW, when working in post. Even with an amazing camera like the D800, noise is eventually going to pose a problem, since it is "baked" into the digital signal. Adjusting exposure in-camera, you don't have to contend with noise at all (for all practical intents and purposes). Pushing or pulling exposure in post has its limits as well. If you severely under-expose, no matter how clean the results may be, you are going to have limited color fidelity as you continue to boost exposure digitally. A D800 can boost exposure by maybe six stops, but that is in no way an alternative approach to photography, as a severely under-exposed image lifted by +6EV will NEVER have the same kind of fine tonality, color fidelity, clarity, and sharpness as an image that was never underexposed by -6EV in the first place. It is certainly intriguing that you can lift shadows by 2-3 stops without any real problems with noise... that means you gain a lot of detail and some color fidelity in the deep shadows for applications like landscape photography. Push those shadows too much, though, and your amazing 13-stop landscape photo will quickly turn into a muddy mess that looks like one of those poorly tone-mapped HDR images, with stippled or muddy gray detail protruding into the lower midtones - which will themselves have a disproportionately greater amount of detail and color fidelity. My point is... there are limits to what you can do with digital signal processing that don't exist when processing the signal in its original analog form IN-CAMERA.

Now, I've never complained about DXO's "Screen DR" figure. I believe that tells me the dynamic range I have to work with when doing what you described... fiddling with exposure in-camera. My dispute is with the notion of Print DR, and what it seems to stand for given how DXO labels those results and sells the information to the public. I do not believe you really gain anything beneficial - useful photographic DR that allows you to extract MORE DETAIL included - by downscaling a native RAW image to some smaller size in TIFF. I also dispute that, assuming you did scale a 36.3mp image to an 8mp image and tried to utilize the supposed 1.2-stop gain in DR from downscaling, you would have anywhere near the exposure latitude to actually do anything useful with that newfound DR, even if it did contain more useful detail than the original RAW that had less DR.

The dispute on record here, if I may define it according to my own views as well as what I've read from other DXO DR naysayers, is this:

What value does DXO PrintDR (the mathematically derived ratio between white point (maximum saturation, FWC) and black point (electronic noise floor)) have in a real-world context?

From the standpoint of simply moving the black point in a downsampled image, the only thing that occurs is shadows become darker. One LOSES information during the process of downsampling, so the primary benefit of having additional DR in the hardware no longer applies. In the context of viewing images on a computer screen, primarily done via the web, having a deeper black point might be valuable. Computer screens generally support a much deeper black point than actual prints on paper (particularly prints on high quality fine art paper), although none actually support 14 stops of DR regardless, and the average consumer screen is only 6-bit, so roughly the same DR as a print.

Firstly - most cheap laptop (and cheap TN) screens use 6-bit panels with 240Hz time-scale (delta/sigma) dither to get 8 bits of tone resolution. None of my devices (except maybe my phone) use lower than 8 discrete bits, and both my TV and my computer screens are true 8-bit >> 10-bit time-dithered.

Secondly - this tone resolution is quantized in a gamma-corrected space, usually around gamma 2.0. If you look at the sRGB gamma, the step between the first 14 (of 256) levels is 1/13 of the bit value. This means the linear DR of 8-bit sRGB in the ideal application is 13 x 255 = 1:3315, or about 11.5 linear bits/Ev. A well calibrated HD-TV will follow ITU-R BT709 and present a step of 4.5 in the lower part of the gamma curve, giving a linear DR of 4.5 x (235-16) = 1:985, or slightly less than 10 bits/Ev.

8-bit sRGB as a format standard has almost the same DR per pixel as a 1Dx (!) - but in a nonlinear tone mapping. That's the difference.

Thanks for the detailed description, although I am not sure your explanation of ITU-R BT709 is entirely accurate. That system supports bit-appending, which I wouldn't call dithering, to achieve higher bit depths. It also reserves black and white "space" within its numeric range as foot- and headroom for various purposes (only actually used in TVs, as far as I know; computer screens always utilize the full range of bits without headroom). That foot- and headroom reservation lowers the native DR by at least a stop or so... even if you append extra bits, the headroom requirement still exists, so you might only gain back what you had originally lost, and not much more.

The non-linear tone mapping IS the difference. Another way to put that is the gamma compresses a wider range of information into a smaller space (when indeed mapping from a larger space, which is not necessarily always the case, but is in the case of RAW PP). More information in a space that can only contain less information means we OBSERVE whatever the container renders. If we could observe a 14 stop image on a device capable of fully rendering all of the information contained within that image without the need to compress it (tone map it) in any way, it would be much closer to seeing the world the way we see it with our eyes, where the contrast of any given scene is lower but without actually appearing dull, drab, gray and lifeless. My point is that, despite dithering and finely tuned gamma, there is no such device on the market today. We cannot truly observe the full beauty of a 13.2-stop landscape photograph in all of its linear, lively glory, without applying some kind of non-linear processing to make the information fit on even the best and most expensive of devices today.

When it comes to real print, assuming one is printing at native size or upsampled, original detail is preserved or slightly softened, but none of it is lost due to downsampling. Regardless, assuming one even does significantly downsample a D800 image so they can print at 8x10", even printed on the highest dMax papers on the market with the brightest L* rating, you're going to get HALF the DR you should supposedly be getting from DXO's 14.4 stop Print DR rating. .../LONG cut/... At best, DXO's downsampled DR rating should probably be called Web DR. It is not detail-preserving Photographic DR, as upon downsampling you lose detail. It is definitely not Print DR, since a print is inherently more about color richness and gamut than white-to-black point dynamic range. The depth of blacks sometimes matters in a print; however, the deeper your black point in print, the harder it tends to be to actually discern fine shadow detail. .../cut/...

Again, you're comparing DR and tone resolution as if they were the same thing. And the comparison definitely does not seem coherent with how noise in different tone levels (brightness zones?) is perceived in a real print.

Photography is - when broken down to practical discrete steps - a series of [input DR + tone resolution] to [output DR + tone resolution] translations. As long as the combined DR+TR of the receiving end is larger than that of the sending end, the transformation can be lossless.

1) 3D object space (reality... :-) ) is projected onto a focal plane (sensor or film) through a lens, where you lose DR due to diffraction, haze and flare. Tone resolution is still infinite, limited only by quantum light physics.

2) The image space (the projection) is translated into electrical and then digital signals (the sensor and sensor electronics). DR is lost due to noise issues; TR is lost due to noise and quantization issues.

3) The linear readout has to be tone-mapped into a standardized color space, often a gamma-corrected 8-bit RGB space. Here the tone mapping and tone curves determine how much DR you lose - if you lose any DR at all. In a camera phone or a cheap compact, sRGB actually has a much greater DR than what the input can provide! Tone resolution is (often) limited to 1:255 (8 bits).

4) The standardized image format has to be rasterized to make it printable. DR is limited by the paper white and ink black densities; tone resolution is limited only by the rasterization scheme.

So, it's quite easy to make deep shadow detail sitting 12 bits down in the input DR show up as easily recognizable, noise-free detail even within a 7-bit (Ev) presentation DR.
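This claim can be illustrated numerically: a nonlinear tone curve (step 3 above) can keep two deep-shadow levels distinguishable in an 8-bit output where a straight linear scaling would merge them. A toy sketch - the 14-bit white point and the plain gamma-2.2 curve are illustrative assumptions, not a real camera pipeline:

```python
import numpy as np

# Two adjacent deep-shadow levels, roughly 12 stops below clip in 14-bit linear data.
levels = np.array([4.0, 8.0])           # one stop apart, near the noise floor
white = 2**14 - 1                       # 14-bit white point

# A straight linear scaling to 8-bit merges them into the same output code:
linear_8bit = np.round(levels / white * 255)
print(linear_8bit)                      # [0. 0.] -> indistinguishable

# A gamma ~2.2 tone curve keeps them several output codes apart:
gamma_8bit = np.round((levels / white) ** (1 / 2.2) * 255)
print(gamma_8bit)                       # [6. 8.] -> still clearly separated
```

The gamma curve spends more of the 255 output codes on the shadows, which is exactly how an 8-bit presentation can carry shadow tone separation from a much deeper linear input.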

To address the last sentence: only with the LOSS of information in some way. Tone mapping is not a lossless endeavor. You map - and in the process OVERLAP - a more extensive set of information into the space of a smaller set. You LOSE something. Sure, you can preserve detail during tone mapping... hence the point about manually tuning white and black points in a tool like Photoshop before printing. My point about that being difficult is that you have to choose what to keep and what to discard: which range of shadows, midtones, and highlights you believe matters most to the final representation in that specific print on that specific paper. My point was not that you couldn't preserve the right tones to produce a great print. My point is that the print plainly and simply does not, and cannot, contain ALL of the original information. When you have more information to start with, compressing it into a smaller space - especially if you are as picky and meticulous as I am - can become a daunting task.

So, what is the value of DXO Print DR? Realistically, practically, physically... what do I actually gain by downsampling my full-detail RAW into a smaller TIFF? For that matter, what value does DXO Print DR have if I save a compressed JPEG for viewing on the web? Are we really just talking about a DXO-weighted score, and nothing more? If so, should it really be called Dynamic Range, or is there a better term DXO could use - one that wouldn't come off as some kind of sketchy maneuvering (real or merely perceived) of their results in favor of a major monetary contributor?

Trying to redefine a metric that has been used practically - for very many practical reasons, in very many practical circumstances - by thousands upon thousands of signal-processing engineers, sensor developers, imaging-software developers (including the guys over at the Canon DPP development center), and imaging-logic circuit designers is NOT in any way productive; I'd say it's very counterproductive. Especially since the connection between the measured value and real-world images is so easy to show.

What would help most people a lot is understanding what DR is when it is used and put in the context it is MEANT to be used in. It plays ONE very important part in the most basic breakdown of the individual parameters that are universally used to measure or determine signal quality - and signal quality is the basis of image quality assessment.

The camera's total raw DR has a large and very noticeable effect on how the complete chain from object space (reality) to print can be realized. A camera with good DR has the (optional!) ability to show a lot more shadow detail (without adding noise!) in the final result, even if the paper/ink combination is pretty poor.

I think you are demonstrating my point for me. You use the term DR so generically. DR can be, and is, defined in a variety of ways. It is also defined in many different contexts, and its derivation in each context is not necessarily the same as in any other. Again, to be clear, my only dispute is with DXO's "Print DR". Your earlier explanation of how you can freely change exposure in-camera to shift tones around within the sensor's DR is something I agree with 100%. I've never disputed that. You don't have quite the same fluidity with dynamic range in post, especially once you convert an image from RAW to an RGB TIFF. I don't know why everyone thinks I have a dispute with the general notion of DR. I do not. I'm fully in alignment with everyone regarding what dynamic range is and what its benefits are in the context of an analog signal on a sensor.

Hear my words: I specifically dispute the notion of "Print DR", and how it is labeled, used, weighted and sold, by DXO in their sensor scores.

I do not believe I have made the mistake of confusing resolution with DR. I've never made any such argument. The point I have been trying to make is that the DR gain indicated by Print DR is explicitly dependent upon a TRADE for something else - in this case, detail. The net result is really nil, as you're potentially gaining more DR (at least DR as DXO defines it) at the loss of potentially significant amounts of detail. My argument has been that DXO does not make this fact clear in the way they score cameras, which is rather misleading.

That seems like a sudden change of tune. For the last six months you were saying that the Print DR plots were garbage and that the only true way to compare cameras relative to one another was the Screen DR numbers... and that you didn't believe in the Print normalization whatsoever, and it was others who pointed out the tradeoffs you now claim you were claiming all along. But whatever; if you are finally on board, then it's about time.

My argument has always been that you cannot realize a beneficial improvement in DR when you downscale - at least by the definition of DR that I was using. I freely admit I'm not generally very eloquent in wording my arguments, and I am trying to be clearer and more specific. According to elflord's explanation, the black point (and thus the S/N-zero point) shifts closer to pure black when you average noise. That description of DR, from a purely theoretical standpoint - while I'm willing to accept it as the math DXO uses to produce their specific numbers - does not actually describe the kind of dynamic range explained by TheSuede in his reply to me just a few posts above. Theoretically it's sound... in the pure, ideal environment it is described within. I believe there are extenuating circumstances that are not generally factored into that neat and tidy theory. I could reiterate them, but I've done that so much; if you want to know my stance on any particular argument, just reread my posts.

Just as I have always been arguing, Screen DR tells you about THE HARDWARE. Print DR is more like SQF: a normative, but otherwise subjective (as it needs to be), mechanism for comparing IMAGES - or more specifically, the amount of noise present in an image, and the resulting S/N once noise frequencies are normalized, produced by cameras on a level playing field. I understand the purpose of normalizing images to put NOISE at the same frequency. I also understand the purpose of normalizing images for the sole, pure purpose of producing a workable model within which to score sensors on that same level playing field. But there are scores, and then there are realities...

I refuse to accept that any movement in the black point results in anything useful - as in, an increased ability to recover detail. The simple act of averaging costs you a significant amount of detail (in the case of the D800, by a factor of 4.5). Additionally, the kind of leeway we are all familiar with in RAW exposure latitude is reduced by orders of magnitude once you convert to RGB: the brightest highlights and deepest shadows become relatively rigid and do not have much leeway for adjustment. They are essentially as "baked in" as noise; push them too far and you either clip or block, and end up with muddy gray/brown lifted shadows or dull, grayish sorta-highlights. So, assuming you wanted to try to recover those deeper shadows from a TIFF, you might be able to recover a little, but nowhere near the four to six stops you might from an original, unscaled RAW. I consider the normalization of noise to be an entirely different concept, for an entirely different purpose, than dynamic range... always have. This whole argument hinges on what DXO is describing with the terms "Print DR" and "Landscape Score". Referring to the change as a useful improvement in dynamic range - one that should give you the ability to recover even more detail from shadows that would otherwise be even deeper in noise - is simply not true. The information buried that deeply in the noise floor is well and truly gone; it cannot be recovered by any means. All you can do is make the noise darker by averaging, but that further destroys USEFUL detail and simply renders the detail that was already consumed by noise (along with the noise itself) a deeper shade of black. It does not make it any more usable, useful, or "recoverable".
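The averaging trade being disputed here is easy to simulate. A minimal sketch (the signal and noise levels are made-up numbers, not D800 measurements): 2x2 block averaging cuts the noise standard deviation roughly in half, which is exactly the black-point shift the Print DR math credits as about one extra stop - while the binned image retains only a quarter of the pixels, so any detail finer than the new pixel pitch is gone.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat deep-shadow patch: a 2 DN signal buried in read noise of std 8 DN.
signal = 2.0
patch = signal + rng.normal(0.0, 8.0, size=(512, 512))

# 2x2 block averaging - a crude stand-in for downsampling to 1/4 the pixels.
binned = patch.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(patch.std())   # ~8: per-pixel noise of the original
print(binned.std())  # ~4: std shrinks by sqrt(4), i.e. the noise floor drops ~1 stop
```

Whether that lower noise floor counts as "more DR" or merely as a resolution-for-noise trade is precisely the terminology argument running through this thread; the simulation only shows that both effects happen at once.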

If the mathematical definition of DXO's Print DR simply refers to the normalization of noise - which reduces detail at the same time as it reduces the noise floor (black point, S/N 0dB) - so be it. But I do not believe that is how most people grasp the concept of dynamic range. Hence the complaint about misleading scores, numbers, and terminology; hence the general confusion about what, exactly, DXO's "Landscape" score really means, and the frustration and anger that the "Landscape" score carries so much weight in DXO's model. Now, I am happy to accept that it is DXO's prerogative to decide how they weight and distribute points in their own scoring model. It's just that there are reasons - valid reasons, IMO - why people have a hard time with DXO's scores. I've tried to put a logical voice to those reasons.

I am trying to be clearer about my position in this grand debate. I'm trying to refine my stance, based on a clearer understanding of the stances of the opposing parties, so we all know where everyone stands.

1. I think you are trying to normalize your claims to match what others had been telling you for a long time.

2. What do you think the bottom-end measurement for DR is? You measure the SNR about the black point. There is nothing more or less magical about their Print plots for DR compared to their Print plots for middle-gray SNR. You compare the darkest-level noise at the same noise scale to be fair. And yes, it is true that you can't both maintain the full MP count of detail and get the Print DR at the same time, but you might find that your 40MP camera doesn't actually pale compared to your 8MP camera - and maybe even beats it - if you compare them at the same scale for DR and SNR, even if at 100% (and thus at different scales) the new 40MP might look noisier.
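The 40MP-versus-8MP point can be made concrete with a hedged back-of-the-envelope. The Screen DR figures below are invented for illustration (not DxO measurements), and the scaling assumes uncorrelated pixel noise, as before:

```python
import math

def normalized_dr(screen_dr_stops, sensor_mp, reference_mp=8.0):
    """Per-pixel (Screen) DR plus the stops gained by scaling to the
    reference size, assuming uncorrelated pixel noise. Illustrative only."""
    return screen_dr_stops + 0.5 * math.log2(sensor_mp / reference_mp)

# Hypothetical cameras: the 40MP sensor is noisier per pixel (12.0 vs 12.5
# stops Screen DR), yet wins once both are viewed at the same output scale.
print(normalized_dr(12.0, 40.0))  # ~13.16 stops at the 8MP reference
print(normalized_dr(12.5, 8.0))   # 12.5 stops: no change at the reference size
```

This is the sense in which the Print normalization is the "fair" comparison: it asks how the two sensors look in the same-sized output, not how their individual pixels compare.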

Anyway:

a. The fairer way to compare between sensors is the Print plot, and DxO is not doing anything horrendous there.

b. Yes, the actual numbers reported in the Print plots, taken as absolutes, are basically arbitrary - in the sense that they are not anything to care about unless you happen to print at one very particular scale, view from one very particular distance, and downscale in one particular way. But they are the way to make relative comparisons between cameras and sensors that is a lot fairer than using the Screen plots (and for the longest time you had been insisting one must only use the Screen plots to compare cameras relative to one another, but whatever).

d. Yes, it's generally better to compare the plots on DxO and pay less attention to the overall scores. How do you possibly sum up a sensor in one single number that would satisfy everyone at once, or even a single person in all circumstances? You can't; it is just some chosen weighting and summation, and that only gives you a very general, mushed-together hint. Does a high score come because the camera is great at low-ISO DR, at high-ISO DR, at SNR, at color purity? Who knows. So it's better to look at the plots, or at least the lower-level sub-scores (although even there the plots give a much clearer picture).

e. Yeah, the lens tests at DxO DO seem to be pretty suspect; I'm not sure what they are doing there. A different group tests them, I believe, and lens testing is MUCH trickier, with copy variation far more relevant. But with all of the 300mm primes scoring worse than L zooms, L zooms worse than non-L zooms, the 2.8 II worse than the original-version 70-200, and so on, it is kinda bizarre. I honestly don't bother even looking at their lens tests any more. The 300 2.8 IS II is trash? The 70-200 2.8 IS II worse than the 70-200 2.8 IS but better than the 70-200 2.8 non-IS? The 70-300 non-L better than the 70-300L and the 300 f/4?? Not sure what to say.