Author Topic: High DR Cameras and Low DR Monitors (Read 4857 times)

RFPhotography

A topic in another thread got me thinking about this and thought it might be relevant for discussion.

We now have cameras that can capture 14 stops of light in tests. We've had cameras that tested at around 12 stops, but what gets tested in a lab and what happens in the real world are often two different things, so it may not have mattered as much with 'mere' 12-stop sensors.

Most monitors in use are not wide gamut, so we're already working with a somewhat hobbled system in that not all the colour we capture can be viewed on our monitors. Even with wide gamut monitors, cameras can capture more than the AdobeRGB space (basically the upper limit of wide gamut displays), so we're still losing colour visually when editing.

Most monitors in use by those working seriously probably have a contrast ratio of around 800:1 to 1000:1. That's a maximum of about 10 stops (1000:1 ≈ 2^10).
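As a quick back-of-the-envelope check (a sketch in Python; the ratios are the ones quoted above, and "stops" here simply means doublings of light, i.e. log base 2 of the contrast ratio):

```python
import math

def contrast_ratio_to_stops(ratio: float) -> float:
    """Convert a display contrast ratio (e.g. 1000 for 1000:1) to stops."""
    return math.log2(ratio)

for ratio in (800, 1000):
    print(f"{ratio}:1 -> {contrast_ratio_to_stops(ratio):.1f} stops")
# 800:1 comes out to about 9.6 stops, 1000:1 to about 10.0
```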

Are we in a position now where, in addition to not being able to see all the colour in our images when editing (some of that can be got back in printing, with some papers that can reproduce colours outside AdobeRGB in some hues), we're also having to effectively sacrifice dynamic range in the image because we can't see it on the display? Or is it essentially moot because current printing technology only allows us to reproduce maybe 6 or so stops on paper?

What effective use are sensors with such broad dynamic range if that range can't be used? Will display and printing technology ever evolve to a point where it's even close to possible?

What effective use are sensors with such broad dynamic range if that range can't be used? Will display and printing technology ever evolve to a point where it's even close to possible?

That dynamic range can be used. Real world scenes have higher DR than monitors or paper, but that DR is mapped onto the output device. This is what tone mapping is about: compressing a high input DR onto a lower-DR output device. Nothing is wrong; the process makes sense.

If one day monitors achieve enough DR to make that tone mapping process unnecessary, images could be output linearly to the monitor without any processing, and we would have the same perception looking at them as we have looking at the real world scenes.
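As a minimal sketch of what such tone mapping does, here is the well-known Reinhard global operator, which squeezes an unbounded linear luminance range into the [0, 1) range a display can show (the scene values below are made up for illustration):

```python
def reinhard(luminance: float) -> float:
    """Reinhard global operator: maps linear luminance in [0, inf) into [0, 1)."""
    return luminance / (1.0 + luminance)

# Scene luminances spanning a wide range, in arbitrary linear units:
for lum in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"{lum:8.2f} -> {reinhard(lum):.4f}")
# Mid values keep most of their separation; the extremes are compressed hard.
```

The design point is exactly the one made above: the compression is deliberate and lossy, trading absolute scene contrast for something the output device can reproduce.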

Or is it essentially moot because current printing technology only allows us to reproduce maybe 6 or so stops on paper?

And yesterday, negatives captured more or less the same 12+ stops, and paper was already limited to the same 6 stops. Same problem, more or less: it might be of interest to study the solutions of that age.

Then as now, the main photographer's skill is to convey some of the reality in a drab piece of paper.

If one day monitors achieve enough DR to make that tone mapping process unnecessary, images could be linearly output to the monitor without any processing, and we would have the same perception looking at them as we have looking at the real world scenes.

I wonder if having 15-stop DR monitors might not prove more challenging, as one won't have to do the DR mapping but will still need some massaging to convey the 3D impression. Or will they really be 3D as well?

There are some "HDR" monitors out there, using e.g. multiple modulated LED backlights behind a regular LCD panel. I have never seen one in person, though.

A digital camera presents one set of capabilities, a display another, and print yet another. There is no reason to expect a direct relationship between the limitations of those technologies. Very few people have access to 36-megapixel or 180-megapixel computer displays. They are still able to edit such large files, with some adaptations, on their lowly 2MP or 4MP screens.

A simple example is that you can use a graduated filter to improve the sky when processing the raw image, then you can add some fill light so your shadows show some subtle detail, and you have tamed a DR of 12 stops into perhaps 8-9 stops of DR on screen.
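That graduated-filter-plus-fill-light workflow can be sketched numerically. This is only an illustration with made-up values (a one-column "image" spanning 12 stops, sky at the top), not any particular raw converter's algorithm:

```python
import numpy as np

# A made-up one-column "image" in linear units, sky at the top,
# shadows at the bottom, spanning 12 stops (4096:1).
col = np.array([4096.0, 2048.0, 256.0, 16.0, 4.0, 1.0])

# Graduated filter: pull the brightest (sky) rows down by up to 2 stops.
grad = np.linspace(0.25, 1.0, col.size)   # 0.25 = -2 stops at the top
out = col * grad

# Fill light: lift the deepest shadows by blending in a small pedestal.
out = out + 2.0 * (1.0 - out / out.max())

stops = np.log2(out.max() / out.min())
print(f"output range ~ {stops:.1f} stops")  # roughly 8.4 stops, down from 12
```

The exact numbers are arbitrary, but the shape of the result matches the post: two localised moves are enough to fold a 12-stop capture into the 8-9 stops a screen can show.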


RFPhotography

I agree with many of the points made and posted this sort of as a 'devil's advocate' type of discussion. Not to try and stir the pot, but hopefully to generate some opposing viewpoints and a reasonable discussion. I guess many of us are pretty much on the same page.

The only thing I'd push back against a bit is Niko's idea of negatives being able to render 12 stops. That's a bit broad. Given that you linked to "The Print" I'm presuming you were referring to b&w negatives. Colour negs couldn't contain that much brightness range. B&W, yeah, they could come close. The difference between film and digital, though, is that you could always 'see' the full range in the negative. With digital you can't. With film, you knew how much range you had and you knew how you had to print the neg to contain that range and produce something useful in the print. With digital, you never actually get to see the full range that was captured. You're always looking at something that shows less than the range captured; at least with the more current DSLRs.

Bart, I wasn't confusing linear capture with gamma encoded screen view but that is a very good point. And you're right, particularly with what we see in Lr4.x/ACR7.x with the 2012PV that does make a significant difference.

If one day monitors achieve enough DR to make that tone mapping process unnecessary, images could be linearly output to the monitor without any processing, and we would have the same perception looking at them as we have looking at the real world scenes.

I surely hope we do not get there; I don't want to have to wear sunglasses in front of my screen...

RFPhotography

Not really. At least not in mainstream cameras. Are there? There are things that happen in the camera that have the effect of applying a small shoulder at the top of the curve, but it's still quite small. Anti-blooming circuitry can result in the response straight off the sensor having a small shoulder. Things like highlight protection and active D-Lighting will delinearise as well. But those are camera settings that, at least, Adobe products don't pick up on anyway. Don't know about other raw converters.

Are there? The DSLRs and MF backs I know are pretty linear in their response curve over the majority of the luminance range. Of course the sensor read noise will reduce the signal-to-noise ratio, as will shot noise. Also, some clipping or anti-blooming safeguards may be built in, but that potential partial departure from linearity will be cancelled by the HDR compositing/assembly process, as are things like optical glare. Of course the ADC and accompanying electronics can change that captured signal on the way out (e.g. use a lookup table and apply a gamma curve), but then we are way beyond the capture stage.

The point was that there will be a gamma adjustment applied to the captured signal before it is displayed, which will change the output dynamic range. Add in some tonemapping, and we'd be lucky if more than 8 or 9 stops of DR are left for us to view.
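For reference, the gamma adjustment being described typically looks like the standard sRGB transfer function, which redistributes linear values in [0, 1] before display (constants are from the sRGB standard):

```python
def srgb_encode(linear: float) -> float:
    """Standard sRGB transfer function: linear light in [0, 1] -> encoded [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear          # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

# 18% mid-grey in linear light lands well above 0.18 after encoding:
print(f"{srgb_encode(0.18):.3f}")  # about 0.46
```

Note the encoding itself is reversible; it redistributes code values rather than clipping them. The actual loss of viewable stops comes from the display's contrast ratio plus any tone mapping, as described above.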

It looks more like it is changed to be non-linear. "A typical image-capture section of the design consists of the sensor, some control logic, and some analog processing. It also includes either an ADC if the image is to be digitized, or perhaps an analog mixing circuit to convert the image into video."


Also, the Fuji CCD designs with variable sensel size/sensitivity could be regarded as non-linear as a whole (though each sensel may have been linear)

Yes, the signal of multiple (linear) sensels is added (adding signals is best done in linear gamma to avoid colour shifts).
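A quick illustration of why combining signals is done in linear light. Averaging gamma-encoded values directly gives a different (darker, shifted) result than decoding, averaging, and re-encoding (a simple power-law gamma of 2.2 is used here as a stand-in encoding):

```python
GAMMA = 2.2

def to_linear(v: float) -> float:
    """Decode a gamma-encoded value back to linear light."""
    return v ** GAMMA

def to_gamma(v: float) -> float:
    """Re-encode a linear-light value."""
    return v ** (1 / GAMma) if False else v ** (1 / GAMMA)

a, b = 0.1, 0.9  # two encoded sensel values

# Correct: decode, average in linear light, re-encode.
linear_avg = to_gamma((to_linear(a) + to_linear(b)) / 2)

# Naive: average the gamma-encoded values directly.
naive_avg = (a + b) / 2

print(f"linear-light average: {linear_avg:.3f}, naive average: {naive_avg:.3f}")
# The linear-light result (~0.66) is noticeably brighter than the naive 0.5;
# per channel, that mismatch is what shows up as colour shifts.
```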

AFAIK, it has been impossible to prove audible advantages of SACD over CD at comfortable listening levels. Hearing differences between 14 bits and 16 bits (properly dithered) is challenging enough.

If the playback gain is e.g. loud enough to cause permanent hearing damage _while_ the actual encoded signal is very weak, you will eventually be able to hear the dithered noise-floor. That is not a "fair" or relevant test for how people actually listen to music.