Well, I think the thread is turning into more of an engineering discussion than a photographic one. In the end, what we see in these examples is that for a given job, with some cameras, you will have to work harder to get highlights and shadows to work. Which means more files at different exposures to blend into the desired image in Photoshop. That's my thought.

Two years ago, I went through the process of giving up my Canon 5D Mark II for the D800, which gave me the ability to obtain cleaner files, with more DR and cleaner shadows. All this meant less work for me and happier clients.

ACH

So is there a consensus that, because of flare, more than 11 stops of practical DR is not possible with current camera designs (not sensors)? How much practical DR can we already get from the better camera/sensor combos in the absence of strong flare from outside the image field?

Personally I find the theoretical part interesting too, but you are right that cameras are just tools. I think the A7r is an interesting tool, especially for someone lusting after DR and resolution. Personally, I am not a potential buyer right now.


If the D800 is in lossless-compression mode, the gaps are nearly non-existent; only white-balance pre-conditioning adds them, and those are very few.

The results with digital backs strongly depend on exposure, which includes the colour of the light. For relatively well-exposed regions the histogram is totally gapless.
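For anyone curious, the gap-counting itself is simple. A minimal numpy sketch on synthetic values (not real raw data) shows how a white-balance pre-scaling gain introduces exactly the kind of histogram gaps mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 14-bit raw code values (illustrative only, not real sensor data).
raw = rng.integers(2000, 6000, size=200_000, dtype=np.int64)

def missing_levels(values):
    """Count code values that never occur inside the occupied range."""
    counts = np.bincount(values)
    lo, hi = values.min(), values.max()
    return int(np.sum(counts[lo:hi + 1] == 0))

print(missing_levels(raw))                  # dense source data: no gaps
wb = np.rint(raw * 1.3).astype(np.int64)    # white-balance pre-scaling
print(missing_levels(wb))                   # a gain > 1 leaves unused codes
```

The scaled data maps 4000 possible input codes onto a range of 5200 output codes, so at least 1200 levels can never occur; this is what shows up as comb-like gaps in a raw histogram viewer.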

Thanks Iliah,

So, are you saying that the raw files of the a7r, due to their lossy nature, contain significantly less information than those of the D800, but that the gap between the D800 and the latest backs is small?

Regarding exposure, I see your point. Personally I am a base-ISO guy and I try to expose for the highlights, getting maximum exposure without clipping. It seems that Chris finds ISO 100 the lowest he wants to go and exposes to protect the highlights. Life is often a compromise.

I am quite interested in this stuff, but I guess some of the discussion is a bit off topic for some of our friends.

I am thankful for your explanation of the Sony ARW format. I guess it could be that the Bionz is only twelve bits wide but they want to push 14 bits of data through it; that sort of makes sense to me. What about the Alpha 99? That camera shows up with 14 bits in RawDigger, as far as I know. Do they use the same solution?

Hi Bernard,

> are you saying that the raw files of the a7r, due to their lossy nature, contain significantly less information than those of the D800

Yes.

> but that the gap between the D800 and the latest backs is small?

No. I see better colour from the digital backs and more balance of the system towards high resolution, but I also see that DB manufacturers may not be doing enough explaining exposure, vibration, lenses, and filters.

Without going any further into Sony's innards, I would say they are capable of creating very decent digital backs, even in 135 format, but to do so they need to drop lossy compression and design it as a back, not as a camera.

Iliah,

Thanks for all your answers to my persistent questions and quibbles; I think I understand the situation better now.

About the A7R being "only 13-bit even before compression": do you think that its ADCs themselves are only 13-bit, perhaps using 14-bit output format just because for whatever reason, an even number of bits is always used for output? And yet the D800 has a true 14-bit mode, probably from the same sensor? That reminds me of an earlier case, where it was speculated that a Nikon body using a Sony EXMOR sensor was doing multiple non-destructive ADCs at each pixel and averaging to get an extra bit or two in its low frame-rate mode, whereas a Sony body using the same sensor only offered 12-bit.

do you think that its ADCs themselves are only 13-bit, perhaps using 14-bit output format just because for whatever reason, an even number of bits is always used for output?

The ADCs used in those sensors are comparator-based, one can extract very high precision from those if the time to raise the ramp in fine steps is enough. A short description of the ramp-compare ADC is on http://en.wikipedia.org/wiki/Analog-to-digital_converter The design decision is "what is good enough", to keep acquisition time short to ensure the target frame rate, and to have good enough precision.
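As a toy illustration, not the actual on-chip design, the ramp-compare idea fits in a few lines of Python. Note that each extra bit doubles the number of fine steps, which is exactly the acquisition-time versus precision trade-off described above:

```python
def ramp_compare_adc(vin, vref=1.0, bits=12):
    """Single-slope (ramp-compare) conversion sketch: a counter runs
    while a linear ramp stays below the input voltage; the count at
    the moment the comparator flips is the digital code."""
    steps = 1 << bits
    step = vref / steps          # finer steps = more precision
    code, ramp = 0, 0.0
    while ramp < vin and code < steps - 1:
        ramp += step             # ramp rises one fine step per clock
        code += 1
    return code

print(ramp_compare_adc(0.5, bits=12))   # 2048 of 4096 levels
print(ramp_compare_adc(0.5, bits=14))   # 8192 of 16384 levels
```

The same half-scale input costs four times as many ramp clocks at 14 bits as at 12, so the precision you can afford is set by the time budget per row readout.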

And yet the D800 has a true 14-bit mode, probably from the same sensor?

The sensor is very similar, but the supporting circuitry, even that on the sensor chip, is somewhat different, and it is timed and sequenced differently. Looking at a dump of the raw data one can see it: say, the A7r and the D800 have different optical blacks around the frame. One can do quite a lot just by programming the internal sensor registers differently.

That reminds me of an earlier case, where it was speculated that a Nikon body using a Sony EXMOR sensor was doing multiple non-destructive ADCs at each pixel and averaging to get an extra bit or two in its low frame-rate mode, whereas a Sony body using the same sensor only offered 12-bit.

It was the same thing, just different sawtooth programming: for the slower frame rate and higher bitness, the steps were finer.
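Back-of-envelope, the cost of those finer steps is easy to see; the 200 MHz ramp clock here is an assumed figure for illustration, not anything published:

```python
# A single-slope ramp needs up to 2**bits fine steps per conversion,
# so each extra bit doubles the worst-case ramp time.
clock_hz = 200e6   # assumed ramp clock, for illustration only
for bits in (12, 13, 14):
    t_us = (2 ** bits) / clock_hz * 1e6
    print(f"{bits}-bit ramp: {t_us:.2f} us worst case")
```

Whatever the real clock is, the ratio holds: two extra bits means a 4x longer ramp, hence the slower frame rate in the high-bitness mode.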


Arghh, this ADC was a standard electronics-course design example; I must have spent hours calculating its parameters when young.

The ADCs used in those sensors are comparator-based, one can extract very high precision from those if the time to raise the ramp in fine steps is enough....It was the same different sawtooth programming, for slower frame rate and higher bitness the steps were finer.

Thanks! A much better and simpler answer than the speculation I had heard before.


Sorry to add to the geeky part of the discussion. Mentioning the loss of resolution due to Sony's more or less logarithmic conversion sounds bad. But once you realize that the dominant noise source at higher illumination levels is shot noise, a noise source that grows with the light level, it becomes clear that the full linear resolution provided by Nikon and many back makers is of little practical use: the extra resolution in the highlights is completely swamped by shot noise.

A linear analog-to-digital converter devotes half of its resolution to the highest full stop of its dynamic range. So in the case of a 14-bit ADC, the highest EV of the total range is divided into 8192 values. One EV down, the resolution is 4096, and so on, down to the last bit, the one in the deepest shadows, where a whole EV is expressed with a single bit. Apart from the rather ridiculous resolution the highest EV gets, that amount of resolution is not even present in the signal going into the ADC: shot noise reduces the SNR at the top of the illumination range to, say, 45 dB, which is a ratio of 1:178. So most of the ADC resolution is used to express noise, which is of little practical value.

Sony is no doubt aware of this, and reduces the resolution to a more useful range, reducing file size. As long as shot noise is high enough to dominate the resolution of the Sony approach at each level of illumination, dither will ensure that all tonal gradations are smooth. The Sony a900 had both this compression algorithm and uncompressed RAW; I was never able to find any visible difference between them.
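The arithmetic behind this is easy to check. The 50,000 e- full well below is an assumption for illustration, not a measured figure for any of these cameras:

```python
import math

full_well = 50_000                    # electrons at clipping (assumed)
bits = 14
e_per_code = full_well / 2 ** bits    # electrons spanned by one ADC code

for stops_down in (0, 1, 2, 4):
    n = full_well / 2 ** stops_down   # mean signal, electrons
    shot = math.sqrt(n)               # shot-noise sigma, electrons
    snr_db = 20 * math.log10(n / shot)
    print(f"-{stops_down} EV: SNR {snr_db:.1f} dB, "
          f"shot noise spans {shot / e_per_code:.1f} ADC codes")
```

Under this assumption, at clipping the shot noise alone spans over 70 of the 14-bit codes, which is why a coarser, roughly square-root-spaced encoding plus dither can look identical to the full linear data.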