Unfortunately, it is not always feasible for ordinary users to determine quantities such as MTF, "microcontrast", etc.

What is an "ordinary" user? Surely, most camera-owning people would call it extra-ordinary to spend $1000 or $100000 on camera gear, and to participate in a discussion on luminous-landscape.com about the real-world advantages of using 16-bit analog-to-digital converters in cameras having a larger image sensor than 24 x 35 mm?

For most camera users, MTF is irrelevant because they get the photographs they want (or are willing to make) without it. For those who spend the time and resources, and have the expectations that make MTF relevant, I don't see why it is such an obstacle. After having tried to understand the AF options of my Canon 7D, I think that MTF is a relatively well-documented and intuitive concept...

Quote

Special test charts, methodology and software have to be used. And still, what about real images that you have acquired? I.e., images of landscapes, cats, oranges, etc., and not test charts in a controlled setting. To complicate matters further, there is the argument over what is due to lens sharpness, pixel pitch, FOV, and what not.

It is always difficult to relate "lab tests" to real-world usage. Is your new car really able to pull 0.36 liters of gas per 10 km given your usage pattern? Is your Kenwood really able to output 1200 watts of dough-massaging (or is most of it going to heat and sound?), and what is the relevance for making bread? One reason why people and engineers still use these "synthetic" measurements is that they are (or should be) universal, repeatable and at least correlated with significant usage patterns. I may not drive like whatever EU/US pattern is used to measure car fuel efficiency. You may not either. But perhaps the measure is still sufficiently robust for me or you to aid in choosing a car without making too large errors?

I think it is hard to discuss the pros and cons of certain equipment with people who seem to invent their own terminology, stating the equivalent of (slightly exaggerated) "my xyz2000 may not have better measurable MTF than your average Canikon, but it has a lot more shing-a-dong leading to better subjective eye-detail, and its razzmatazz renders silky-smooth gradations that should be plain to see. Further, its micro-dynamic-range means that you avoid the shadow collapse so easily seen on mainstream cameras."

How many litres are there in one meter? I think that we have had this discussion before. I am able to use abstract measurements in my job and private life. A lot of other people are as well. If you cannot (or if you make up a fictitious character for argument's sake), then I don't know how to help out, really.

Regarding the topic: It would be interesting to see a "good" and a "critical" raw image from e.g. a Leica S2 subjected to the same ("good") development parameters, where the 15th and 16th bits were a) untouched, and b) replaced by a suitably distributed random function. If the two LSBs can be replaced by "noise" without ever visibly affecting final IQ, then they are not needed.

In addition, it would be interesting to look at the two LSBs in isolation (amplified to fill whatever range the distribution format allows).
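A hypothetical sketch of that experiment, assuming the raw data has already been decoded into a 16-bit NumPy array (the actual raw loading and development steps are omitted):

```python
import numpy as np

def randomize_lsbs(raw, n_bits=2, seed=0):
    """Replace the n least significant bits of a 16-bit raw array
    with uniformly random bits, leaving the upper bits untouched."""
    rng = np.random.default_rng(seed)
    mask = (1 << n_bits) - 1              # 0b11 for n_bits=2
    noise = rng.integers(0, mask + 1, size=raw.shape, dtype=np.uint16)
    return (raw & ~np.uint16(mask)) | noise

def isolate_lsbs(raw, n_bits=2):
    """Extract only the n LSBs and amplify them to fill the full
    16-bit range, so they can be inspected as an image on their own."""
    mask = (1 << n_bits) - 1
    lsbs = raw & np.uint16(mask)
    return (lsbs.astype(np.uint32) * (65535 // mask)).astype(np.uint16)
```

If the developed results from the untouched array and from `randomize_lsbs(raw)` are visually indistinguishable, the two LSBs carry no picture information.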

-h

JIDM is a measure of detail in usual photographic, textured images, especially when the traditional MTF notion can't be directly applied.

Sincerely,

Joofa

And for that I am sure that it is a fine tool. My objection was that you seemed to think that MTF was too difficult a concept for photographers to understand or use. I think that most "serious" photographers are in fact able to understand complicated technical concepts if they think that it will help them do their thing.

My camera uses a 16-bit ADC. Perhaps under ideal conditions that's irrelevant but under less-than-ideal conditions when I have to push the files' limits it falls apart much less often than files from 12- and 14-bit cameras. Would your left brain like to explain this?

You have failed to isolate the variables, a typical failure of a non-scientific approach. You should ask, "what other variables contribute to the observed robustness of the files?" As pointed out earlier, photon noise and read noise have a lot to do with clean shadows. The newer generation of dSLRs such as the Nikon D7000 and Pentax K-5 have very low read noise and very clean shadows, but are limited by photon noise because of their small sensor size. Photon noise predominates except in the deepest shadows. On the other hand, the MFDBs have a large sensor area allowing collection of more photons and a better SNR from photon noise, but are handicapped by high read noise. A noise analysis using photon noise and read noise such as in Table 2 of Roger Clark's treatise can describe the noise performance of a sensor fairly well, but other factors such as pattern noise and unfavorable coefficients for the 3x3 matrix transform as referenced by Emil in an earlier post come into play as well.

When micro-contrast and image detail enter into the equation, many more variables are involved.
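The photon-noise/read-noise model referred to above is simple enough to write down. A minimal sketch, assuming signal and read noise are both expressed in electrons:

```python
import math

def total_noise(signal_e, read_noise_e):
    """Shot noise (sqrt of the signal, in electrons) and read noise
    added in quadrature, in the style of Roger Clark's noise model."""
    shot = math.sqrt(signal_e)
    return math.sqrt(shot**2 + read_noise_e**2)

def snr(signal_e, read_noise_e):
    """Signal-to-noise ratio for a given signal level."""
    return signal_e / total_noise(signal_e, read_noise_e)
```

In the deep shadows (signal near zero) the read-noise term dominates, while at higher signal levels the sqrt(signal) shot-noise term takes over; with 13 electrons of read noise, the crossover is at a signal of 169 electrons.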

As far as I know "Telyt" is using a Leica DMR so it's a 1.3 crop factor digital back with a Kodak CCD. I don't know the pixel size. I also know that "Telyt" takes very good pictures, so whatever the bits, the camera serves him well.

Yes, Doug (Telyt) takes some wonderful bird and wildlife pictures with his Leica/DMR.

I had a DMR and can also attest to its fantastic color and detail. I think my post where I measured the DR of the DMR and Canon 5D using Imatest and a Stouffer transmission test wedge can still be found on these forums. In terms of DR, I don't think the DMR would equal current cameras, but it did well then.

One thing I've always felt is that people measure DR from light to dark, but there should be some kind of way to measure the camera's ability to reach across the colors similarly. Some cameras are able to show subtle changes in colors, and others not. The DMR was one of those that could render subtle color transitions very well. I've always felt this was something the Kodak CCD sensors were very good at, and have always, rightly or wrongly, credited that ability to the 16-bit A/D pipeline.

DxOMark measures something they call "color sensitivity" but I don't know what it is worth.

Regarding color rendition I'd suggest that it is essentially a function of the color filter array. The filters in the CFA have different transmission characteristics, and those affect the possible rendering of colors. The amount of overlap between the channels matters a lot. A CFA optimized for high ISO may have different characteristics from one that is intended to give good separation of colors. The manufacturing process may also matter.

Yes, the color probably has something to do with the filter array. But I am also betting that, in addition, good A/D electronics help a camera render those subtle color transitions, and that cameras with higher-bit pipelines can do it better.

I think that it is often sufficient to have a model that considers the SNR/DR/Noise/non-linearity of luminance, and a separate model that considers color-characteristics as a linear, noise-free function of wavelength. This model certainly breaks down when considering raw development, but I think it is a good one for analyzing sensor/camera/raw behaviour.

After all, the color filters (ideally) affect each sensel individually with some spectral sensitivity modification, and from the sensel and all the way to the raw file, each element is (ideally) independently and approximately equally processed.

When your raw developer turns the bits and bytes into a pleasing image, all kinds of tricks can be done, including "hiding" noise by doing less accurate colors. But then we have access to before and after files, and a lot of knowledgeable people on this forum that can guide us.

The S2 is a very good camera and Telyt is said to take excellent pictures. However, his image quality is likely related to factors other than the bit depth of 16. Your DR analysis of 11-12 stops would require a bit depth of around 12 bits for encoding. One can reach similar conclusions through a noise model similar to that used by Roger Clark, where the two main sources of noise, shot noise and read noise, can be added in quadrature to obtain total noise. As Emil has explained, it makes little sense to quantize the signal from the sensor in steps much finer than the level of the noise.

The Leica S2 uses the KAF-37500 sensor, which was designed specifically for the S2, and Kodak has not released a data sheet. However, they do state that the chip uses the 6.0 micron TrueImage technology, and the performance is probably similar to other chips in this series. The KAF-40000 is one of these chips, and it has a full well of 42K electrons and a read noise of 13 electrons. The DR is listed at 70.2 dB (11.7 stops), in line with your estimate.

The chart below shows the Clark-style noise model along with the sensor gain (electrons per data number [DN]) for various bit depths, assuming that the full range of the ADC is utilized. At an SNR of 1, total noise is 13 electrons, and even at a bit depth of 12 the gain is 10.25 electrons/DN. This meets Emil's criterion. A bit depth of 14 would give a margin of error, but a bit depth of 16 serves only to quantize noise more finely.
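For anyone who wants to reproduce these figures, a minimal sketch of the calculation using the KAF-40000 numbers quoted above:

```python
import math

FULL_WELL_E = 42_000   # KAF-40000 full well, electrons
READ_NOISE_E = 13      # KAF-40000 read noise, electrons

# Engineering dynamic range: full well over read-noise floor.
dr_db = 20 * math.log10(FULL_WELL_E / READ_NOISE_E)   # in dB
dr_stops = math.log2(FULL_WELL_E / READ_NOISE_E)      # in stops

# Sensor gain (electrons per DN) if the ADC range spans the full well.
for bits in (12, 14, 16):
    gain = FULL_WELL_E / (2 ** bits)
    print(f"{bits}-bit ADC: {gain:.2f} e-/DN")
```

Running this gives roughly 70.2 dB (11.7 stops), and 10.25 e-/DN at 12 bits, matching the figures in the text; at 16 bits the step size (about 0.64 e-/DN) is far below the 13-electron read-noise floor, so the extra bits only slice up noise.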

Regards,

Bill

Given that the Hasselblad H4D-40 and the Pentax 645D have the same Kodak KAF-40000 sensor, how come Pentax states 14 bits and Hasselblad 16 bits?

Pentax probably uses a specially built ASIC for their hardware; Hasselblad is probably using at least some off-the-shelf components. The CCD sensors we have today deliver nowhere near 16-bit signals, more like 11-12 bits.

Pentax is probably one of the better MF systems regarding the number of bits utilized.

16 bits seems to be a good marketing argument, as many posters on this forum actually believe there is some benefit to having 16 bits, although it is quite clear that this is not the case on any CCD-based MFD back, or any other common photographic device.

Erik,

Thanks for pointing that out. Somehow I thought he was using the S2, but the comments make even more sense for his DMR.

The enclosed figure is a pretty good indication of the actual amount of information available from the sensors. Both the Hasselblad and the Pentax actually use about 11.5 bits. That is less than the best DSLRs of today, but both Pentax and Hassy have more pixels. Very few cameras today actually utilize more than 12 bits, and those seem to use Sony sensors, which have a massive number of on-chip ADCs.

If you look at the figures, the Pentax seems to have a one-stop advantage regarding ISO. That could possibly come from Pentax having microlenses and Hasselblad not.