If I understand correctly, one of the key benefits of MF is that the files are 16-bit, giving files that can be pushed a lot more in post. Now why can't files from a DSLR be 16-bit? I know my Nikons have the option of using 14 bit (and I always use it), but why stop there if it improves the files so much?

The A/D converter in most digital backs is 14-bit or 16-bit, afaik. However, the least significant bits may be below the noise floor of the signal, in which case they contribute nothing to the image quality. The question is where this noise floor begins. For most backs it is probably in the 11-13 stop range. That's why a 14-bit file is enough. The last 2 bits in 16-bit are a waste of space.

I'm sure there are also DSLRs offering 12- or 14-bit files which are likewise wasting the least significant bits.

A sensor cell can hold, say, 30000 - 60000 electrons. Larger pixels hold more electrons. So the upper limit is 60000, which roughly corresponds to 16 bits. Now, this signal needs to be read out and converted to binary digits, and the readout has some noise. Some CMOS sensors do it on-chip and can have noise as low as 1-2 electrons; CCDs have much higher readout noise, more like 15 electrons.
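The arithmetic behind these figures can be sketched as follows (the electron counts and read-noise values are the illustrative numbers from the paragraph above, not measured specifications of any particular sensor):

```python
import math

def dynamic_range_bits(full_well_e, read_noise_e):
    """Engineering dynamic range in bits (stops): log2(FWC / RN)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative numbers from the post, not measured specs.
cmos_dr = dynamic_range_bits(30000, 2)   # low-noise on-chip CMOS readout
ccd_dr  = dynamic_range_bits(60000, 15)  # typical CCD with off-chip readout

print(f"CMOS: {cmos_dr:.1f} bits of usable signal")  # ~13.9 bits
print(f"CCD:  {ccd_dr:.1f} bits of usable signal")   # ~12.0 bits
```

On these assumed numbers, a 16-bit container stores roughly 2-4 bits of pure noise below the sensor's floor, which is exactly the "wasted bits" point made earlier in the thread.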

But what then contributes to making MF files so much more robust in post-processing? Is it the calibration related to ISO, so that if I underexpose my DSLR I get the same effect, and/or is it the size of the sensor?

But what then contributes to making MF files so much more robust in post-processing...

I am not sure that statement is really true; I think the answer may be attributed to many factors. Many MFD sensors do have a larger pixel pitch and DR than many smaller-format cameras--check DxO. But the MFD cameras don't sit at the top of those rankings by themselves. The pixel resolution may also help mask some effects of processing, and the optics tend to have a little more contrast than smaller formats. Generally speaking, MFD photographers take much more care over exposure than smaller-format shooters, and likewise may be more careful in post-production as well--the abbreviation for Medium-Format Digital is OCD. All of this adds up to better files and better results.

But you can also get some really horrendous files out of an MFD, as well as some really nice files out of smaller formats. How much is the machine and how much is the photographer remains unclear.

But what then contributes to making MF files so much more robust in post-processing? Is it the calibration related to ISO, so that if I underexpose my DSLR I get the same effect, and/or is it the size of the sensor?

Christopher

That's a very good question. It may be that the linearity of CCD sensors is better than that of CMOS, but I am just guessing here. It would be fun to do a side-by-side test against a high-end DSLR. I might do that and post results when I find some time.

I recently acquired a Leica S2. On a recent trip to the Eastern Sierras (California), I was fortunate enough to witness an incredible sunset, and in particular the "after glow" over the mountains. The sky was painted with shades of red, yellow, pink, and orange, more vivid and varied than I have ever seen. Along with the S2, I had a Nikon D700. I set both up at the same time and took repeated images. The D700 could not capture the colors to the extent that the Leica S2 did. The range of colors, the tonal gradations, the gradual shifts from one color to the next, were clearly superior on the S2 vs. the D700. It was visible on my calibrated monitor, and even more so in a print.

Indeed, throughout this trip, the "micro contrast" and tonal range were clearly superior on the MF S2 compared with the D700. Whether it was rocks, desert sand dunes, or salt crystals on the salt flats (Death Valley), there was a clear distinction. I do not mean merely in terms of resolution, but in the fine tonal contrast that lends "texture" and a 3-D appearance to the objects in the photo.

Obviously, this is NOT a scientific study, but it was a side-by-side comparison. I cannot explain why there is such a distinction, whether it is CMOS vs. CCD, 14 vs. 16 bit, or the algorithms used to interpret the collected photons. It is just an observation. When I show the images to colleagues, they too can identify the S2 vs. the D700 images.

But what then contributes to making MF files so much more robust in post-processing...

Now you're asking the right question! The A/D converter question is never going to get you anywhere, on this forum or anywhere else. It's like asking about a specific spec in the design of a subcomponent of a car engine: it's the system that matters, not the individual specs. Beyond the number of bits, A/D converters and sensors can perform at myriad levels and speeds, producing various amounts of heat, requiring certain amounts of power, and costing various amounts of money. No one, not Nikon, not Canon, not Phase, is going to select a component based on a single number, but rather on suitability within the entire system (cost, performance, size, speed, power, and heat all considered).

The post-processing question is much more interesting because it is so much more practical and so much easier to show. 35mm dSLR files simply do not contain equal depth of color, tonal gradation, shadow color accuracy, and post-processing malleability when compared with the digital backs I've worked with (including fairly old models like the P45).

Suffice it to say that medium format cameras don't have low cost, high ISO, fast shooting speeds, or a huge number of features, and they are more difficult to learn and to use; so if they don't deliver fantastic image quality and a good user experience, they won't be purchased by anyone. As a result, the engineering, marketing, and resources of medium format companies go very heavily into making the image capture the best quality possible (even if it means sacrificing a convenience or non-quality-related feature).

I recently acquired a Leica S2... The range of colors, the tonal gradations, the gradual shifts from one color to the next, were clearly superior on the S2 vs. the D700... I cannot explain why there is such a distinction, whether it is CMOS vs. CCD, 14 vs. 16 bit, or the algorithms used to interpret the collected photons.

Craig,

I think the three big factors in your example are (1) the superb S2 lenses and (2) the lack of an AA filter, which together provide the remarkable microcontrast and tonal texture; and (3) Kodak's Bayer-CFA filter bandpasses, which provide the remarkable colour range and gradations. It has little or nothing to do with CMOS vs. CCD or 14 vs. 16 bit, especially since the subject matter you describe is at the mid and high end of the histogram.

I think the three big factors in your example are (1) the superb S2 lenses... It has little or nothing to do with CMOS vs. CCD or 14 vs 16 bit...

Exactly my point. We can armchair-theorize all we want about individual components. But until/unless you can build your own camera with your own combination of sensor, CFA filters, AA filter, A/D converter, dark-frame technology, camera-specific raw processing tweaks, etc., the only discussion that makes sense is the system, as a whole, in real-world use. Discussing bit depths and A/D converters, analyzing numeric values from raw files in a third-party converter (DxO), and waxing philosophical about CFA filters is academically interesting, but you don't buy an A/D converter--you buy a system.

A sensor cell can hold say 30000 - 60000 electrons... Some CMOS sensors can have as low noise as 1-2 electrons; CCDs have much higher readout noise, more like 15 electrons.

So in this case the Nikon would actually be 14 bits while the MF back would be 12 bits. The figures are approximately in the ballpark.

If MF backs made good use of sixteen bits, they would also offer excellent high-ISO capability.

Best regards,
Erik

Erik,

That's about right for the MFD system, but even though I know you're using ballpark figures, you're still somewhat off on the D3X for two reasons:

1) It's not as good as, say, a Canon 1D Mk IV in terms of minimum read noise; the D3X is closer to 4 electrons than 2. That in itself is a drop of 1 bit of DR.

2) Your D3X DR calculation assumes that the minimum read noise is attainable AT THE SAME TIME as the full pixel capacity (well depth). But because of A/D converter noise, the lowest read noise in DSLRs is usually only reached at around ISO 800 or 1600, at which setting the maximum signal is reduced by around 4x-16x, depending on the base ISO.

So the calculation you did correctly gives the sensor's inherent DR, but fails to take the rest of the real-world camera system into account (the A/D contribution to noise). You are in good company: I've seen people quote from Roger Clark's DR tables and plots while missing the crucial nuance that these are sensor, not camera, values, and I know I've made the same mistake myself in the past. It's rather like saying "because I can run a 100m sprint at 5 m/sec, and because I can also run a marathon, I can run a marathon at 5 m/sec" ["because I can read out a truncated max signal at very low read noise, and because I can also store a max signal up to the full well depth, I can read out the full well depth at very low read noise"]. It's combining performance specifications taken under different, mutually exclusive circumstances.

I guess it's an easy mistake to make, because people like you and I grew up on the strict engineering definition DR = FWC/RN, which is fine for CCDs: their read noise rarely changes with ISO, and ISO settings are usually only "flags" which do not actually decrease the maximum signal (...and scientific CCDs have no concept of ISO in the first place!). But that definition needs to be adjusted for CMOS sensors with real ISO, max-signal, and read-noise variations.
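The combining-of-specs fallacy can be made concrete with a toy model (every number below is an illustrative assumption, not measured D3X data): the total input-referred read noise adds the sensor and A/D contributions in quadrature, and raising ISO shrinks the A/D term (the signal is amplified before conversion) while also shrinking the maximum recordable signal.

```python
import math

def dr_stops(max_signal_e, sensor_rn_e, adc_rn_dn, e_per_dn):
    """Photographic DR at one ISO setting, in stops.

    Total read noise is the quadrature sum of the sensor's own read noise
    and the A/D noise referred back to electrons via the gain e_per_dn.
    """
    total_rn = math.sqrt(sensor_rn_e**2 + (adc_rn_dn * e_per_dn)**2)
    return math.log2(max_signal_e / total_rn)

FWC = 40000        # assumed full-well capacity at base ISO, electrons
SENSOR_RN = 4.0    # assumed sensor read noise, electrons
ADC_RN = 1.5       # assumed A/D noise, data numbers (DN)

for iso_mult in (1, 2, 4, 8, 16):    # base ISO, 2x, 4x, ...
    max_signal = FWC / iso_mult       # amplification clips highlights sooner
    e_per_dn = 6.0 / iso_mult         # assumed gain: fewer electrons per DN at high ISO
    print(iso_mult, round(dr_stops(max_signal, SENSOR_RN, ADC_RN, e_per_dn), 2))
```

In this toy model the read noise only approaches the 4-electron sensor floor at high ISO, where the maximum signal has already been cut by 16x; at base ISO the DR is A/D-limited. So quoting FWC/RN overstates what the camera as a system actually delivers, which is the point above.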

Just now Doug reminded us again that "it's the system that matters", and he's absolutely right; but I think that he is underestimating the importance of the A/D converter as one of the kingpins of performance.

While Nikon did make a breakthrough with the D3X, managing to greatly reduce A/D noise so that it was much less of a limiting factor at lower ISOs, there is still a modest trend with ISO (D3X on Sensorgen). More recent cameras from Sony and Pentax are also greatly diminishing the trend of read noise with ISO by beating down the A/D component. This is how the Pentax K-5 took everyone by surprise with its ~14 bits of DR (K-5 on Sensorgen), and likewise the Nikon D7000.

I just performed a small test. I took the same image with a Canon 5D II and a Leaf Aptus II 12 back (both at base ISO). I matched the histograms as closely as I could, and deliberately underexposed. I also used an object with dark detail.

Then I imported both RAW files into C1 and pushed them both 2 stops. It's not just the noise performance that is better with the Leaf; the colour is far superior:
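A rough model of why pushing underexposed shadows diverges between two cameras (the read-noise values below are assumptions chosen only to illustrate the mechanism, not measured figures for the 5D II or the Aptus): a 2-stop push multiplies signal and noise alike, so what matters is the SNR already present in the shadows, and that is dominated by read noise there.

```python
import math

def shadow_snr(signal_e, read_noise_e):
    """SNR of a shadow patch: photon shot noise and read noise in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

shadow = 50  # electrons captured in a deep, deliberately underexposed shadow

# Assumed base-ISO input-referred read noise, for illustration only.
dslr_like = shadow_snr(shadow, 30.0)   # ADC-limited base-ISO readout
back_like = shadow_snr(shadow, 12.0)   # lower effective shadow noise

print(f"DSLR-like shadow SNR: {dslr_like:.2f}")
print(f"Back-like shadow SNR: {back_like:.2f}")
```

On these assumed numbers the shadow SNR differs by more than 2x before the push; after pushing, that gap shows up directly as shadow noise and, through the demosaic, as colour error, which matches the colour difference reported above.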

I just performed a small test. I took the same image with a Canon 5D II and a Leaf Aptus II 12 back... It's not just the noise performance which is better with the Leaf, but the colour is far superior.

Thanks for the test.

I am not that surprised; the DR of the 5D II, and therefore its shadow noise, is notoriously not that good. Even the 18 MP 7D does nearly as well, although its pixels are much smaller:

My intention was more to demonstrate the principles. I did not have any read-noise figures for the D3X and actually guessed that two electrons was a bit optimistic, but I got the impression that the latest sensors (K-5) are about there. On the other hand, I also guess that 30000 electrons is a bit on the low side. What I wanted to demonstrate is that the 16 bits claimed on some backs are irrelevant.

The other factor you point out, that it is not clear the minimum readout noise can be attained at full well capacity, was not at all obvious to me--thanks a lot!

I also made the point that I would expect cameras with >14-bit DR to excel at high ISO. I was always somewhat confused by the statement that MFDBs have large DR but don't perform well at high ISO; perhaps you can shed some light on the issue?

I only have a Sony Alpha 900 and a few lesser cameras. Personally, I have not found many images where DR was an issue; on the contrary, I'm quite impressed with the DR of the Alpha 900. I also know that the Alpha 900 is not the champ in the DR arena, but I have seen DR as a limitation in only very few cases. I'm aware that the D3X is much better using essentially the same sensor design, but I don't know how they achieve that.

I have seen a few comparisons between MFDBs and DSLRs:

- Miles Hecker compared the Pentax 645D and Nikon D3X. For me the Pentax had more saturated color and fine detail contrast, but I have not seen much difference in DR.
- Diglloyd compared the D3X to the Leica S2. In my view the D3X had definitively better DR than the S2.
- Peter Eastway published some comparisons between the P65+ and Canon 1DsIII. The P65+ had much better DR than the Canon.

There is little doubt that a larger sensor collects more photons, so shot noise will be lower with a bigger sensor. A larger sensor will also render all features at a larger size, so MTF will be higher. Lower shot noise and higher MTF result in better image quality. I'd say that is very well demonstrated in Graham Mitchell's images.
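The photon-collection argument is easy to quantify (assuming equal quantum efficiency, exposure, and framing, which is an idealisation): a sensor with k times the area collects k times the photons for the same scene patch, and shot-noise SNR grows with the square root of the photon count.

```python
import math

def shot_noise_snr(photons):
    """Photon shot noise is Poisson, so SNR = N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons)

photons_35mm = 1_000_000            # illustrative photon count for a scene patch

area_mf = 53.9 * 40.4                # full-645-style back, mm^2
area_35 = 36.0 * 24.0                # 35mm full frame, mm^2
ratio = area_mf / area_35            # about 2.5x the collecting area

snr_35 = shot_noise_snr(photons_35mm)
snr_mf = shot_noise_snr(photons_35mm * ratio)

print(f"Gain in SNR: {snr_mf / snr_35:.2f}x")  # sqrt(2.52) ~ 1.59x, about 2/3 stop
```

So the pure geometry of the sensor buys roughly two-thirds of a stop of shot-noise SNR at these dimensions, before any differences in pixel design, optics, or processing are considered.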

That's about right for the MFD system, but even though I know you're using ballpark figures, you're still somewhat off on the D3X for two reasons...

I am not that surprised; the DR of the 5D II, and therefore its shadow noise, is notoriously not that good. Even the 18 MP 7D does nearly as well, although its pixels are much smaller:

It's all I had at hand. As for the DR, the 5D II is above average in DR performance (according to DxO), and certainly one of the most popular cameras in use by professionals, so it's a pretty good indicator of why 35mm DSLRs have a particular reputation.