What you describe is real, and a direct result of the sensor physics/electronics.

The D800 and other recent CMOS cameras can be crudely categorized as "low signal, low noise", while the MFD units - especially the older backs with larger, fewer pixels - are "high signal, high noise" in comparison. They do not achieve their high signal because they are more light-sensitive (on the contrary, they are typically one stop lower in quantum efficiency), but because (1) they are set one stop lower in base ISO, making you double the exposure time to receive more signal, and (2) they have larger pixel surface areas. And they have larger pixel well capacities to accommodate that larger signal.

Analogy - consider a basin and a coffee mug. I leave them both outside in a rain shower. If I cover half of the top of the basin beforehand, it will catch only half of the water falling on it - this is like the one stop lower q.e. of the MFD sensors. Nevertheless, it still has a larger open area to the rain than the mug, so it will still catch more raindrops in a given time - this is like the pixel size/surface area difference. If I additionally take the mug inside halfway through the shower, the basin gets a further doubling of its catch - this is like the 1 stop lower ISO and corresponding longer shutter speed. Then I take the basin inside as well. I carefully pour the contents of the mug into a measuring jug, measuring all but a few drops that adhere to the sides of the mug. I try to do the same with the basin, but I slosh and spill some water - this is like the readout noise difference. I reckon that in both cases, I have measured 99% of the rainwater collected, or to put it another way, the volume I measured was 99 times larger than the estimated error - this is like the dynamic ranges being the same.

Anyway, if (as I have done) one models the signal to noise curves for the individual noise components in a sensor, and for their composite effect, you get quite different trends for these different types of sensor. The MFD units are killed by high readout noise at the shadow end (which is why they are poor at high ISO, when the shadows are pushed to become mid tones), but in the base ISO mid tones and highlights, where readout noise slips into insignificance behind shot/Poisson noise, they have better S/N due to their larger signals and lower percentage shot noise - as you put it, "much more graceful gradation from mids to highlights". The D800 and its ilk, on the other hand, will keep discriminating stop after stop of shadow detail thanks to its low readout noise, but as it is doing so with mere handfuls of signal photons at the bottom end, it can look rather quantized compared to the thousands of distinct shades of equally low S/N scuzz at the bottom of the DR range for MFD images.
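To make that trend concrete, here is a minimal sketch of such a composite S/N model (the well capacities and read-noise figures are made-up illustrative numbers, not measurements of any particular camera): the signal carries shot noise of sqrt(S) electrons, and read noise adds in quadrature.

```python
import numpy as np

# Composite per-pixel SNR: shot noise sqrt(S) plus read noise r, in quadrature.
# The figures below are illustrative only, not measured values for any camera.
def snr_db(signal_e, read_noise_e):
    noise = np.sqrt(signal_e + read_noise_e ** 2)
    return 20 * np.log10(signal_e / noise)

# "MFD-style": twice the signal (bigger pixels, lower base ISO) but high read noise.
# "CMOS-style": half the signal but very low read noise.
for s in [4, 40, 400, 4000, 40000]:
    print(f"{s:6d} e-  MFD-ish {snr_db(2 * s, 15):6.1f} dB   CMOS-ish {snr_db(s, 3):6.1f} dB")
```

Running this shows the crossover described above: the low-read-noise sensor wins in the deep shadows, while the larger signal wins once shot noise dominates in the mids and highlights.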

In short, as I have said often, the quality along the range covered by the DR is very important... few people realize this, and they look only at the overall quantity of the DR.

In your analysis you assume the sensors are linear in the high range; but AFAIK some new CMOS sensors use anti-blooming to get a (tunable) shoulder in the highlights. I checked this assumption with Aptina and they concurred, and said this had been going on for some time.
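If that is right, the response would look roughly like a soft knee rather than a hard clip. A purely illustrative sketch (the knee position and curve shape are my assumptions, not anything documented by a vendor):

```python
import numpy as np

# Hypothetical sensor response with an anti-blooming "shoulder": linear up to a
# knee, then an exponential roll-off toward full scale instead of a hard clip.
def shoulder_response(x, knee=0.8, full=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(
        x <= knee,
        x,  # linear region
        knee + (full - knee) * (1.0 - np.exp(-(x - knee) / (full - knee))),
    )

for exposure in [0.5, 0.8, 1.0, 1.5, 2.0]:
    print(f"exposure {exposure:.1f} -> recorded {float(shoulder_response(exposure)):.4f}")
```

The exponential term is chosen so the slope is continuous at the knee; the recorded value approaches full scale asymptotically instead of clipping outright.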

Edmund

Can you tell us more about how this works? Is bleed-over from sensor well to sensor well collected / measured somehow, or what?

Edmund may have a point here. It may be it is simply a waste gate, leaking charge to ground?! It may also explain something I see. In general I try to expose for the highlights, just below clipping. What I have seen recently is that my P45+ exposures are quite dark, while I guess that my Alpha 99 exposures are more normal. It is very difficult to compare images, as this is an observation in post, and where there are clouds the light is changing fast. I seldom shoot exactly the same subject with DSLR and MFD even when I carry both (I never carry MFD only, as I need my long zooms and ultrawides).

What I do is that I check the images in RawDigger looking at real raw data.

So what I see is that ETTR images exposed for the highlights are dark on the P45+, and I struggle with shadow detail. On the A99 shadow detail is always there, never a problem.

If Edmund is right it could be that I can expose more on the Alpha 99 without saturating pixels.

I enclose one of those dark P45+ shots. In this case I had little problems with dark details.

On your posted data, I find that the right-hand part of the histogram curve drops down like it had hit a brick wall. It looks as if you have hit saturation, which would not be surprising when one sees the picture: the highlights in the clouds are very bright.
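As a quick illustration of what that "brick wall" looks like numerically, here is a synthetic example (hypothetical 12-bit data, not the actual P45+ file) that flags saturation by checking how many samples pile up at the top raw level:

```python
import numpy as np

# Synthetic 12-bit channel with bright highlights clipped at 4095.
rng = np.random.default_rng(0)
raw = np.clip(rng.normal(3500, 300, 100_000), 0, 4095).astype(int)

top = raw.max()
clipped_fraction = np.mean(raw >= top - 2)  # samples piled up at the ceiling
print(f"top raw level: {top}, fraction at ceiling: {clipped_fraction:.1%}")
if clipped_fraction > 0.01:
    print("histogram ends in a wall -> likely saturation")
```

A histogram that tapers off smoothly before the maximum level would instead give a near-zero fraction at the ceiling.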

There was a time when upgrading to Medium Format was the evident way to gain IQ. People started with 35mm, with a long learning curve, and jumped to MF/LF to pursue the goal of high-quality photography. In film days the gap was very visible; it was logical.

Jumping to MF was kind of mandatory to "future proof" your hard work, even at the start of digital photography. Then the 35mm industry, with CMOS technology, started to up the pixel counts on their wafers. Just remember the Mamiya DF vs 1DsIII fight... It is interesting to see that the MF industry stayed with CCD technology and did not jump into the CMOS kindergarten fest. Many posts above show why, and skin tones/wavelength interpretation seems to be the key. The Leica M8 (or even the old Digilux) outputs very pleasant skin tones, better than Nikon or Canon CMOS-based DSLRs. The visual impact between CCD and CMOS should not be minimized, at all. Some had a look at Sigma with the Foveon because, even if it is a CMOS sensor, it outputs something close to CCD rendering, as does the D700 because of its big pixels. Some do not like the M240 output because (and it is understandable) of the lack of crispness or whatever, as if the M9 had a soul and the M240 did not. I, and I'm not alone, see what they mean.

Today, the 35mm industry tries all possible technical ways to not die. Incredible amounts of pixels, features and automation. D800/E, Sony A7r and so on... Wonderful tools compared to ancient technology, even very old MFDBs.

So, as some photographers said to me one day, if you want to future proof your work/passion, do not jump on the first little digital MF you find on the market. This is a hard way, because you will need to learn it the hard way and seek lenses + study the whole graphic chain behind it, thus engaging more costs. If you jump into MF, buy at least an IQ260 or H5D60 >> this is the step to go beyond (in the future) what the latest 35mm DSLRs offer and will offer soon. If you do not have it yet, you will need a good computer and a good screen + calibration tools (Mac + Eizo + X-Rite). There are no little steps today; you move in or not... if you are a pro or very passionate.

That is why "crop MFDBs" like the S2, Pentax and all the entry-level stuff (H4D40, P40) are a dead end by today's standards. A D800E + very good lenses (Zeiss) does a very good job and comes very close to the end result of those backs, period. It is a wonderful tool for those who can't invest in bigger imagers.

And here comes the software... With the plethora of pro software you can find around, you can render a LOT of things, even shaping a 35mm shot into a sort of MF-looking shot. That's the hard reality, truly. But it needs some practice and it is a time eater.

Now, if you really want to have a MF "look" and really want MF, nothing can stop you apart from yourself. You have the choice. If you come from a 5D or a D700 and want the MF look, you can find (even new) the old and reliable Mamiya DF. ISO 50/100 only, CCD look, not expensive (even the lenses), with the possibility to use it like a tech cam with bellows and adapters... It is not a killer MF, but it is simple to use. The sensor is 2x the size of 24x36 with the same density as the D700. It is a double D700... but with CCD color rendition.

You also have the film way, which is not dead at all but takes more time, for sure. Incredible DR, real rendering on a 6x7 frame, less expensive in the long run than an IQ260 system.

The second option, the most pragmatic and future proof, is the D800E + a plethora of good lenses (135 f/2 ZF.2, 21 f/2.8 ZF.2 or AF equivalents in the Nikkor line-up, + screen/computer). If you do not like the skin tones out of this camera, try to find someone who can teach you how to improve that in post quickly.

The third... the film way, the real MF. You can rent a Voigtländer Bessa III 667 (or 667W) or its Fuji equivalent and burn a roll or two with it. Scan it with an Epson V750 and see for yourself. You can buy a used Rolleiflex Hy6 Mod2, because you will have the possibility to plug an MFDB onto it later. Absolutely awesome camera and lenses, very underestimated.

You have the choice. But for a real MF venture, $15k is close to nothing (especially if you face some mechanical problems and need to pay the bills; this is why I don't speak about Hassy...).

One corollary, I believe, is that if any channel on a modern dSLR is in the last hard stop, you will have imprecise color rendering. This implies that "conventional" matrix profiling by test chart will always fail, that matrix profiles should be determined with at least one stop *under* exposure, that art repro should be done at one stop under, and that LUTs should be used for any normal exposures, which is certainly a datapoint of *practical* importance.
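A toy numerical example of why a 3x3 matrix profile breaks down once a channel clips (the matrix and raw values are invented for illustration, not any camera's actual profile):

```python
import numpy as np

# Invented camera-to-working-space matrix and raw triple, for illustration only.
M = np.array([[ 1.8, -0.5, -0.3],
              [-0.4,  1.6, -0.2],
              [ 0.0, -0.6,  1.6]])

true_raw = np.array([1.4, 0.9, 0.5])   # what the highlight "should" have recorded
clipped  = np.minimum(true_raw, 1.0)   # red channel ran into the hard stop

print("matrix on true raw:   ", M @ true_raw)
print("matrix on clipped raw:", M @ clipped)
# The clipped result is not just darker: its channel ratios (the hue) shift,
# which no exposure scaling afterwards can undo.
```

Since the matrix is linear, it can only be correct when its input is; once one channel saturates, the output hue is wrong in a way a global curve cannot repair.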

Of course, we don't know exactly what the firmware is doing in re-encoding the "raw" data, or how the "waste gates" are tuned by the firmware; my experience with my D4 leads me to believe that the control level is set in firmware.

As the author of the very nice RawDigger, Iliah Borg will probably have more realistic opinions than me, but with no disrespect intended, I don't think that RawDigger can really "see" the hardware, only the virtual presentation, and some conjectures about the non-documented features. It is a tool for the photographer, more than for the engineer.

An actual engineer from one of the big companies might speak out, but for some reason they all think that disguising their hardware structure will disguise its failings, rather than publishing them so software can be developed to amend them. Maybe someone here could get access to the netlist dump from a reverse-engineering firm, in which case a couple of weeks' hard work would probably tell us what the sensor really does.

Edmund

PS. The Magic Lantern guys might have more experimental data, as they have explored setting the various on-chip registers.

Maxmax published a measurement they made on the Canon 40D, and there certainly seems to be some highlight compression going on.

> if any channel on a modern dSLR is in the last hard stop, you will have imprecise color rendering. This implies that "conventional" matrix profiling by test chart will always fail, that matrix profiles should be determined with at least one stop *under* exposure, that art repro should be done at one stop under, and that LUTs should be used for any normal exposures, which is certainly a datapoint of *practical* importance.

Technically, the metering point of modern cameras is shifted downwards, "overestimating" the sensitivity. That is done not just to leave more headroom to preserve highlights, but also because of the linearity issues in the last 1/3 to full stop in the highlights. Rather than overload this response, see the summaries presented at http://harvestimaging.com/blog/?p=1238 and http://harvestimaging.com/blog/?p=1249 . Trusting the metering system when shooting a profiling target is a good start. Generally, because the noise in shadows is relatively low, there is no penalty in underexposing profiling target shots by about 1 EV compared to "ETTR". One may want to experiment with exposure. RawDigger allows creating compensated (normalized) CGATS files so that the data appears properly exposed to the profiling tool. Normalization and white balance are done with great care in RawDigger, using floating point and proper rounding techniques.
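The normalization step amounts to a linear floating-point rescale of the black-subtracted data before it reaches the profiling tool. A minimal sketch of the idea (the function name, patch values and black level are hypothetical, not RawDigger's actual code):

```python
import numpy as np

# Hypothetical sketch: compensate deliberately underexposed profiling-target
# patches back to nominal exposure in floating point, rounding only once.
def normalize_patches(raw_means, black_level, ev_compensation):
    data = np.asarray(raw_means, dtype=np.float64) - black_level
    data *= 2.0 ** ev_compensation   # +1 EV doubles the linear values
    return np.round(data, 6)

patch_means = [512.0, 1024.0, 2048.0]   # made-up patch averages, shot 1 EV under
print(normalize_patches(patch_means, black_level=64.0, ev_compensation=1.0))
```

Keeping the whole pipeline in floating point and rounding once at the end avoids the cumulative quantization errors that repeated integer scaling would introduce.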

In practice it is also important to use flat-field normalization and good light sources for profiling. Flat-field normalization helps with light and white balance variations across the target, but it can't help against a poor spectrum from the light sources. Halogen lights with daylight filtration work well; HMI not so well.

> I don't think that RawDigger can really "see" the hardware

Quite so. All RD can do is analyze raw data as recorded. That is far from the sensor output; I know this well, as I work with sensors directly too. And yes, RD helps with guesswork about sensor properties. The accuracy of such guesswork depends on the skills and knowledge of the user.

And for an additional fee, you can provide a camera maker a private version that shows both raw and "Raw"

I reposted the Unprocessed images and raw histograms from P45+ and Alpha 99.

The unprocessed image from the A99 is much less dark. These are of course different images although taken at the same place at the same time. But I feel that the P45+ images exposed for clouds are generally darker than Alpha 99 files similarly exposed. I also feel the Alpha 99 files have cleaner shadow detail.

For photographers' purposes, "Subtract Black" should be left on Auto in most cases (only for cameras that are not yet officially supported by RawDigger may manual intervention be necessary).

The rest depends on the purpose and the raw converter of choice. Most raw converters do not use the ARW hack and do use the linearization curve; so if the purpose is evaluating how a converter will "see" the data, set the Preferences not to apply the ARW hack and to use the curve.

For raw converter developers it is different. It is also different if one is going to use RawDigger as a data exporting tool for raw data and sensor analysis (like how many bits are used to represent one "colour", noise analysis, etc.).

I have to agree with the folks who are suggesting to FIRST look into your post processing. Take a different approach to it. Also, I think you underestimate the "negative" impact the AA filter has. I am surprised you didn't get a D800E? (I know, the moiré.) I don't do a lot of fabric, and am willing to do the post if I come across the issue. In 10 years it has happened a handful of times, and I was easily able to manage.

Hi Phil,

As I mentioned in one of the previous posts, I did change my post-processing workflow and got better results from the D800 files, but they aren't there yet. I have processed my files, files from others, files from Canon DSLRs (all under various lighting setups), and even looked at the finished work from other photographers, and none of them are devoid of that plasticky look. The samples from the Aptus II backs that I have worked on, however, get it right straight after import. I like that.

As I also mentioned in the previous post, I re-processed some very old files from my D70s and they have this wonderful tonality too that the D800 files lack.

I didn't get a D800E because at the time I bought mine, there was absolutely no stock of the E model anywhere. The dealers wouldn't even accept a deposit to reserve a unit because they had no idea when stock would arrive either.

Hi there,

For my purposes and shooting style, the files from an Aptus II 7/8 are much better than what my D800 offers, which was the original point. Also, as I have mentioned in several previous posts, I have done many things in post that improved the skin tones since then, but they are still nowhere near as natural looking as files from MF.

I do have a film MF camera, as I mentioned earlier. A Bronica ETRSi. However, as much as I like the Portra 160, shooting film is just not a permanent solution for the type of work I do.

Manual lenses on the D800 are not a solution for the model shoots I do either. Moreover, manual focusing with the D800 viewfinder is a major pain in the ass. Manual focusing on my Bronica is a joy.

I hope you understand and respect that different people have differing needs and for my particular need, the 33/40MP Leaf backs are perfect. I am not looking at getting into a spec war with other DSLR shooters or to show that I have 60 MP while they have "Only 36" or whatever. Just looking for the tool that fits my artistic vision best.

As for the rest of the posts, I do appreciate the effort you put in, folks, but all this MTFing and RawDigging really goes over my head. All I know is that the DSLR files really butcher subtle tonal differences while the MF files don't. I have managed to make the DSLR files look better, but they are not there yet. Instead of fighting the files with every shoot I do, I just want to use a system that gets it right out of the box.