What you are describing has more to do with dynamic range and signal-to-noise performance in the shadows (where read noise dominates), so differences between sensors and support electronics can play a large role here.

IMHO, the real difference between 12- and 14-bit ADC quantization will manifest itself in the highlight tonality, which will be much smoother and will hold more detail. The attached charts (of a scanline from a synthesized stepwedge with Poisson noise added) should illustrate that it becomes increasingly difficult to discern the differences between the relatively coarse steps in brightness as we lower the number of bits. It becomes especially troublesome after a gamma 1/2.2 adjustment, in particular in the highlight regions. More subtle detail with smaller brightness differences will lose all definition in the lower-bit versions even faster.
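Charts like these can be approximated with a short simulation (a sketch, not Bart's actual code; the 60,000 e- full-well value is an arbitrary assumption): a step wedge gets Poisson (photon) noise at the sensor, is quantized by a linear ADC at the chosen bit depth, then pushed through a gamma 1/2.2 curve.

```python
import numpy as np

rng = np.random.default_rng(42)

def quantized_wedge(bits, full_well=60_000, steps=16, samples=256):
    """One scanline of a step wedge: Poisson (photon shot) noise at
    the sensor, then quantization by a linear ADC with `bits` bits."""
    levels = np.linspace(0.05, 1.0, steps) * full_well     # electrons
    photons = rng.poisson(np.repeat(levels, samples))      # shot noise
    gain = (2 ** bits - 1) / full_well                     # DN per electron
    dn = np.clip(np.round(photons * gain), 0, 2 ** bits - 1)
    return dn / (2 ** bits - 1)                            # normalized 0..1

# A gamma 1/2.2 display adjustment compresses highlight differences,
# which makes the coarser 12-bit steps harder to tell apart up there.
display_12 = quantized_wedge(12) ** (1 / 2.2)
display_14 = quantized_wedge(14) ** (1 / 2.2)
```

Plotting `display_12` and `display_14` side by side reproduces the effect described above: the 14-bit version resolves roughly four times as many distinct code values per brightness step.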

Also remember that this originates at the Raw level, and it will therefore affect the demosaicing accuracy as well.

But this is getting a bit off topic, so I'll stop here. It can of course be discussed further elsewhere, should there be a need to.

Cheers, Bart

Hi Bart,

New thread started. The comparisons I showed in the previous thread at http://www.luminous-landscape.com/forum/index.php?topic=74637.20 are from the same camera with the same sensor and support electronics, so it's reasonable to suppose that the differences in dynamic range that are very obvious near the limits of DR are entirely due to the differences between 12 bit and 14 bit processing.

That there are also differences in highlight detail and tonality may be the case as your graphs imply. But the question from Ben was, are there any real-world differences that would in practice be noticed.

When I tested the DR of the D7000 a couple of years ago, there were no obvious differences above the shadow level between 12 bit and 14 bit that were noticeable in the test chart I had taped to the wall. But I did notice that the amount of adjustment required in ACR, raising darks and shadows with the Tone Curve to achieve the same overall effect, was different. The 12-bit image, with the same adjustments in ACR, was noticeably darker, particularly in the deepest shadows. See the attached comparison, which I've just reprocessed with ACR in CS5 using identical adjustments (but no sharpening or noise reduction, of course).

The adjustments were: +4 Exposure, 100 Brightness, zero Blacks and zero Contrast, and +100 'Darks' in the Tone Curve. The histogram in Levels, after conversion, shows obvious clipping of blacks in the 12-bit image, but not in the 14-bit image.

I also find it significant that, when comparing the D7000 on the DxOMark site with two other camera models that have 24 MP sensors, the D3200 and the NEX-7, both of which have only 12-bit processing, the D7000 has about a 1/2- to 2/3-stop DR advantage at base ISO, yet SNR at 18% is about the same for all three cameras.

It seems reasonable to presume that the DR of all three cameras at base ISO would be about the same if the 24mp sensors had 14 bit processing.

Could you share the DR test target you use? I know it comes from Jonathan Wienke, who used to post on these forums but disappeared. I hope he is doing well.

Regarding 12 vs. 14 bits, I would say that if the DR per pixel exceeds 12 EV, then you need more than 12 bits to represent it.
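As a back-of-the-envelope check of that claim: a linear B-bit ADC spans at most about B stops between its smallest nonzero code and full scale, so per-pixel DR beyond 12 EV cannot be fully encoded in 12 bits.

```python
import math

def max_encodable_stops(bits):
    """Stops between 1 DN and full scale for a linear ADC:
    log2(2**bits - 1), which is just under `bits` stops."""
    return math.log2(2 ** bits - 1)

stops_12 = round(max_encodable_stops(12), 2)  # → 12.0
stops_14 = round(max_encodable_stops(14), 2)  # → 14.0
```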

My latest camera is 14 bit, but I have failed to find a real-world image that actually utilizes more than 12 bits in a single exposure, and that includes shooting a Stouffer wedge. The problem is getting the environment dark enough, shielding out light leaks and veiling flare. The first time I could see more than 12 EV of DR was shooting a repro of a Velvia slide. In that case the advantage of my Sony Alpha 99 over my Sony Alpha 900 could be made visible.

My take right now is that most images are limited in DR by veiling flare.

Best regards, Erik

Hi Erik,

I'll try to locate the original file that Jonathan Wienke made available some years ago, but give me time. I'm not terribly well organised.

I agree that in most situations this advantage of 14 bit may not be at all significant. However, one situation where it might be is when using the camera at base ISO for all situations, and underexposing instead of raising ISO. If one were to underexpose 6 or 7 stops instead of raising ISO to 6400 or 12,800, in order to retain full detail in the highlights, and perhaps even the specular highlights, then the advantage of that 14-bit processing would probably be very apparent.
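A rough count of the DN codes left for the brightest stop of the scene illustrates why deep underexposure favours the deeper ADC (a sketch that ignores read noise and the black level):

```python
def codes_in_top_stop(bits, stops_under):
    """Distinct DN codes spanning the brightest stop of the scene
    when it sits `stops_under` below ADC full scale (read noise and
    black level ignored -- a rough illustration only)."""
    top = 2 ** bits / 2 ** stops_under   # DN at scene 'white'
    return int(top - top / 2)            # codes between white and -1 EV

# Underexposing 7 stops at base ISO:
codes_12 = codes_in_top_stop(12, 7)   # → 16 codes
codes_14 = codes_in_top_stop(14, 7)   # → 64 codes
```

The 14-bit file keeps four times as many codes for the scene's brightest stop, which is exactly where the tonality of a heavily pushed file gets rendered.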

Just out of curiosity, I darkened the reprocessed 14-bit image by increasing contrast and moving the middle 'levels' slider to the right in Photoshop. I then checked the histograms of both images. The 12-bit image had clipped blacks from the beginning. After darkening and increasing the contrast of only the 14-bit image, it is still not clipped. See the attached image.

The Poisson noise that you added, is it to simulate the effect of photon noise?

Hi Francisco,

That's correct, although it would be shot noise at an ADC unity-gain level (1 photon equalling 1 ADU or DN), hence the noise has a Poisson distribution with a standard deviation equal to the square root of the signal level.
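This is easy to check numerically (a sketch with NumPy; at unity gain the Poisson parameter equals the signal in DN):

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_noise_stats(signal_dn, n=200_000):
    """Sample Poisson shot noise at unity gain and return the
    empirical mean and standard deviation in DN."""
    samples = rng.poisson(signal_dn, size=n)
    return samples.mean(), samples.std()

mean, std = shot_noise_stats(10_000)
# mean ≈ 10000 (the signal itself), std ≈ sqrt(10000) = 100
```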

Here is a dark part of my repro shoots, the Alpha 99 (14 bit, new) on the left and the Alpha 900 (12 bit, old) on the right. I also include the RawDigger display for the Alpha 99 image. As you can see, the green channel is clipped, and the pixel count falls below 500 at a level of around 4-5 DN. So the dynamic range is something like 16000/4 -> 4000 -> 12 EV. I chose the value 500 arbitrarily.

Even this image is limited by surrounding light, light leaks and veiling flare, I think. Ah yes, the Dmax of Velvia is somewhere between 3.6 and 4.0; one EV is about 0.3 in density, so the density range of Velvia is also around 12 stops.
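The density arithmetic can be checked quickly, since one stop corresponds to log10(2) ≈ 0.301 in density:

```python
import math

def density_to_stops(density_range):
    """Convert a film density range to stops (1 stop = log10(2) density)."""
    return density_range / math.log10(2)

velvia_low = round(density_to_stops(3.6), 1)   # → 12.0 stops
velvia_high = round(density_to_stops(4.0), 1)  # → 13.3 stops
```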

Hi Bart, New thread started. The comparisons I showed in the previous thread at http://www.luminous-landscape.com/forum/index.php?topic=74637.20 are from the same camera with the same sensor and support electronics, so it's reasonable to suppose that the differences in dynamic range that are very obvious near the limits of DR are entirely due to the differences between 12 bit and 14 bit processing.

That there are also differences in highlight detail and tonality may be the case as your graphs imply. But the question from Ben was, are there any real-world differences that would in practice be noticed.

Hi Ray,

That's correct. One of the first things I noticed when I upgraded from my 1Ds2 to my 1Ds3 was the different/improved sensation of highlight rendering. The dynamic range of the 1Ds3 as I measured it (based on differences between equal-exposure pairs in the Raw data, before demosaicing) was slightly lower than that of the 1Ds2, but the rendered image quality was better. That's why I mentioned it.

I never made side-by-side comparisons of an identical scene, because I got my 1Ds3 for creating 21 MP images instead of 16 MP ones (and for live view, and for AF micro-adjustments), not for comparisons. It just made sense to me that more bits would allow rendering of smoother gradients and such.

Cheers, Bart

Ah! I'm sure you'll agree, Bart, that if one is to do a scientific test of the differences in image quality between 14-bit and 12-bit in-camera processing, one should keep everything else the same: same camera and sensor, same scene or target, same lighting conditions, same lens, same aperture, same shutter speed and ISO, and same processing in the RAW converter. The only change between the shots should be the 12-bit and 14-bit settings in the camera.

Hi Bart,

Would I be correct in assuming that the Canon 1Ds3 does not give one the option of using 12-bit processing? I didn't realise that. The differences in the RAW file size on the D7000, for the images I compared in 12-bit and 14-bit mode, are 11.1 MB for the 12-bit and 15.3 MB for the 14-bit.

I believe it used to be the case on earlier Nikon models that the 12-bit setting allowed for a faster frame rate. This is no longer the case, as I understand, but I haven't tested it. However, I presume that the buffer will fill up sooner when shooting in 14-bit mode, and/or uncompressed as opposed to 'losslessly compressed'.

If one's camera doesn't have the option to shoot in 12-bit mode, then the question of whether or not 14-bit provides visibly better image quality than 12-bit doesn't apply. One can't test it. It's not relevant.

A more relevant question may be why Canon does not provide such an option in the menu, to use 12-bit. The reason may be that customers would become confused because they would not be able to discern any differences in any circumstances, if it is true that such differences are only apparent in the deepest shadows, as my tests suggest.

As we all know, Nikon sensors have about 2 stops better dynamic range than the Canon equivalents.

I should also add a correction to the ACR adjustments I quoted for my comparison images above. Contrast was not zero, but -50, and the Tone Curve contrast was linear.

Hi Bart,Would I be correct in assuming that the Canon 1Ds3 does not give one the option of using 12-bit processing?

Correct. It doesn't allow one to compromise the Raw data. Canon Raws traditionally also do not clip the read-noise floor, which makes them suitable (if not preferred) for astronomical imaging.

Quote

A more relevant question may be why Canon does not provide such an option in the menu, to use 12-bit. The reason may be that customers would become confused because they would not be able to discern any differences in any circumstances, if it is true that such differences are only apparent in the deepest shadows, as my tests suggest.

Why handicap the captured data, exchanging data for file size? That's what e.g. JPEG output is for... As those involved in Astronomical imaging (and there is a lot to be learned from them) can attest, there's also a lot of signal buried in the noise floor ...

Quote

As we all know, Nikon sensors have about 2 stops better dynamic range than the Canon equivalents.

Which is a completely different issue from what's being discussed here.

Hi Bart,

I agree personally. I see no point in handicapping data for the sake of a slightly smaller file size. However, one could also ask, why use the 'uncompressed' setting for RAW instead of 'lossless compression'. If the compression is lossless, there can be no image-quality advantage using the larger 'uncompressed' files. However, there may be a speed advantage during processing because the files are already uncompressed. Such time-saving could be seen as an advantage to those who process thousands of images a day. I don't know. I'm just guessing here.

Some people find no advantage in the additional DR of Nikon cameras because they simply don't think they would ever use it. Likewise, some people are quite satisfied with the quality of out-of-camera JPEGs. If you were to do a survey of DSLR usage, I'd bet my money the results would show that most owners of DSLRs shoot in JPEG mode most of the time, or even all of the time.

Quote

As those involved in Astronomical imaging (and there is a lot to be learned from them) can attest, there's also a lot of signal buried in the noise floor ...

I'm sure there is, and I know it used to be the case that Astronomers preferred to use Canon cameras instead of Nikon because Nikon had a habit of clipping the deepest shadows so it was impossible to retrieve certain detail even with sophisticated software.

Now that Nikon have around 2 stops greater DR than Canon, I would assume that Astronomers would now prefer to use the latest Nikon DSLRs. Is this the case, Bart, and if not, why not?

Quote

Which is a completely differeent issue from what's being discussed here.

Surely it's not a different issue if the benefits of 14 bits, as opposed to 12 bits, lie largely within those two deepest stops of DR.

You make a theoretical case that in-camera 14-bit processing improves highlight detail, but you can't demonstrate it in practice. If the theoretical improvement in detail is barely noticeable at 600% enlargement on screen, for example, it would be of little concern to most photographers, but could be of interest to Astrophotographers.

I see no point in handicapping data for the sake of a slightly smaller file size. However, one could also ask, why use the 'uncompressed' setting for RAW instead of 'lossless compression'. If the compression is lossless, there can be no image-quality advantage using the larger 'uncompressed' files. However, there may be a speed advantage during processing because the files are already uncompressed. Such time-saving could be seen as an advantage to those who process thousands of images a day. I don't know. I'm just guessing here.

AFAIK, a number of Raw file formats can use at least a simple kind of lossless compression, which is fast and can be done without complicated, power- and time-consuming calculations. One would possibly need to compare the variable Raw file sizes, after subtracting the size of the embedded thumbnail(s), to figure out whether compression took place.

Quote

I'm sure there is, and I know it used to be the case that Astronomers preferred to use Canon cameras instead of Nikon because Nikon had a habit of clipping the deepest shadows so it was impossible to retrieve certain detail even with sophisticated software.

Now that Nikon have around 2 stops greater DR than Canon, I would assume that Astronomers would now prefer to use the latest Nikon DSLRs. Is this the case, Bart, and if not, why not?

While the extra dynamic range helps (to avoid clipped stars and thus preserve their color, yet keep the noise floor low), it's also the 'quality' of the noise (floor) that matters. It requires both sides of a Gaussian noise distribution to get good cancellation by averaging multiple samples. But averaging is not the only way of reducing the noise floor, so a combination of other methods may work better on Nikon Raws. I do not have statistics on the split by brand.

Quote

Surely it's not a different issue if the benefits of 14 bits, as opposed to 12 bits, lie largely within those two deepest stops of DR.

Depending on the type of astrophotography, it may not be the dynamic range as such that's the issue, but rather the S/N ratio at all levels (not just the deepest shadows). One can stack multiple exposures of faint stars and nebulae (exposure time limited by dark counts building up at longer exposure times, and an increasing risk of tracking errors and airplanes), and add them together to increase the relevant luminosity. The addition raises the signal level linearly with the number of frames (assuming perfect registration of the sub-exposures), while the random (shot) noise grows only as the square root of that number, so the signal-to-noise ratio improves with each added shot.
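The square-root behaviour of stacking can be sketched as follows (assuming shot-noise-limited frames and perfect registration):

```python
import numpy as np

rng = np.random.default_rng(1)

def stacked_snr(signal_per_frame, n_frames, trials=20_000):
    """Empirical SNR of n_frames summed shot-noise-limited exposures:
    the signal grows as N while the noise grows only as sqrt(N)."""
    stacks = rng.poisson(signal_per_frame, size=(trials, n_frames)).sum(axis=1)
    return stacks.mean() / stacks.std()

snr_1 = stacked_snr(25, 1)    # ≈ sqrt(25) = 5
snr_16 = stacked_snr(25, 16)  # ≈ sqrt(16 * 25) = 20, i.e. 4x better
```

Stacking 16 frames thus buys about 4x (two stops) of SNR, which is why astrophotographers accumulate many sub-exposures rather than relying on one long one.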

Quote

You make a theoretical case that in-camera 14-bit processing improves highlight detail, but you can't demonstrate it in practice.

It's not theoretical, and I demonstrated it in the other thread with a stepwedge scanline example. What's more, the improved highlights were apparent to me when I started using the 1Ds Mark III, but then I may be more sensitive to the improvement than the next guy. It also matters whether one downsamples for output, or requires full resolution for large-format output or when significant crops are inevitable. It's just that the Canons do not allow one to reduce the bit depth used in the ADC conversion process, which prevents me from demonstrating it with the same camera. Besides, nobody is paying me to do such a comparison, so I have to prioritize how I spend my time.

Hi Bart,

Without trying to cause any offense, I have to say that I am having some difficulty in understanding your line of reasoning here. On the one hand, when I demonstrate, using the same camera, how much more detail exists in the 13th and 14th stops of DR with the camera in 14-bit mode, you write, "....differences between sensors and support electronics can play a large role here".

Since you don't have a camera which provides both a 12-bit mode and a 14-bit option, and since you haven't provided any real-world images, shown at 100%, or 200%, 400% or even 50% enlargement, how can you claim that you have demonstrated in practice that 14-bit processing improves highlight detail?

What you've provided, in the other thread you refer to, are charts of a scanline from a synthesized stepwedge with Poisson noise added. That seems very theoretical to me.

I recall some time ago there was a discussion about ETTR in which Emil Martinec criticised certain explanations for the benefits of ETTR, which made reference to the fact that half of the available levels, whether from 12 bit or 14 bit processing, were applied to the brightest stop in the dynamic range.

Emil, who surely knows a thing or two, claimed that the additional number of levels made available in the brighter stops, as a result of ETTR, were irrelevant because the sheer number of such levels in highlight areas was far above what the eye is capable of discerning. The real purpose of ETTR, Emil claimed, was to increase Signal-to-Noise so that noise which would have been apparent in an underexposed shot, would no longer be apparent, or would be less apparent in an ETTR shot.

That seemed reasonable to me. However, Emil went even further with this line of reasoning and claimed that 14-bit processing for images which had less than 14 stops of DR, served no purpose.

I disagreed because I'd seen such differences, and felt they were very real. As I recall, when I posted similar images to those comparisons I've posted at the top of this thread, Emil dismissed both 12-bit and 14-bit images as being so bad that no-one would include such areas in a photograph. He had a point, and I think the discussion ended there.

However, upon reflection, it has occurred to me that such additional detail, however degraded, could prove to be very useful either scientifically or forensically.

Imagine a scenario where I have taken just a single shot of a high-contrast landscape containing trees and dark undergrowth. As I process the image in Photoshop, raising the deepest shadows experimentally to see if they contain detail sufficiently interesting to include in the final image, I notice what appears to be a very strange, feathered bird in the undergrowth, which I didn't notice at the time I took the shot because I was too far away. So I crop the area, clean up the image and enhance the detail to find out what species of bird this is, just out of curiosity.

Fortunately, I'm using a Nikon camera, so the level of detail should be sufficient to identify the bird. If I'd been using a Canon camera, then forget it! If I'd been using the Nikon in 12-bit mode, the job of identifying the bird would have been more difficult, but perhaps still possible.

As it is, my camera was set to 14-bit processing, so there's sufficient detail for identification purposes. After extensive searching and consultation with Biologists and Naturalists, I find that I have discovered a new species of bird that is subsequently named after me. I become famous.

Or, if you prefer an alternative scenario, I've rediscovered a species which was long considered to be extinct, the Long-toed, white-faced, yellow-bellied Honeyeater.