14-bit vs 12-bit RAW – Can You Tell The Difference?

12-bit image files can store up to 68 billion different shades of color. 14-bit image files store up to 4 trillion shades. That’s an enormous difference, so shouldn’t we always choose 14-bit when shooting RAW? Here’s a landscape I snapped, then found out later I had shot it in 12-bit RAW. Better toss this one out, right?
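If you want to check those headline numbers yourself, the arithmetic is just the per-channel level count cubed. Here’s a quick Python sketch (it treats each pixel as a full RGB triple, which glosses over Bayer demosaicing, but the math behind the marketing figures is the same):

```python
# Back-of-the-envelope check of the shade counts above: the number of
# tonal levels per channel, cubed across three color channels.
def total_shades(bits_per_channel):
    levels = 2 ** bits_per_channel   # distinct tonal levels per channel
    return levels ** 3               # every possible R/G/B combination

print(f"12-bit: {total_shades(12):,} shades")  # ~68.7 billion
print(f"14-bit: {total_shades(14):,} shades")  # ~4.4 trillion
```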

Nikon D810 + 24-120mm f/4 @ 100mm, ISO 100, 1/250, f/8.0

Depending on which class you took at the University of Google, the human eye is only capable of distinguishing somewhere between 2.5 and 16.8 million different shades of color. If that’s the case, wouldn’t 12-bit be plenty? Even 8-bit JPEGs can render 16.8 million colors.

There are many upsides to shooting 12-bit instead of 14-bit. The files are smaller, so your camera’s buffer doesn’t fill up as fast, allowing longer action sequences to be caught before buffering out. 12-bit files take up less space on your memory cards – great if you’re on vacation without the ability to download your images every night. You can save money because you don’t need to purchase as many gigs of storage. Likewise, 12-bit hogs less space on your drives at home, and the same number of 12-bit files loads faster than their 14-bit equivalents would. Lastly, some cameras, such as Nikon’s D7100 and D7200, achieve higher burst rates in 12-bit than in 14-bit.
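To put rough numbers on the size savings, here’s a back-of-the-envelope sketch for a hypothetical 36.3 MP, D810-class sensor. Real NEF files add headers, embedded previews, and optional compression, so treat these strictly as ballpark uncompressed figures:

```python
# Rough estimate of uncompressed raw data size at two bit depths.
# Assumes one sample per photosite and no file overhead -- a toy model,
# not actual NEF sizes.
MEGAPIXEL = 1_000_000

def raw_megabytes(megapixels, bits_per_pixel):
    bits = megapixels * MEGAPIXEL * bits_per_pixel
    return bits / 8 / 1_000_000  # bits -> bytes -> MB

size12 = raw_megabytes(36.3, 12)
size14 = raw_megabytes(36.3, 14)
print(f"12-bit: ~{size12:.0f} MB, 14-bit: ~{size14:.0f} MB")
print(f"savings: {(1 - size12 / size14):.0%}")
```

The ratio is simply 12/14, so dropping to 12-bit buys you roughly a one-seventh reduction in raw data, whatever the sensor.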

So if the human eye can’t discern the difference and 12-bit has so many advantages, why doesn’t everyone just shoot in 12-bit? For the same reason brides want a real diamond, not a cubic zirconia. Admiring an engagement ring from a normal viewing distance, few people can tell the difference, but give that ring to a lab technician with a refractometer and they can distinguish the two (Note to readers: I will cut off your shutter finger if you forward this to my fiancé).

When given the choice I’ve always shot 14-bit, because as an American I know bigger is better, and besides, it’s my constitutional right to fritter away as many redundant bytes as I please. I went to the internet (I sense trouble coming) to validate my feelings and found a lab test where someone shot a lens chart with a DX body, 4 stops underexposed, then zoomed in to 200% – and sure enough, you could see a difference. Then I checked another site where test shots showed no difference. This was getting confusing for my puny brain, so I decided to field-test 12-bit versus 14-bit to see if I could tell any difference. I started out with landscapes – if anyone is picky about file quality, it’s us landscape geeks. Bear in mind that these tests are in the field, not the lab, and though I tried my best to keep all parameters the same, there may be slight variations due to Nature and/or the tolerances to which the camera is built. I shot with a D810.

Here’s a nice yucca shot in 14-bit and properly exposed.

Nikon D810 + 105mm f/2.8 @ 105mm, ISO 64, 1/2, f/22.0

And here’s the same yucca in 12-bit.

Nikon D810 + 105mm f/2.8 @ 105mm, ISO 64, 1/2, f/22.0

I can’t tell a difference.

Next I tried a shot that would test the camera’s dynamic range, with sunlit clouds and a shadowed foreground. I exposed so as not to blow out the highlights, which left the shadows pretty underexposed and required me to pull the shadow detail back up in post. Here’s the original 14-bit file with no post-processing. (I won’t bore you with the 12-bit original – it looks identical.)

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 64, 1/100, f/10.0

And here’s the 14-bit with the shadows recovered.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 64, 1/100, f/10.0

And the 12-bit.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 64, 1/100, f/10.0

Here’s 14-bit cropped in on some shadow detail.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 64, 1/100, f/10.0

And the 12-bit.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 64, 1/100, f/10.0

Oh crop, there’s no difference I can tell. I showed this to a photographer with more critical eyes than mine and she couldn’t tell a difference either. Maybe the difference is only visible in a final print. So I printed the two versions and still couldn’t tell a difference.

So far my tests were with well-exposed shots. It stands to reason that if I were able to tell a difference, it would be in the dark values: when you’re courting the left side of the histogram, you’re dealing with a lot less raw information. After all, in dark conditions such as underexposure, fewer photons are being counted at each pixel site than when it is bright. (See Spencer Cox’s article about the theoretical advantages of having more data to work with by exposing to the right side of the histogram.)
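That “less raw information on the left” point can be made concrete with a little arithmetic. A linear sensor devotes half of all its code values to the brightest stop, and each stop down gets half again – a rough sketch, ignoring noise and any in-camera tone compression:

```python
# How a linear raw file allocates code values: the brightest stop gets
# half of them, the next stop half of the remainder, and so on down.
def levels_in_stop(bits, stops_below_clipping):
    return (2 ** bits) // (2 ** stops_below_clipping)

for stop in range(1, 6):
    print(f"stop {stop} below clipping: "
          f"{levels_in_stop(12, stop):>5} levels (12-bit) vs "
          f"{levels_in_stop(14, stop):>5} levels (14-bit)")
```

Five stops down, a 12-bit file is working with only 128 distinct levels, which is why heavy shadow pushes are where any difference should first appear.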

I was driving through Northern Arizona late one afternoon and there were some clouds so I had to detour to The Mittens. Sadly the clouds started shrinking to where I would have a mediocre sunset shot. Rather than pack up and leave, I thought “ah ha”, time to run some more 12-bit versus 14-bit tests. But this time I’ll bracket the exposures and see what happens.

Rather than go through 30 samples of original shots, tweaked-in-post shots, and tweaked-and-cropped-to-100% shots, let’s fast-forward to the most underexposed sample. First the unprocessed 14-bit.

Nikon D810 + 24-120mm f/4 @ 75mm, ISO 64, 1/400, f/5.0

And 12-bit unprocessed.

Nikon D810 + 24-120mm f/4 @ 75mm, ISO 64, 1/400, f/5.0

14-bit tweaked and cropped to 100%.

Nikon D810 + 24-120mm f/4 @ 75mm, ISO 64, 1/400, f/5.0

12-bit tweaked and cropped to 100%.

Nikon D810 + 24-120mm f/4 @ 75mm, ISO 64, 1/400, f/5.0

What’s a guy got to do to see a difference? These are virtually identical, especially given that this was natural light, which isn’t always even, and there could be slight exposure variations due to tiny shutter-speed and aperture differences. If I can tell any difference at all, it might be a teensy bit more contrast in the tweaked 12-bit samples.

The above tweaking was from letting Lightroom auto-tone the images. That’s not really the best presentation of this file, so I went in, tweaked more to my liking, and applied the exact same parameters to each file.

14-bit tweaked.

Nikon D810 + 24-120mm f/4 @ 75mm, ISO 64, 1/400, f/5.0

And 12-bit tweaked.

Nikon D810 + 24-120mm f/4 @ 75mm, ISO 64, 1/400, f/5.0

Squinting real hard I’m learning two things. First, I can’t tell the difference between these 12- and 14-bit images unless I look at the metadata. Second, my 24-120 Nikkor is pretty soft in the corners – I should have shot my 50mm.

You may have noticed all of these so far are at base ISO. Maybe I need to go to higher ISOs, where the camera will have to amplify the signal, and hence we might encounter some noticeable differences. I went inside to get low enough light.

Again, instead of running you through dozens of tedious samples, we’ll cut to the chase. These are at ISO 3200. The scene has a ridiculous dynamic range from sunlit bushes outside the window to deeply shadowed pillows inside. First the untweaked 14-bit.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/100, f/16.0

Now the untweaked 12-bit.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/100, f/16.0

The 12-bit looks a tad brighter. The following had identical tweaks, except that the exposure of the 14-bit was boosted a tad to match the 12-bit. Tweaked and cropped 14-bit.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/100, f/16.0

Tweaked and cropped 12-bit.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/100, f/16.0

Just maybe an eensy bit more contrast in the 12-bit, but I reckon if you printed these out, swapped them around, and showed them to me, I couldn’t pick out one from the other.

I guess we’ll have to get ridiculous and head to that special place where histograms go to die.

Here’s the 14-bit.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/3200, f/16.0

And the original histogram. Oh my, even Michael Moore doesn’t expose that far left.

And 12-bits of despair.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/3200, f/16.0

Again the 12-bit looks a tiny bit lighter, so this is perhaps a 12-bit/14-bit D810 thing, not a variation in light levels or shutter-speed/aperture tolerances.

14-bits cropped and tweaked.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/3200, f/16.0

12-bits cropped and tweaked with the same parameters.

Nikon D810 + 24-120mm f/4 @ 40mm, ISO 3200, 1/3200, f/16.0

At last, a difference I can see – the 12-bit file after extreme processing is having some trouble holding highlight detail. However, I can see a tiny bit more shadow detail in the 12-bit file. This might be due to the f/16 aperture not closing as far on that particular exposure, hence letting a few more photons reach the sensor. Or maybe not. Either way, both the 12-bit and 14-bit files look like they drank too much and puked all over themselves.

Let’s try again. Here’s a scene in Petrified Forest. I underexposed heavily and stopped way down, trying to get a small, tight sunstar, to see just how much I could retrieve from the shadows in post. (This would be a much better candidate for exposure blending, but we’re running a test here.)

Here are the untweaked files. First the 14-bit.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 125, 1/400, f/16.0

Now the 12-bit.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 125, 1/400, f/16.0

And now with identical post processing. -100 highlights, +100 shadows, +2.45 exposure, and various other tweaks.

The 14-bit.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 125, 1/400, f/16.0

And the 12-bit.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 125, 1/400, f/16.0

At last, I created a practical field test that shows a difference. The 12-bit version has an unpleasant greenish cast, while the 14-bit trends slightly magenta. It also looks like there is more shadow detail in the 12-bit. Let’s fix the color in the 12-bit shot, as seen below.

Nikon D810 + 16-35mm f/4 @ 16mm, ISO 125, 1/400, f/16.0

Voila. It looks great now, and I like the extra shadow-detail recovery. As I’ve already gone to +100 in the shadows, there are no further global adjustments I can give the 14-bit without blowing out my sunstar. I’d have to do local dodging and burning to match the two.

The takeaway I got from all this is that worrying about having 14-bit files instead of 12-bit is silly if you expose well, or even if you just don’t mess up too badly. Good post-processing can give results that make it hard if not impossible to distinguish between 12- and 14-bit files. I did these tests to mimic situations I might encounter. I encourage readers to do their own tests with the sort of subjects they shoot. Feel free to share your results in the comments section.

All said and done, will I switch to shooting 12-bit? Psychologically this tears me apart, knowing my files won’t be all they can be. Furthermore, I like to photograph birds. They have four cone types in their eyes versus our three, and hence far superior color vision – reportedly many times better than ours. What if I want to sell family portraits to this Mallard mom?

Nikon D810 + 800mm f/5.6 @ 800mm, ISO 1600, 1/800, f/16.0

Us humans can’t see the difference, but Mama Mallard sure will. Aw, those Mallards are pretty stingy anyway. I’ll just switch to 12-bit and increase my chance of getting the shot (through shameless spray and pray tactics) rather than getting a bigger file I can’t appreciate. And I can always switch back to 14-bit when conditions dictate I should – like the next time I’m shooting handheld candlelit test chart shots and forget to remove my 6-stop ND filter.


About John Sherman

John “Verm” Sherman is one of only 25,000 wildlife and nature photographers based out of Flagstaff, Arizona. In 2012 he was awarded Flagstaff Photography Center’s Emerging Artist of the Year award. He has since submerged into internet notoriety but comes up occasionally to contribute to Arizona Highways Magazine. Visit his website and blog at www.vermphoto.com.


Comments

1) Kasper

May 29, 2015 at 3:40 am

This will be camera dependent. What matters is how much analogue signal you get out of the sensor. If this signal corresponds to 11 bits, it doesn’t matter whether the A/D converter is 12-bit or 14-bit. In the old days some scanner manufacturers put 12-bit A/Ds in scanners that only had 7 bits of signal – a marketing tool. By the time they have 14 bits of analogue coming out of the sensor, they will already have switched to 16-bit A/D converters.

John, you’ve obviously missed the critical detail that in the future, when future-photographer-art-curator has biologically enhanced nano-bot digital eyes that have just been upgraded to the latest 84k, 14-bit resolution, that 14-bit mallard mamma will really pop with trillions of colors. They’ll probably think the 12-bit file is a B&W conversion. I shudder at how they will look down on you with their nano-pixel-peeping eyes and think, “If only John had shot in 14 bit all along… think of all the hundreds of millions of colors he selfishly has kept us from enjoying. What an asshat.” :D :D :D

I think you’ve been looking for problems in the wrong places. Bit-depth is about the number of steps available to describe colours, not dynamic range. Most of your tests have been about the latter, and as expected there is very little difference between the formats.

I’ve done similar tests and come to a similar conclusion as you, but I have seen differences. They are small but they are there. Shoot a flat sky, preferably flat clouds, in both formats. Then develop them pushing up the local contrast. A Nik plug-in like the Tonal Contrast tool in Color Efex Pro is merciless in revealing when you run out of bit depth. If you push hard enough you should see banding in both formats – but it should be more obvious and appear earlier with the 12 bit file.

The main problem with lack of bit depth is banding, not lack of shadow detail. Go hunting for that and you will find it.
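A toy version of this flat-sky test can be simulated in Python: quantize a near-flat gradient at each bit depth, push the contrast hard, and count the distinct tones that survive. This is synthetic data, not a substitute for the real-world test described above, but it shows why the 12-bit file should band first:

```python
# Simulate the flat-sky banding test: quantize a gentle gradient at a
# given raw bit depth, apply an extreme contrast push, and count how
# many distinct tones remain. Fewer tones over the same range means
# wider, more visible bands.
def distinct_tones_after_push(bits, span=0.02, gain=40.0):
    steps = 2 ** bits - 1
    sky = [0.5 + span * i / 999 for i in range(1000)]     # near-flat "sky"
    quantized = [round(v * steps) / steps for v in sky]   # raw quantization
    pushed = [q * gain for q in quantized]                # heavy contrast push
    return len(set(pushed))

print("12-bit:", distinct_tones_after_push(12), "distinct tones")
print("14-bit:", distinct_tones_after_push(14), "distinct tones")
```

With the same gentle gradient, the 14-bit file retains roughly four times as many distinct tones, so any posterization shows up later and more subtly.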

I’ve been looking for the banding and so far haven’t seen a significant difference in my D810 files. I was shooting a D7000 today in 12- and 14-bit to see if I could see it there. I bumped up the contrast in some of the images with both the tone curve and mid-tone clarity, but still saw little difference. Not to despair – I am still on a mission to find a way to see the difference in a practical field application. I was hoping the sunstar shots would display the banding, but they didn’t. I hope to have a follow-up soon, with some answers to just how much you can push the 12-bit before it breaks down.

Not all 12-bit lossy formats are the same. Sony’s crappy “raw” format uses only 1 out of every 32 possible tones in the bright region. Go shoot a flat cloudy sky and then, in post, spread that narrow spiky histogram as wide as you can: you’ll see fascinating, previously invisible tendril-like structures in the clouds from a lossless 14-bit Nikon file, but only a posterized muddle from Sony’s tone curve.

REALLY?! you faking troll you are either misleading on purpose or you are trolling or you are stupid there is a clear difference between 14 and 12 the color composition of 14 bit has more depth and clarity so if you didn’t mislead on purpose with some exaggerated presets just to place 14 bit on top while pretending there is no difference then you are stupid WTF

On the button. More bit depth provides more headroom or latitude when doing extensive editing. 14 bits is simply a better starting point than 12 bits. If there is no significant editing carried out, 12 and 14 bit images will be pretty much indistinguishable.

It’s the same with monitor calibration. Trying to calibrate by messing with the output from a 6- or 8-bit video card soon causes it to run out of tonal values and into banding troubles. It simply cannot hold a candela (ho, ho) to the accuracy and precision achieved with direct hardware calibration using a spectrophotometer and the 14- or 16-bit LUT of a professional photographic monitor.

I think you miss the whole point of exercises like the above. “Go hunting” you say….well, if you look for problems in any facet of your life, you will find them. The article above is a great example of how IT DOES NOT MATTER IF IT’S 8 BIT, 12 BIT, 14 BIT etc. Especially on AN 8 BIT MONITOR.

Exactly why I encourage the readers to do their own tests, and why I also examined prints trying to see a difference. However, I can’t share those prints over the web, so I did what I could. Many of us primarily share our photos via the web, not prints or publications, so in that respect I feel there is some value to sharing these findings over the web.

Exactly, no further comments needed. Btw, I am ALWAYS using 14-bit RAW, as I want the best possible from a place where I could be only once, at that moment. For some stupid cheap occasions I am switching to 12-bit… Also, I want to mention that there was already a study years ago with the Nikon D300, and there was a difference in shadows… Also, the difference will very probably be visible on professional monitors with at least 10-bit, better 16-bit, processing (EIZO CG).

RAW files are linear (meaning a doubling of value corresponds to a doubling of light received by the sensor), while file formats like JPG and PNG already have a gamma curve applied (meaning a doubling in value doubles the perceived brightness, since our nerve cells actually react logarithmically, not linearly like the camera sensor). That is why you always need more bits in raw files than in JPEGs.
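This linear-vs-gamma point can be sketched in a few lines of Python. A gamma curve spreads code values more evenly across the stops, so the shadows get far more codes per unit of light than a straight linear mapping would give them. (Gamma 1/2.2 is a typical encoding value, used here purely for illustration.)

```python
# Compare how many 8-bit code values land in the darkest 1% of linear
# light under linear encoding vs. gamma encoding.
GAMMA = 1 / 2.2  # typical encoding gamma, chosen for illustration

def encode_8bit_gamma(linear):
    """Map linear light in [0, 1] to an 8-bit gamma-encoded code value."""
    return round(255 * linear ** GAMMA)

def encode_8bit_linear(linear):
    """Map linear light in [0, 1] straight to an 8-bit code value."""
    return round(255 * linear)

# Code values devoted to the darkest 1% of linear light:
print("linear encoding:", encode_8bit_linear(0.01), "codes")
print("gamma encoding: ", encode_8bit_gamma(0.01), "codes")
```

The gamma curve gives the deep shadows roughly ten times as many code values, which is exactly why 8 gamma-encoded bits can look smooth while 8 linear bits would posterize.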

We CANNOT judge any of the photos, since we are viewing them as 8-bit JPEGs in the web browser. Even if you have a 16-bit TIFF opened in your image viewer, you are still looking at just 8 bits. From what I know, you need an Nvidia Quadro video card with DisplayPort and proper software to view 16-bit images.

John, your articles rock! Mamma Mallard should be employed to review all cameras and lenses in all upcoming tests then!! I would love to hear your opinion on compressed/uncompressed RAW files. I saw a similar article by a good German photographer arguing that there is no visible difference even in some heavily overprocessed files. Thanks again, greetings from Prague.

Loved reading the article, and thanks for taking the time and effort. I think it’s not so much about the exposure as the number of colours. My opinion is photographers are too hung up on absolute technical perfection (does it exist?). It’s the same argument with JPEG v RAW. To me, anyway, the big advantage with RAW is sorting out white balance and exposure issues, or wanting to use the in-camera art filters afterwards. I often shoot RAW & JPEG and often go with the JPEG as it’s done the job. The processing is a lot easier with JPEG (sometimes no need at all!) and the amount of storage space saved is massive. Going on from here, there is the camera used. If you put up 24 pics of the same subject taken with different cameras or lenses, without any labels to say which is which, I bet virtually everyone would have difficulty picking out the Leica, Nikon, Olympus, Fuji, or whatever picture. There’s so much snobbery in photography: if you don’t use a certain camera, fast lens, RAW, etc., then you are not doing it right. It must come from people sitting at a 40″ monitor with a magnifying glass looking for some imperfection. I think we just need to get out and shoot more, enjoy the fun side of photography, and get images that, whilst not technical perfection, are great to look at by anyone!

I’m beginning to notice that as well. I used to be so perfectionist, punishing myself with all kinds of technical stuff, while lately I’ve shot some landscapes that aren’t that sharp at all, and I still like them for the atmosphere. See, the air isn’t always that clean here, so you can have lenses that are sharp as hell and still get a hazy picture.

Glad someone agrees! Those considered the old masters of photography just put film in their cameras and went out and shot stuff. Some of their images are classed as masterpieces/works of art and are often copied. They didn’t worry about the number of colours (often it was just B&W) but about getting a good, interesting picture. The images were shot on film – none of this pixel-peeping rubbish that engulfs the photo world today. It’s VERY little to do with the camera/lens; it’s the photographer pressing the shutter button – the picture/story makes the image, rather than it having several trillion colours.

I beg to differ on that one. “Pixel peepers” were around long before pixels were a thing. That’s why landscape photographers hauled 50 pound cameras up a mountain instead of taking a 35mm, ie Ansel Adams with his 8×10, Art Sinsabaugh with his 12×20 etc – to get the largest, most detailed, highest resolution photo they could.

And the 12-bit vs 14-bit argument is not comparable to the JPEG vs RAW argument. If you don’t agree, you should shoot more in RAW.

It’s not that Ansel Adams didn’t care about sharpness, it’s just that his equipment was primitive by today’s standards. Adams was fanatical about quality and sharpness – that’s why he co-founded Group f/64: anything not shot at such tiny apertures was considered unworthy of consideration. If he’d had a D800, he would have been in seventh heaven.

His equipment wasn’t primitive at all. If you consider a full-frame sensor equivalent to 35mm at about 20 megapixels, Ansel Adams shot 8×10 negatives that would equate to roughly 1,200 megapixels! Everyone should see one of his prints first-hand. Nothing in the digital world compares, and you won’t get an idea of how awesome his work is by looking at anything online.

I totally agree with RJM. What really makes the difference is how you can use Lightroom or Photoshop etc. I don’t have the upgraded versions so I can’t do some of the amazing stuff I have seen YouTubers do.

As others have pointed out, it eventually gets crunched down to 8-bit anyway if you’re viewing on a standard monitor. In theory a 14-bit original will give you a bit more flexibility in processing, but in practice you probably wouldn’t notice the difference.

Besides some disturbing pictures that could be proof you are a serial killer, I think – or imagine – I see a bit more red and pink in the 14-bit. Your sense of humor certainly is a gem and enjoyable to read :-)

A nice, not-too-serious look at a subject which vexes many of us. Should we stretch our cameras to the absolute max since we paid so much for them? Will we miss out on our landscape shot by dropping down to 12-bit? Probably not, but we stay on 14 just in case. No doubt dropping to 12-bit for wildlife makes good sense – buffering, storage, etc.

Great article. One thing I have noticed that makes a big difference between 12- and 14-bit (on a D7100) is banding on a sunset shot. I’ve not got an example to hand, but a shot of a sunset with the sun still in the sky and a lot of sky in the frame would have a lot of banding when shot in 12-bit, while in 14-bit the banding would disappear.

I wonder if the banding you’re seeing is related to either the quality of the sensor and the camera’s processing (I’ve seen many complain about image banding issues with the D7100, and Canon users complained for years about image banding on the 12mp and 18mp sensor cameras), or if it’s a consequence of the smaller sensor area.

The control experiments that might help verify this would be to test a D7200 (Sony sensor instead of the Toshiba in the D7100), and perhaps a Micro Four Thirds camera.

Technically speaking, the difference between 12-bit and 14-bit uncompressed files shouldn’t be the cause of banding, so I suspect the issue lies elsewhere.

I’m a little late, but yeah, the banding is more due to noise in the sensor than the number of bits. All of these examples are being shown as 8-bit JPEGs, so I’m hardly surprised there is little difference. This article is a really good illustration that 12 bits is often more than enough.

Thanks, John, for an article written with great wit and humor. That made my day.

My primary take-away from your article was illuminated by another commenter, namely that it’s far more important to get out and make photographs than to be so wrapped up in the technical minutiae of digital photography. This is a lesson that has come to me from several different writers over the last few weeks and I think I’m finally catching on.

As usual, hilarious John! I have never really questioned 12bit vs 14bit before…another technical discussion that I sort of gloss over. Despite being a pretty big geek, this stuff makes me snore….until someone like you makes me care a little bit because of the perspective / humor. Needless to say, I will classify this as yet another of the endless arguments where everyone gets into a corner about all the differences without consequences. 5:36 AM here. Sunrise is in about 20 minutes, so I better get moving :)

The difference will be more apparent the more you manipulate the image. That’s the reason we use 16-bit processing as opposed to 8-bit in Photoshop. The 14-bit original raw image starts at a better place, information-wise, than a 12-bit image, that’s all.

For many things it doesn’t make much difference, especially if the final destination is a small JPEG. However, there is a difference in the image information, and it does matter in some situations. In addition, as raw processors get better, that difference can grow. This is different from sampling (megapixels); it’s about precision. You can manipulate an 8-bit image just fine and most people can’t tell, but you wouldn’t want to set your raw files to 8-bit, would you?

Holey crop John! I’m not sure what I enjoyed more… the common-sense conclusions or your commentary. As it is, I will continue to shoot RAW in 14-bit. I just upgraded to a Drobo that can take up to 32 TB, so no storage issues here. Since I upgraded from the D700 to the D810, my MacBook Air is not fast enough to handle the 12- or 14-bit files, so whether I wait for a 12-bit file to process and load or a 14-bit one makes no difference right now. I plan to move up to either the 2014 MBP 15″ or the new model; 16GB and blazing fast processing should do the trick. So in the end, how you choose to set your baseline is more about your restrictions and determining what workflow achieves the desired result. I might as well shoot at the upper limit of my camera, as it will make no difference today. But who knows about tomorrow? I don’t want to say ‘oh SHUT(ter)!’ if new technology allows us to see a meaningful difference. I look forward to your next post.

Well said, Jon. Tomorrow’s technology might well make the 12-bit/14-bit difference easier to discern. One thing I noticed going back over my files is that the 12-bit D810 files seem about 1/4 stop more exposed than the 14-bit at the same exposure settings. I was actually getting more detail out of deep shadows with the 12-bit files, and only when I severely underexposed did I see weird colors, which were easily corrected later. I would be very interested to hear if other D810 owners have noticed this difference.

The 12 vs 14 bit myth is likely the same as shooting uncompressed vs lossless compression (vs even lossy compression). How do those compare to each other? I never performed any testing related to it and just assumed that shooting to the highest standard was the way to go for material that really deserves it (mind that nuance, it’s very important to me); all the rest goes to lossless compression, still 14-bit. Under the given conditions, the documentarist in us should aim for the very best, even if it’s costing more GBs at the end of the day. But we can be 100% sure that even the best printer existing in the universe won’t reveal any difference, let alone ‘just’ an Epson. Also there I see so many settings in the driver that it would take ages to test and see the differences, so like most people I click what looks to be best.

Thank you Betty! You are obviously speaking from actual usage experience. Many of the nay-sayers here are commenting from theory and / or reinforcing their buying decision and / or only know how to “spray and pray” (and therefore prefer a smaller file).

The colours/subjects above are quite pale, which doesn’t bring out the potential of the additional colour bit depth. None of the tests above compared how the files hold up under manipulation of COLOUR, which is where 12- and 14-bit files differ (in how much colour information you are working with).

I suspect the dynamic range of the camera’s sensor is making a big difference here. The D810 is phenomenal. My D700 not quite so much in comparison. Several years ago I tested some differences between 12 and 14-bit with the D700 for Milky Way shots. Apparently I didn’t keep all the crappy test shots as I can’t find them now, but I decided to keep the D700 at 14-bit back then for a reason. I want to run that test again soon with the D810 and see how valid it is now. I’ll probably still keep it at 14-bit regardless as RAW converters keep getting better and better. It’s amazing what I can do with a D70 file today. :-P

I was shooting the D7000 today in 12 and 14 bit and hope to report soon on that. I recommend that all readers do their own tests on this with their own cameras and make their mind up that way. My tests are valid for how I shoot with the body I use.

I frequent a number of photography forums and, anytime the issue of image quality comes up, I see pages and pages and pages of text. There are technical explanations that might or might not be correct, there are glossy descriptions of differences noted, but most of all there are pages and pages of text. I enjoy articles like this because they’re image heavy, letting the reader decide whether the reader can see a difference in images. Instead of text, I’d rather see someone in the comments post a dropbox link with visual confirmation of the differences they’re describing … after all, image quality is about images.

The test is a bit misleading. Posted images will not help. Some comments have pointed this out already.

14-bit is about more information stored. You will not be able to see it right away. If you shot a picture which is perfect for you (some sharpening and minor color grading might be added), 14-bit is not needed. But if you start processing the picture to your liking, to reach a specific look, or to rescue a not-so-well-made shot, this extra information might help prevent color banding or other ugly outcomes.

I went to 14-bit shooting on the D7000 after seeing a feature somewhere on a D300s that showed a significant improvement in resolving shadow detail on the higher bit file. I have been shooting in compressed 14 bit on the D800, about a 41MB file. But hey, if it can handle a 12 bit file . . . . I’ll have to at least give it a shot.

The “raw” file formats from various camera vendors use different tonal compression techniques to reduce bit depths. The data are not just being shifted right two bits! Instead, each technique uses a very carefully chosen 4K-tone subset out of the 16K-tone domain. Nikon’s technique works really well, whereas Sony’s *only* “raw” format is effectively 11-bit. I think that the author of this piece missed an opportunity to write an informative article about the advantages and disadvantages of all the various options, and chose instead to write a depressing pile of anti-intellectual snark.
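The difference between bit-shifting and a chosen tone subset can be sketched with a toy model. Neither mapping below is any vendor’s actual algorithm – a simple square-root curve stands in for a real tone-compression table, purely to illustrate the idea:

```python
import math

# Toy contrast between two ways of squeezing 14-bit data into 12 bits:
# dropping the low two bits vs. a square-root-style tone curve that
# keeps finer steps in the shadows.
FULL = 2 ** 14 - 1    # 16383, top of the 14-bit range
SMALL = 2 ** 12 - 1   # 4095, top of the 12-bit range

def bit_shift(v14):
    """Uniform step size everywhere: just discard the low two bits."""
    return v14 >> 2

def tone_curve(v14):
    """Nonlinear mapping: spend more 12-bit codes on dark values."""
    return round(SMALL * math.sqrt(v14 / FULL))

# Distinct 12-bit codes used for the darkest 1/16th of the 14-bit range:
dark = range(FULL // 16)
print("bit-shift: ", len({bit_shift(v) for v in dark}), "codes")
print("tone-curve:", len({tone_curve(v) for v in dark}), "codes")
```

The curve hands the deep shadows far more output codes than a plain shift would, at the cost of coarser steps in the highlights – which is the trade-off the comment above describes.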

Not sure your sensible words will find much traction here, I bet you’ll be told that 16-bit files coming out of a medium format Phase One are no better than Sony’s 11-bit efforts. Tonality, sadly, is lost on most people….

I still shoot Ilford FP4 6×7 B&W film for prints (hint: it’s not for the resolution!!), but then I don’t convert the shots to 8-bit JPGs, view them at web resolution, and make sweeping statements as to how they compare to a smartphone camera :-)

Yes, they are saved as JPEGs to post on the site. As well, your monitor probably only displays 8 bits, so you have made a good point: if one shares their photos primarily on the web, there is more forgiveness. However, I did print some images to see if I could tell a difference that way, but I couldn’t.

I’ve been trying to learn birds in flight for the longest time and am still having difficulty with that. I always drop down to 12 bit because I can get so many more captures in burst mode. Now if I would only open my eyes while depressing the shutter release maybe I would wind up with a bird in the frame. Verm, another great article full of humor and worthwhile information.

I can’t believe this stuff. I mean, I can see the difference as clear as…well, pretty darned clear…kind of…maybe…not really….nope! No difference to my eyes! Who cares anyways. Another few years and we’ll all be shooting med format!

So, let me see if I understand what you’re trying to say here. If I start using 12-bit and I forget to remove my 6-stop ND filter then I need to remember to switch to 14-bit whenever I shoot candlelit test charts (or do I need to remember to switch to 14-bit anytime I forget to remove the 6-stop ND filter?). This 12-bit stuff just seems to be beyond my mental capacity.

The article stipulates that there is no (visible) difference between 12 and 14bit raw.

To argue from the total number of colours vs. what the eye is supposed to distinguish misses the point entirely!

The issue is the coding of color values in a fixed-point (linear) format. As a consequence, half of all code points are in zone 10, half of the remainder in zone 9, and so on. Ten halvings is roughly 1:1000, so zone 1 is left with very few distinguishable code points. That means quantisation noise and banding in the deep shadows, especially if those shadows are lifted.

This is definitely visible, but not with examples like those in this article showing full pictures at a resolution of 600px across.

If you print large (A0) it most definitely shows. Since storage-space savings are not an issue, opting for 12 bits is clearly unwise.

A great piece of an article. I shoot 12-bit raws, but was always wondering: how much am I losing this way? Glad to know it’s not that much. Also nice to read something funny and with a healthy distance. Great job.

My findings were similar to yours. I tried the various compression choices as well. I concluded that the 14-bit option was worth keeping for higher-ISO shots, where shadow details held colour better. For a landscape photographer shooting below ISO 200, I doubt there will be a visual difference, as you describe.

I wish to explain again: with 14 bits you have a mere 16 tone values in zone 1! With 12 bits there are only 4 tone values available. In zone 2 there are double these values: 32 vs 8. Would anybody here argue that this does NOT produce visible banding and quantisation noise? If so, then please argue your point.

Zone 2 should have clean information.

A discerning landscape photographer who will print big should definitely use 14 bits.
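The arithmetic in this comment can be sanity-checked with a short sketch. It assumes a purely linear encoding and a 10-stop range, both simplifications; as other commenters note, real NEF files apply a tone curve that redistributes these counts:

```python
def codes_per_zone(bit_depth, zones=10):
    """Linear encoding: the brightest zone takes half of all code values,
    the next zone half of the remainder, and so on. Returns the counts
    ordered from zone 1 (darkest) up to the brightest zone."""
    counts = []
    remaining = 2 ** bit_depth
    for _ in range(zones):
        remaining //= 2
        counts.append(remaining)
    counts.reverse()  # index 0 = zone 1 (darkest)
    return counts

# 14-bit: zone 1 has 16 code values, zone 2 has 32
# 12-bit: zone 1 has 4 code values,  zone 2 has 8
```

Under these assumptions the figures quoted above (16 vs 4 in zone 1, 32 vs 8 in zone 2) fall straight out of repeated halving.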

You’re assuming that 12-bit encodings are a linear subset of the 14-bit domain, i.e. that the conversion to 12-bit data is a simple right shift. Neither Nikon’s nor Sony’s tone curves work like that. Do more research. There are good reasons to use 14-bit lossless RAW files, but misunderstanding 12-bit encodings isn’t one of them.

The conversion is still done at 14 or even 16 bits, and then the so-called RAW file is cooked out of that in-camera: a conversion curve is applied, and the number of bits is decimated to the value left in the file.

Most of the time, the conversion curve is non-linear, and a bias is applied.

Sony goes one step further, and uses block encoding (base bit plus delta) on top of the conversion curve.

A well-chosen conversion curve and block coding reduce the amount of stored data significantly, without significant loss of relevant information…

And, as long as exposure is placed in such a way that the dither from sensor + readout + conversion + quantisation noise, plus the imperfect MTF of the lens, masks the loss of information in the process, the results tend to be remarkably good.

Even if, from a purist point of view, “uncooked RAW” without compression may sound interesting, in practice it does not offer significant advantages…

Nice one, John. You may want to try lossy DNG for yourself, too. It’s pretty impressive if you’re not going for 4-5 EV compensation, and also quite good for storing family etc. images at a smaller resolution (than 36 megapixels).

When I look at the first two pictures of the Yucca leaves and the gnarled wood behind the leaves, I see a clear difference. The color in the leaves makes much smoother transitions in the 14-bit than in the 12-bit. The 12-bit is somehow kind of harsh in comparison. The same is true of the gnarled, twisted wood in the background: the 14-bit is much smoother and less harsh in its color transitions. It is not like I can pick out colors that are not there with the 12-bit, but there must be fewer colors in a transition, even though this transition has been transformed to an 8-bit JPG.

It is hard to understand why more people are not seeing this difference, when it is so clearly there when I look at the pictures.

This is a valid Nikon-to-Nikon comparison, but if you compared Sony’s inferior, lossy 11+7-bit format and looked for banding and artifacts, you’d see it in a NY minute. Sony has quit marketing their 11+7-bit cameras as 14-bit.

Thanks for such a great post. I am looking at buying a Canon 1Ds Mark II, which only has 12 bits vs the 14 bits of the newer cameras (even the Rebel line). I am more comforted that if I do get the 1Ds MII, the 12 bits won’t be such a big factor.

One question:

Won’t the final viewing medium, whether print or screen, do a fair amount of image processing anyway, hence interpreting much of the image? For example, do images on Apple’s large monitors look better because those monitors are better than the typical PC’s? Doesn’t the printer “rip” the images sent to it anyway, hence interpreting the images as well?

Interesting article! I believe most people will agree with your point, but we just love the luxury and choose 14-bit. In the very beginning, you mentioned “12-bit image files can store up to 68 billion different shades of color. 14-bit image files store up to 4 trillion shades.” Where do these numbers come from? I thought a 14-bit file merely gains 2 extra bits of width, providing 4 times finer resolution. The theory may differ slightly between vendors, but a 4e12/68e9 ≈ 59 times difference is far beyond my expectation. Would you please share your reference? Thanks.
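For what it’s worth, the article’s figures follow from cubing the per-channel counts across the three RGB channels: the 2 extra bits give 4× finer resolution per channel, which becomes 4³ = 64× more representable colors overall (≈59× when you divide the rounded headline numbers). A quick check:

```python
def total_colors(bits_per_channel):
    # Each channel has 2**bits levels; R, G and B combine multiplicatively.
    return (2 ** bits_per_channel) ** 3

print(total_colors(12))  # 68719476736      (~68 billion)
print(total_colors(14))  # 4398046511104    (~4.4 trillion)
print(total_colors(14) // total_colors(12))  # 64 = 4 per channel, cubed
```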

Good stuff… I just want to point out one thing I believe you are missing in this article: capturing more detail (12-bit -> 14-bit) in your raw file greatly increases the artistic possibilities in post-processing, giving vast opportunity to control the dynamic range in your shots.

Two years ago I made some pics using an IR 720nm filter, just for fun. My experience post-processing that material is the only circumstance where I am pretty sure it is really easy to see a difference between 14 and 12 bit. For other reasons I have no need for more than 12 bits… BUT unless I need more buffer, I shoot in 14-bit. Why? If I ever need more than 12 bits, I will surely forget to change settings. My brain is surely more limited than my camera is.

John, when reading this article I noticed the JPEG files are 8-bit files. I’d love to know more about how you opened the files. Specifically, did you use Photoshop and ACR to open the files? And if so, what color space did you use? Did you use an 8-bit or 16-bit color space?

I fail to see the point of this article, as it says nothing about the real differences between 12-bit and 14-bit. It is no great surprise that your examples look very similar, because you have ‘processed’ them and are then showing 8-bit images. A true 14-bit capture will always be better than a 12-bit one, period! Just as 12-bit will always be better than 10-bit. If you only ever intend taking ‘snaps’ and putting them online, then save yourself a lot of money and use your mobile phone. If on the other hand you are a pro photographer, as am I, and predominantly shoot for publication, then having the extra amount of information can help tremendously. Most publications can only dream of printing 16,000,000-plus colours (8-bit), so the challenge is to squeeze as much detail out of a 14-bit RAW as possible before converting it to 8-bit for printing. Try opening an 8-bit file and making some hefty changes in ‘curves’, then look at the histogram and you will see the ‘comb’ effect indicating how much info you have just thrown away. Do the same with a 14-bit file, convert it to 8-bit, and notice a perfect histogram with no ‘comb’ effect.
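The ‘comb’ effect described above can be sketched numerically. This toy example stands in for a hefty curves move with a simple 2× brightening and counts how many distinct 8-bit output levels survive; the skipped levels are the empty teeth in the histogram. (A simplification: real curves are non-linear, but the mechanism is the same.)

```python
def distinct_levels_after_curve(bit_depth, gain=2.0):
    """Apply a strong brightening (multiply by gain, clip), then map the
    result down to 8 bits. Returns how many of the 256 possible output
    levels are actually used; missing levels show up as histogram gaps."""
    max_in = 2 ** bit_depth - 1
    used = set()
    for v in range(max_in + 1):
        boosted = min(v * gain, max_in)
        used.add(round(boosted / max_in * 255))
    return len(used)

# Editing directly at 8 bits leaves gaps (only ~129 of 256 levels used);
# editing at 14 bits and then converting to 8 bits fills every level.
print(distinct_levels_after_curve(8))   # 129
print(distinct_levels_after_curve(14))  # 256
```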

For cameras like the Nikon D7200 and D5500, the fps improves with 12-bit RAW. I guess it would make more sense to use 12 bits on my D5500 whenever I need higher fps, but otherwise stick with 14-bit RAW (just in case).

Thanks John. Your testing settled the matter for me. All I needed to see. For me it’s 12 bit, lossless compressed raw. Just pre-ordered the new Nikon D500 and will use this setting happy to know that I’ll get a larger buffer and more images on my card and computer. I’ll say this though, after reading some of the other comments: Some people are real hard-headed aren’t they? Maniacal pixel peepers to the grave. Although, perhaps some people can resolve more detail/color/shades than the average person? My vision is 20/20, but my wife is able to read a road sign 2x as far away as I am. She has super normal sight, hearing, and sense of smell. In fact, we kid her by saying that she doesn’t have a nose… It’s a snout! *LOL* So perhaps vision differences can be a factor as well?

Like others suggested, “bits” actually only matter for the color data and nothing else. This isn’t a new problem! Even back in the film age, when scanners had a bit depth to scan at, this problem was always there. The “bits” aren’t something you can really see directly, because “12, 14, 16” bits are used as data storage; the actual bits displayed come into play at your printer and your screen.

Output devices such as screens and printers can ONLY OUTPUT 8 bits per channel. Simply look at a curve definition, where each of the 3 RGB colors holds levels 0–255. So 256×256×256 is exactly what we call the 24-bit total color range a file format can hold, which means 8 bits per channel. It doesn’t matter if your RAW can hold 10000 bits; after all, displayed on your screen it’s merely 8-bit.

So what are the extra “bits” doing? Like others suggested, it’s the color rendering. The more “bits” you are using, the more smoothly the color gamut will render. There are two places you can see this:

1) Pushing extremes on your color curves. Since 8 bits out of 12/14/16 bits is merely the tip of the iceberg, when you push to extremes you will see color breakage. Simply put: although your RAW may look identical to the JPEG output when opened in Photoshop, if you change the curve on the RAW in Lightroom vs the JPEG’s curve in Photoshop without changing the white balance setting, you will see that the 8-bit JPEG makes it impossible to correct white balance once you are way off the mark. On the other hand, the extra “bits” are exactly why you can re-select white balance on a RAW. Whenever you drop the eyedropper on one spot, the color space uses that point as the starting point: it picks an 8-bit curve out of the 12/14/16 bits you have in the RAW, and renders that 8-bit color space for you. So your yellowish color tint can go away, daylight can become sunset, midnight can become daylight. But after exporting the RAW to JPEG/TIFF, you are still going to get that 8-bit color curve.

2) Color smoothness. This is evident when processing a 16-bit TIFF vs an 8-bit JPEG in Photoshop. This time, both the TIFF and the JPEG already have a defined white balance / color curve. 16 vs 8 here is simple: when you push the curve, you can see a flat surface of one color begin to break into random color dots. The fewer the bits, the more visible the breakage. Just take a 16-bit TIFF and save it to 8-bit, 4-bit, 2-bit, and you can see how colors get mapped to the nearest available value; with insufficient bits to map to, you start seeing random color dots. The result tells us that the more bits you start with, the smoother your color rendering will be. The only downside is, once again, that we are displaying on AdobeRGB/sRGB (8-bit) monitors, so anything beyond 8 bits shows little benefit. That’s the reason we have RAW to save up to 16 bits per channel, even though we don’t really need it “today”… maybe in the future.
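The “save into 8bit, 4bit, 2bit” experiment above can be mimicked on a single channel. This sketch re-quantizes a smooth 16-bit ramp by discarding low-order bits and counts the distinct tones that remain; the fewer the tones, the coarser the visible steps:

```python
def posterize(value, src_bits, dst_bits):
    """Drop precision from src_bits to dst_bits and scale back up --
    the classic posterization that turns gradients into bands."""
    shift = src_bits - dst_bits
    return (value >> shift) << shift

ramp = range(0, 65536, 257)  # a smooth 16-bit gradient, 256 samples
tones = {bits: len({posterize(v, 16, bits) for v in ramp})
         for bits in (8, 4, 2)}
print(tones)  # {8: 256, 4: 16, 2: 4}
```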

“14-bit vs 12-bit RAW – can you tell the difference?” That is the rhetorical question we are commenting on! The short answer has to be NO, because humans can’t detect even 8 bits of ‘shading’. If (as I interpret the question) we are asking whether we should work in 12-bit or 14-bit, then my answer is 14-bit. I have given my reasons for this in an earlier post.

I am struggling to understand why you are saying “Actually ‘bits’ only matter for the color and nothing else”. This is very misleading. Why would camera manufacturers put a great deal of technology into capturing 16,384 shades when they could give us 256 (8-bit)? In an 8-bit RGB colour space, R-005, G-005, B-005 would show as black; R-006, G-006, B-006 would still appear black, but we know it is ‘lighter’, right the way up to 255, 255, 255, maximum white. It figures, therefore, that if we work in a 14-bit RGB colour space we have 16,384 different shades (more detail). I always save my TIFFs down to 8-bit before sending them to the printers, for the reasons you have already mentioned.

The assumption regarding white balance is completely wrong. The way white balance is determined has nothing to do with bit depth, full stop. The whole point of white balance is to try to cancel out any influence the ambient/flash lighting has on the image. Regardless of the bit depth, if you take a white-balance sample on ANY pixel, the software will make that pixel colour-neutral (the RGB values will all be the same number), then apply the same correction to the rest of the image. I could get carried away with the fundamentals of white/grey balancing, but that’s a different subject; I’m only pointing out that it has no bearing on bit depth.

Your second observation is also misleading: you quote colour (“colour smoothness”), but what we are really dealing in is shades of grey (tonal range)! If you choose to display the channels in greyscale (under Photoshop preferences) you will see each of the RGB channels displayed as a greyscale, so any ‘smoothness’ or adverse ‘combing’ effect will show in the shading, as it would if you converted to a B&W image. You also refer to our only having 8-bit monitors, meaning Photoshop will display a 16-bit file the same as an 8-bit one; true, but you then wrongly assume the extra bits are of no use, except maybe one day.

Let’s clear up one thing: what we are actually talking about is the editing capability of an image in relation to its bit depth. It’s a no-brainer: if the final image needs to be of the highest professional standard, the more bits the better, as they offer more information to allow a good edit. If the image does not require that sort of detail then there’s no need to work in 14-bit (8-bit may well be all that is required). If you need any more proof of this, then consider why Photoshop introduced 32-bit capabilities (HDR) if, in general, we can still only display or print 8-bit images.

Thanks for taking the time to put together this post. I feel, however, you may have missed the importance of “headroom” and channel blow-out. As well (as another commenter has alluded to), there are camera differences, which essentially come down to the saturation levels of the sensor. If the ADC doesn’t get more photon counts then it is indeed pointless; however, if it does, does it cluster them at the top of the ADC’s range, creating an artificial saturation of the signal, or can it record more? Some time ago I tested a SuperCCD Fuji camera and found that there were actual observable differences in channels (specifically the red) not blowing out on high-value tones: wedding dresses and red autumn leaves (or even strawberries).

I often use tools such as dcraw and “rawhistogram.exe” to examine the data output of the files and see what headroom I gained. I’d be curious to see the outputs from these RAW files, to identify whether there actually was more data above 4096. For instance, a Canon sensor often gives up its max at 0000 1111 1010 0111, while the SuperCCD was giving 0011 1101 1111 1101.
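Decoding the two binary maxima quoted here (figures from the commenter’s own tests, not official specs) shows why the comparison matters: one value fits comfortably within 12 bits, while the other genuinely needs 14:

```python
# The two raw maxima quoted above, written out as 16-bit binary strings.
canon_max    = int("0000111110100111", 2)  # = 4007
superccd_max = int("0011110111111101", 2)  # = 15869

# bit_length() gives the minimum number of bits needed for each value.
print(canon_max.bit_length())     # 12 -> no data above the 12-bit ceiling
print(superccd_max.bit_length())  # 14 -> real information in bits 13-14
```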

An increase of one significant bit is a doubling, so a change of two significant bits is a doubling of a doubling (4×).

“The human eye is only capable of distinguishing between 2.5 and 16.8 million different shades of colors. ”

First and foremost: MOST monitors are incapable of displaying information beyond 2^24 colors; in fact, most monitors are “8-bit” per channel. We just call the 3-subpixel bundles a single pixel, but it’s really just like having a CFA with a “reverse sensor” behind it.

Second. Most consumer market printers are INCAPABLE of printing 2^24 colors with any accuracy!

If humans cannot see past 8 bits, why would medical technicians REQUEST 12-bit monitors (i.e. 36-bit color)? These expensive monitors are used to display the output from things like MRIs, CT scans, and other digital imaging.

Finally, if you didn’t get it, because you’re a University of Google graduate: your monitor is incapable of displaying the difference between an 8-bit raw file and a 12-bit raw file. You specifically avoided getting anywhere close to the extra stops that become visible in post-processing until the very end. Of course, you could ALSO use the full dynamic range (or HDR) aspect, or actually be competent enough to admit that two 8-bit JPEGs made from the same shot are going to be identical, rather than sounding like a complete idiot acting like you should see the difference. You could TALK about local vs global adjustments, or otherwise be SMART instead of acting like an idiot.

You don’t need it; that is PERFECTLY FINE. That is why it is an OPTION on your camera; you even have the OPTION to just shoot JPEGs! The human eye can’t see past 8-bit according to you, so why ARE you shooting in 12-bit?

*Now, will I argue that you’re going to USE those extra two bits? No, not really… but seriously, you know what you’re doing and you’re only coming off as an idiot for doing it. Either address the subject clearly, presenting what people WOULD want those bits for, or just admit that you didn’t even bother graduating from the university of google. If you aren’t going to look up your opponent’s argument, what is the point of this “review”? That you don’t know what people would use the extra bits for?

” … how I shoot with the body I use.”.. Lordy, if the photo geeks here only knew how many varieties of breaks, sprains, tears, and dislocations etc. that used body of yours has endured… You should write a comparison between 12 and 14-bit reconstruction of exploded ankles after twenty-five foot groundfalls! You’ve come a long way from Hueco Tanks in the ’80’s, but then I was even more astonished when Crusher published his Desert Towers tome. From Bolt to Bit Wars, eh? Reducing from 14 to 12 bits ergo is the equivalent of bit chopping? It’s hard to get too worked up over the digital imaging debates over saving/storing/editing and essential critical bit thresholds; the modern photographic multi-verse hardly seems principled enough to find anything even as “radical” as the f/64 manifesto, much less any actual action you could compare to the establishing of the Bachar-Yerian route, for example. Your practical approach, i.e. use whatever seems to work better for you until/unless you experience issues that require a reassessment of best practices, beats the heck out of wasting countless hours that might be better spent making actual pictures.

Be careful here when you compare shadows details in 12 and 14 bits : your screen and the JPEGs resize the color space to 8 bits/channel, so the algorithms in your RAW editor are certainly not the same to resize from 12 or from 14 bits. I suspect that you are comparing here the way your software handles both color encodings rather than the actual properties of your digital files.

Wouldn’t a higher bit depth more or less only be relevant in the case of gradients, like a blue sky? And architecture, which also involves large areas of almost identical colors? Hence that would be the best way to test it?

Alas, I’m shooting my Nikon D600 as 12-bit losslessly compressed. No reason to obsess about something one has to do a test like this to confirm:)

By the way, to anyone who wonders whether lossless compression is truly lossless: “lossless” compression typically works by discarding information that truly IS redundant, like when several neighbouring pixels (usually horizontally, as that’s how the data is arranged in the file, i.e. scanlines) have the same color. Not “almost the same” but THE same color. The compression algorithm then says “4 pixels of color value #884728” rather than “1 pixel of color value #884728, another pixel with a color value of #884728, …” and so on.
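Nikon’s actual lossless NEF compression uses predictive/Huffman coding rather than plain run-length encoding, but the “4 pixels of color #884728” idea above can be sketched as a toy run-length codec. Because only exact repeats are collapsed, decoding restores the input bit for bit, which is what makes it lossless:

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (count, value) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1] = (runs[-1][0] + 1, p)  # extend the current run
        else:
            runs.append((1, p))              # start a new run
    return runs

def rle_decode(runs):
    return [v for count, v in runs for _ in range(count)]

scanline = [0x884728] * 4 + [0x000000]
print(rle_encode(scanline))  # [(4, 8931112), (1, 0)]
assert rle_decode(rle_encode(scanline)) == scanline  # truly lossless
```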

And even LOSSY compression CAN be done in graceful ways. There’s a vast difference between a 128 kbps MP3 and a 128 kbps AAC, to use audio compression as an example. I worked in radio production for years when I was younger, have had music released on vinyl and CD, produced in a professional studio, built multiple loudspeakers, sworn that vinyl sounds better than digital, etc…

– and yet: I recently opted to compress my entire former CD collection to… 128 kbps AAC. After multiple blind tests (done with the help of my partner, to keep score) I just couldn’t tell ANY difference between a 128 kbps and a 256 kbps AAC file. And yes, that test was done on some of the most accurate headphones I’ve ever owned: the Denon AH-D2000’s.

Being a person who always struggles with running out of storage space, I decided there was no room for choosing 256 kbps (or even 160 kbps) just to make my OCD feel better;)

OMG, good thing you’re not posting to an Audio Faithful True Believer site; don’t you know, the princess-and-the-pea types swear they can actually hear 40k frequencies? That presents all sorts of issues of course regarding actually recording anything close to that and then somehow transferring it to magic vinyl where needles can only handle 22k as speced by trustworthy manufacturers. In a similar vein, I at least have discovered that, despite attending very few rock concerts as a youth, at age 65 I cannot detect frequencies much past 12-14k, so from now on I can save on all sorts of pricey audio gear that promises the moon, when I can only see a dim streetlight. I have detected differences in mp3 quality between 128 and 256kbps, but suspect it’s incidental to the encoder version’s conversion quality and not the size per se. I found iTunes conversions sounded noticeably cleaner for the same bit rate, and now will have to look at AAC as a superior option, if a new car will play them and such.

A very interesting article in which the proof is in the image(s)… I think. And interesting that comments are still coming in a year later.

One point that was made several times is that 24-bit colour depth is all that we can see on the monitor; another point made is that 14-bit data has more information than 12-bit data, hence more room for fine adjustment.

If I put these ideas together, notwithstanding that more bits give you finer control/resolution over the changes you make, are you not viewing those changes on an 8-bit-per-channel monitor as you tweak your image? Considering just a single colour channel, a change in value from, say, 1 to 2 in 8-bit space should be visible on the monitor as a change in brightness. If my data structure in computer memory were 16-bit-per-channel, it would still get quantized into 8-bit-per-channel for display. Thus, data values from 0 to 255 in 16-bit space would map to 0 in 8-bit space; values from 256 to 511 would map to 1; 512 to 767 would map to 2; etc.

So in doing fine adjustments to my 16-bit data, how would I see the effect of a change in which the 16-bit value were adjusted from, say, 585 to 728? Would this change not be invisible, because both values still display as just 2 in 8-bit space? Do I not have to cross the threshold of a mapped 8-bit value before I can detect that I have made a change to my image?

I know this is slightly off the topic of 12- vs 14-bit data, but it has me baffled as to why we need more information than what can be displayed (or printed).
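The mapping described in this question can be written down directly. As a partial answer to the puzzle: a single sub-threshold tweak is indeed invisible, but edits are rarely single tweaks; working at high bit depth keeps rounding errors from accumulating across successive adjustments before the final 8-bit display conversion. A minimal sketch:

```python
def to_display(v16):
    """Quantize a 16-bit channel value to the 8-bit value a monitor shows."""
    return v16 >> 8  # 0-255 -> 0, 256-511 -> 1, 512-767 -> 2, ...

# The example adjustment from 585 to 728 lands in the same display bucket,
# so on its own it is invisible on an 8-bit monitor:
print(to_display(585), to_display(728))  # 2 2
```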

Damn, people commenting here are SO DUMB! It’s NOT about colors!!! We all know we use 8-bit monitors, but bits represent data. The amount of data. With an 8-bit raw you can do shit in POST compared to 14- or 12-bit raw files. This article is about RECOVERING DETAILS, not about how great the colors look. Learn something first, then comment.

You can call my comment “stupid”, but that does not help explain or lessen my confusion. Since posting that comment I have actually figured out the source of my confusion, and I now understand why more bits in the RAW file are important. And you are correct that it comes down to what you can do in post-processing to recover details, and how that is achieved.

So yes, I should learn something, and I was hoping that by posting the question I might get a helpful answer. YOUR “helpful” comment added zero to my understanding, and that would be zero in 8-bit, 12-bit, 14-bit… even gazillion-bit format.

!!! WARNING !!! THIS MAY BE RIGHT ON A HIGH-END CAMERA LIKE THE D810, BUT IT IS FAR FROM TRUE ON THE D7200 !!!

I have to say, I also had my doubts about the usefulness of 14-bit RAW files, and after reading this, and given that I get extra FPS on my D7200, I did some tests at home first: some really extreme underexposed, colorful pictures at the same settings, on a tripod, with manual focus. Well, there was a big difference in the recovered images, but OK, I had to push up 5 stops! When I recovered from one stop, I could not tell the difference when comparing them in Lightroom, and even had to check the files again to know which was the 12-bit.

It was a sunny day today, so I went out to test this outside on some bird and nature pictures. Well, 12-bit sucks! The 12-bit shots are very noisy and lack detail. I have to say it was in the afternoon, just before and during the golden hour in winter, so not that much light and a lot of light-color variation. I was very pleased to be able to shoot the birds in flight at 6 fps instead of the usual 5 fps; it really felt better when I was out, and sounds cool too :D.

Now I’m home again and have looked at my pictures on my computer in Lightroom. My pictures simply look like they were taken with a cheap budget camera, and trying to fix or recover some of it in Lightroom doesn’t give the usual result either.

I discovered that there’s one place you will immediately see a huge difference in a 12 bit vs 14 bit photo. Take three or more bracketed RAW files from a D810 through Photomatix for an HDR file. The 12 bit will come out a bright green, no matter what colors were in the viewfinder. 14 bit files work as you would expect. It was driving me crazy until I looked through the FAQs on Photomatix. Sure enough, their current software as of mid Dec, 2016 can only deal with 14 bit RAW files from the D810. JPEGs, should you wish, work just fine, although if you care enough to buy a D810, shoot HDR and go through Photomatix, I would expect you to use RAW.

The main difference between 12-bit and 14-bit raw (assuming both are compressed) should be in the highlights. Click on the links in the “Encoding table size” column on http://photonstophotos.net/NikonInfo/NEF_Compression.htm and scroll to the bottom to see how highlight values are quantized.

Many say the quantization of highlights doesn’t matter because sensor noise is higher in the highlights. But I think this overlooks the fact that noise is reduced by down-scaling. If a camera is 16MP but the final image is 1MP, that’s a lot of noise reduction, so I would expect to be able to tweak the highlights more with a 14-bit file.

What I just said does not apply to uncompressed (or losslessly compressed) raw. I would expect 12-bit uncompressed to be about as good as 14-bit compressed (slightly better in the highlights, slightly worse in the shadows).
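The down-scaling point above can be sketched with a quick simulation: averaging k noisy samples reduces random noise by about √k, so a 16 MP to 1 MP downscale (16 pixels averaged per output pixel) cuts noise roughly 4×. A minimal sketch, assuming purely Gaussian, uncorrelated noise:

```python
import random

def residual_noise(n_samples, block, sigma, seed=0):
    """Average blocks of `block` noisy samples and return the standard
    deviation of the block means (the noise left after down-scaling)."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, sigma) for _ in range(n_samples)]
    means = [sum(noise[i:i + block]) / block
             for i in range(0, n_samples, block)]
    mu = sum(means) / len(means)
    return (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5

# Averaging 16 pixels per output pixel leaves ~sigma/4 of the noise.
print(residual_noise(160000, 16, 1.0))  # close to 0.25
```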

Well, kind of yes and no. If you do everything well, then JPG is good as well. But you want post-processing, and the difference between 12- and 14-bit is 4 times more data you can use to recover texture/colour details. So I would still go with the American dream: bigger is better.

I read the compression note in the previous comment as well. I don’t really understand that. Lossless compression doesn’t change anything, so it is as good as uncompressed, it just takes less space. Lossy compression is smaller, and how much detail you lose depends on the picture. Usually it gets rid of the extremes and the very-small-amplitude frequencies (meaning white will stay white, but micro-contrast details might disappear). Basically, lossy RAW makes no sense to me at all, as it does the same thing JPEG compression does, maybe with a slightly less brutal loss of data, and without the added in-camera processing. And yes, a small-amplitude frequency at 12 bits means more data at 14 bits, so 12-bit will suffer more data loss.

Erm… The bit-depth encoding has more to do with how many discrete tone values your camera can capture for each stop. Higher bit depth = more discrete tone steps for each stop of light captured, which means more ability to white balance and generally push the colors around in post before banding and other artifacts show up. Contrary to what people think, the camera’s dynamic range is not linearly encoded in the raw file, so there’s no real correlation between how much dynamic range the file can hold and how many bits it has. For example, Canon 14-bit raw .CR2 files generally encode the sensor data in tone values 2047 to 13586 (roughly; it can vary a bit from one camera to the next, but for the Canon cameras I’ve used, that’s what it is). This has been the case since .CR2 files have existed, and it doesn’t matter if it’s an old 600D (T3i), which gets just over 11 stops of DR, or a new 80D, which gets 13+ stops of DR. The total dynamic range captured is encoded into roughly 11,500 discrete tone steps. Part of raw development is taking that non-linear raw encoding and linearizing it back out.

Your experiment has largely borne that out. Given that you’re looking at 8 bits per color on your monitor, with a standard Rec. 709 gamma (as with sRGB), you’ve got 12 stops’ worth of data encoded into that 8-bit file. As long as you’ve got at least 3 or 4 discrete tone values per displayable tone value, you will not see much difference between the two.
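Taking the figures in this comment at face value, a back-of-the-envelope check supports the conclusion: the quoted .CR2 range spans about 11,500 usable code values against 256 display levels, for roughly 45 raw steps per display step on average. (With a gamma curve the darkest display levels get far fewer raw steps than this average, which is where 12-bit vs 14-bit differences would surface first.)

```python
raw_steps = 13586 - 2047   # usable .CR2 code values quoted above
display_levels = 2 ** 8    # 8 bits per channel on screen

avg_per_level = raw_steps / display_levels
print(round(avg_per_level, 1))  # ~45.1 raw tones per displayable tone
```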

REALLY?! You faking troll. You are either misleading on purpose, or you are trolling, or you are stupid. There is a clear difference between 14 and 12: the color composition of 14-bit has more depth and clarity. So if you didn’t mislead on purpose with some exaggerated presets just to place 14-bit on top while pretending there is no difference, then you are stupid. WTF

Simply put, if you don’t process the raw photo, any bit depth shows the same results.

Typical bit depths are: the 8-bit of JPEG photos; the 10-bit raw from smartphones like the iPhone 7 or Nokia 1020; the 12-bit raw from cameras like the Nikon 1 series, or from the Sony a7 series during continuous shooting; the common 14-bit of most DSLR and mirrorless cameras; and the 16-bit of nearly all digital medium-format cameras.

People, don’t forget that raw files are not photo files. Raw files always demand processing before being uploaded to the internet or printed. So a higher bit depth always gives greater flexibility with adjustments like highlights, shadows, blacks, and whites.

The difference sometimes is small. However, the difference between photos from professionals and from enthusiasts is small too.

I have RP (retinitis pigmentosa), so I cannot distinguish a lot of shadow detail, and I am quicker to see whether one picture is brighter than another because my light range is limited now, not as large as it was 20 years ago. I can see that 12-bit looks a little brighter than 14-bit on an 8-bit IPS display. For me, it’s like going from 6-bit to 8-bit to 10-bit: 10-bit looks darker to me, 8-bit brighter, 6-bit even brighter, so 12-bit should look a little more washed out than 14-bit. I assume this is visible only to trained eyes on dark scenes; humans have way more night cells than color/light cells, and I guess only kids can see more gradations of colors/shadows.
