I've often heard people say that shooting in RAW offers better dynamic range than shooting in JPEG, but I've always found this hard to believe.

So, the question is: is there any evidence for this?

Here is my (probably wrong) reasoning for why it's hard for me to believe that RAW really has a great advantage in dynamic range.

I know that having 12 bits per channel (instead of 8) makes it possible to store 16 times more shades, so in theory a RAW file can hold more information.

But at the same time, I also know that the final result of even perfect HDR processing is displayed using 8 bits.

So, if a JPEG comes out with some areas burned out (clipped), I wonder why the firmware, which has the RAW information with the higher dynamic range, prefers to blow out those areas and delete detail instead of doing a simple HDR-style tone mapping on the fly to save it.

This is especially puzzling considering that human vision is effectively a natural HDR system (it's really rare for the eye to see the sky as pure white because it's too bright).

4 Answers

Yes, the evidence that this is a fact is that RAW images are used to make the JPEGs. It isn't possible for a JPEG to have a wider range than a RAW image because the RAW image is the actual sensor data from which the JPEG is made.

A JPEG is the processed image produced by the camera taking its best guess at how the image should be rendered. It discards detail at the top and bottom of the dynamic range as irrelevant, then sets black and white to the points it thinks should be the image's black and white points. The end result is that the JPEG may very well LOOK more vibrant out of the box, but it actually contains far less information: the processing has already been applied, detail outside the JPEG's range has been discarded, and the color depth has been reduced significantly (12- or 14-bit color to 8-bit in most cases, which is over an order of magnitude less per channel).

You are confusing the dynamic range of the input with the bit depth of the output. An HDR image covers an expanded input dynamic range by combining the detail from both ends of two (or more) exposures. Properly processed, this can be compressed into 8 bits because you make sure to spread the detail well across those 8 bits. HDR doesn't work for every image, though: if the dynamic range of a scene isn't very wide, HDR doesn't accomplish anything. (Also note that the HDR look can be achieved from a single RAW file on many modern DSLRs thanks to the high dynamic range they support.)

What has to happen to get from 12- or 14-bit RAW to a JPEG is that the important information has to be tone mapped into the 8-bit range. The camera can try to do this itself or, for more accuracy, an experienced photographer can do it by hand. Keeping the bit depth high allows this detail to be stored until it can be tone mapped by an artist rather than a computer. This is not the same as dynamic range, though.

You could have an image with less dynamic range, but more bit depth. Bit depth determines the accuracy and granularity of color, not the range. Range is the measure from darkest to brightest point. It is dependent on the input. Theoretically, a JPEG could be processed to have the same dynamic range as a RAW file, but generally, the camera is going to discard information that it crushes to white or black which results in the JPEG representing a smaller dynamic range than the RAW image.
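To make that tone-mapping step concrete, here is a minimal Python sketch; the `tone_map` function, the gamma value, and the sample numbers are all hypothetical, not any camera's actual pipeline:

```python
import numpy as np

# Minimal sketch of the bit-depth reduction step. A real raw converter
# also does demosaicing, white balance and a camera-specific tone curve;
# this only shows the mapping from 14-bit linear values to 8-bit output.
def tone_map(raw, gamma=1 / 2.2, black=0, white=16383):
    """Map linear raw values (0..16383) to 8-bit with chosen black/white points."""
    x = np.clip((raw.astype(float) - black) / (white - black), 0.0, 1.0)
    return np.round(255 * x ** gamma).astype(np.uint8)

raw = np.array([100, 1000, 8000, 16000])     # hypothetical sensor values
print(tone_map(raw))                         # one possible default rendering
print(tone_map(raw, black=80, white=12000))  # a different choice of endpoints
```

The second call shows the point of keeping the raw data around: moving the black and white points re-spreads the surviving detail across the 8 output bits, a choice that is gone once the camera has baked one rendering into a JPEG.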

But is there any measurement showing that, at least in most cases, RAW files are less clipped than JPEGs? (I hope the word "clipped" is clear.)
– Revious, May 23 '14 at 19:04


@Revious - JPEGs come from the RAW data; it is not physically possible for a RAW file to clip more than a JPEG version of the same image. When you take a photo, the RAW data is taken from the sensor and either stored in a RAW file or processed to form a JPEG. The RAW data is the exact data the JPEG is made from when you save as JPEG.
– AJ Henderson, May 23 '14 at 19:21

@Revious Consider that a raw file can be processed into not just a 3×8-bit JPEG, but equally well into a 3×16-bit TIFF. I don't have a citation at hand, but I recall reading not too long ago that many consumer TFT monitors don't really do more than about 6 bits per channel per pixel of actual color fidelity.
– Michael Kjörling, May 23 '14 at 20:51

@MichaelKjörling - TN panels generally only offer 6-bit color. Also, when you convert a RAW file to a 16-bit TIFF, the data is expanded a bit, so the full 16 bits aren't really utilized; still, more information is preserved than if you go to an 8-bit JPEG. But that's related to the number of colors rather than the absolute dynamic range.
– AJ Henderson, May 23 '14 at 21:10


@HagenvonEitzen - true, I guess I was thinking in terms of powers of 2 rather than powers of 10, but most people think in powers of 10.
– AJ Henderson, May 23 '14 at 22:57

The advantage of RAW is that the extra bits of precision are available during post-processing for the photographer to play with, so it's possible to bring out extra detail by increasing exposure in the shadows and decreasing it in the highlights, either automatically (using the software's "Auto" function) or selectively. Chopping off the extra 4-6 bits by converting to JPEG in the first place loses those details.

So, yes, when the final JPEG is generated from the RAW, it technically has only 8 bits per channel, but it can show enhanced detail that would have clipped had the image been 8-bit in the first place.
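As a rough illustration of why those extra bits matter in post, here is a Python sketch with hypothetical values, showing a 3-stop shadow push applied to 14-bit data versus data already quantized to 8 bits:

```python
import numpy as np

# Sketch (hypothetical values): push shadows by 3 stops (x8). The 14-bit
# values stay distinct after the push; quantized to 8 bits first, they
# had already merged into a single level, so the detail is gone.
shadows_14bit = np.array([40, 44, 48, 52])            # distinct 14-bit shadow values
shadows_8bit = np.round(shadows_14bit / 16383 * 255)  # all four round to 1

print(np.clip(shadows_14bit * 8, 0, 16383))  # [320 352 384 416] -> still distinct
print(np.clip(shadows_8bit * 8, 0, 255))     # [8. 8. 8. 8.]     -> one flat tone
```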

I fully agree with the answer given by AJ Henderson, but it glosses over an important additional drawback of JPEG.
(Understandably, as the question focuses on the color-range aspect of RAW vs. JPEG.)

Nevertheless I think it is important to mention this as I have found over the years that many people don't realize this little fact...

JPEG also loses fine detail!
It is a lossy compression method: it achieves its compression ratio by discarding high-frequency components of the image.
Without going into the really technical details (read the JPEG specification (here) if you want a headache), this effectively means that rapid color changes (like sharp edges) in the picture become slightly blurred.
It can also introduce small artifacts in the fine detail that weren't in the original image.
As a result, the JPEG will always have slightly less detail (in color and resolution) than the original RAW.
Even at the highest JPEG quality settings this loss of precision is present.
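If you want to see the loss for yourself, here is a small Python sketch using Pillow and NumPy; random noise serves as a worst-case high-frequency test image, and the file name and quality setting are arbitrary:

```python
import numpy as np
from PIL import Image

# Sketch: measure what one JPEG round trip discards. Random noise is a
# worst case for the DCT-based compression, since it is almost entirely
# high-frequency detail.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)

Image.fromarray(original).save("roundtrip.jpg", quality=95)
decoded = np.asarray(Image.open("roundtrip.jpg"))

err = np.abs(original.astype(int) - decoded.astype(int))
print("mean error:", err.mean(), "max error:", err.max())  # nonzero even at high quality
```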

While everything you are saying here is accurate and completely relevant to a question about RAW vs JPEG image quality, this question is specifically about the dynamic range supported, and this answer doesn't really address that, which makes it more of a comment. (Which is also why I didn't include it in my answer, though I agree completely that it is highly important to point out when talking about the advantages of RAW.)
– AJ Henderson, May 23 '14 at 23:02

Btw, I'm not the one who downvoted.
– AJ Henderson, May 24 '14 at 4:09

While I agree that this doesn't directly answer the question, I think answers that provide additional side information for the OP to consider are perfectly valid and useful here. When doing that, you should point out that you're not directly answering the question. However, you did, so +1 to counter the unfair downvote.
– Olin Lathrop, May 24 '14 at 15:38

@AJHenderson I considered putting it in a comment but I figured that 1) formatting in a comment is limited making it very unreadable 2) In order to be concise it was a bit too long for a comment 3) The JPEG compression also affects the color reproduction itself (it works on 2x2 pixel groups) so it is tangentially related to color-fidelity too 4) An answer gets more eyeballs :-)
– Tonny, May 24 '14 at 16:07

Yes, the raw data absolutely allows for better end pictures when the scene has high dynamic range.

Theoretical Argument

A 14-bit sensor captures intensity with a resolution of one part in 2^14 = 16384. You are partly right in that largely the same dark-to-light range is captured, and that ultimately you will display or see the picture with much more limited resolution. Let's say the final picture will be 8 bits/color/pixel, which is only 2^8 = 256 levels.

The point you are missing is that it is important to be able to choose the final 256 levels from a continuum and not be stuck with the 256 levels evenly distributed from the darkest to the brightest point in the scene. Very often you want the final output levels to be non-uniformly spaced, sometimes significantly so, when mapped back to the original captured data.

Humans don't perceive light intensity linearly; they perceive it logarithmically. This means that to get what appears to you as a fixed brightness increment, you actually need a fixed multiple of the true light intensity. For example, the 1.1x step from 50 to 55 will look like about the same increment as going from 200 to 220. Conversely, working this backwards, a +1 step from 200 to 201 will be hardly noticeable, but a +1 step from 10 to 11 is quite significant.
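A quick Python sketch of that arithmetic, measuring each step in photographic stops (log base 2 of the intensity ratio):

```python
import math

# Sketch: equal intensity *ratios* read as equal brightness steps, so the
# perceived step size is proportional to the logarithm of the ratio
# (expressed here in photographic stops, i.e. log base 2).
for lo, hi in [(50, 55), (200, 220), (10, 11), (200, 201)]:
    print(f"{lo} -> {hi}: {math.log2(hi / lo):.3f} stops")

# 50->55 and 200->220 are both 1.1x, so the same perceived increment;
# 10->11 is a far larger perceived step than the barely-visible 200->201.
```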

A scene with reasonably high dynamic range can easily contain 1000x ratio in brightness from the dimmest part you care to see detail in to the brightest part. However, common display means don't come anywhere near that. Even a good print might be 50:1, or 80:1 under ideal conditions. The same holds true for computer monitors, especially since most are viewed with significant ambient light, therefore limiting how black black can really be shown.

Therefore, to make a final picture viewable on a 50:1 medium that starts out with 1000:1 contrast, for example, there needs to be serious compression of dynamic range. To make this compression look natural and acceptable to human viewers, it has to be done in the human conceptual intensity space. As I mentioned above, the human conceptual intensity space is the logarithm of physical light intensity.

The point of all this is to explain why linearly spacing the 256 final display levels over the original captured intensity range doesn't work. If you follow the math through, the limited display levels have to be taken from the original data bunched up at the dark end and spread out at the light end. You can't do that without losing information if you start out with only 256 linearly spaced levels.
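Here is a small Python sketch of that math, with illustrative constants (a 1000:1 scene log-mapped to 256 output levels), comparing a 14-bit linear capture against the same data pre-quantized to 256 linear levels:

```python
import numpy as np

# Sketch: log-compress a 1000:1 scene into 256 display levels, starting
# either from 14-bit linear data or from the same data pre-quantized to
# 256 linear levels. Count the distinct output levels in the shadows.
def to_display(linear, in_bits):
    x = np.clip(linear / (2**in_bits - 1), 1e-6, 1.0)
    logged = (np.log10(x) + 3) / 3          # map a 1000:1 range onto 0..1
    return np.round(255 * np.clip(logged, 0, 1)).astype(int)

shadows_14 = np.arange(16, 164)                 # a dark slice of a 14-bit capture
shadows_8 = np.round(shadows_14 / 16383 * 255)  # same slice after linear 8-bit

print(len(np.unique(to_display(shadows_14, 14))))  # dozens of distinct shadow tones
print(len(np.unique(to_display(shadows_8, 8))))    # collapses to just a few levels
```

The 8-bit starting point not only loses shadow tones, the few that survive land far apart on the output scale, which is exactly the banding you see in badly pushed shadows.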

There are other post-processing effects and other reasons for wanting to choose the display levels non-linearly from the scene intensities. All of these require more captured brightness resolution so that you still have distinct values and smooth transitions between the limited number of output levels.

Practical Argument

We just had a question where someone posted a night scene (high dynamic range) and wanted to know where all the detail in the dark areas went. See my answer, which shows a great example of what happens when you try to perform non-linear brightness mapping starting with only the same 256 levels you ultimately want to display the result in. Basically, don't let this happen to you.