High-Dynamic-Range Photography: A Guide

If you've seen a particularly eye-popping, out-of-this-world night photograph of a city skyline, or a particularly apocalyptic cloudscape with cartoonish color saturation making the rounds on blogs lately, there's a good chance it was made with high-dynamic-range (HDR) imaging software. And while these images may look like the work of a pro photographer, or at least a seasoned digital-imaging or special-effects expert, the tools to easily make your own amazing HDR images are widely (and in some cases freely) available.

So what exactly makes up an HDR image? Basically, more information per pixel. When you take a photo with your digital camera, the colors are converted to accommodate the limited palette of your display or a piece of photo paper. The human eye, however, is capable of taking in far more color and light information at any given time. This is why it's necessary to take a photo with the correct exposure settings—what your eye sees as a uniform scene with a balanced brightness and color range needs to be regulated to fit within the more limited range of your camera's sensor, or else the image will appear under- or overexposed (too dark or too light).

HDR provides a way to combine a range of exposures of the same scene into one image, adding significantly to the amount of data held per pixel (most digital images hold 8 bits of information per color channel; an HDR image holds 32). The result is an image with more "dynamic range"—in other words, the brights are brighter, the darks darker, and there's much more variance in between.
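A toy sketch makes the bit-depth point concrete. This hypothetical example (plain Python, not any real imaging library) shows how an 8-bit channel clips scene brightness it can't represent, while a floating-point HDR value simply keeps it:

```python
# Toy illustration: an 8-bit channel can only store 256 levels, so scene
# brightness outside that range is clipped and the detail is lost.

def to_8bit(luminance):
    """Quantize a linear luminance value to one of 256 levels, clipping."""
    return max(0, min(255, round(luminance)))

bright_sky = 4000.0          # far beyond what 8 bits can hold
deep_shadow = 0.4

print(to_8bit(bright_sky))   # 255: "blown out" highlight, detail gone
print(to_8bit(deep_shadow))  # 0: "crushed" shadow, detail gone
# A 32-bit float HDR pixel simply stores 4000.0 and 0.4 unchanged.
```

This is why the extreme ends of a scene can't coexist in one ordinary exposure: both get flattened to the same clipped values.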

For a step-by-step guide to creating your own HDR images, continue reading below:

To get started, you'll need to shoot the same scene with a range of different exposures [above, my subject is the Williamsburg Bridge in New York City]. Scenes with uneven lighting really bring out the best in HDR (in my case, the bright lights of the bridges in the distance and the dark shadows of the cargo crane at left and the sky above). You can use the bracketing function of your digital camera (better point-and-shoots and almost all digital SLRs have it) to fire off three frames every time you squeeze the button—one with the correct exposure, one overexposed, and one underexposed. You want the difference to be as dramatic as possible, so if the three images look too similar, you can use the manual-exposure setting of your camera to take a series of exposures with a tripod (like I did here). The more exposures the better. And if your camera can shoot RAW images (an unprocessed format like a “digital negative” with greater flexibility), use that, as your images will have more detail.

Now, the magic. To combine them, you'll need software capable of doing the job. If you're using Photoshop CS2, you're in luck—HDR capabilities are built in. If not, there are alternatives. A cross-platform application called Photomatix Pro is a specialized HDR processor that costs $100 ($83 if you use the coupon code found here; there's a free bare-bones version just for Windows that I haven’t tested) and does an amazing job; since it only does one thing, it does it very well, offering specialized controls and batch-processing options that Photoshop lacks. There is also a rapidly improving, free, open-source alternative called qtpfsgui (great name, right?)—it doesn't have as many options yet for tweaking your HDR output, but you can't beat the price, and it's great to get started with. Venerable open-source Photoshop-alternative the GIMP has yet to incorporate HDR support.

Using whatever software you settle on, you'll need to combine your batch of variable exposures into a master HDR image. If you used a wobbly tripod (or worse, handheld your shots), you can have the software attempt to align them automatically—if they're not too far off, this usually works fairly well. The resulting image might take a while to generate and look a little weird when it does; this is because your screen isn't capable of displaying HDR images. To get the eye-popping HDR color effect, you'll need to convert the image back down to 8 or 16 bits per channel in a way that preserves the increased detail and color range of your HDR composite while fitting comfortably in the viewable range of your monitor or paper. This process is called tone mapping.
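Under the hood, the merge itself is conceptually simple. The sketch below is a hypothetical, stripped-down version assuming a linear sensor response (real HDR software first recovers the camera's actual response curve): each pixel value is divided by its exposure time, then the frames are averaged with mid-tones weighted most heavily, since near-black and near-white values carry the least information:

```python
# Hypothetical merge of one pixel across bracketed exposures, assuming a
# linear sensor response (real tools calibrate the response curve first).

def weight(v):
    """Triangle weight: trust mid-tones, distrust clipped extremes."""
    return min(v, 255 - v) / 127.5

def merge_pixel(values, exposure_times):
    """Combine one pixel's 8-bit values from several exposures into a
    single floating-point (HDR) radiance estimate."""
    num = den = 0.0
    for v, t in zip(values, exposure_times):
        w = weight(v)
        num += w * (v / t)  # normalize brightness by exposure time
        den += w
    return num / den if den else 0.0

# The same pixel shot at 1/4 s, 1 s, and 4 s:
radiance = merge_pixel([10, 64, 220], [0.25, 1.0, 4.0])
```

A full implementation just repeats this for every pixel; the dedicated tools above bundle well-known variants of this idea (along with alignment and response-curve recovery) behind their one-click interfaces.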

Tone mapping is where the serious bit-crunching comes in, and each of the software tools detailed here has a different way of doing it. Photomatix provides a fairly straightforward dialog of sliders that regulate the brightness, white and black points, and numerous other aspects of the resulting image—you can get some wild effects just by tweaking them and seeing what happens in the live preview. Photoshop gives you four tone-mapping choices, but the hands-down best is "Local Adaptation," which gives you control of the image via the "curves" control. I recently learned how to use curves, which are the basis of almost all digital-image processing, and I'm still not good enough to really explain them. I learned from here, though, and if you use Photoshop, your life will be better for learning as well. Anyway, this gives you great control of the image's color and exposure, and again, simply playing around and observing the live preview can yield some fun results. Qtpfsgui has all kinds of crazy-sounding tone-mapping functions to choose from (Drago logarithmic mapping! Durand fast bilateral filtering!); since almost no one but the mathematicians who invented them has any idea what they mean, trial and error is again your friend.
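For the curious, the simplest global tone-mapping operators are nothing more than a compressive curve applied to every pixel. This sketch uses the well-known Reinhard global operator, L/(1+L), a far simpler cousin of the locally adaptive methods the tools above offer; it squeezes any non-negative HDR luminance into the displayable [0, 1) range:

```python
# Reinhard global tone mapping: display = L / (1 + L).
# Compresses an unbounded HDR luminance into [0, 1) for the screen.

def reinhard(luminance):
    return luminance / (1.0 + luminance)

hdr = [0.05, 0.5, 2.0, 50.0, 4000.0]  # a huge range of scene radiances
ldr = [reinhard(v) for v in hdr]
# Shadows stay distinguishable while extreme highlights are compressed
# toward 1.0 instead of clipping, preserving their relative ordering.
```

Local operators like the ones named above apply a different curve in different regions of the image, which is what produces the dramatic "HDR look."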

After you've found some settings that work (it's amazing the range of output you can get), voila, you've got your first HDR image. As you can see below, the difference between the correctly exposed normal image and the tone-mapped HDR output is marked: richer darks in the water, less blown-out whites in the lights, and more vibrant colors.

So now what? As you might expect, there are countless groups on Flickr dedicated to HDR where you can show off your work, seek feedback, and learn more in the discussion forums. My favorite is the largest (simply called HDR), but there are many others dedicated to users of specific software, people who go for a more realistic look with their HDR imagery, and so on. As you'll soon see, some people really love HDR and some people really hate it, but as with most things dealing with the visual arts, a lot of it comes down to personal aesthetic. No matter what your feelings on the HDR look, though, it's still pretty amazing to see how the process works, and more amazing still that anyone with a camera and a computer can try it out for themselves. Viva la digital revolution! —John Mahoney

Comments

Odi: It's an interesting idea; the only thing that would have to change is the camera's firmware, a one-off cost for manufacturers. In an overcrowded market, shipping a camera with HDR capabilities could be a winning move for a manufacturer. Even a tripod won't be necessary; look at my tutorial http://www.nill.cz/index.php?set=tu1 (one JPEG file is often enough to achieve an HDR look).

I bet that prototypes of various "HDR cameras" (32-bit or more) are on the desks of camera manufacturers all over the world. In 2-3 years we'll probably be able to buy these cameras in a store, and maybe we'll even be able to afford 32-bit monitors and computer screens :-)

A digital camera can't capture an HDR image directly because the sensor can't handle very bright and very dark light at the same time.
Looking at the example with 5 shots: in the picture where you can see detail in the sky, you can see nothing under the bridge, and at the other extreme, where you have detail in the dark bridge, you have none in the sky.
It's not only about the number of bits used to store the data; there's either not enough (or too much) light reaching parts of the sensor, so part of the sensor is saturated, and/or part doesn't receive enough light to 'see' anything.
By taking the picture twice with different exposures, you capture the data in all areas.
Are HDR cameras possible? The range can probably be improved at a price, but even then it will still be possible to find (or set up) scenes with too much dynamic range (e.g., looking down a long candlelit tunnel with a bright sky outside), which could still be captured using the software techniques described here.

ES: I know the difference, and I call it "fake HDR" in my tutorial; I didn't express that clearly here. I was looking at the problem from a marketing point of view: an HDR feature in a camera could be a big success, and it doesn't matter whether real or fake HDR is implemented. The ordinary consumer doesn't know the difference; only simplicity of use and the final look matter.

@BillT: I don't think HDR cameras will be about getting a wider range out of the sensor; more a move toward automating the HDR process within the camera, at least until a sensor that can handle a much wider range becomes available.

>>By taking the pic twice with different exposure you are capturing the data in all areas.

That's what I'm talking about. Why doesn't the camera just take 5 pictures itself, in quick succession, at different exposures, and then do some processing itself to create a 6th, HDR image... (or leave it to software later)

Just out of curiosity, leern, are you interested in participating in the process?

It's very cool that there is now a more streamlined way to do this process, but it has been done for years [since about Photoshop 3 in the early '90s]. Of course, we called it exposure bracketing and image combination back then... originally it was a way to get around the narrow exposure latitude of transparency film. And it took hours [or days] to do, so you only did it on a shot you could sell.

Now it sounds like you can do it very quickly. That surely makes the workflow much better...

I'm sure "in camera" HDR is technically possible. After all, a digital camera is just a specialized computer: it has input, processing, memory, and output. My guess is that currently an imaging processor powerful enough to do in-camera HDR is too expensive to be worth it. Expert-level users are going to do all of their post-production work outside of the camera, and currently nobody from the point-and-shoot crowd is going to pay that much for an HDR-capable camera.

Give technology enough time for normal everyday camera processors to become capable of doing in-camera HDR in a reasonable amount of time, and I'm sure it will be the next must-have feature on the market.

Power Fighter, the chips in modern cameras are plenty powerful enough to turn multiple exposures into HDR images, as long as they don't have to align the photos (read: use a tripod for best results... but that's already the case now). The real problem here is that cameras are built for idiots (the P&S market) and old-school photographers (the high-end market), so it's going to take a while for camera companies to try to sell them features that are obvious to digital-imaging nerds. Happily, people are starting to catch on... see some of Fujifilm's "expanded dynamic range" cameras (even if the interface is dumbed-down and backwards).

One more software option: Corel Paint Shop Pro Photo X2 (list price $99, 30-day trial at www.corel.com). The HDR forum editor at popphoto.com described it as HDR without having to learn a whole new language. Note that what Photoshop calls tone mapping with Local Adaptation is called Clarify here.