3. Digital cameras' light-gathering capabilities are measured in "Physical Stops," but the images produced are defined by "Digital Stops" (outlined in the previous two sections).

4. DynamicRange in a digital image is defined by the total number of gradient levels in the tonal range. These tonal levels are calculated mathematically through the binary number system and expressed as Stops. But Stops only reflect or limit the Bit Depth potential; Stops by themselves do not define the DynamicRange in digital imaging (previous section).

5. HighDynamicRange software allows more data to be implanted into the Tonal Levels of the current digital imaging standard. It does not mathematically or literally expand the DigitalDynamicRange of the current printing standard of an 8 Bit Depth across 5 Digital Stops.

6. Saving and editing image data in a 16 bit format (any RAW file type) allows for increased manipulation of both Tonal Depth and ColorRange. But when reproduced, the image is compressed back to our current industry standard of 8 bits. Even commercial printers that accept files in a 16 bit TIFF format do not actually have hardware capable of printing an image at a 16 bit tonal resolution.

Trivia: Traditional film printed on light-sensitive photo paper could reproduce a smoother tonal range, with much more detail in proportion to the grain of the film. Our DSLR cameras now outperform traditional film, but we will have to upgrade our current digital imaging reproduction standard if we want the full benefit of 21st century digital imaging technology.

7. Our DSLR cameras (including Hasselblad) have evolved to 12, 14 and 16 bit depths. Therefore, we will see below that we are capturing a "PhysicalDynamicRange" of more than 11 Physical Stops. That means we are able to capture, dissect and reassemble this massive amount of data into 12, 14 or 16 digital bits of tonal quality for a "DigitalDynamicRange" of up to 11, 13 or 15 Digital Stops, respectively. (The math is outlined in Sub Section 2: Divvying up the Data.)
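The bit-to-stop arithmetic above can be sketched in a few lines of Python. This is purely an illustration of the article's convention (an n-bit capture yields up to n − 1 Digital Stops); the numbers come from the text, not from any camera API:

```python
# Tonal levels double with each added bit of depth.
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit depth -> {2 ** bits:,} tonal levels")

# Per this article's convention, 12/14/16 bit captures yield up to
# 11/13/15 Digital Stops respectively (one less than the bit depth).
for bits in (12, 14, 16):
    print(f"{bits} bits -> up to {bits - 1} Digital Stops")
```

Running this prints 256 levels for 8 bits, 4,096 for 12 bits, 16,384 for 14 bits and 65,536 for 16 bits, which is where the Stop counts above come from.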

8. See number 5 again (it summarizes the concept perfectly).

_____________________________________________________________

Part 2: HighDynamicRange Software

Before we get into using HighDynamicRange Software, let’s look at where to get it. A Google search will produce a lot of choices. I’ve only used two:

A. "Luminance HDR 2.3.0" is freeware (cut and paste the name into a search). It has no documentation, but is fun to play with (again, it’s free).

B. "Photomatix" has a free trial. Photomatix is highly praised HDR software, and it works very well (used below). The purchase price is $39 for the basic "Essentials" version. It is mostly automated and easy to use. There is also a Pro version for $99.

With both of these programs (A and B), you can do some Tone Mapping with a single image. Tone Mapping is the term for performing the function outlined in number 5 (above). Tone Mapping a "single image" simply takes all available compressed data and applies the detail up the tonal levels for improved tonal definition.

Although you can slightly improve the tonal range of a single image, the point of High Dynamic Range software is to combine the data of multiple images, then reconstruct an image using the concentrated imaging data.

__________________________________________________________________

Part 3: Single Image Tone Mapping

Let’s look at the benefit of Tone Mapping a single image.

In a DSLR digital negative, there is imaging data that is not present in an 8 bit image but still resides in the imaging data file. That is to say, when a DSLR image shot at 12 or 14 bits is eventually compressed to 8 bits, the tonal range is compressed in proportion, leaving imaging data unused. As previously outlined, the lower tonal levels (dark areas in an 8 bit depth) do not have enough tonal range to support much detail.
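As a rough sketch of that compression (a hypothetical straight integer scaling, not any specific converter's algorithm): mapping 12-bit values (0–4095) down to 8 bits (0–255) collapses about sixteen source levels into each output level, so fine shadow gradations are simply discarded.

```python
# Hypothetical illustration: compressing 12-bit tonal values (0-4095)
# to 8 bits (0-255) by simple integer scaling.
def to_8_bit(value_12bit: int) -> int:
    return value_12bit * 255 // 4095

# Sixteen distinct 12-bit shadow tones...
shadow_tones = range(0, 16)
compressed = {to_8_bit(v) for v in shadow_tones}
print(compressed)  # -> {0}: all sixteen dark tones collapse into one level
```

Sixteen genuinely different dark tones survive in the RAW file but become a single 8 bit level on output, which is exactly the "hidden" data that single-image Tone Mapping digs back out.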

To prove the point that a lot of imaging data is still present in an image, but not displayed, we will look at this snapshot (Tone Mapping 1, example below).

Trivia: The brightest areas of an image are always easy to see because of the massive amount of data defining their values. However, if the brightest areas are posterized, the data no longer exists; it has been "burned" out. But when the darker areas are posterized, a lot of the data still exists, compressed.

This is where HighDynamicRange software is valuable even with single exposures. Tone Mapping moves some of the hidden imaging data up the Tonal Levels to create more tonal definition.

In the Tone Mapping 1 example below, the second image is Tone Mapped from the exposure corrected image.

This was shot at a 12 bit tonal depth. It proves how much imaging data is actually available, then lost in a compressed 8 bit format.

Take note of the Histograms; they showcase the added imaging data recovered from HDR rendering.

Please let me know if you have a problem opening any image; hitting the refresh icon at the top left of the screen can also help.

Let’s point out why HighDynamicRange most benefits the tonal definition of the darker regions of an image.

Each Tonal Level has equal importance in Tonal Definition, but they do not share the same potential for defining image resolution.

Our 8 Bit, 256 Tonal Levels are labeled from 0 to 255, zero being the darkest tones represented by the least data in an image.

A 12 Bit Depth has 4,096 Tonal Levels. 12 Bits has 16 times as many Tonal Levels as 8 Bits. Our cameras shoot with 12, 14 and even 16 Bits of Depth. Photoshop can up-scale them all to 16 Bits for editing.

So on our displays, over the Internet, and in printing, there are simply not enough tonal levels to adequately represent the darkest imaging tones.

The brightest areas are also restricted to the inadequate gradient levels, but because they are of a higher light intensity, the limited 8 bit tonal variations are easier to discern. They still posterize easily and would greatly benefit from an upgraded digital imaging standard.

____________________________________________________________

Part 5: HighDynamicRange Implementation

To create a HighDynamicRange image utilizing the data from multiple exposures, we need to shoot our image multiple times, precisely. This entails:

I. Using a tripod (or a monopod if you are very steady).

II. Keeping your Depth of Field constant by using Aperture Priority Mode.

III. Keeping your variable Shutter Speed as fast as possible.

Sub Section A: Bracketing

Bracketing is the term for shooting the same image in succession with a different EV value in every frame (EV is defined in Section B).

I. The tripod is self-explanatory: it keeps the framing exact. But you must take care not to move or shake the camera while pressing the exposure button. If you are shaky, most DSLR cameras have an optional remote control exposure trigger.

II. Set your exposure to Aperture Priority Mode. We need to keep the Depth of Field exactly the same in every exposure. Even with a large Aperture like f-stop 4.0, distant scenery shots will still be in focus. If we are shooting our subject closer, the background will be blurred exactly the same in every image.

III. We are using the Shutter Speed to increment the exposure level. As we outlined in ISO Sensitivity, this is where knowing the highest ISO value that still delivers the quality we need for our final image pays off. The advantage of a faster ISO is to ensure that our slowest Shutter Speed stays above the Inverse of the Reciprocal rule (more below). Note: Even though we are using a tripod, we still consider the Inverse rule to cover any movement like camera jitter at the slowest Shutter Speed. Likewise, we are trying to capture the definition of the brightest areas too, so our top Shutter Speed needs to be as fast as possible to offset the ISO increase.

To use Bracketing on your DSLR, follow your manual to set up bracketing with these parameters:

Decide how many exposures you want to take (most common is 5 to 9 full-Stop exposures). You can set from 3 up to as many as the camera allows. They increment in odd numbers because the middle shot is the EV-balanced reference exposure. Therefore, make sure your EV auto exposure is set to zero (see Section D: Automatic EV (Exposure Value) Compensation).

Again, balance your Aperture and ISO Speed against the slowest Shutter Speed in the Bracket.

Lastly, set your camera to the fastest continuous exposure bursts. In Bracketing, it will automatically stop with the last bracketed frame. So once you hit the exposure button, be still and hold the button until the camera is finished shooting.

I shot this 7-Stop bracketed exposure of the sun setting over the mountains and highway. It’s beneficial for explaining High Dynamic Range along with the Ghosting of moving objects (moving cars in this case).


The middle (4th) exposure is the EV-balanced exposure. The three above and three below each differ by 1 Stop of exposure.

Sub Section B: Combining the Images for HDR output

Now that we have our imaging data (7 exposures), we can combine the different exposures to produce the best HDR image.

Photomatix accommodates no more than 5 images at a time. They can be in any increment of Stops, e.g. 5 images at 1 Stop apart or 5 images at 2 Stops apart. I’ve found that 7 images at 1 Stop apart gives an adequate DynamicRange for HDR image rendering. Likewise, if your ISO allows a setting fast enough (cleanly) for the slowest Shutter Speed to meet the Inverse of the Reciprocal rule, then 9 frames of DynamicRange is better.
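As a quick sketch of the coverage arithmetic in the paragraph above: the total Stop span of a bracket is (frames − 1) × step. This is simple arithmetic, not a Photomatix feature:

```python
# Total Stops spanned by a bracket of `frames` exposures, `step` apart.
def bracket_span(frames: int, step: int) -> int:
    return (frames - 1) * step

print(bracket_span(5, 1))  # 5 frames, 1 Stop apart  -> 4 Stops
print(bracket_span(5, 2))  # 5 frames, 2 Stops apart -> 8 Stops
print(bracket_span(7, 1))  # the 7-frame bracket used here -> 6 Stops
print(bracket_span(9, 1))  # 9 frames, when ISO allows -> 8 Stops
```

So a 9-frame, 1-Stop bracket covers the same 8-Stop span as 5 frames at 2 Stops apart, but with finer sampling between the extremes.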

Let’s combine some different exposures to see what we get. I am going to show two renditions of each exposure: the top image leaves the "Ghosting" (the moving cars, self-explanatory when you see it); the second image has the Ghosting removed. Removing the ghosting also diminishes the image quality, which can be offset by including more images in the rendering.

As Shot #4 is the balanced exposure, it makes sense that the shadows were retrievable, but that the highlights might be burned out.

So by definition, the lower tonal levels retain more unused data. Likewise, we want less posterization in our highlights, so let’s Tone Map the under-exposed Shot #2, as it was captured two Physical Stops darker.

It is all around better than #4. I think we are getting the hang of manipulating the Tonal restrictions of an 8 bit depth.

>>>

Trivia: Sound has always been easier to reproduce than visual images. Radio went mainstream in 1919. Television went mainstream in 1939 with black-and-white. In comparison, the CD was released in 1982. The medium (the CD itself), the CD player, and the interface to existing sound systems were all released at the same time. Existing sound systems could easily reproduce the extremely wide dynamic range that the CD could provide. It was developed as the finest reproduction that could ever be released for digital audio. The audio CD standard, with its "lossless" format, stands today.

Conversely, our digital imaging JPG standard was developed in the late 1980s and finalized in 1992. But this was more like early television. The first Intel Pentium 60 MHz (megahertz) CPU wasn’t released until 1993; the poor developers were still working on i486 processors. Hard drive capacity was just passing the 1 gigabyte mark, and 40 gigabyte hard drives were still a dream. Even the 12 bit digital camera did not show up until the late 1990s.

The Compact Disc could never have been developed to its current standard if the technology for sound reproduction at the time had been a Philco Ford tube-type radio from 1931.

Therefore, even my "middle of the road" AMD quad-core 2.6 gigahertz processor is 43 times faster than the Pentium that had not even been tested at the time the JPG standard was released. When I am HDR rendering, it takes 2 minutes and 45 seconds to produce one HDR image using five 350 megabyte high-resolution TIFF input files. That would take about 2 hours to run one image on an antique Pentium CPU.

Needless to say (but I will), our current digital imaging standard needs to be revised to meet our current technological capabilities. With DSLR technology, we should not have to mesh five images to produce a well-balanced picture shot outside of five physical stops of light.

>>>

This time let’s still use the two extremes of exposure (#1 & #7), but with #4 included for better mid-range definition. Removing ghosting still diminishes the image a little. Note: #1 and #7 are clean/sharp images, but if either were not clean, we would use another image in its place.

Now that we have a working model for HDR rendering, let’s apply it to the full capability of our 7 exposure levels.

We now understand that the underexposed (dark) images add a dramatic contrast enhancement and better definition to the "lightest/brightest" parts of our image (the clouds in this case). Note: The Sun is already posterized; we cannot add definition to already posterized light.

The mid tones (#4) are used to balance and maintain the definition in the most visually important part of the image.

The over-exposed frames are used to accurately Tone Map the definition of the darkest areas into tonal levels that can be easily seen.

This last example does in fact represent the scene as it appeared to me when shooting.

As I think I mentioned, this was a snapshot using a monopod. I have been waiting to correct the slight tilt of the horizon and the light poles. Since we’ve explored cropping in The Digital Darkroom, here is the final image, cropped to be level in Photoshop.