In the paper "Merge to HDR in Photoshop CS2" it is stated that "Producing an HDR image requires taking enough separate exposures so that you place all of the brightness levels that you want in your final image into a range that your camera's sensor can record properly." So I will take 5 to 7 images one stop apart. The paper continues, "Ideally this means putting the darkest values no lower than somewhere in the mid-range of the sensor's sensitivity range."

My question: does this latter statement refer to the darkest of the 7 images, the midpoint of the 7 images, or what? I am trying to develop an image bracketing strategy prior to using Merge to HDR.

What you want to do is create a range of exposures such that the brightest exposure has its shadow values approaching the middle of the exposure range, and the darkest exposure has its highlight values in the mid-to-upper-right of the exposure histogram.

I find that a range of five exposures about 0.7 or 1 stop apart normally covers this for landscape exposures.
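To make the bracketing arithmetic above concrete, here is a minimal Python sketch (the function name and framing are my own, not part of any camera or Photoshop tool) that lists the shutter speeds for a five-frame bracket one stop apart, with aperture and ISO held fixed:

```python
# Hypothetical helper: shutter speeds for an exposure bracket centred on a
# metered base exposure. One stop = a doubling/halving of exposure time.
def bracket_shutter_speeds(base_seconds, frames=5, step_stops=1.0):
    """Return shutter speeds in seconds, from darkest frame to brightest."""
    half = (frames - 1) / 2
    return [base_seconds * 2 ** (step_stops * (i - half)) for i in range(frames)]

# A 1/60 s metered exposure bracketed +/- 2 stops in 1-stop steps:
speeds = bracket_shutter_speeds(1 / 60, frames=5, step_stops=1.0)
# darkest to brightest: 1/240, 1/120, 1/60, 1/30, 1/15
```

The same function with `step_stops=0.7` would give the tighter 0.7-stop spacing mentioned above.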

Merge to HDR is a useful tool but not a panacea yet. I often find it easier and faster to combine two or three bracketed exposures manually, particularly if there is any movement in the image (Merge to HDR can sometimes produce too much softness).

raymondh: "Or even just making several conversions of the same image and combining?"

Quite coincidentally, a similar question was discussed yesterday on the support board of Picture Window Pro, and the creator of PWP, Jonathan Sachs, expressed the following opinion: "There is no point in the commonly-suggested technique of shooting RAW, processing twice, and then combining images. If you simply convert RAW to 16 bits you will have extracted all the information and hence dynamic range present in the image, and all you need to do is typically apply a contrast-increasing brightness curve."

If he were right, then applying a curve before RAW conversion would be exactly the same as applying it after. But it has been shown that applying a curve to convert the linear information captured by the sensor into a logarithmic space is an entropic, non-reversible process in which some information is potentially lost.
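The entropy argument can be illustrated numerically. This is a hypothetical Python sketch (a plain gamma curve stands in for a converter's tone curve, and the bit depths are chosen for illustration): it counts how many distinct 8-bit shadow levels survive when the curve is applied before quantisation (what a RAW converter can do) versus after the data has already been quantised.

```python
# Darkest ~1% of a 12-bit linear sensor range.
shadows = [i / 4095 for i in range(41)]
gamma = lambda x: x ** (1 / 2.2)   # stand-in for a log-like tone curve

# Curve applied to the linear data first, then quantised to 8 bits:
curve_first = {round(gamma(x) * 255) for x in shadows}

# Quantised to 8 bits first, then the curve applied:
quantise_first = {round(gamma(round(x * 255) / 255) * 255) for x in shadows}

# The early quantisation collapses dozens of shadow levels down to a
# handful; no later curve can bring the lost distinctions back.
```

The exact counts depend on the curve and bit depths assumed here, but the direction of the result is the point: quantising before the curve discards shadow information irreversibly.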

By doing two conversions with different curves, it is possible to preserve, between the two converted images, the most useful local information; these can then be overlaid into a single image containing more information than a single conversion could hold.

In other words, if local curve application were possible in the RAW converter, then there would indeed be no advantage in overlaying images coming from two conversions, but this is currently not possible.
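For concreteness, here is a hypothetical sketch of the overlay step described above (this is my own simplification, not PWP's or Photoshop's actual method): a dark conversion that holds the highlights and a bright conversion that opens the shadows are blended per pixel with a luminosity mask built from the dark conversion.

```python
# Hypothetical blend of two conversions of the same RAW frame.
# Inputs are luminance values in 0..1; in practice these would be
# full image arrays from the two converter passes.
def blend_conversions(dark, bright):
    """Bright areas take the dark conversion; dark areas take the bright one."""
    out = []
    for d, b in zip(dark, bright):
        mask = d                       # luminosity mask from the dark frame
        out.append(mask * d + (1 - mask) * b)
    return out

# A bright sky pixel (0.9) mostly keeps the highlight-protecting conversion;
# a deep shadow pixel (0.1) mostly takes the lifted conversion (0.4).
blended = blend_conversions([0.9, 0.1], [1.0, 0.4])
```

In Photoshop the same idea is usually done with the dark layer's luminosity loaded as a layer mask rather than arithmetic, but the effect is equivalent.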

Funnily enough, I tried a test of this a month or so ago. I use Photoshop CS, not CS2, so don't have access to the HDR function. So, for a high-contrast shot (dark faces against a bright background; I should have used fill flash but didn't have one with me) I exposed the image twice in RawShooter Professional, once for the shadows and once for the highlights. I then merged the two exposures in PS, using the technique Michael describes on this site. The image was acceptable but not great.

I then wanted to see if this was better than simply using the Shadow/Highlight tool. I exposed the image again in RawShooter to get a single "average" exposure, and adjusted the shadows and highlights in PS CS with the Shadow/Highlight tool. On comparing the two images (blended vs. S/H) at 100% I could not detect any difference whatsoever, nor in an 11x14 print.

Quote theory if you like, but if on screen at 100% and in a reasonably large print I cannot tell the difference then I don't think it is worth the effort of blending two exposures. I am interested in the HDR function of CS2 however.