An admittedly random look at hacking and trying to easily get electronic devices to improve the author's quality of life rather than ruin it.
The author is particularly interested in home theater, photography, and scuba diving. He's a software developer by trade but, strangely, prefers not to spend his spare time hacking more than necessary. His home page is at http://www.evaandering.com.

Wednesday, April 6, 2011

My Photo Workflow III: What does it mean to "Photoshop"?

In my second post, I described my "post-processing," or how I turn a RAW file from my camera into a final image. The reason I use RAW mode on my cameras is that all these adjustments can be made from the original data, not the reduced data in the normal JPG files. And if I want to change something a few weeks or years later I can easily go back into Lightroom, tweak the one thing I want to change and regenerate the JPG.

In this post, I'll give some before and after examples for the adjustments I make. This all comes back to the question of "Do you use Photoshop?" As I already implied in the previous post, "Yes, but that can mean a lot of things." People often assume "Photoshop" means "cheating," "altering reality," or "making things up." But image processing is just another tool and you can put it to any number of uses.

One of the most basic adjustments you can make to a photo is to change the white balance, or the color of the light. For instance, light on a cloudy day is bluer than light on a sunny day. In the days of film your photo lab would compensate for this in printing, but if you shot slides you were stuck; manufacturers actually made different films for different lighting (outdoor, indoor, fluorescent). Fortunately, with digital you can adjust this either when you take the photo or later. Since I use RAW it doesn't matter at all when I change it, so I tend to leave it on Auto and deal with it later. If you shoot in JPG mode, you'll get slightly better results if you set your white balance to sunny, shady, or whatever fits the scene.

This concept is really important for underwater photography without a flash, since water quickly absorbs nearly all the red light. Take a look at the photo below. On the left is what the original might look like; on the right is what it looks like after I've corrected it. You can see that the reds, which were almost lost, have been restored. When you see a scene like this underwater, your eye, unlike the camera, still perceives the red. (I picked an extreme example because the difference between sunny, cloudy, and shady light is much less pronounced.)

White balance illustration (Stoplight Parrotfish, Cozumel, Mexico)
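For the curious, the core idea behind an automatic white balance correction can be sketched in a few lines of Python. This is the classic "gray world" heuristic, not what Lightroom actually does internally (Adobe doesn't publish that): assume the scene should average out to neutral gray, and scale each channel so it does.

```python
def gray_world_balance(pixels):
    """Rebalance a list of (r, g, b) tuples (0-255) so the three channel
    averages come out equal, removing an overall color cast."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3                           # the neutral target level
    gains = [gray / a if a else 1.0 for a in avg]
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in pixels]

# A patch with a heavy blue cast, like unflashed underwater light:
cast = [(40, 100, 160), (60, 120, 180), (20, 80, 140)]
balanced = gray_world_balance(cast)   # reds restored, blues pulled back
```

On real underwater photos the cast is so extreme that a blind average like this can overshoot, which is one reason doing it by eye (or with a known gray reference) works better.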

Using the same photo as an example, let me illustrate another correction I make to my underwater photos. When you shoot through water, you tend to get haze due to light scattering within the water. This is basically like haze in the air, but because water is about 800 times as dense, you get haze between you and a subject a few meters away (rather than a few kilometers away in air). You can take some of this haze out by raising the black level, which darkens the darkest parts of the photograph. I've done this on the right-hand side below. (Look at the water to see the effect most clearly.)

Removing haze with black levels (Cozumel, Mexico)
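In code terms, raising the black level is just a levels remap: pick a black point, clamp everything at or below it to zero, and stretch what remains back over the full range. A minimal sketch in Python, operating on a flat list of 0-255 channel values (Lightroom's actual control is of course smoother than this hard clamp):

```python
def raise_black_level(values, black_point):
    """Clamp everything at or below black_point to 0 and stretch the
    remaining range back out to 0-255, deepening the shadows."""
    scale = 255 / (255 - black_point)
    return [max(0, round((v - black_point) * scale)) for v in values]

# Hazy water lifts the darkest tones up to around 40; push them back down:
crushed = raise_black_level([20, 40, 60, 128, 255], 40)
```

The haze brightens the shadows uniformly, which is why simply forcing them back to black removes much of its milky look.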

This next photo is one of my favorites. It shows the effect of increasing the "Clarity" and "Vibrance" sliders in Lightroom to get that warm, saturated film look. (I didn't change the white balance between the two.) The dividing line down the middle is a little hard to see, but look at the photograph as a whole and you can see how much richer this simple adjustment makes the right half.

Clarity and vibrance illustration (Lioness, Nakuru NP, Kenya)
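"Clarity" boosts local midtone contrast, which needs neighborhood information, but "Vibrance" on its own is easy to sketch. Unlike a plain saturation boost, it pushes muted colors harder than already-vivid ones, so nothing clips to neon. A rough per-pixel version in Python; the exact shape of the curve is my own guess, since Adobe doesn't publish the real one:

```python
import colorsys

def add_vibrance(rgb, amount=0.5):
    """Boost saturation, scaled by how unsaturated the pixel already is,
    so vivid colors barely move while muted ones come alive."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    s = min(1.0, s * (1 + amount * (1 - s)))   # (1 - s): muted gains more
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

muted = add_vibrance((150, 130, 100))   # dull tan gains noticeable punch
vivid = add_vibrance((255, 40, 10))     # near-saturated red barely changes
```

Because the boost scales with (1 - s), skin tones and other already-colorful areas are mostly left alone, which is exactly why photographers reach for Vibrance instead of Saturation.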

Here's a simple example of using the targeted adjustment tool I mentioned in my second post. On the left is the original version of the photo. On the right I've "grabbed" the blue sky and made it darker (decreased the luminance) to look more like it appears to the eye. (Your eyes are much better than a camera at seeing a deep blue sky against a very bright snowy mountain.) You'll notice the line between the two photos is very distinct in the sky and all but invisible in the white parts. That's because I've only adjusted the blue parts of the photo.

Targeted adjustment tool illustration (Chamonix, Switzerland)
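The "targeted" part is what keeps the snow untouched: the adjustment is gated on hue, so only pixels in the blue range are modified at all. A crude Python imitation of that behavior; the hue window and saturation threshold here are my own guesses, and Lightroom's masking falls off far more smoothly:

```python
import colorsys

def darken_blues(pixels, factor=0.7, hue_lo=0.5, hue_hi=0.75):
    """Darken only pixels whose hue lands in the blue band; neutral and
    non-blue pixels pass through unchanged."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if hue_lo <= h <= hue_hi and s > 0.1:   # skip near-gray pixels
            v *= factor                          # pull the luminance down
        out.append(tuple(round(c * 255)
                         for c in colorsys.hsv_to_rgb(h, s, v)))
    return out

sky_and_snow = [(100, 150, 220), (250, 250, 255)]
result = darken_blues(sky_and_snow)   # sky darkens, snow stays white
```

The near-white snow pixel has almost no saturation, so it fails the gate and comes through bit-for-bit, which is why the dividing line disappears in the bright areas of the photo above.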

This is a much more extreme version of the same principle, and one of a handful of photos where I've really altered something. The air in Cairo is horribly polluted, and a clear day with a blue sky over the Giza pyramids and Sphinx is incredibly rare; in fact, many postcards use photos taken a decade or more ago. So short of living in Egypt for years, there is no way I could have gotten a photo of the Sphinx backed by a beautiful blue sky. So I cheated. :-) I adjusted the white balance a little to get some blue into the sky, then used Lightroom's targeted adjustment tool to adjust the darkness and hue of the sky until it looked somewhat natural.

Targeted adjustment tool illustration (Sphinx of Giza, Cairo, Egypt)

Here's a good example of noise reduction. This is a 1:1 blow-up of part of a picture, shot on my small underwater camera (a Canon G10) at ISO 800, a setting I would never normally use, but I had so little light to work with that I had no choice. On the left is the image as it comes out of the camera; on the right, the same image after a moderate amount of noise reduction and sharpening in Lightroom. The noise reduction in Lightroom 3 is much improved over previous versions, so for photos like this it was almost like buying a new camera. The result still doesn't look great, but it's a major improvement, and when it's not blown up like this the photo actually looks decent.

Noise reduction illustration (Hilma Hooker shipwreck, Bonaire)
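The simplest possible noise reducer is a median filter: replace each pixel with the median of its neighborhood, which wipes out isolated speckles while preserving edges better than a plain blur. It is nothing like Lightroom's far more sophisticated, edge-aware algorithm, but it shows the principle. A toy grayscale version in Python:

```python
from statistics import median

def median_filter3(img):
    """Apply a 3x3 median filter to a 2D grid of grayscale values.
    Border pixels are left untouched for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# A flat dark patch with one "hot" sensor pixel in the middle:
noisy = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
cleaned = median_filter3(noisy)   # the speckle vanishes
```

The hot pixel is outvoted by its eight flat neighbors, so it disappears completely, while a genuine edge (where half the window is dark and half bright) survives mostly intact.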

So, those are the majority of the image manipulations I do in Photoshop Lightroom. I left out a few, like small (one stop or less) exposure corrections, removing dust and back-scatter, and fixing red-eye when using a flash. The point, for me, is that while a tool like this can make a photograph somewhat better, it can't make a photograph. It is still the technique, thought process, and, yes, luck that go into making the photo what it is. Aside from the Sphinx picture, every photo I've ever taken fits that category. And even there, I didn't paste in a sky with clouds. It is also much easier to spend a few extra seconds capturing the best photo you can than to "fix" things in Photoshop afterward, so I try not to be lazy in that regard.

So that's what I use image processing software for. What I don't use Photoshop or Lightroom for, and what I gather is many people's perception of the program, is things like this. I think I may have once removed a powerline from an open sky. I don't have anything against extensive "Photoshop"ping per se, but in my opinion, when it's used to alter reality and it's not obvious that reality has been altered, it should be disclosed. There are many ways to use these tools, and all of them can be valid; the important thing is not to deceive the viewer. But remember, photography is an art, and there is now a continuum of methods and styles from raw photojournalism at one extreme to things like Sin City and Avatar at the other.

If you are wondering how I create these photos with two halves, I use a couple of command-line programs from ImageMagick.

To split a photo into two halves (writing the pieces into the test2 directory):

mogrify -path test2 -format jpg -crop 50%x100% +repage *

And to stitch two pieces back into one image:

montage 469d2b37-2-0.jpg 469d2b37-1.jpg -mode Concatenate -tile x1 lion.jpg
