I know it's best to do as much post-processing as possible before converting from RAW, but in cases where that's not possible: what is the optimal order of post-processing steps (noise removal, dust-spot removal, color correction, brightness/contrast correction, straightening, distortion/aberration removal, selective edits, sharpening, resizing, color-space and bit-depth changes, etc.)?

When I say optimal order I mean the order that will result in the least banding, clipping, halos and other digital artefacts. I'd also like to understand the reasons behind a particular ordering. Is it different for prints and web output?

4 Answers

Several of the operations you're describing manipulate the image data such that information is lost or transformed. For the most part I don't think this matters for traditional photography (i.e., prints and the like), but it definitely matters when each pixel is considered a measurement of the number of photons.

What I think about when I do operations is the propagation of error. Error can exist at the single pixel level, the spatial level, and the color level.

Noise is single-pixel sensor error during the detection process, introduced by errant photons, quantum effects (translating a photon into an electron for counting is a probabilistic event at the quantum level), or analog-to-digital conversion. If subsequent operations will stretch contrast (histogram equalization) or emphasize darker regions (fill light), then you want to reduce noise before doing those.

For a fully reduced example of what I mean, take a dark-frame image (a picture taken with the lens cap on). The result is pure noise. You can contrast-enhance it, or whatever you want, but it's still noise. A perfect noise-reduction algorithm should remove all of it, so that no contrast is left for later steps to enhance.
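A quick numerical sketch of that amplification (NumPy, with made-up numbers for the dark level and noise sigma): a shadow-lifting tone curve multiplies shadow noise by its local slope, which is steep near black.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dark scanline: true level 10 (out of 255) plus sensor noise, sigma = 2.
true_level = 10.0
noisy = true_level + rng.normal(0.0, 2.0, size=100_000)

def shadow_lift(x):
    """A 'fill light'-style tone curve: steep near black, so it brightens shadows."""
    return 255.0 * np.clip(x / 255.0, 0.0, 1.0) ** 0.4

sigma_before = noisy.std()
sigma_after = shadow_lift(noisy).std()
print(sigma_before, sigma_after)  # the lift scales shadow noise by roughly its local slope
```

Denoising before the lift works on a sigma of about 2; denoising after has to contend with nearly three times as much, plus whatever the curve did to the noise distribution.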

Spatial error can be introduced in a number of ways. When you rotate an image, you introduce spatial errors. If you think of there being a 'true' image (in the platonic-ideal sense), the camera records a sampled digital version of it. Even with film, the grains/crystals are of finite size, so some sampling of the 'true' image happens. When you rotate a digital image, you introduce aliasing effects: the very sharpest edges are dulled slightly (unless you rotate by a multiple of 90 degrees, in which case the grid sampling still holds). To see what I mean, take an image and rotate it in 1-degree increments. The sharp edges will now be (slightly) blurred because of the resampling needed for small rotations.
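The dulling can be isolated with a sub-pixel shift, which is what a rotation does locally (NumPy sketch; linear interpolation, the simplest resampling filter): a half-pixel shift out and back smears a one-pixel edge over three pixels.

```python
import numpy as np

# A perfectly sharp 1-D step edge, sampled on a pixel grid.
edge = np.where(np.arange(100) < 50, 0.0, 255.0)

def shift_right(x, frac):
    """Shift right by `frac` of a pixel via linear interpolation --
    locally, a small rotation performs exactly this kind of resampling."""
    return (1.0 - frac) * x + frac * np.roll(x, 1)

def shift_left(x, frac):
    return (1.0 - frac) * x + frac * np.roll(x, -1)

# Shift half a pixel one way, then half a pixel back.
round_trip = shift_left(shift_right(edge, 0.5), 0.5)

sharp_before = np.abs(np.diff(edge)).max()        # full 255 step in one pixel
sharp_after = np.abs(np.diff(round_trip)).max()   # reduced: the edge got smeared
print(sharp_before, sharp_after)
```

Higher-order resampling filters (bicubic, Lanczos) lose less, but any non-grid-aligned resampling loses something.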

Bayer sampling may just be a spatial sampling error we have to live with. It's one of the big draws (perhaps the only real draw) of the Foveon sensor: each pixel measures the full color at that location, rather than interpolating the other colors from neighboring pixels. I have a DP2, and I must say, the colors are pretty stunning compared to my D300. The usability, not so much.

Compression artifacts are another example of spatial error. Compress an image multiple times (open a JPEG, save it to a different location, close, reopen, rinse, repeat) and you'll see what I mean, especially at quality 75.
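A toy model of the loss (NumPy only; an 8x8 block DCT with coarse coefficient quantization standing in for real JPEG, which adds chroma subsampling and integer rounding on top). The quantized coefficients cannot be un-rounded, so the discarded detail is gone for good:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, the transform at the heart of JPEG."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

D = dct_matrix()

def lossy_roundtrip(block, q=16.0):
    """Toy 'save as JPEG': transform, coarsely quantize the coefficients, invert."""
    coeffs = D @ block @ D.T
    coeffs = np.round(coeffs / q) * q   # this rounding is where information dies
    return D.T @ coeffs @ D

rng = np.random.default_rng(1)
block = rng.uniform(0.0, 255.0, (8, 8))

err_once = np.abs(lossy_roundtrip(block) - block).mean()
print(err_once)  # nonzero: a single save already discarded detail for good
```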

Color space errors are introduced when you move from one color space to another. Take a PNG (lossless), convert it from one color space to another, save it, then convert back to the original color space: you'll see subtle differences where colors in one space didn't map exactly to the other.
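Here's a sketch of that mismatch (NumPy, using the BT.601 RGB/YCbCr matrices and rounding to whole 8-bit values at each end, the way a file would store them; clipping omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
rgb = rng.integers(0, 256, (10_000, 3)).astype(np.float64)

# BT.601 RGB -> YCbCr, rounded to 8-bit integer storage.
to_ycc = np.array([[ 0.299,     0.587,     0.114   ],
                   [-0.168736, -0.331264,  0.5     ],
                   [ 0.5,      -0.418688, -0.081312]])
ycc = np.round(rgb @ to_ycc.T + [0, 128, 128])

# ...and back to RGB, rounded again.
back = np.round((ycc - [0, 128, 128]) @ np.linalg.inv(to_ycc).T)

changed = np.mean(np.any(back != rgb, axis=1))
print(changed)  # fraction of pixels that no longer match their originals
```

Converting between gamuts of different sizes adds out-of-gamut clipping on top of this rounding loss.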

I don't count photons, but every once in a while I discover that some step down the line renders my image unusable. And this is not because of some extreme editing, but rather extreme shooting conditions (just a recent example: 2.bp.blogspot.com/_-yoT3Wnz6VY/TGBx0Ju3T1I/AAAAAAAAJPY/…).
–
Karel Aug 10 '10 at 18:05

What don't you like about that image? It seems that you have a few particular images in mind, so maybe looking at fixing individual images might be the way to go to get a sense of how to make an overall workflow.
–
mmr Aug 10 '10 at 18:28

Sadly, I don't have any good examples as I've always redone them and never saved the problematic versions. It's just something that has sat in my head for a long time now. From your answer I understand that it's best to do noise reduction as early as possible because other edits (like adding contrast) are likely to make the noise even more distinguishable.
–
Karel Aug 12 '10 at 13:11

I think the propagation of error is the most important takeaway for me here. One should start by getting rid of errors (like noise) whenever possible and then do all the other steps in order of "least error introduced".
–
Karel Aug 12 '10 at 13:31

(This is more of a comment than an answer.)
Order makes a difference irrespective of whether you're doing "non-destructive" editing or not.

Photoshop is just as "non-destructive" as any other editor, depending on how you use it: you're not changing the original raw file.

The main point is that some modifications are easier to make before you switch from the linear values captured by the sensor to the gamma-encoded, roughly log-response values your eye expects. That's why so much processing has moved into the raw converter over the last few years: it's better done before the data is mapped to the eye's log-like response.

The raw converter is the best place for most "development" changes because it's before the gamma correction is applied. Try adjusting colour balance before and after raw conversion to get a feel for the difference. Of course I've no idea what order the raw converter does spot-removal and noise reduction in (although I could guess), but it's not particularly relevant: it's one step in processing.
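A minimal sketch of the difference (NumPy, assuming a simplified pure 2.2-gamma encode/decode rather than the exact sRGB curve): adding a stop of exposure means doubling the linear values; the same multiply applied to gamma-encoded values is a much harsher, wrong-looking adjustment.

```python
import numpy as np

def to_linear(v):
    """Decode an 8-bit gamma-encoded value to linear light (pure 2.2 gamma)."""
    return (v / 255.0) ** 2.2

def to_gamma(lin):
    """Re-encode linear light back to the 8-bit gamma scale."""
    return 255.0 * lin ** (1.0 / 2.2)

pixel = 100.0  # an 8-bit gamma-encoded value

# +1 stop done correctly: double the *linear* light, then re-encode.
correct = to_gamma(np.clip(2.0 * to_linear(pixel), 0.0, 1.0))

# +1 stop done naively on the gamma-encoded value itself.
naive = np.clip(2.0 * pixel, 0.0, 255.0)

print(correct, naive)  # the naive multiply overshoots badly
```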

Back in the day, people worried about doing most work at the maximum bit depth and then converting down for output. There's nothing wrong with that principle, but in practice you should be able to do everything you need in the raw converter, so it's a moot point.
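The bit-depth principle is easy to demonstrate anyway (NumPy sketch; two tone curves that exactly cancel each other): carried in floating point, the round trip is lossless, but forcing the intermediate result back to 8-bit integers merges levels that can never be separated again.

```python
import numpy as np

grad = np.arange(256, dtype=np.float64)  # every 8-bit level, as a smooth gradient

def lift(v):
    return 255.0 * (v / 255.0) ** 0.5    # brighten

def lower(v):
    return 255.0 * (v / 255.0) ** 2.0    # exact inverse of lift

# High-bit-depth pipeline: the two edits cancel; all 256 levels survive.
float_out = np.round(lower(lift(grad)))

# 8-bit pipeline: rounding to integers after the first edit destroys levels.
eight_out = np.round(lower(np.round(lift(grad))))

print(np.unique(float_out).size, np.unique(eight_out).size)
```

The missing levels show up in a real image as banding in smooth gradients.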

You do need to resize and then sharpen at the final size: a point missed by maybe 95% of the people who display images on the web.

It is different for prints and web output. For web output you need to know how your monitor relates to others: is it sharp or not, and are the colours correct? Once you know that, you'll know how much to sharpen. You'll usually find that prints come out much softer, so you'll want to over-sharpen on screen so that the prints are spot on. How much to over-sharpen you'll have to find by trial and error, as printers vary. Because you're editing non-destructively, you can sharpen for specific output devices without worrying about your originals.
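Here's a sketch of why the order matters (NumPy, 1-D for clarity; a 2x downsample by pixel averaging and a classic unsharp mask, both simplified stand-ins for what an editor does): sharpen first and the halo gets averaged away by the resize, leaving a weaker edge at the final size.

```python
import numpy as np

edge = np.where(np.arange(100) < 50, 0.0, 255.0)  # a sharp step edge

def blur(x):
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def unsharp(x, amount=1.0):
    """Classic unsharp mask: add back the difference from a blurred copy."""
    return x + amount * (x - blur(x))

def halve(x):
    """Downsample 2x by averaging neighbouring pixel pairs."""
    return x.reshape(-1, 2).mean(axis=1)

resize_then_sharpen = unsharp(halve(edge))
sharpen_then_resize = halve(unsharp(edge))

# Edge 'acutance' (largest pixel-to-pixel step) in each result:
a = np.abs(np.diff(resize_then_sharpen)).max()
b = np.abs(np.diff(sharpen_then_resize)).max()
print(a, b)  # sharpening after the resize keeps its full strength at the final size
```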

I think that in practice there are very, very few operations where the order makes a particular difference. Some orderings may change a smaller amount of data in total, but concern about destructive editing is largely overblown. I have gone back and re-done photos only rarely, for a few particular favorites; if I've made what I later perceive to be a mistake, I prefer to make adjustments going forward.

I think rather than thinking in terms of order, it's more helpful to think in terms of interacting groups of operations.

1. (If relying on RAW conversion first, like Photoshop does) "close enough" color/tonality and white balance adjustments, applied as a batch. Minor changes are fine to make later. A big tip here is to use a manual WB setting when it's feasible, as that makes it much easier to batch.

2. Keep/discard, crop & straighten, dust, distortion, white balance. These are the basic things I do to get the image I'm working with. It doesn't matter whether they're 'destructive' or not; I'm never going to re-do them.

3. Tonality: color, brightness, etc. There's lots of perceptual feedback in these steps, so no particular order for me.

4. Export. I usually work from presets depending on the destination:
   - bit depth, color space
   - (rarely) re-adjust tonality
   - resizing
   - sharpening must be last

My archival copy is usually made after step 3, but occasionally after step 2 if step 3 seems particularly experimental or extreme.

I don't mean Lightroom or ACR, but editing in Photoshop for example.
–
Karel Aug 10 '10 at 14:21


Photoshop is a (mostly) destructive editor, so the second section applies; however, some types of edits can be done in a non-destructive way (usually through the use of layers).
–
chills42♦ Aug 10 '10 at 14:25


In Photoshop, try to use **adjustment layers** and **layer masks** when you can. These allow you to do common adjustments in a non-destructive way. I've also recently learned that some quality sharpening techniques (involving copied layers, slight blurring, inversions, and opacity) can be done with multiple layers and layer blending, allowing you to sharpen your image in a non-destructive way. Photoshop is a powerful tool, and with a little care, you can use it effectively as a non-destructive editor.
–
jrista♦ Aug 10 '10 at 17:42


Note, however, that Photoshop Smart Objects allow quite a bit more non-destructive editing, and will prevent you from accidentally making destructive edits (you need to convert the Smart Object back to a normal layer first).
–
Jerry Coffin Aug 10 '10 at 18:51