
Saturday, 15 November 2008

Diffraction In Perspective

Last column I introduced you to diffraction. This time, I'm going to explain why you shouldn't pay it so much attention.

How do the component blurs in an optical system combine? Usually, the following equation is a good approximation to the total amount of blur you'll get:

TotalBlur^2 = Blur1^2 + Blur2^2 + Blur3^2...

In English, the total blur squared is the sum of the squares of the individual blurs.

Just to make sure you understand this equation, imagine you have a camera whose film and lens each resolve 100 line pairs per millimeter (a blur circle of 10 µ apiece). Then the total blur will be 14 µ and the final resolution will be 70 lp/mm.

Up the film resolution to 150 lp/mm, and the final resolution will be about 83 lp/mm. If your film resolves 200 lp/mm, then you're near 90 lp/mm. That's close enough to the resolution limit of the lens that further improvements in film aren't going to produce a visible gain.
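For readers who like to check the arithmetic themselves, here's a small sketch in Python (the function names are mine, not anything standard) that combines resolutions in quadrature, using the rough conversion that a resolution of R lp/mm corresponds to a blur circle of 1000/R µ:

```python
import math

def blur_um(resolution_lpmm):
    # Convert a resolution in line pairs per millimeter to an
    # equivalent blur-circle diameter in micrometers: 1000 / (lp/mm).
    return 1000.0 / resolution_lpmm

def combined_resolution(*resolutions_lpmm):
    # Blurs add roughly in quadrature:
    # TotalBlur^2 = Blur1^2 + Blur2^2 + Blur3^2 ...
    total_blur = math.sqrt(sum(blur_um(r) ** 2 for r in resolutions_lpmm))
    return 1000.0 / total_blur

print(round(combined_resolution(100, 100)))  # 100 lp/mm film + 100 lp/mm lens -> ~71 lp/mm
print(round(combined_resolution(150, 100)))  # better film -> ~83 lp/mm
print(round(combined_resolution(200, 100)))  # better still -> ~89 lp/mm
```

The numbers reproduce the worked examples above: 70-odd, 83, and about 90 lp/mm.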

Rule of thumb: if your worst blur isn't twice as bad as all the other blurs, then improving any of the blurs will improve the image, although (obviously) improving the weakest link gets you the biggest gain.

With film cameras, you've got five major sources of blur to worry about: film resolution, vibration, focus error, lens aberrations, and diffraction. Until one of them is twice as bad as all the others, improving any of them helps.
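The rule of thumb is easy to check numerically. A sketch, again assuming simple quadrature addition (the 20 µ and 10 µ figures are purely illustrative):

```python
import math

def total_blur(blurs_um):
    # Blurs combine roughly in quadrature.
    return math.sqrt(sum(b ** 2 for b in blurs_um))

# One blur (20 um) exactly twice as bad as each of the others (10 um):
base      = total_blur([20, 10, 10, 10])  # ~26.5 um
fix_minor = total_blur([20, 5, 10, 10])   # halve a lesser blur: ~25.0 um, small gain
fix_worst = total_blur([10, 10, 10, 10])  # halve the worst blur: 20.0 um, big gain
print(round(base, 1), round(fix_minor, 1), round(fix_worst, 1))
```

Improving a lesser blur still buys you something, but attacking the worst one buys you several times more, just as the rule says.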

Focus error is the biggie that photographers don't think about. Cameras usually have SERIOUS focus errors, if you're trying to work on the scale where diffraction actually matters. The error has a variety of sources, from a mismatch between the distance to the focusing screen/autofocus sensor and the film/sensor plane to the finite step size in the encoders and servomotors in electrically-focusing cameras. It's a bigger problem than you would imagine. In an otherwise-good system, it is often the biggest source of image blur.

Film resolution is easy to characterize; digital sensor resolution is almost impossible! For a start, there are all the reasons mentioned in the article by Ruben et al. Add in several more:

• Even a 2 x 2 pixel array is a bare minimum. Does fitting the Airy disk into a 2 x 2 square provide high-quality image data? Nope. You're going to see improvements in the quality of the fine detail up to at least a 4 x 4 pixel array.

• Sensor pixels don't even operate entirely as discrete elements. There is charge leakage (analogous to halation in photographic film) between adjacent pixels. This degrades sensitivity, sharpness, and color fidelity, and increases noise, beyond what "theory" would tell you.

• Finally, there is the image processing that produces a viewable image. Some of you may have noticed how peculiar it is that the number of lines of resolution in many digital cameras is about the same horizontally, vertically, and diagonally! That's impossible if you're talking about a physical blur circle. This is a synthesized image. The resolution you get is not a simple mapping of pixel data.

Put it all together, and claims that pixels have become too small because of diffraction effects lose all credence. There's too much going on that muddies the picture (literally).

The same test photograph as last time. This figure is at 150% of full size [after you click on it to enlarge it —MJ]. The camera is my Fuji FinePix S100fs; pixel pitch is around 2.5 µ. Nothing much changes in terms of sharpness all the way down to ƒ/5.6. Sure, the pixels are minuscule and diffraction is getting worse, but everything else matters much more. Even at ƒ/8 the impact of diffraction is modest, although the Airy disk is twice the pixel size. It's only in going that final stop from ƒ/8 to ƒ/11 that diffraction truly dominates image clarity.
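If you want to put numbers on the Airy disk yourself, a standard estimate of the radius to the first minimum is 1.22·λ·N. A sketch, assuming green light at 550 nm (my assumption, not a measured figure for any particular camera):

```python
def airy_radius_um(f_number, wavelength_nm=550):
    # Radius to the first minimum of the Airy pattern: 1.22 * lambda * N,
    # with the wavelength converted from nanometers to micrometers.
    return 1.22 * wavelength_nm * 1e-3 * f_number

pixel_pitch = 2.5  # micrometers, roughly a small-sensor superzoom
for n in (5.6, 8, 11):
    print(f"f/{n}: Airy radius ~{airy_radius_um(n):.1f} um")
```

At ƒ/8 the radius works out to about 5.4 µ, roughly twice a 2.5 µ pixel pitch, and at ƒ/11 it's pushing 7.5 µ, which is consistent with diffraction only taking over at the last stop.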

The only way to figure out if diffraction is important in your photographs is to run some very, very careful tests and understand how to interpret the data. For most of you, honestly, it's a waste of your time comparable to running endless film tests instead of making nice photographs.

In practical terms, the best way to assure near-optimum performance from most lenses is to stop down at least one and no more than three stops from the full aperture. (There are a very few commercial camera lenses which produce their "best" images wide-open, but these are EXTREMELY expensive lenses.)

I was recently reminded of one, albeit uncommon, situation where this information would be useful as I was looking at some superb images of very old tapestries. Capturing accurate edge-to-edge detail in such 2D art photography is essential, and trickier than it might seem. Diffraction is certainly a performance limit that such photographers must keep in mind.

A kindred aspect of lens performance that's worth mentioning is color rendition at various apertures, particularly in low-to-moderate light. Those who've not already explored this with their own lenses might profit from spending a quiet day doing so. I think you'll be at least mildly surprised at what you discover, and those discoveries may be very useful for your lens selections for certain situations.

In light of all the technical posts that have appeared in TOP in the last few weeks I thought this might be an appropriate reminder.

In a volume titled 'The Art of Photography', from the 1971 'Time-Life Library of Photography', is the following:

"Looking through a camera's lens and visualizing in his mind's eye a picture, the photographer himself -not his equipment- is the most important element in the art of photography. His unique vision of the world, his experiences and memories, as much as his skills, are his real creative tools. With them he selects and organizes the raw materials before him, creating a picture to which others can respond."

A serious question: when is it helpful to "understand" this stuff, as opposed to "know about" it? Is it possible simply to get a cookbook description -- do this, don't do that, for *this* effect? Or, if you must do *this*, you'll get *that*? That would be more useful for some of us who are not particularly technically minded. It's like, why do I have to know how to grow wheat if all I want to do is bake a loaf of bread?

On the other hand, if there is a solid reason to know this stuff (as opposed to simple intellectual curiosity), then it would be good to know that, too.

Wise advice... For decades, large format photographers have known that objectively, tiny apertures decrease the absolute sharpness of an image.

But because LF has so much "extra" sharpness, many LF photographers choose to shoot as high as f/64 and beyond, in order to maximize depth of field.

In other words, sometimes, being able to get the foreground and background in focus is more important than the resolving power you lose.

With that said, because LF lenses in particular stop down so far, it's not unusual to see a lens that can resolve 80 lp/mm at f/8, but only 25 or 30 at f/32... Then the question becomes how large you want to enlarge, and what lp/mm you feel you need to hit to satisfy your eye.
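As a rough cross-check on those LF numbers, the textbook diffraction cutoff for a lens is about 1/(λ·N) line pairs per millimeter. A sketch, again assuming 550 nm green light (the function name is mine):

```python
def diffraction_limit_lpmm(f_number, wavelength_nm=550):
    # Approximate diffraction cutoff frequency: 1 / (lambda * N),
    # with the wavelength converted from nanometers to millimeters.
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (wavelength_mm * f_number)

for n in (8, 16, 32, 64):
    print(f"f/{n}: ~{round(diffraction_limit_lpmm(n))} lp/mm ceiling")
```

The ceiling at ƒ/32 is only about 57 lp/mm, and at ƒ/64 about 28, so once aberrations and film are folded in (in quadrature, per the column), real-world figures of 25 to 30 lp/mm at ƒ/32 are no surprise.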

I'd probably simplify even further and tell people that if they stop down two stops from wide open they will hardly ever go wrong. Sure, they might be a little off from the truly optimum aperture, but nobody but a pixel-peeper will ever see the difference.

Dear Ken,

The hard part about copying 2D art is obtaining perfect parallelism between the film/sensor plane and the subject plane. It's almost impossible to get that perfect, even with a professional alignment tool, if you're talking about taking things down to the diffraction level. On the other hand, close-up photography is one of the few places where diffraction often bites unwary photographers. In normal use, it's hard to find a lens that will stop down so far that it will get you into really serious trouble with diffraction. But move down to 1:1 and all the effective apertures are doubled. Add to that the inclination of close-up photographers to stop down as far as they can to maximize depth of field. Too often they go so far that they're actually reducing (or entirely eliminating) depth of field, because all the allowable blur is getting eaten up by diffraction.
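The doubling at 1:1 follows from the standard bellows-factor formula: effective f-number equals the marked f-number times (1 + magnification), at least for a lens with a pupil magnification of 1. A quick sketch:

```python
def effective_f_number(marked_f, magnification):
    # Bellows-factor correction: N_eff = N * (1 + m).
    # At life size (m = 1), the effective aperture is doubled.
    return marked_f * (1 + magnification)

print(effective_f_number(16, 1.0))  # a marked f/16 behaves like f/32 at 1:1
print(effective_f_number(8, 0.5))   # f/8 at half life size behaves like f/12
```

So a macro shooter who racks a marked ƒ/22 out to 1:1 is really working at ƒ/45, deep in diffraction territory.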

In my opinion, it's almost never helpful or necessary to know what's "under the hood" so far as making good photographs goes.

If you skip over any sentence in my column that has numbers, equations, or scientific trivia in it, you've pretty much got the cookbook you asked for.

But I'm not much interested in writing just cookbooks. I'm more interested in explaining to people why the cookbook works the way it does. Doesn't keep you from extracting the cookbook information from what I write. Just requires you to do the extraction yourself. I think that most of my columns are accessible if every time you hit a hyper-technical or mathematical lump you say to yourself, "that doesn't matter, that doesn't matter, that doesn't matter." Because, by and large, it doesn't!

I'm also always trying to dispel myths. They're worse than no technical knowledge at all. They actually lead people astray. So, on a lesser level, I'm satisfied if someone reads my columns on diffraction and concludes that they really don't understand the subject. That's better than them thinking they do and having it wrong.

All very true! Plus, unless you're using a tensioned or vacuum film holder, focus error is a serious problem for sheet film photography (it's even pretty serious for medium format roll film). Runouts can amount to millimeters! So you have to stop down to get enough depth of focus over the entire sheet of film.
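For a sense of scale, total depth of focus at the film plane is roughly 2·N·c, where c is the allowable circle of confusion. A sketch, assuming a generous 0.1 mm circle of confusion for sheet film (my number, purely illustrative):

```python
def depth_of_focus_mm(f_number, circle_of_confusion_mm):
    # Total depth of focus at the film plane: roughly 2 * N * c
    # (the film can sit +/- N*c from ideal focus and stay within c).
    return 2 * f_number * circle_of_confusion_mm

for n in (5.6, 11, 22, 45):
    print(f"f/{n}: ~{depth_of_focus_mm(n, 0.1):.1f} mm of depth of focus")
```

With film runout approaching a millimeter, you're into the ƒ/11 to ƒ/22 range before the depth of focus comfortably covers it, which is part of why LF shooters stop down so hard.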

This is a bigger problem for distant subjects than near ones. A focus error on a nearby subject merely moves the point of correct focus a little closer to or further from the camera. In most cases, you won't notice. But at infinity, everything is in a single optical plane, and it's either in focus or out.

If you've ever made a photograph of a far distant subject on medium or larger format and noticed that there were some "soft spots" in an image that you expected would be sharp everywhere, you've fallen victim to this.

Very well written! I would add two other, digital-specific sources of blur: the antialiasing filter and JPEG compression. The AA filter is a largely uncharacterized part of the sensor. Have you seen any specifications for this device from Nikon, Sony, etc.? (I haven't, and I don't think it is important for photographers, but the ads touting microlenses nearly smaller than the wavelength of light drive me crazy.)
JPEG compression matters more for compact cameras, mobile phones, and people with small memory cards. Compression limits the total amount of detail, not local sharpness, which results in a "plastic" look in areas with a lot of fine detail or texture.

Those both have an effect on image quality, but they won't plug into the equations I gave. The impact of an anti-aliasing filter is complicated, and it's an integral part of the total behavior of the sensor. You can attempt to measure the resolution characteristics of the whole package (good luck getting a sensible answer!) but not the anti-aliasing filter and sensor separately.

In other words, as a photographer you should ignore it. It matters as much to you as the precise order of coating of layers in a color film. It's the total performance, only, that affects you.

JPEG compression does not produce blur; it creates artifacts and erases certain kinds of detail, but it often preserves other detail quite well. Again, not something you can use in these equations.

('Sides, nobody using really high compression ratios will care about this column.)

Great info. The subject of diffraction has come up a lot recently on the forums of well-known sites (you know which ones). It is often given as a reason that you should not make APS-C-sized sensors any higher-resolving than 12-13 MP. I have seen a couple of reviews, and even interviews, claiming that the per-pixel sharpness of the EOS 50D is worse than its predecessor's, the EOS 40D, because of diffraction, and that there is no point in going any higher than 12-13 MP. What is your take on that? From what I can digest from your article, diffraction is not the limiting factor in normal photography. Strangely, I have seen a comment where someone had calculated that f/7.9 was the limit on the 15 MP 1.6x-crop sensor, and that from there on the resolution goes downhill, even in normal shooting situations. Is that not the fault of diffraction?
Thank you for taking up this much-talked-about subject and clearing quite a bit of the mist that used to hinder my sight almost completely.

I'm glad you enjoyed the article. I don't know which well-known sites you're talking about, because I don't read photographic websites. I look up specific information on them when a Google search finds it for me, but TOP is the only site I read regularly.

Now I am going to be exceedingly blunt.

Unless the people who are making these claims about the EOS models have demonstrated at least as good a working knowledge of the sources of image blur as this column, they don't know what they're talking about. You can ignore them.

As for resolution going downhill as you stop down a lens past the optimum aperture? I'm not going to sugar-coat this.

That is Photography 101!

The basic rules didn't change when people moved from silver to silicon. What I was taught in my one and only introductory photography class was that if you make your photographs with the lens wide open, they aren't likely to be really sharp; if you stop down a couple of stops, they're likely to be much sharper; and if you stop down too far, they get fuzzier again.

Does this really sound like rocket science to any of TOP's readers? I hope not!

The other thing I was taught, as very basic photography, was that unless you wanted a slow shutter speed for some reason, you didn't stop down any further than you needed to get the depth of field you wanted. But you did stop down as far as you needed to get adequate DoF.

Anyone who comments about image sharpness, sources of blur, and usable apertures who hasn't demonstrated they understand these simple basic concepts has no business issuing pronouncements.