Depth of Field Myths: The Biggest Misconceptions

Depth of field causes more confusion among photographers — beginners and otherwise — than nearly any other topic out there. Many “common knowledge” tips about depth of field have some flaws, or are at least partially inaccurate. At a personal level, it took me far too long to separate the good suggestions from the bad, and I eventually realized that I had been relying upon some erroneous information for years without knowing better. My goal with this article is not to make the most controversial possible statements, or needlessly poke holes in things that are almost entirely true. Instead, my hope is to cover some of the basic, common inaccuracies that you may have heard about depth of field, in case you’ve been relying on faulty information for your own photography.

1) Is it True that Depth of Field Extends 1/3 in Front of Your Subject, and 2/3 Behind?

No, this one isn’t true. The 1/3-front, 2/3-behind suggestion is a fairly common one, but it doesn’t play out in practice.

In fact, the front-to-back ratio for depth of field varies wildly depending upon a number of factors. In very specific cases, it’s true that the ratio can be around 1:2 — but, more frequently, it’s something else entirely.

Which factors matter here? There are three: focal length, aperture, and camera-to-subject distance. As you focus closer, use wider apertures, and use longer lenses, the ratio starts to approach 1:1. When you do the opposite, the ratio quickly passes through 1:2, then 1:3, 1:10, 1:100, and onwards to 1:∞. The range where the focus is 1/3 in front of your subject and 2/3 behind (or the range where it’s close to that ratio) is quite thin indeed.

Where does this tip come from, then? My guess is that it started simply enough: There are cases where the depth of field behind your subject is twice as great as the depth of field in front of your subject. And, with certain lenses and apertures, that spot happens to be a very “medium” focusing distance away from your lens — in the range of 3 meters (10 feet). So, it’s not surprising to me that this morphed into a universal 1/3-front, 2/3-behind suggestion. And, it is indeed useful for beginners to know that depth of field takes longer to fall off behind your subject than in front.

Still, it’s quite a narrow window where the ratio is closer to 1:2 rather than 1:1.5, or 1:3, or 1:4, and so on. The ratio 1:2 isn’t some common figure that tends to occur when you focus at “medium” distances. It’s much more of a special case than a generalization.
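If you want to test this yourself, the standard thin-lens depth of field formulas make it easy. Below is a quick Python sketch (my own illustration, not tied to any particular calculator) using a 50mm lens at f/8 and a 0.03mm circle of confusion; your exact numbers will shift with the circle of confusion you pick:

```python
# Thin-lens depth of field limits; all distances are in mm.
def dof_limits(f, N, s, c=0.03):
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

f, N = 50, 8                                     # 50mm lens at f/8
for s in [1000, 3000, 5000, 10000]:              # focus distances in mm
    near, far = dof_limits(f, N, s)
    front, behind = s - near, far - s
    print(f"focused at {s / 1000:.0f} m -> front {front:.0f} mm, "
          f"behind {behind:.0f} mm, ratio 1:{behind / front:.1f}")
```

With these inputs, the ratio runs from roughly 1:1.2 at one meter to beyond 1:40 at ten meters, and it only sits near 1:2 in a narrow band around three to four meters.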

NIKON D7000 + 105mm f/2.8 @ 105mm, ISO 1250, 1/100, f/3.5
At such a close focusing distance and wide aperture, as well as with a 105mm medium telephoto lens, the depth of field here falls off in front of and behind my focus point at almost exactly the same rate.

2) How Do You Double Your Depth of Field?

It depends. But there is no simple thing you can do to universally double your depth of field for a given photo, so long as you’re defining “double” how most people do, and you’re not calculating polynomial equations in your head.

What about using an aperture that is two stops smaller? Or stepping twice as far away from your subject and refocusing? Or using half the focal length of your current lens?

Nope. None of those things universally double your depth of field, even though you might have heard that they do.

This is easy enough to realize simply by doing a quick thought experiment. Say that you’re using a wide-angle lens, and your depth of field ranges from 1 meter to 15 meters. In this situation, infinity will be almost within your depth of field, but not quite; distant objects are probably only the slightest bit blurry. Still, they aren’t technically sharp enough to count within your depth of field.

In that case, you don’t need to do very much in order to get the farthest objects completely within your depth of field. Simply change your aperture by a fraction of a stop, or use a slightly wider focal length, or step back just a bit and refocus on the same spot.

In all of these cases, a minor change to your settings (focus distance, aperture, or focal length) will increase your depth of field from 14 meters (15 minus 1) to an infinite number of meters. Clearly, that’s more than doubling your depth of field! And, crucially, you don’t need to change your camera settings much in order to accomplish it.

(If you’re wondering about the exact values I used, it’s true that they’re a bit arbitrary. However, to make sure that they were realistic, I used this calculator with a 14mm lens, a subject distance of 2 meters, an aperture of f/5.6, and a 0.015mm circle of confusion. Feel free to use it and play around with your own values.)
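If you’d like to reproduce those numbers, here’s a minimal Python sketch with the same inputs (14mm, f/5.6, 2-meter focus, 0.015mm circle of confusion). Different calculators use slightly different formulas, so treat small discrepancies as normal:

```python
# Thin-lens DoF limits; all distances are in mm.
def dof_limits(f, N, s, c=0.015):
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

print(dof_limits(14, 5.6, 2000))   # focused at 2.0 m: roughly 1.1 m to 13 m
print(dof_limits(14, 5.6, 2400))   # back up ~0.4 m: the far limit hits infinity
```

A 0.4-meter change in focusing distance takes the far limit from about 13 meters to infinity, which is far more than “doubling” the depth of field.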

That’s why there’s no merit to claims that you can “double your depth of field” by doing one particular thing for any photo. Sometimes, focusing twice as far away will triple your depth of field. Other times, doing exactly the same thing will increase it 10x, 50x, or infinitely. It all depends upon how much depth of field you already have.

3) How Many Variables Affect Depth of Field in a Photo?

Assuming a typical lens, there are three:

Focal length

Aperture

Camera to subject distance (how far away you’re focused)

From time to time, you may hear online that only two variables affect depth of field in a photo: aperture and magnification.

There’s a similar (though slightly less common) argument, too, that two other variables are the only ones that affect depth of field: subject distance and entrance pupil size.

Neither of these claims is technically wrong, but there’s an issue: People who say that depth of field depends on only two variables are merging two of the three together. That’s perfectly fine, but the individual components still matter, and they still affect your depth of field.

Magnification merges together focal length and subject distance. (It’s the size of an object’s projection on your camera sensor relative to its size in the real world.) Likewise, entrance pupil size merges together focal length and aperture, since the entrance pupil’s diameter is simply the focal length divided by the f-number.

Most of the time, it doesn’t make things simpler to combine these variables together. No one in the field spends time calculating entrance pupils. The same is true for magnification, unless you’re doing macro photography.

To put it simply, all three components matter — focal length, aperture, and focusing distance. If you change one without compensating by also changing another, you’ll alter your depth of field every time.
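To see the merging in action, here’s a small sketch built on the common close-range approximation DoF ≈ 2Nc(m+1)/m², a standard formula that holds well short of the hyperfocal distance (this example is mine, not from the claims above). Two different lenses at the same aperture and magnification produce essentially the same depth of field:

```python
# DoF depends only on aperture (N) and magnification (m) in this
# approximation, which is exactly what the "two variable" claim merges.
def magnification(f, s):
    return f / (s - f)                 # thin-lens magnification

def dof_approx(N, m, c=0.03):
    return 2 * N * c * (m + 1) / m**2  # total DoF in mm

# A 50mm lens focused at 0.55 m and a 100mm lens focused at 1.10 m
# both give m = 0.1; at f/8 they yield the same ~53 mm of depth of field.
for f, s in [(50, 550), (100, 1100)]:
    m = magnification(f, s)
    print(f"{f}mm at {s} mm: m = {m:.2f}, DoF ~ {dof_approx(8, m):.1f} mm")
```

Change the subject distance without keeping magnification fixed, though, and the number moves immediately; the merged variable hides the components without eliminating them.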

4) Do Crop Sensors Have Greater Depth of Field?

This one has a lot of controversy around it, and I don’t want to add to that. The reality is actually quite straightforward.

The short answer is no, crop sensors don’t inherently have more depth of field than large sensors, although it can seem that way — in order to mimic a larger sensor, you’ll have to use wider lenses, which do increase your depth of field. (You also could stand farther back, which again increases your depth of field, although that does alter the perspective of a photo.) But the sensor itself does not directly give you more depth of field.

When it comes down to it, this shouldn’t be too surprising. A crop sensor is like cropping a photo from a larger sensor (ignoring individual sensor efficiency differences and so on). Unless you think that cropping a photo in post-production gives you more depth of field, this shouldn’t cause any confusion. (Indeed, if you crop a photo and display both final images at the same print size, it’s even arguable that you will see a shallower depth of field in the cropped image, since any out-of-focus regions would be magnified. But now I’ve started diving into a different rabbit hole, and that’s a complex discussion for another day.)

Still, the claim that small sensors have more depth of field isn’t entirely unfounded. Imagine that you have two cameras — one with a large sensor, and one with a small sensor — as well as a 24mm lens on both. Because the crop sensor will have tighter framing, you might choose to step back or zoom out in order to match what you’d capture with the larger sensor. Both of these options — stepping back or zooming out — do give you more depth of field.

So, the result of using a smaller sensor might indeed be that your photos have more depth of field, if you don’t do anything else to compensate for it. But this is an indirect relationship. The smaller sensor itself is not what causes the greater depth of field; it’s the wider lens or greater camera-to-subject distance.

5) Does the Sharpest Focusing Distance Depend upon Output Size?

No, although it’s a nuanced argument.

Here’s the starting point: If you’re making tiny, scrapbook-sized prints, you have way more leeway in terms of what looks sharp compared to something like a large, 24×36 inch print viewed up close. You won’t notice errors very easily in the small print. Even when the original photo has some major flaws, they won’t be visible if the print is small enough (or far enough away).

But does that mean the sharpest possible focusing distance changes as your print size does? No, not at all.

Indeed, there is only one focusing distance that will provide you with the most detailed possible photo of your subject (or the most overall detail from front to back, if that’s your goal instead). Just because you can get away with focusing on your subject’s nose rather than their eyes in a small print, for example, doesn’t mean that the “best possible focusing point” is on their nose. Whether you’re printing 4×6 or 24×36, and whether or not you can even see a difference, it’s still technically ideal to focus on their eyes.

Small prints let you mess things up more without noticing a huge effect; that’s very true. But they don’t alter the position of the best focusing point. So, the sharpest focusing distance does not depend upon output size (which is the impression you might get if you follow hyperfocal distance or astrophotography calculators too literally).

6) Do Hyperfocal Distance Charts Take Diffraction into Account?

No.

There are several flaws with hyperfocal distance charts. They don’t consider whether your foreground is nearby or far away (which matters if you want to focus at the proper distance). And, on top of that, they don’t take diffraction into account. They live in a world where f/8 is just as sharp as f/32.

If you’re still using hyperfocal distance charts to focus in landscape photography, you’re missing out on some potential sharpness in your images. It won’t be the difference between a masterpiece and a pile of garbage, but it’s enough that you might save yourself the price of a “sharper” lens by learning the proper technique for your current gear!
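For reference, here is roughly what those charts compute, next to the diffraction blur they ignore. The hyperfocal formula is standard; the Airy disk diameter for green light is a common way to estimate diffraction, and the specific values below are just an illustration:

```python
# Hyperfocal distance as charts compute it, with no diffraction term.
def hyperfocal(f, N, c=0.03):
    return f * f / (N * c) + f         # in mm

# Approximate Airy disk diameter in mm (~550 nm green light).
def airy_disk(N, wavelength=0.00055):
    return 2.44 * wavelength * N

for N in [8, 16, 32]:
    print(f"f/{N}: hyperfocal(24mm) = {hyperfocal(24, N) / 1000:.2f} m, "
          f"Airy disk = {airy_disk(N) * 1000:.1f} microns")
```

At f/32, the diffraction blur (about 43 microns) is already larger than the 30-micron circle of confusion the chart assumes, yet the chart still reports everything as “sharp.”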

7) Should You Focus 1/3 of the Way into the Scene?

I’m not sure where this myth originated, but it holds no water.

The theory here is that you can get a sharp landscape photo from front to back by focusing 1/3 of the way into a scene — at which point, your foreground and background appear relatively equal in sharpness.

There are two problems here. First, it’s vague. If the farthest element in your photo is a mountain 30 kilometers away, is the “1/3” mark at 10 kilometers away? That would be quite a ridiculous place to focus, since, for all practical purposes, it’s infinity. If you focus at infinity for a landscape photo, you’ll sacrifice foreground detail unnecessarily.

I’ve heard other photographers say that it means 1/3 up the scene, visually speaking — in other words, taking the horizon as the top, and the bottom of your photo as the bottom. But that gets into another issue: The 1/3-up line almost always intersects with elements that are different distances from your camera. So, it still doesn’t tell you anything useful.

In the photo below, for example, would you focus on the nearby hill at the right, or the distant shrubs on the left? They’re both 1/3 up, by this definition:

Where would you focus according to the 1/3 method in this case? The nearby hill, or the distant shrubs? There is no solid answer.

In short, the 1/3 focus method is confusing to implement, and it’s not accurate. There are better ways to focus in a landscape photo if you want everything to be as sharp as possible.

8) Do Photos Look More Natural When Their Background is Slightly Out of Focus?

It’s an interesting question.

I hear statements like this relatively often: “Personally, for landscape photography, I make sure that my background is slightly softer than the foreground, since it looks more like how our eyes see the world.”

My question: Does it?

If you, personally, like your backgrounds to be slightly softer for aesthetic reasons, go for it. There’s nothing wrong with that decision at all. If you do so, though, keep in mind that it’s a personal creative decision, and not something that necessarily “looks natural” to everyone.

That’s simply because, in the real world, we absolutely have the ability to look in the distance and see a lot of sharpness and detail. Right now, I’m looking out a window at trees more than a mile away, and I can make out individual branches quite well (indeed, better than my camera setup, if I’m using a wide enough lens).

So, no, it’s not inherently natural for the foreground of an image to be sharper than the background. Out-of-focus blur isn’t a particularly strong depth cue to our eyes that something is in the distance.

To demonstrate that point more clearly, I made a quick diagram in Photoshop. This is one of those optical illusions where you can see the figure “popping out” in two different directions. Which direction do you notice first, or most prevalently over time? Do you see the top square at the front, or at the rear?

Personally, when I first look at this diagram, I can’t help but see the top square appearing farther away. Over time, I can flip it in either direction, but it does tend to keep jumping back to the distance. This is despite the fact that it’s the only “in focus” square of the three. Everyone is different, so your mileage may vary; however, in an unscientific survey before publishing this article, all six people I asked saw it this way as well!

If sharpness is such an important cue for telling our brain that an object is nearby, the top square should appear closest for most people, not farthest away. So, what gives?

The answer is that our eyes pick up several depth cues from the real world, and defocus blur in a photograph isn’t one of the big ones. Other depth cues like the height and size of the object in the frame are stronger. Those are the driving forces in the illusion above — not sharpness or blurriness.

Still, I’ll make a couple counterpoints as well.

People, in general, spend a lot of time watching television and movies. So, perhaps our perception depends upon those frequent cues. And in most shows, it’s very common for the background to be slightly blurred (or more) in most scenes, since the focus tends to be on people talking nearer to the camera. There’s no way to rule out the possibility that the same effect could transfer to photos, and create its own depth cue — albeit, not necessarily as strong as others that may exist outside of digital media.

It’s also true that if we look at extremely nearby objects with our own eyes, the background will be clearly out of focus. The same is true if we look in the distance, and there’s something quite close to our eye. So, I could understand an argument that some amount of blur in a photo — foreground or background — can look more natural than perfectly sharp photos with the greatest possible depth of field. Even then, though, our brains always attempt to create a sharp mental map in every direction. Day to day, most people won’t pay attention to out-of-focus blur caused by their own eyes.

To sum it up, this “myth” isn’t as strong or widespread as others out there, but it’s still something you’ll come across. Personally, it is my opinion that landscape photos (or architectural images, and similar) should look sharp from front to back unless you have a separate creative reason not to do so. Other people may have different opinions, and I’m open to changing mine if I see a counterexample where slight blur in the background leads to a more natural look. As a whole, this topic is more about creativity than the technical side of things, which certainly allows for more individual interpretation.

9) What Do You Think of the Merklinger Method of Focusing and Selecting an Aperture?

I hadn’t heard about the Merklinger method until about a year ago, but I will say up front that it has major flaws if you’re using it as a way to capture the sharpest possible photos.

The Merklinger method involves focusing on the farthest object in every photo. If you’ve ever done landscape photography, it should be clear that this technique will make you lose some sharpness, especially if you have nearby foreground elements. By focusing on your farthest subject, you’re throwing away a lot of good depth of field.

Merklinger’s method succeeds at what it aims to do — providing a way to estimate depth of field in an image — but it certainly doesn’t provide a method of capturing maximum sharpness from front to back. Next time you’re out in the field, you can test this by photographing a scene with a nearby foreground. When you focus at infinity, no matter what aperture you use, you’ll get more blur than you would by focusing between the foreground and background.

NIKON D800E + 14-24mm f/2.8 @ 15mm, ISO 100, 1/20, f/16.0
For this photo, I focused roughly on the corn lilies in the foreground, since they’re double the distance to my nearest element (the grass at the very bottom). If I had focused at infinity, even using f/16, the closest plants in my foreground would have lost significant sharpness. As it is, though, both my foreground and background are quite sharp.

10) Conclusion

Hopefully, this helped shine a light on the depth of field myths that you’ll see so frequently today. This is an important enough subject that accurate information is valuable, even if it isn’t always easy to find. And, of course, some of the tips in this article are suggestions more than pure, mathematical debunking. If you want to have defocused backgrounds, for example, go for it! Photography is all about your own creative vision, and that’s not something for me to determine.

Depth of field is a huge topic, and there certainly may be myths I haven’t covered yet. For space purposes, I also didn’t go into all the little nuances of some of these individual points, since this article already is quite long. So, if you have any questions about depth of field, feel free to let me know in the comments section below. I’ll do my best to answer them, or clarify anything I’ve written above.


About Spencer Cox

Spencer Cox is a landscape and nature photographer who has gained international recognition and awards for his photography. His work has been displayed in galleries worldwide, including the Smithsonian Museum of Natural History, and in exhibitions in London, Malta, Siena, and Beijing. To view more of his work, visit his website or follow him on Facebook and 500px. Read more about Spencer here.


Comments

1) ed

November 24, 2017 at 4:20 pm

The way we perceive the world was developed aeons ago. It is doubtful that TV and movies have anything to do with it.

Thanks, Ed, good point! Yes, that is most likely true; I don’t think TV and movies have a massive effect on how our visual system sees the day-to-day world (though it would make for an interesting study).

Of course, it certainly seems like it could impact what we consider “natural” within the specific medium of photography. If you only ever looked at faded color film photos your entire life, and then you immediately saw a more saturated color photo, I suspect it would be a jarring experience. I have no evidence one way or another that the same thing is possible with out-of-focus backgrounds (and indeed, I tend to believe that it isn’t possible, at least to a large degree). Still, I figured it was worth mentioning as a possibility so that readers could interpret for themselves.

Man your articles are always awesome! Thank you for taking the time. Quick question though about crop vs full frame, because I am still confused. When I look at a depth of field table, even accounting for equivalence, they all show less depth of field for a full frame vs a crop sensor. What gives? Thanks again Spencer, for another awesome article.

Are you sure that they show less depth of field for a full frame sensor? I’m looking at a few right now, and all the charts/tables I see actually show more depth of field for a full-frame sensor. If that’s not the case for yours, send me a link, and I’ll take a more specific look at what they’re doing to get their values.

Assuming that you meant more, this doesn’t surprise me at all. Depth of field tables definitely will tell you different values at different sensor sizes. The reason is simple: The equations used to create those tables require an input value for the circle of confusion (the out-of-focus blur diameter that is small enough to still be considered “acceptably sharp,” and thus within your depth of field). But the people who created most of the charts/tables typically decide against asking individual viewers to input their own CoC value, since it’s not something most people understand. Instead, they just tell you to pick the size of your sensor, or your specific camera model. The key is that these charts have a built-in CoC value that the designers already picked for every given sensor size (or sometimes individual cameras, although that’s rare). The chart then uses that pre-chosen CoC value in its equations, which would indeed output different answers for different sensor sizes.

Specifically, these charts and calculators pick a bigger CoC value for large sensors. In other words, they’re more lenient with what they consider to be within your depth of field — so, naturally, they’ll tell you that your depth of field is greater. Why would they pick a more lenient CoC value for large sensors? The reason for this decision is that a full-frame camera sensor doesn’t need to be enlarged as much for a given print size, so you won’t see pixel-level errors as easily at a given print size. Thus, it can be argued that it isn’t as important to have a small CoC when your sensor is large.
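As a concrete illustration, one common convention (by no means the only one) derives the built-in CoC from the sensor diagonal divided by roughly 1500. A quick sketch with approximate sensor dimensions shows how that alone changes the chart’s answer:

```python
import math

# One common chart convention: CoC ~ sensor diagonal / 1500 (assumed here).
def coc_from_sensor(width_mm, height_mm, divisor=1500):
    return math.hypot(width_mm, height_mm) / divisor

print(f"full frame: {coc_from_sensor(36.0, 24.0):.3f} mm")   # ~0.029 mm
print(f"APS-C:      {coc_from_sensor(23.6, 15.6):.3f} mm")   # ~0.019 mm
```

Feed the larger CoC into the same depth of field equations, and the full-frame chart naturally reports more depth of field, with no change to the actual image projected by the lens.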

However, just because the chart designers picked a more lenient CoC value doesn’t mean out-of-camera files from full-frame sensors actually have more depth of field. Say that you’re photographing some slightly out-of-focus letters using a crop-sensor camera — just blurry enough to make it impossible to read the text. Using a larger sensor (while leaving aperture/subject distance/focal length unchanged) won’t make the writing any easier or harder to read (assuming the same pixel density on both cameras). The letters will look exactly the same on both cameras, once you magnify the full-frame camera’s photo further.

(To a degree, this goes back to a side question I posed in this article: Does a cropped photo have less depth of field than the full-resolution image? It makes out-of-focus errors more visible, but it doesn’t change the overall detail in the original file. So, different photographers will have different answers.)

I hope that answers your main question!

Also, with regards to your mention about accounting for equivalence — if you want to see the same values between cameras with different sensor sizes, you’ll need to multiply both the focal length and the aperture by the crop factor. Equivalence includes more than focal length, even though that’s all most people talk about! Factoring in equivalence for both aperture and focal length, you should end up with equal depths of field for full frame and crop sensor cameras. For example, 30mm and f/5.6 on a full-frame camera gives me the exact same DoF values as 15mm and f/2.8 on a M43 camera with a 2x crop factor. (This is using a calculator that bases its CoC value upon sensor size only, and not on the pixel pitches of individual cameras; if that were the case, the values wouldn’t align between most cameras.)
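You can check that example numerically with standard thin-lens formulas, scaling the CoC by the crop factor the way sensor-based calculators do (the 0.030mm and 0.015mm values below are assumptions of that kind):

```python
# Thin-lens DoF limits; all distances are in mm.
def dof_limits(f, N, s, c):
    H = f * f / (N * c) + f
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

print(dof_limits(30, 5.6, 2000, c=0.030))  # full frame at 2 m
print(dof_limits(15, 2.8, 2000, c=0.015))  # M43 at 2 m: matches within ~1%
```

The tiny remaining differences come from the small focal-length term in the hyperfocal formula, which the rounding in most calculators hides anyway.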

Christopher, you are using a longer lens on the D800, and I see no reason why you would do that. Are you trying to factor it in for equivalence purposes? Equivalence requires that you multiply both the focal length and the aperture by the crop factor, which you didn’t do here. (Also, the crop factor isn’t exactly 1.5 from the D7000 to the D800.)

Factor in equivalence, and you’ll get identical values, ignoring rounding errors. Don’t factor in any equivalence at all, and the D800 table will show a greater depth of field (for the CoC reason I mentioned to Spence above). If you do a semi-equivalence calculation where you only pick one portion of equivalence, and disregard the rest, it’s no surprise to me that the values will be nonsensical.

Spencer, condescension understood, and I am well aware of the equivalency of crop factor. I guess the ultimate point is that with similar aperture values, once you consider field of view changes, the results are different; f/8 doesn’t look the same across different sensors. If I want similar results, we are both aware that we have to apply the effective crop factor to field of view, aperture values, and ISO performance. The argument essentially is that with similar aperture values set on the camera, the results will be different, which is something we both agree on.

Christopher, I am sorry that my comment came across as condescending. That sincerely was not my intent. When I reply to a comment that doesn’t have any explanatory text, I have no way of knowing how advanced of a photographer you are, or how much of this you already know. So, I defaulted to overexplaining things, when you certainly know what you’re talking about, and you were trying to make a different point in the first place. My apologies.

As for the technical side of things, we are fully on the same page. With the same field of view, you definitely will get shallower depth of field on the large-sensor camera, unless you compensate for it by changing the aperture.

I think that there is a difference in DOF using a crop sensor vs FF, but I think you have it the opposite way. As Spencer replied above, the DOF should be greater, not less. You don’t compare a 24mm lens on a D7000 with a 36mm lens on a D800. What is changing is the FOV, not the actual focal length, if I am not mistaken. A 24mm lens is still a 24mm lens on both a D7000 and a D800.

It appears that you took the 24mm and multiplied it by the crop value, in this case 1.5, to come up with 36mm for the D800, and used that 36mm value when you looked up the DOF for the D800. If you use the same values that you used in your example, a D800 with a 24mm lens would have a DOF range of 4′ 5.1″ to infinity instead of what you listed.

Spencer or Nasim, if I am not correct in my reply to Christopher in thinking that you would input 24mm for both the D7000 and D800, and not 36mm for the D800, please reply. My thinking, as I said, is that a 24mm lens is still a 24mm lens on both cameras.

Are the people who said a blurred background looks more natural the same ones who use graduated NDs and blend exposures of their landscapes in post? I am reminded of people who insist they only shoot in natural light and refuse flash.

John, it beats me. Personally, though — as someone who uses grad filters, sometimes blends exposures, and never uses flash (at least for landscape photography) — perhaps we don’t see things the same way! (Kidding of course :)

The idea that out-of-focus background blur “looks more natural” certainly is something I’ve heard before, including from some well-known photographers. It’s a creative decision, without a doubt, which means there are no definitive answers one way or another. But I’m with you on this — I’ve never seen it look “more natural” in my own experience, and there don’t seem to be any compelling reasons why it would do so in theory. I’ll certainly change my mind if I see a comparison to the alternative.

Coline, yes, I should write a separate article about circles of confusion (as well as Airy disks). They’re pretty important in discussions like this, and they aren’t as confusing as the name implies! But it’s not all common knowledge among photographers yet, including many who are quite advanced.

The old joke is that “circle of confusion” describes a bunch of photographers trying to understand depth of field!

Spencer: We had a discussion about the “focus 1/3 of the way in” method following your other recent article on this topic. Thanks for looking into this with so much more thought and detail … most helpful.

Thank you, Ernie, I’m glad you liked it. Depth of field topics tend to attract controversy and confusion, so I did my best to write this article as straightforwardly as possible. Also, Nasim helped by reading over the article before publishing, which was useful for clarifying some of the more nuanced points :)

You might want to mention that many lenses let you set depth of field off the barrel. Such a lens will have a center mark, then apertures marked off to either side. Opposite that, on the part that rotates, will be feet/meters marked off, ending at the infinity symbol. To maximize your depth of field at f11, move the infinity symbol to a point just inside the f11 mark. That’s it.

You can find the inner edge of your focus as well as the outer; just look at the opposite side of the center mark. For instance, on the Nikkor 20mm 1:2.8 D lens sitting on my desk right now, f11 goes from infinity to about 0.8 meters. Or from 2 meters to half a meter.

The center mark on those lenses can indeed be useful as a way to determine where you’re focused. However, as far as I know, all the lenses out there use circle of confusion sizes that are too lenient when calculating their depth of field (usually around 25 microns, even if better is possible in most scenes). So, the scale might tell you that everything from 0.8 meters to infinity is sharp at f/11, but you might get better foreground and background sharpness by using f/16 for the same photo (even accounting for diffraction) — or, you may need to focus slightly farther away than it tells you, using the same aperture, in order to get infinity to be acceptably sharp.

It’s still useful as a guideline, but it definitely does not tell you the optimal aperture to use for a given photo. Whether you actually need or want optimal sharpness varies from person to person, of course!

“It’s still useful as a guideline, but it definitely does not tell you the optimal aperture to use for a given photo.”

Too true.

I have used barrel focusing for decades and have found it very useful indeed, with two caveats. First, you move the infinity symbol (or whatever distance you want to use) just short of your target aperture. Second, you stop it down a notch from there. Barrel focusing is fast and convenient, but (as you indicate) a bit on the optimistic side.

Excellent article, but I got a headache just thinking about how to assess my depth of field. For simplicity, I will only take pictures of faraway things. How far away is far enough? Oh no, the headache is coming back!

LOL Pete, yes, depth of field can be quite a headache! My recommendation is to buy the most expensive wide-aperture lens that you can — something like a 58mm f/1.2 — and photograph everything at its closest focusing distance. That way, you never need to worry about depth of field :)

I never would have thought there are so many “myths” out there. Great reading.

This said, I personally tend toward the 1/3-2/3 myth. To put it just plain simple: the math behind DoF is just too complicated, especially when counting in the hyperfocal distance … Moreover, it’s easily explained by common sense, albeit being completely wrong as the math proves – so what! Let me try to explain with your last pic: if you had added a person in your composition, it would very likely have been somewhere between 1-5 times the distance to your corn flowers (average that and you get 3!!!). Your only 2 options would be: keep both subjects sharp, or just one? In the latter case, you would have asked the person to stand where you could either have him tack sharp and the flowers in the foreground on the verge of blurriness – or vice versa.

Thank you, Johnny! Yes, there are more myths out there than many people realize.

As for the 1/3-2/3 myth, a better way to visualize things might just be to say that the depth of field takes longer to fall off behind your subject. Why bother attaching numbers to it, if those numbers are wrong anyway? 1/4-3/4 is just as accurate, for example. But if you find one particular method helpful when you’re trying to visualize things, that’s fair.

Hi, thanks for an informative article, technical but enjoyable. I shoot a lot of monuments with high towers, churches with belfries and spires, etc., and have problems you may be able to help with. The only way I can get the tops of these towers really sharp is to focus on that point (yes, I know the sharpest point in the image is the focus point), but I do lose foreground sharpness. Am I stuck with this problem, or can it be improved? I normally use top-quality pro 20-24mm lenses, shooting at the sweet spot of these lenses, and I produce 50-70 MB 8-bit TIFFs which are checked for sharpness at 100%.

I would be very interested if you have produced an article on tilt-shift lenses for perspective correction. I use a Canon 24mm Mk II TSE, but even with its large image circle, I still have problems with the tops of my churches being less sharp when using near-maximum shift, due to the degradation as you approach the outer edges of the image circle. I don’t like this, but I have accepted that I, like many photographers, probably have the same problem. Can it be managed better, without going to large view cameras?

It sounds to me like you have two separate problems. First, your tilt-shift lens isn’t as sharp as possible when you shift it to the maximum degree. That’s quite normal on tilt-shift lenses, and there’s not much you can do to fix that issue, aside from using the sharpest aperture on the lens (something you’ll probably need to test yourself, if you don’t already know it).

Your other issue seems to be foreground softening due to out-of-focus blur. If it’s an especially large problem, and you care about foreground and background sharpness equally, you might consider focusing at the hyperfocal distance — twice the physical distance to the nearest element in your frame. That’s the point at which foreground and background blurriness will be equal. From there, you’ll need to select a small enough aperture to capture both the foreground and background as sharp as possible, taking both depth of field and diffraction into account. It’s a tricky task, but you can read this article to see the mathematical answer (and then decide if it’s important enough to go through these steps for critical work): photographylife.com/how-t…t-aperture
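If you’d like to see why double-the-distance works, here’s a thin-lens sketch of the blur circle (my own check, not a model of any specific lens). Focusing at twice the distance of the nearest element makes the blur on that element and the blur at infinity come out identical:

```python
# Blur circle diameter (mm) for an object at distance D when the lens is
# focused at distance s; thin-lens approximation, all distances in mm.
def blur_diameter(f, N, s, D):
    return f * f * abs(D - s) / (N * (s - f) * D)

f, N = 20, 11                 # 20mm lens at f/11 (example values)
nearest = 1000                # closest element at 1 m
s = 2 * nearest               # focus at double the distance: 2 m
print(blur_diameter(f, N, s, nearest))   # blur at the nearest element
print(blur_diameter(f, N, s, 1e12))      # blur "at infinity": same value
```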

If that’s more involved than what you want to do, or if background sharpness matters to you more than foreground sharpness (which sounds like it may be the case), my recommendation is slightly different. Instead, just shoot at smaller apertures, from f/11 to f/16, and then back off the farthest focusing distance slightly when you’re out in the field. To me, that seems like it will be a good combination to increase your depth of field, keep diffraction reasonable, and minimize lens issues. Hope that helps!

Michael, there is an easy solution to your first problem: an app called OptimumCS-Pro, developed by George Douvos. It does all the necessary calculations for you. You just have to enter the focal length, the nearest and the furthest distance which need to be sharp, and it gives you the focusing distance as well as the aperture to set, taking diffraction into account. It also tells you how sharp your nearest and furthest elements will get, in terms of the achievable diameter of the circle of confusion. The only challenge left then is to set the correct focusing distance the app tells you. Up to now the app is only available for iOS, but apparently he is working on a version for Android as well. See: www.georgedouvos.com/douvo…S-Pro.html

There is an App called TrueDoF-Pro, also developed by George Douvos, which calculates the DoF, once you enter the focal length, the aperture, the focusing distance, and the diameter of the circle of confusion. For the actual work of a photographer OptimumCS-Pro is more suitable, though. Both apps can be used to easily verify Spencer’s statements in his above article.

Cans. And. Cans. Of. Worms. Yet, you navigated most of them reasonably well, Spencer. Kudos! Due to the diversity of issues, the article reads as a bit more of a winding road than your usual writing, but it should still be useful to many.

I’m not the focus-stacking kind, so I tend to make sure that the most prominent feature of a landscape is sharp (foreground or background), and the rest “sharp enough” (or not, by choice) through use of an adequate aperture. Usually, inspecting the resulting image by 10x lcd magnification tells me if I achieved my desired goal or not. If not, simply alter the settings and have another go!

Thank you, Greg! Yes, I don’t deny that this article is more of a winding road than what I typically do. For most of the questions posed in the article, the answer is simply “no,” but explaining them without any inaccuracies or oversimplifications requires me to tread quite carefully! And if there is even a single blatant inaccuracy in a debunking article, that would be quite embarrassing — not to mention a disservice to people who already face enough confusion about depth of field :)

The method you outline is a good one. It’s one of two valid approaches to focusing for landscape photography, with the other being the double-the-distance method to find hyperfocal distance, when you want foreground and background sharpness to be completely equal. Both work well, and it depends upon individual preferences (and upon the scene you’re photographing) to determine which one is preferable for you.

To clarify: I do in fact employ the “double the distance” method most of the time, but I still check sharpness on the lcd to make sure that I didn’t “over-shoot” my closest object of interest so to speak.

Btw, Spencer, if you want to write another debunking article, here’s inspiration from an ongoing discussion going on at DPR’s Astrophotography Talk Forum: www.dpreview.com/forum…t/60423631 (you’ll have to sift through the replies a bit, but I think it’d be worth your while)

Interesting article, thank you Spencer! I did have this feeling that much of the “truths” about DOF were actually bogus, but it is the first time I see somebody saying this out loud. On the other hand, I am not comfortable with your point about DOF being independent of crop factor. I did not go into deep analysis, but day-to-day practice tells me that if I take 2 photos of the same scene with the same FOV but with different sensor sizes (e.g. FF and M43), I get a shallower DOF with the FF! I say nothing about aperture/pixel density/ISO or anything; just 2 plain pics in good light with the same FOV. Tell me I am wrong!

Rashad, you are 100% correct — when the field of view is identical, and you don’t change any of your other settings, the crop sensor will have more depth of field!

But I will ask you to think about it a bit further: Exactly how do you get an equal field of view between a large sensor and a crop sensor? The answer is that you must use a wider lens on the crop sensor (or stand back farther, although that will change the relative sizes of the elements in an image).

A 35mm lens on a crop sensor might give you the same effective field of view as a 50mm lens on a full-frame sensor. And, the crop-sensor photo will have more depth of field in this case. But it’s not directly because it’s a crop sensor. Instead, it’s because you used a wider lens.

The only three variables that affect depth of field are camera to subject distance, aperture, and focal length. In this scenario, it is physically impossible to get the same field of view with a crop sensor and full frame sensor unless you change either the focal length or the subject distance! So, yes, if you do that, you will indeed see more depth of field on the crop sensor.

Great, we are in agreement that crop-sensor-based cameras give larger DoF, a point everybody seems to accept, and one that is not a myth after all. As for the why part, you are absolutely right: it comes down to one or more of the 3 variables you described.

Rashad, yes, but it also is true that you can exactly match the DoF of a small sensor using a larger sensor, without any image quality penalty. So, small sensor cameras don’t have an inherent depth of field advantage.

The theoretical DOF of any lens at any aperture is zero. It is a plane parallel to the sensor, at the distance where the lens is focused. In photography we are evaluating acceptable focus in front of or behind that plane. We “invented” bokeh to make it acceptable.

Patrick, that’s all true, but it’s more theoretical than relevant to the real world. No one has a camera sensor with infinite pixel density, which means there’s always some point where an object can be out of focus while physically appearing exactly the same on the sensor. For example, on my Nikon D800E, the smallest blur diameter that the pixels can distinguish is roughly 10 microns. So, anything 10 microns or smaller — 6 microns, 4 microns, 1 micron — all appear exactly the same in my final photo. That means all of them are undeniably within my depth of field, even if a “perfect” sensor would still notice blur in all of them. Also, even elements with a larger out-of-focus blur can still appear “acceptably sharp,” which is the technical definition of depth of field in the first place.
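To put rough numbers on that, here’s a back-of-the-envelope sketch (assuming a 35.9mm-wide sensor with 7360 pixels across, and treating two pixel widths as the smallest distinguishable blur; both are simplifications):

```python
# Approximate pixel pitch and the smallest blur the sensor can distinguish.
sensor_width_mm = 35.9
pixels_across = 7360

pixel_pitch_um = sensor_width_mm / pixels_across * 1000
print(f"pixel pitch ~ {pixel_pitch_um:.2f} microns")                   # ~4.88
print(f"smallest resolvable blur ~ {2 * pixel_pitch_um:.1f} microns")  # ~9.8
```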

I found it interesting in question 7) “Should You Focus 1/3 of the Way into the Scene?” that you show the horizon as an average of where the foreground ends and intersects the distant mountains. I see the horizon as the visual average of where the mountains meet the sky, which also falls closer to the definition found on dictionary.com: “the line or circle that forms the apparent boundary between earth and sky”.

Using my definition, I would place the 1/3 mark closer to the trees, or that general area, which is where I would have focused at f/11 or so. This doesn’t clear up any confusion, though, as neither calculation, mine nor yours, is truly 1/3 the physical distance between the photographer and the mountains. That looks to be some “unfocusable” point behind the trees but in front of the mountains.

I can’t help but think, however, that your definition has led to a great deal of your confusion, or perhaps struggles, in getting it all in focus.

All of this being said, in the wonderful world of digital photography, one can take the shot, review for focus, and reshoot until all is deemed to be in focus. In short order, one would learn some guidelines that would get them to the end result in fewer shots than before. Cheers!!!

TJ, that very well could be what the 1/3 focus crowd intends as well! I’m not certain. I picked the spot that looked like the horizon if there hadn’t been any hills or mountains, since it doesn’t make any sense whatsoever to me that your point of focus should change based upon how high the distant mountains are.

However, like you said, it doesn’t clear things up either way!

My confusion was more to understand what the 1/3 focus crowd means rather than knowing how to get everything to be as sharp as possible in a photo. I’ve got a couple articles that do demonstrate the method I prefer, including this one, which touches on everything: photographylife.com/how-t…t-aperture

Still, you are very right that digital photography gives us much more leeway in terms of testing these theories in the field, and reshooting until the result is satisfactory! It’s a feature I find particularly valuable :)

Hi, I have been a photographer since the age of 12 while at school in 1959 (yep I am now 70). Although I use digital, sometimes I use a 35mm film SLR as a back up. Your postings and lessons are extremely informative and very, very useful. I am in the UK and still “at it” in the sports and press arena. Please keep up the very excellent work. I for one am very grateful for your “tips”. Regards, Dave (UK).

Thank you, Dave, it is very nice to hear that you enjoy these articles. I’m especially glad that a sports and press photographer finds useful tips from some of my writing, since I generally cover things from a landscape background — I never quite know if photographers of other genres will find them as helpful, and it’s reassuring to know that you do!

An excellent, well-written article with well thought out diagrams and examples. It confirms much of what I have observed. The data is equally applicable to fixed and zoom focal lengths, but is extremely difficult to calculate with zoom lenses at varying positions. It tends to support the basic concept of shooting nearest the lens sweet spot, or smaller, and simply adjusting ISO to permit a shorter shutter speed.

Seems strange that virtually nothing has been said about focus stacking. One could take 2 pictures of your 2 landscape examples, one “perfectly” focused on the near plants and the other on the mountains, probably in less time than finding and consulting your max DOF chart. They could be loaded as layers into Photoshop and blended for best focus in less than a minute. No multiple-exposure stacks or special software. The D850 will make a stack for you, I read. Spencer, since you are very interested in max DOF and sharp focus, I’m surprised you do not mention this. Seems like the better use of current technology?

Anthony, quite true, focus stacking can be an excellent tool if you’re after maximum depth of field. However, I have to confess that I don’t use it very frequently myself. There are a few reasons why, which I’m sure you’re already familiar with, but nonetheless are important enough for me to forego focus stacking most of the time:

1. If anything in the scene is moving, it makes things far more difficult in post-processing. You need to do a lot of spot healing and content aware fill, even if you have high-end focus stacking software like Zerene Stacker or Helicon Focus (which I have used before, but not extensively). The more motion, the harder this becomes, and sometimes it’s hard to notice that anything is moving until you take the photos back to the computer! In the plant example that you mentioned, it was a moderately windy day, and a focus stack would have been really tough — looking back at the various photos I took from that spot, the blades of grass change significantly from shot to shot. (Also, I didn’t spend time consulting the depth of field chart for that image; a few months ago, I just decided to memorize it for the most important values, so it’s not a time-consuming process. And even if you don’t memorize or consult it, it should be fairly easy to know that a photo with such a large range will require quite a small aperture.)

2. Even if nothing moves, you need to be certain that you’re leaving enough overlap from shot to shot. Otherwise, the photos might not blend successfully — or, they may have blurry regions in part of the image. The D850 helps here, if you use it properly, although many people don’t have one, including me :)

3. Sometimes, you can create the perfect set of photos to focus stack, but Photoshop (or other stacking programs) still won’t do their part to merge them correctly in every spot. This might happen due to low contrast and detail in the image, or just because Photoshop is having a bad day. If that’s the case, you’re stuck doing manual blending, which is a time-consuming and frustrating process. This happens to me more frequently than you might expect. It’s better in software other than Photoshop, but none of them are flawless.

4. Focus stacking takes a bit more time in the field, as well as some extra memory card space, although those concerns tend to be secondary.

5. Focus stacking makes it more difficult to create other image blends successfully, like panoramas or HDRs (though certainly not impossible).

Personally, I much prefer capturing a single photo in the field whenever possible. I still focus stack from time to time, but only in scenarios where f/16 doesn’t provide enough depth of field for the image I want. That’s not a hugely common scenario, at least for me.

It’s true that I could capture sharper photos by focus stacking at f/5.6 or f/8 in those cases, and perhaps I’ll start doing that more over time. But, at least for now, I find that the negatives outweigh the positives, unless my desired depth of field simply isn’t attainable any other way. Perhaps things would be different if I used a D850, though! (We do have a separate tutorial on focus stacking in Photoshop if anyone is interested: photographylife.com/lands…hotography)

I’m glad that you added this. It’s a good technique to know for people who want the sharpest possible photos.

Pieter, that is a very clever question. At least in theory, the answer is that the sensors would all be exactly the same, since you can (theoretically) perfectly replicate a photo from one sensor to another via equivalence calculations.

Equivalence says, essentially, that an exposure of 1/100 second, f/8, ISO 400, and 100mm on a full-frame sensor will be totally identical to an exposure of 1/100 second, f/4, ISO 100, and 50mm on a micro four-thirds sensor (due to the 2x crop factor). There are many individual sensor concerns that may make this ratio inaccurate — for example, one camera performing better at high ISOs, or having more pixels, or just a more efficient sensor design in general (usually favoring larger sensors by a bit) — but that’s the “theoretical” math behind everything. In a perfect world, if you could equalize the megapixels between the sensors without any consequences, the two photos would look 100% identical.

If you’re shooting at higher ISO values and you need a lot of depth of field, camera sensor size theoretically does not matter. You’ll get identical photos between them all.

However, if you’re shooting at 1/100 second, f/8, ISO 100, and 100mm on a full-frame sensor, that’s not possible to match with a 2x crop-sensor body, since you’d need to use an ISO of 25 (not available on any M43 cameras I know of).

Lenses are also different in different systems. There are 105mm f/1.4 lenses available for full-frame cameras, but no 52.5mm f/0.7 lenses currently made for M43 cameras (again, that I know of). Not that this couldn’t happen (or that you couldn’t use a speed booster like Metabones makes, at least in theory) — it just doesn’t exist yet.

Full-frame sensors tend to give you more flexibility in shallow depth of field for this reason, and they’ll give you better image quality than is possible with a small sensor if you shoot at low ISOs. But if you’re at higher ISO values, and you need as much depth of field as possible, there’s no theoretical difference between — for example — a 24 MP APS-C sensor and a 24 MP full-frame sensor, given the same sensor efficiency. If you want maximum depth of field, it really doesn’t matter what system you choose, assuming that you always shoot significantly higher than base ISO.
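For anyone who wants that recipe in one place, here’s a minimal helper implementing the rule described above: divide focal length and f-number by the crop factor, divide ISO by the crop factor squared, and leave shutter speed alone. (This is a sketch of the stated arithmetic, nothing more.)

```python
# Map full-frame settings to an equivalent smaller-sensor exposure.
def equivalent_settings(focal_mm, f_number, iso, crop_factor):
    return (focal_mm / crop_factor,
            f_number / crop_factor,
            iso / crop_factor ** 2)

# Full frame 100mm, f/8, ISO 400 -> Micro Four Thirds (2x crop):
print(equivalent_settings(100, 8, 400, 2.0))   # (50.0, 4.0, 100.0)
```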

As for your question — the best lens, sensor, focal length, and aperture to use for maximum depth of field in macro — it’s trickier to answer than you might think. Although the specific lens and sensor don’t matter, and aperture has an obvious relationship to depth of field, the focal length you pick (as well as the distance you stand from your subject) might have an effect on depth of field in macro. Maybe. It depends upon your interpretation of depth of field.

One answer is that it doesn’t matter, and every focal length will give you an equal depth of field in practice; magnification is what matters. A 200mm lens at 1:1 magnification will have the same depth of field as a 15mm macro lens at 1:1 magnification. In other words, there is no more detail in the background with one lens or the other. A barely-readable out-of-focus line of text in one photo will be just as readable in the other.

The other answer is that the wider lens will have a greater depth of field, even at the same magnification. The reason? Even though just-readable text wouldn’t change between the 15mm and the 200mm lens, the text would be smaller in the background on the 15mm lens. It wouldn’t be blown up into huge bokeh-balls. That means, with the 15mm lens, the background will appear easier to understand and interpret than with the 200mm lens, even though it doesn’t have any actual additional detail. This is easiest to understand simply by Googling macro photos taken at ultra-wide angles. According to one interpretation of depth of field, they don’t have any more than a 200mm lens would. Other people would argue that they do, even at the same magnification and aperture, since it visually appears that they do.

As far as I understand it, the answer here isn’t settled. It’s not even a debate that many people are having, just because it isn’t an especially well-known topic. Personally, I fall on the side that because a 15mm macro lens appears to have a greater depth of field (even if it technically doesn’t), you can think of it as having more depth of field. But you won’t technically get clearer background details from using it.

Another way of looking at the influence of focal length on DOF (last part of the reply above) is that while the magnification of the subject is the same for the 15 and 200mm lenses, the magnification of faraway details in the background is very different. And because of this, using a long macro lens (the 200mm in this case) you can select a suitable non-distracting small part of the background by slightly moving the camera and use this to isolate the subject, while with the 15mm you would nearly always get a “busy” background that includes everything within a large viewing angle.

In practice one often has to trade DOF (many macro subjects like insects are bigger than the DOF zone at common aperture values) against background isolation, so there is no easy solution. Often the photographer probably wants to concentrate on the macro subject and show as little background detail as possible – which for e.g. butterflies, dragonflies etc. means using a longer lens like 200-400mm on a DSLR. But if you want “environmental” shots (with same magnification of your macro subject) you may want to show as much of the background scene as possible, which means using a WA lens. And there we run into another practical issue: most WA lenses for DSLR (especially WA zooms) have relatively poor image quality for strong closeups. My experience is that small-sensor compacts can give better results for closeups in the WA range, not because of DOF but because the optics have less issues at high magnification (of course assuming that the light levels are sufficient to allow using low ISO on the small sensor camera).

So, there is no final answer, it really depends on what the photographer wants to accomplish with the image.

Yes, exactly! The details in the background vary wildly with a wide-angle versus a telephoto lens. Although the “typical” look is one with a telephoto, that doesn’t mean it’s inherently better. Wide-angles just provide a different appearance, which may or may not suit what the photographer is after.

This article relates to a thought I had just this week. I read the item about the “best Black Friday deals” and at the top of the list was the Nikon D750 + 24-120 f/4 combo, which also referred to the review of the 24-120 lens. I own an APS-C camera with an f/2.8 lens. I understand the numbers are not exactly equivalent, but in terms of depth of field, does using an f/4 lens on a FF camera over an f/2.8 lens on a cropped sensor still make a noticeable difference?

I recommend that you read comment 18.1 that I just left for Pieter if you want a more detailed explanation. However, the short answer is that there won’t be a major difference. Sure, the numbers aren’t totally equivalent (the real numbers would be closer to f/4.3 being equivalent to f/2.8), but that’s quite a minor difference. Unless you compare photos on top of each other, and you’re really looking closely, I wouldn’t consider it relevant as far as depth of field.

And, indeed, if you’re forced to shoot at a higher-than-base ISO (anything above 200 with an aps-c sensor, for the most part) due to light concerns, there is no theoretical difference in image quality between your camera with the f/2.8 lens and a full-frame camera with an f/4.3 lens. The reason is simply that the lost light from f/4.3 on the full-frame camera requires a higher ISO, and the full-frame camera at a higher ISO performs similarly to an aps-c camera at a lower ISO. That’s getting into the nitty-gritty of equivalence though, where many words are necessary to convey small amounts of information, and any slip-up can prove fatal. My comment 18.1 touches on it a bit, but this also is a discussion that requires a more complete explanation to convey its nuances properly :)

Point 8) – the image plays on perspective – very easy for the mind to draw some 45-degree lines and make a sort of cube from the three squares. As a test, flip the image 180 degrees so that the sharp square is bottom left and the fuzziest square is toward top right. Totally different “perspective” in my eyes, and the fuzzy square looks just fine at the far back of the “cube”.

Yes, my main point there is that people will see whichever square is highest as appearing in the rear. That’s perhaps the strongest visual cue to our brain about how a nonmoving, 2D image is structured. So, if out-of-focus blur has any effect as a depth cue, it’s a small one. It doesn’t surprise me that placing the out-of-focus square at the top also results in the same appearance that it is in the rear. But I think it has more to do with height in the image than being out of focus.

The “exposure triangle” is a misnomer — and, indeed, the creator called it the “photographic triangle,” which is more accurate — simply because ISO isn’t really a component of exposure. Exposure has to do with the amount of light that you collect, and the only three things that change the amount of light hitting your sensor are your aperture, shutter speed, and scene/flash brightness. Any other variable — ISO, signal to noise ratio, brightness sliders in post production — aren’t “exposure,” by its true definition. Still, they do have a major effect on your final photo, and are all important parts of photography as a whole.

I’m suggesting an article that expands on this – ‘cos the myth that changing ISO somehow modifies the “sensitivity” of the camera’s sensor is omnipresent! A search on meaning of ISO will return this (incorrect) definition in about 90% of cases … even from sites one would expect more from.

That is true, the misinformation is fairly common! In fact, we are currently working to make our own ISO article more accurate in that regard, yet still understandable for beginners. Hopefully you’ll see that update (and some much bigger ones) within the next few months!

I had initially thought that maybe the blurry square looked closer because it appeared bigger. But after reading the comment above, I flipped it over. I found that I could make it flip back and forth a lot easier upside down. If you made the observable size the same (despite the blur) I bet your point would be made even stronger.

Thank you, Roger! Yes, that definitely makes sense. I think the size and height cues are the strongest. Initially, I tried making the squares the same size, but the blurred one started appearing much fainter. That added an issue of its own, so I just reverted to the larger version — but perhaps there is a happy medium somewhere in between.

About your optical illusion: I think for most people it would appear that they see the blurred square ahead, because I think most of our (western) brains are wired to read from left to right. It’s where I naturally look first, and that is probably what makes my brain perceive the image back to front (left to right). I suppose you could test this by placing the sharp square bottom left and the blurred-out one top right, essentially swapping them. It might just confirm which illusion “pops” first for most people.

This DOF equivalence between camera systems (FF, APS-C, MFT) seems confusing, but actually it is not. I always have the impression that it is explained in a way too complicated for most people. Taking a photo with 50mm and f/1.4 on FF basically means that I have to divide the focal length by the crop factor and(!) also divide the aperture by the crop factor to get the same results on the small sensor (in terms of viewing angle and DOF). I don’t take ISO and shutter speed into account, as this simply adds confusion for most people and wouldn’t change the visual result of the picture in most cases (when having enough light). In the end, the camera’s metering system will do the job anyway …


You may be a wonderful photographer, but not a clear communicator in the written word …

No insult intended … how about NOT telling us all the wrong things, and simply making a few points on WHERE to focus? IMO, YOU need to focus on simplicity, perhaps read Strunk’s Art of Simplicity … I do appreciate your attempt to take on the many and varied myths, but it becomes difficult to follow, and your final points are NOT abundantly clear to me … Picture, then point where YOU focused and WHY … I don’t care personally about ALL THE MYTHS and why others do what they do … kind regards really …
