Avoidable photographic errors

Rule number one: there are no rules. A ‘mistake’ may not necessarily be a mistake if it helps convey the message, story or feeling intended by the photographer. I can easily think of multiple examples that go against every scenario described below. That said, for the most part, I’ve found these ‘mistakes’ to hold true. And if you want to achieve something very specific, then you either won’t be reading this article in the first place, or you’ll know when to bend the rules. The general viewing public probably has some preformed opinions of what is right or good, but these are born as much of ignorance as of conditioning by companies trying to sell more software, lenses or something else. There are rational reasons why these opinions may not necessarily be right in the context of fulfilling creative intention.

More bokeh is better
Less depth of field means less context. Less context means less story, and a weaker image. Have shallow enough depth of field and you can’t even clearly identify the subject: it’ll be like looking at something through a thick liquid. Unless this is your creative intention, it’s actually quite annoying to the audience to be able to see something but not really make it out clearly. Don’t get me wrong: there is a right amount of depth of field, where everything you want to be clear is in focus, and everything you don’t is blurred to varying degrees. It’s still important to be able to identify the non-important elements for context. Faster lenses have a function, too: usually creating some separation at much longer distances, or collecting more light to keep your shutter speeds up in dim situations. Both of these situations of course require a lens that can actually perform well wide open or close to it, and can be focused reliably; otherwise you’ll land up with a soft image anyway. Not all optical designs are equal, so you may well find that by f2.8, f1.4 and f1.8 maximum aperture lenses are pretty much identical; you might save yourself some money and weight in the process.
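To put rough numbers on this, a minimal sketch using the standard thin-lens near-limit approximation for total depth of field — the 85mm focal length, 2m subject distance and 0.03mm circle of confusion are illustrative assumptions, not values from the article:

```python
# Approximate total depth of field when the subject distance is well
# inside the hyperfocal distance:  DoF ~= 2 * N * c * s^2 / f^2
# N = f-number, c = circle of confusion, s = subject distance, f = focal length.
# All example values below are assumptions for illustration.

def dof_mm(f_mm, n, s_mm, coc_mm=0.03):
    """Total depth of field in mm (near-limit thin-lens approximation)."""
    return 2 * n * coc_mm * s_mm**2 / f_mm**2

for n in (1.4, 2.8, 5.6):
    print(f"85mm at f/{n}, subject at 2m: DoF ~= {dof_mm(85, n, 2000):.0f} mm")
```

The point of the sketch: at f/1.4 the in-focus zone is only a few centimetres deep, and DoF scales linearly with the f-number, so stopping down one or two stops often recovers exactly the contextual sharpness argued for above.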

Tilting horizons is funky
The nature of our visual cortex is that it corrects for inclined horizons: which is to say, if your head is tilted slightly, you’ll never perceive the horizon as being slightly off – it’s either drastically tilted, or not at all. If it’s drastically tilted, we start to feel uncomfortable because the situations in which this happens in reality are generally the precursor to something very bad to which we need to react immediately. Part of the reason our brains can do this is because the visual field doesn’t drop off to zero abruptly at the edges; rather, it fades out. This means there are no obvious non-orthogonal edges to correct for where a skewed horizon meets the border of the frame. This does, however, exist in a photograph: our brains cannot correct for this, and it then becomes very obvious that the horizon is skewed because there is a visual cue we cannot ignore. This is especially obvious if the horizon is close to the bottom or top of the frame, or forms a clear line that intersects the right or left borders (e.g. a seascape as opposed to a cityscape without a distinct horizon). Hold the camera straight.

No horizon, and an unnatural (very compressed/tele) perspective: but our brain says the implied horizon is flat because the buildings are vertical, which are in turn affirmed by the parallel relationship between the verticals in the buildings to the edges of the frame…

Filters and overprocessing can make an image interesting
If the first impression of a photograph is one of processing (color shifts, vignettes, artificial tilt-shift effects, grain, etc.) then the chances are the actual subject matter will never really stand out, simply because the presentation dominates. If the processing is too strong, then given a dozen photographs processed and presented identically, nobody will remember the subjects at all. Is the photograph about the subject or the presentation? If the content is unimportant, why take the photograph at all? An interesting subject and/or composition should not require heavy processing to make it interesting to begin with. The role of processing is to support and enhance the presentation of an idea only.

Wide angle lenses are to ‘get more in’
Wide angle lenses are to emphasise foregrounds over backgrounds: the geometry resulting from a wide angle of view is such that a near foreground object will appear to dominate a distant one of the same size because it occupies a greater linear percentage of the frame when projected into two dimensions. If you simply back up to include more linear distance of background, then the foreground grows even more dominant relative to the background and the image appears even emptier; if anything the result is the opposite of the effect you’re aiming for. It would probably be better to stitch multiple frames from a telephoto.
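The underlying geometry is simple pinhole projection: an object’s projected size scales as (real size / distance), so two identically sized objects differ in apparent size by the ratio of their distances. A minimal sketch with purely illustrative distances (nothing here is from the article):

```python
# Pinhole projection: image height is proportional to (real height / distance).
# For two objects of the same physical size, the foreground one appears
# larger by the ratio of the two distances. Distances are assumed examples.

def apparent_ratio(d_foreground_m, d_background_m):
    """How many times larger a foreground object appears than an
    identically sized object in the background."""
    return d_background_m / d_foreground_m

# Standing close with a wide lens: subject at 1m vs background at 20m
print(apparent_ratio(1, 20))   # foreground appears 20x larger
# Step back to 5m (with a longer lens to keep framing similar)
print(apparent_ratio(5, 24))   # ratio drops to under 5x
```

This is why the near/far size relationship is really a function of where you stand, with the focal length only determining how much of the projected scene the frame includes.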

There is no clear subject
Humans are pattern-recognisers, and this works both ways: we see repetition and breaks in repetition. However, when we see repetition, we ignore individual elements that aren’t too different; a crowd of people still looks fairly homogeneous even though each individual is different. No one person stands out unless they are very different; think somebody wearing a neon pink jumpsuit and hat in a group of grey suits, for instance. If there’s a second person in neon blue, then Mr. Pink will have competition – and so on. So for a subject to stand out, it has to break pattern with the background and visually dominate. I see a lot of images in which what stands out the most isn’t the intended subject – beware of tunnel vision in composition, too.

The subject has to be in the middle
Most cameras’ AF points are clustered about and most effective in the centre of the frame; this is due to engineering more than anything else. It’s actually very rare that a composition works best with a dead-centre subject; you run the risk of having empty and wasted space on either side of the long axis of your frame. Where you put the subject should be dictated by the available context and your intended message or composition, not the technical limitations of your hardware.

Off-center subject, enough depth of field but not so much as to be distracting or ambiguous.

Motor drive makes up for good timing
More fps isn’t necessarily better for capturing the decisive moment – achieving critical timing is actually easier in single-shot mode, because it’s easier to know exactly when the shutter will fire. Even if you have 10fps, you don’t really improve your chances of hitting the critical point in time, for several reasons: the total exposed time isn’t really that much more in absolute terms, and moreover the 10x longer blackout time is going to have a much greater negative impact on the result simply because you cannot see what is going on. The reason more fps can help is that if some subsequent unexpected action happens, a fast camera will be ready to go again in less time than a slow one, and will probably also have a larger buffer; you’re still better off shooting singles, though. Remember: HC-B had to wind the camera manually between frames, with a 36-shot buffer and probably a good 30 seconds or more to rewind the film and load a fresh roll. No motor drives there, and it didn’t seem to affect the results much.
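The arithmetic behind the burst argument is straightforward: the fraction of real time the sensor is actually exposed is just frames per second multiplied by shutter time, and everything else is blackout. A quick sketch (the shutter speeds are illustrative assumptions):

```python
# Fraction of wall-clock time the sensor is exposed during a burst:
#   duty = frames_per_second * shutter_time
# Equivalently: the probability that a randomly timed, instantaneous
# event lands inside an exposure. Example values are assumptions.

def exposed_fraction(fps, shutter_s):
    """Fraction of time the shutter is open while shooting continuously."""
    return min(fps * shutter_s, 1.0)

print(exposed_fraction(10, 1 / 100))   # 10 fps at 1/100s: exposed only 10% of the time
print(exposed_fraction(50, 1 / 200))   # even 50 fps at 1/200s: exposed 25% of the time
```

In other words, even a very fast burst leaves the shutter closed most of the time, which is why anticipation and timing the first frame matter more than frame rate.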

High key and low key still need some absolute blacks and absolute whites
This one is a little more subtle: color or monochrome, no matter how dark or light an image, you still need an area that’s close to black or white so that the audience can calibrate their expectation of the scene and know that it was meant to be interpreted as dark or light and not an exposure error. The majority of the spatial area of the scene can be predominantly dark or light. If you find yourself with nothing that appears absolute black or white after adjusting exposure to the desired level, then this is where dodging and burning come in handy*.

The technical bits matter to the exclusion of the image
Perhaps the greatest fallacy of all: you can make a technically perfect image that is boring, but not a great one whose composition suffers from being low resolution. Yes, all things equal, better technical qualities of an image are better and give you more output options; however, they should not be the first consideration unless you know how to deploy those technical properties to the enhancement of the intended idea – e.g. a 60″ wide Forest print would not give the impression of transparency and being there if the source file was 2MP.
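The print example can be checked with one line of arithmetic: pixels per inch is just image width in pixels divided by print width in inches. A hedged sketch — the pixel dimensions below are assumed (roughly 2MP and 36MP files at 3:2), not figures from the article:

```python
# Print resolution: ppi = image_width_px / print_width_in.
# A ~2 MP file (~1730 px wide) across 60 inches lands under 30 ppi,
# far below the ~180-300 ppi generally wanted for close viewing.
# Pixel widths below are illustrative assumptions.

def print_ppi(image_width_px, print_width_in):
    """Pixels per inch achieved at a given print width."""
    return image_width_px / print_width_in

print(round(print_ppi(1730, 60)))   # ~29 ppi from a ~2 MP source
print(round(print_ppi(8256, 60)))   # ~138 ppi from a ~36 MP source
```

This is the sense in which technical properties serve the idea: the 60″ print needs the resolution, while a web-sized image never would.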

The camera doesn’t see what you see
I believe this is the biggest disconnect of all: our eyes don’t work the way a camera does. Not being consciously aware of the differences and subsequently either using them to advantage or compensating is where the translation between idea and finished image falls down. Some of these differences are structural, some of them are perceptual and brain-related.

Of course, avoidance of these pitfalls doesn’t guarantee an interesting photograph – we haven’t said anything at all about the four things or output or context. But they can certainly go a long way to helping translate and communicate an idea from creator to audience – and ultimately, that is the purpose of photography. MT

The next article will go into more detail about the difference between eye and camera, and what this means in practical terms.

Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!

Comments

Super article. #1, especially, is one of my pet peeves. Using thin DoF is a one trick pony that gets old really fast. Just because you have that f/0.95 lens does not mean that you have to shoot it at f/0.95 all the time.

BTW, Ming, you should probably look at your web hosting. Here in the U.S., I was able to finish reading the entire article before any of the images downloaded. The site usually isn’t *that* bad, but it’s consistently noticeably slower to load than other, U.S. based sites.

Thanks for this (and all your other great content). Lots of interesting suggestions and points. You point on wide-angles and foregrounds especially hit home.

On the absolute black/absolute white point: are you saying both are needed, or just one? Can the audience calibrate if they can see either absolute white or absolute black, or do they need some of both?

“The camera doesn’t see what you see”
!!
I believe a great part of that is that only the macula of the retina sees sharply; what we see sharp is made up by scanning and building a picture in the mind.
When we watch an image I believe we scan it differently from the way we scan reality, partly because the image is still and in the real world we are always unconsciously on the lookout for movement and change.

When we see something we want to photograph it takes some training to be aware enough of the surroundings making up the rest of the frame, and to learn that what caught the mind first may have had to do with movement and change lost in the photograph.

Some cameras now have 4K video with extractable 8Mp stills…
I would say this may help, starting at 50 fps!

I once found it very hard to find the right frame to cut after in a (25 fps) video sequence.
It turned out that one subject made a _very_ quick sideways glance with a slight head turn across just two frames. It was so quick it took me some time to find out why cutting +/- one frame made such a big difference.

( And not long ago many “serious” digital cameras had longer shutter lag.)

You make a point. But I bought a Nikon D70 nearly a decade ago and I think it was a serious digital camera and it had no perceptible shutter lag. But my previous digital camera, a Sony T-33, did have a noticeable shutter lag. Yet I wouldn’t classify the Sony as a “serious” camera in the sense that it would be sufficient, if not perfect, in a pinch for a professional photographer. Of course, that’s just my opinion.

I only wish it was a psychodynamic problem, but not so long ago most reviews of P&S cameras included their shutter lag time. I just noticed Kristian’s clarification, so yes, I am speaking about, after achieving focus, the noticeable delay between fully pressing the button and the actual shutter operation. I’m not sure, but I think nowadays shutter lag is a thing of the past.

I’m pretty sure my older ones left a lot to be desired too. And it wasn’t til I got a DSLR that the ‘instant’ feel came back – that said, I also remember some pretty significant shutter lag with my Minolta film SLR. But yes, lag is gone. Even on phones.

Well, video and stills are two different kettles of fish. Cutting in a frame earlier or later makes a difference because of persistence of vision; even if the extra frame has motion blur. That frame wouldn’t be usable for a still, probably – just as a sequence of perfectly sharp images makes for rather jerky looking video. 🙂

Very fast, since even if you have 10fps, your shutter might only be open for at most 10% of the time (as opposed to 1%). It isn’t so much spray and pray as timing the first shot, then catching any unexpected action that might happen immediately afterwards – you may not know what a bird will do or where it will fly, or how a football play develops, or when two F1 cars collide etc.

… I believe I mentioned 50 fps as a probable minimum .. 🙂
( for people.. )
If I had wanted good stills of that moment instead of video I would have needed that, or a *very* fast finger with a lot of good luck.

There’s still the problem of freezing motion. Suppose you had 50fps, and a full global electronic shutter that could read out fast enough – you’d need at least 1/100th, preferably 1/200th. In the best case, your shutter will still be closed 50% of the time. That is of course assuming an instantaneous event, which fortunately most are not…

Yes, certainly, the problem of any automatic shooting unless you have two (or four..) cameras in one! 🙂
But how quickly can the best photographer react, even with an instantaneous shutter?
Your point on *anticipation* is certainly important!
( I think 4K video – with fast shutter speeds – with stills extraction can help in _some_ situations … and I certainly agree that 10 fps is very slow for catching people expressing themselves.)

But I get the impression we mean the same thing.
My first comment was really only about illustrating your point on not relying on bursts (whether 10 or even 25 fps).

What I didn’t emphasize, was that even with good anticipation it probably would have been a very lucky shot even for a quick photographer to catch that (amateur) actor’s unconsciously improvised very expressive sudden and very short side glance.

If it had been rehearsed and known to a good photographer, he could have nailed that glance.

Perhaps a 50-100 fps fast shutter (4K video+ ?) might have…

(In the video you saw it, with the right cut, but only half consciously – I mean it influenced you but you probably couldn’t tell exactly what you had seen. The audience at the performance would have seen it – if attentive enough.
The limitations of technology…)

You said, “No horizon, and an unnatural perspective: but our brain says the implied one is flat because the buildings are vertical, which are in turn affirmed by the relationship between the verticals in the buildings to the edges of the frame…”. When you said “but our brain says the implied one is flat..” what does “one” refer to? To the horizon? To the perspective?

So true. I recently tried in-depth photography for the first time: high aperture along with high ISO. Despite getting some noise in the photograph, the details were great. Keeping a low aperture may cost us the out-of-focus centre of attraction.

Good question. A subtle one, but it’s there – stitching with a tele effectively creates a much larger ‘virtual’ sensor, similar to using larger format. You have the angle of view of the wider lens, but the DOF properties and projection signature of the longer FL. This article on MF might help.

Stitching is my favourite ‘trick’ for creating high resolution photos. You can create images of an arbitrary size by choosing your focal length and the number of rows and columns in the panorama. Pre-visualisation and composition are really no different to a single capture.
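As a rough sketch of that “arbitrary size” arithmetic: the stitched output grows with the number of rows and columns, less the overlap shared between neighbouring frames. The 24MP frame size and 30% overlap below are assumed values for illustration, not anything stated above:

```python
# Approximate stitched-panorama dimensions for a grid of frames.
# Each extra row/column adds the frame size minus the overlap region.
# Frame dimensions (6000x4000, ~24 MP) and 30% overlap are assumptions.

def stitched_pixels(cols, rows, frame_w=6000, frame_h=4000, overlap_pct=30):
    """Approximate (width, height) in pixels for a cols x rows stitch."""
    step_w = frame_w * (100 - overlap_pct) // 100
    step_h = frame_h * (100 - overlap_pct) // 100
    return frame_w + (cols - 1) * step_w, frame_h + (rows - 1) * step_h

print(stitched_pixels(3, 2))   # 3x2 grid: (14400, 6800), roughly 98 MP
```

So even a modest 3×2 grid from a 24MP body approaches 100MP of output, which is the appeal of the technique.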

As Ming points out, DOF is the primary difference. However, for compositions with subjects near infinity and no foreground, such as land/city-scapes, DOF is a non-issue. The only thing I would add is the significantly higher cost of failure.

Every photo in the panorama needs to be executed perfectly. You also need a scene with static or low dynamics. Screwing up a single photo produces unusable results. Wind or something changing in the scene might force you to start all over again or… produce unusable results.

You also can’t do any closed-loop experimentation (chimping & reshooting) since the final result is only available once processed. I find this to be a *good* thing. It forces you to exercise discipline and deliberate pre-visualisation. If you don’t achieve the results you intended, you’ll waste both time in the field and time at the computer processing.

Technically well executed panoramas are a significant investment in time and that says nothing about how the results will look!

Up here, as in many other places, the ocean – at least out to the inner coastal islands – freezes in winter. And even further north…
Very stitchable, with care even on ice-fishing or skating days. Bring a tilt-lens to get it all sharp. 🙂

As usual, great article Ming. Very interesting topic.
But sadly, for a newbie like me, it’s hard to implement your article in real-world shooting; maybe my photography soul has not yet reached your level.