The question asked by pretty much everyone outside the film industry and a surprising number of people in it is: “don’t cameras do that themselves these days?”, or “why can’t the cameraman do it?”. When given the explanation they shake their head in disbelief. Focussing a lens without looking through it seems a strange way to spend a working day. Yet that is how it is done. And once a few things are understood it does begin to make some sense!

The first thing to appreciate is the size and complexity of a film camera kit. It may well fill a 7.5 tonne truck, be worth upwards of half a million pounds, and weigh several hundred kilos when taken out of its plethora of aluminium cases and assembled. And this is delicate equipment, susceptible to damage by scratching, knocking or dropping, wind, dust, dampness, cold or even, in some cases, just being looked at in the wrong way.

Technocrane on a tracking vehicle.

Then: this mountain of delicate equipment can be assembled in umpteen different configurations depending on the shot. It can be operated hand-held; on a tripod, dolly or crane; it can be used with a Steadicam, perhaps on a Segway or quad-bike. It may need to be mounted on a tracking vehicle, a roller-coaster, or even an actor. The right way up, or upside down. Underwater, or in the air. It may need to be protected from an explosion, a sandstorm, maybe chemicals or raw sewage. You name it, every shot you see on the screen has its particular demand on camera configuration.

Finally, the pressures of shooting (the tight schedules, the demands of the director, the tendency for location, weather, animals, children and officialdom to conspire against you) mean that this large, heavy, expensive, delicate, complicated bunch of metal and glass needs to be assembled and re-assembled over and over again in new and often innovative ways… like yesterday! It needs to be done quickly and efficiently.

It’s not a one-man job. Often it’s a three, four, or five man job. So roles need to be split. The first split is between the guy who talks to the director about all the arty stuff to do with lighting and where to point the camera (let’s crane up slowly then tilt up to the sky as the spirit of the heroine ascends to the heavens), and the guy who has to get the kit ready to do the shot (change to wide-angle lens, remove base-plate, fit wireless remote-focus and video tap, check matte-box, liaise with crane op regarding power and weight, and so on). It’s rather like the captain and chief engineer on a ship, but with fewer beards.

Having set up a shot (and rehearsed it as necessary) it is time to “turn over”. The cameraman (whether he’s a CAMERA OPERATOR, or LIGHTING CAMERAMAN) has a tough and very skilled job. It may seem easy to point a camera at something, and at a very basic level it is. The professional camera operator, though, needs always to be thinking about composition, movement, not allowing the edge of set to come into frame, performance, his own and the director’s vision for the overall film, cutting points, boom shadow… The list is endless. Often the shot is physically challenging, requiring quick reactions and high manual dexterity. Then there are the frequent occasions when you only get one go: you either get the shot or you don’t – the stunt vehicle will be in flames, the squibs all gone, the building demolished. The pressure can be immense.

Which brings us to focus. As anyone who has tried to focus an old-fashioned camera will know, it is often difficult to find that point at which you’re sure the image is sharp. You have to rotate the barrel of the lens back and forth, slowly zeroing in on that sharp point. Then you’re not quite sure and you do it again. Autofocus systems can work in just the same way – “finding” the focus point by trial and error. And this is on a stills camera, often with a still subject.

Imagine trying to do this while following a subject around a room. Then imagine trying to do it through binoculars! (Binoculars are “long lenses” with a high magnification factor. When things are out of focus on a long lens they look much more blurred, and the point of focus is trickier to find. Filming is often done using lenses with these characteristics because the out-of-focus blurred quality is much used in the visual language of cinematography to aid composition and lend “depth”.) How about from a moving car?

It is clear that in many day-to-day filming situations it is going to be nigh on impossible for a cameraman looking through a camera to focus on a moving subject without allowing the subject to become blurred, and having to “find” focus. Often it will be completely impossible. Allowing focus to divert the cameraman’s attention from the 101 other things he has to think about will result in bad and often unusable shots. It’s no use having a shot featuring a more or less sharp De Niro in full flow if there’s a microphone hovering over his head, a lamp stand in shot and the frame jumps every time the camera moves.

The focus puller’s job then, is to use a variety of techniques to ensure the image is sharp where necessary (or in some cases soft – artistic decisions about composition are one of the key aspects of the job, and one that is difficult to automate). The basic practice is to:

Test and calibrate lenses so that there is an accurate scale on each lens, in feet and inches, from the camera to the subject.

Take the time during rehearsal to mark actors’ and other subjects’ positions in relation to the camera, and take measurements with a tape measure.

When it’s time to shoot, use those marks and distances, combined with a certain amount of judgement and guesstimation, to focus the lens on the subject.

After the shot, make checks to ensure that the camera has functioned as it should have done, and make the call as to whether or not another take is required due to camera or focus issues.
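The margin for error in those measurements is the depth of field – the range of distances that will look acceptably sharp. A rough sketch of the standard thin-lens depth-of-field formulas gives a feel for how tight that margin can be (the numbers here – a 50mm lens at f/2.8 and a 0.025mm circle of confusion, a common 35mm-format assumption – are illustrative, not prescriptive):

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.025):
    """Near and far limits of acceptable focus (thin-lens approximation).

    coc_mm is the circle of confusion: the largest blur spot still
    judged 'sharp'. 0.025mm is a common 35mm-format assumption.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + (subject_mm - focal_mm))
    if subject_mm - focal_mm < hyperfocal:
        far = hyperfocal * subject_mm / (hyperfocal - (subject_mm - focal_mm))
    else:
        far = float("inf")  # focused at or beyond the hyperfocal distance
    return near, far

# 50mm lens at f/2.8, subject 3 metres away:
near, far = depth_of_field(50, 2.8, 3000)
print(round(near), round(far))  # roughly 2770mm to 3270mm of acceptable focus
```

Open the aperture up or fit a longer lens and that half-metre window shrinks fast – which is why the candle-lit 200mm close-up later in this piece is so hard.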

Shooting “The Wire” – focus pullers sitting to the left of the cameras.

There are endless variations and challenges. The main one is not having the time for a rehearsal or the ability to take marks. This is increasingly common and requires the focus puller to “wing it” (rely entirely on their judgement of distance), which can be very challenging.

There are numerous aids. On a film camera the job is done “blind” because the film will not be developed and projected until the following day. But with HD, increasingly, good quality monitors are available. They need to be used with care though: even an apparently excellent monitor may not be full resolution and full contrast, and reacting to a monitor can leave the focus puller just behind the action. Systems which partially automate the process can be used, but require skill to get the best out of them.

It’s a bit of a black art – it takes a while to learn the tricks of the trade and gain confidence, and it requires a certain aptitude. But with practice it is possible. The best in the business can nail an unrehearsed boat-to-boat close-up, lit by candle light on a 200mm, time and time again. (Very tricky because boat-to-boat means no marks, no references; candle light means wide aperture so shallow focus; similarly the 200mm is a long lens so shallow focus, more so on a close-up. Unrehearsed means you are “using the force, Luke!”)

Remote focus device – see post title pic for manual version.

Then there’s all that kit to worry about, and the CLAPPER LOADER, trainee, and camera car driver. There’s the weather report, additional equipment to hire in, faulty or broken items to be returned, liaising with production, making sure there’s a supply of good strong coffee even in a field at 3am…

The QUBIT has the same two states as a classical BIT, 1 / 0, on / off, etc. However, it also has the ability to be a superposition of the two states – i.e. both 1 and 0.

Superposition is one of the mind-blowing concepts of quantum physics: the superposed state is a property predicted by Schrödinger’s equation, he of cat-in-the-box fame. (The cat is both dead and alive until you open the box and look!)
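For the curious, the “both 1 and 0” idea can be sketched in a few lines of Python. A qubit is described by two complex amplitudes; the squared magnitude of each gives the probability of measuring 0 or 1 (this is a toy illustration of the maths, not a quantum computer):

```python
import math

def probabilities(alpha, beta):
    """Measurement probabilities for a qubit state (alpha, beta).

    alpha and beta are complex amplitudes; a valid state satisfies
    |alpha|^2 + |beta|^2 = 1.
    """
    p0 = abs(alpha) ** 2  # chance of reading a 0
    p1 = abs(beta) ** 2   # chance of reading a 1
    assert math.isclose(p0 + p1, 1.0), "state must be normalised"
    return p0, p1

# The two classical bit states:
print(probabilities(1, 0))  # always reads 0
print(probabilities(0, 1))  # always reads 1

# An equal superposition - "both 1 and 0" until measured:
s = 1 / math.sqrt(2)
print(probabilities(s, s))  # 50/50 either way
```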

This may seem esoteric but scientists are actually using spinning electrons in semi-conductors as qubits to perform elementary logic operations, and classical super-computers use quantum simulation for solving complex equations in astrophysics and climate change. This technology would be perfect for incredibly powerful video processing.

Who knows, the Playstation 16, the RED Epic plus plus plus (where do you go from epic?), the Apple iBrain etc., may all use quantum computing in 50 years time, creating multiple alternative hyperrealities in our minds. Film will still be better though. It’s ’cause the light goes into the emulsion!

Meaning “without colour” – a black and white image is an achromatic image.

Usually used with regard to a lens that has been corrected to some degree for CHROMATIC ABERRATION.

Sometimes used interchangeably by manufacturers with APOCHROMATIC, which has a specific definition of its own.

An ACHROMATIC cine lens typically uses a combination of elements of different dispersion (for instance crown-glass and flint-glass) in order to cancel out the effects of CHROMATIC ABERRATION. The classic achromatic doublet uses convex and concave elements bonded together to produce a positive lens of less power than the convex element on its own. While the optical power of the convex crown element is greater than that of the concave flint element, accounting for the overall positive nature of the doublet, the greater dispersion of the flint glass allows the CHROMATIC ABERRATION caused by the convex element to be more or less completely counteracted despite the concave element’s weaker optical power.
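The balancing act described above follows from the standard textbook achromat condition: the two elements’ powers, weighted by their Abbe numbers (the conventional measure of dispersion – higher means less dispersive), must sum to zero. A quick sketch, using illustrative Abbe numbers in the range of typical crown and flint glasses:

```python
def achromat_powers(total_power, v_crown, v_flint):
    """Split a target power between crown and flint elements.

    Achromat condition: p_crown/v_crown + p_flint/v_flint = 0,
    subject to p_crown + p_flint = total_power.
    V (Abbe number) is higher for less dispersive glass.
    """
    p_crown = total_power * v_crown / (v_crown - v_flint)
    p_flint = -total_power * v_flint / (v_crown - v_flint)
    return p_crown, p_flint

# A 10-dioptre doublet from a crown (V ~ 64) and a flint (V ~ 36):
pc, pf = achromat_powers(10.0, 64.0, 36.0)
print(pc, pf)  # strong positive crown, weaker negative flint
```

Note how the result matches the prose: the convex crown element comes out stronger than the finished doublet, and the concave flint element is weaker and negative, yet its higher dispersion lets it cancel the crown’s colour error.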

While the effect of ACHROMATIC lens construction is to greatly reduce CHROMATIC ABERRATION, there is usually a residual effect which cannot be completely overcome in practical designs, whereby the green light component focusses at a different distance to the red and blue components, which focus together. This is the LONGITUDINAL SECONDARY SPECTRUM and is a key limit on lens performance.

“x-BIT” when used to describe a camera system refers to the colour BIT-DEPTH of either the internal processing or the output video format – they are not necessarily or even usually the same.

For instance, most DSLRs have 14-BIT internal colour processing when grabbing image information from the sensor; this 14-BIT image is then compressed to an 8-BIT format that is saved as a video file by the camera.

The higher the BIT-DEPTH the greater the number of gradations between the 0% and 100% levels on a colour channel, and hence the more subtle the tones. The human eye can detect approximately 10 million distinct colours – fewer than the number encoded even by 8-BIT colour systems (which, with 256 values for each colour channel, can encode 256x256x256 colours, more than 16 million).
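The arithmetic above is easy to check: each channel gets 2 to the power of the bit depth levels, and three channels multiply together.

```python
def colour_counts(bits_per_channel):
    """Levels per channel and total encodable colours for an RGB system."""
    levels = 2 ** bits_per_channel  # values from 0 to levels - 1
    return levels, levels ** 3      # three channels: R, G and B

for bits in (8, 10, 14):
    levels, colours = colour_counts(bits)
    print(f"{bits}-bit: {levels} levels/channel, {colours:,} colours")
# 8-bit gives 256 levels and 16,777,216 colours;
# 14-bit gives 16,384 levels and over 4 trillion colours.
```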

Despite this, higher BIT-DEPTH systems are advantageous. The human eye is not equally sensitive throughout its dynamic range. The high BIT-DEPTH, linear, colour data from the sensor can be intelligently encoded in-camera or in post production to preserve the colour detail to which the eye is most sensitive – often by encoding in a logarithmic COLOUR SPACE, and using COLOUR SUBSAMPLING, and/or a BAYER MASK.