Before I “became” a professional photographer, I was a software developer. Windows applications, wireless devices, game software, web… This was the late 90s, and we hacked everything, pushing beyond design limits. We rolled our own scripting languages, designed database engines from the ground up, dissected device drivers, and programmed with interrupts and bit operations. It was awesome. A small alpine-style team of programmers could go anywhere and do anything, applying their craft and creativity to this magical hardware. Create a server to host interactive games for half a billion Chinese cell phone users? Yeah, we can hack that out in our garage.

But all good things come to an end. Eventually the hackers got so good at it that their hacks became the standard, and soon all you had to do was take a class on it and learn how to pull the levers. Projects became so big that it took an army of lever-pullers to do anything cool.

I left that world about the same time digital cameras started hitting the shelves. Hacking photos is pretty much the same: These magical little machines can do much more than what they were designed for. Even better: you can see the results, right there, in a picture. Or in a bunch of pictures.

I’m going to share a couple examples of hacking the digital camera. First I use multiple cameras to create motion in a still image. Then I use one camera and multiple stills to create motion.

credit: Andrew Kornylak / Aurora Photos

An editorial client came to my agency Aurora Photos asking for a time-lapse shot of a guy jumping a gap, in the style of Parkour, a French pastime of navigating the urban jungle in the most efficient and spectacularly dangerous way possible. Because of their editorial standards, they wanted the shot to be a single exposure, in-camera, with no Photoshop. Aurora tapped me to produce this photo, because of my comfort with capturing action with artificial lighting.

Even though I was shooting digital and could experiment in-camera, Max couldn’t do this big jump forever. Plus, there were cops about, so eventually we’d have to move on. I had to do some back-of-the-envelope calculations to figure this one out ahead of time. In these cases it’s best to know what all the limitations are and go from there. So, first I figured out what I knew for sure. Get ready for some mumbo-jumbo:

To separate Max’s positions on the jump across the field of view of the 10.5mm fisheye, I’d need about 5fps, which is the maximum speed on the D2X without cropping the frame. According to Profoto, the 7b’s second-lowest power stop, at 34Ws, recycles in about 0.18 sec, which would just let me shoot at 5fps. Ambient exposure was about 3 seconds @ f/4.5, at ISO 100. That wasn’t changing, since it was nighttime. I needed to figure out the flash exposure so that Max’s positions would be equal to the ambient exposure. Since the flash output was fixed, this was a matter of moving the strobes closer to or farther from Max. I did some test shots to get a rough idea of the flash-to-subject distance. It was far enough away that I could keep them out of the frame if I doubled the strobes up. To simplify things, I would try to light only Max with the strobes, using grids and barndoors, so the flashes would not add to the ambient exposure. I used a second camera, a D200, to fire the flashes at 5fps using Pocketwizard radios. So I just started the exposure on the D2X on a tripod, then reached over and mashed the D200 until the exposure was over. I had assistants hold the flashes so that they could follow Max in an arc.
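If you want to see the back-of-the-envelope math laid bare, here it is as a quick Python sketch. The numbers (5fps, 0.18 sec recycle, 3-second ambient exposure) come straight from the shoot above; the little helper function is just my way of writing it down.

```python
# Sanity-check the burst/strobe timing described above.
# Figures (5 fps, 0.18 s recycle, 3 s ambient exposure) are from the text;
# the helper function is just for illustration.

def strobe_keeps_up(fps: float, recycle_s: float) -> bool:
    """A strobe keeps up with a burst if it recycles within one frame interval."""
    return recycle_s <= 1.0 / fps

print(1.0 / 5)                    # 0.2 -- seconds between frames at 5 fps
print(strobe_keeps_up(5, 0.18))   # True -- 0.18 s recycle just beats 0.2 s

# A 3-second ambient exposure at 5 fps gives at most 3 * 5 = 15 flash pops,
# i.e. up to 15 frozen positions of Max in a single frame.
print(3 * 5)
```

That margin between 0.18 and 0.2 seconds is the whole reason the second-lowest power stop was the one to use: one stop higher and the strobes would start dropping frames.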

See? Mumbo-jumbo. But the numbers and stuff are important. I read the manuals and nerd out about camera specs, but really after tons of shooting like this, I get a gut feeling for what will work. Or what might work.

“Might” is the space I like to work in. It keeps me moving forward rather than doing the same shit all the time.

Or sometimes you move sideways, and stuff doesn’t work. For this shoot, a lot was not working. The terrain I mapped out for Max was too complicated at first. It was hard to control strobe spill. Max hurt his ankle. A cop showed up.

Max had one more jump left in him. This was the very last frame of the shoot, and I think it was the best. The client loved the photos, but for other reasons they didn’t run the piece. Time Magazine was doing a story on Parkour around the same time though, and they licensed a different frame showing the entire landing roll as a two-page spread to open the feature.

Stillmotion – using high-speed bursts of still images to create motion pictures – is another way you can hack the digital camera. It fits somewhere between time-lapse and stop-motion, both techniques that use stills to either compress or stretch time into motion. With stillmotion the intent is to capture real-time movement, like video. The limitations: no live sound and relatively low frame rates. The advantages: print-resolution frames, you can use strobe lighting, and it just looks cool.

I have done several stillmotion pieces over the years, for personal projects, commercials, and even as short clips in documentary and television production.

I shot “In Line” as a personal project, to experiment more with stillmotion and strobes, as well as different ways of handholding and triggering the camera.

The bearded hipster in the video is John Kelso, a good friend of mine who works frequently with me as an assistant. Not only is he a great photographer (who shoots film in the style of William Eggleston), but John was also a punk rock singer and a pro inline skater.

My goal here was to use a single camera – the Nikon D3 – to capture everything for the video, including audio.

I roughly storyboarded the whole thing and came up with a shot list and a set of interview questions. John is a busy guy (“Is this interview over yet? I got shit to do.”), but even if you don’t shoot such jetsetter rock stars, it’s important to do some planning for this kind of thing. It’s kind of experimental, so almost everything goes wrong while you are shooting, and you need time to improvise.

The interviews were conducted on John’s couch, with the camera’s onboard mic as close to John’s mouth as I could get it without feeling too foolish. The sound is not bad actually, considering the D3’s mic is intended for quick voice memos.

I shot a series of establishing clips that I knew I’d want: John walking out of the door with his skates, John on his back porch, John’s camera collection. To keep the camera steady and to get smooth camera movements, I had the D3 on a Glidecam 4000, which is a small counterweighted rig used to make steady handheld movements with a video camera. It’s awkward for the D3. You’d never use it for still shooting. You can’t just press “record” and have it start firing off frames continuously, and if you even touch the shutter, it throws off the balance of the Glidecam. For the same reason, you can’t have any wires coming off the camera. So I devised a wireless trigger using a Pocketwizard receiver connected to the D3 via an N90M3 10-pin-to-mini-jack connector. This allowed me to trigger the shutter remotely using a Pocketwizard transmitter and not upset the balance of the camera on the Glidecam. (Note: the N90M3-P is a better solution, since it has a pre-trigger that gives you better shot accuracy.)

For a lot of the skating shots I also used a strobe, a Profoto 7b firing at 11fps, at the lowest power setting (it recycles in 0.09 seconds at this setting). I triggered the strobe from the camera using another transmitter on the D3’s hotshoe. So for each shot, I had an assistant (Andy Scott) mash down a Pocketwizard transmitter which was on one channel. This triggered the D3. The D3 triggered the strobe via the hotshoe Pocketwizard on another channel. Complicated. Spectacular.

With the strobe I could do things you can’t do in video, like drag the shutter. I did that here and there, but for the most part things were complicated enough. I mostly used manual focus since you can’t really trust the AF at 11fps, especially shooting blind (the shutter blackout means you can’t see what you are shooting). Once the skating started, most everything was run-and-gun. We’d get kicked out by a rent-a-cop, hit another spot, get kicked, hit the grocery store, get kicked…

Each “scene” consisted of a burst of photos firing at 11fps, which lasted about 10–12 seconds. I was shooting in cropped mode at JPEG high, to give me the highest speed and deepest buffer possible. Once I had all the clips, I ingested them into Adobe Bridge, organizing them into folders. Each of those folders then contained about 130 JPEG files. I opened up 130 JPEGs at a time in Adobe Camera Raw, figured out the look I wanted, and then applied it in batch to all the other images shot in the same lighting conditions. Then I exported everything as TIFF files to separate folders.
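Those folder counts line up with the burst math. Nothing camera-specific here, just the arithmetic from the numbers above:

```python
# Burst length sanity check: 11 fps sustained for 10-12 seconds.
fps = 11
frames_short = fps * 10
frames_long = fps * 12
print(frames_short, frames_long)  # 110 132 -- consistent with ~130 JPEGs per scene folder
```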

I imported those TIFF files into a Final Cut Pro project, which was set to 24fps. Here is where it gets a little tricky. Since my frames were shot real-time at 11fps, I had to multiply frames so that the footage would play back at 24fps and still look real-time. You can either double each frame, in which case things will look slightly fast-motion, or you can use some tools in Final Cut and Cinema Tools to conform it more smoothly to exactly 23.976fps, in which case you will have to blend some frames. I did both in this video, just for kicks. You can make it easier by shooting at 10fps and setting your timeline to 30fps.
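The trade-off between those options is easy to work out on paper. A rough sketch, using the fps figures above (the arithmetic is standard; nothing here is specific to Final Cut):

```python
# Option 1: duplicate every source frame once.
# 11 fps doubled -> 22 effective fps, but the timeline plays at 24 fps.
source_fps = 11.0
doubled_fps = source_fps * 2
speed_factor = 24.0 / doubled_fps
print(round(speed_factor, 3))     # 1.091 -- about 9% fast-motion

# Option 2: conform to 23.976 fps. Real-time playback, but ~2 extra frames
# per second have to be synthesized, hence the frame blending.
extra_frames_per_sec = 23.976 - doubled_fps
print(round(extra_frames_per_sec, 3))   # 1.976

# The easy route: shoot at 10 fps and triple each frame on a 30 fps timeline.
print(10 * 3)                     # 30 -- exact real-time, no blending needed
```

That last line is why the 10fps/30fps combination is so convenient: the multiplication comes out to a whole number, so every frame is shown exactly three times and nothing has to be blended or retimed.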

I took the sound bites and music and sequenced everything in Final Cut, and exported it as (letterboxed) HD video.