Behind the shot: 'Louisville in Motion'

The time-lapse video above started out by accident. I was learning how to use a slider to create a motion-controlled time-lapse with photos, rather than sped-up video. My first attempt turned out alright, but hardly anything ever turns out exactly the way you envision it, especially when you're learning something new.

I tried a few other motion-controlled time-lapses, and when they were turned from photos into a video file, I was fairly pleased with the results. During these early efforts, while learning how to use this new gear, I would browse Vimeo and check out what time-lapse videos others had made. I was amazed at some of the city montages on the site, and figured since I had already created a few of my own, I would make a video showing off Louisville, Kentucky during the summer.

Exposure

First I'm going to talk about exposure, since many have asked if this video is made up of 'HDR' shots. The answer is no: each frame of the video was converted from a single Raw file. The reason very little of the video looks blown out, while still retaining detail in the shadows, comes down to exposure settings, how the images were saved, and the techniques I used to process them.

Here's a video that shows how a typical clip looks before and after exposure and color correction.

Before capturing each scene I take a few test shots, with the Panasonic GH2 set to highlight overexposed areas. Since I want to retain detail in the brightest part of the image without inducing excess noise in the darker parts of the photo, I let a small portion of the clouds 'blow out,' or become pure white with no recoverable information. Because clouds are usually white, if a small percentage is overexposed I'm not losing much detail in the highlights, while maximizing what the camera can detect in the darker areas.

The blue circle marks the section of sky that is overexposed.

When exposing to protect the bright parts of an image, pictures usually look pretty dark overall, since the sky and especially clouds are generally brighter than things on the ground. To brighten darker parts of an image we want to have as much information from the camera's imaging sensor as possible. When saving something as a JPEG, the camera throws away a lot of what the sensor is capable of reading; however when shooting Raw, many more color values are saved that we can boost later.
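To put rough numbers on that difference: a JPEG stores 8 bits per color channel, while the GH2's Raw files record roughly 12 bits per channel (those bit depths are my assumption for illustration, not figures from the article). The extra bits translate into many more tonal levels available when lifting shadows:

```python
# Rough illustration (assumed bit depths, not the author's figures):
# the number of distinct tonal levels per channel grows exponentially
# with bit depth, which is why Raw files survive shadow boosts better.
def tonal_levels(bits):
    """Distinct brightness values a channel of the given bit depth can hold."""
    return 2 ** bits

jpeg_levels = tonal_levels(8)    # 256 levels per channel
raw_levels = tonal_levels(12)    # 4096 levels per channel

print(f"JPEG: {jpeg_levels} levels, Raw: {raw_levels} levels "
      f"({raw_levels // jpeg_levels}x more tonal information to work with)")
```

The 16x difference is why brightening a dark JPEG quickly shows banding and noise, while the same adjustment on a Raw file still has real data to draw on.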

Finally, Raw files have to be processed to create an image. I'm not going to go into detail about how, since each scene will require different settings and there are many Raw converters available. But most Raw conversion software can darken highlights and brighten the shadows, as well as change color temperature, and even adjust specific shades and brightness of color. For each scene I spend about an hour trying to get the cityscape to my liking. I admit it's not how things look in real life, but then again, what is? I make my adjustments to match how I remember those moments.

Motion blur and depth of field

When I first started working on this video I didn't have any ND filters, so during the day shoots I would generally stop down to about F8 to get as much of the scene in focus as possible, then use a high shutter speed to ensure good exposure for the highlights in my photos. I didn't mind the stop-motion appearance of the cars or people with this technique because the clouds and the buildings are the really important parts, and they look smooth. As time went on I began experimenting more and ended up getting a Formatt 77mm Neutral Density 2.4 Filter, which lowered the light hitting the sensor by eight stops. That allowed me to play around with blurring motion in the daytime.
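The eight-stop figure follows directly from the filter's optical density: every 0.3 of density is one stop (a halving of the light). A quick sketch of the arithmetic, using 1/500s as an illustrative starting shutter speed (my example, not a value from the article):

```python
# Each 0.3 of optical density cuts one stop (halves the light reaching
# the sensor), so an ND 2.4 filter is 2.4 / 0.3 = 8 stops.
def nd_stops(density):
    """Stops of light reduction for an ND filter of the given optical density."""
    return density / 0.3

def equivalent_shutter(base_shutter_s, stops):
    """Shutter time giving the same exposure after losing `stops` of light."""
    return base_shutter_s * 2 ** stops

stops = nd_stops(2.4)
print(f"ND 2.4 = {stops:.0f} stops")
# A 1/500s daytime exposure needs 2^8 = 256x more time behind the filter:
print(f"1/500s becomes {equivalent_shutter(1/500, stops):.3f}s")
```

That 256x factor is what turns a freezing-fast daytime shutter speed into a half-second exposure long enough to blur moving cars and people.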

After processing one time-lapse scene and seeing cars and people turn into blurry undefined streaks I decided I much preferred a higher shutter speed during the day. Strangely enough, as much as I appreciated a sharp, blur-free image during the day, I loved the way headlights and tail lights blur on vehicles at night and I would sometimes leave the shutter open for a couple seconds.

A lot of the video is about seeing something ordinary in a way that's not possible in real life, and the streaking of lights emphasizes this. The cool thing about shooting at night is that since the light level isn't really changing much, you don't have to be nearly as consistent about taking the next photo every, say, 15 seconds. You can wait until something interesting happens even if the time between shots fluctuates radically. A good example of this would be the shot of the fountain at night and lights darting through the frame.

This was taken in a quiet residential neighborhood and cars didn’t go by all that often. To make the scene more exciting, sometimes I would wait 30 seconds between shooting and sometimes I might have waited a minute and a half to get a really neat shot when a bus would go by, or two cars would be in the frame at once.

One other reason I shot these sequences for greater depth of field is I wanted the viewer to decide what they want to look at. Usually there is a main area of interest for a shot where I use camera movement and framing to help the viewer see what I think is most important, but it’s really up to the person watching to pick out what they want to focus on. Even now I’ll re-watch the video and see something I didn’t the first hundred or so times around.

Camera support

To add interest to the video I thought there should be some form of movement besides just the clouds or sun changing position. Since most buildings move very little, I decided the camera should physically travel in each shot. The items listed aren’t the only, or necessarily the best tools, but I believe they worked well to help create the movement in this video.

Manfrotto 535 legs. These are three-stage carbon-fiber tripod legs. They are light, which is nice when climbing to the top of a parking garage, and they can get really low to the ground while also extending a little over my head when a tripod head is attached. They are also rated to hold 44 lbs (20 kg), which is more than enough for any of the time-lapse gear used in the video.

Sachtler FSB-8 head. Truthfully, this head is overkill for anything in this video; a much cheaper head with a half-ball base would have worked fine. But this fluid head did make leveling for hyper-lapses more enjoyable, and it's great for everyday video use.

Kessler Pocket Dolly Ver1. The slider was used for the shorter, slower moves and for most of the day-to-night time-lapses seen in the video. It's basically a rail with a carriage the camera moves along.

Kessler Shuttle Pod Mini. This is similar to the Pocket Dolly, but it's a modular device that can range from 4 feet to 16 feet depending on how many sections of track you decide to add. This was also used for shorter moves, but mostly when I wanted a longer vertical move than I could get with the Pocket Dolly.

Oracle controller and motors. The two previous items aren't much good for a time-lapse without a motor to move the camera between each photo and a controller that waits a set amount of time before turning the motor on and off. The controller and motor are what make the short moves look smooth.

Hyper-lapse

I most often get asked how the long dolly-type moves are made. These shots are called hyper-lapses and don't require much in the way of specialized video gear. Most decent video tripods have a recessed bowl built into the tripod legs, and then the tripod head attaches to a half ball that in turn sits in the tripod's recessed bowl. This half ball allows the tripod's head to be leveled so when panning and tilting, the horizon will always be pretty close to horizontal (I'll explain why a pistol grip or ball head aren't my preferred style of head in the 'helpful hyper-lapse tips' section below).

Above is a video head attached to a half-ball mount, which fits into the tripod's bowl base.

When contemplating a hyper-lapse sequence, I first walk the distance I want to record while looking at a structure off in the distance; this will be my object of interest. If the parallax effect looks interesting, I'll focus in on a very specific point on my object of interest. For example, if the object is a building, I'll pick out the top left corner of it and retrace my path, making sure nothing ever obstructs my view: the top left corner of my building is now my point of interest. If a light post, tree, or sign ever blocks my line of sight to this point, I'll either scrap the location, move further back, or move in front of whatever is blocking my view and see if that fixes the problem. If my point of interest is no longer blocked while walking, I'll start the hyper-lapse.

The object of interest is colored light red, with the point of interest circled in yellow.

The first thing I do is level the tripod's half-ball, then pan and tilt the camera until I get my framing correct. Since the camera I used for this project is a Panasonic G series, I was able to set a vertical line and a horizontal line to act as an anchor point that I would always snap to the point of interest. This is important so the framing between shots is almost identical to the shot before.

I used a Panasonic GH2, but if your brand of camera doesn't have this option, I would either use one of the autofocus points in the optical viewfinder or, if you want to use live view, tape some fishing wire across the screen in both the vertical and horizontal directions so the point where the lines cross becomes your cross hairs.

Once a photo is taken, I move the tripod in my preferred direction of travel, about the length of a shoe, level the half-ball, pan and tilt so the camera's anchor point matches up with the point of interest, and take another photo. I repeat until I run out of space or have enough photos to make my desired sequence. Once I've finished shooting, the video will look very shaky, so I use a video stabilization program to smooth out the scene (I'll briefly go over this in the post-production area).

The point of interest is lined up with an anchor point onscreen.

A few helpful hyper-lapse tips

Finding an edge. Moving the camera in a straight line will make for a smoother hyper-lapse. The edges in sidewalks or curbs are great for this. Pick a sidewalk that is straight and place two of the tripod's legs against the edge of the curb. Level the tripod head and take a photo, then continue lining up those two tripod legs against the edge of the curb as you move along to your next shots.

Setting duration. To figure out how far you want to move between each shot, think of how long you want the scene to last. I recommend at least five seconds. In countries that use NTSC as the video format you will most likely have a video running at 24 or 30 frames per second (25 frames per second in PAL countries). For this project I decided I would make everything 24 frames per second. To get five seconds of footage I multiplied five seconds by 24 frames and came up with 120. So the camera would have to take a photo and be moved 120 times between my starting and ending point.

Calculating number of shots. If you're not sure how much distance to move the camera each time, just walk along the ground you want to cover and count your footsteps. Next figure out how many lengths of your shoe cover each step. I usually cover two shoe lengths between each step, so if it took me 30 steps to walk the path I wanted to cover, that would come out to 60 lengths of my shoe. Knowing I want to take 120 photos, I would move a particular leg of my tripod a half shoe length between each shot.
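The two calculations above (clip duration to shot count, then path length to tripod spacing) can be sketched together, using the example numbers from the text:

```python
# Sketch of the hyper-lapse planning arithmetic described above.
def shots_needed(clip_seconds, fps):
    """Number of photos required for a clip of the given length and frame rate."""
    return clip_seconds * fps

def move_per_shot(steps_walked, shoe_lengths_per_step, total_shots):
    """Shoe lengths to move the tripod between each photo."""
    return steps_walked * shoe_lengths_per_step / total_shots

shots = shots_needed(5, 24)            # 5 s at 24 fps = 120 photos
spacing = move_per_shot(30, 2, shots)  # 30 steps x 2 shoe lengths / 120 shots
print(f"{shots} shots, moving {spacing} shoe lengths between each one")
```

Plugging in the article's numbers gives 120 shots at half a shoe length apart, matching the figures worked out by hand in the text.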

Intervalometer. For picking an interval between shots on a hyper-lapse, it's not usually necessary to have an intervalometer. The only time I do use an intervalometer with a hyper-lapse is with day-to-night or night-to-day shots, because they take two to three hours.

Horizons with ultra-wide lenses. Use of a half-ball tripod is more important when using ultra-wide-angle lenses. Pistol grips and ball heads make it harder to keep the horizon line straight when you move the tripod, and the extreme distortion of ultra-wide-angle lenses makes it harder to keep the horizon consistent when you're only using the one anchor point.

Pay attention. Hyper-lapsing is very repetitive. Don't let your mind wander too much or you might snap your anchor point to the wrong edge of the building. This is more applicable if your point of interest is a specific window on a building where all the windows look the same.

Secure that zoom. If using a zoom lens, tape the zoom ring unless you want to incorporate a zoom into the hyper-lapse. Having your focal length slip between shots can ruin the sequence.

Click the link below to read page two of Stemen's behind the scenes look at creating his time-lapse video.

WOW!!!!! I lived in Louisville in the 1980's, and was just back visiting friends in January. We toured the city to see how much it's changed, and.. in 4 1/2 minutes I just relived it all again!!! This is groundbreaking work you are doing!! And I bet as a collection of stills, this would play extremely well on full-size movie screens as well. (I bet the Louisville Tourism office LOVES YOU!!)

Thanks Paul, glad you like it! I hope the Tourism office likes it, I heard the Louisville convention and visitor bureau likes it but haven't heard anything from the tourism office...unless they are the same place. I'm working on a new video mostly focusing on the trees in bloom right now.

Wow, great stuff! You said to tape the zoom ring, but in many shots zooming was used to good effect. Could you shed some light into this? My guess is you chose two or more anchor points and framed accordingly and the zoom effect would happen automatically as you moved? It's just amazingly smooth and constant. :) Or did you shoot at constant wide angle and move your crop in post? I doubt you could set / read off the current focal length with such accuracy that you could actually zoom the lens in increments "by hand" across the shot? Thanks for sharing.

I recommend taping the zoom ring when you aren't trying to do a focal length change during a shot, since I've ruined parts of shots before when the focal length accidentally shifted. For the shots where the focal length changed, I did those by hand. For example, I would physically move the camera backwards from a building, then barely increase the focal length by hand...almost imperceptibly. Stabilizing footage can do some amazing things as long as you try your best to make things as smooth as possible in camera.

It's a good question for sure. Shooting video with the camcorders I own wouldn't let me shoot in as high a resolution as with a still camera, although there are camcorders out there that will shoot 6K. Also, most camcorders don't shoot raw, which I really needed with how I exposed the images. I admit there are video cameras that shoot raw as well now. The hyper-lapses wouldn't have been possible without a regular photo camera because I physically moved and leveled the camera between each shot. You could use a dolly for some of them, but that would have been a pain to build up and super expensive. Shooting stills also saves a lot of hard drive space, because you would end up throwing away an incredible amount of frames when you speed the video up. So taking a bunch of stills is significantly cheaper, it's a lighter load to carry, quicker for editing, and the software workflow I used was for still photos.

Hi Paul, for the vast majority of the hyper-lapse type shots the flip-out LCD didn't really make much of a difference...I mainly used the EVF for those since I could see the cross hairs easier if the sun was shining. For the slider shots it helped out quite a bit. I could have made it work without the swivel LCD, but it would have been much more frustrating and a little more difficult. Usually the slider was at some sort of awkward angle, either low to the ground or doing a vertical move, which would block the back of the camera. If the LCD was embedded in the camera's back, I would have had to remove the camera, set focus and focal length, then reattach and kind of guess what my framing was. In short, it was very useful for maybe half the shots, but a flip-out LCD would only slightly influence my buying decisions. I used a GH2 because that was the best camera I owned, and the multi-aspect sensor gave a slightly wider field of view and a little more resolution than most M43 cameras.

Thanks! The staff at dpreview deserve a good deal of credit for making it readable and coming back with more questions to help it out. Much appreciation goes out to Shawn and Barney for all their help!

Does anyone have an idea why he might use After Effects? As far as I know, Premiere Pro can do all of the things he's listed, i.e. "specify the frame rate of the video, trim the video if needed, stabilize hyper-lapses, and export your video into a file that can be read on other computers or editing programs"?

I had to import the raw photos as an image sequence which I don't think Premiere can do...I could be wrong on that though since I never even attempted it. With After Effects you can change the preview quality to whatever you would like...plus you can mask and keyframe easier if you need it.

In Premiere Pro (don't know about Elements), you can also change preview quality and keyframing is as simple and intuitive to use as AE. You can't mask in Premiere though as it's not built for compositing.

I meant raw photos. It's entirely possible that Premiere would let me do it; I've just never tried it. I do find it necessary to export the raw photo sequence as a video file because the raw photo files play back soooo slow.