Overview

The basic idea is very simple: capture each frame of the video as an individual image and assemble the images into a video. Thus there are two necessary parts in this process:

A camera that can be controlled to capture images at regular intervals. The delay between the frames depends on the length ratio between the real-time event and the playback. For example, suppose you want to create a time-lapse video at 25 fps of an event that lasts 15 hours, with a playback length of 3 minutes. A 3 minute video needs 4500 frames, and spread over 15 hours this corresponds to capturing one frame every 12 seconds.
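The interval arithmetic from the example above can be checked directly in the shell:

```shell
# frames needed: 3 minutes of playback at 25 fps
frames=$((3 * 60 * 25))    # 4500 frames
# event duration: 15 hours expressed in seconds
event=$((15 * 3600))       # 54000 seconds
# capture interval between frames, in seconds
echo $((event / frames))   # prints 12
```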

Video encoder software that can take the recorded frames and encode them into a video. A good video encoder will also allow post processing, e.g. scaling and cropping.
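As a sketch of the encoding step with ffmpeg (the filename pattern and target resolution are illustrative assumptions, not part of the original setup):

```shell
# assemble numbered JPEG frames into a 25 fps video,
# scaling to 1280x720 as a post-processing step
ffmpeg -framerate 25 -i frame_%04d.jpg -vf scale=1280:720 timelapse.mp4
```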

An alternative technique is to use a regular video camera to record a standard video and convert the captured video using fast playback. This technique is only feasible for short events, up to an hour or so.
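A hedged sketch of that fast-playback conversion using ffmpeg's setpts filter (the 25x speed-up factor and filenames are illustrative):

```shell
# speed the video up 25x by shrinking each frame's presentation
# timestamp; -an drops the audio, which is useless when sped up
ffmpeg -i realtime.ogg -filter:v "setpts=PTS/25" -an fast.ogg
```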

Logitech QuickCam Pro 9000

The Logitech QuickCam Pro 9000 is UVC compatible[1] and therefore works extremely well with recent Linux kernels.

When using a webcam with Gstreamer we can record a video at 1 fps. It would be nice if we could time-lapse it by simply treating it as 25 fps input, something like this:

ffmpeg -r 25 -i input.ogg -sameq -r 25 output.ogg

Note that using the -r option on the input only works with raw video streams. If we have the video in an OGG container we have to convert it to individual frames first and then assemble the frames into a new video.
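The two-step approach can be sketched as follows (filenames and the PNG intermediate format are illustrative assumptions):

```shell
# Step 1: dump every frame of the 1 fps recording to numbered images
ffmpeg -i input.ogg frame_%04d.png

# Step 2: reassemble the frames, treating them as 25 fps input
ffmpeg -r 25 -i frame_%04d.png -r 25 output.ogg
```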