I would like to make a video that shows a long exposure progressing as it is exposed. It would start out dark and gradually get lighter, and lights that pass in front of the camera would gradually leave trails.

My options that I know of are either:

1) Instead of doing a single long exposure, do many short exposures sequentially and then create a video where each frame additively overlays the previous photos. This would be challenging because my intervalometer is external and would be difficult to configure so that it triggers another picture as soon as the previous one finishes exposing. Essentially there would be gaps between the photos.

Edit: It just occurred to me that for #1, I could set the exposure to something like 1/10th of a second, set the drive to continuous, and tape the button down. The only delay between photos would be the time to save them.

2) Write a program that processes the sensor data from the raw format, splitting it up into frames based on what the sensor collected at different segments of time. I don't know if the raw format supports this; after some brief reading on the format, it isn't clear to me whether it's possible. That's really the main purpose of this post: determining whether the raw format contains exposure-over-time information that would allow this type of post-processing, i.e. such that I could recreate the effect of the sensor capturing more and more light over time. I'd rather not spend a lot of time figuring out the format/APIs and prototyping a program if it's not even possible.

So I am looking for an answer that indicates whether #2 is possible, alternative suggestions for how to accomplish this, or solutions for accomplishing #1. (I don't expect anyone to go out and write a program for me; I'm just pinging the community's knowledge of a possible existing solution or workaround.) I already have a free/open-source program for stitching together simple time lapses. For #1 to work, I would need a variation of this that additively overlays subsequent photos. Something that supports tweening would probably eliminate the anomalies caused by the small gaps between each photo.

When suggesting software, please note whether or not it is expensive. I'm doing this as a one- or two-off hobby project and would rather not pay a lot of money for a commercial video effects package, but it won't hurt for posterity's sake, in case others with the same question come across this post and are willing to shell out the big bucks.

A variable-density filter sheet that is slid across the lens, or a series of neutral-density filters that can be added or removed as desired (or a binary-weighted set), would probably be useful. It does not address all the issues, though.
– Russell McMahon, Apr 11 '12 at 7:29

Generally nothing is moving except for some blinking lights, which would gradually draw light trails onto the exposure. Since they are blinking, I'm not too concerned with them occasionally being missed by method #1. Lots of good ideas. I will have to try to read through them after I get my taxes done :) I think I am going to try #1 and manually create a few frames in GIMP as a test to see how it looks.
– AaronLS, Apr 12 '12 at 15:11

5 Answers

First off, #2 is not possible. #1 is kind of possible, although most cameras have relatively limited buffers, so I would advise shooting video to produce a video rather than RAW images. You won't need the extra resolution anyway.

While I have not tried this, here is my intuition; it would require at least some scripting:

Shoot a video of your scene at the fastest frame-rate your camera supports.

Split the video into frames (free tools can do this)

Sum each frame with the next few (say 4-9) consecutive frames into a new frame.

Create a video from the new frames you created.

This will create an interesting rolling-exposure effect, since frame 1 will be the sum of frames 1-10 (say), frame 2 the sum of frames 2-11, frame 3 the sum of frames 3-12, etc. Your video here can be of arbitrary length. If you do this, please post it somewhere and let us know!
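The summing step above can be sketched in a few lines of Python. This is a hypothetical sketch using numpy; it assumes the frame-extraction step (with ffmpeg or similar) has already produced a list of 8-bit arrays:

```python
import numpy as np

def rolling_exposure(frames, window=10):
    """Average each frame with the following frames in a sliding
    window, so moving lights leave short trails."""
    out = []
    for i in range(len(frames) - window + 1):
        # Accumulate in float64 so 8-bit values don't overflow,
        # then divide by the window size to restore the 0-255 range.
        acc = np.zeros(frames[i].shape, dtype=np.float64)
        for f in frames[i:i + window]:
            acc += f
        out.append((acc / window).astype(np.uint8))
    return out

# Tiny synthetic demo: 12 uniform gray frames, window of 4.
demo = [np.full((2, 2), 20 * k, dtype=np.uint8) for k in range(12)]
result = rolling_exposure(demo, window=4)
```

Averaging (rather than a raw sum) keeps each output frame within the 8-bit range; the resulting frames can then be written back out and reassembled into a video with the same free tools used for the split.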

If all you want is to show a single exposure progressing, then you can cheat by getting an OM-D E-M5, which will output this directly over HDMI. All you have to do is enable Live Bulb mode and start a bulb exposure. Any HDMI recording device can be used to capture the stream.

If I understand what you are trying to do in method #1, for this idea to work you would need to shoot your sequence of images underexposed, since you want the animated exposure in your video to grow in small increments from one frame to the next. Unfortunately, adding a bunch of underexposed images will not be equivalent to a single well-exposed image: the combined image will not use the camera's entire dynamic range, since you are starting from images that use only a small portion at the left end of the exposure scale. Also, the amount of noise in an underexposed image is relatively high compared to the pixel values, and the additions will amplify the noise to much higher levels than what you would get in a single exposure.

A different approach would be to change the exposure on each successive picture and then just animate the pictures directly, without adding them up. I understand this is not what you want to do; it is a compromise solution. The result would be pretty close to your idea if the scene you are shooting is static. If you are using a Canon DSLR, Magic Lantern can help you automate this task; you can run Lua scripts inside the camera now!

Regarding your method #2, the raw format does not record the light entering the sensor over time, it just records the collected light amounts at the end of the exposure period (this is my understanding, at least).

One approach that you did not consider is to simulate the animated exposure in software, starting from a single well-exposed image. Applications that process raw files have an exposure slider that allows you to make this kind of adjustment. This isn't the same as the real thing, of course, but it may be close enough for your needs. I recommend that you check out an open-source raw processor to learn about this technique and decide whether it is of use to you. Here is one: http://sourceforge.net/projects/ufraw/.
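As a rough sketch of that simulated approach: once the raw file has been developed to an ordinary 8-bit image, a linear gain ramp (a crude stand-in for the exposure slider, and an assumption of mine, not something ufraw provides directly) produces the dark-to-bright sequence:

```python
import numpy as np

def exposure_ramp(image, steps=30):
    """Simulate a progressing exposure from one well-exposed image
    by ramping a gain factor from 1/steps up to 1."""
    img = image.astype(np.float64)
    return [np.clip(img * (k + 1) / steps, 0, 255).astype(np.uint8)
            for k in range(steps)]

# Demo: a uniform image brightens linearly over 4 frames.
demo = np.full((2, 2), 200, dtype=np.uint8)
frames = exposure_ramp(demo, steps=4)
```

Because every frame comes from the same capture, moving lights won't leave trails this way; it only mimics the overall brightening of a progressing exposure.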

I don't see why you'd need to shoot the image sequence underexposed; you should just be able to darken each image in post before adding them together. If you do the processing in 16-bit and each of your images is 8-bit, there's no loss of quality doing it this way, and no problems with noise. Shooting successively longer exposures or using a single image and playing with the exposure slider won't create the same effect, especially if you want light trails. Shooting a sequence of equal-length exposures will work fine, however.
– Matt Grum, Apr 11 '12 at 7:19

#2 is definitely not possible - if it were, overexposing a photo would not be a problem, as you could just go into the RAW data and grab the value before it became overexposed.

#1 will work, though you might end up with small gaps as the shutter closes and reopens. This can be mitigated by making the exposures quite long compared to the gap between frames, e.g. one second. This will require you to be working in darkish conditions, but that's probably what you need anyway for a long exposure!

Once you have your images, it's just a data-processing task to cumulatively build up the exposure by adding the images together. You'll have to make sure no single image is overexposed in order for this to be accurate, and the image brightness values need to be divided by the number of frames so that the sum adds up to an image with the correct exposure. Assembling the video would probably be most easily achieved using a script driving an image editor such as Photoshop or GIMP, both of which support scripting.
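A minimal sketch of that cumulative build-up, assuming the shots are already loaded as 8-bit numpy arrays (the division by the frame count is the normalization mentioned above):

```python
import numpy as np

def cumulative_exposure(frames):
    """Build the progressive-exposure sequence: output frame k is
    the running sum of input frames 0..k, with each input scaled by
    1/N so the final output lands at the correct overall exposure."""
    n = len(frames)
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    out = []
    for f in frames:
        acc += f.astype(np.float64) / n
        out.append(np.clip(acc, 0, 255).astype(np.uint8))
    return out

# Demo: four identical mid-gray frames brighten in equal steps.
demo = [np.full((2, 2), 100, dtype=np.uint8)] * 4
seq = cumulative_exposure(demo)
```

The float accumulator is what makes the "16-bit headroom" point from the comments work: the running sum never clips until the final cast back to 8-bit.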

Using the video mode of your camera, if there is one, could help, as there would be little noticeable gap between images.

The length of an exposure in a video camera is limited by the frame rate of the video being shot. If your video camera shoots at 30 frames per second (30 fps), each frame occupies 1/30 second of time, so the exposure used to create that frame cannot be longer than 1/30 second. Not all video cameras permit changing the exposure time or frame rate, but some do.

With the NTSC video standard used in the USA, Canada, Japan, Korea, and a few other countries, the frame rate is 29.97 (approx. 30) frames per second. Most of the world uses the PAL video standard, which is 25 fps, while film is usually shot at 24 fps. When a video or film is shot at 24, 25 or 30 fps, the exposure time used by the camera cannot be longer than the duration of a single frame: 1/24 sec, 1/25 sec, or 1/30 sec. So if your camera is shooting at 30 fps, it cannot be set to a 1/10 sec exposure, because 1/10 second is longer than 1/30 second.

The only way to avoid this limitation is to shoot at a slower frame rate. If you want a 1/10 sec exposure, then your frame rate must be 10 fps or slower. To my knowledge, only professional (expensive) cameras have this ability. If you are using a DSLR, such as a Canon EOS that shoots HD video, these cameras can be controlled through software (Canon EOS Utility) to shoot at slower frame rates with longer exposures - though not as fast as 10 fps - and it would not create a video file. That process would create a series of individual images, which could then easily be assembled into a video clip using video software such as Premiere, Final Cut Pro or After Effects.
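The arithmetic above reduces to one relationship: the longest possible exposure at a given frame rate is the frame duration, 1/fps. A trivial sketch:

```python
def max_exposure_seconds(fps):
    """Longest exposure a single video frame can hold: 1/fps."""
    return 1.0 / fps

# A 1/10 sec exposure is longer than a 30 fps frame (1/30 sec),
# so it requires a frame rate of 10 fps or slower.
too_short = max_exposure_seconds(30)   # about 0.033 s
just_fits = max_exposure_seconds(10)   # 0.1 s
```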

Consumer video cameras will usually shoot at the frame rate associated with the video standard used in that country: 30 fps in the USA, 25 fps in France, 30 fps in Japan, etc. Semi-pro and pro cameras that shoot in high definition (HD) will usually allow shooting at 24, 25 and 30 fps. Magic Lantern is a firmware update for Canon EOS cameras that apparently allows you to do what you are asking. It is free.
http://magiclantern.wikia.com/wiki/Magic_Lantern_Firmware_Wiki

Your idea of using the raw data to "change" exposure length is not going to work - the raw file only records the brightness of each pixel at the end of the exposure.

Your backup idea of using multiple images is also problematic; it may be doable, but it will require a lot of work.

First, the gaps between pictures are significant; for example, an entry-level DSLR can only shoot 3-4 frames per second no matter what the exposure time is.

Even pro movie cameras have gaps between images. In fact, in pro movie cameras the gap is always half the frame time - so for a 24 fps movie each exposure is 1/48 of a second, leaving a 1/48 sec gap. To change the exposure time in a pro movie camera without changing the frame rate, you have to physically replace the shutter.

You can (almost) eliminate the gap with consumer cameras that can shoot video at the maximum theoretical limit (that is, 1/30 exposure for 30 fps), but that usually requires you to lower the resolution - and ...

All the pictures will have the exact same exposure time - and ...

When you shoot video you have to deal with the "rolling shutter effect" (since the camera is reading the sensor line by line anything moving will be in a different position on each line of the image, effectively skewing anything that moves).

Merging images, adjusting exposure and fixing the gaps and the skewing will require a lot of editing, and the result will probably contain more computer graphics than actual photos.

So, what can you do?

Simulate the entire thing - a few well-exposed pictures and a lot of time in Photoshop will probably give more "realistic" results than trying to create the effect from multiple real images.

Work in a controlled environment - if you can repeat the exact movement for each frame, or have a static scene, this becomes simple: just take multiple photos at different exposure times.

Just do it and see what happens - if you take multiple photos as quickly as possible at increasing exposure times of a moving scene, you will get something different from what you originally planned, but it may end up interesting and beautiful - you don't know unless you try.

I see no problems with using video mode for this. You're not going to get any noticeable rolling-shutter effects in this sort of project when the camera is on a tripod, unless you have some oscillating motion. Also, if the gaps between frames are short enough to trick the eye into perceiving smooth continuous motion, then they should be short enough for this timelapse video. Finally, merging the images and adjusting exposure can be done with a simple script (no fixing of gaps or skewing required!)
– Matt Grum, Apr 11 '12 at 8:45

@MattGrum It all depends on how fast things are moving - for a static or slow-moving scene you won't have too many problems, but for faster objects just merging images can create light trails with gaps in them, gaps in motion blur, etc. If you want a realistic image you will have to paint in the missing details (also, I have a feeling this will require lots of fine-tuning, but this is just a feeling).
– Nir, Apr 11 '12 at 9:26