It tries to establish the correspondence of multiple images to each other, pixel by pixel. There are many, many ways to combine images, and many ways to achieve a specific result - but this is the point of optical flow. It has nothing to do with time: it's motion estimation, a kind of motion tracking with a result for every pixel. This is obviously different from conventional motion tracking, where you look for an eye or some other constant feature to track, and it is much more difficult.
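As a toy illustration of what "a result for every pixel" means - not how any production engine works - here is a brute-force block-matching sketch. For each pixel it searches a small neighborhood in the next frame for the best-matching block, and records the winning offset as that pixel's motion vector. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def block_matching_flow(prev, curr, block=3, search=2):
    """Toy dense motion estimation: for each pixel, find the (dy, dx)
    offset within +/-search whose block in the current frame best
    matches (minimum sum of absolute differences) the block around
    the same pixel in the previous frame."""
    h, w = prev.shape
    r = block // 2
    flow = np.zeros((h, w, 2), dtype=int)
    pad = r + search  # pad so blocks near the border stay in range
    p = np.pad(prev, pad, mode="edge").astype(float)
    c = np.pad(curr, pad, mode="edge").astype(float)
    for y in range(h):
        for x in range(w):
            ref = p[y + pad - r : y + pad + r + 1,
                    x + pad - r : x + pad + r + 1]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = c[y + dy + pad - r : y + dy + pad + r + 1,
                             x + dx + pad - r : x + dx + pad + r + 1]
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, best_dv = sad, (dy, dx)
            flow[y, x] = best_dv
    return flow

# A bright square shifted one pixel right between frames:
prev = np.zeros((8, 8)); prev[2:5, 2:5] = 255
curr = np.zeros((8, 8)); curr[2:5, 3:6] = 255
flow = block_matching_flow(prev, curr)
print(flow[3, 3])  # [0 1] - one pixel to the right, as (dy, dx)
```

Real engines are far more sophisticated (hierarchical, sub-pixel, regularized), but this is the core question every optical flow method answers: where did each pixel go?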

There are many things that you can do with a series of related images. A simple use of optical flow is just aligning the edges of two images with a transform.

You can see it right now in Google Street View images. If you look closely, you can sometimes see that the tree branches overlap unexpectedly. You could make the picture cleaner with a more detailed alignment pass - but in my neighborhood there is an actual seam where the edges meet!

I have been interested in optical flow for a long time. It comes initially from research in the eighties, in the field of computer imaging, primarily for completely automated tasks such as unmanned navigation. This research was largely funded by the military, and had no application to the kind of work that we do.

When I started to play with optical flow in 1995 or '96, computers were still too slow. It was not really practical yet.

However at about that time, I was hired by a visual effects company in Massachusetts called Mass Illusion. They had been asked to do tests for two different movies, as part of a package to finance them. One was "What Dreams May Come," and the other was "The Matrix."

I started to work with "What Dreams May Come" in the fall of 1996, and production on the film itself didn't begin for a year after that.

("The Matrix" tests began in the winter of '97, two years before they entered real production.) It was a very long process: they asked me to come and see what kind of painterly effects they wanted, and a year and a half later I was still working on it.

The goal for "What Dreams May Come" was basically to make paintings from a live action source, where things in motion appear to be painted as they move. The first test was to show that it was possible to do, because no one had seen it before. The second aspect was to show how we could also render different styles.

This meant compositing style, but also different painting and brushing styles: heavy oil, watercolor, charcoal. A big component of it was just playing with the colors.

They had paintings to use as references, but they kept changing because the story kept changing. So rather than relying on the references, we had to achieve those painting scenes as much as possible using the principal photography as the input for the effects. That implied we had to generate the motion so the brush strokes would follow properly - which meant optical flow.

(During the production cycle, we also developed a technique to generate motion vectors from 3D graphics to create better motion blurs and compositing, and import or export the motion vectors with other applications.)

So in our case, the input source for the paint effects on "What Dreams May Come" was real. Things move.

If you just render brush strokes procedurally, this often creates a shimmering effect from frame to frame. Or sometimes the strokes have no true relationship with how the background photography moves.

We imagined each stroke as a point attached to a moving image, with all of these little points moving from frame to frame with optical flow to create a stroke.
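The idea of anchor points riding a flow field can be sketched in a few lines. This is a simplifying assumption, not our product code: the `advect_points` helper and the `(dy, dx)` flow layout are made up for illustration.

```python
import numpy as np

def advect_points(points, flow):
    """Move each (y, x) anchor point by the flow vector at its pixel.
    flow[y, x] holds the (dy, dx) motion from this frame to the next,
    so a brush stroke anchored at the point follows the footage."""
    moved = []
    h, w = flow.shape[:2]
    for y, x in points:
        dy, dx = flow[y, x]
        # clamp so strokes never leave the frame
        moved.append((int(min(max(y + dy, 0), h - 1)),
                      int(min(max(x + dx, 0), w - 1))))
    return moved

# A uniform pan of one pixel to the right:
flow = np.zeros((4, 4, 2), dtype=int)
flow[:, :, 1] = 1
print(advect_points([(1, 1), (2, 3)], flow))  # [(1, 2), (2, 3)]
```

Because each stroke's anchor moves with the underlying pixels rather than being re-seeded every frame, the strokes stay attached to the image instead of shimmering.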

As I was working on this, Pete was doing similar research at Apple. In 1997 Apple was not doing so well, and laid off something like two or three thousand people, including the Advanced Technology Group where Pete worked.

I was at SIGGRAPH that year because we were about to go into production, and Pete was presenting a paper on doing impressionistic renders over image sequences. I was sent to hire another software engineer, and this is how Pete and I started to work together.

When "What Dreams May Come" won its 1998 Oscar for Visual Effects and "The Matrix" won the year after that, it was a good lesson for us. I had only worked on small films before this, but these both had huge effects budgets. I discovered that I couldn't spend my life as an effects artist because I would die young with so much work to do!

That's when we formed RE:vision Effects, in December 1998. And of course the first thing we did was become a contractor on another movie!

I can't say which movie it was - I respect that confidentiality - but that was the beginning of our tradition of spending part of every year working on real visual effects projects, not just development.

Our first software product became Video Gogh, first as a QuickTime plug-in, then an After Effects plugin, to provide these optical flow paint effects.

The hidden part is that it is built on an engine that computes the same way across all the products. But if you think of them application by application, you can adjust. For example, if you are doing motion blur, then you can be a bit more sloppy in terms of motion estimation, because you're blurring anyway.

To put it another way, it's a trade-off between quality and speed, to create the most convincing result for the least processing. This is what we do with ReelSmart Motion Blur.
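To make the trade-off concrete, here is a hedged sketch of what vector-based motion blur means - not ReelSmart Motion Blur's actual algorithm, just the basic idea: each output pixel averages samples taken along its own motion vector, so movement smears into a blur. The function name and sampling scheme are assumptions for illustration.

```python
import numpy as np

def motion_blur(frame, flow, samples=5):
    """Smear each pixel along its motion vector by averaging several
    samples taken between the pixel and the vector's endpoint."""
    h, w = frame.shape
    out = np.zeros_like(frame, dtype=float)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            acc = 0.0
            for i in range(samples):
                t = i / (samples - 1)  # 0 .. 1 along the vector
                sy = min(max(int(round(y + t * dy)), 0), h - 1)
                sx = min(max(int(round(x + t * dx)), 0), w - 1)
                acc += frame[sy, sx]
            out[y, x] = acc / samples
    return out

# A flat frame blurs to itself, whatever the vectors say:
frame = np.full((3, 3), 7.0)
flow = np.ones((3, 3, 2), dtype=int)
print(motion_blur(frame, flow)[0, 0])  # 7.0
```

Notice why sloppier vectors are forgivable here: an error of a pixel or two just shifts where the smear lands inside an already-blurred region.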

[Photo: Pete and Pierre at the Academy Awards, with tuxedos, without the optical flow warps. Happy Anniversary, guys! We're proud to have worked with you so long, and to have you be such an active part of the COW!]

We can also use optical flow for noise reduction. I can take a set of three frames, for example, and compare the frame before and the frame after. If there's anything that doesn't belong on the current frame, it should be gone by the next frame. I can fix it by warping the pixels from the previous and next frames onto the current frame with optical flow.
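The three-frame idea can be sketched like this. The integer-offset `warp` and the per-pixel median are simplifying assumptions for illustration, not our shipping algorithm.

```python
import numpy as np

def warp(frame, flow):
    """Pull pixels along integer flow vectors: output[y, x] samples
    frame[y + dy, x + dx] (clamped), aligning that frame's content
    with the current frame."""
    h, w = frame.shape
    out = np.empty_like(frame)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            out[y, x] = frame[min(max(y + dy, 0), h - 1),
                              min(max(x + dx, 0), w - 1)]
    return out

def denoise(prev, curr, nxt, flow_to_prev, flow_to_next):
    """Three-frame temporal denoise: warp both neighbors onto the
    current frame, then take a per-pixel median so anything that
    exists for only one frame vanishes."""
    aligned = np.stack([warp(prev, flow_to_prev),
                        curr,
                        warp(nxt, flow_to_next)])
    return np.median(aligned, axis=0)

# A static scene with a one-frame dropout in the middle frame:
clean = np.full((3, 3), 100.0)
noisy = clean.copy(); noisy[1, 1] = 0.0
zero = np.zeros((3, 3, 2), dtype=int)  # no motion, so flow is zero
print(denoise(clean, noisy, clean, zero, zero)[1, 1])  # 100.0
```

The warp is what makes this work on moving footage: without motion compensation, a temporal median would smear anything that moves.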

And again for deinterlacing: we bring one field to the location of the other, and blend the two together. We then look at surrounding frames to reconstruct sharper edges.
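A minimal sketch of the reconstruction step, assuming the motion-compensated warp of one field has already been done (the function name and field layout are illustrative assumptions, and real deinterlacers do much more edge work than this):

```python
import numpy as np

def deinterlace(even_field, odd_field_aligned):
    """Interleave an even-line field with an odd-line field that has
    (by assumption) already been warped to the same moment in time,
    producing one full-height progressive frame."""
    h = even_field.shape[0] + odd_field_aligned.shape[0]
    frame = np.zeros((h, even_field.shape[1]))
    frame[0::2] = even_field
    frame[1::2] = odd_field_aligned
    return frame

# Two 2-line fields become one 4-line frame:
even = np.full((2, 3), 1.0)
odd = np.full((2, 3), 2.0)
frame = deinterlace(even, odd)
print(frame[:, 0])  # [1. 2. 1. 2.]
```

The motion compensation is the whole point: without it, interleaving two fields captured at different times produces the familiar combing artifacts.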

Now you can understand how Twixtor works. It uses an internal optical flow engine to perform motion estimation. You decide how you need the timing to change, but the change is calculated by the motion between the sets of pixels. That's how we create the new frames we need - by comparing the pixels before and after the current frame, and warping their position. Motion vectors.
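Here is a sketch of the idea behind flow-based retiming - not Twixtor's implementation, just the textbook approximation it builds on: warp the earlier frame forward part-way along the vectors, warp the later frame backward along the remainder, and cross-fade the two warps. The `retime` function and its integer sampling are assumptions for illustration.

```python
import numpy as np

def retime(frame_a, frame_b, flow_ab, t):
    """Build an in-between frame at time t in (0, 1), where flow_ab
    holds the (dy, dx) motion of each pixel from frame_a to frame_b."""
    h, w = frame_a.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            dy, dx = flow_ab[y, x]
            # sample A part-way back along the vector ...
            ay = min(max(int(round(y - t * dy)), 0), h - 1)
            ax = min(max(int(round(x - t * dx)), 0), w - 1)
            # ... and B forward from the remainder
            by = min(max(int(round(y + (1 - t) * dy)), 0), h - 1)
            bx = min(max(int(round(x + (1 - t) * dx)), 0), w - 1)
            out[y, x] = (1 - t) * frame_a[ay, ax] + t * frame_b[by, bx]
    return out

# A dot moving two pixels right between frames lands halfway at t = 0.5:
a = np.zeros((3, 5)); a[1, 1] = 255
b = np.zeros((3, 5)); b[1, 3] = 255
flow = np.zeros((3, 5, 2)); flow[:, :, 1] = 2
mid = retime(a, b, flow, 0.5)
print(mid[1, 2])  # 255.0
```

Because the new frame is synthesized from the motion vectors rather than cross-dissolved in place, the dot appears at its halfway position instead of ghosting in both.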

During re-timing, you can say most definitely that the sloppier estimation we accept for motion blur is going to cause a problem. If I set up a re-timing across frames with very little detail, like moving my hand across a wall that is all the same color, it doesn't work right. I need all the precision I can get, even if it costs time.

We have learned what makes the process fail, but we also know that for you as the customer, failure is not an option. We have a lot of interactive tools to help you guide the algorithm if it cannot build the effect correctly by itself.

These interactive tools use another component in our software, which is morphing. This is quite fundamental: both "What Dreams May Come" and "The Matrix" represent extreme examples of morphing: creating pixel-by-pixel warp transforms across multiple images.

With morphing, we use splines to help adjust the parts of the frame to focus on, but we also use motion detection. I remember back when we first tried to do morphing with Elastic Reality or T Morph. If you tried to animate a person to a CG face, or from this dinosaur model to that CG dinosaur, you ended up editing a lot of geometry. This eye, that ear, they don't match. You have the feeling that things are slipping, not morphing.

Using optical flow permits secondary detail motion, very small movements. It just maps things correctly, so you end up having a lot less geometry to specify for the "to" and "from" in the morph.

Optical flow gives you the opportunity to try more complicated morphs between subjects that are moving - because you are being assisted by something that understands motion.

So depending upon how you need to use optical flow, we tweak the internal engine differently to create the best compromise between speed and quality for the application that you want to use it for. The technology itself is very flexible.

Some people have used our stuff to transfer color correction from one frame to another. On our website (www.revisionfx.com/company), check out a movie called "Dreamkeeper."

[Ed. note: "Dreamkeeper" won the 2003 Emmy Award for Special Visual Effects.]

The visual output is mono, but it was shot stereo. We used the motion from one view to the other to create a depth map. Not a true Z buffer or anything, but enough depth to allow a little bit of 3D camera movement, or to insert smoke or live-action fights between objects.

A final note about our Academy Award: it is for technical achievement, so it is not tied to a specific movie - it is tied to our products. One of the Academy's requirements is that the products have to be used in a lot of movies, so the award acknowledges that we made optical flow affordable and available to so many people.

We have found that we have two distinct clients. One is more of a video editor: if it doesn't work, he won't use it. The other is a compositor who might have 2 days to do 3 seconds worth of animation. He's willing to spend time to do some manual work. Which is important in the visual effects world, because people are very sensitive to something that doesn't work.

From here? I like what we're doing. We're committed to generating a plug-in set each year, so I am doing some software development. I also do one production project of some sort per year; otherwise the world of development gets pretty narrow, and I don't want to become disconnected from real work. We enjoy direct contact with our customers, but it's also good for us to stay connected to their needs by doing the kind of work that they do.

Pierre Jasmin
San Francisco, California USA

Pierre and Pete founded RE:Vision Effects in 1998. Since then, their work has been used in thousands of films and projects of every scale. In addition to the RE:Vision Effects forum at CreativeCOW.net, you can also find Pierre regularly posting in forums including Broadcast Video, Avid Editing, Apple Shake, Autodesk Combustion, Autodesk Smoke, and Adobe After Effects.


Twixtor: When and How to Use Tracking Points (video tutorial)
This tutorial shows how to get better results using Tracking Points for more complex shots, and how to get rid of warping or ghosting. Tracking Points are available in Twixtor Pro for AE, FCP (pre-FCPX), Premiere Pro, Nuke & OFX hosts such as Scratch & Composite.

Twixtor: Better Retiming Using a Matte (video tutorial)
This tutorial shows how to get better tracking by using a matte to separate your footage into multiple layers. Multiple layers are available in Twixtor Pro for AE, FCP, and Premiere Pro, and in Twixtor for Smoke, Flame, Fusion, Nuke & OFX hosts such as Scratch & Composite.

Twixtor in Avid MC: Frame Rate Conversions (video tutorial)
This tutorial shows how to do a frame rate conversion when the input and output are both interlaced or both progressive, and how to convert progressive footage to interlaced footage in Avid Media Composer.

Software developer RE:Vision Effects just released the Twixtor 5 plug-in for Avid Media Composer, Symphony and NewsCutter, including 64-bit support for Media Composer 6, Symphony 6 and NewsCutter 10. That will make Twixtor's slow motion shots - achieved quickly and at high image quality - available to Avid editors.

Frame Edge Issues and Twixtor (video tutorial)
This tutorial explains when it is appropriate to use Smart Blend, shows regular Twixtor with Inverse and with Forward Warping, and compares a panning shot slowed down 10x with and without Smart Blend.