Why code your animated sequences when you can draw what you
want and let a program do the rest? In this article, Barry Feigenbaum
and Tom Brunet show you how to combine lossless images, Swing
technology, and the authors' own Java-based animation engine to generate
movement sequences for fixed objects in 2D animation.

In two-dimensional (2D) animation it is often necessary to move an
object or objects around the 2D area in predetermined patterns, sometimes
called control paths. This type
of animation requires that you solve two problems:

How to specify the control paths for the objects to follow.

How to move the objects along the selected paths.

In this article we'll show you how to solve these problems using
lossless images, Swing technology, and a Java-based animation engine.
We'll start by drawing the desired trajectory for our animated objects,
and then use the animation engine to drive the objects along the defined
control paths.

While lossless images (described below) are both easy to
create and easy to process, the technique of using them can be as finely
tuned as you want to make it. Using an example animation sequence, you'll
learn how different color sets can be used to create more and less
sophisticated movement sequences. You'll also learn how to process your
image to extract out the desired control paths, layer the control paths
against a background image, create the objects (Swing GUI components) for
your animation sequence, and drive them along the defined control paths to
complete the animation process. See Resources for more
information.

Note: This article
assumes you have a working knowledge of Java programming in general and
Swing GUI construction in particular. Some additional experience
manipulating images on the Java platform using Java 2D will be
helpful.

What's not to lose?

A lossless image is one in
which all the image pixels are permanently retained. The image must be of
a type that can be either stored exactly or recovered to an exact replica
of the original.

You can use a variety of application programs to create lossless
images, including Microsoft Paint, Jasc Paint Shop Pro, and some custom
applications. You can store the images in files or create them only in
RAM. The images must either be stored uncompressed or compressed using a
lossless compression scheme, such as zip compression. Typical lossless
image formats include Microsoft's Bitmap (BMP) and the Portable Network
Graphics (PNG) format. Lossy compression schemes such as the one used by
JPEG (Joint Photographic Experts Group) files will not work for the
animation techniques described in this article. GIF (Graphics Interchange
Format) compression is itself lossless, but GIF's 256-color palette can
remap your chosen colors, so it is also a poor choice for control-path
images.

It's all a matter of control

In its most general form, a control path represents
the behavior to be taken at a particular position and time through any n-dimensional space. We
define a control path as the path taken by one or more objects through a
2D space. You represent a control path by mapping an object's position to
a behavior at that position. A program then iterates through the defined
objects, looks up the behavior for the position of the object in the map,
and performs the instructed action on the object. For all but the simplest
of control paths, creating such a mapping in code can be both time
consuming and error prone; thus a drawing program is more appropriate.

Control paths can be time invariant, in which
case they are static, or time variable, in which
case they are dynamic. If your lossless image is contained in an image
file, it will be time invariant, or static. If your lossless image is
contained in RAM and used directly, it will be time variable, or dynamic.
In this article we'll focus on static control paths. With the right
editing program, static images can be much easier to generate, although
the types of behavior defined will also affect the process to some
degree.

Let's have a hot time tonight!

A good way to learn about animation is to do it
yourself. We'll use an animation example to illustrate the concepts
discussed throughout the remainder of the article. Our example is an
animated fire escape sequence, so we'll generate control paths to
represent the escape trajectory for several figures. We'll use the partial
floor plan in Figure 1 as a background image. You can see the full-size
background image in Figure 6.

We can generate the control mapping from any array of values. Using an
image for the array (as shown in Figure 2) allows us to use a color value
to represent the behavior at each location. The size (number of color
bits) of each color value will depend on the image format. Figure 2
illustrates some of the control paths for our fire escape sequence.

Color my world

After an image is generated, it can easily be converted into the needed
mapping. We simply iterate through the pixels of the image and assign a
behavior to each color value. For example, we could use white, which is
typically an all-ones value, to indicate no mapping or a default behavior,
and black, which is typically a zero value, to represent a custom
behavior. Under this mapping, when an object encounters a position that
shares its current behavior (that is, the same color, say black), it
continues in the direction defined by that position. If the position does
not share its behavior, the object looks for an adjacent position that
does, without backtracking.

We can represent other behaviors with different colors. Those color
values not defined will be ignored. Thus, pixels in the background (such
as the light-gray pixels in Figure 3) can be
ignored.

Listing 1 shows how the mapping is accomplished. The image is first
scanned for specific color values, then the position of each color pixel
is used to define a control state at that location in the map of control
states. In the Escape example, six behaviors are defined by the various
STATE_xxx constants.
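The scan described above can be sketched as follows. The color values and state codes here are illustrative stand-ins, not the exact constants of the article's Listing 1:

```java
import java.awt.image.BufferedImage;

// A hedged sketch of the image scan: walk every pixel of the control
// image and record a state wherever a recognized color is found.
public class PathScanner {
    public static final int STATE_PATH = 1;
    public static final int STATE_START = 2;

    // Returns a 2D array of states, one entry per pixel.
    public static int[][] scan(BufferedImage image) {
        int[][] states = new int[image.getWidth()][image.getHeight()];
        for (int x = 0; x < image.getWidth(); x++) {
            for (int y = 0; y < image.getHeight(); y++) {
                int rgb = image.getRGB(x, y) & 0xFFFFFF; // drop alpha
                if (rgb == 0x000000) {          // black: path pixel
                    states[x][y] = STATE_PATH;
                } else if (rgb == 0xC80000) {   // red (200, 0, 0): start marker
                    states[x][y] = STATE_START;
                }                               // other colors: ignored
            }
        }
        return states;
    }
}
```

Because only recognized colors produce states, background pixels fall through and are ignored, as the article notes for Figure 3.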

Variety is the spice of life

Although specific color values are used in Listing 1
(that is, red is 200, green is 0, and blue is 0) we could easily enhance
the code to support color ranges. Using color ranges reduces the precision
of the color selection used in the drawing program, thus making it easier
to create a path image.
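A range-based match might look like the sketch below; the tolerance value is an illustrative choice, and this helper is ours rather than part of the article's code:

```java
// Range-based color matching: instead of requiring an exact (200, 0, 0),
// accept any color whose bands are all within a tolerance of the target.
public class ColorRange {
    public static boolean matches(int rgb, int targetR, int targetG,
                                  int targetB, int tol) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        return Math.abs(r - targetR) <= tol
            && Math.abs(g - targetG) <= tol
            && Math.abs(b - targetB) <= tol;
    }
}
```

With a tolerance of, say, 16, a slightly off-red pixel produced by a paint program's brush still registers as the red path.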

Using a wider selection of colors allows you to define many more
states, and also to describe much more complicated behaviors. For example,
you could use different color bands in an RGB scheme to create overlapping
control paths. If each of the above states were encoded by different
intensities of a single color rather than by different colors, three
independent control paths could be overlaid on top of each other. Of
course, using different intensities of a single color makes it harder to
discern the subtle color differences between behaviors. Most image editor
programs display exact color values of the selected pixel, mitigating this
problem.

It's also possible to define more than three control paths. If you
accessed each color value through a bitmask, you would be limited only by
the number of bits in your image format (typically 24; 32 if you used
alpha values). Using single bits for a path is more complex than using
color bands but it can be done. You would either need to have a program
merge together separate image control paths or use a single image and
additive painting. If you did not need to support overlapping paths (that
is, have multiple states exist at one position), you could have 2^24 (or
2^32) states at each position. You could also mix the two approaches: for
example, use the red band as a bitmask and the green and blue bands for
other states.
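Single-bit path encoding in one band might be sketched as follows. The helper names are ours, and this is one possible scheme under the assumptions above, not the article's implementation:

```java
// Bitmask-style path encoding: each of the 8 red-band bits marks
// membership in one of up to 8 overlapping paths, leaving the green and
// blue bands free for other states.
public class BandMask {
    // True if the pixel belongs to path 'bit' (0-7) encoded in the red band.
    public static boolean onRedPath(int rgb, int bit) {
        int red = (rgb >> 16) & 0xFF;
        return (red & (1 << bit)) != 0;
    }

    // Compose a pixel that belongs to several red-band paths at once.
    public static int encodeRedPaths(int... bits) {
        int red = 0;
        for (int bit : bits) {
            red |= (1 << bit);
        }
        return red << 16;
    }
}
```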

Figure 4 shows the complete control path image used for our Escape
simulation. Notice the use of multiple colors, and how the colors are used
to represent different behaviors at different locations.

Run for your lives!

After we've defined a state map, we can begin to move objects around the
2D space. The example Escape application models
moveable objects as instances of the Entity class. Two major
subclasses have been defined: a Person and an
Alarm. Persons can move while
Alarms are stationary. The Entity interface is
defined in Listing 2.

The addToPanel() method creates one or more Swing
components to represent the objects and adds them to the provided panel.
The components are typically JLabels with an icon set. The
panel is typically the implementation of the 2D space. Its background
displays the animation background.

The updateTick() method causes the object to animate
itself for each cycle of the animation. Alarm objects change
their color to create a flashing effect. Person objects move.

Alarm objects are simple components that blink, as
implemented in Listing 3.
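The blink behavior can be sketched without the Swing wiring of the actual listing; this stand-alone version, with illustrative colors, only captures the per-tick toggle:

```java
import java.awt.Color;

// Each animation tick toggles the alarm between two colors. The real
// Listing 3 applies the color to a Swing component; here we just return it.
public class AlarmBlink {
    private boolean on = false;

    // Called once per animation tick; returns the color to display.
    public Color updateTick() {
        on = !on;
        return on ? Color.RED : Color.DARK_GRAY;
    }
}
```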

The method that moves a Person is more complex. It examines
the map around the current position and selects the best new position to
move to, attempting to continue in the same direction as much as possible.
Note that hints are markers that
provide a preferred direction. They are typically used at starting
locations or intersections.
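The direction-preserving selection can be sketched on a simple state grid. The grid layout, state codes, and method names here are illustrative, not the article's actual Person code:

```java
import java.awt.Point;

// Keep going the same way if the next cell shares our state; otherwise
// try the two perpendicular directions; never reverse (no backtracking).
public class PathStep {
    // dx/dy pairs for right, down, left, up.
    private static final int[][] DIRS = { {1, 0}, {0, 1}, {-1, 0}, {0, -1} };

    // Moves pos one cell and returns the new direction index,
    // or -1 if the path dead-ends.
    public static int step(int[][] states, Point pos, int dir, int myState) {
        // Preference order: straight ahead, then the two side turns.
        int[] tryOrder = { dir, (dir + 1) % 4, (dir + 3) % 4 };
        for (int d : tryOrder) {
            int nx = pos.x + DIRS[d][0];
            int ny = pos.y + DIRS[d][1];
            if (nx >= 0 && ny >= 0 && nx < states.length
                    && ny < states[0].length && states[nx][ny] == myState) {
                pos.setLocation(nx, ny);  // move into the matching cell
                return d;
            }
        }
        return -1; // no non-backtracking neighbor shares our state
    }
}
```

Because the reverse direction is never in the preference order, an object follows a drawn corridor around corners instead of oscillating.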

A Person can stop prematurely (that is, before reaching
the end of a path), or be set to delay the start of motion for a fixed
time. Person objects are also capable of leaving a fading
trail of images (or history) to depict their motion, as shown in Figure
6.

Painting the scene

Class BuildingViewer creates the
container in which the objects are moved. The paintChildren()
method first draws the background image, then any alert message, and
finally the subcomponents representing the various entities.

To create an effective animation, you need objects to animate. Listing
9 shows the code to create a series of Person entities of the
same type (that is, disabled, non-disabled, firefighters, etc.) based on
the provided inputs.

Listing 10 defines a set of Person entities using the
initLoop code from Listing 9. This code uses several parallel
arrays (based on the length of the locs array) to provide
information about the objects to be created. The locs array
provides an index into the set of defined starting locations, as provided
by the control path. The starts value specifies at what timer
tick the Person is to begin to move. The appear
value defines the timer tick when the Person should become
visible (often before it starts to move). The stops values
specify the (possibly multiple) stop points each Person can
have.

Although shown as hand-typed values below, it is possible to get most
of these input values from the control path by adding new colors that
represent states that position entities. This enhancement can simplify the
input of these values and make them less subject to error when the control
paths change.

Conclusion

In this article, we have shown you how to use lossless images, Swing
technology, and a custom animation engine for motion-path generation in 2D
animation. This method lets us visualize the animation quickly and
predictably as we create its control paths. Some of the advantages of
this technique are as follows:

Ease of use

Most image editors have a number of ways to generate straight lines,
rounded arcs, and other shapes. These options allow us to generate some
paths by hand quickly while reducing error. For some behaviors, this is
very useful.

Reference images

When the animation is moving relative to a background image (such as
in Figure
1), we may wish to move objects within the confines of items in that
image, for example keeping objects within the confines of hallways. Many
image editors will allow us to use a semi-transparent layer to generate
our control image over our background image. We then can easily create
our control paths to match our background image, as we can see both
images lined up while we generate the control image.

Additive painting

By blending colors, we are able to encode multiple behaviors at a
position. For example, using RGB colors we can use red (0xFF0000) to
represent the path to follow for one object and green (0x00FF00) to
encode the path to follow for another object. With additive painting, a
point at which the paths intersect will be yellow (0xFFFF00).

With this additive model and, say, a 32-bit color value, up to 32
different behaviors can be extracted from a given position by treating
the value as a bitmask. Although we have described only one simple
behavior, the number of behaviors that can be encoded is limited only by
the number of bits assigned to each color in the image format.
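The red-plus-green-makes-yellow example can be captured in a few lines. This small helper is our own illustration of the additive scheme:

```java
// Additive painting: OR the path colors together when painting, then
// test each band to see which paths pass through a pixel.
public class AdditivePaths {
    public static final int RED_PATH = 0xFF0000;
    public static final int GREEN_PATH = 0x00FF00;

    public static int paintOver(int existingRgb, int pathColor) {
        return existingRgb | pathColor;   // blend rather than overwrite
    }

    public static boolean onPath(int rgb, int pathColor) {
        return (rgb & pathColor) == pathColor;
    }
}
```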

We have also shown and described a simple animation engine for moving
objects around the sets of paths. Each object, called an
Entity and implemented as a JLabel in the
example, is driven periodically to update its position and/or appearance.
A long-running timer thread is assigned to drive this process. A
JPanel is used as the container of the objects and also as
the means of painting the background. See Resources for the
complete code for the animation example and the Java-based animation
engine introduced in this article.

The UK-based Technical Advisory Service for Images (TASI) has
provided a useful overview of the
various file formats and compression techniques for digital images,
including a section on lossless image compression techniques.

Learn more about some of the image types mentioned in this article
-- namely JPEGs, PNGs, and GIFs.

Barry Feigenbaum has also written "Coding for
accessibility" (developerWorks,
October 2002), which shows you how to use Swing/JFC and a unique
Accessibility Toolkit to build more accessible Java
applications.

About the authors

Dr. Barry Feigenbaum is a member of the IBM
Worldwide Accessibility Center, where he is part of a team that
helps IBM make its products accessible to people with disabilities.
Dr. Feigenbaum has published several books and articles, holds
several patents, and has spoken at industry conferences such as
JavaOne. He serves as an Adjunct Assistant Professor of Computer
Science at the University of Texas, Austin. You can contact Dr.
Feigenbaum at feigenba@us.ibm.com.

Tom Brunet is a graduate student at the
University of Wisconsin-Madison. He received his B.S. in Computer
Science from the University of Texas at Austin, where he was named
as a Dean's Honored Graduate. During his undergraduate studies, he
worked for IBM Research for four years, spending a summer with the
Data Abstraction Research group, and the remainder with the
Accessibility Center. He also worked concurrently as an
undergraduate research assistant for Dr. Nina Amenta. You can reach
Tom at tomab@cs.wisc.edu.