Animation that matters — adding value to your interface

Animation principles for UX and UI designers

We spend a lot of time refining digital experiences, yet animation is often an afterthought when building them. In reality, many designers have no animation experience, and we animate based on what “feels right”. It feels right because it’s natural, and it’s natural because it mimics our everyday interactions with the real world.

On the other hand, we tend to add animations to digital products without much thought, mostly because they look nice and we fool ourselves into believing that this adds value. Yes, they mesmerize our clients, but are they functional, and do they add to the user’s experience?

How can we then bottle this feeling and sell it to our clients in a meaningful way? How can we ensure that our animations add to the user’s experience?

The answer is through science and principles.

This article is mostly theory wrapped in eye candy.
– Vernon Joyce

What is animation?

Animation is a method in which individual images are combined so that they appear as a single, smooth motion. Every bit of animation you see today consists of multiple images (or a single image in multiple states), smashed together to trick the human eye.

The number of frames within a single second of animation is called the frame rate, measured in frames per second (FPS). The human eye requires at least 24 frames per second to perceive an animation as fluid motion.

On the web, for example, the frame rate of your CSS animations is largely dependent on the speed of the browser and computer. You might see an animation stutter or lag if your computer is slow, and this is (often) an indication that the frame rate has dropped below 24 FPS (and that it’s time to buy that new iMac).

Gaming is another great example: games appear laggy when your computer is unable to run them at 24 FPS, although most gamers will insist that anything under 60 FPS is laggy.

The basic physics

Timing and pacing

Timing is the amount of time, or the number of frames, required for an object to move. If a ball takes 5 seconds to drop to the ground, its animated timing would be 120 frames (5 × 24 FPS).

Timing plays a huge role in creating animations that are realistic. Every real-world object takes a certain amount of time to perform an action. Although it’s not necessary to calculate the number of frames required to animate a button in an application, it can be a useful tool for determining how long that button should animate for.
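As a rough sketch of how timing maps to CSS (the `.ball` class and distance are my own, not from a real project), a 120-frame drop at 24 FPS is simply a 5-second duration:

```css
/* A ball that takes 5 seconds (120 frames at 24 FPS) to reach the ground */
.ball {
  animation: drop 5s ease-in forwards; /* ease-in: it accelerates as it falls */
}

@keyframes drop {
  from { transform: translateY(0); }
  to   { transform: translateY(400px); } /* an arbitrary distance to the "ground" */
}
```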

Pacing, on the other hand, refers to the speed at which a motion happens. If your animations are too slow, you might bore your users or cause frustration. If they are too fast, your user might lose track of where they are or what they were doing.

Size and weight

As we know, most objects in the real world have size and weight. These dimensions give an object what is called a center of gravity and this has an influence on how it moves and rotates.

Components also have size and weight, and this, in turn, helps determine things like hierarchy. Similar to the real world, our natural go-to is to use the center of a component as its center of gravity. This is both functional and realistic. It is also possible for the center of gravity to shift as a component’s size changes.

Gravity

Gravity is a naturally occurring force that pulls objects towards one another. It anchors us to the planet and is also responsible for the tides of the ocean. It’s mysterious — so I won’t pretend to know more than I’ve already told you, other than that it has a huge impact on the movement of objects.

In my view, our devices have two gravitational forces pulling at them: firstly from top to bottom on the Y-axis, and secondly into the depths of the UI on the Z-axis. Google realized very early on that our devices had depth and has built much of Material Design’s philosophy on what they refer to as elevation.

In a similar way, I wonder whether our tendency to drop things down is a result of our interpretation of gravity on the Y-axis. Drop-downs, select boxes, accordions — all of these components animate towards the bottom of our applications.
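A drop-down that “falls” toward the bottom of the screen can be sketched in CSS like this (class names are hypothetical):

```css
/* A hypothetical drop-down menu that animates downward, as if pulled by gravity */
.dropdown-menu {
  transform-origin: top center; /* grow from the top edge, like something unfolding down */
  transform: scaleY(0);
  transition: transform 150ms ease-out;
}

.dropdown.open .dropdown-menu {
  transform: scaleY(1);
}
```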

Resistance

It’s something we experience daily (like your reluctance to get out of bed) and is a result of forces found in nature as an object moves through space and time. Resistance could be a result of gravity, surfaces or tensions.

Resistance is used in experience design quite often. A great example is Apple’s 3D touch (RIP), where the interface almost “resists” an action until you press hard enough. This resistance is then demonstrated through animation, with the icon highlighting more or less depending on the pressure you apply.

Pull-to-refresh, where a user has to pull the UI down to show the latest content, is another great example. The user has to pull down, against some resistance (is there anti-gravity involved here?), until it reaches a certain threshold, at which point the page reloads.

Action and reaction

Every object persists in its state of rest or uniform motion in a straight line unless it is compelled to change that state by forces impressed on it.
– Sir Isaac Newton

Newton’s laws are surprisingly relevant to UX, and to animation in particular. When you press a button, you expect a reaction. In a way, gravity compels you to move your mouse, and the button reacts violently by displaying a hover effect.

It’s all very scientific, isn’t it?

For every action, there is an equal and opposite re-action.
– Sir Isaac Newton

Newton’s third law is particularly relevant to animation within experiences and interfaces. Interfaces are reactive by nature — there’s even this library called React (or something). This is especially true when it comes to changes in data, size, color, background and more. The role of animation here is to create the visual cues that let the user know where they are and what they’re doing. When a user taps to download an image, there is an expectation to see some indication of progress, failure or success.

Simply put, don’t animate things for the sake of it, because we naturally expect motion to resolve to an action.

The 12 principles of animation

In 1981, two Disney animators, Ollie Johnston and Frank Thomas, proposed that all animation consists of twelve basic principles. These principles adhere to the laws of physics mentioned earlier and serve as guidelines for creating realistic movement.

These principles are not just applicable to Frozen but have excellent value when applied to user experiences and design as long as we keep the physics in mind.

I’ve listed these principles below, along with some of Dribbble’s best examples (in no particular order).

1. Squash and stretch

In nature, objects are malleable: their shapes change as they interact with their world. They are able to squash and stretch depending on their composition. Similarly, our interfaces can squash and stretch when they are interacted with. The weight and center of gravity of the component do not change; its shape is merely displaced.
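A press effect along these lines can be sketched in a few lines of CSS (the `.button` class is hypothetical) — stretching on one axis while squashing on the other, so the component’s apparent “volume” is preserved:

```css
.button {
  transition: transform 100ms ease-out;
}

.button:active {
  /* wider and flatter while pressed: stretched on X, squashed on Y */
  transform: scale(1.1, 0.9);
}
```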

2. Anticipation

Anticipation refers to the small actions that lead up to a much larger action. In the wild, a cat might lower its back and pull back its ears in anticipation of pouncing on its prey. Anticipation could also be the complete lack of action, such as the dramatic pause before the cat pounces. This anticipation could serve as a warning, entice, or create excitement.

In a very similar way, we can use small animations to create anticipation with our users. Hover effects are a great example of this, as they indicate to the user that an object (for example, a button) can perform a larger action.

I really wish I knew where these clicked to, the anticipation is killing me. Credit Yancy Min.
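A hover effect that builds this kind of anticipation can be sketched like so (class name and values are my own):

```css
.button {
  transition: transform 150ms ease-out, box-shadow 150ms ease-out;
}

.button:hover {
  /* a small lift hints that a larger action (the click) is available */
  transform: translateY(-2px);
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
}
```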

3. Staging

Staging (or the old adage of setting the stage) involves setting up a scene in order to emphasize characters, objects or events. This could be done in several ways such as lighting, music or camera movement. Staging could also be used to build anticipation.

A classic example of the staging principle in interface design is a loading icon. Not only does this solve a technical problem, but it also lets the user know that the “stage” is literally being set. Furthermore, the actual design of the loader could be used for staging as well; giving the user a glimpse into the type of content they could expect.

This loader by Su is both functional and descriptive for loading an eBook.
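A basic loading spinner — the classic staging device — can be built from a single rotating border (class name and colors are assumptions, not from the example above):

```css
.spinner {
  width: 32px;
  height: 32px;
  border: 3px solid #eee;
  border-top-color: #3b82f6; /* one colored edge makes the rotation visible */
  border-radius: 50%;
  animation: spin 800ms linear infinite;
}

@keyframes spin {
  to { transform: rotate(360deg); }
}
```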

Skeleton loading, an extension of the loader icon, is considered a much better loading experience. A “skeleton” of the content to be loaded is shown to the user and then gets filled in as the content loads.
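A minimal skeleton placeholder can be sketched with an animated gradient sweeping across a gray block (the `.skeleton` class is hypothetical):

```css
.skeleton {
  background: linear-gradient(90deg, #eee 25%, #f5f5f5 50%, #eee 75%);
  background-size: 200% 100%; /* gradient is wider than the element, so it can slide */
  animation: shimmer 1.2s linear infinite;
}

@keyframes shimmer {
  from { background-position: 200% 0; }
  to   { background-position: -200% 0; }
}
```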

4. Straight ahead action and pose to pose

This principle refers to the way animation is drawn. With straight ahead action, you start at frame 1 and draw each subsequent frame. This often results in better realism and smoothness, as you are in control of each successive action.

With pose to pose, you might draw the first frame, then the end frame, and only then would you fill in the frames in between.

Most animation in UI today would be pose to pose. As developers, we generally write a static component with CSS, then write the CSS for the animated state, and we then toggle this animation with a class or keyframes.
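As a rough sketch of that workflow (class names are hypothetical), the two “poses” are the default styles and the toggled class; the browser fills in everything in between:

```css
/* Pose 1: the hidden state */
.panel {
  opacity: 0;
  transform: translateY(8px);
  transition: opacity 200ms ease-out, transform 200ms ease-out;
}

/* Pose 2: the visible state, toggled by adding the class from JavaScript */
.panel.visible {
  opacity: 1;
  transform: translateY(0);
}
```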

5. Follow through and overlapping action

Objects in the real world often consist of multiple moving parts. Cars, people, animals, plants — all good examples. These multiple parts are all affected differently by forces like gravity due to their weight and size. As a result, the same object can have parts that move at different speeds or rotate at different angles. They might also have different levels of resistance due to their size which impacts how long they take to accelerate or decelerate.

In a similar fashion, UI components consist of multiple parts, whether it’s typography, color, shape or spacing. If you are animating multiple parts of the same component it’s important to consider the weight and size each has as well as their relationship to one another. Components that are part of the same group should always animate together, but it’s the subtle differences in speed and acceleration that will make it a good experience.
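One way to sketch this in CSS is to stagger the parts of a component with `transition-delay`, so they animate together but not in lockstep (class names are my own):

```css
/* Both parts share the same hidden starting state */
.card .title,
.card .body {
  opacity: 0;
  transform: translateY(10px);
  transition: opacity 250ms ease-out, transform 250ms ease-out;
}

/* Both animate in when the card enters */
.card.enter .title,
.card.enter .body {
  opacity: 1;
  transform: translateY(0);
}

.card.enter .body {
  transition-delay: 80ms; /* the body follows through just after the title */
}
```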

6. Slow in and out (Easing)

Slow in and out is really just Disney’s term for easing. Objects in life rarely come to an instant halt — they tend to gradually lose momentum and slow down.

Most designers and developers implement easing in their animation already. But do we sometimes go overboard? It’s very easy to mess up an easing curve and this will leave users feeling a bit uneasy. There are great resources for grabbing pre-built easing curves — my favorite being Animista.
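In CSS, easing is expressed through timing functions such as the `ease-out` keyword or a custom `cubic-bezier()` curve, for example (the selector is hypothetical):

```css
/* Starts fast and decelerates, like an object gradually losing momentum */
.slide-in {
  transition: transform 300ms cubic-bezier(0.22, 1, 0.36, 1); /* an ease-out curve */
}
```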

7. Arcs

In nature, things very rarely move in a straight line; a thrown ball, for example, follows a curved path because gravity pulls it down as it travels forward. These curved paths are called arcs.

Generally, interfaces are aligned to some sort of grid system, so we tend not to animate components in arcs. In a way, easing is our substitute for arcs, as it makes our animations feel as if they are moving along one. That said, there is real value in implementing some sort of arc in these animations, as it adds a sense of natural fluidity. It’s just a case of finding the right opportunities.

The small arc in the initial animation is not necessarily functional, but it does add a smoothness to the animation that brings it back to the real world. Credit Divan Raj.
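One common trick for approximating an arc in CSS (a sketch with hypothetical class names) is to animate horizontal movement on a wrapper and vertical movement on its child, with different easings; the combined path traces a curve:

```css
/* X moves at constant speed while Y accelerates, tracing a parabolic arc */
.wrapper { animation: move-x 600ms linear forwards; }
.ball    { animation: move-y 600ms ease-in forwards; }

@keyframes move-x {
  to { transform: translateX(200px); }
}

@keyframes move-y {
  to { transform: translateY(120px); }
}
```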

8. Secondary action

A secondary action is any action that happens in addition to the main action. These actions are generally used to support the main action. A real-world example would be the turning of a wheel as a bicycle moves.

Secondary actions are excellent for giving the user additional information about their actions. Icons in buttons are a pretty common example of this.

9. Timing

We’ve already looked at timing in terms of physics but there is another, much more literal application. Timing could also refer to how multiple animations play out in sequence. Google refers to this quite aptly as an interface’s choreography.

Transition choreography is a coordinated sequence of motion that maintains user focus as the interface adapts.
– Google Material Design

The order in which components animate is a great way to lead a user through a journey. Our eyes react to movement, even on a micro scale.

In this example by Anton Tkachev the user is guided through their journey with subtle animations. The illustration of the human figure animates first, leading your eye into the next section, followed by the additional information and category tags.
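A staggered entrance of this kind can be choreographed with incremental delays (a sketch; class names and values are my own):

```css
/* Each list item fades up slightly later than the last, leading the eye down */
.list li {
  opacity: 0;
  animation: fade-up 300ms ease-out forwards;
}

.list li:nth-child(1) { animation-delay: 0ms; }
.list li:nth-child(2) { animation-delay: 100ms; }
.list li:nth-child(3) { animation-delay: 200ms; }

@keyframes fade-up {
  from { opacity: 0; transform: translateY(12px); }
  to   { opacity: 1; transform: translateY(0); }
}
```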

10. Exaggeration

Exaggeration (along with solid drawing and appeal) is where animators get to be a bit more creative. The size, shape or movement of an object is exaggerated beyond realism to add emphasis or interest to an object.

This example of 3D touch almost feels like a bursting bubble once it reaches the resistance threshold. This burst is exaggerated by the shapes flying from it. Credit to Voicu Apostol.

11 & 12. Solid drawing & appeal

Both of these, quite simply, refer to how appealing your component or experience is to your user. This comes down to good design, good UI, great experiences and refined animation.