Ultimate Guide to Character Animation: How to Lip Sync to a Voiceover

In this blog, I will give a brief overview of how to get your animated character talking.

This is the second post in a five-part series of blog posts on the subject of character animation. They’re written by our in-house Motion Graphics Designer – Harry.

We all know that the voiceover delivers the message and meaning of the video, and as our character is taking on the role of ‘presenter’, making him deliver the voiceover should be the first port of call.

By making the character speak, we not only deliver the message of the video effectively but also take a big step towards bringing the character to life.

Without it, your character is just a random image with no connection to the voiceover; with it, your character becomes a person.

So how do we go about creating a lip sync?

There are many ways to create a lip sync. The most time-consuming would be frame-by-frame animation, but this simply isn’t cost-effective.

The method I find simplest is to get a piece of code to do it for you!

In total we will need 3 elements:

> the voice track

> a composition containing 3 mouth shapes (closed, mid & open)

> a simple piece of code

The first step in making this process work is to create a composition that contains the 3 mouth shapes – each lasting for 1 frame.

Place the closed mouth on the first frame, the mid mouth shape on the second frame and the open mouth on the third.

If we then pre-comp this, we can enable time remapping on the pre-comp so that individual frames can be selected on demand.
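With time remapping enabled, the Time Remap property of the pre-comp can be driven by an expression. As a minimal sketch – the frame index is hard-coded here, and will later come from the audio – the expression-language function framesToTime() converts a frame number into the time value that Time Remap expects:

```javascript
// Applied to the Time Remap property of the mouth-shapes pre-comp.
// Frame 0 = closed, frame 1 = mid, frame 2 = open.
mouthFrame = 1; // hard-coded for now; later this is driven by the audio's loudness
framesToTime(mouthFrame); // convert the frame index into a remap time
```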

The next step is to bring your audio track into the composition that contains the mouth shapes pre-comp and convert the audio track into keyframes (Animation > Keyframe Assistant > Convert Audio to Keyframes).

This means that the amplitude (dB) of the audio track is converted from a waveform into numbers.

Or to put it another way, it is now data that can be analysed by a simple piece of code!
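Once converted, After Effects adds a layer called “Audio Amplitude” whose Slider effects hold those numbers. Assuming the default layer and effect names (adjust if you have renamed yours), an expression can read the loudness at the current time like this:

```javascript
// "Audio Amplitude" and "Both Channels" are the default names produced
// by Convert Audio to Keyframes.
amp = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");
amp; // the loudness value at the current frame
```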

By analysing this data – and because we have enabled time remapping on the mouth shapes comp – we can use a simple piece of code to select a frame within the pre-comp, and therefore a mouth shape, on demand, depending on the loudness of the audio track.
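Putting the pieces together, one way of writing that code as an expression on the Time Remap property looks like this. Treat it as a sketch rather than a finished recipe: the layer and effect names are After Effects’ defaults, and the two threshold values are placeholders you will need to tune to your own voiceover’s levels.

```javascript
// Applied to the Time Remap property of the mouth-shapes pre-comp.
quiet = 8;  // below this amplitude: closed mouth (frame 0) – placeholder value
loud = 16;  // above this amplitude: open mouth (frame 2) – placeholder value
amp = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");
f = (amp < quiet) ? 0 : (amp < loud) ? 1 : 2;
framesToTime(f); // pick the matching mouth shape for this frame
```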

– And hey presto! We have created a lip sync!
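If you want to sanity-check the threshold logic outside After Effects, it is easy to simulate in plain JavaScript. The amplitude samples and thresholds below are invented purely for illustration:

```javascript
// Simulates the lip-sync mapping: one amplitude sample per frame in,
// one mouth-shape frame index out (0 = closed, 1 = mid, 2 = open).
function mouthFrames(amplitudes, quiet = 8, loud = 16) {
  return amplitudes.map(function (amp) {
    if (amp < quiet) return 0;
    if (amp < loud) return 1;
    return 2;
  });
}

var samples = [0, 3, 9, 14, 22, 30, 12, 5, 0]; // made-up loudness values
console.log(mouthFrames(samples).join("")); // "001122100"
```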

In the next blog in this series, I will be approaching the subject of walk cycles.