TotalLipSync is a module for voice-based lip sync. It allows you to play back speech animations that have been synchronized with voice clips. TotalLipSync has all the same capabilities as the voice-based lip sync that is built into AGS, and offers the following additional advantages:

It works with LucasArts-style speech (as well as with Sierra-style and full-screen speech modes).

In particular, Rhubarb support means that lip syncing can be 100% automated (with decent results): no manual tracking of the speech clips is required.

It is more flexible: You can switch speech-styles mid-game, change the phoneme mapping, use files with different data formats, etc.

You don't have to do the phonemes-to-frames mapping manually: The module comes with a default auto-mapping.

How to use

Create the lip sync data files for the speech clips. You can use one of these tools (personally I would recommend Papagayo for manual tracking, and Rhubarb for automatic lip syncing, but the Lip Sync Manager plugin is good too):

The filename of each sync file should be the same as the speech clip's except for the extension, and you need to place them in your compiled game folder (by default, in a folder named "sync/" inside the game folder).

Create the speech animation for your character(s), with different animation frames for the different phonemes (see below), and set it as their speech view.

Download and import the TotalLipSync module into your AGS project.

Make sure your game settings are correct: the AGS built-in lip sync (in the project tree under "Lip sync") should be set to "disabled".

If you are going to use Sierra-style (or full-screen) speech for your lip sync animations, you must create a dummy view. Make sure to give it exactly one loop and one frame. If you name the view TLS_DUMMY it will automatically be used by the module. Otherwise you can set the view to use with TotalLipSync.SetSierraDummyView().

You are now ready to use the module. Add the code to initialize TotalLipSync on startup:
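For example, something like this in your global script (a minimal sketch; ePamelaIgnoreStress is the Pamela-format value mentioned below — the enum value for your own data format, and the exact name of the auto-mapping call, should be checked against the module header):

```ags
// game_start() runs once when the game launches
function game_start()
{
  // Tell TotalLipSync which sync-data format to expect
  // (here: Pamela files, ignoring the vowel stress markers)
  TotalLipSync.Init(ePamelaIgnoreStress);
  // Apply the module's default phonemes-to-frames mapping
  // (assumed call name; verify against the module documentation)
  TotalLipSync.AutoMapPhonemes();
}
```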

And that's all there is to it! (If you don't use a speech clip prefix, or if there is no matching sync file, the speech animation won't play at all.)

Phoneme-to-frame mappings

The principle of lip syncing is that different sounds (phonemes) correspond to different mouth shapes. If we display an animation frame with the right mouth shape at the same time as that sound appears in the audio being played, the animation will seem to match the speech. The first step, then, is to identify the phonemes in the speech and their timing (that's what the tools listed above are for), and the second step is to choose an appropriate animation frame for each phoneme. We usually don't use a different animation frame for every phoneme, so we combine phonemes into groups that are all mapped to a single frame. The different tools have different sets of phonemes (or phoneme groups), so we have to define different mappings from phonemes to frames.

So here is the default mapping for each data format used by TotalLipSync. It has been set up for a speech animation with ten different frames, each representing a different mouth position. (This is a fairly standard setup.) If you stick to these frames and these mappings, you can use the same speech view no matter what lip sync tool or data format you use:

Pamela and Annosoft SAPI 5.1 LipSync use almost exactly the same phoneme sets, with only minor variations (TotalLipSync is not case sensitive). Pamela can tag vowels with three levels of stress (0-2), e.g. AY0, UW1. This is not particularly useful in AGS and should be ignored (by initializing with TotalLipSync.Init(ePamelaIgnoreStress)) unless there's a good reason not to. Anyway, here's the full list and what they represent (the ones where Annosoft differs are emphasized):

The script doesn't call Rhubarb; you'll have to do all of that yourself. Take all the speech clips, convert them to .wav if necessary, copy them into the Rhubarb directory, and for each one, call "rhubarb.exe myclip.wav > myclip.tsv" (where "myclip" is the name of the clip). You can also put the text corresponding to each clip in individual .txt files to assist with the speech recognition, in which case you'd call "rhubarb.exe myclip.wav -d myclip.txt > myclip.tsv". Once that's done, copy all the .tsv files over into the directory of your compiled AGS game, and the module will read them.

Obviously this process is tedious, and Rhubarb also takes quite a while to process each clip, so if you have more than a couple of dozen clips you'll definitely want to automate it (you could write a batch file that goes through and processes each .wav file in the directory).

In fact, I wrote a very simple version of such a batch file:

Code: Batch

for %%F in (clips/*.wav) do (
    rhubarb.exe clips/%%~nxF -d guide/%%~nF.txt > sync/%%~nF.tsv
)

This assumes that the voice clips are in a folder called "clips/" inside the Rhubarb directory, that the text files are in a folder called "guide/", and that there is a folder called "sync/" where the .tsv files will be written. It also requires a .txt file for each .wav file. So there are a lot of possible improvements. Save this in a text file in the Rhubarb directory and name it something like agsbatch.bat, and you can run it to process all the speech clips in one go (which might take a while!).
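For instance, one small improvement (a sketch along the same lines, untested; it assumes the same clips/, guide/ and sync/ folder layout) is to fall back to running Rhubarb without a guide when no matching .txt file exists:

```bat
@echo off
for %%F in (clips/*.wav) do (
    if exist "guide/%%~nF.txt" (
        rem A guide text exists: use it to improve recognition
        rhubarb.exe "clips/%%~nxF" -d "guide/%%~nF.txt" > "sync/%%~nF.tsv"
    ) else (
        rem No guide text: let Rhubarb analyze the audio alone
        rhubarb.exe "clips/%%~nxF" > "sync/%%~nF.tsv"
    )
)
```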

Oh, nice! Just what I needed! And the drawing one is just the perfect reference for me to try and "copy" into the Blender models. I've been doing 6-to-8-frame lip sync, so jumping to 10 doesn't feel that scary.

Okay, I've converted all files to TSV etc. One last question: where do I put the TSV files?
My game files are in K:\
The speech (as .OGG files) is in K:\gamename\Speech\
The WAV files for the speech are in K:\gamename\Speech\WAV (I use .OGG files)

Do I put the TSV's in the Speech dir?...

I notice '$INSTALLDIR$/sync' - is this "C:\Program Files (x86)\Adventure Game Studio 3.4.0\sync" or "my installed game\sync" dir, or "my currently un-installed un-compiled game directory\sync", or...?

While you're working on the game, it's – in your case – "K:\gamename\Compiled\sync" or "K:\gamename\Compiled\Windows\sync" (I believe AGS will read both directories). "K:\gamename\Compiled\Windows" is probably the directory you will ultimately distribute once the game is finished (unless you're aiming for another platform), so that's where I would put it.


$INSTALLDIR$ is the directory where the main game data file (*.exe or otherwise) is located at the moment you run it.

AGS does not normally check more than one directory; there are special rules only when running from under the Editor (debugger mode): in that case the Editor passes a couple of alternative paths to the engine (one includes AudioCache, for instance).

I'm using a different movement view and speech view for my character. Before this, no bug; now the game will pick a seemingly random point and then crash with a runtime error:

But then, if I toggle breakpoints in the file in one of the sections leading to the error (line 649), I can do the same thing, but no crash:

This would appear to be a timing issue, as I've checked everything else; all frames, movement and speech are the same as the original character's, nothing is missing. No idea how the module works, though; it appears to be crashing on speechstyle=lucasarts...?

Ah, good! As I was just about to write, there are two things to check first:

- What version of the module are you using?
- Does your speech view have enough frames (in each direction)? It should be 10 for the default mapping.

Rhubarb only has 9 different phonemes/mouth positions (it uses the same frame for W as for U/OO sounds), but the auto-mapping still assumes a 10-frame speech view (W, frame 7, will never be used, so you can leave it blank or make it the same as frame 6) for consistency with the other formats. Also note that the order of the frames is not as listed on the Rhubarb page, but as in the table (behind spoiler tags) in the first post in this thread.

However, I'll see about adding some checking so the module can give a more informative error message if this happens.

I am curious - haven't yet tried a compiled game... If my sync/ folder is in Compiled/, but my actual game is in Compiled/Windows/ - does this mean I include the sync directory in Windows/ too? Or are the sync/ files just compiled into the game EXE?

AFAIK when you run the game from the Editor (with F5), the game does not look into Compiled at all; it gets files and subfolders that are located right in the project's root folder.

But when you order "Build Exe", it builds to Compiled/Windows. It won't put extra files there automatically, though, so you would need to copy them over when preparing the package. (EDIT: actually, just the files from Compiled get copied into Compiled/Windows, but not subfolders.)

Exactly how does the EXE reference its own install dir? Should I make it '/sync/' or 'C:/moci/Compile/Windows/sync' or...? It seems the directory where the TSV files are stored is hard-coded in, so it will only work if the game is in '/programme files (x86)/moci/sync' and nothing else -- which makes it hard for testing.

AFAIK when you run the game from the Editor (with F5), game does not look into Compiled at all, it gets files and subfolders that are located right in the project's root folder.

Well, I just tested it, and if you have a "sync" folder inside "Compiled", it will be read when you run the game from the editor and reference files in "$INSTALLDIR$/sync". (But you are correct that the directory is not automatically copied into "Compiled/Windows/sync"; you have to do that yourself when you're ready to distribute the game.)


Oh right... probably I had a moment of amnesia; it was just a couple of months ago that I was working around that, because in 3.4.1 the Compiled folder is now Compiled/Data.

So, yes, when debugging from the Editor the rules are a bit complicated; the "installdir" is actually "made up" of three folders:
- the working directory, which is the project's root folder: for example, the game takes font files from there
- the Compiled folder: I think this was made to take speech.vox from there, and maybe something else, like translation files
- the AudioCache folder: it takes audio files from there

Basically it looks in the project root first, and if it doesn't find the needed files, it also checks either Compiled or AudioCache, depending on what kind of material it is looking for.

Why it was made this way: I think to speed up compilation when testing, because the Editor does not have to gather/package all those files every time.

For the record, I think this is a good thing, because it means you can keep all your "to be distributed" files in "Compiled/", and you don't need separate copies for each target platform (with the potential nightmare of keeping all the copies in sync).

(I might split this whole discussion off as a separate thread, since it doesn't really have much to do with this module specifically.)

I wrote a little script to help people who use Rhubarb to do lip-syncing.

Rhubarb does the lip-sync tracking automatically by analyzing the audio, but you can also supply the actual text of the dialog to help guide it, improving the results. Using something like this batch file, you can then create the lip-sync tracking for the entire game automatically:

Code: Batch

for %%F in (clips/*.wav) do (
    rhubarb.exe clips/%%~nxF -d guide/%%~nF.txt > sync/%%~nF.tsv
)

There hasn't been a convenient way to create these guide files from AGS, however. This script helps to automate the task.

It works with the voice acting HTML scripts generated by the Speech Center plugin. Once you've created the voice acting scripts, place them in a subfolder (e.g. /VoiceScripts) inside your compiled game folder. Place a call to this function in your game, and run it to extract the guide files, like so: