OMG it's the Machinima Blog!

Phonetics tutorial

First of all, I'm going to give you an explanation of phonetics. Don't worry, it'll be gaming-oriented.
Phonetics is a way of describing, in symbols, how we use our mouth, tongue, etc. to make sounds (usually words). This might sound easy, and it is once you get used to the three ways of transcribing: the International Phonetic Alphabet, the American Phonetic Alphabet and the Faceposer Phonetic Alphabet. As you can guess, the second one is used in America, the first around the rest of the globe, and the third one is the one we want.
The Faceposer Phonetic Alphabet (from now on called the FPA) tells the Source engine how the mouth of the in-game character(s) should move. This is also called lip syncing.
Some game engines, such as the one used for Star Trek: Elite Force 2, have a tool for roughly converting speech into mouth movement. That might seem easy, but it won't give accurate results, because our jaw and tongue aren't the only things that move.
The Source engine, however, uses a phonetics system, so the character really seems to be talking. Much better than what the old engines could do.

OK, enough talk. Down to business.
For this tutorial I expect that you've read at least some material on how to work with Faceposer. If you haven't, check the URL at the bottom of the page; you'll learn a lot from it.
Once you've opened Faceposer, open the Phoneme Editor.
Load in the sound file you want the actor to animate.
Click Re-extract, enter the sentence that is said, and click OK.
One of three things will happen now:
1) Phonetic translation complete (green text).
The program recognized how the words were said.
Just check them for small errors; the translation isn't completely bug-free.
2) Phonetic translation almost complete (yellow text).
The program recognized most of the words, or isn't sure it understood a word correctly.
Check/add/edit the phonetics.
3) Phonetic translation failed (red text).
This is what happens if you use Vista or higher, since Valve broke the extraction on those operating systems.
You have to add the phonetics manually. This tutorial only covers what to do when you encounter cases 2 and 3.

First of all, you need to resize the blocks containing the words to match the duration of what is said in the file. You can do this by selecting the word block (press Escape to deselect); to resize it, hold Ctrl and left-click + drag the edges of the text block to match the word as it is said in the file. Shift + drag moves the block. You might find that when you shrink word 1, word 2 grows, and vice versa. To separate the words, click on both of them, right-click, and select "Separate words".

Once all the word blocks are matched up with what is said in the file, select one block, right-click and choose "Add phoneme to 'WORD'". Now you'll need the phonetic translation of the word. Use the conversion file that is attached below. The file works as follows:

B: Big Voiced bilabial stop

B is the FPA translation, the example word ("Big" here) shows how this phoneme sounds, and the description at the end is only used when you have more than one choice, such as with ER and ER2. It tells you where in the mouth the phoneme is produced. More information on the terminology: here
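In other words, the conversion file is essentially a lookup table from words to phoneme sequences. If you find yourself looking words up a lot, the process can be sketched in a few lines of Python. The dictionary below is a tiny hypothetical sample I made up for illustration (the symbols follow the ARPAbet-style names the FPA uses); consult the attached conversion file for the real set.

```python
# A tiny, hypothetical sample of word -> FPA phoneme sequences.
# These entries are illustrative only; use the attached
# conversion file for real transcriptions.
PHONEME_DICT = {
    "big":   ["B", "IH", "G"],
    "hello": ["HH", "EH", "L", "OW"],
}

def to_phonemes(sentence):
    """Translate a sentence word by word; unknown words come back
    as None so you know which ones to transcribe by hand."""
    return [(word, PHONEME_DICT.get(word))
            for word in sentence.lower().split()]

for word, phonemes in to_phonemes("Hello big world"):
    if phonemes is None:
        print(word, "-> (not in dictionary, transcribe manually)")
    else:
        print(word, "->", " ".join(phonemes))
```

The point is just that each word block you created gets one such phoneme sequence; anything the dictionary doesn't know, you work out by ear.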

Once you've clicked in the right phoneme sequence, hit OK. Then, the same way you did with the word blocks, edit the size of the phonemes.

Repeat the phoneme process for all the words, and voilà! Hit Save, and the phonetics will be appended to the end of the WAV file so you can work on it later.
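That "appended to the end of the WAV file" detail is worth understanding: a WAV file is a RIFF container, a sequence of tagged chunks, so Faceposer can store its phoneme data as an extra chunk after the standard "fmt " and "data" ones without breaking playback. As a rough sketch of the container format (this lists generic RIFF chunks; I'm not showing the specific chunk ID Faceposer uses), a few lines of Python can walk a WAV file's chunks:

```python
import struct

def list_riff_chunks(data):
    """Return the (chunk_id, size) pairs inside a RIFF/WAVE file.
    Faceposer's phoneme data would show up here as an extra
    chunk after the standard ones."""
    assert data[0:4] == b"RIFF" and data[8:12] == b"WAVE"
    chunks = []
    pos = 12
    while pos + 8 <= len(data):
        chunk_id = data[pos:pos + 4].decode("ascii", "replace")
        size = struct.unpack("<I", data[pos + 4:pos + 8])[0]
        chunks.append((chunk_id, size))
        pos += 8 + size + (size & 1)  # chunks are word-aligned
    return chunks

# Build a minimal in-memory WAV to demonstrate:
fmt = struct.pack("<HHIIHH", 1, 1, 8000, 16000, 2, 16)
body = (b"WAVE"
        + b"fmt " + struct.pack("<I", len(fmt)) + fmt
        + b"data" + struct.pack("<I", 2) + b"\x00\x00")
wav = b"RIFF" + struct.pack("<I", len(body)) + body
print(list_riff_chunks(wav))  # -> [('fmt ', 16), ('data', 2)]
```

This is also why you should keep the WAV file around: the lip-sync work you just did lives inside it.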

In the next tutorial I'll explain what to do with the WAV file you just created.