Software Voice Vowel Detection in AS3

Lest you think that I have come up with the solution to this and you're merely looking for a download link, I have to let you know that I came pretty close but gave up. I'll tell you what I did and why I decided to put the project down. If you've followed the thread on Flashcoders (edit: old by now), you already have some insight. Perhaps this post might get you thinking and you may come up with a workable solution!

While implementing a text-to-speech engine (which returns an on-the-fly .mp3 file), I harnessed the power of SoundMixer.computeSpectrum. This allowed me to pretty easily move the jaw on a character up and down based upon the amplitude of the audio playback. When the jaw isn't moved too drastically, it looks pretty decent.
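For reference, the jaw-bobbing approach can be sketched roughly like this. This is a minimal sketch inside an enterFrame handler; `jaw`, `jawRestY`, and the scale factor of 20 are placeholders for whatever lives in your own document class:

```actionscript
import flash.events.Event;
import flash.media.SoundMixer;
import flash.utils.ByteArray;

// Called every frame while the .mp3 is playing back.
private function onEnterFrame(e:Event):void
{
    var bytes:ByteArray = new ByteArray();
    // false = raw waveform (no FFT); fills 512 floats: 256 left, 256 right.
    SoundMixer.computeSpectrum(bytes, false);

    // Average the absolute amplitude of the left channel.
    var sum:Number = 0;
    for (var i:int = 0; i < 256; i++)
    {
        sum += Math.abs(bytes.readFloat());
    }
    var amplitude:Number = sum / 256;

    // Drive the jaw with the amplitude; 20 is an arbitrary scale factor.
    jaw.y = jawRestY + amplitude * 20;
}
```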

But what I really wanted to do was shape the mouth to match the audio as closely as I could. Since I was using a software voice (not related in any way to Mac OS X's voices), I could, in theory, match patterns more accurately.

I began by creating a spectrum analyzer so I could evaluate the readFloat values coming through SoundMixer. I wanted to generate vowel "patterns" of values that I could store and later match against on the fly. I added an input text field and a speak button. A handy array in my spectrum class would gobble up values as they poured through SoundMixer, and another button would trace out all of the values captured. Yes, I simply ran this application for each vowel I entered and played back. I ignored all zero values for each vowel, as there were tons of these, mostly at the beginning and end of the audio file playing back.
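The capture side of that analyzer might look something like this sketch, where `captured` and `onDumpClick` are hypothetical names for the array and the trace button handler described above:

```actionscript
import flash.events.Event;
import flash.events.MouseEvent;
import flash.media.SoundMixer;
import flash.utils.ByteArray;

private var captured:Array = [];

// 256 left-channel values per enterFrame, appended in playback order.
private function onEnterFrame(e:Event):void
{
    var bytes:ByteArray = new ByteArray();
    SoundMixer.computeSpectrum(bytes, false);
    for (var i:int = 0; i < 256; i++)
    {
        captured.push(bytes.readFloat());
    }
}

// Wired to the dump button: trace everything captured so far,
// skipping the zero values that pad the start and end of playback.
private function onDumpClick(e:MouseEvent):void
{
    for each (var level:Number in captured)
    {
        if (level != 0) trace(level);
    }
}
```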

So I ran and collected values for naked vowels ("a", "e", "i", "o", and "u"). Granted, even when gathering values on naked vowels, there is sometimes a little variation in the resulting values. For every enterFrame I collected 256 values. As an example, here are the values for the vowel "e" (for my software voice, non-zero values, beginning to end; I chose it because it was the vowel with the least data associated with it):

There you have it. My “e”. I created arrays for each vowel that serve for lookup.

Then, while the sound was playing through, I'd look for a starting value (or something close to it), then keep checking whether subsequent values stayed close to the pattern (excluding zero values, of course). readFloat reads from the ByteArray's current position and advances it, so each call returns the next value in the byte array. Anyway, I found that you in no way need to check every single value in the current byte array against the pattern array. Sometimes checking just the first one or two values worked a charm, and you can probably get away with checking every 10th value or so; being extremely strict risks rejecting values that should have matched. Once a match is deemed impossible, stop checking against that vowel; it has already failed before the full pattern match completed, so finishing it just wastes cycles.
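The sparse, early-exit comparison described above could be sketched like this; `stride` and `tolerance` are assumed tuning values, not numbers from my actual project:

```actionscript
// Compare incoming values against one stored vowel pattern.
// Checks only every Nth value and bails out on the first miss,
// so a vowel that has already failed stops costing cycles immediately.
private function matchesPattern(live:Array, pattern:Array,
                                stride:int = 10,
                                tolerance:Number = 0.005):Boolean
{
    var count:int = int(Math.min(live.length, pattern.length));
    for (var i:int = 0; i < count; i += stride)
    {
        if (pattern[i] == 0) continue; // skip the zero padding
        if (Math.abs(live[i] - pattern[i]) > tolerance)
        {
            return false; // early exit: this vowel has already failed
        }
    }
    return true;
}
```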

Those pattern values have many decimal places, which also risks throwing off matches. So when comparing, I used Number(level.toFixed(3)), and this seemed to work pretty well. I left the values untrimmed when stuffing them into the pattern arrays, which keeps full precision around and gives me some flexibility later.
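In other words, round only at comparison time; `bytes` and `pattern` here stand in for the byte array and pattern array from the earlier steps:

```actionscript
var raw:Number = bytes.readFloat();
// toFixed(3) returns a String, so wrap it back into a Number.
var rounded:Number = Number(raw.toFixed(3));

pattern.push(raw);  // store full precision in the pattern array
// ...but use `rounded` when testing against a stored pattern value.
```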

After a positive test for a vowel, I dispatch a custom event so I can manipulate a mouth in the document class, or do whatever I’d like. Testing has gone pretty well. It’s working.
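A minimal custom event for this could look like the following sketch; `VowelEvent` and its field name are hypothetical, but the clone() override is required for any AS3 custom event that might be redispatched:

```actionscript
import flash.events.Event;

public class VowelEvent extends Event
{
    public static const VOWEL_DETECTED:String = "vowelDetected";

    public var vowel:String; // e.g. "e"

    public function VowelEvent(type:String, vowel:String)
    {
        super(type);
        this.vowel = vowel;
    }

    override public function clone():Event
    {
        return new VowelEvent(type, vowel);
    }
}

// In the analyzer, after a positive test:
//   dispatchEvent(new VowelEvent(VowelEvent.VOWEL_DETECTED, "e"));
// In the document class:
//   analyzer.addEventListener(VowelEvent.VOWEL_DETECTED, onVowel);
```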

Now, one also has to consider that vowels within words sound quite different. So one has to pull out those parts as patterns too: "A", "Ahh", "Cat", "Dart", etc. At that point they aren't serving as vowels anymore but as sounds, and each of those would need to be added.

Why did I stop work on this project?
Well, this vowel recognizer I’ve coded up ONLY works for this exact software voice I am using. Which means if you tried using it, it wouldn’t work for you unless you were using the exact same text-to-speech voice that I am using. That’s a bummer. You’d have to generate your own patterns for the voice you were using. And trust me, it’s a bit of a pain in the ass.

I've noticed that even just testing with a few vowels, I'd get a little response lag. That's to be expected. The only way this would be perfect is if you knew the match exactly as it happened, or slightly ahead of time. My system merely evaluates what has already passed through and acts as soon as it can, which at times is visually disturbing. With all the effort, I'm not sure it's always worth it. When it happens quickly, it's pretty awesome. I have also noticed that at times, in normal words (not naked vowels), the system detects the vowel sounds and fires the proper events. That's pretty awesome.

This project started off as a nice to have feature. Purely visual. I wanted to see if I could quickly come up with a solution.

I looked at many things:

somehow using dynamic cue points.

splitting the known text string into words, listening for pauses so I could walk through to the word currently being spoken, and evaluating that word (note: this may still be a decent way to go, as long as you're supplying the text string being converted to voice).

matching visual bitmap representations of waveforms.

Some other stuff I can't quite remember.
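The word-walking idea from that list could be sketched roughly like this; the amplitude source, the 0.001 silence threshold, and the three-quiet-frames rule are all assumptions for illustration, not something I actually built:

```actionscript
import flash.events.Event;

private var words:Array;
private var wordIndex:int = 0;
private var quietFrames:int = 0;

// Called when playback of the known text begins.
private function startSpeaking(text:String):void
{
    words = text.split(" ");
    wordIndex = 0;
}

private function onEnterFrame(e:Event):void
{
    // currentAmplitude() is a stand-in for an average computed
    // from SoundMixer.computeSpectrum, as in the jaw example.
    var amplitude:Number = currentAmplitude();
    if (amplitude < 0.001)
    {
        quietFrames++;
        // A few consecutive quiet frames = a pause between words.
        if (quietFrames == 3 && wordIndex < words.length - 1)
        {
            wordIndex++; // words[wordIndex] is now being spoken
        }
    }
    else
    {
        quietFrames = 0;
    }
}
```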

So my solution almost works, but it's not quite fast enough, and it's strictly tied to one exact software voice; even if just the pitch or tempo were to change, my system would break.

So there you have it. I’ll leave it in place for my local musings. I wonder if anyone has tackled this problem before or might plan on doing so. I’d be curious to see what you did or what you come up with. Cheers.