Visual Speech Recognition - When Will HAL Read Lips For Real?

Visual Speech Recognition (VSR), also known as automated lip reading, is a field with a special meaning for science fiction fans. In the film 2001: A Space Odyssey, the HAL 9000 computer was able to read lips.

(HAL 9000 [background] eavesdrops on astronauts Poole and Bowman)

In the film, HAL's increasingly erratic behavior becomes a matter of concern for the astronauts. Since HAL can effectively monitor every part of the ship, the astronauts retire to a small pod to discuss the matter. Unfortunately, HAL can read lips, and so he was on to them, with very unfortunate results for Poole.

In a recent paper, Ahmad Hassanat at Mu’tah University in Jordan provides a review of existing approaches, and suggestions for moving forward with VSR. He also outlines some of the challenges in actually creating a computer able to read lips, like the fictional HAL 9000.

The fundamental process of lip reading is to recognize a sequence of shapes formed by the mouth and then match it to a specific word or sequence of words.

There is a significant challenge here. During speech, the mouth forms between 10 and 14 different shapes, known as visemes. By contrast, speech contains around 50 individual sounds known as phonemes. So a single viseme can represent several different phonemes.

And therein lies the problem. A sequence of visemes cannot usually be matched to a unique word or sequence of words; instead, it admits several different interpretations.
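This many-to-one mapping is easy to sketch in a few lines of Python. The viseme labels and phoneme groupings below are illustrative assumptions, not the inventory used in Hassanat's paper:

```python
from itertools import product

# Hypothetical viseme classes. Real inventories (10 to 14 classes
# covering around 50 phonemes) vary from author to author.
VISEME_TO_PHONEMES = {
    "bilabial":    ["p", "b", "m"],   # lips pressed together
    "labiodental": ["f", "v"],        # lower lip against upper teeth
    "open":        ["ae", "eh"],      # open-mouth vowels
}

def candidate_phoneme_sequences(visemes):
    """Expand a viseme sequence into every phoneme sequence it could encode."""
    options = [VISEME_TO_PHONEMES[v] for v in visemes]
    return list(product(*options))

# Even a three-viseme utterance is ambiguous: 3 * 2 * 3 = 18 readings.
candidates = candidate_phoneme_sequences(["bilabial", "open", "bilabial"])
```

In a working system, this ambiguity is typically resolved by a language model that scores which of the candidate word sequences is most probable in context.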

The first problem for automated lip reading is face and lip recognition. This has improved in leaps and bounds in recent years. A more difficult challenge is in recognizing, extracting and categorizing the geometric features of the lips during speech.

This is done by measuring the height and width of the lips as well as other features such as the shape of the ellipse bounding the lips, how much of the teeth is on view, and the redness of the image, which indicates how much of the tongue is visible.

Determining the exact contour of the lips is hard because of the relatively small difference between pixels showing face and lips.
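Once a lip region has been segmented, the geometric measurements described above are straightforward to compute. The sketch below is a minimal illustration, assuming a boolean mask of lip pixels and an RGB image as NumPy arrays; the feature set only loosely follows the measures mentioned in the text:

```python
import numpy as np

def lip_features(mask, rgb):
    """Measure simple geometric and appearance features of a lip region.

    mask: HxW boolean array marking lip pixels (a hypothetical
          segmentation output); rgb: HxWx3 image array.
    """
    ys, xs = np.nonzero(mask)
    height = int(ys.max() - ys.min() + 1)   # vertical lip opening
    width = int(xs.max() - xs.min() + 1)    # horizontal lip extent
    aspect = width / height                 # crude proxy for the bounding
                                            # ellipse's elongation
    redness = float(rgb[mask][:, 0].mean()) # mean red channel over lip
                                            # pixels; a cue for tongue
    return {"height": height, "width": width,
            "aspect": aspect, "redness": redness}
```

The hard part in practice is producing the mask itself: as the text notes, the pixel values of lips and surrounding skin differ only slightly, so the contour estimate is noisy, and every downstream feature inherits that noise.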

Another problem is that some people are more expressive with their lips than others, so it is easier to interpret what they are saying from lip movements alone. Indeed, some people hardly move their lips at all, and these so-called “visual-speechless persons” are almost impossible to interpret.

Hassanat’s own visual speech recognition system is remarkably good. His experiments achieve an average success rate of 76 percent, albeit in carefully controlled conditions. The success rate is even higher for women because of the absence of beards and mustaches.

Technovelgy readers may want to recall that, even in the surveillance classic 1984, the telescreen was always on, but whether or not someone was watching was never clear.

“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork.”

With Visual Speech Recognition, though, your conversations could be monitored by machines even when no person is watching.