"There are many people who have perfect use of their voice who don't have use of their hands and arms," said Jeffrey Bilmes, a University of Washington associate professor of electrical engineering. "I think there are several reasons why Vocal Joystick might be a better approach, or at least a viable alternative, to brain-computer interfaces."

Vocal Joystick detects sounds 100 times a second and instantaneously turns each sound into movement on the screen. Different vowel sounds dictate the direction: "ah," "ee," "aw," "oo" and four other vowel sounds move the cursor in one of eight directions. Users can transition smoothly from one vowel to another, and louder sounds make the cursor move faster. The sounds "k" and "ch" simulate clicking and releasing the mouse buttons.
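The mapping described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the project's actual code: the real system classifies vowels acoustically about 100 times a second, while here the vowel label and loudness are assumed to be already given, and the particular vowel-to-direction assignments and the `gain` constant are placeholders.

```python
# Hypothetical sketch of the Vocal Joystick control mapping: a vowel
# picks one of eight directions, and loudness scales cursor speed.

# Which vowel maps to which direction is an assumption for illustration;
# the article names four of the eight vowel sounds.
DIRECTIONS = {
    "ah": (0.0, -1.0),   # down
    "ee": (-1.0, 0.0),   # left
    "aw": (1.0, 0.0),    # right
    "oo": (0.0, 1.0),    # up
    # four more vowel classes would fill the diagonal directions
}

def cursor_step(vowel, loudness, gain=10.0):
    """Return the (dx, dy) cursor step for one 10 ms analysis frame.

    Louder sounds move the cursor faster, as the article describes;
    `gain` is an arbitrary scaling constant, and unrecognized sounds
    leave the cursor still.
    """
    dx, dy = DIRECTIONS.get(vowel, (0.0, 0.0))
    speed = gain * loudness
    return (dx * speed, dy * speed)
```

Running this mapping once per analysis frame yields smooth, continuous motion: as the user glides between vowels and varies volume, the per-frame steps change direction and magnitude accordingly.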

Versions of Vocal Joystick exist for browsing the Web, drawing on a screen, controlling a cursor and playing a video game. A version also exists for operating a robotic arm, and Bilmes believes the technology could be used to control an electronic wheelchair. Vocal Joystick requires only a microphone, a computer with a standard sound card and a user who can produce vocal sounds.

"A lot of people ask: 'Why don't you just use speech recognition?'" Bilmes said. "It would be very slow to move a cursor using discrete commands like 'move right' or 'go faster.' The voice, however, is able to do continuous commands quickly and easily." Early tests suggest that an experienced user of Vocal Joystick would have as much control as someone using a handheld device.

The newest development uses Vocal Joystick to control a robotic arm. The pitch of the tone moves the arm up and down; other commands are unchanged. This is the first time that vocal commands have been used to control a three-dimensional object, Bilmes said.
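The pitch-to-height control for the robotic arm could work as a simple continuous mapping like the one sketched below. This is an assumption for illustration only: the pitch range, arm limits, and linear mapping are invented values, not details from the source.

```python
# Hypothetical sketch: mapping voice pitch to robotic-arm height.
# The Hz range and height limits are assumed, not from the article.

def arm_height(pitch_hz, lo=100.0, hi=300.0, z_min=0.0, z_max=1.0):
    """Linearly map a pitch in [lo, hi] Hz to a height in [z_min, z_max].

    Pitches outside the range are clamped so the arm stays within
    its travel limits.
    """
    t = (pitch_hz - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))  # clamp out-of-range pitches
    return z_min + t * (z_max - z_min)
```

Because pitch varies continuously, a mapping of this kind would give the same fluid control over the vertical axis that the vowel sounds give over the two screen axes.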