More than three decades ago, the film "2001: A Space Odyssey" introduced audiences to a chillingly malevolent supercomputer named HAL. In what was then the far-off future, the film imagined machines that could speak and understand human speech (and even read lips!). HAL became an instant symbol for the dangers of runaway technology.

While artificial intelligence researchers have managed thus far to avoid creating monsters like HAL, the idea of humans and computers speaking to each other is no longer the stuff of science fiction. It is instead the driving force behind the growing discipline of computational linguistics, which studies the computational aspects of human language.

"Basic speech recognition systems have now become commonplace," says Julia Hirschberg, who joined the Department of Computer Science in Fall 2002. "Researchers today are moving into some very interesting and complex areas. We're looking at how to enable computers to recognize speech errors, perform audio browsing and retrieval of email, and recognize and produce emotional speech."

Hirschberg received a Ph.D. in computer science from the University of Pennsylvania in 1985. That same year, she began working in the linguistics research department at AT&T Bell Laboratories. In 1996, she moved to AT&T Labs - Research, specializing in human-computer interface research.

Hirschberg is part of a Columbia team researching speech summarization, an area of natural language processing that enables systems to summarize spoken language. The Columbia Natural Language Processing Group, headed by Kathy McKeown, has received grants from NSF and DARPA for research in this area.

"We want to learn what speech cues will lead a system to understand what is important to summarize," Hirschberg says. "Although a lot of research has been done on text summarization, not too many people are trying to produce spoken summaries. It could lead to applications that summarize things like news broadcasts or voice mail messages in spoken form."

Another focus of Hirschberg's research at Columbia is emotional speech, which computer scientists are studying from the perspectives of both speech recognition and speech generation.

"There is an effort underway to enable computers to recognize emotions in human speech," she said. "Work is also being done to help computers generate emotional speech. Both of these would have enormous potential for commercial applications."

Applications might include a tutorial system that could recognize confidence, or the lack thereof, in a student, and customize its lessons accordingly. On the generation side, text-to-speech systems that speak emotionally could have a number of consumer uses, from books for the blind to computer games.