What’s Next After Touch Computing?

Computers that see and hear people could boost productivity, collaboration.

Humans can typically understand spoken words, hand gestures and facial expressions at an early age. Yet computers, even after decades of evolution, still struggle to interpret them. That’s about to change, according to tech industry experts who see so-called perceptual computing as the next step in controlling computer devices.

Touch interfaces are changing the way people interact with computing devices, but hand gestures and voice recognition will make interactions even more human.

“Touch is dramatically changing the way we interact with our devices, but using voice, gestures and other human expressions will bring this to a whole other level,” said John Bergquist, a communications director at software companies Soma Games and Code Monkeys.

Touch functionality has become integral to smartphones and tablets and is showing up on an increasing number of Ultrabooks and all-in-one computers.

Looking ahead, Bergquist is curious to see how quickly and easily young kids will take to using speech, hand gestures and even facial expressions to control their computing experiences.

The technology required to use those inputs is already available, but not widely used. Many computing devices today have cameras, microphones and speakers that can be trained to react to owners, according to Anil Nanduri, director of perceptual computing solutions and products at Intel.

“Until now, it has been us engaging with the machine, but now the machines have the ability to engage you,” said Nanduri. “Computers have enough performance to manage a vast amount of data, a lot more than we can process in real time through our brains.”

Nanduri notes that the cameras and microphones built into personal computing devices expected to be available later this year will work like eyes and ears trained on their owners. “When computers can see and understand their owners, they can help people be more productive and collaborative,” he said.

Perceptual computing will allow people to simply ask their computers to play a song, search the Internet, send a tweet or post a picture to Facebook, all without having to type, tap a touchpad or touch the screen, according to Nanduri.

“It’s not about this replacing touch or replacing a keyboard,” he said. “It’s about having the best experience and interaction with a particular computing device.”
