Is 'vision' the next-gen must-have user interface?

Earlier this month, IMS Research (Austin, Texas) issued a press release asking whether Apple and the iPad are falling behind competitors in user interface technologies.

The point of the research company’s commentary, to paraphrase, was: Sure, Apple has changed the game by bringing touch-screen interaction to the masses; but is that all? Shouldn’t Apple also be embracing embedded vision technologies in its next product release?

The industry still wants to know: Where will the battle lines be drawn for the next-generation user interface, beyond touch? Will it be gesture, motion, or voice? What about mental telepathy?

A growing number of FPGA, DSP, and processor companies are now betting their futures on embedded vision.

Contributing to this movement are: a) the growing processing power (exploiting parallelism) of embedded systems, and b) increasingly sophisticated machine-vision algorithms that let embedded systems not just see, but extract useful information from what they see.
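To make “extract information” concrete, here is a minimal sketch, in Python with OpenCV, of a system that doesn’t just capture pixels but reports how many people are in view. The file name and detector parameters are illustrative assumptions, not drawn from any product mentioned in this story.

```python
import cv2  # pip install opencv-python

def count_faces(image_path: str) -> int:
    """Return the number of frontal faces detected in an image."""
    # The Haar-cascade model file ships with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Typical starting values for the detector; a real system would tune these.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

print(count_faces("frame.jpg"))  # e.g. 2 if two people face the camera
```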

Jeff Bier, president of Berkeley Design Technology, Inc., said, “Thanks to Microsoft’s Kinect (used with the Xbox 360), we now have ‘existence proof’ for embedded vision. We now know it works.”

Consumers, moreover, are becoming more familiar with gesture controls, and automotive manufacturers are integrating embedded-vision applications into cars to improve driver safety.

Jon Cropley, principal analyst at IMS Research, said the market for intelligent automotive camera modules alone was estimated at around $300 million in 2011 and is forecast to grow at an average annual rate of over 30% through 2015.

Meanwhile, the market for intelligent video surveillance devices (devices with embedded analytics) was estimated at about $250 million in 2011 and is forecast to grow at more than 20% annually through 2015, he added.

Biggest, and perhaps most established, is the market for industrial machine vision hardware (smart sensors, smart cameras, compact vision systems, and machine vision cameras). Cropley estimates it at around $1.5 billion in 2011, growing at an average annual rate of over 10% through 2015.
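Taking the quoted 2011 bases and the minimum growth rates at face value, a quick compounding exercise shows roughly what those forecasts imply for 2015. This is a back-of-the-envelope sketch, not IMS’s own model:

```python
# Compound the 2011 base figures at the quoted annual growth rates,
# 2011 -> 2015 (four compounding periods).
markets = {
    # name: (2011 estimate in $M, quoted minimum annual growth rate)
    "automotive camera modules": (300, 0.30),
    "intelligent video surveillance": (250, 0.20),
    "industrial machine vision": (1500, 0.10),
}

for name, (base, rate) in markets.items():
    implied_2015 = base * (1 + rate) ** 4
    print(f"{name}: ${base}M in 2011 -> ~${implied_2015:,.0f}M by 2015")

# automotive camera modules: $300M in 2011 -> ~$857M by 2015
# intelligent video surveillance: $250M in 2011 -> ~$518M by 2015
# industrial machine vision: $1500M in 2011 -> ~$2,196M by 2015
```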

But from the engineering community’s standpoint, Bier said, “Many design engineers, generally, just don’t think about vision, and they still don’t know what’s possible.”

Embedded vision is in fact a “classic long-tail story,” Bier said. “There are thousands of applications; and its market is extremely diverse.” Bier founded the Embedded Vision Alliance, an industry association set up to inspire and empower embedded system designers to use vision technology.

Working with the Embedded Vision Alliance, EE Times put together an image gallery that showcases the latest embedded vision-enabled consumer products.

Never miss a word?

That’s a marketing tagline used by Livescribe, a company that developed a platform consisting of a digital pen, digital paper, and software apps. When used with digital paper, Livescribe’s pen -- integrated with an infrared camera and a digital audio recorder -- records a conversation while a participant takes notes.

Digital paper consists of numerous small black dots in patterns essentially invisible to the human eye, but detectable by the pen’s camera. It allows a user to replay portions of a recording by tapping on the notes – no matter how messy the scribble. Not a single word is lost. A reporter’s dream; a politician's nightmare.
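The tap-to-replay trick is conceptually simple: while you write, the pen logs where each stroke landed on the dot grid and when, so tapping a note later becomes a lookup from position to timestamp. The sketch below is a hypothetical illustration of that idea, not Livescribe’s actual (proprietary) implementation; all names and coordinates are invented.

```python
import math

class NoteSession:
    """Hypothetical model of pen strokes synchronized with an audio clock."""

    def __init__(self):
        self.strokes = []  # (x, y, seconds into the recording)

    def record_stroke(self, x: float, y: float, t: float) -> None:
        """Log a pen-down position along with its time into the recording."""
        self.strokes.append((x, y, t))

    def replay_time_for_tap(self, x: float, y: float) -> float:
        """Return the audio timestamp of the stroke nearest the tap."""
        nearest = min(self.strokes, key=lambda s: math.hypot(s[0] - x, s[1] - y))
        return nearest[2]

session = NoteSession()
session.record_stroke(10.0, 42.0, t=3.5)   # "Q2 revenue" written at 0:03.5
session.record_stroke(11.0, 58.0, t=61.0)  # "action items" written at 1:01
print(session.replay_time_for_tap(10.4, 41.5))  # -> 3.5: seek audio to 0:03.5
```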

The more I learn about this "embedded vision" thing, the more fascinating it gets. As Jeff Bier said during the interview, "We have only scratched the surface" as far as embedded vision applications are concerned.
What's your favorite "embedded vision" product?

These applications have been developing for some time, even if not strictly under the name of "embedded vision."
What I'm most intrigued about is applying these techniques to privately owned medical devices, with optional communication to your doctor's office. Any number of tests and diagnoses should be doable this way, and potentially more reliable than current methods.
Another application is self-driving cars. Just yesterday, I commented to my wife that in a few years, we won't believe people could ever have been trusted to drive manually. How reckless of us!

I look forward to putting on a headset that will block my vision with displays providing a superset of what I could see without it. Several sci-fi shows and movies have imagined these in various ways. It might be as "simple" as a high-def, heads-up display giving you quantitative data on whatever you are seeing through the eyes of the headset. This would include automatic IR/visual switching depending on the environment, plus automatic zoom and macro capability. (That's right, fellow 40-plussers, no readers necessary!)

This is the first time I've heard the term embedded vision, and I think it's apt. The kinds of apps coming out really are impressive. I never thought it was possible to tell whether you'd had a little too much alcohol just by having a camera scan your eye. Wow! Wait a minute... put this app on the iPhone, connect the iPhone to the car, perhaps through Bluetooth, and the iPhone could stop the car from starting if your eyes show that the party was really hard. I think this could save lives, don't you?
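Sketching the idea (purely hypothetical; the eye-scan estimator and the vehicle command are invented stand-ins, not real iPhone or automotive APIs):

```python
LEGAL_LIMIT_BAC = 0.08  # example threshold; actual limits vary by jurisdiction

def estimate_bac_from_eye_scan(eye_image) -> float:
    """Stand-in for a camera-based intoxication estimator on the phone."""
    # A real model might analyze pupil response, redness, gaze stability, etc.
    return 0.05  # dummy value for illustration

def send_ignition_command(allow_start: bool) -> None:
    """Stand-in for a Bluetooth message to the car's ignition interlock."""
    print("ignition", "enabled" if allow_start else "locked")

eye_image = None  # placeholder for a frame from the phone's camera
send_ignition_command(estimate_bac_from_eye_scan(eye_image) < LEGAL_LIMIT_BAC)
```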

In my opinion, embedded vision is a risky concept for a UI. You can multitask while listening to music on your headphones or talking into your microphone or mobile, but multitasking becomes almost impossible once your eyes are engaged in something.

It has been a long-awaited dream for developers of audio speech-analyzer code to get help from lip-reading cues; these are a big help in improving the accuracy of speech-to-text (a toy sketch follows below). Only now are we getting ubiquitous cameras and the CPU power to clarify what you are saying to your device.
Very important for low-cost computers for non-literate societies.
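One common way to get that help is late fusion: score each candidate word separately with the audio model and the lip model, then blend the scores. A toy sketch with made-up scores and a hand-picked weight; nothing here reflects a real recognizer:

```python
# Acoustically confusable fricatives; the visible lip shape of /f/ helps.
audio_scores = {"fee": 0.34, "see": 0.33, "she": 0.33}
visual_scores = {"fee": 0.70, "see": 0.15, "she": 0.15}

VISUAL_WEIGHT = 0.4  # how much to trust the lip model (a tuning parameter)

def fused_score(word: str) -> float:
    return ((1 - VISUAL_WEIGHT) * audio_scores[word]
            + VISUAL_WEIGHT * visual_scores[word])

best = max(audio_scores, key=fused_score)
print(best, round(fused_score(best), 3))  # -> fee 0.484
```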