Vision-Based GUI Control: Tobii Makes It So

Shortly before Christmas, I mentioned how Sweden-based Tobii hoped to use eye-tracking algorithms in conjunction with image sensor-equipped automobile interiors to (for example) awaken drivers who were nodding off, or to alert parents when their behind-the-wheel teenagers were paying more attention to text-messaging sessions on their cellphones than to the roadway ahead of them. Not surprisingly, the company also strives to see its technology succeed in other lucrative (high-volume, high-margin, or both) markets. To wit, as both Engadget and the Wall Street Journal recently mentioned, the company plans to demonstrate an eyeballs-augmented user interface for the upcoming Windows 8 at CES next week.

Truth be told, I'm not yet sold on how compelling Tobii's Gaze approach is. Admittedly, the large-tiled Windows 8 Metro interface (also found on the Windows Phone OS) makes a conventional mouse cursor-based approach somewhat overkill. But unlike the spokesperson in the video, I don't find using a trackpad particularly inconvenient. As the video notes, you'll still need a trackpad or mouse to click on a tile or icon after eyeball-highlighting it, as well as to select graphical items smaller than Gaze's unique-discernment threshold. And if an unconventional user interface suits your fancy, an eyes-on approach isn't your only option; image sensor-based gesture schemes are equally feasible.

Which approach is more natural is something I'll need to decide for myself after testing both, which I hope to do soon. For those of you who've already tried eye-tracking and/or gesture interface schemes, what's your opinion?