I get the feeling that ten or twenty years from now, when gestural recognition is finally widespread and reasonably usable, people will look back on the TouchStream much the same way people look back on Doug Engelbart's 1968 NLS demo. "How was that even possible with the technology of the time? Why didn't it change the world then? How come even today's systems can't do all the things that were demonstrated that day?"

The FingerWorks gesture interpretation scheme is far, far ahead of its time. I did my own PhD in human-computer interaction, specializing in low-level interaction patterns, so I was pretty thoroughly familiar with the field. Westerman's work is in an entirely different class from the normal run of HCI research, in the same way that the SR-71 was in a different class from other aircraft of its era.