Invoked Computing: spatial audio and video AR invoked through miming

Summary

Direct interaction with everyday objects augmented with artificial affordances may be an
approach to HCI capable of leveraging natural human
capabilities. Rich Gold once described ubiquitous
computing as an "enchanted village" in which people
discover hidden affordances in everyday objects that act
as human interface "prompt[s]" (R. Gold, "This is not a
pipe." Commun. ACM 36, July 1993).
In this project we explore the reverse scenario: a
ubiquitous intelligence capable of discovering and
instantiating affordances suggested by human beings (as
mimed actions and scenarios involving objects and
drawings). Miming prompts the ubiquitous computing
environment to "condense" onto the real object,
supplementing it with artificial affordances through
common AR techniques. For example: picking up a banana
and bringing it to the ear. The gesture is clear enough:
directional microphones and parametric speakers hidden
in the room make the banana function as a real handset
on the spot.
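The banana-handset scenario suggests a simple
recognize-and-instantiate loop: track the object and the
user's pose, match the mimed (object, gesture) pair
against a table of known affordances, and steer the
room's hidden actuators onto the object. The sketch
below illustrates this loop; all names and the
affordance table are hypothetical illustrations, not the
project's actual software, and sensing and actuation are
stubbed out.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    object_label: str  # e.g. "banana", from an object recognizer
    action: str        # e.g. "raise_to_ear", from a pose tracker
    position: tuple    # object position in room coordinates (m)

# Hypothetical table mapping mimed (object, action) pairs to affordances.
AFFORDANCES = {
    ("banana", "raise_to_ear"): "handset",
    ("pizza_box", "open_and_type"): "laptop",
}

def instantiate(affordance: str, position: tuple) -> None:
    """Stand-in for steering the room's hidden AR actuators."""
    if affordance == "handset":
        # A directional microphone picks up speech near the object while a
        # parametric speaker beams audio at it: the banana becomes a phone.
        print(f"aim directional mic + parametric speaker at {position}")
    elif affordance == "laptop":
        # A projector overlays a live screen on the opened box lid.
        print(f"project display onto surface at {position}")

def on_gesture(g: Gesture) -> None:
    """Recognize the suggested affordance and instantiate it, if known."""
    affordance = AFFORDANCES.get((g.object_label, g.action))
    if affordance is not None:
        instantiate(affordance, g.position)

# The user lifts a banana to their ear.
on_gesture(Gesture("banana", "raise_to_ear", (1.2, 0.3, 1.5)))
```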

In other words, the aim of the "invoked computing"
project is to develop a multi-modal AR system able to
turn everyday objects into computer interfaces and
communication devices on the spot. To "invoke" an
application, the user just needs to mime a specific
scenario. The system tries to recognize the suggested
affordance and instantiate the represented function
through AR techniques (another example: to invoke a
laptop computer, the user could take a pizza box, open
it and "type" on its surface). Here we are interested in
developing a multi-modal AR system able to augment
objects with video as well as sound using this
interaction paradigm.
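Instantiating an affordance ultimately amounts to aiming
a hidden actuator (directional microphone and parametric
speaker for audio, a steerable projector for video) at
the tracked object. A minimal sketch of the pointing
geometry follows, assuming a ceiling mount with pan/tilt
control; the mount position and function names are
illustrative assumptions, not the installation's actual
setup.

```python
import math

def aim_angles(mount, target):
    """Pan/tilt (degrees) to point a ceiling-mounted actuator at a target.

    Both arguments are (x, y, z) room coordinates in meters.
    """
    dx, dy, dz = (t - m for t, m in zip(target, mount))
    pan = math.degrees(math.atan2(dy, dx))                   # rotation about the vertical axis
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # negative = aim downward
    return pan, tilt

# Parametric speaker mounted at (0, 0, 2.5) m; banana held at (1.2, 0.3, 1.5) m.
print(aim_angles((0.0, 0.0, 2.5), (1.2, 0.3, 1.5)))
```

The same geometry applies whether the payload is an
audio beam or projected video; only the actuator
differs.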