Andrew H. Fagg: Wearable Computing and Human-Machine Interaction

Many consumer electronic devices promise to improve our daily lives
by performing a wide range of tasks, especially tasks related to
communication and memory. In practice, however, these devices demand
substantial attention from the user, which detracts from their
benefits.
A solution is to develop devices
capable of automatically making intelligent guesses as to the
information that the user will need over the next few minutes. This
information should then be presented in a form that minimizes user
distraction. By reducing the user's need
to attend to the mechanics of interacting with these devices, we open
up a wide range of possibilities for new uses of such wearable
computing systems.

Multi-Modal Wearable Computer Interfaces

I have developed a distributed service model to address these
problems. A set of independent agents is responsible for gathering
information that may be useful to the user at any given time (e.g.,
email, news, and location-dependent "sticky" notes). However, these
agents do not communicate directly with the user; instead, they
submit their information to a central interaction process. This
process is
responsible for making context-sensitive decisions about whether
the information should be presented to the user and how it
should be presented (displayed as text or whispered in the user's
ear). I approach this decision problem as one of control in which a
representation of the user's activity
is translated into an appropriate presentation
action. Viewing the user interface as a control problem enables us
to apply a variety of machine learning approaches, including both
supervised and reinforcement learning techniques.
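
To make this concrete, the sketch below illustrates the shape of such
an interaction process. The state features, message fields, and
hand-written rules are illustrative assumptions only; in the actual
system, this mapping from user activity to presentation action is
exactly what the learning techniques are meant to acquire.

    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        SUPPRESS = 0      # hold the message for later
        DISPLAY_TEXT = 1  # show on the head-mounted display
        WHISPER = 2       # render as speech in the earpiece

    @dataclass
    class Message:
        source: str       # e.g., "email", "news", "sticky-note"
        priority: float   # agent's estimate of importance, in [0, 1]
        body: str

    def presentation_policy(user_state: dict, msg: Message) -> Action:
        """Map (user activity, message) to a presentation action.
        A hand-written stand-in for the policy to be learned."""
        if user_state.get("in_conversation") and msg.priority < 0.8:
            return Action.SUPPRESS  # do not interrupt a conversation
        if user_state.get("walking"):
            return Action.WHISPER   # eyes are busy; use the audio channel
        return Action.DISPLAY_TEXT

    # The central interaction process applies the policy to each
    # message submitted by an information-gathering agent.
    msg = Message(source="sticky-note", priority=0.6, body="Pick up badge")
    print(presentation_policy({"walking": True}, msg))  # Action.WHISPER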

To date, this perspective has been applied in two experiments. First,
I have shown that an effective association can be acquired between a
representation of the user's current activity and a document that she
will access in that context. This association is learned by looking
over the user's shoulder and observing regular patterns of document
access. Predicted documents are presented to the user in menu form
and can be selected with a small number of keystrokes, increasing the
speed with which frequently accessed documents can be retrieved. One
way such a predictor could work is sketched below.
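
The following minimal sketch is an illustration, not the deployed
system: a conditional-frequency model that ranks documents by how
often they have been opened under similar context features. The
feature names are hypothetical.

    from collections import Counter, defaultdict

    class DocumentPredictor:
        """Learns which documents a user opens in which contexts.
        A context is a set of discrete features (e.g., location,
        time-of-day bucket, active application)."""

        def __init__(self):
            # counts[feature][doc] = times `doc` was opened while
            # `feature` was part of the observed context
            self.counts = defaultdict(Counter)

        def observe(self, context, doc):
            """Record one over-the-shoulder observation of a document access."""
            for feature in context:
                self.counts[feature][doc] += 1

        def predict(self, context, k=5):
            """Rank documents by accumulated evidence from the context."""
            scores = Counter()
            for feature in context:
                scores.update(self.counts[feature])
            return [doc for doc, _ in scores.most_common(k)]

    # Example: after a few observations, produce a short menu.
    pred = DocumentPredictor()
    pred.observe({"loc:office", "hour:9"}, "status-report.txt")
    pred.observe({"loc:office", "hour:9"}, "status-report.txt")
    pred.observe({"loc:lab", "hour:14"}, "sensor-notes.txt")
    print(pred.predict({"loc:office", "hour:9"}))  # ['status-report.txt']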
Second, a student of mine has examined a context-sensitive power
management problem in which a mobile computer must decide, at any
given time, whether to suspend for a short period or to remain active
so as to respond to user requests or critical sensory events. We
formulated the problem as a semi-Markov decision process (SMDP) and
employed Q-Learning (a form of reinforcement learning) to optimize the
selection of control actions. The learned control policy acquired an
implicit representation of the conditions under which the processor
could safely suspend while missing only a small number of external
events. In the coming semester, we will apply similar techniques to
the problem of when and how to present agent-generated messages.
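
A minimal sketch of the learning rule follows, assuming a discrete
state representation and a reward that trades energy saved while
suspended against a penalty for missed external events (both are
illustrative choices, not the experiment's actual formulation). The
SMDP structure shows up in the discount factor, which is raised to
the duration tau of the chosen action.

    import random
    from collections import defaultdict

    GAMMA = 0.99   # per-time-step discount factor
    ALPHA = 0.1    # learning rate
    ACTIONS = ("suspend", "stay_active")

    Q = defaultdict(float)  # Q[(state, action)] -> estimated return

    def choose(state, epsilon=0.1):
        """Epsilon-greedy selection over the two control actions."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, tau, next_state):
        """SMDP Q-Learning update: the chosen action lasted tau time
        steps, so future value is discounted by GAMMA ** tau."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        target = reward + (GAMMA ** tau) * best_next
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])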

Human-Robot Interaction Through Virtual and Mixed Reality Interfaces

The issues addressed in the wearable computing domain also apply to
the area of human-robot interaction. Here, we wish to maximize the
efficiency of communication between the human and (potentially) many
robots. Several students and I have been developing mixed-reality
interfaces (a combination of real and virtual environments) for this
purpose (Fagg et al., 2002; Ou, Karuppiah,
Fagg, and Riseman, 2004). In these interfaces, a virtual environment
summarizes the state of the real world as extracted by the set of
sensors and makes explicit the physical relationships among the
different robots and sensors. This approach
allows the user to explore the data space in a spatial manner and then
to select individual sensors for access to their live data streams or
individual robots for control purposes.
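
The sketch below suggests one possible data structure behind such an
interface (the names and fields are assumptions for illustration): a
registry that tracks each sensor's or robot's pose in a shared world
frame and supports the kind of spatial query a user issues when
selecting entities in the virtual environment.

    import math
    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        kind: str             # "sensor" or "robot"
        pose: tuple           # (x, y, z) in the shared world frame
        stream_url: str = ""  # live data stream, if the entity offers one

    class MixedRealityScene:
        """Registry behind a virtual-environment view of the real world."""

        def __init__(self):
            self.entities = {}

        def update(self, entity: Entity):
            """Refresh an entity's pose as new sensor estimates arrive."""
            self.entities[entity.name] = entity

        def nearby(self, point, radius):
            """Spatial query: entities the user can select near `point`."""
            return [e for e in self.entities.values()
                    if math.dist(e.pose, point) <= radius]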

Robot Learning Through Human Interaction

One of the dominant paradigms in robot control for space applications
or hazardous environments is for a user to teleoperate a robot.
Because of the large cognitive effort required to ensure that the
robot acts as intended, a teleoperator's useful operation time is
often less than an hour. I have been exploring the use of mixed
autonomy approaches that allow the robot to perform some subtasks
autonomously after permission is given by the user. One approach
that I have been pursuing is to use our existing humanoid control
system to recognize the movement intended by the teleoperator. This
technique is being used both to preemptively complete movements
initiated by the teleoperator (giving the teleoperator short periods
of rest) and to train the control system, within a single
demonstration, to perform sequences of submovements.
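
A minimal sketch of this style of recognition follows (the primitive
library and distance threshold are hypothetical; the real system uses
the humanoid controller's own movement representation). The idea:
match the teleoperator's trajectory so far against a library of
movement primitives, and once a match is confident, hand the
remaining waypoints to the robot to complete autonomously.

    import math

    # Hypothetical library of movement primitives: name -> waypoints.
    PRIMITIVES = {
        "reach_left":  [(0.0, 0.0), (0.1, 0.2), (0.2, 0.5), (0.2, 0.8)],
        "reach_right": [(0.0, 0.0), (0.1, -0.2), (0.2, -0.5), (0.2, -0.8)],
    }

    def prefix_distance(observed, prototype):
        """Mean distance between the observed trajectory and the
        corresponding prefix of a prototype trajectory."""
        n = min(len(observed), len(prototype))
        return sum(math.dist(observed[i], prototype[i])
                   for i in range(n)) / n

    def recognize(observed, threshold=0.1):
        """Return (primitive name, remaining waypoints) for the best
        match, or None if no primitive matches confidently yet."""
        name, proto = min(PRIMITIVES.items(),
                          key=lambda kv: prefix_distance(observed, kv[1]))
        if prefix_distance(observed, proto) > threshold:
            return None
        return name, proto[len(observed):]

    # After two teleoperated waypoints, the system can offer to finish:
    print(recognize([(0.0, 0.0), (0.1, 0.2)]))
    # -> ('reach_left', [(0.2, 0.5), (0.2, 0.8)])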