Surfaces in context + gestures and body

"The Cube" is a unique facility that combines 48 large multi-touch screens
and very large-scale projection surfaces to form one of the world's largest
interactive learning and engagement spaces. The Cube facility is part of the
Queensland University of Technology's (QUT) newly established Science and
Engineering Centre, designed to showcase QUT's teaching and research
capabilities in the STEM (Science, Technology, Engineering, and Mathematics)
disciplines. In this application paper we describe the Cube, its technical
capabilities, design rationale, and practical day-to-day operations, supporting
up to 70,000 visitors per week. Essential to the Cube's operation are five
interactive applications designed and developed in tandem with the Cube's
technical infrastructure. Each of the Cube's launch applications was designed
and delivered by an independent team, while the overall vision of the Cube was
shepherded by a small executive team. The diversity of design, implementation
and integration approaches pursued by these five teams provides some insight
into the challenges, and opportunities, presented when working with large
distributed interaction technologies. We describe each of these applications in
order to discuss the different challenges and user needs they address, which
types of interactions they support and how they utilise the capabilities of the
Cube facility.

In this paper, we present the design, implementation, and preliminary
evaluation of AstroTouch, a prototype desktop surface application to support
analysis and visualization in the field of astrodynamics. We describe the
fundamental characteristics of this complex scientific domain and discuss how
these characteristics, combined with an assessment of current research
surrounding multi-touch and the digital desktop, informed the design of our
system. We detail the prototype implementation and present the results of an
initial design critique conducted with domain experts.

SkyHunter: a multi-surface environment for supporting oil and gas
exploration

The process of oil and gas exploration and its result, the decision to drill
for oil in a specific location, relies on a number of distinct but related
domains. These domains require effective collaboration to come to a decision
that is cost-effective and maintains the integrity of the environment. As
we show in this paper, many of the existing technologies and practices that
support the oil and gas exploration process overlook fundamental user issues
such as collaboration, interaction and visualization. The work presented in
this paper is based upon a design process that involved expert users from an
oil and gas exploration firm in Calgary, Alberta, Canada. We briefly present
knowledge of the domain and how it informed the design of SkyHunter, a
prototype multi-surface environment to support oil and gas exploration. This
paper highlights our current prototype and we conclude with a reflection on
multi-surface interactions and environments in this domain.

We present two experiments examining the impact of navigation techniques on
users' navigation performance and spatial memory in a zoomable user interface
(ZUI). The first experiment with 24 participants compared the effect of
egocentric body movements with traditional multi-touch navigation. The results
indicate a 47% decrease in path lengths and a 34% decrease in task time in
favor of egocentric navigation, but no significant effect on users' spatial
memory immediately after a navigation task. However, an additional second
experiment with 8 participants revealed a significant increase in long-term
spatial memory performance: the results of a recall task
administered after a 15-minute distractor task indicate a significant advantage
of 27% for egocentric body movements in spatial memory. Furthermore, a
questionnaire about the subjects' workload revealed that the physical demand of
the egocentric navigation was significantly higher but there was less mental
demand.

Trans-surface interaction addresses moving information objects across
multi-display environments that support sensory interaction modalities such as
touch, pen, and free-air. Embodiment means using spatial relationships among
surfaces and human bodies to facilitate users' understanding of interaction. In
the present embodied trans-surface interaction technique, a peripheral NFC tag
array provides tangible affordances for connecting mobile devices to positions
on a collaborative surface. Touching a tag initiates a trans-surface portal.
Each portal visually associates a mobile device and its user with a place on
the collaborative surface. The portal's manifestation at the top of the mobile
device supports 'flicking over' interaction, like playing cards. The technique
is simple, inexpensive, reliable, scalable, and generally applicable for
co-located collaboration. We developed a co-located collaborative rich
information prototype to demonstrate the embodied trans-surface interaction
technique and support imagining and planning tasks.

Body Panning: a movement-based navigation technique for large interactive
surfaces

In this note we introduce Body Panning, a novel interaction technique for
horizontal panning on interactive surfaces. Based on an established sensory
hardware setup, we implemented a robust body tracking system for a large-scale
tabletop. On this basis, a user can pan through a spatial user interface by
adjusting her position at the table. As a natural form of interaction, this
technique is convenient and applicable to many existing use cases, and may
affect the user's navigational and spatial memory performance. We conducted an
experiment comparing a common touch panning interface with a body panning
interface to find out about differences in these performances. For the body
panning condition, we observed increased spatial memory performance and
invariant navigation performance. We present and discuss these results,
focusing on application domains for the body panning technique.

Pen and touch

Combined pen and touch input is an interaction paradigm attracting
increasing interest both in the research community and recently in industry. In
this paper, we illustrate how pen and touch interaction techniques can be
leveraged for editing and authoring of presentational documents on digital
tabletops. Our system exploits the rich interactional vocabulary afforded by
the simultaneous availability of the two modalities to provide gesture-driven
document editing functionality as an expert alternative to widgets. For our
bimanual gestures, we make use of non-dominant hand postures to set pen modes
in which the dominant hand articulates a variety of transactions. We draw an
analogy between such modifier postures and modifier keys on a keyboard to
construct command shortcuts. Based on this model, we implement a number of
common document editing operations, including several page and element
manipulations, shape and text input with styling, clipart retrieval and
insertion as well as undo/redo. The results of a lab study provide insights as
to the strengths and limitations of our approach.

Modifying a digital sketch may require multiple selections before a
particular editing tool can be applied. Especially on large interactive
surfaces, such interactions can be fatiguing. Accordingly, we propose a method,
called Suggero, to facilitate the selection process of digital ink. Suggero
identifies groups of perceptually related drawing objects. These "perceptual
groups" are used to suggest possible extensions in response to a person's
initial selection. Two studies were conducted. First, a background study
investigated participants' expectations of such a selection assistance tool.
Then, an empirical study compared the effectiveness of Suggero with an existing
manual technique. The results revealed that Suggero required fewer pen
interactions and less pen movement, suggesting that Suggero minimizes fatigue
during digital sketching.

Situation maps play an important role in planning and decision making in
emergency response centers. Important information such as operating units or
hazards is usually shown on these maps. Arranging the huge amount of
information can be challenging for operators, as this information should always
be visible while not occluding important regions of the underlying geographic
map. As large interactive whiteboards are increasingly replacing traditional
analog maps, new ways to assist with arranging information can be provided. In
this paper, we present a new approach for placing annotations automatically on
top of a map. To find the optimal placement, our metrics are based on
geographic features, the users' sketched input, and the annotations'
geometry itself. Moreover, we added features to minimize
occlusions for multiple annotations. First results were highly promising and
showed that our approach improves input performance while keeping an optimal
view of the map.

Tangibles

Dedicated input devices are frequently used for system control. We present
Instant User Interfaces, an interaction paradigm that loosens this dependency
and allows operating a system even when its dedicated controller is
unavailable. We implemented a reliable, marker-free object tracking system that
enables users to assign semantic meaning to different poses or to touches in
different areas. With this system, users can repurpose everyday objects and
program them in an ad-hoc manner, using a GUI or by demonstration, as input
devices. Users tested and ranked these methods alongside a Wizard-of-Oz speech
interface. The testers did not show a clear preference as a group, but had
individual preferences.

The fun.tast.tisch. project: a novel approach to neuro-rehabilitation using
an interactive multiuser multitouch tabletop

Acquired brain injury, mostly caused by stroke, is one main cause for adult
disability, often involving cognitive impairment. Neuro-rehabilitation aims at
treating these impairments by maximizing the effect of brain plasticity and
functional reorganization. Specific exercises help patients to regain skills
that have temporarily been lost. Yet, conventional training can involve
disadvantages: e.g., setting up an individual training environment requires
considerable effort, computing statistics is time-consuming and must be done
by therapists manually, and it is usually not possible to discreetly adapt the
level of difficulty of an exercise. Further, software solutions for desktop PCs
often do not lead to the desired results because they are too distinct from
the conventional therapy setting. The fun.tast.tisch. project introduces a
tabletop-based training system for application in neuro-rehabilitation.
This system should not only come close to the conventional setting but also
overcome problems involved in existing solutions. The paper introduces the
project, describes its first module Tangram, and summarizes the results of a
small-scale study conducted to evaluate the module with the help of therapists
and patients at an early stage of development.

We present a controlled laboratory experiment comparing touch, physical, and
touch + overlay (passive finger guide) input for parameter control.
Specifically we examined two target acquisition and movement tasks with dial
and slider controls on horizontal touch screens. Results showed that physical
controls were the fastest and required the least eye fixation time on the
controls, while the overlay improved performance when compared to touch alone.
Speed and accuracy differences were seen primarily for dial controls; there was
little difference between input conditions for sliders. These results confirm
the value of physical input devices for parameter control tasks. They also
reveal that overlays can provide some of the same benefits, making them a
suitable input approach for certain applications where physical controls are
impractical.

Capacitive multi-touch displays are not designed to detect passive objects
placed on them -- in fact, these systems usually contain filters to actively
reject such touch data. We present a technical analysis of this problem and
introduce Passive Untouched Capacitive Widgets (PUCs). Unlike previous
approaches, PUCs do not require power, they can be made entirely transparent,
they are detected reliably even when no user is touching them, and they do not
require internal electrical or software modifications of the touch display or
its driver. We show the results from testing PUCs on 17 different off-the-shelf
capacitive touch display models, and provide initial technical design
recommendations.

TempTouch: a novel touch sensor using temperature controllers for surface
based textile displays

In this paper we propose a new technology for touch-sensitive textile
displays that can detect touch without adding additional sensors to a display
system based on thermochromic inks and Peltier semiconductors. Without any
changes to the display hardware, we present a method to modify the existing
temperature controllers of the Peltier elements to detect the impulse
temperature transients caused by touch. Using this method, the textile
display becomes a touch-sensitive interactive textile display. We present the
results of this system and two application prototypes that turn simple
tablecloths into a tic-tac-toe gaming platform and an interactive drawing pad
with touch sensing. In addition, our user evaluations of the robustness and
performance of the system indicate that it detects touch at acceptably high
speeds. Furthermore, the system performance is independent of the ambient
temperature and depends mainly on the temperature of the finger.

Education and training

While a number of guidelines exist for the design of learning applications
that target a single group working around an interactive tabletop, the same
cannot be said for the design of applications intended for use in
multi-tabletop deployments in the classroom. Accordingly, a number of these
guidelines for single-tabletop settings need to be extended to take account of
both the distinctive qualities of the classroom and the particular challenges
of having various groups using the same application on multiple tables
simultaneously. This paper presents an empirical analysis of the effectiveness
of designs for small-group multi-tabletop collaborative learning activities in
the wild. We use distributed cognition as a framework to analyze the small
number of authentic multi-tabletop deployments and help characterize the
technological and educational ecology of these classroom settings. Based on
previous research on single-tabletop collaboration, the concept of
orchestration, and both first-hand experience and second-hand accounts of the
few existing multiple-tabletop deployments to date, we develop a
three-dimensional framework of design recommendations for multi-tabletop
learning settings.

One of the primary goals of teaching is to prepare learners for life in the
real world. Given that we live in a three-dimensional world, educators must
teach 3D concepts. As 3D content becomes increasingly available through the
Internet, large-display touch and tangible manipulation needs to make 3D
manipulation simple and uncluttered to enable adoption in classrooms.
We describe an iterative process with actual customers to create a
commercial product for education. In the process we discover customer needs
such as occlusion-minimizing 3D rotation, scale, simpler mixed-reality cube
selection, hide-and-reveal features, and labelling. This application paper
summarizes 36 weeks of hardware and software development. It illustrates the
use of a lean start-up methodology to achieve a minimum viable product. We
discuss some of the lessons learned from Genchi Genbutsu (a Toyota method
meaning 'go and see for yourself') observations at several school visits as
part of a technical trial deployment.

In this paper, we describe the design process and early experiences of the
Activity Pad, an interactive digital artifact for active learning environments.
The pad combines a 4x6 grid of programmable NFC readers together with printed
sheets of A4-sized paper to allow teacher-driven creation of interactive
learning applications featuring application-specific tangibles. We describe
the iterative design process for this teaching tool, including mock-up
prototypes,
focus group discussions with teachers and the first complete prototype together
with two example applications. Teachers were eager to innovate applications for
the Activity Pad, and the feedback indicates the potential of this kind of
teaching tool in diverse learning environments.

This paper presents the design of OrMiS, a tabletop application supporting
simulation-based training. OrMiS is notable as one of the few practical
tabletop applications supporting collaborative analysis, planning and
interaction around digital maps. OrMiS was designed using an iterative process
involving field observation and testing with domain experts. Our key design
insights were that such a process is required to resolve the tension between
simplicity and functionality, that information should be displayed close to the
point of the user's touch, and that collaboration around maps cannot be
adequately solved with a single form of zooming. OrMiS has been evaluated by
domain experts and by officer candidates at a military university.

Redefining surfaces

AquaTop display: interactive water surface for viewing and manipulating
information in a bathroom

Due to the widespread use of smart phones and PCs, people can access
information everywhere in everyday life. However, there are very few methods
to access content within a bathing environment. Some people carry smart
phones into a bathroom, but it is unnatural to hold a device while bathing.
This
paper proposes an interactive water surface display system, in which
information is projected on the surface of a white water solution and users can
interact with this information using gestures. In this paper, we discuss
interaction design in a bathroom, describing an implementation of our system
and its proposed applications.

This paper reports on the ongoing development of the TapTiles system, a
low-cost, floor-interaction technology that overcomes problems found in
previous overhead projector-based floor-interaction systems by using Light
Emitting Diodes (LEDs) embedded into a carpet tile. Despite many advantages
compared to projector-based floor interaction systems, LED-based systems could
be criticized for lacking the resolution for a worthwhile interactive
experience. User studies of both simulated and real hardware are reported on.
This includes a comparison of tiles of different resolution that suggests that
pixel density, over the range of tests, is less important than visual artifacts
introduced by carpet tile edges. Contrary to initial expectations, denser LED
spacing did not improve legibility or raise user preferences. Overall our
studies suggest that LED-based floor interaction can be legible and effective
in a walk-up and use situation.

We present the design and implementation of ForceForm, a prototype
dynamically deformable interactive surface that provides haptic feedback. We
use an array of electromagnets and a deformable membrane with permanent magnets
attached to produce a deformable interactive surface. The system has a fast
reaction time, enabling dynamic interaction. ForceForm supports user input by
physically deforming the surface according to the user's touch and can
visualise data gathered from other sources as a deformed plane. We explore
possible usage scenarios that illustrate benefits and features of the system
and we outline the performance of the system.

This paper proposes TransformTable, an interactive digital table, whose
shape can be physically and dynamically deformed. Shape transformations are
mechanically and electrically actuated by wireless signals from a host
computer. TransformTable represents digital information in a physically
changeable screen shape and simultaneously produces different spatial
arrangements of users around the table. This provides visual information while
changing the physical workspace to allow users to effectively handle their
tasks. We implemented the first TransformTable prototype that can deform
from/into one of three typical shapes: round, square, or rectangular. We also
discuss implementation methods and further application designs and scenarios. A
preliminary study shows fundamental and potential social impacts of the table
transformation on users' subjective views in a group conversation.

The sound of touch: on-body touch and gesture sensing based on transdermal
ultrasound propagation

Recent work has shown that the body provides an interesting interaction
platform. We propose a novel sensing technique based on transdermal
low-frequency ultrasound propagation. This technique enables pressure-aware
continuous touch sensing as well as arm-grasping hand gestures on the human
body. We describe the phenomena we leverage as well as the system that produces
ultrasound signals on one part of the body and measures this signal on another.
The measured signal varies according to the measurement location, forming
distinctive propagation profiles which are useful to infer on-body touch
locations and on-body gestures. We also report on a series of experimental
studies with 20 participants that characterize the signal, and show robust
touch and gesture classification along the forearm.

Touch fundamentals

In this paper, we evaluate the performance and experience differences
between direct touch and mouse input on horizontal and vertical surfaces using
a simple application and several validated scales. We find that, not only are
both speed and accuracy improved when using the multi-touch display over a
mouse, but that participants were happier and more engaged. They also felt more
competent, in control, related to other people, and immersed. Surprisingly,
these results cannot be explained by the intuitiveness of the controller, and
the benefits of touch did not come at the expense of perceived workload. Our
work shows the added value of considering experience in addition to traditional
measures of performance, and demonstrates an effective and efficient method for
gathering experience during interaction with surface applications. We conclude
by discussing how an understanding of this experience can help in designing
touch applications.

This paper presents Arpège, a progressive multitouch input technique
for learning chords, as well as a robust recognizer and guidelines for building
large chord vocabularies. Experiment one validated our design guidelines and
suggests implications for designing vocabularies, i.e. users prefer relaxed to
tense chords, chords with fewer fingers and chords with fewer tense fingers.
Experiment two demonstrated that users can learn and remember a large chord
vocabulary with both Arpège and cheat sheets, and Arpège
encourages the creation of effective mnemonics.

Multi-touch gestures are prevalent interaction techniques for many different
types of devices and applications. One of the most common gestures is the pinch
gesture, which involves the expansion or contraction of a finger spread. There
are multiple uses for this gesture -- zooming and scaling being the most common
-- but little is known about the factors affecting performance and ergonomics
of the gesture motion itself. In this note, we present the results from a study
where we manipulated angle, direction, distance, and position of two-finger
pinch gestures. The study provides insight into how variables interact with
each other to affect performance and how certain combinations of pinch gesture
characteristics can result in uncomfortable or difficult pinch gestures. Our
results can help designers select faster pinch gestures and avoid difficult
pinch tasks.

The expressiveness of touch input can be increased by detecting additional
finger pose information at the point of touch such as finger rotation and tilt.
PointPose is a prototype that performs finger pose estimation at the location
of touch using a short-range depth sensor viewing the touch screen of a mobile
device. We present an algorithm that extracts finger rotation and tilt from a
point cloud generated by a depth sensor oriented towards the device's
touchscreen. The results of two user studies we conducted show that finger pose
information can be extracted reliably using our proposed method. We show this
for controlling rotation and tilt axes separately and also for combined input
tasks using both axes. With the exception of the depth sensor, which is mounted
directly on the mobile device, our approach does not require complex external
tracking hardware, and, furthermore, external computation is unnecessary as the
finger pose extraction algorithm can run directly on the mobile device. This
makes PointPose ideal for prototyping and developing novel mobile user
interfaces that use finger pose estimation.

Although multi-touch interaction in 2D has become widespread on mobile
devices, intuitive ways to interact with 3D objects have not been thoroughly
explored. We present a study on natural and guided multi-touch interaction with
3D objects on a 2D multi-touch display. Specifically, we focus on interactions
with 3D objects that have either rotational, tightening, or switching
components on mechanisms that might be found in mechanical operation or
training simulations. The results of our study led to the following
contributions: a classification procedure for determining the category and
nature of a gesture, an initial user-defined gesture set for multi-touch
gestures applied to 3D objects, and user preferences with regards to
metaphorical versus physical gestures.

Latency and occlusion + CSCW

The end-to-end latency of interactive systems is well known to degrade
users' performance. Touch systems exhibit notable amounts of latency, but it
is seldom characterized, probably because latency estimation is a difficult
and time-consuming undertaking. In this paper, we introduce two novel
approaches to
estimate the latency of touch systems. Both approaches require an operator to
slide a finger on the touch surface, and provide automatic processing of the
recorded data. The High Accuracy (HA) approach requires an external camera and
careful calibration, but provides a large sample set of accurate latency
estimations. The Low Overhead (LO) approach, while not offering as much
accuracy as the HA approach, does not require any additional equipment and is
implemented in a few lines of code. In a set of experiments, we show that the
HA approach can generate a highly detailed picture of the latency distribution
of the system, and that the LO approach provides average latency estimates no
further than 4 ms from the HA estimate.

A case study of object and occlusion management on the eLabBench, a mixed
physical/digital tabletop

We investigate how users managed physical and digital objects during the
longitudinal field deployment of a tabletop in a biology laboratory. Based on
the analysis of 15 hours of video logs, we detail the objects used and their
presence, use, and organization in this particular setting. We propose to
consider occlusion as a situation which should be prevented rather than reacted
to, particularly to avoid distracting changes or animations. This implies (1)
pre-positioning digital content in locations where it is not likely to be
occluded and (2) acknowledging that some physical objects are deliberately put
in occluding positions. Since users want to interact with them conveniently,
occlusion management action should not necessarily be triggered immediately.

In this paper, we address the challenges of occlusion created by physical
objects on interactive tabletops. We contribute an integrated set of
interaction techniques designed to cope with the physical occlusion problem as
well as facilitate organizing objects in hybrid settings. These techniques are
implemented in ObjecTop, a system to support tabletop display applications
involving both physical and virtual objects. We compile design requirements for
occlusion-aware tabletop systems and conduct the first in-depth user study
comparing ObjecTop with conventional tabletop interfaces in search and layout
tasks. The empirical results show that occlusion-aware techniques outperform
the conventional tabletop interface. Furthermore, our findings indicate that
physical properties of occluders dramatically influence which strategy users
employ to cope with occlusion. We conclude with a set of design implications
derived from the study.

An interactive surface solution to support collaborative work onboard ships

Industrial environments are notoriously difficult to gain access to for
conducting any type of contextual inquiry work, and marine vessels are no
exception. But once this initial hurdle is overcome, these environments
reveal interesting research directions. Challenges faced onboard ships range
from issues with communication links, to the lack of support for current work
practices. Based on findings from an earlier field study, the work presented in
this paper focuses on several challenges involving collaboration,
communication, information sharing such as video and images, and tracking task
completion of crew members. This paper therefore presents a prototype which
consists of a Microsoft Surface, mobile phones, and PCs to enable crew members
onboard ships to effectively communicate and collaborate with their colleagues.

Support for collaborative situation analysis and planning in crisis
management teams using interactive tabletops

Crisis management requires the collaboration of a variety of people with
different roles, often across organizational boundaries. It has been shown that
geographic information systems can improve the efficiency of disaster
management operations. However, workstation-based solutions fail to offer the
same ease of collaboration as the large analog maps currently in use. Recent
research prototypes, which use interactive tabletops for this purpose, often do
not consider individual roles and the need for accountability of actions. In
this paper, we present coMAP, a system built for interactive tabletops that
facilitates collaborative situation analysis and planning in crisis management
teams. Users can interact with coMAP using multi-touch as well as pen input.
The latter is realized by new methods for the use of Anoto digital pens without
the Anoto microdot pattern. A pen-optimized pie menu provides access to
role-specific information and system functions. A combination of role-based
access control and indoor tracking via Bluetooth is used to support
accountability of actions while still allowing collaboration and information
sharing. Initial user feedback for our system shows promising results.

ITS'13 best paper & ITS'13 best note

Penbook: bringing pen+paper interaction to a tablet device to facilitate
paper-based workflows in the hospital domain

In many contexts, pen and paper are the ideal option for collecting
information despite the pervasiveness of mobile devices. Reasons include the
unconstrained nature of sketching or handwriting, as well as the tactility of
moving a pen over paper, which supports very fine-grained control of the pen.
In hospitals in particular, many writing and note-taking tasks
are still performed using pen and paper. However, often this requires
time-consuming transcription into digital form for the sake of documentation.
We present Penbook -- a system providing a touch screen together with a
built-in projector integrated with a wireless pen and a projection screen
augmented with Anoto paper. This allows using the pen to write or sketch
digital information with light on the projection surface while having the
distinct tactility of a pen moving over paper. The touch screen can be used in
parallel with the projected information, turning the tablet into a dual-display
device. In this paper, we present the Penbook concept, detail specific
applications in a hospital context, and present a prototype implementation of
Penbook.

This paper presents the design and development of a novel visual+haptic
device that co-locates 3D stereo visualization, direct touch and touch force
sensing with a robotically actuated display. Our actuated immersive 3D display,
called TouchMover, is capable of providing 1D movement (up to 36cm) and force
feedback (up to 230N) in a single dimension, perpendicular to the screen plane.
In addition to describing the details of our design, we showcase how TouchMover
allows the user to: 1) interact with 3D objects by pushing them on the screen
with realistic force feedback, 2) touch and feel the contour of a 3D object, 3)
explore and annotate volumetric medical images (e.g., MRI brain scans) and 4)
experience different activation forces and stiffness when interacting with
common 2D on-screen elements (e.g., buttons). We also contribute the results of
an experiment which demonstrates the effectiveness of the haptic output of our
device. Our results show that people are capable of disambiguating between 10
different 3D shapes with the same 2D footprint by touching alone and without
any visual feedback (85% recognition rate, 12 participants).

Demonstration

We present SimMed, a novel tool for medical education that allows medical
students to diagnose and treat a simulated patient in real-time. The students
assume the roles of doctors, collaborating as they interact with the patient.
To achieve immersion and support complex interactions for gaining procedural
knowledge, the hybrid user interface combines elements of real-time Virtual
Reality (VR) with multitouch input. On the one hand, SimMed features a
simulated, life-sized patient that is rendered and reacts in real-time. On the
other hand, a more conventional touch input interface allows access to a large
variety of medical procedures and tools.

After the Fukushima nuclear accident, means to mitigate the severe situation
have been proposed in many countries. One of those means is the mobile control
room in which operators can monitor and control the damaged facilities from
remote locations. The mobile control room differs from the usual main control
rooms of nuclear power plants in the quantity of transmitted information and
the size of the room. A new type of operator interface is required for the
mobile control room, as operator tasks are fewer but more critical. A
prototype of an operator interface that aims to provide direct perception and
manipulation functions is shown in this presentation. The interface runs in a
3D environment and is being developed on a touch device (the SUR40).
Currently, navigating from a site to a unit, zooming in and out to monitor the
overall status of facilities, diagnosing faulty components, and controlling
active components can all be carried out through this prototype.

Tabletop displays often serve as workbenches by allowing users to interact
with them using touch capability. This work presents PhoneCog, a device
authentication method for interactive tabletops with minimal hardware
requirements that uses a color sequence pattern recognition technique for
device identification. Users may place their smartphones on the surface, which
authenticates and authorizes the devices to access various services such as
sharing images and downloading apps. The method is built only with the
tabletop displays and does not require any additional hardware such as a
depth-camera. We also present an in-field case study where users utilized the
device to share various contents among each other in various regions in China
where the fast network connection is not readily available.
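The color-sequence matching that the abstract describes could be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the registered sequences, colors, and device names are invented:

```python
# Hypothetical sketch of PhoneCog-style identification: the phone flashes a
# short color sequence under its footprint; the tabletop reads the colors and
# matches them against registered devices, tolerating a misread frame.

REGISTERED = {
    ("R", "G", "B", "R"): "alice-phone",
    ("G", "G", "R", "B"): "bob-phone",
}

def identify(observed, max_errors=1):
    """Return the registered device whose sequence best matches `observed`,
    or None if every candidate differs in more than `max_errors` frames."""
    best_id, best_dist = None, max_errors + 1
    for seq, device_id in REGISTERED.items():
        if len(seq) != len(observed):
            continue
        dist = sum(a != b for a, b in zip(seq, observed))
        if dist < best_dist:
            best_id, best_dist = device_id, dist
    return best_id

exact = identify(("R", "G", "B", "R"))    # clean read
noisy = identify(("R", "G", "B", "G"))    # one misread frame, still matched
unknown = identify(("B", "B", "B", "B"))  # no registered device close enough
```

Tolerating a small Hamming distance is one plausible way such a scheme could cope with camera noise without extra hardware.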

Acquired brain injury, mostly caused by stroke, is a main cause of adult
disability and often involves cognitive impairment. Neuro-rehabilitation aims
at treating these impairments by maximizing the effect of brain plasticity and
functional reorganization. Specific exercises help patients regain skills that
have temporarily been lost. Yet, conventional training has disadvantages:
setting up an individual training environment requires considerable effort,
statistics must be compiled manually by therapists, which is time-consuming,
and it is usually not possible to discreetly adapt the difficulty of an
exercise. Further, desktop software often does not achieve the desired results
because it differs too much from the conventional therapy setting. The
fun.tast.tisch. project introduces a tabletop-based training system for use in
neuro-rehabilitation. This system should not only come close to the
conventional setting but also overcome the problems of existing solutions.

Trans-surface interaction addresses moving information objects across
multi-display environments that support sensory interaction modalities such as
touch, pen, and free-air. Embodiment means using spatial relationships among
surfaces and human bodies to facilitate users' understanding of interaction. In
the present embodied trans-surface interaction technique, a peripheral NFC tag
array provides tangible affordances for connecting mobile devices to positions
on a collaborative surface. Touching a tag initiates a trans-surface portal.
Each portal visually associates a mobile device and its user with a place on
the collaborative surface. The portal's manifestation at the top of the mobile
device supports 'flicking over' interaction, like playing cards. The technique
is simple, inexpensive, reliable, scalable, and generally applicable for
co-located collaboration. We developed a co-located collaborative rich
information prototype to demonstrate the embodied trans-surface interaction
technique and support imagining and planning tasks.

We present PointPose, a prototype that allows finger pose information (tilt
and rotation) to be obtained at the point of touch on a touch-based mobile
device, thus adding to the expressiveness of touch input. PointPose uses a
short-range depth sensor viewing the touch screen that provides a point cloud
that is used to infer finger pose information. Our prototype is lightweight,
does not require any additional tracking, and can be adapted to work with most
touch-based mobile devices, making it ideal for prototyping touch-based
applications that make use of finger pose information.
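One simple way to derive tilt and rotation from such a point cloud is to relate the cloud's centroid to the 2D touch location. This is my own hedged reading of the idea, not the PointPose algorithm itself; all names and numbers are illustrative:

```python
import math

# Sketch: given depth-sensor samples of the touching finger and the 2D touch
# point, estimate rotation (yaw) from the direction of the cloud's centroid
# in the screen plane, and tilt from its elevation over that offset.

def finger_pose(points, touch_xy):
    """points: iterable of (x, y, z) finger samples, z = height above screen.
    Returns (rotation_deg, tilt_deg)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dx, dy = cx - touch_xy[0], cy - touch_xy[1]
    rotation = math.degrees(math.atan2(dy, dx))              # which way the finger points
    tilt = math.degrees(math.atan2(cz, math.hypot(dx, dy)))  # steepness above the surface
    return rotation, tilt

# A finger lying along +x that rises 10 mm over a 20 mm horizontal offset:
rot, tilt = finger_pose([(10, 0, 5), (20, 0, 10), (30, 0, 15)], (0, 0))
```

A real system would need to segment the finger from the rest of the point cloud first; the centroid step above only shows how pose falls out once that is done.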

Capacitive multi-touch displays are designed to detect touches from fingers,
which frequently change location. This is quite the opposite of our goal:
detecting passive objects placed on the display. In fact, these systems
usually contain filters to actively reject such static input data. We present
a technical analysis of
this problem and introduce Passive Untouched Capacitive Widgets (PUCs). Unlike
previous approaches, PUCs do not require power, they can be made entirely
transparent, and they do not require internal electrical or software
modifications. Most importantly they are detected reliably even when no user is
touching them.

We propose SWINGNAGE, a digital signage system that uses gesture-based
mobile interaction with distant public displays. The system provides two
techniques for mobile-display interaction: device pairing between a mobile
device and a public display, using an embedded sensor on the mobile device and
a camera attached to the display; and a dynamic layout of information banners
that supports three actions (search, comparison, and examination) on the
public display and the mobile device. Together these make it possible to tie
the digital signage experience to mobile-display interactions.

Poster

In this study, we propose a menu interaction technique that utilizes the
forearm (the part of the arm between the elbow and the hand) on direct input
surfaces such as tabletop systems. On such systems, users commonly operate
various types of data, such as images, video, audio, and documents, using menus
for each type of data. In this study, we focus on the user's forearm as a very
easy-to-access area for displaying a menu to control the data being operated
on. In addition, since the tabletop surface and the forearm can be used
as different layers, they can be divided into a "working area" and an "area for
menu operation." Thus, a menu can be displayed without being hidden by the hand
or forearm.

We propose a tabletop keyboard that assists stroke patients in using
computers. Using computers for purposes such as paying bills, managing bank
accounts, sending emails, etc., which all include typing, is part of Activities
of Daily Living (ADL) that stroke patients wish to recover. To date, stroke
rehabilitation research has greatly focused on using computer-assisted
technology for rehabilitation. However, working with computers as a skill that
patients need to recover has been neglected. The conventional human-computer
interfaces are the mouse and keyboard. Using a keyboard remains the main
challenge for hemiplegic stroke patients because typing is usually a bimanual task.
Therefore, we propose an assistive tabletop keyboard which is not only a novel
computer interface that is specially designed to facilitate patient-computer
interaction but also a rehab medium through which patients practice the desired
arm/hand functions.

This paper presents a tabletop application used to explore the potential of
tabletops on maritime ship bridges. We have constructed four conceptual
scenarios for tabletops in everyday ship operations. An initial study consists
of creating video prototypes within a full-sized bridge simulator. These
scenarios correspond to tasks regularly performed by a ship's crew. We have
introduced an interactive surface to a bridge simulator to conduct an inquiry
of bridge officers. Future research should introduce tabletops to a real bridge
to investigate their use in a real environment.

The study of geological outcrops has seen recent improvements due to LiDAR
technology, which allows for the creation of complex, high-resolution
computational representations of geological terrains. These representations
call for suitable visualization strategies that provide both flexibility and
intuitiveness. In this work we present our initial efforts to visually explore
and annotate geological outcrops through multitouch, including a 3D navigation
technique and the creation and editing of horizon surfaces.

Integrating digital tabletops into homes or desktop environments will give
rise to a set of problems emerging from placing everyday objects on interactive
tabletops. Chief among them is the arbitrary placement of physical objects that
considerably limits the digital working space on the surface of tabletops. In
this paper we contribute PeriTop, an interactive back-projected tabletop
system which exploits the surfaces of physical objects and the tabletop rim as
additional interactive displays for representing and interacting with digital
objects. This is realized by augmenting the tabletop system with an
inexpensive pico projector and depth camera pair. We support the PeriTop
approach by describing several salient use case scenarios that aid users in
performing activities in hybrid physical-digital tabletop settings.

Improving awareness of automated actions using an interactive event timeline

Digital tabletops provide an opportunity for automating complex tasks in
collaborative domains involving planning and decision-making, such as strategic
simulation in command and control. However, when automation leads to
modification of the system's state, users may fail to understand how or why the
state has changed, resulting in lower situation awareness and incorrect or
suboptimal decisions. We present the design of an interactive event timeline
that aims to improve situation awareness in tabletop systems that use
automation. Our timeline enables exploration and analysis of automated system
actions in a collaborative environment. We discuss two factors in the design of
the timeline: the ownership of the timeline in multi-user situations and the
location of the detailed visual feedback resulting from interaction with the
timeline. We use a collaborative digital tabletop board game to illustrate this
design concept.

Just blink and levitate objects; just move your fingernails and open the
door. Chemically metalized eyelashes, RFID nails, and conductive makeup are
some examples of Beauty Technology products, an emergent field in Wearable
Computers. Beauty Technology embeds electromagnetic devices into non-invasive
beauty products that can be attached to the human body, allowing the wearer to
interact with different surfaces, such as water, clothes, the wearer's own
body, and other objects, by just blinking or even without touching any of
these surfaces.

The motivation in this research endeavor is to design a flexible and compact
modeling language for multi-touch gesture recognition using Petri Nets. The
findings demonstrated that a Petri Net can be used effectively for gesture
detection, with the potential for such a model to be composed from many Petri
Nets for faster and more user-friendly applications.
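A minimal version of the idea can be sketched as follows. This structure is my own illustration, not the authors' modeling language; note that a single-token net like this degenerates into a finite-state machine, and composing several such nets is where Petri Nets add concurrency:

```python
# Minimal Petri-Net-style gesture detector. Places hold a token; a transition
# fires when its input place is marked and the incoming touch event matches
# its label. All place and event names are illustrative.

class PetriGesture:
    def __init__(self, transitions, start="idle", accept="done"):
        # transitions: {(place, event): next_place}
        self.transitions = transitions
        self.start, self.accept = start, accept
        self.marking = start  # one token for this simple net

    def feed(self, event):
        """Advance the token; return True when the accept place is marked."""
        key = (self.marking, event)
        self.marking = self.transitions.get(key, self.start)  # reset on mismatch
        return self.marking == self.accept

# A net recognizing touch-down followed by a rightward move:
swipe_right = PetriGesture({
    ("idle", "down"): "touching",
    ("touching", "move_right"): "done",
})
events = ["down", "move_left", "down", "move_right"]
fired = [swipe_right.feed(e) for e in events]
```

Only the final event completes the gesture here; a mismatched event in the middle resets the net, which is the behavior a recognizer needs when users abandon a gesture halfway.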

Setting of document importance based on analysis of user's usual working

We studied a system that allows users to browse multiple documents on a
tabletop display. In such situations, however, important documents may become
buried, just as they do in the real world. To solve this problem, we
investigated which documents were most important to users by observing and
analyzing their usual working patterns. As a result, we found that writing on
documents, together with their position, frequency of use, and date, is
related to their importance. In particular, we focused on writing and
position. Based on these two parameters, we developed a system that assigns an
importance level to each document.

Gaze interaction has attracted attention as an intuitive input modality for
tabletop interfaces. However, it is not easy to arrange cameras and light
sources to fit the target volume of the system. In this study, we propose a
general simulation method of eye-tracking volume, using a gaze cone, to
configure hardware settings for tangible and multi-user tabletop interaction.
We develop a prototype of the simulator and demonstrate its effectiveness by
developing a box that illuminates where you look.

The authors propose an operating method for multi-touch environments based
on a metaphor of the table manners of Western dishes. The user holds two styli,
or uses two fingers, as a knife and fork to operate the system. The fork plays
the role of pointing and selection, and the knife plays the role of executing
a command. The actions are organized according to table manners and together
constitute the operating method. In this paper, the authors introduce the
concept of this operating method, the dinner metaphor interface, and report on
the prototype system.

The increasing trend toward multi-device ecologies that provide private and
shared digital surfaces introduces a need for effective cross-device object
transfer interaction mechanisms. This work-in-progress paper investigates
visual feedback techniques for enhancing the usability of the Pick-and-Drop
cross-device object transfer technique when used between a shared digital table
and private tablets. We propose two visual feedback designs aimed to improve
awareness of virtual objects during a Pick-and-Drop transfer. Initial results
from a comparative user study are presented and discussed, along with
directions for future work.

Investigating attraction and engagement of animation on large interactive
walls in public settings

Large interactive walls capable of delivering dynamic content to broad
audiences are becoming increasingly common in public areas for information
dissemination, advertising, and entertainment purposes. A major design
challenge for these systems is to entice and engage passersby to interact with
the system, and in a manner intended by the designers. To address this issue,
we are examining the use of different types of animation at various stages of
the interaction as someone approaches and begins interacting with the system.
Using usage measures from museum studies, namely, attraction and engagement of
an exhibit, we plan to assess the effectiveness of different types of animation
in the context of an interactive notice board application in a university
campus. We describe our design approach and plans for studying the animation
design in a real-world public setting.

Interfaces for many new interactive systems are poorly adapted to the
properties of those systems. Designers and users once worked with the same
kind of system; this is often no longer the case, and it is hard for designers
to know what implications their design decisions have. We study the two main components
of interaction performance, input and perception, with regard to how
performance can be transferred from a reference system to a target system. We
show how to calculate element sizes that allow near identical perceptual and
input performance across systems.
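One plausible reading of transferring element sizes between systems is to preserve the visual angle an element subtends; the paper's actual model may differ, and all densities and distances below are illustrative:

```python
import math

# Sketch: compute the element size on a target display that subtends the
# same visual angle as a reference element, given each system's pixel
# density (ppi) and viewing distance (mm). Names and numbers are mine.

def transfer_size(size_ref_px, ppi_ref, dist_ref_mm, ppi_tgt, dist_tgt_mm):
    """Return the target size in pixels preserving the reference visual angle."""
    mm_per_px_ref = 25.4 / ppi_ref
    mm_per_px_tgt = 25.4 / ppi_tgt
    # Visual angle subtended by the reference element:
    angle = 2 * math.atan((size_ref_px * mm_per_px_ref / 2) / dist_ref_mm)
    # Physical size on the target that subtends the same angle:
    size_tgt_mm = 2 * dist_tgt_mm * math.tan(angle / 2)
    return size_tgt_mm / mm_per_px_tgt

# A 100 px button on a 96 ppi desktop at 600 mm, moved to a 40 ppi wall
# display viewed from 2000 mm:
px = transfer_size(100, 96, 600, 40, 2000)
```

Matching visual angle handles the perceptual side only; input performance (e.g., touch accuracy at arm's length vs. mouse pointing) would impose a separate minimum size.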

In this paper, we studied how display size affects human pointing
performance given the same display field of view when using a mouse. In
total, four display sizes (10.6, 27, 46, and 55 inches) and three display
fields of view (20°, 34°, and 45°) were tested. Our findings show that given
the same display field of view, mouse movement time significantly increases as
display size increases; but there is no significant effect of display size on
pointing accuracy. This research may contribute a new dimension to literature
in describing human pointing performance on large displays.

This paper presents Sidelock, a tangible authentication prototype on mobile
devices. Grasp events and finger gestures are sensed by twenty capacitive
sensors on the left and right sides of the device. These are used in a
two-phase authentication process, in which a grasp pattern wakes the device up
and a 1-D template-based gesture recognizer verifies whether input matches
pre-defined password templates. Compared with other popular authentication
approaches such as PINs or grid locks, Sidelock has a much larger password
space and comparable recognition speed, with only 5.3% critical errors. The
prototype could be miniaturized and embedded broadly in many common mobile
devices. User feedback suggests it is a memorable and acceptable
authentication method.
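A 1-D template-based recognizer of the kind the abstract names could look like the following. This is my own minimal version in the spirit of $1-style matchers, not Sidelock's recognizer; signal shapes and the threshold are invented:

```python
# Sketch: resample the 1-D capacitive gesture signal to a fixed length,
# normalize amplitude, and accept if the mean squared distance to the stored
# template is under a threshold.

def resample(signal, n=16):
    """Linearly resample `signal` to n points."""
    step = (len(signal) - 1) / (n - 1)
    out = []
    for i in range(n):
        t = i * step
        j = min(int(t), len(signal) - 2)
        frac = t - j
        out.append(signal[j] * (1 - frac) + signal[j + 1] * frac)
    return out

def normalize(signal):
    """Scale values into [0, 1] so absolute pressure does not matter."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in signal]

def matches(template, attempt, threshold=0.1):
    a = normalize(resample(template))
    b = normalize(resample(attempt))
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return mse < threshold

password = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]  # stored template: one up-down stroke
```

Normalizing before comparison means a lighter or heavier press with the same motion still unlocks, which is what makes template matching workable on noisy side sensors.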

As applications that originated on desktop computers find their way onto new
multi-touch enabled devices, many interaction tasks that were designed for
keyboards and computer mice spread to new touch-based environments. One example
is the selection of regions, for instance in image editing applications. While
there are already several studies involving multi-touch object selections,
region selections have not been closely examined. Instead of using traditional
mouse-based interaction schemes we propose a multi-touch selection technique
that better suits touch-based devices. Based on this technique we propose a
novel way to take advantage of multiple touches to easily extend, modify and
refine selections based on the order and relative position -- the context -- of
touches.

In this paper we introduce a new method for 6DoF marker tracking, specially
designed for Microsoft SecondLight or any camera-based tabletop interface that
is able to see objects through the surface. Our method is based on topological
region adjacency for identifying the markers, which are fitted into a square
shape to properly track the marker's pose in the real world. We also describe
the constraints imposed by the system, which determine the size and ID range
of the new markers, and we finally evaluate the system.

Testing new interface concepts for expert users on large and immobile
display prototypes complicates the application of a user-centered design
approach. In this paper we report on our experience with developing an
emergency management system on a large curved display using an iterative
user-centered design process. Involving the expert users was a major challenge
due to the immobility of our display prototype. We present and discuss
different prototyping and evaluation strategies and assess their suitability
for such a scenario.

In this poster we present the development process of natural interaction for
card games on multiple devices. Our goal was to provide users with an
application that can be interacted with similar to real cards. It is a critical
part of actual game play that all users can observe clearly what actions are
performed by each player. To imitate such flexible use of cards in real games,
we did not implement any game rules in our system. Rather, we strived towards
making all computer-related user interactions as clear and visible to all
players as manipulations of real cards. For the development of our system we
used an iterative user-centered design approach.

In this paper we present Xplane, a software layer for fast development of
applications running in separate, independent windows in multi-touch
interactive tabletops. Our framework supports gesture recognition and
communication between different windows or applications. Moreover, it is based
on web technologies to abstract away the hardware and software configuration.

Workload on your fingertips: the influence of workload on touch-based drag
and drop

In this paper we explore if it is possible to recognize different cognitive
states of a user through analyzing drag and drop behavior on a tablet device.
We introduce a modified version of the classic Stroop task, a commonly used
psychological stressor, and investigate how different levels of perceived
workload correlate with measures related to fingertip movement during drag and
drop. A study with 24 participants is reported, where we were able to replicate
the Stroop effect in a touch-based drag and drop task and present 2 measures in
fingertip movement that correlate with subjective ratings of workload based on
the NASA-TLX questionnaire.

In this paper, we present insights gained from studying the way joints of
the body (e.g. hands, elbows and shoulders) move while performing dynamic whole
body gestures. We describe how we, through exploring our own movements, came to
use statistics typically computed to explain swarm movement (e.g. movement of
honey bees). We report a study conducted to investigate the benefit of these
measures in the context of a movement-based target-catching game.
Participants were able to learn to use these measures for interaction while
individual and diverse gestures were supported.

Large screens are populating a variety of settings motivating research on
appropriate interaction techniques. While gestural input has been popularized
by depth cameras, we contribute a comparison study showing that eye pointing
is a valuable substitute for gesture pointing in dragging tasks. We compare eye
pointing combined with gesture selection to gesture pointing and selection.
Results clearly show that eye pointing combined with a selection gesture allows
more accurate and faster dragging.

LampTop: touch detection for a projector-camera system based on shape
classification

LampTop enables an effective low-cost touch interface using only a
single camera and a pico projector. It embeds a small shape in the image
generated by the user application (e.g. a touch screen menu with icons) and
detects touch by measuring the geometrical distortion in the camera captured
image. Fourier shape descriptors are extracted from the camera-captured image
to obtain an estimate of the shape distortion. The touch event is detected
using a Support Vector Machine. Quantitative results show that the proposed
method can effectively detect touch.
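The descriptor step the abstract names can be sketched as follows. The SVM classifier is replaced here by a simple distance threshold, and the contours and threshold are illustrative, so this is only a reading of the pipeline, not LampTop's implementation:

```python
import cmath, math

# Sketch: Fourier shape descriptors of the projected marker's contour,
# compared against the undistorted reference. A finger near the shape
# distorts the contour the camera sees, which shifts the descriptor.

def fourier_descriptor(contour, k=4):
    """contour: ordered list of (x, y) boundary points. Returns k harmonic
    magnitudes normalized by the first (rotation/scale invariant)."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    mags = []
    for f in range(1, k + 2):  # skip the DC term (f = 0)
        c = sum(z[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n)) / n
        mags.append(abs(c))
    first = mags[0] or 1.0
    return [m / first for m in mags[1:]]

def distorted(reference_fd, contour, threshold=0.01):
    """Flag a touch when the observed descriptor drifts from the reference."""
    fd = fourier_descriptor(contour)
    return sum((a - b) ** 2 for a, b in zip(reference_fd, fd)) > threshold

angles = [2 * math.pi * t / 32 for t in range(32)]
circle = [(math.cos(a), math.sin(a)) for a in angles]  # undistorted shape
dented = [((1 + 0.3 * math.cos(2 * a)) * math.cos(a),
           (1 + 0.3 * math.cos(2 * a)) * math.sin(a)) for a in angles]
ref = fourier_descriptor(circle)
```

Normalizing by the first harmonic makes the descriptor insensitive to the shape's size and rotation, which matters when the projected image lands at varying scales on the surface.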

This paper presents a multi-touch based interface for mixing music. The goal
of the interface is to provide users with a more intuitive control of the music
mix by implementing the so-called stage metaphor control scheme, which is
especially suitable for multi-touch surfaces. Specifically, we discuss
functionality important to the professional music technician (our main target
user) that is especially challenging to integrate when implementing the stage
metaphor. Finally, we propose and evaluate solutions to these challenges.

In this paper, we present our computer-assisted language learning system
called TandemTable. It is designed for a multi-touch tabletop and is meant to
aid co-located tandem language learners during their learning sessions. By
suggesting topics of discussion, and presenting learners with a variety of
conversation-focused collaborative activities with shared digital artifacts,
the system helps inspire conversations and keep them flowing.

CubIT is a multi-user, large-scale presentation and collaboration framework
installed at the Queensland University of Technology's (QUT) Cube facility, an
interactive facility made up of 48 multi-touch screens and very large
projected display screens. CubIT was built to make the Cube facility accessible to QUT's
academic and student population. The system allows users to upload, interact
with and share media content on the Cube's very large display surfaces. CubIT
implements a unique combination of features including RFID authentication,
content management through multiple interfaces, multi-user shared workspace
support, drag and drop upload and sharing, dynamic state control between
different parts of the system and execution and synchronisation of the system
across multiple computing nodes.

Touch-based web browsing with tablet devices is not yet utilizing its full
potential. This paper introduces an asymmetric bimanual interaction technique
that makes browser-based multi-touch gestures more expressive. In the proposed
TouchModifier technique, a semi-transparent panel with modifier controls is
docked to the side of the screen. The non-dominant hand operates the side
panel, while the dominant hand interacts with the application content as usual.
The controls on the side panel operate as fluid mode selectors that enrich and
override the semantics of the dominant hand gestures. This opens novel
interaction possibilities in browser applications, while remaining
interoperable with existing web pages. In this paper, we describe the proposed
concept and present its prototype implementation with a use case.

The effect of active encouragement on a situated public display with an
interactive quiz

Successful deployment of a situated public display (SPD) relies on its
ability to engage many users steadily and for a considerable length of time. In
this work, to evaluate the SPD's ability to actively encourage users to engage
in an interactive public display, we compared 3 types of touch-based
interaction modes on a multi-touch based public display, the Wall of Quiz, each
mode providing, respectively, (1) a funny video clip, (2) a quiz game, (3) a
quiz with an encouraging message after 10 consecutive correct answers. We
videotaped user behavior in the wild and, using the Mensecond, an evaluation
index we developed, found that mode (3) resulted in a significantly higher
Mensecond rate. This result shows that providing motivation leads to in-depth
engagement with display content, which may in turn result in successful
delivery of information such as ads, notices, and campaigns.
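The abstract does not define the Mensecond in detail; the sketch below assumes the natural reading of person-seconds, i.e., each viewer's engagement duration summed over all viewers, optionally divided by observation time to give a rate. Session data and numbers are invented:

```python
# Assumed person-seconds index: sum each viewer's engagement interval, then
# optionally normalize by the observation window to compare display modes.

def mensecond(sessions):
    """sessions: list of (start_s, end_s) engagement intervals, one per
    viewer. Returns total person-seconds of engagement."""
    return sum(end - start for start, end in sessions)

def mensecond_rate(sessions, observation_s):
    """Person-seconds accrued per second of observation."""
    return mensecond(sessions) / observation_s

quiz_mode = [(0, 40), (10, 70), (65, 80)]  # three viewers, overlapping stays
total = mensecond(quiz_mode)               # 40 + 60 + 15 person-seconds
rate = mensecond_rate(quiz_mode, 600)      # over a 10-minute window
```

Summing person-seconds (rather than counting heads) rewards modes that hold attention longer, which is exactly the contrast the quiz-with-encouragement condition was designed to surface.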

This paper proposes the use of Tabletop Computers for Project Management
activities like Task Assignment. Task Assignment is essentially collaborative
and ideally done in around-the-table discussion, yet nowadays it happens over
the network on personal devices even when there is no constraint on common
time and space. Face-to-face collaboration is dwindling, even though it is
faster in reaching consensus, richer in quality of communication, and tends to
be more satisfying for the group (compared to computer-mediated
collaboration) [1]. Use of a tabletop computer, which combines the
productivity benefits of a computer with the social benefits of
around-the-table interaction, can potentially enhance the effectiveness of
such collocated Task Assignment meetings without affecting agility or
disturbing traditional settings.

Doctoral symposium

My research is a combination of constructive design research and practice-led
research in the domain of producing novel big-screen experiences for the
School of Art and Design and the Department of Motion Picture, Television and
Production Design. As one of the case studies in my thesis research, I present
a generic user interface for large interactive walls, Kupla UI. Kupla UI uses
physics-modeled spherical content widgets to present information. It is primarily
targeted for multi-user information exploration and holding informal
presentations in public spaces, such as exhibitions, commercial spaces and
lobbies. Kupla is designed to support multiple simultaneous users, graph-based
content hierarchy, flexible installation form-factors, heterogeneous content,
and playful interaction. In the Kupla design, we have developed multiple
states for the spherical widgets that differ in terms of visualization, function
and physical modeling. These different states help to accommodate different use
cases with the same installation.

Technological developments open new opportunities to meet the increasing
expectations of museum visitors. Although these technologies provide many new
possibilities, individual challenges and limitations are rife. Museums should
aim to unify many such technologies in order to capture visitor attention,
engage interaction and facilitate social activities. By incorporating exhibits,
objects, devices and people into a network of interconnected systems, new
patterns, interaction types and social relations are expected to emerge. The
goal of the research described in this paper is to explore the behavioural
patterns emerging from visitors' interaction within the museum environment, how
these patterns can be utilised in order to create new engaging and social
experiences and how unifications of new technologies can contribute to engaging
visitor interactions.

Wearable Computing has changed the way individuals interact with computers,
intertwining the natural capabilities of the human body with processing
apparatus. But most of this technology has been designed for clothing or
accessories, and it is still flat and rigid, giving the wearer a cyborg look.
Beauty Technology, based on electromagnetic devices embedded into non-invasive
beauty products, opens new possibilities for interacting with different
surfaces and devices. It locates wearable technologies on the body surface and
makes use of muscle movements as an interactive interface. This work
introduces the term Beauty Technology as an emergent field in Wearable
Computing. It discusses the materials and processes used for developing the
Beauty Tech prototypes and presents some examples of the use of beauty
technologies in everyday beauty products.

This paper outlines doctoral research that investigates the use of magnetic
forces to achieve directly deformable interaction on tabletop interactive
surfaces. The problem is defined as being the lack of tangibility in touch
surfaces and the lack of touch surfaces that feel soft to the touch. A survey
of related work is presented along with a description of the chosen approach of
using magnetic forces to implement a directly deformable, soft, interactive
surface.

Workshops and tutorials

This workshop proposes to bring together researchers who are interested in
improving collaborative experiences through the use of multi-sized interaction
surfaces, ranging from large-scale walls, to tables, tablets and phones. The
opportunities for innovation exist, but the tabletop community has still not
completely addressed the problem of enabling effective collaboration
activities across multiple interactive surfaces, especially in complex work
domains. Of
particular interest is the potential synergy that one can obtain by effectively
combining different-sized surfaces.

Interactive surfaces for interaction with stereoscopic 3D (ISIS3D): tutorial
and workshop at ITS 2013

With the increasing distribution of multi-touch capable devices, multi-touch
interaction is becoming more and more ubiquitous. Multi-touch interaction
offers new ways to deal with 3D data, allowing a high degree of freedom (DOF)
without instrumenting the user. Due to advances in 3D technologies, designing
for 3D interaction is now more relevant than ever. With more powerful engines
and high-resolution screens, even mobile devices can run advanced 3D graphics;
3D UIs are emerging beyond the game industry; and recently, first prototypes as
well as commercial systems bringing (auto-)stereoscopic display to
touch-sensitive surfaces have been proposed. With the Tutorial and Workshop on
"Interactive Surfaces for Interaction with Stereoscopic 3D (ISIS3D)" we aim to
provide an interactive forum that focuses on the challenges that appear when
the flat digital world of surface computing meets the curved, physical, 3D
space we live in.

Our goal with this workshop is to bring together researchers, designers, and
practitioners working in the library domain to share their experiences with
and plans for integrating interactive surfaces into the library space.

This workshop proposed to bring together researchers interested in visual
adaptation of interfaces. The gaze-tracking community is often constrained to
visual adaptation at short distances where gaze data is reliably available.
Researchers working on distance-based interfaces tend to work in room-sized
environments, with wall-sized displays or multiple displays. Visual adaptation
using contextual information or personalisation is relatively independent of
the size of the environment but comes with its own set of challenges due to the
complexities of dealing with contextual information. Even though most of these
researchers are creating visually adaptive interfaces, their approaches,
concerns and constraints differ. The aim of this workshop was to create an
opportunity to increase awareness of this diverse research and to establish
areas of possible collaboration.

This tutorial introduces strategies for applying knowledge of people's and
devices' proxemic relationships to interaction design. The goal is to inform
the design of future proxemic-aware devices that -- similar to people's natural
expectations and use of proxemics -- allow increased connectivity and
interaction possibilities when in proximity to people, other devices, or
objects. Towards this goal, the tutorial introduces strategies by which
fine-grained knowledge of the proxemic relationships between entities can be
exploited in interaction design for digital surfaces (e.g., large interactive
displays, or portable tablets).

Paper-pencil sketches are a valuable tool during different stages of
experience design in human-computer interaction. This hands-on tutorial will
demonstrate how to integrate sketching into researchers' and interaction
designers' everyday practice -- with a particular focus on the design of
applications for interactive surfaces (e.g., phones, tablets, tabletops,
interactive whiteboards). Participants will learn essential sketching
strategies, apply them in practice during various hands-on exercises, and
explore the various ways of using sketches as a tool when designing novel
interactive systems.