TACCESS 2013-09 Volume 5 Issue 1

Point-and-click interactions using a mouse are an integral part of computer
use for current desktop systems. Compared with younger users, however, older
adults experience greater difficulty performing cursor positioning tasks, and
this can limit their ability to use a computer easily and effectively. Target
expansion is a technique for improving pointing performance where the target
grows dynamically as the cursor approaches. This has the advantage that targets
conserve screen real estate in their unexpanded state, yet can still provide
the benefits of a larger area to click on. This article presents two studies of
target expansion with older and younger participants, involving
multidirectional point-select tasks with a computer mouse. Study 1 compares
static versus expanding targets, and Study 2 compares static targets with three
alternative techniques for expansion. Results show that expansion can improve
selection times by up to 14% and reduce error rates by up to 50%. Additionally,
expanding targets are beneficial even when the expansion happens late in the
movement, that is, after the cursor has reached the expanded target area or
even after it has reached the original target area. Participants' subjective
feedback on target expansion is generally favorable, lending further support
to the technique.
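
To make the mechanism concrete, here is a minimal sketch of
proximity-triggered expansion, assuming a circular target; the trigger
distance and scale factor are illustrative choices, not the values used in
the studies:

    # Hypothetical sketch of target expansion; parameters are illustrative.
    def target_radius(cursor, target, base_radius,
                      expand_scale=2.0, trigger_distance=100.0):
        """Radius to draw and hit-test the target on this frame (pixels)."""
        dx, dy = cursor[0] - target[0], cursor[1] - target[1]
        distance = (dx * dx + dy * dy) ** 0.5
        # Grow the target once the cursor comes within the trigger
        # distance; otherwise keep its space-saving base size.
        if distance <= trigger_distance:
            return base_radius * expand_scale
        return base_radius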

Performing Locomotion Tasks in Immersive Computer Games with an Adapted
Eye-Tracking Interface

Young people with severe physical disabilities may benefit greatly from
participating in immersive computer games. In-game tasks can be fun, engaging,
educational, and socially interactive. But for those who are unable to use
traditional methods of computer input such as a mouse and keyboard, there is a
barrier to interaction that they must first overcome. Eye-gaze interaction is
one method of input that can potentially achieve the levels of interaction
required for these games. Which gaze interaction technique is appropriate
depends upon the task being performed, the individual performing it, and the
equipment available. To fully realize the impact of participation in these
environments, techniques need to be adapted to the person's abilities. We
describe an approach to designing and adapting a gaze interaction technique to
support locomotion, a task central to immersive game playing. The approach is
evaluated with a group of young people with cerebral palsy and muscular
dystrophy. The results show that adapting the interaction technique enables
participants to significantly improve their in-game character control.
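
As an illustration of the kind of mapping involved (not the authors' exact
design), gaze position can steer the in-game character, with a per-user dead
zone absorbing fixation jitter; the dead-zone width is exactly the sort of
parameter one would adapt to each player's abilities:

    # Hypothetical gaze-to-locomotion mapping; the dead-zone width is a
    # per-user parameter, illustrating the kind of adaptation described.
    def gaze_to_locomotion(gaze_x, gaze_y, screen_w, screen_h,
                           dead_zone=0.15):
        """Map a gaze point to (turn, forward) commands in [-1, 1]."""
        # Normalize gaze to [-1, 1] around the screen center.
        nx = (gaze_x / screen_w) * 2.0 - 1.0
        ny = (gaze_y / screen_h) * 2.0 - 1.0
        # Ignore small offsets so fixation jitter does not cause drift.
        turn = 0.0 if abs(nx) < dead_zone else nx
        forward = 0.0 if abs(ny) < dead_zone else -ny  # gaze up = forward
        return turn, forward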

Many researchers internationally are studying how to synthesize computer
animations of sign language; such animations have accessibility benefits for
people who are deaf and have lower literacy in written languages. The field has
not yet formed a consensus as to how to best conduct evaluations of the quality
of sign language animations, and this article explores an important
methodological issue for researchers conducting experimental studies with
participants who are deaf. Traditionally, when evaluating an animation, some
lower and upper baselines are shown for comparison during the study. For the
upper baseline, some researchers use carefully produced animations, and others
use videos of human signers. Specifically, this article investigates, in
studies where signers view animations of sign language and are asked subjective
and comprehension questions, whether participants differ in their subjective
and comprehension responses when actual videos of human signers are shown
during the study. Through three sets of experiments, we characterize how the
Likert-scale subjective judgments of participants about sign language
animations are negatively affected when they are also shown videos of human
signers for comparison, especially when displayed side by side. We also
identify a small positive effect on the comprehension of sign language
animations when studies also contain videos of human signers. Our results
enable direct comparison of previously published evaluations of sign language
animations that used different types of upper baselines -- video or animation.
Our results also provide methodological guidance for researchers who are
designing evaluation studies of sign language animation or designing
experimental stimuli or questions for participants who are deaf.

Distinguishing Users By Pointing Performance in Laboratory and Real-World
Tasks

The need for accurate pointing is an obstacle to computer access for
individuals who experience motor impairments. One of the main barriers to
assisting individuals
with pointing problems is a lack of frequent and low-cost assessment of
pointing ability. We are working to build technology to automatically assess
pointing problems during everyday (real-world) computer use. To this end,
we have gathered and studied real-world pointing use from individuals with
motor impairments and older adults. We have used this data to develop novel
techniques to analyze pointing performance. In this article, we present learned
statistical models that distinguish between pointing actions from diverse
populations using real-world pointing samples. We describe how our models could
be used to support individuals with different abilities sharing a computer, or
one individual who experiences temporary pointing problems. Our investigation
contributes to a better understanding of real-world pointing. We hope that
these techniques will be used to develop systems that can automatically adapt
to users' current needs in real-world computing environments.
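
A minimal sketch of this kind of model, assuming simple kinematic features
and an off-the-shelf classifier (both the feature set and the classifier
choice here are illustrative assumptions, not the article's exact models):

    # Hypothetical feature extraction for one pointing action; a learned
    # model could be any standard classifier, e.g. scikit-learn's
    # LogisticRegression fit on rows of these features.
    import numpy as np

    def pointing_features(samples):
        """samples: array of (t, x, y) cursor readings for one action."""
        t, x, y = samples[:, 0], samples[:, 1], samples[:, 2]
        steps = np.hypot(np.diff(x), np.diff(y))
        speed = steps / np.maximum(np.diff(t), 1e-6)
        return np.array([
            t[-1] - t[0],                                   # movement time
            steps.sum(),                                    # path length
            speed.max(),                                    # peak speed
            (np.diff(np.sign(np.diff(speed))) != 0).sum(),  # speed peaks, a
                                                            # submovement proxy
        ])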

Real-time captioning enables deaf and hard of hearing (DHH) people to follow
classroom lectures and other aural speech by converting it into visual text
with less than a five-second delay. Keeping the delay short allows end-users to
follow and participate in conversations. This article focuses on the
fundamental problem that makes real-time captioning difficult: sequential
keyboard typing is much slower than speaking. We first surveyed the audio
characteristics of 240 one-hour-long captioned lectures on YouTube, such as
speed and duration of speaking bursts. We then analyzed how these
characteristics impact caption generation and readability, considering
specifically our human-powered collaborative captioning approach. We note that
most of these characteristics are also present in more general domains. For our
caption comparison evaluation, we transcribed a classroom lecture in real time
using three captioning approaches: our collaborative approach, automatic speech
recognition (ASR), and professional captionists. We recruited 48 participants
(24 DHH) to
watch these captioned lectures in an eye-tracking laboratory. We presented
these captions in a randomized, balanced order. We show that both hearing and
DHH participants preferred and followed collaborative captions better than
those generated by ASR or professionals, due to
the more consistent flow of the resulting captions. These results show the
potential of collaborative captioning to reliably capture speech even during
sudden bursts of speed and to generate "enhanced" captions, capabilities that
other human-powered captioning approaches lack.
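
As a rough sketch of the collaborative idea, each typist captions only short,
overlapping bursts of speech and the partial transcripts are merged. Real
systems align word sequences more carefully; this timestamp-based merge is
only a simplified stand-in:

    # Simplified stand-in for collaborative caption merging: order the
    # words from all typists by timestamp and drop immediate duplicates
    # where two typists captured the same word.
    def merge_partial_captions(streams):
        """streams: one list of (seconds, word) pairs per typist."""
        words = sorted(pair for stream in streams for pair in stream)
        merged, last = [], None
        for _, word in words:
            if word != last:
                merged.append(word)
            last = word
        return " ".join(merged)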

Care staff, those who attend to the day-to-day needs of people in
residential facilities, play an important part in the health-care provision
for those entrusted to their care. The potential use of technology by care
staff has received little research attention. The work reported here
takes initial steps toward addressing that gap, considering both the design
requirements for this population and presenting early work on a supporting
system. We describe the development of a software tool
for use by care staff, called Portrait, and report two studies related to
factors affecting technology use by this population. The results of this
research are promising, with Portrait being very positively received by care
managers and care staff. A deployment of the software in a care home for over
a month showed sustained engagement, with care staff returning to the system
throughout the test period. The contributions of this research are the
identification of
factors important in working with a care staff population, the introduction and
evaluation of a novel software tool for care staff in residential homes, and
the highlighting of potential benefits of technology in assisting care staff.

TACCESS 2014-03 Volume 5 Issue 4

Video sharing sites enable members of the sign language community to record
and share their knowledge, opinions, and worries on a wide range of topics. As
a result, these sites have formative digital libraries of sign language content
hidden within their large overall collections. This article explores the
problem of locating these sign language (SL) videos and presents techniques for
identifying SL videos in such collections. To determine the effectiveness of
existing text-based search for locating these SL videos, a series of queries
was issued to YouTube covering the top 10 news stories of 2011
according to Yahoo!. Overall precision for the first page of results (up to 20
results) was 42%. An approach for automatically detecting SL video is then
presented. Five video features considered likely to be of value were developed
using standard background modeling and face detection. The article compares the
results of an SVM classifier when given all permutations of these five
features. The results show that a measure of the symmetry of motion relative to
the face position provided the best performance of any single feature. When
tested against a challenging test collection that included many likely false
positives, an SVM provided with all five features achieved 82% precision and
90% recall. In contrast, the text-based search (queries with the topic terms
and "ASL" or "sign language") returned a significant portion of non-SL content
-- nearly half of all videos found. By our estimates, the application of
video-based filtering techniques such as the one proposed here would increase
precision from 42% for text-based queries up to 75%.
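
As an illustration, the symmetry-of-motion feature could be computed roughly
as below, given a per-pixel motion-energy map from background modeling and a
detected face position; the exact definition in the article may differ:

    # Hypothetical version of the symmetry-of-motion feature: compare
    # motion energy to the left and right of the signer's face column.
    import numpy as np

    def motion_symmetry(motion_energy, face_x):
        """motion_energy: 2D per-pixel motion map; face_x: face center column."""
        h, w = motion_energy.shape
        half = min(face_x, w - face_x)
        left = motion_energy[:, face_x - half:face_x]
        right = motion_energy[:, face_x:face_x + half][:, ::-1]  # mirrored
        denom = left.sum() + right.sum() + 1e-9
        # 1.0 = motion balanced about the face axis (typical of signing),
        # 0.0 = entirely one-sided motion.
        return 1.0 - np.abs(left - right).sum() / denom

Aggregated over a clip, such per-frame values would form one of the five
inputs to the SVM classifier.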

Automatic Task Assistance for People with Cognitive Disabilities in Brushing
Teeth -- A User Study with the TEBRA System

People with cognitive disabilities such as dementia and intellectual
disabilities tend to have problems in coordinating steps in the execution of
Activities of Daily Living (ADLs) due to limited capabilities in cognitive
functioning. To successfully perform ADLs, these people are reliant on the
assistance of human caregivers. This decreases the independence of care
recipients and imposes a high burden on caregivers. Assistive Technology
for Cognition (ATC) aims to compensate for decreased cognitive functions. ATC
systems provide automatic assistance in task execution by delivering
appropriate prompts that enable the user to perform ADLs without the
assistance of a human caregiver, increasing the user's independence and
relieving the caregiver's burden. In this article, we
describe the design, development and evaluation of a novel ATC system. The
TEBRA (TEeth BRushing Assistance) system supports people with moderate
cognitive disabilities in brushing their teeth. A main requirement for the
acceptance of ATC systems is context awareness: the system should provide
appropriate assistance without requiring explicit feedback from the user.
Furthermore, an
ATC system needs to handle spatial and temporal variance in the execution of
behaviors such as different movement characteristics and different velocities.
The TEBRA system handles spatial variance in a behavior recognition component
based on a Bayesian network classifier. A dynamic timing model deals with
temporal variance by adapting to different velocities of users during a trial.
We evaluate a fully functioning prototype of the TEBRA system in a study with
people with cognitive disabilities. The main aim of the study is to analyze
the technical performance of the system and users' behavior when interacting
with it, with regard to the main hypothesis: is the TEBRA system able to
increase users' independence in brushing their teeth?
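
A minimal sketch of what such a dynamic timing model might look like, with an
exponentially smoothed estimate of the user's pace rescaling the expected step
durations (the smoothing constant and the interface are illustrative
assumptions, not the TEBRA implementation):

    # Hypothetical dynamic timing model: track how fast this user is
    # relative to nominal step durations and adapt prompt timeouts.
    class DynamicTimingModel:
        def __init__(self, nominal_durations, alpha=0.3):
            self.nominal = nominal_durations  # step name -> seconds
            self.pace = 1.0                   # 1.0 = nominal speed
            self.alpha = alpha                # smoothing constant

        def observe(self, step, observed_seconds):
            """Update the pace estimate after a completed step."""
            ratio = observed_seconds / self.nominal[step]
            self.pace = (1 - self.alpha) * self.pace + self.alpha * ratio

        def prompt_timeout(self, step):
            """Seconds to wait before prompting the user at this step."""
            return self.nominal[step] * self.pace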