Improving classroom accessibility

How can we improve the classroom experience of deaf and hard-of-hearing students? This project investigates the effectiveness of eyewear computers that display ASL, helping students manage multiple visual sources of information in the classroom.

Usability of Automatic Captions for Meetings

We are investigating a tool that captions live one-on-one meetings using imperfect automatic speech recognition (ASR) technology. A key question is how best to convey when the ASR system is not confident it has recognized a word correctly, so that users know when they can trust the captions.
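
As a minimal sketch of this idea, the snippet below marks low-confidence words in a caption line. It assumes the ASR output is a list of (word, confidence) pairs; the 0.5 threshold and the italic markup are illustrative choices, not the study's actual design.

# Render a caption line, visually marking words the ASR system is unsure about.
def render_caption(words, threshold=0.5):
    """Return a caption string with low-confidence words italicized."""
    rendered = []
    for word, confidence in words:
        if confidence < threshold:
            rendered.append(f"<i>{word}</i>")  # flag uncertain words for the viewer
        else:
            rendered.append(word)
    return " ".join(rendered)

# Hypothetical ASR output for one caption line.
asr_output = [("the", 0.98), ("meeting", 0.95), ("starts", 0.91), ("at", 0.97), ("noon", 0.42)]
print(render_caption(asr_output))
# -> the meeting starts at <i>noon</i>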

Funding Support

Matt Huenerfauth (PI). February 2017 to February 2018. Identifying the Best Methods for Displaying Word-Confidence in Automatically Generated Captions for Deaf and Hard-of-Hearing Users. Google Faculty Research Awards Program. Amount of funding: $56,902.

Larwan Berke (student fellowship recipient), Matt Huenerfauth (faculty advisor). September 2017 to August 2020. National Science Foundation Graduate Research Fellowship (NSF-GRF) to Larwan Berke. Amount of funding: Tuition and stipend for three years, approximate value: $138,000.

Matt Huenerfauth and Michael Stinson, PIs. September 2015 to August 2017. “Creating the Next Generation of Live-Captioning Technologies.” Internal Seed Research Funding, Office of the President, National Technical Institute for the Deaf, Rochester Institute of Technology.

This project is research conducted by Matt Huenerfauth and his students, in collaboration with colleagues at the National Technical Institute for the Deaf, including Michael Stinson, Lisa Elliot, James Mallory, and Donna Easton.

Effective Methods of Teaching Accessibility

This project examines the effectiveness of a variety of methods for teaching computing students about concepts related to computer accessibility for people with disabilities. This multi-year project includes longitudinal testing of students two years after instruction to look for lasting impacts.

Relevant Links

Teach Access

This national initiative among technology companies and universities promotes accessibility education in university computing degree programs.

This project is joint work among Stephanie Ludi, Vicki Hanson, and Matt Huenerfauth.

Investigating Effective Pedagogies for Teaching Accessibility

This project focuses on how to create and evaluate various pedagogical techniques for including accessibility topics in computing curricula in higher education.

This project is conducted by Kristen Shinohara and her students.

Our Approach

Areas of interest include determining how accessibility can be incorporated into specific domains (e.g., how can we incorporate accessibility into algorithms courses?) and examining specific pedagogies that can be included in teaching practice.

Generating ASL Animation from Motion-Capture Data

This project is investigating techniques for making use of motion-capture data collected from native ASL signers to produce linguistically accurate animations of American Sign Language. In particular, this project is focused on the use of space for pronominal reference and verb inflection/agreement.

This project also supported a summer research internship program for ASL-signing high school students, and REU supplements from the NSF have supported research experiences for visiting undergraduate students.

Data & Corpora

The motion-capture corpus of American Sign Language collected during this project is available for non-commercial use by the research community.

Learning ASL through Real-Time Practice

We are investigating new video and motion-capture technologies to enable students learning American Sign Language (ASL) to practice their signing independently through a tool that provides feedback automatically.

This project is joint work with the City University of New York's City College and Hunter College.

Word Importance in Captions for Deaf Users

The accuracy of Automatic Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. To evaluate the usefulness of ASR-based captions for Deaf or Hard of Hearing (DHH) users, simply counting the number of errors is insufficient, since some words contribute more to the meaning of the text than others.

We are studying methods for automatically predicting the importance of individual words in a text for DHH users in a captioning context, and we are using these models to develop alternative evaluation metrics for ASR accuracy that predict how useful ASR-based captions would be for users.
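
The sketch below illustrates the general idea of an importance-weighted error metric: each recognition error is weighted by the importance of the reference word. The importance scores and the simple per-position comparison are placeholders (a real metric would use an edit-distance alignment and a trained importance predictor), so this is illustrative only.

# Importance-weighted error rate: errors on important words count more.
def weighted_error_rate(reference, hypothesis, importance):
    """Weight each word error by the importance of the reference word."""
    total = sum(importance)
    if total == 0:
        return 0.0
    errors = 0.0
    for i, (ref_word, weight) in enumerate(zip(reference, importance)):
        hyp_word = hypothesis[i] if i < len(hypothesis) else None
        if hyp_word != ref_word:
            errors += weight
    return errors / total

reference  = ["the", "exam", "is", "on", "friday"]
hypothesis = ["the", "exam", "is", "on", "tuesday"]   # error on a high-importance word
importance = [0.1, 0.9, 0.2, 0.1, 0.9]                # hypothetical importance scores
print(weighted_error_rate(reference, hypothesis, importance))  # ~0.41, versus a plain WER of 0.2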

Funding Support

Matt Huenerfauth and Michael Stinson, PIs. September 2015 to August 2017. “Creating the Next Generation of Live-Captioning Technologies.” Internal Seed Research Funding, Office of the President, National Technical Institute for the Deaf, Rochester Institute of Technology.

Featured Papers

Sushant Kafle, Matt Huenerfauth. 2017. Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'17). ACM, New York, NY, USA.

Sushant Kafle, Matt Huenerfauth. 2016. Effect of Speech Recognition Errors on Text Understandability for People who are Deaf or Hard of Hearing. In Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), INTERSPEECH 2016, San Francisco, CA, USA.

This project is research conducted by Matt Huenerfauth and his students, in collaboration with colleagues at the National Technical Institute for the Deaf, including Michael Stinson, Lisa Elliot, James Mallory, and Donna Easton.

Creating Linguistic Stimuli for ASL Research

Animated virtual humans can produce a wide variety of subtle performances of American Sign Language, including minor variations in handshape, location, orientation, or movement. This technology can produce stimuli for display in experimental studies with ASL signers, to study ASL linguistics.

This project is joint work among Matt Huenerfauth and colleagues at NTID.

Matt Dye (PI), Matt Huenerfauth (Mentor), Kim Kurz (Other Personnel). March 2016 to August 2017. “Validity of Avatar Stimuli for Psycholinguistic Research on ASL.” Scholarship Portfolio Development Initiative, National Technical Institute for the Deaf, Rochester Institute of Technology. Amount: $10,000.

Methodologies for DHH User Research

We have conducted a variety of methodological research on the most effective ways to structure empirical evaluation studies of technology with Deaf and Hard of Hearing (DHH) users.

This research has included the creation of standard stimuli and question items for studies with ASL animation technology, analysis of the relationship between user demographics and responses to question items, the use of eye-tracking in studies with DHH users, and the creation of American Sign Language versions of standard usability evaluation instruments.

This research is conducted by Matt Huenerfauth and his students.

Featured Papers

Matt Huenerfauth, Kasmira Patel, Larwan Berke. 2017. Design and Psychometric Evaluation of an American Sign Language Translation of the System Usability Scale. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'17). ACM, New York, NY, USA.

Improving the Usability of Resources for Speech Language Therapists

This project investigates the usability and utility of resources available to speech language therapists. By understanding the usability of existing resources, we design tools that give insight into the varied language characteristics of diverse individuals with non-fluent aphasia.

This project is conducted by Vicki Hanson and her students.

Featured Paper

Paula Garcia. 2017. Distribution of Language Measures among Individuals with and without Non-fluent Aphasia. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '17). ACM.

Tools for Blind Programmers

This project investigates the difficulties that blind computer programmers face when navigating source code. By examining the tools these programmers currently use to move through code and the work-arounds they adopt to make existing technologies work for them, we look for ways to improve this experience with new technologies.

Featured Papers

Khaled Albusays, Stephanie Ludi, Matt Huenerfauth. 2017. Interviews and Observation of Blind Software Developers at Work to Understand Code Navigation Challenges. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'17). ACM, New York, NY, USA.

This project is joint work among Stephanie Ludi, Matt Huenerfauth, and their students.

Facial Expression for Animations of ASL

We are investigating techniques for producing linguistically accurate facial expressions for animations of American Sign Language; such facial expressions would make these animations easier to understand and more effective at conveying information, thereby improving the accessibility of online information for people who are deaf.

Relevant Links

RIT MyAccess Website

Developing an objective method to facilitate the situation awareness of blind travelers

Current methods for evaluating the Orientation Assistive Technology (OAT) that aids blind travelers indoors rely on performance metrics. When enhancing such systems, evaluators conduct qualitative studies to learn where to focus their efforts.

This project has been completed. It was conducted by Stephanie Ludi and her students.

Our Approach

This project investigates the use of situation awareness metrics to objectively evaluate blind travelers' situation awareness when using indoor OAT systems. The goal is to design an objective method that can highlight design areas that need improvement when evaluating such systems.

Predicting English Text Readability for Users

This project has investigated the use of computational linguistic technologies to identify whether textual information would meet the special needs of users with specific literacy impairments.

In research conducted prior to 2012, we investigated text-analysis tools for adults with intellectual disabilities. We developed a state-of-the-art predictive model of readability based on discourse, syntactic, semantic, and other linguistic features.
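
As a simplified illustration of a feature-based readability model, the sketch below trains a regressor on surface features of a text. The two toy features, the tiny training set, and the ratings are placeholders; the actual model described above used a much richer set of discourse, syntactic, and semantic features.

from sklearn.linear_model import LinearRegression

def extract_features(text):
    """Toy surface features standing in for the full linguistic feature set."""
    words = text.split()
    sentences = [s for s in text.split(".") if s.strip()]
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    avg_sent_len = len(words) / max(len(sentences), 1)
    return [avg_word_len, avg_sent_len]

# Hypothetical training data: texts paired with human readability ratings.
texts = ["The cat sat. The dog ran.", "Constitutional jurisprudence necessitates deliberation."]
ratings = [1.0, 5.0]  # e.g., 1 = easy to read, 5 = difficult

model = LinearRegression().fit([extract_features(t) for t in texts], ratings)
print(model.predict([extract_features("Short words are easy to read.")]))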

In current work, we are investigating technologies for a wider variety of users.

This project is conducted by Matt Huenerfauth and his students.

Funding Support

Matt Huenerfauth, PI. December 2008. “Text readability software for adults with intellectual disabilities.” Research Enhancement Committee, Queens College, The City University of New York. Amount: $10,000.

Eye-Tracking to Predict User Performance

Computer users may benefit from user interfaces that can predict whether the user is struggling with a task, based on an analysis of the user's eye-movement behaviors. This project is investigating how to conduct precise experiments for measuring eye movements and user task performance; relationships between these variables can be examined using machine-learning techniques to produce predictive models for adaptive user interfaces.
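
The sketch below shows the general shape of such a predictive model: a classifier trained on per-trial eye-tracking features to predict whether the user completed the task successfully. The feature names, values, and labels are hypothetical; in a real study these would be computed from fixation and saccade logs.

from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-trial features:
# [mean fixation duration (ms), fixation count, proportion of time on the target region]
X = [
    [220, 45, 0.60],
    [310, 80, 0.35],
    [200, 40, 0.70],
    [350, 95, 0.30],
]
y = [1, 0, 1, 0]  # 1 = task completed successfully, 0 = user struggled

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[240, 50, 0.55]]))  # predict success for a new trial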

An important branch of this research has investigated whether eye-tracking technology can be used as a complementary or alternative method of evaluation for animations of sign language, by examining the eye-movements of native signers who view these animations to detect when they may be more difficult to understand.

ASL Animation Tools & Technologies

The goal of this research is to develop technologies to generate animations of a virtual human character performing American Sign Language. Funding from several sources has supported the animation programming platforms that underlie research systems being developed and evaluated at the laboratory.

In current work, we are investigating how to create tools that enable researchers to build dictionaries of animations of individual signs and to efficiently assemble them to produce sentences and longer passages.
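
As a minimal sketch of the assembly step, the snippet below lays out animation clips from a sign dictionary on a timeline to form a sentence. The sign glosses, clip names, durations, and pause value are illustrative; real systems must also handle transitions between signs, coarticulation, and nonmanual (facial) channels.

# Hypothetical dictionary mapping ASL glosses to animation clips and durations (seconds).
sign_dictionary = {
    "IX-1":   {"clip": "ix1.anim",    "duration": 0.4},  # first-person pronoun
    "LIKE":   {"clip": "like.anim",   "duration": 0.6},
    "COFFEE": {"clip": "coffee.anim", "duration": 0.7},
}

def assemble_timeline(gloss_sequence, pause=0.15):
    """Lay out sign clips on a timeline, with a short pause between signs."""
    timeline, t = [], 0.0
    for gloss in gloss_sequence:
        sign = sign_dictionary[gloss]
        timeline.append({"gloss": gloss, "clip": sign["clip"], "start": t})
        t += sign["duration"] + pause
    return timeline

for item in assemble_timeline(["IX-1", "LIKE", "COFFEE"]):
    print(item)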