ASL and other signed languages have only recently been recognized as full-fledged natural languages worthy of scientific study. These languages make use of complex articulations of the hands in parallel with linguistically significant facial gestures and head movements. Until recently, the lack of sophisticated tools for the capture, annotation, retrieval, and analysis of the complex interplay of manual and non-manual elements has held back linguistic research.

Thus, although there have been some fascinating discoveries in recent years, additional research on the linguistic structure of signed languages is badly needed. The relatively limited linguistic understanding of signed languages is, in turn, a hindrance to progress in computer-based sign language recognition. For these reasons, cross-disciplinary collaborations are needed in order to achieve advances in both domains.

This talk provides an overview of the nature of linguistic expression in the visual-spatial modality. It includes a demonstration of SignStream, an application developed by our research group for the annotation of video-based language data. SignStream has been used to create a substantial corpus of linguistically annotated data from native users of ASL, including high-quality synchronized video files showing the signing from multiple angles as well as a close-up view of the face. These video files and associated annotations, which have been used by many linguists and computer scientists (working separately and together), are being made publicly available.

Bio:
====

Carol Neidle, who received her Ph.D. in Linguistics from MIT with a dissertation on Russian syntax, is Professor of French and Linguistics at Boston University and Director of the American Sign Language Linguistic Research Project (ASLLRP), http://www.bu.edu/asllrp/. Robert G. Lee, a member of the ASLLRP team, is a Ph.D. student in Linguistics at BU as well as a certified ASL/English interpreter. He has taught linguistics and interpreting at Northeastern University.

The ASLLRP group has been conducting syntactic research on ASL (see, e.g., Neidle, Kegl, MacLaughlin, Bahan, and Lee, The Syntax of American Sign Language, MIT Press, 2000), developing tools for linguistic analysis of signed languages, and collaborating with computer scientists (primarily Dimitris Metaxas and Stan Sclaroff) on issues relevant to sign language recognition.