Washington state researchers pair Windows Mobile phones with an open source video encoder to send sign language over the airwaves

While text messaging has helped give deaf people access to mobile communications, those who rely primarily on American Sign Language (ASL) have been left behind by cell phone technology. That could change, thanks to the MobileASL project at the University of Washington.

The MobileASL project seeks to use video compression to allow sign language communication over wireless phones. PDA phones with larger screens and built-in video capture have helped the effort toward its goal, but university researchers still face bandwidth constraints from today's slow wireless networks. To produce video of the quality needed for intelligible ASL, they have had to develop a real-time video compression scheme built on x264, an open source H.264/AVC encoder.
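To see why aggressive compression is unavoidable, a back-of-envelope calculation helps. The figures below (QVGA resolution, 15 frames per second, GPRS-class throughput) are illustrative assumptions, not the project's own numbers:

```python
# Rough bitrate arithmetic showing the gap between raw video and
# the throughput of a slow wireless network. All numbers are
# illustrative assumptions, not MobileASL's actual parameters.
WIDTH, HEIGHT = 320, 240      # QVGA, a common PDA-phone screen size
FPS = 15                      # assumed frame rate
BITS_PER_PIXEL = 12           # raw YUV 4:2:0 sampling

raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
network_bps = 30_000          # rough GPRS-era throughput assumption

compression_needed = raw_bps / network_bps
print(f"raw: {raw_bps / 1e6:.1f} Mbit/s, "
      f"required compression: {compression_needed:.0f}x")
```

Even with generous assumptions about the network, the encoder has to shed well over two orders of magnitude of data while keeping hand and face motion legible, which is why an off-the-shelf codec configuration is not enough.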

Officials with the project say they have been able to nearly double the compression ratios of MPEG-2, allowing them to transmit video intelligible enough for users to follow the semantics of ASL despite the bandwidth limits of existing wireless networks. The National Science Foundation-sponsored project relies on cell phones running the Windows Mobile platform.

MobileASL stretches the available bandwidth even further by using motion and skin-detection algorithms to focus on the most important regions of the video: the hands and face. By identifying the portions of the image that contain skin pixels, the researchers found they could encode those regions at higher quality than the rest of the frame.
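The skin-region idea can be sketched in a few lines. The YCbCr thresholds and quantizer values below are illustrative assumptions, not the MobileASL team's actual classifier; the point is simply that blocks containing skin get a lower quantizer (finer detail) than the background:

```python
# Toy region-of-interest encoder step: flag pixels whose Cb/Cr values fall
# in a commonly cited skin range, then assign each 16x16 macroblock a
# quantizer -- lower (finer) where skin appears. Thresholds and QP values
# are illustrative assumptions, not MobileASL's.

def is_skin(cb, cr):
    # A frequently used YCbCr skin range; an assumption for this sketch.
    return 77 <= cb <= 127 and 133 <= cr <= 173

def qp_map(cbcr_frame, mb_size=16, qp_skin=26, qp_bg=38):
    """Return a per-macroblock quantizer map: lower QP where skin appears."""
    h, w = len(cbcr_frame), len(cbcr_frame[0])
    qps = []
    for my in range(0, h, mb_size):
        row = []
        for mx in range(0, w, mb_size):
            block = (cbcr_frame[y][x]
                     for y in range(my, min(my + mb_size, h))
                     for x in range(mx, min(mx + mb_size, w)))
            row.append(qp_skin if any(is_skin(cb, cr) for cb, cr in block)
                       else qp_bg)
        qps.append(row)
    return qps

# Toy 32x32 frame of (Cb, Cr) pairs: left half skin-like, right half not.
frame = [[(100, 150) if x < 16 else (110, 110) for x in range(32)]
         for y in range(32)]
print(qp_map(frame))  # [[26, 38], [26, 38]]
```

A real encoder would feed such a map into its rate-control stage so that the bit budget saved on the background is spent on the signer's hands and face.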

The MobileASL group is inviting some of the more than one million deaf or hard-of-hearing Americans who are fluent in sign language to take part in an eye-tracking study to determine visual patterns in ASL conversations.

"There's no chance that the iPhone is going to get any significant market share. No chance." -- Microsoft CEO Steve Ballmer