When the first iPhone came out in 2007, it created quite a buzz and a whole lot of excitement - except for one group: the blind.

"The blind community was very upset," said Larry Goldberg, the director of the National Center for Accessible Media (NCAM), a research and development group in Boston, USA devoted to improving the accessibility of media and new technologies for people with disabilities. The original iPhone's glass screen, single button and - most importantly - lack of accessibility features, such as screen reader integration, made it all but impossible for visually impaired people to use.

However, thanks to input from the blind community and NCAM, as well as Apple's dedication to making its products accessible, by 2009 the iPhone 3GS had VoiceOver, Apple's screen reading technology, fully integrated. Today the iPhone is "the most popular phone with the blind community," says Goldberg. Apple, he said, "bought in."

The iPhone example illustrates an important issue in this day of lightning-fast technological development, when, seemingly, new smartphones, tablets and laptops (not to mention associated apps) are introduced daily: Who makes sure that hardware manufacturers, software and app developers and content creators are ensuring that people with vision problems, hearing difficulties, or other disabilities can enjoy all the new media and technology most of us take for granted?

NCAM is one of the most important players in the US filling this role, which is only getting more challenging as technology advances.

"It's always a game of catch-up," says Goldberg.

I recently had the chance to visit the NCAM offices and lab to learn more about their history and mission and their view on the future of accessible media and technology. I also learned about a system they recently developed, Media Access Mobile, which may soon be coming to a museum (or theater) near you.

A long history of developing accessible technologies

NCAM is "dedicated to addressing barriers to media and emerging technologies for people with disabilities in their homes, schools, workplaces, and communities." To that end it conducts research and development into accessible media technologies, consults with technology and media companies to help them integrate accessibility into their products, and participates in standards setting and policy development. It's part of WGBH, a public television and radio broadcaster in Boston with a long history of developing technologies to make media accessible, starting with captioning in 1972.

The first TV show to display captions was Julia Child's The French Chef (a WGBH production), which aired with open captions on February 11, 1972. At the time, open captioning - displaying captions on the screen for all to see - was "the only way you could caption stuff," said Goldberg, who got his start working in WGBH's Caption Center in the mid-1980s.

NCAM works directly with the disabled community to find out what their concerns are. It has formed strategic partnerships with virtually every major hardware company - Apple, Microsoft, Google, IBM, and HP among them (though not, interestingly, Amazon) - to advise them on accessibility.

"We find out ways to work with companies that want to make their products more accessible," said Goldberg.

NCAM staffers are also active participants in the development of standards and policies for accessible technology and media, such as the W3C's Web Accessibility Initiative (WAI) and HTML5.

"We work to build accessible provisions into those evolving standards," said Goldberg. Geoff Freed, NCAM's director of technology products and web media standards, says that his role is to "keep up with what they're doing, chime in when necessary and make sure that they keep accessibility in mind when they're developing things and to complain when I think they're doing things that are bad for accessibility."

At its core, though, NCAM is an R&D shop and its people are known as experts on making media and technology accessible.

"We brainstorm, we prototype, we try and rally other developers and the community to pick up technologies and run with things and create things," says Brad Botkin, NCAM's director of technology. "We're known to be the problem solvers around accessibility," he said.

Media Access Mobile

Botkin and NCAM's manager of business and sales, Peter Villa, took me into their lab to demonstrate and talk about one of their latest accessible technologies, Media Access Mobile (MAM). In a nutshell, MAM is a complete system for displaying closed captions, foreign language subtitles and descriptive video service (DVS) content on mobile devices.

The vision, said Villa, is to use MAM in "a bizarre space like a museum, that's not traditional, for video artists who don't want open captions to ever grace their image."

But it could also potentially be applied to other places where captions aren't traditionally available.

"We get a lot of interest from live theater people," said Villa. "They thought there could be some application in their space."

MAM was first used publicly at IBM's THINK Exhibit in New York City in September 2011.

"It was for... a big multimedia exhibit. They needed a way to share languages and captions over a localized wi-fi network," said Villa. "Closed captions have been around for a long time and we're always looking for ways to repurpose them. We got to thinking about that data and streaming it over the web," said Botkin. The solution was to display the captioning and description data in the browser of mobile devices, so no special app or software is required on the part of the end user.

MAM was based on existing tools and utilities, some of which had been prototyped earlier during a project for the Whitney Museum. But to bring it all together, additional development work was required. While NCAM already had technology to support the streaming of captions, additional work was needed to support the multi-threaded streaming of captions, descriptions and subtitles.

An additional complication in the development of MAM was the need to serve captions and descriptions for media that didn't have timecodes built in.

"After speaking to some museums we realized that a lot of their artists are just delivering a DVD for a multimedia exhibit. They just burn something on a disk and say 'Here it is.' How do you synchronize something with that?" said Botkin. The solution was to use an off-the-shelf DVD player that outputs a relative time code, and to serve captions based on that code.
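The synchronization approach Botkin describes can be illustrated with a short sketch. This is illustrative only, not NCAM's actual code; the cue format, sample captions and function name are assumptions. Given a list of caption cues and the relative timecode reported by the player, the server simply picks whichever cue's time window contains the current moment:

```javascript
// Illustrative sketch of relative-timecode caption lookup (not NCAM's code).
// Each cue has start/end times in seconds, measured from the start of playback,
// matching the relative timecode output by an off-the-shelf DVD player.
const cues = [
  { start: 0.0, end: 3.5, text: "Here's looking at you, kid." },
  { start: 3.5, end: 7.0, text: "[airplane engine roars]" },
];

// Return the cue whose window contains the current relative time,
// or null if no caption should be shown at this moment.
function activeCue(cues, currentTime) {
  return cues.find(c => currentTime >= c.start && currentTime < c.end) || null;
}
```

Because the lookup depends only on elapsed time, it works the same whether the clock comes from a DVD player, a media server, or a human operator advancing the show.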

After about a two-month coding push, MAM was ready for its public debut last fall.

In the NCAM lab, Botkin and Villa showed me a working version of the system that grew out of that work.

As we talked, Casablanca played on a big screen TV, and was being made accessible via MAM simultaneously to deaf, hard-of-hearing and visually impaired users - as well as viewers comfortable in a range of languages other than English - all via a mobile device.

While it was at it, MAM was also driving Rear Window Captioning in the lab. For the visually impaired who use an iPhone, audio descriptions can be read by VoiceOver.

In terms of technical specs, a server is required to run the MAM software, which can be an inexpensive PC; in the NCAM lab, they ran it off a $250 netbook. A dedicated wi-fi network is also required, so a wireless router is needed. The MAM software consists of a .NET application that reads the caption, description and subtitle data and passes it to the caption server, which is written in Java. The captioning data itself is read from a file, so no database is required, though, as Botkin said, "If you wanted to caption multiple exhibits from it, it might be best to integrate it with a CMS."

The mobile code to display the captions is JavaScript and HTML5.

Normally, precomposed captioning/descriptive data files are required for MAM. These files, if not already available, can be created by WGBH's Caption Center (much as they do for TV and movies). However, MAM can also support the display of live captions, generated on the fly by stenocaptioners.

MAM can serve captions and descriptions based on a supplied SMPTE timecode or a pseudo-timecode (such as the relative one from a DVD player), or it can even be driven manually, if need be.
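To make the distinction concrete - an illustrative sketch, not NCAM's code - a SMPTE timecode of the form HH:MM:SS:FF (hours, minutes, seconds, frames) can be converted to seconds given the source's frame rate. This sketch assumes non-drop-frame timecode and a fixed frame rate:

```javascript
// Convert a non-drop-frame SMPTE timecode string ("HH:MM:SS:FF") to seconds.
// frameRate is the frames-per-second of the source (e.g. 24 for film).
// Illustrative only; real SMPTE handling must also cover drop-frame timecode.
function smpteToSeconds(timecode, frameRate) {
  const [hh, mm, ss, ff] = timecode.split(":").map(Number);
  return hh * 3600 + mm * 60 + ss + ff / frameRate;
}
```

A pseudo-timecode, by contrast, is already an elapsed time from an arbitrary start point, so it can feed the same cue lookup without any conversion.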

"If you've got something so free-form that the timing changes from performance to performance, you could have someone drive it from, say, a lighting board," said Botkin.

The browser display can be customized to look how the client wants or, as Botkin said, "It could be wrapped into a more complicated end user device that has wayfinding integrated into it," which would be of extra use to the blind.

As for the total cost of implementing the system, Villa says that, soup-to-nuts, including consulting, captioning and hardware, MAM can be up and running for less than $8,000. "We would hope closer to $5,000," said Villa. For that money, says Botkin, "You get closed captions, and you get multi-language subtitles and you can do DVS. You get a lot of bang for the buck."

The future of accessible media

While much progress has been made over the years in making media and new technologies accessible, there is still, obviously, much to do. I asked the NCAM staff what areas were currently hot topics among those who work on accessible technology. Several big topics were mentioned:

Health records: With more and more medical records being put online, the accessibility of these records is becoming a big issue. NCAM was recently given a grant to research the accessibility of this information. "What's been done to make sure that, if you're blind, you can access your health records?" asks Goldberg.

Ebooks: The rise of ebooks raises concerns about making these offerings accessible to the visually impaired.

Websites/HTML5: While websites have now been around for some 20 years, Freed estimates that only a "good, solid 5%" are truly accessible, so much work remains to be done. The good news, though, is that he sees accessibility creeping into high school and college design programs, meaning that "Kids that are learning design techniques are learning accessibility as well, instead of having to remember to do it or retrofit it."

HTML5 will have a large impact on the future accessibility of websites, much of it in a good way, according to Freed. For example, the new track element will make it much easier to add captions, subtitles and descriptions to online video.
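For example (an illustrative snippet with hypothetical file names), attaching captions or subtitles to an HTML5 video takes only a track element pointing at a timed-text file:

```html
<!-- Illustrative markup only; file names are hypothetical -->
<video controls src="exhibit-video.mp4">
  <track kind="captions" src="captions-en.vtt" srclang="en" label="English" default>
  <track kind="subtitles" src="subtitles-fr.vtt" srclang="fr" label="Français">
</video>
```

The browser handles timing and display, work that previously required custom player code for every site.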

However, not everything about HTML5 is good for accessibility. One contentious issue right now is the removal of the longdesc attribute of the img tag. It was previously used to add longer descriptions to images, but was dropped from HTML5 several years ago because the editors concluded it wasn't being used properly. According to Freed, "The accessibility community in the standards world has been in an uproar ever since, just trying to get that attribute back into the recommendation so that something for long descriptions exists until something new is developed."
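For context (illustrative markup only, with hypothetical file names), longdesc let an image point to a separate page holding a full description, beyond what fits in alt text:

```html
<!-- Illustrative only: longdesc as used before its removal from HTML5 drafts -->
<img src="gallery-map.png"
     alt="Floor map of the gallery"
     longdesc="gallery-map-description.html">
```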

Wayfinding: Helping blind people find their way around indoor spaces such as museums is also a technology that needs to be perfected, according to Botkin. "Blind people are waiting for the day when there's something that will give them feedback about where they are and where resources are."

Overall, the people at NCAM are quite optimistic about the current state as well as the future of accessible media and technologies. Whether due to federal regulations, fear of lawsuits or just the goodness of their hearts, accessibility "has a higher profile now in the development communities, in hardware and software both," says Goldberg.

That extends not just to US companies, but to Europe, Japan and now, especially, China. "I hear there's a lot going on in China for accessibility," said Goldberg. "I think they'll be surprising us with some interesting stuff."

Thanks to the efforts and attitudes of many, he says, "I'm not worried when I open the box on the new iPhone they'll have missed something egregious. I think they'll get it and I think that others are really up there too."