Yesterday Glenn Goldstein from MTV, Craig Cuttner from HBO, Shane Feldman from the National Association of the Deaf, and I spoke at SXSW in Austin, TX. Our topic was “The Future of Access to Digital Broadcast Video” and we covered current policy, tooling, and challenges broadcasters encounter in supporting accessibility for video at scale. Delivering captions for 500 hours of video is a lot of work, but what if you need to caption 100,000 hours every month? We discussed this question and more, much of which is not directly reflected in the slides, but I’m providing the slides here for people to check out.

Additional preset for caption providers using Streamtext. Streamtext provides an online caption delivery service utilized by hundreds of real-time captioners in North America and Europe.

Support for word-by-word caption delivery and caption correction. End users can receive captions as they are entered by the stenocaptioner rather than waiting for a full line of captions to be delivered. Stenocaptioners can also correct mistakes in the captions by backspacing to delete errors and retyping the correction. This feature is an option for the caption provider: at present Streamtext and CaptionFirst support it.

Support for in-meeting captioners. Sometimes a meeting is scheduled when a stenocaptioner is not available, or the budget doesn’t allow the hiring of a professional. For these situations, it is now possible to assign a participant the role of captioner. The captioner’s work appears in the caption pod, can be exported to text or HTML, and is archived as part of recorded sessions just like captions delivered by stenocaptioners. In-meeting captioners are less expensive, but they also typically deliver lower-quality captions for end users. If experimenting with in-meeting captioners, make sure to ask end users who need captions how effective the results are.

Updated documentation for caption provider implementation is provided in the download package. Any caption service can deliver captions to Adobe Connect’s caption pod with this information.

Some images of the new pod:

In-meeting captioner view of caption pod

Selecting a participant as captioner

Participant view of in-meeting captions

Caption provider options

As with the last version of this pod, development work was done by eSyncTraining, and we hope that you are as pleased with the results as we are!

A key question around captioning is the best file format for caption data. The W3C’s TTML is a standard that is commonly used, and SMPTE has extended it to create an additional format, commonly known as SMPTE-TT. In addition to these, the WHATWG recently created a new format named WebVTT (based on an earlier format, SRT). Not surprisingly, authors are unsure which format to use. As appealing as a single caption format may be, it currently seems unlikely that any one format will meet the needs of all providers of captions.
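For a sense of how the formats differ in flavor: TTML and SMPTE-TT are XML vocabularies, while WebVTT, like SRT before it, is a plain-text file of timed cues. A minimal WebVTT file (cue text invented purely for illustration) looks like this:

```
WEBVTT

NOTE A header line, a blank line, then one or more timed cues.

00:00:01.000 --> 00:00:04.000
Plain-text cues with millisecond timings.

00:00:04.000 --> 00:00:07.500
A single cue may span
multiple lines of caption text.
```

SRT is nearly identical but numbers each cue and uses a comma as the decimal separator in timings; the XML formats trade this simplicity for richer styling, layout, and metadata features.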

Adobe has helped those delivering video via Flash provide closed captioning for several years. Flash CS3 included support for TTML (then known as DFXP) back in 2007, and we have provided similar TTML support in the Open Source Media Framework (OSMF). Our most recent captioning work addresses other captioning standards:

Support for SMPTE-TT in OSMF. We’ve developed a plugin for OSMF to support SMPTE-TT. This is freely available and licensed under the BSD software license, so even if you aren’t using OSMF it is possible to utilize the source code to support SMPTE-TT in other environments. This plugin supports robust positioning and formatting for closed captions.
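In OSMF, plugins like this are loaded through the media factory before media elements are created. The sketch below shows the general pattern; the plugin SWF name and video URL are placeholders, so consult the plugin’s own documentation for the actual resource to load.

```actionscript
// Sketch: loading a captioning plugin into an OSMF-based player.
import org.osmf.events.MediaFactoryEvent;
import org.osmf.media.MediaElement;
import org.osmf.media.MediaFactory;
import org.osmf.media.URLResource;

var factory:MediaFactory = new MediaFactory();
factory.addEventListener(MediaFactoryEvent.PLUGIN_LOAD, onPluginLoad);

// Placeholder plugin resource; use the actual SMPTE-TT plugin SWF or class.
factory.loadPlugin(new URLResource("SMPTETTPlugin.swf"));

function onPluginLoad(event:MediaFactoryEvent):void
{
    // Once the plugin is registered, media created through the factory
    // can expose the plugin's caption support to the player.
    var video:MediaElement =
        factory.createMediaElement(new URLResource("video.flv"));
}
```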

Participation in a community group for WebVTT at the W3C. WebVTT is still new and needs work to fully support the functionality necessary for captions. The advantage of this work happening at the W3C is that there is a greater opportunity for additional input. As this is a format that browser vendors have expressed interest in implementing, it is important for developers and end users to join the community group and weigh in on the format’s strengths and weaknesses, to help ensure that the spec sufficiently supports the needs of all concerned. Adobe has joined this group to help ensure that this is true for the WebVTT community spec being drafted.

Our intent is to support what our customers want, and we have customers who want each of these three formats. As a result we’re engaged with multiple efforts. The bottom line for Adobe is that end users who depend on captions need complete information in order to access video and audio content, and developers and video providers need efficient solutions that fit into their overall video workflow. Whether providing implementations of a developed standard or engaging in a standards development activity, we will work to ensure that both end user and video provider needs are met.

I spoke on Adobe’s efforts to support the captioning aspects of the 21st Century Communications and Video Accessibility Act at the TDI Conference in Austin, TX a couple of weeks ago. In this talk I highlight a number of captioning efforts that Adobe has worked on; most people interested in captioning are familiar with some, but not all, of these. Take a look and let me know if you have any comments.

Built-in ability for users to record a transcript of the captioning, and export to text or HTML. (Meeting hosts can disable this if required)

Five color and contrast options for caption display, and multiple font size choices

Support for multiple concurrent tracks of captioning, of particular use for multi-lingual audiences

End-user rewind controls to review caption information

As with the earlier version, captions are recorded when a Connect meeting is recorded, so an archived meeting will display any captions that were available during the live meeting. End users who find live captioning distracting, or who simply do not wish to view captions, can disable the display in their own view of the meeting without disrupting the captioning for other participants.

Closed captioning vendors interested in delivering captioning to Adobe Connect meetings can contact us (email: access [at] adobe) for instructions on how to communicate with the caption pod.

The pod was developed by eSyncTraining, who did a great job taking a wide variety of requirements into consideration. We’re already discussing further improvements: in developing this pod we consulted with experts at each of the caption agencies, as well as current users and captioning experts, and as a result have additional ideas to investigate. If you have other ideas for the pod, please let us know.

Last week, members of the Adobe accessibility team attended California State University, Northridge’s “Technology and Persons with Disabilities Conference” – aka CSUN. This is a big event in accessibility each year, and if you are interested in accessibility you should consider attending in 2009.
Adobe participated in four talks at CSUN:

IAccessible2 Development: An Accessibility API that Works for Assistive Technologies and Applications. This was a panel discussion involving IT and assistive technology companies.

Accessible PDF Authoring Techniques. This was a talk by Greg Pisocky and Pete DeVasto from Adobe and Brad Hodges from the American Foundation for the Blind. The presentation slides are available.

Rich Internet Applications with Flash and Dreamweaver. This was a talk by Matt May and Andrew Kirkpatrick discussing Flash and AJAX accessibility, related to Adobe’s SPRY framework, Flash and Flex. The presentation slides are available.

Accessible Internet Video. This was a talk by Andrew Kirkpatrick on how you can deliver the most accessible experience in video online using Flash. The presentation slides are available. I’m going to post the main demonstration example shortly.

CNET TV is providing captioning for the videos on their site as of last week. The video is in Flash and uses the DFXP caption support we put into Flash CS3. Check it out yourself at http://www.cnettv.com/9742-1_53-31702.html.
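For those curious what a DFXP caption file looks like, here is a minimal example. The text, timings, and 2007-era namespace shown are illustrative; check the Timed Text spec for the exact namespace your player version expects.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal Timed Text (DFXP) caption file; content is invented
     for illustration. -->
<tt xml:lang="en" xmlns="http://www.w3.org/2006/04/ttaf1">
  <body>
    <div>
      <p begin="00:00:01.00" end="00:00:04.00">Captions are timed paragraphs.</p>
      <p begin="00:00:04.00" end="00:00:07.50">Each has a begin and end time.</p>
    </div>
  </body>
</tt>
```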

I (Andrew Kirkpatrick) am delivering a seminar on Flash captioning Tuesday, July 10. See details below and sign up on our website.
Title: Captioning in Flash
Tuesday, July 10th, 2007 11:00 A.M. PDT
Adding captions to video in Flash is essential to ensure that users who are deaf or hard of hearing can access Flash video content. Adobe Flash CS3 includes a new component that makes captioning easy and effective, and a variety of captioning tools are available to help developers define a process that fits into their existing workflow. This session will share best practices for Flash 9 SWFs, Flex applications, and older Flash 8 SWFs, and will show you step by step how to get captions into your video.
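In ActionScript 3, the CS3 approach comes down to pairing an FLVPlayback instance with the FLVPlaybackCaptioning component and pointing the latter at a Timed Text caption file. A minimal sketch (the file names are placeholders):

```actionscript
import fl.video.FLVPlayback;
import fl.video.FLVPlaybackCaptioning;

// Player for the video content (placeholder file name).
var player:FLVPlayback = new FLVPlayback();
player.source = "content.flv";
addChild(player);

// Captioning component wired to the player; source points at a
// Timed Text (DFXP) caption file (placeholder file name).
var captioning:FLVPlaybackCaptioning = new FLVPlaybackCaptioning();
captioning.flvPlayback = player;
captioning.source = "content.dfxp.xml";
captioning.showCaptions = true;
addChild(captioning);
```

The captioning component listens to the player’s timeline and displays whichever caption is active at the current playback time, so no per-frame scripting is needed.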