http://www.w3.org/ -- 14 October
2008 -- W3C today published a standard that will simplify the
development of Web applications that speak and listen to users. The Pronunciation
Lexicon Specification (PLS) 1.0 is the newest piece of W3C's
Speech Interface Framework for creating Web applications driven by
voice and speech. PLS can reduce the cost of developing these
applications by allowing people to share and reuse pronunciation
dictionaries. In addition, PLS can make it easier to localize
applications by separating pronunciation concerns from other
parts of the application.
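As a sketch of what such a shareable dictionary looks like, the fragment below defines a single entry in PLS 1.0 markup. The word and its IPA transcription are illustrative, but the element names and namespace follow the published specification.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
         xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
         alphabet="ipa" xml:lang="en-US">
  <!-- One entry: the written form and its pronunciation in IPA -->
  <lexeme>
    <grapheme>tomato</grapheme>
    <phoneme>təˈmeɪtoʊ</phoneme>
  </lexeme>
</lexicon>
```

Because the pronunciations live in a separate document like this one, a localized version of an application can simply swap in a different lexicon file rather than edit the application itself.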

"Standard pronunciation lexicons were a missing piece in the W3C
Speech Framework," said Paolo Baggia, Director of International
Standards at Loquendo and editor of the PLS 1.0 specification. "I'm
very happy to have actively contributed to filling this gap. As a
result, starting today people can create '100% standard' voice
applications."

Voice Interaction Part of W3C's One Web Vision

Real-world voice-driven Web applications abound, though people may
not always realize they are interacting with a Web service; examples
include airline departure and arrival information, banking
transactions, automated phone appointment reminders, and automated
telephone receptionists. By one estimate, over 85% of Interactive
Voice Response (IVR) applications for telephones (including mobile)
use W3C's VoiceXML 2.0 standard.
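For readers unfamiliar with VoiceXML, a minimal dialog of the kind behind such IVR services might look like the following sketch; the prompt wording and grammar file name are hypothetical, but the structure follows VoiceXML 2.0.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <!-- Ask the caller a question and listen for an answer
         matching a speech grammar (file name is a placeholder) -->
    <field name="city">
      <prompt>Which city are you flying to?</prompt>
      <grammar src="cities.grxml" type="application/srgs+xml"/>
    </field>
    <filled>
      <prompt>Checking departures for <value expr="city"/>.</prompt>
    </filled>
  </form>
</vxml>
```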

"There are 10 times as many phones in the world as connected PCs.
Phones will become the major portal to the Web," said
James A. Larson, co-Chair of the Voice Browser
Working Group, which produced the new standard. "Speech recognition is
not yet widely associated with the 'visual Web', but this will change
as devices continue to shrink and make keyboards impractical, and as
cell phones become more prevalent in regions with low literacy rates."

Asking for directions while driving and hearing the response
through speech synthesis illustrates how practical "hands-free"
applications can be for mobile users. Voice applications also benefit
people with some disabilities (such as vision limitations) and people
who cannot read.
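In a speech-synthesis scenario like the driving-directions example, an application can point the synthesizer at a shared PLS dictionary using the lexicon element of W3C's Speech Synthesis Markup Language (SSML); the lexicon URI and spoken text below are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<speak version="1.0"
       xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  <!-- Load a shared pronunciation dictionary (placeholder URI),
       e.g. one covering local street names -->
  <lexicon uri="http://www.example.com/street-names.pls"/>
  In two hundred meters, turn left onto Rue de Rivoli.
</speak>
```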

W3C considers voice access to be one piece of more general
"multimodal" access, where users can use combinations of means to
interact: voice input, speech feedback, electronic ink, touch input,
and physical gestures (such as those used in some video games). The Voice Browser Working Group and the Multimodal Interaction Working Group are
coordinating their efforts to make the Web available on more devices
and in more situations.