At some point in the near future, automatic speech transcription will become fast, free, and decent. And this moment — let’s call it the Speakularity — will be a watershed moment for journalism.

So much of the raw material of journalism consists of verbal exchanges — phone conversations, press conferences, meetings. One of journalism’s most significant production challenges, even for those who don’t work in radio, is turning these verbal exchanges into text so that scripts and stories can be woven out of them.

After the Speakularity, much more of this raw material would become available. It would make audio recordings accessible to the deaf and aid in translating audio into other languages. Obscure city meetings could be recorded and auto-transcribed; interviews could be published nearly instantly as Q&As; journalists covering events could focus their attention on analyzing the proceedings rather than capturing them.

Because text is much more scannable than audio, recordings automatically indexed to a transcript would be much quicker to search through and edit. Jon Stewart’s crew for The Daily Show uses expensive technology to process and search through the hundreds of hours of video the various news programs air each week. Imagine if that capability were opened up to citizens — if every on-air utterance of every pundit, politician, or policy wonk were searchable on Google.
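The searchability described above comes from tying each transcribed word to its position in the recording. Here is a minimal sketch in Python, assuming a transcript represented as (word, start-time-in-seconds) pairs — a simplified stand-in for whatever format a real recognizer would emit:

```python
# Sketch: index a transcript's words to their audio timestamps, so a text
# search jumps straight to the matching moments in the recording.
from collections import defaultdict

def build_index(timed_words):
    """Map each lowercased word to the timestamps where it was spoken."""
    index = defaultdict(list)
    for word, start in timed_words:
        index[word.lower()].append(start)
    return index

def search(index, query):
    """Return the timestamps (in seconds) where the query word occurs."""
    return index.get(query.lower(), [])

# A tiny machine-generated transcript with per-word timings.
transcript = [("the", 0.0), ("budget", 0.4), ("vote", 0.9),
              ("on", 1.3), ("the", 1.5), ("budget", 1.7)]
idx = build_index(transcript)
print(search(idx, "budget"))  # -> [0.4, 1.7]
```

A search for a pundit’s phrase would then return a list of seek points rather than a pile of audio files to listen through.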

The likeliest path to the Speakularity runs through Google. The company has already taken significant steps in this direction. They’ve trained their speech-processing algorithms on the millions of queries submitted to Google 411, so that now, my Android phone is already pretty good at recognizing my voice commands. They automatically add captions to YouTube videos and transcribe voicemails through Google Voice. Developers can already call on Google’s voice recognition system when building apps for Android devices.

The Speakularity itself probably won’t happen in 2011, but I think a key moment might. Let’s say that sometime in 2011, Google unveils a product called Google Transcribe. Not for charity, of course; better transcription = more relevant ads. The core of the product is a speech transcription API: send it audio and get back text in return. But there’s a front end to Transcribe where non-techies can get their MP3s auto-transcribed. Crucially, that app lets users manually correct the transcription (highlight a passage and it plays automatically), enabling a human feedback loop that makes the machine better and better over time. In addition to captioning, YouTube videos appear by default next to an automatically generated transcript that viewers can use for navigation, Debate-Viewer-style.
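The human feedback loop imagined above — capture each manual fix alongside the machine’s original guess, then feed the corrected pairs back into training — can be sketched in a few lines. Everything here is illustrative; “Google Transcribe” is hypothetical, and the class and method names are my own invention:

```python
# Illustrative sketch of a correction log for a hypothetical transcription
# service: each user edit becomes a (clip, corrected text) training pair.
class TranscriptionFeedback:
    def __init__(self):
        # (clip_id, machine_text, human_text) triples
        self.corrections = []

    def record_correction(self, clip_id, machine_text, human_text):
        """Log a user's fix; only passages the user actually changed matter."""
        if machine_text != human_text:
            self.corrections.append((clip_id, machine_text, human_text))

    def training_pairs(self):
        """Audio-clip / corrected-text pairs to feed back into the recognizer."""
        return [(clip_id, human) for clip_id, _, human in self.corrections]

feedback = TranscriptionFeedback()
feedback.record_correction("clip-17", "the speak-u-larity", "the Speakularity")
feedback.record_correction("clip-18", "press conference", "press conference")
print(feedback.training_pairs())  # -> [('clip-17', 'the Speakularity')]
```

The point of the sketch is the asymmetry: unchanged passages cost the user nothing, while every edit quietly produces exactly the labeled data a recognizer needs to improve.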

Constant social feedback plus machine learning could improve automatic speech transcription to the point where it’s finally ready for prime time. And when that happens, the default expectation for recorded speech will be that it’s searchable and readable, almost instantly. I know this sounds far-fetched, but I think it’s something like the future.

I think you meant to say: “make audio recordings accessible to the DEAF.”

Patrick

I imagine software companies will do what they can to ensure this technology is never free.

For starters, the best software – like Dragon – is only good for one user per software license. Each license runs from $400 to $5,500. Medical transcription, for example, costs around $22 per transcribed document, so by the end of the year, hospitals can rack up massive bills on transcription charges. It’s far cheaper for hospitals to buy a software license for each of their physicians (about $1,200) than it is to have each document manually transcribed. That represents a huge cost savings for the hospitals, but it’s also the bread and butter of the software companies that have developed the technology.

Hospitals may have the final say as to whether the technology will be made available free of charge, but this technology is far more sophisticated than most people think, and giving it away won’t be easy for the companies that have invested so much in this type of software.

Matt Mireles, SpeakerText (http://speakertext.com)

Hi Matt,

You’re 100% correct on the need. But you’re wrong about the ability of machines to tackle this problem alone. What you’re asking for is genuine artificial intelligence. And we’re at least 15–20 years away from that.

If you look at what actually comes out of the research labs and works, there’s always some human component; this includes Google, which, btw, was the first search engine to incorporate human link creation into its algorithm.

SpeakerText combines speech recognition with crowdsourcing to provide the kind of on-demand speech-to-text that you want. It ain’t free, but unlike Google Voice, it’s reliable and it actually works. Check it out: http://speakertext.com