Although touchscreen technologies continue to evolve at a breathtaking pace, there comes a time when it is easier to tell a phone what to do than to tap or swipe. Just look at the popularity of Apple's Siri and Google Now.

Both are great services for controlling the device OS, but what if you want to build applications that users can interact with using their voice? One option is to build your own speech-to-text processing algorithm. Hmm, sounds like a tall order? Well, AT&T has a great service that handles this part of your mobile application development: their Speech API. Just send a request to the service with a WAV file of the audio, and back comes a text transcription, along with a confidence score indicating how well the service thinks it did.
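To make that request/response flow concrete, here's a minimal Python sketch of what a call might look like. The endpoint URL, headers, and response fields (Recognition, NBest, Hypothesis, Confidence) are illustrative assumptions rather than the documented contract, so check the AT&T developer documentation for the exact details:

```python
import requests  # third-party HTTP library

# Illustrative sketch only: the endpoint URL, header names, and response
# fields below are assumptions for demonstration, not the documented
# AT&T Speech API contract.
SPEECH_URL = "https://api.att.com/speech/v3/speechToText"  # assumed endpoint
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"                   # obtained via the AT&T OAuth flow

def transcribe(wav_path):
    """POST a WAV file to the speech service and return (transcript, confidence)."""
    with open(wav_path, "rb") as f:
        resp = requests.post(
            SPEECH_URL,
            headers={
                "Authorization": "Bearer " + ACCESS_TOKEN,
                "Content-Type": "audio/wav",
                "Accept": "application/json",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    # Assumed response shape: JSON with an N-best list of hypotheses,
    # each carrying the recognized text and a confidence score.
    best = resp.json()["Recognition"]["NBest"][0]
    return best["Hypothesis"], best["Confidence"]

if __name__ == "__main__":
    text, confidence = transcribe("command.wav")
    print("Heard: %s (confidence %.2f)" % (text, confidence))
```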

Sound cool? Want to learn more?

Then register for the AT&T DevLab (promo code 'dworksatt' for 50% off) coming up at the Computer History Museum in Mountain View, CA (arguably the heart of Silicon Valley) on September 25. This is no death-by-PowerPoint event. For instance, in one session you'll dive in and use the Speech API to direct Sphero, a 'whimsical robotic ball'.

And if that isn't enough, the hands-on lab also includes an overview of the AT&T Application Resource Optimizer (ARO), which helps you improve the performance and reduce the battery drain of the applications you develop.

The event has a great speaker lineup, including Eric Ries, author of 'The Lean Startup', and Guy Rosen, CEO of Onavo.

Coming soon: AT&T and IBM are teaming up to integrate these new APIs with IBM Worklight, our mobile enterprise application platform. Once you've given the APIs a try, get familiar with Worklight 5.0 as well. You'll be ready to start building high-performing, speech-enabled apps that can be deployed to multiple mobile client platforms without device-specific coding. Stay tuned!