Real-Time Acoustic Processing Has Big Data Potential

You're jogging down a busy city street, cranking tunes on your smartphone, oblivious to the world around you. The intersection ahead looks clear, and you're unaware of loud sirens signaling that a speeding ambulance is coming your way. But before disaster strikes, your smartphone shuts off the music and warns you of the approaching vehicle.

This is just one of many potential uses of real-time acoustic processing, a machine-learning system that analyzes ambient audio to predict near-future outcomes. In the example above it saved a clueless jogger from being squashed like a bug, but the technology has other potential uses too. It could, for instance, detect when industrial equipment is about to fail, alert deaf people to alarms and other auditory warnings, help ornithologists analyze bird calls, and even monitor bodily sounds -- such as heartbeats, stomach rumblings, and snoring -- for use by mobile medical apps.

"Wearable technology is now powerful enough to do serious machine learning, even at the audio level. And that technology will change the world in terms of monitoring," said David Tcheng, One Llama Labs' cofounder and chief science officer, in a phone interview with InformationWeek.

The company's Audio Aware machine-learning app is capable of analyzing hundreds of sounds, including music, from its surroundings. It will be available this month in the Google Play store; One Llama Labs plans to develop iOS and Windows Phone versions too, but no timetable was given.

The audio technology is based on research started a decade ago at the National Center for Supercomputing Applications' Automated Learning Group (which Tcheng cofounded) at the University of Illinois at Urbana-Champaign. One Llama Labs' original focus was on music recommendation technologies -- "sort of like what Pandora does, but using supercomputers," explained company cofounder and EVP of business development Hassan Miah, who joined the call.

"The core acoustic, artificial-intelligence machine learning could apply to a lot of things," said Miah. "And now with the emergence of wearable technology, the cloud, and other factors, [our] technology can be used well beyond music. So that's the genesis of how we came out with the... Audio Aware system."

The company sees three primary markets for Audio Aware on mobile devices. The first: deaf users. "They can't hear alarms and other alerts," said Tcheng. "With my previous work with audio recognition and bird-call analysis and speech recognition -- in general, machine learning -- I knew we could detect these sounds with some of the audio machine-learning software I've created."

The second group: music lovers wearing headphones. "There is an epidemic of people just walking around -- kind of like zombies -- attached to their cellphones," said Tcheng with a chuckle. "And in the worst case [they're] cranking music so loud that they can't hear common threats."

The third group: people who want to be notified of specific sounds -- for example, nature lovers or users who study birds and other wildlife in outdoor settings.

Medical applications have potential as well, although identifying bodily sounds may present its own set of technical challenges. "We've been thinking about doing a sleep apnea application, because all the system needs to learn is how to recognize a breath," said Tcheng. "But as soon as you put the microphone on a body, you pick up all sorts of bodily sounds, from heart rate to the digestion system. If you've ever heard someone's tummy, it makes all sorts of noise."

In industrial settings, audio machine-learning technology might be used to distinguish between normally functioning machines, those in need of maintenance, and those about to fail, Tcheng said.
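Tcheng didn't describe One Llama's pipeline, but the general recipe behind this kind of machine-sound classification is easy to sketch: extract a few cheap acoustic features from a clip, then match them against reference recordings of each machine state. The toy Python below is purely illustrative -- the synthetic signals, the two features, and the nearest-centroid matching are all assumptions for the sketch, not One Llama's actual method:

```python
import math
import random

def make_signal(kind, n=2048, sr=8000):
    """Generate a toy machine-noise waveform: a clean hum ("healthy"),
    a hum with a rattle overtone ("worn"), or a hum buried in
    broadband noise ("failing")."""
    random.seed(0)  # deterministic noise, so the demo is repeatable
    out = []
    for i in range(n):
        t = i / sr
        s = math.sin(2 * math.pi * 120 * t)              # base 120 Hz hum
        if kind == "worn":
            s += 0.6 * math.sin(2 * math.pi * 900 * t)   # rattle overtone
        elif kind == "failing":
            s += 1.5 * (random.random() - 0.5)           # broadband noise
        out.append(s)
    return out

def features(x):
    """Two cheap acoustic features: RMS energy and zero-crossing rate."""
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    zcr = sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / len(x)
    return (rms, zcr)

# "Train": one reference recording (feature centroid) per machine state.
centroids = {k: features(make_signal(k))
             for k in ("healthy", "worn", "failing")}

def classify(x):
    """Label a clip by its nearest reference centroid in feature space."""
    f = features(x)
    return min(centroids,
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(f, centroids[k])))

print(classify(make_signal("worn")))  # -> worn
```

A production system would replace the hand-rolled features with something like mel-frequency cepstral coefficients and the centroid matching with a trained classifier, but the shape of the problem -- features in, state label out -- is the same.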


Jeff Bertolucci is a technology journalist in Los Angeles who writes mostly for Kiplinger's Personal Finance, The Saturday Evening Post, and InformationWeek.


Yes! With this new technology, people will be free to completely ignore their surroundings and the people around them. Reality will only have to assert itself, well, when one's about to walk into a fountain.

Railroads already analyze audio of train wheels, using trackside microphones to listen for growling bearings and software to spot the anomalies. But it's not real time, and it takes a person to make a judgment on the wheels deemed out of spec.

The potential here is really limitless! Think of the military applications -- it would be impossible to surprise a soldier, because the device could be "trained" to pick up the faintest sounds and recognize them as an enemy's steps. Also medical -- what is the sound of a heart about to suffer a heart attack, or of an artery about to burst?
