How Amazon Taught the Echo Auto to Hear You in a Noisy Car

Dhananjay Motwani is thinking of an animal, and his 20 Questions opponent is, question by question, trying to figure out what it is.

“Is it larger than a microwave oven?” “Yes.” “Can it do tricks?” “Maybe.” “Is it a predator?” “No.” “Is it soft?” “No.” “Is it a vegetarian?” “Yes.”

What’s impressive here isn’t that the questioner is a computer; that’s old hat. It’s that the machine and Motwani are chatting in his blue Hyundai Sonata, trundling along one of Silicon Valley’s many freeways. The traffic, as it tends to be in this part of the country, is bad. The game is a good way not just to pass the time, but to show off what the Echo Auto can do as we creep toward the Sunnyvale lab where Amazon taught it to understand the human voice in the acoustic crucible that is the car.

Amazon introduced the road-going, Alexa-equipped device in September of last year, and started shipping to some customers in January. Amazon is working with some automakers to build Alexa into new cars, but the $50 Auto works with tens of millions of older vehicles already on the road: All you need is a power source (either a USB port or cigarette lighter) and a way to tap into the car’s speakers (Bluetooth or an aux cable).

About the size and shape of a cassette, the Echo Auto sits on your dashboard and brings 70,000 Alexa skills into your car. Its eight built-in microphones let you make phone calls, set reminders, compile shopping lists, find nearby restaurants and coffee shops, and hear Jake Gyllenhaal narrate The Great Gatsby.

An Artificial Head Measurement System with “the acoustically relevant structures of the human anatomy” plays a key role in Amazon's development of the Echo Auto.

Adding the Auto to a growing collection of Echo products makes sense. “There’s no better place for voice than in the car,” says Miriam Daniel, Amazon’s head of Echo products. Your hands are supposed to be on the wheel, your eyes on the road. But when she and her team started developing the thing about 18 months ago, they discovered that there’s no worse place than the car for making voice recognition actually work. “We thought the kitchen was the most challenging acoustic environment,” Daniel says. But family chatter and humming refrigerators proved easy to overcome compared to wind, air conditioning, rain, the radio, and road noise. “The car was like a war zone.”

To safely cross the aural minefield, Daniel’s team started by adapting the Echo’s hardware, software, and user interface to the car. That meant adjusting the device so it can handle being turned on and off frequently, and boot up in a few seconds instead of the minute and a half it took when they first tried it. The team adjusted its responses to be shorter. They added geolocation, so the device can point users to the nearest caffeine injection site. They disabled incoming “Drop Ins,” where approved friends and such can automatically connect to one’s Echo device for a chat.

Daniel’s team created new audio cues and streamlined the potentially distracting activity of the Auto’s LED bar. They gave it one tiny speaker to play the occasional error message, but chose to rely on the car’s audio system to do the heavy lifting, to reduce the Auto’s bulk and cost. They tested a variety of microphone arrays and settled on the dashboard as the best placement after eliminating the cupholder (far from the driver’s mouth and prone to rattling about), an air-vent clip (too noisy), and the ceiling (which would leave wires dangling all over the place).

At Amazon’s reliability lab, the Echo Auto endured climatic chambers, heat and UV exposure, drop tests—just what they sound like—and yank tests, in which a specialized device yanks cords out of the thing with different levels of force. Standard stuff for all Echo devices.

But making sure the Echo can hear you properly in a moving car took a new kind of test. That’s why Motwani, an Alexa product manager, is pondering large, not-soft herbivores while driving me to Amazon’s testing complex in Sunnyvale. The complex contains mocked-up kitchens and living rooms, but I’m not allowed to see those. Instead Motwani leads me into a gray room the size of a one-car garage, most of it taken up by a black Honda Accord.

Amazon built a library of road noises by sending drivers into the wild in cars loaded up with microphones, then playing the sound recorded by each at a speaker in the same location.

For up to 18 hours on end, the dummy will talk to the Echo Auto sitting on the dash, calling out the same commands and queries over and over again.

In the driver’s seat is what looks a bit like the upper half of a crash test dummy, a head and shoulders mounted on a gray plastic box. The head features a black cross where a human has eyes and a nose, a pill-shaped opening for a mouth, and unsettlingly accurate molded ears. Its maker, Head Acoustics, calls it an Artificial Head Measurement System with “the acoustically relevant structures of the human anatomy,” and it’s a common tool in audio testing. Also in the Honda are six large speakers, placed throughout the cabin.

Standing by the computers on a table against one wall, Motwani and two of his fellow Amazon engineers decide to start their demonstration at 40 mph, in the rain. A few keystrokes later, the speakers come to life, and the inside of the unmoving, sheltered car becomes an auditory facsimile of what it sounds like to drive through a storm: the pelting rain, the swiping windshield wipers, the engine running, the tires humming against the wet asphalt. They’ve collected these sounds by sending drivers into the wild in cars loaded up with microphones, then playing the sound recorded by each at the speaker in the same location.

From the computer, the engineers show off the other conditions the car can mimic: different speeds, changing weather conditions, windows up or down, talk radio or music blaring. This is where the dummy goes to work, and when I learn why its sole facial feature is a mouth, which is really a speaker. For up to 18 hours on end, it will talk to the Echo Auto sitting on the dash, calling out the same commands and queries over and over again. The team records Alexa’s responses, looking for weak points and misunderstandings. This is how machine learning happens: You feed your system as much data as you can find. And the process works best when that data is carefully selected (or created) to simulate what Alexa will be listening for.
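The data-collection approach described above — overlaying realistic cabin noise on known voice commands to train and stress-test a recognizer — is a standard audio-augmentation technique. Amazon hasn’t published its pipeline, so as a hypothetical sketch only, here is the core step in Python with NumPy: mixing a recorded noise bed into a clean utterance at a chosen signal-to-noise ratio (all names and signals here are illustrative, not Amazon’s).

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay a noise recording onto clean speech at a target SNR (in dB)."""
    # Loop the noise so it covers the full utterance, then trim to length.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    # Scale the noise so speech power / noise power matches the target SNR.
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Illustrative stand-ins: a 1-second 440 Hz tone as the "utterance"
# and white noise as the "road noise," mixed at 5 dB SNR.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.normal(0.0, 0.5, 8000)
noisy = mix_at_snr(speech, noise, snr_db=5.0)
```

Sweeping `snr_db` across a range of values (quiet idling down to windows-open highway levels) is one simple way to generate the kind of varied training and test conditions the garage rig reproduces physically.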

Now that the Echo Auto has shipped to some customers, the garage-lab is focused on improving its performance in extreme conditions like convertibles and rain (though probably not the combination of the two). Like other Alexa products, it will keep getting better, and keep adding skills. But today, at least, it hasn’t bested the human mind: my ride with Motwani ended before it could figure out what animal he was thinking of. It was an elephant.
