Recode: This self-driving startup wants to change the way robot cars make decisions and communicate

This robot car is brought to you by deep learning.

From consumers to the experts, there’s still a lot of uncertainty about self-driving cars.

Is the public ready for two-ton automobiles driven by software? Is Lidar — an expensive and often unwieldy technology that uses lasers to calculate distances — even necessary? (Elon Musk says no.) And how will self-driving cars communicate with other cars and pedestrians?

This new self-driving startup — like many others before it — purports to have the answers. Drive.ai, co-founded by Carol Reiley, Sameep Tandon and a team of six other engineers, came out of stealth just last week with one basic premise: The answer to all these questions lies within deep learning.

What’s the difference between deep learning and other machine learning systems? At the risk of oversimplifying, deep learning is a machine learning approach loosely modeled on the human brain, whereas other machine learning systems often rely on hand-written rules and human programming.

So why do Reiley and Tandon think deep learning is the key to unlocking the true potential of self-driving cars?

Consider this: If you had to give a machine a set of rules that would help it recognize a chair, what would those rules be? Maybe: This chair stands on four legs and is brown.

But what happens when you flip that chair upside down? Or if you introduce a different colored chair? Based on the given rules, that machine may not be able to recognize the object.

But give a person a chair — just a chair — and they’ll be able to recognize chairs everywhere.

Unlike other computer vision and machine learning systems, deep learning systems learn from examples and generalize from them, letting them make decisions in a way that more closely resembles human judgment.
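The chair example above can be sketched as a toy Python snippet. This is purely illustrative — the feature dictionaries and labels are invented, and a simple nearest-neighbor lookup stands in for a trained deep network (real systems learn from raw images, not hand-picked features) — but it shows how fixed rules break on a flipped or differently colored chair while an example-driven classifier generalizes:

```python
# Rule-based approach: hard-coded rules, as in the article's example.
def rule_based_is_chair(obj):
    """A chair stands on four legs and is brown."""
    return obj["legs"] == 4 and obj["color"] == "brown"

# "Learned" approach: classify by the most similar labeled example.
# (A stand-in for a trained network -- the principle is generalizing
# from examples instead of following fixed rules.)
training_examples = [
    ({"legs": 4, "color": "brown", "upside_down": False}, True),
    ({"legs": 4, "color": "red", "upside_down": False}, True),
    ({"legs": 4, "color": "brown", "upside_down": True}, True),
    ({"legs": 2, "color": "brown", "upside_down": False}, False),  # not a chair
]

def distance(a, b):
    # Count how many features differ between two objects.
    return sum(a[key] != b[key] for key in a)

def learned_is_chair(obj):
    # Return the label of the nearest training example.
    _, label = min(training_examples, key=lambda ex: distance(ex[0], obj))
    return label

# A red chair flipped upside down: the rules reject it,
# but the example-driven classifier still recognizes it.
flipped_red_chair = {"legs": 4, "color": "red", "upside_down": True}
print(rule_based_is_chair(flipped_red_chair))  # False
print(learned_is_chair(flipped_red_chair))     # True
```

A real deep network would learn its own features from millions of labeled images rather than matching a handful of hand-built examples, but the failure mode of the rule-based function is exactly the one the article describes.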

“Other systems use a lot of rules, but autonomous driving is a hard and complicated problem,” Tandon, who is also Drive.ai’s CEO, told Recode. “Deep learning systems are the most effective in a practical, real-world application because it can capture more nuance and different variations than other machine learning algorithms.”

After launching out of stealth early last week, Drive.ai joined the growing fray of self-driving startups with plans to integrate with commercial fleets in an effort to acquire more driving data. Eventually, like the self-driving startups before them, Reiley and Tandon hope to work directly with automakers and have already secured a few fleet partners, though they declined to say who they were.

Drive.ai isn’t alone in its ambitions. Many of the freshman class of self-driving startups that have recently launched fall into one of two buckets: They were either created in the hope of eventually being acquired by an automaker or, like Drive.ai, they aim to be vehicle-agnostic and avoid being tied to a single legacy automaker.

Think Cruise, which was acquired by General Motors, versus Mobileye, which has become an industry standard. But as the self-driving market gets more competitive, automakers may shift away from working with technology companies whose systems are standard across the industry, in order to secure a competitive advantage over their rivals.

For now, the team is focused on its commercial fleet partnerships to ensure its technology is safe and reliable. Tandon was studying artificial intelligence at Stanford, along with several of the team’s other engineers, before halting his studies to start Drive.ai; Reiley is a Ph.D. candidate at Johns Hopkins. The two have so far raised $12 million from undisclosed investors and employ a total of 20 people.

The company also announced that General Motors veteran Steve Girsky joined its board — a move that may bolster its relationships with carmakers (or even hint at an existing one).

Admittedly, Drive.ai isn’t the only company using deep learning algorithms in its self-driving cars, Tandon said. Mobileye, for example, recently announced it would use deep learning for its camera systems.

But Drive.ai is the only one using deep learning to drive the entire system, he said. That means deep learning is powering everything — from the sensors and cameras, to the vehicle’s decision-making, to the way the car communicates with people and things around it.


Communication, like deep learning, is crucial, according to Reiley and Tandon.

Cars, regardless of what system they’re running, will need to be able to communicate with one another, just as people will need to be able to understand the cars’ intentions. As the self-driving space becomes increasingly crowded with the entrance of legacy and new transportation players, standardizing that communication becomes trickier.

Communication, according to Tandon, is also an important way of gaining the public’s trust. After all, consumer trust — like the reliability and legislative approval of the technology — can either be the ultimate impediment to the deployment of self-driving cars or what expedites it.

“We believe [a standard] needs to exist,” Tandon said. “Cars today have brake lights, horns, rear-view mirrors, side-view mirrors — but all those standards developed over time. Our belief with the self-driving car is that a similar transformation will have to occur. For example, we may have to replace the human hand signals, like the wave.”

Already, Tandon and his team have been discussing implementing a standard method of communication with the company’s existing fleet partners.

“We’re still trying to figure out what that standard even is,” he said. “But to me, it’s just a fundamental part of the problem of self-driving cars.”

To that end, the company’s “kit,” which can be retrofitted to existing vehicles, includes LED signs that sit atop the car and speakers that relay the car’s intentions to pedestrians and human drivers around it. (Naturally, the kit is also equipped with sensors like radar, GPS, cameras and Lidar, though Reiley said the team considers Lidar to be more of a redundancy than a necessary part of the system.)

The company is still testing different methods of communication to measure things like how quickly and how well people respond to the audio-visual cues. For example, do pedestrians notice the LED sign that says that it’s safe to cross, or do they need an extra sound or voice?

Using deep learning to power that part of the system, Tandon said, allows the car to learn how to communicate in ways that pedestrians and other human drivers will understand.