Perfecting the AI that will drive your car

If we want to bring self-driving cars to life, we must take a critical look at how we’re training AI systems.

This article originally appeared on TechCrunch.

Let’s consider what we want from a car that is powered by AI. Personally, I want to get in my car on a Friday night in Berlin, go to sleep, and wake up parked neatly in front of a chalet in the Swiss Alps.

Unfortunately, despite all the AI hype, the world of self-driving cars is not quite there yet – but we’re getting closer every day.

On my team, our work in AI has a focused approach. First, we have to teach computers to perform tasks that traditionally only humans could do. Let’s suppose you want to create a 3D map of an entire city.

3D models of cities have traditionally been hand-crafted. There is a whole tools industry committed to helping humans build these models – unfortunately, that process doesn't scale. If we need precise 3D maps of entire cities at a global scale, and the world demands those maps be updated at faster and faster intervals, it quickly becomes an impossible task.

Alternatively, once taught the necessary skills, an AI can take data from video recordings and aerial LiDAR captures and automatically create 3D city models. This is a deeply useful task that is simply not possible at scale without a machine learning approach.
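To make that concrete, here is a minimal sketch of one building block of automated city modeling: collapsing a raw LiDAR point cloud into an occupancy grid of voxels. The data, the `voxelize` helper, and the one-meter resolution are all invented for illustration – a real pipeline would involve far more (registration, classification, meshing).

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 1.0) -> set:
    """Return the set of occupied voxel indices for an (N, 3) point cloud."""
    indices = np.floor(points / voxel_size).astype(int)
    return {tuple(idx) for idx in indices}

# A toy "scan" of four (x, y, z) points in meters.
scan = np.array([
    [0.2, 0.3, 0.1],
    [0.8, 0.9, 0.4],   # lands in the same 1 m voxel as the point above
    [5.1, 2.0, 0.0],
    [5.1, 2.0, 3.7],   # same (x, y), but a different height
])

occupied = voxelize(scan, voxel_size=1.0)
print(len(occupied))  # 3 distinct occupied voxels
```

The point is the shape of the computation: millions of raw points in, a compact structured model out, with no human drawing anything by hand.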

To accomplish this, let’s go back to the first step: teaching the computer a skill.

To teach a computer, you must provide training data sets. Those data sets must be created by people. It takes a human to laboriously teach the machine positive and negative examples of what it should detect.

That’s a stop sign. That’s still a stop sign. That’s not a stop sign. It takes hundreds of thousands of examples of each type for an AI to reach acceptable performance levels – if it reaches those levels at all.
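In code, that labeling loop boils down to supervised learning: humans supply positive and negative examples, and a model fits a boundary between them. This sketch uses made-up two-dimensional features and scikit-learn's logistic regression purely as a stand-in – a real detector would train on image data with a far larger model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features; real systems would use image pixels or embeddings.
positives = rng.normal(loc=1.0, scale=0.3, size=(200, 2))   # "stop sign" -> label 1
negatives = rng.normal(loc=-1.0, scale=0.3, size=(200, 2))  # "not a stop sign" -> label 0

X = np.vstack([positives, negatives])
y = np.array([1] * 200 + [0] * 200)

# The human-labeled examples are the only thing the model learns from.
clf = LogisticRegression().fit(X, y)

print(clf.predict([[1.1, 0.9]]))    # near the positive cluster
print(clf.predict([[-1.0, -1.2]]))  # near the negative cluster
```

Every labeled point here took a human a moment to produce; at hundreds of thousands of examples per class, that moment is exactly the cost the next section tries to reduce.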

To speed up this process, we use what is called Data-Efficient Machine Learning. Essentially, it’s about designing machine learning systems that achieve the same performance while learning from less data.
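One common data-efficiency trick – chosen here as an illustration, not necessarily the author's method – is augmentation: each human-labeled example is cheaply perturbed into several training examples, so the labeling effort goes further. The `augment` helper and the 8×8 "images" below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(image: np.ndarray, n: int = 4) -> list:
    """Generate n perturbed copies of an image (brightness jitter + pixel shift)."""
    copies = []
    for _ in range(n):
        brightness = rng.uniform(0.8, 1.2)
        shift = int(rng.integers(-2, 3))            # shift by up to 2 pixels
        jittered = np.clip(image * brightness, 0.0, 1.0)
        copies.append(np.roll(jittered, shift, axis=1))
    return copies

labeled = [rng.random((8, 8)) for _ in range(10)]   # 10 human-labeled "images"
augmented = [copy for img in labeled for copy in augment(img)]
print(len(labeled), len(augmented))  # 10 labeled -> 40 training examples
```

Same labeling budget, four times the training data – a small instance of the "same performance, less data" goal.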

An example of this approach is to put the machine to work inside of a toy world. Think of giving an AI a virtual chess board. You define the rules of how the game is played, and how the pieces move – then let it play itself. Independently, the AI will begin to learn efficient strategies for play, limited only by its computational power.
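A tiny runnable version of that idea, using the game of Nim instead of chess (the rules, reward, and learning scheme here are all illustrative assumptions): one agent plays both sides of a single-heap game, learning a value for every (heap, move) pair purely from the outcomes of its own games.

```python
import random

random.seed(0)

# Toy world: one heap of stones; players alternate removing 1 or 2 stones,
# and whoever takes the last stone wins.
HEAP = 10
Q = {(s, a): 0.0 for s in range(1, HEAP + 1) for a in (1, 2) if a <= s}

def best(s: int) -> int:
    """Greedy move for the player facing heap size s."""
    return max((a for a in (1, 2) if a <= s), key=lambda a: Q[(s, a)])

for episode in range(20000):
    s = HEAP
    while s > 0:
        # Epsilon-greedy self-play: mostly exploit, sometimes explore.
        legal = [a for a in (1, 2) if a <= s]
        a = random.choice(legal) if random.random() < 0.2 else best(s)
        s2 = s - a
        if s2 == 0:
            target = 1.0  # the mover took the last stone: a win
        else:
            # The opponent moves next, so their best outcome is our worst.
            target = -max(Q[(s2, b)] for b in (1, 2) if b <= s2)
        Q[(s, a)] += 0.1 * (target - Q[(s, a)])
        s = s2

# From 10 stones, taking 1 leaves 9 (a multiple of 3) - a losing
# position for the opponent, which self-play discovers on its own.
print(best(10))
```

No human labeled a single position; the rules plus self-play were enough – exactly the property that makes this paradigm so appealing, and so hard to transfer to the messy real world.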

That toy world is very clean-cut. The real world is messy – and teaching a machine to solve real-world problems through this mode of self-play is something that has yet to be cracked, though we’re getting closer every day. One way we’re getting closer is by utilizing the HD Live Map.

If a vehicle can detect a pole or a stop sign in the world and associate it with the same pole or stop sign registered in our HD Live Map, the vehicle can locate itself with a level of precision that goes far beyond GPS. The accuracy goes from being measured in meters to centimeters. That’s the level of precision that AI-driven vehicles need.
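The correction step can be sketched in a few lines. All coordinates, landmark positions, and the single-landmark matching below are invented for illustration – a production localizer fuses many landmarks and sensors over time.

```python
import numpy as np

# Surveyed landmark positions from a prior map, in meters (local frame).
map_landmarks = np.array([[12.00, 40.00], [55.00, 18.00]])

gps_pose = np.array([10.0, 38.5])          # coarse, meter-level GPS estimate
detection_offset = np.array([2.03, 1.48])  # stop sign seen relative to the car

# Where the detected landmark appears to be, given the coarse pose:
observed = gps_pose + detection_offset

# Associate the detection with the nearest surveyed landmark...
nearest = map_landmarks[np.argmin(np.linalg.norm(map_landmarks - observed, axis=1))]

# ...and shift the pose so the observation and the map agree.
corrected_pose = gps_pose + (nearest - observed)
print(corrected_pose)
```

The residual between the detection and the surveyed landmark is what shrinks the error from meters to centimeters: the map, not the satellite, becomes the ground truth.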

We’re deeply motivated to bring this vision to life, because this collaborative approach to data and AI learning is going to empower new solutions we haven’t yet thought of.

For me, I’ll still work on getting my car to drive me overnight to the Alps. Perhaps next year.