Existing autonomous vehicles require 3D maps to navigate. That’s the reason self-driving cars can’t actually drive themselves everywhere. In fact, more than one-third of roads in the US are unpaved, and 65 percent don’t possess reliable lane markings.

These vehicles rely on pre-built maps to determine where they are, which route to take, and how to respond to obstacles. Because most rural roads are poorly mapped, they remain extremely difficult for autonomous driving.

To deal with this, MIT engineers, in collaboration with the Toyota Research Institute, have developed a mapless driving framework that can navigate without such 3D maps. It allows self-driving vehicles to take roads less traveled.

How It Works

The framework merges two key components: a local perception system for navigating individual road segments, and OpenStreetMap data for global navigation. Together, they enable navigation over large areas with only a reasonable amount of preloaded information (the OpenStreetMap data).

Image credit: MIT CSAIL

GPS data is accurate enough for topological localization, that is, for knowing which road segment the vehicle is on. It can therefore be augmented with local perception to tackle full autonomous navigation, since OpenStreetMap supplies the directives associated with each road segment.
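The idea of topological localization can be sketched in a few lines: a noisy GPS fix only needs to identify which road segment the car is on, not where on the segment it sits. Below is a minimal, self-contained illustration; the segment coordinates, IDs, and directives are invented for the example and are not from the actual system.

```python
import math

# Hypothetical, minimal OpenStreetMap-style road graph: each segment has
# endpoint coordinates (lat, lon) and a navigation directive.
# All values here are invented for illustration.
SEGMENTS = [
    {"id": "seg-1", "start": (42.3601, -71.0942), "end": (42.3625, -71.0901),
     "directive": "continue straight"},
    {"id": "seg-2", "start": (42.3625, -71.0901), "end": (42.3652, -71.0875),
     "directive": "turn left at junction"},
]

def nearest_segment(gps_fix, segments):
    """Topological localization: pick the segment whose midpoint is closest
    to the (noisy) GPS fix. Meter-level GPS error is acceptable because we
    only need to know *which* segment we are on."""
    def dist(seg):
        mx = (seg["start"][0] + seg["end"][0]) / 2
        my = (seg["start"][1] + seg["end"][1]) / 2
        return math.hypot(gps_fix[0] - mx, gps_fix[1] - my)
    return min(segments, key=dist)

seg = nearest_segment((42.3611, -71.0930), SEGMENTS)
print(seg["id"], "->", seg["directive"])  # seg-1 -> continue straight
```

Once the segment is known, its directive tells the planner what to do next, while local perception handles staying on the road.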

The system robustly tracks road boundaries using a LiDAR sensor: it measures the surface edges of the road and estimates the road geometry, even when there are no road markings.
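To give a flavor of how road edges can be found without markings, here is a toy sketch on a single lateral scan line: a flat road surface flanked by raised shoulders produces height jumps between neighboring LiDAR returns. The scan values and threshold are synthetic; the real pipeline works on full point clouds and is considerably more sophisticated.

```python
# Toy curb/edge detection on one lateral LiDAR scan line.
# Assumption (mine, not the paper's exact method): the road surface is
# flat, and shoulders/vegetation at the edges cause a height jump
# between neighboring returns.

def boundary_indices(heights, jump=0.10):
    """Return indices where the height difference between neighboring
    returns exceeds `jump` meters -- candidate road boundaries."""
    return [i for i in range(1, len(heights))
            if abs(heights[i] - heights[i - 1]) > jump]

# Flat road flanked by raised shoulders (heights in meters).
scan = [0.25, 0.24, 0.02, 0.01, 0.00, 0.01, 0.02, 0.23, 0.26]
print(boundary_indices(scan))  # [2, 7]
```

The two detected indices bracket the drivable surface, which is exactly the geometric cue the framework can exploit where lane paint is absent.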

A framework like this, which operates using only on-board sensors, shows the real potential of self-driving vehicles: they could handle far more roads than giant tech companies like Google have mapped.

Testing

According to the developers, the technique is both reliable and efficient despite the vast amount of data the sensors collect, because the current road-boundary estimate seeds the next measurement step.


Road boundary detections are fused with vehicle odometry in a probabilistic framework. The developers tested the framework on a fully autonomous Toyota Prius in a rural area, and also evaluated the algorithm offline on datasets gathered from test sites.
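The predict-with-odometry, correct-with-detection loop can be illustrated with a one-dimensional Kalman-style filter tracking the lateral distance to the road boundary. This is a minimal sketch under assumed noise values, not the paper's actual filter.

```python
# Minimal 1-D Kalman-style fusion: track the lateral distance to the
# road boundary, predicting with vehicle odometry and correcting with
# each LiDAR detection. Noise values are invented for illustration.

def predict(est, var, lateral_motion, motion_var=0.05):
    # Odometry says the car drifted `lateral_motion` meters toward the edge,
    # so the estimated distance shrinks and uncertainty grows.
    return est - lateral_motion, var + motion_var

def update(est, var, measurement, meas_var=0.2):
    k = var / (var + meas_var)          # Kalman gain
    return est + k * (measurement - est), (1 - k) * var

est, var = 2.0, 1.0                      # initial guess: 2 m to the edge
est, var = predict(est, var, 0.3)        # car moved 0.3 m toward the boundary
est, var = update(est, var, 1.6)         # LiDAR measures 1.6 m
print(round(est, 2), round(var, 3))      # 1.62 0.168
```

Because each corrected estimate seeds the next prediction, the filter stays cheap even as raw sensor data streams in at high rates.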

The complete perception framework runs on a standard computer at 5 Hz and detects the road up to 35 meters ahead, which lets a self-driving car using the system travel at up to 67 miles per hour (107 kilometers per hour). The speed could be increased further by implementing the framework in parallel on a GPU.
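A quick back-of-the-envelope check relates these numbers. My assumption (not stated in the article) is that the quoted top speed is roughly the speed at which the 35 m detection range still gives the planner about a one-second lookahead:

```python
# Back-of-the-envelope check on the numbers above. Assumption (mine,
# not from the article): the top speed is the speed at which the 35 m
# detection range still covers roughly a one-second planning horizon.

DETECTION_RANGE_M = 35.0
TOP_SPEED_MPH = 67.0

speed_ms = TOP_SPEED_MPH * 1609.344 / 3600   # mph -> m/s, about 30 m/s
horizon_s = DETECTION_RANGE_M / speed_ms     # time until the car reaches
                                             # the edge of what it has seen
print(f"{speed_ms:.1f} m/s, {horizon_s:.2f} s lookahead")
```

At 67 mph the car covers the full 35 m detection range in about 1.2 seconds, so a longer detection range (or a faster GPU implementation) would be needed before raising the speed.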

While technology like this could open up more roads to self-driving vehicles, there is still a long way to go. The system has some limitations. For instance, the framework doesn’t account for sudden changes in elevation.

For now, the developers are working to make the vehicle capable of handling a wider variety of roads. The ultimate goal is to make vehicles as reliable as human drivers on unfamiliar roads.