Building Local Terrain Maps for Autonomous Navigation

To navigate safely and efficiently in an outdoor environment,
a mobile robot must recognize the terrain around it.
Our robot is equipped with a low-resolution 3D LIDAR and a color camera (see Fig. 2).
The data from both sensors are fused to classify the terrain in front of the robot.
To this end, the ground plane is divided into a grid, and each cell is classified as either asphalt, cobblestones,
grass, or gravel.
We use height and intensity features for the LIDAR data and local ternary patterns (LTP) for the image data.
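As an illustration of the LIDAR side, per-cell height and intensity statistics could be computed roughly as follows. The cell size, grid extent, and the particular statistics (mean height, height variance, mean intensity) are assumptions for this sketch, not the system's actual parameters:

```python
import numpy as np

def cell_statistics(points, cell_size=0.25, half_extent=5.0):
    """Bin 3D LIDAR returns into a ground-plane grid and compute
    simple height/intensity statistics per cell.

    points: (N, 4) array of x, y, z, intensity.
    The grid covers [-half_extent, half_extent) in x and y;
    both parameters are illustrative values."""
    n = int(2 * half_extent / cell_size)
    ix = ((points[:, 0] + half_extent) / cell_size).astype(int)
    iy = ((points[:, 1] + half_extent) / cell_size).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    ix, iy = ix[ok], iy[ok]
    z, inten = points[ok, 2], points[ok, 3]

    count = np.zeros((n, n))
    z_sum = np.zeros((n, n))
    z_sq = np.zeros((n, n))
    i_sum = np.zeros((n, n))
    np.add.at(count, (ix, iy), 1)          # returns per cell
    np.add.at(z_sum, (ix, iy), z)          # sum of heights
    np.add.at(z_sq, (ix, iy), z * z)       # sum of squared heights
    np.add.at(i_sum, (ix, iy), inten)      # sum of intensities

    with np.errstate(invalid="ignore", divide="ignore"):
        z_mean = z_sum / count
        z_var = z_sq / count - z_mean ** 2  # per-cell height variance
        i_mean = i_sum / count
    return z_mean, z_var, i_mean            # NaN where a cell is empty
```

The per-cell vector (mean height, height variance, mean intensity) can then be fed to the classifier alongside the image features.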
Taking the context-sensitive nature of the terrain into account as well
improves the results significantly.
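On the image side, the LTP features mentioned above can be sketched as follows: each pixel's 8 neighbors are compared against the center value with a tolerance t and encoded as the usual pair of upper/lower binary codes, whose histograms form the cell descriptor. The tolerance t = 5 here is an assumption; in practice it would be tuned on training data:

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local ternary pattern codes for the 8-neighborhood of each
    interior pixel, split into upper/lower binary codes."""
    c = patch[1:-1, 1:-1].astype(np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbors
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = patch[1 + dy: patch.shape[0] - 1 + dy,
                  1 + dx: patch.shape[1] - 1 + dx].astype(np.int32)
        upper |= (n >= c + t).astype(np.int32) << bit  # neighbor well above center
        lower |= (n <= c - t).astype(np.int32) << bit  # neighbor well below center
    return upper, lower

def ltp_histogram(patch, t=5):
    """Concatenated histograms of the upper and lower codes, usable
    as the image feature of a grid cell."""
    upper, lower = ltp_codes(patch, t)
    hu, _ = np.histogram(upper, bins=256, range=(0, 256))
    hl, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hu, hl]).astype(np.float64)
```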

Context-sensitive classification

A key insight for improving the classification results is that terrain appears in contiguous areas; terrain that
varies greatly within a small area is rare. This spatial coherence is ignored when the grid cells are classified
independently of one another.
To account for this, a suitable mathematical model is needed: the Conditional Random Field
(CRF) (see Fig. 1). A CRF models the conditional probability of the labels given the features directly; such a model
is called discriminative.
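As a rough sketch of how such neighborhood coupling can act on the grid, the following uses a Potts pairwise potential (a constant penalty for disagreeing neighbor labels) with iterated conditional modes (ICM) for approximate inference. The potential, the weight beta, and the inference method are assumptions for the sketch; the actual system's CRF may be learned and inferred differently:

```python
import numpy as np

def icm_smooth(unary, beta=0.8, iters=5):
    """Smooth per-cell classifier outputs with a Potts-model grid CRF,
    solved approximately by ICM.

    unary: (H, W, K) negative log-likelihoods per cell and class.
    beta:  weight on label agreement with the 4-neighborhood."""
    labels = unary.argmin(axis=2)  # start from the independent decision
    H, W, K = unary.shape
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # penalize labels that disagree with the neighbor
                        costs += beta * (np.arange(K) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels
```

With beta = 0 the result is the independent per-cell classification; increasing beta suppresses isolated cells whose label contradicts all of their neighbors.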

Fig. 1 The terrain label y of a grid cell depends on the measured features x,
but also on the labels of its neighboring grid cells.

Terrain Maps

By taking several consecutive frames into account, we obtain a spatio-temporal terrain classification. By also
detecting obstacles with the LIDAR, the robot can build a local terrain and elevation map of its environment as it
drives (see Fig. 2). These maps can be used for robot localization and autonomous navigation.
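One simple way to accumulate such a map over frames is a per-cell running mean of the measured heights, updated as each new LIDAR frame arrives. This sketch assumes the points have already been transformed into the map frame using the robot's pose estimate, and the grid parameters are illustrative:

```python
import numpy as np

class ElevationMap:
    """Incremental local elevation map: per-cell running mean of
    measured heights over consecutive, ego-motion-compensated frames."""

    def __init__(self, n=100, cell_size=0.1):
        self.n = n
        self.cell_size = cell_size
        self.height = np.zeros((n, n))           # running mean height
        self.count = np.zeros((n, n), dtype=int)  # measurements per cell

    def update(self, points):
        """points: (N, 3) array of x, y, z in the map frame."""
        ix = (points[:, 0] / self.cell_size).astype(int)
        iy = (points[:, 1] / self.cell_size).astype(int)
        ok = (ix >= 0) & (ix < self.n) & (iy >= 0) & (iy < self.n)
        for x, y, z in zip(ix[ok], iy[ok], points[ok, 2]):
            c = self.count[x, y]
            # incremental mean: old mean weighted by its count
            self.height[x, y] = (self.height[x, y] * c + z) / (c + 1)
            self.count[x, y] = c + 1
```

Cells whose running height deviates strongly from the local ground level can then be flagged as obstacles, and each cell can additionally carry its accumulated terrain label.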