Abstract:
This paper investigates the use of semantic information
to link ground-level occupancy maps and aerial images.
A ground-level semantic map is obtained by a mobile robot
equipped with an omnidirectional camera, differential GPS,
and a laser range finder. The mobile robot uses a virtual
sensor for building detection (based on omnidirectional images)
to compute the ground-level semantic map, which indicates
the probability that each cell is occupied by a building
wall. These wall estimates from a ground perspective
are then matched with edges detected in an aerial image.
The result is used to direct a region- and boundary-based
segmentation algorithm for building detection in the aerial
image. This approach addresses two difficulties simultaneously:
1) the range limitation of mobile robot sensors and 2) the
difficulty of detecting buildings in monocular aerial images.
With the suggested method, building outlines can be detected
faster than the mobile robot can explore the area by itself, giving
the robot the ability to "see" around corners. At the same time,
the approach can compensate for the absence of elevation data
in the segmentation of aerial images. Our experiments demonstrate
that ground-level semantic information (wall estimates) makes it
possible to focus the segmentation of the aerial image on finding
buildings and to produce a ground-level semantic map that covers
a larger area than could be built using the onboard sensors alone.