Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Laser-based range sensors are commonly used on-board autonomous mobile robots for obstacle detection and scene understanding. A popular methodology for analyzing point cloud data from these sensors is to train Bayesian classifiers on labeled data using locally computed features, and then use them to compute class posteriors on-line at testing time. However, data from range sensors present a unique challenge for feature computation in the form of significant variation in the spatial density of points, both across the field-of-view and within structures of interest. In particular, this poses the problem of choosing a scale for analysis and a support-region size for computing meaningful features reliably. While scale theory has been rigorously developed for 2-D images, no equivalent exists for unorganized 3-D point data. Choosing a single fixed scale over the entire dataset makes feature extraction sensitive to the presence of different manifolds in the data and to varying data density. We adopt an approach inspired by recent developments in computational geometry and investigate the problem of automatic data-driven scale selection to improve point cloud classification. The approach is validated with results on real data from different sensors in various environments (indoor, urban outdoor and natural outdoor) classified into different terrain types (vegetation, solid surface and linear structure).
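To make the notion of locally computed features concrete, the sketch below illustrates one common family of such features: spectral (PCA-based) saliencies computed from the eigenvalues of the covariance of points within a support region. This is a minimal illustration of the general technique, not the paper's specific method; the function name, the brute-force neighbor search, and the chosen radius are assumptions for the example.

```python
import numpy as np

def saliency_features(points, query, radius):
    """Spectral saliency features for one query point (illustrative sketch).

    `radius` plays the role of the support-region size discussed in the
    abstract; how to choose it per point is exactly the scale-selection
    problem the paper addresses.
    """
    # Gather neighbors within the support radius (brute force for clarity;
    # a k-d tree would be used in practice).
    dists = np.linalg.norm(points - query, axis=1)
    nbrs = points[dists <= radius]
    if len(nbrs) < 3:
        return None  # too few points to estimate a covariance reliably

    # Eigenvalues of the local 3x3 covariance, sorted in descending order.
    cov = np.cov(nbrs.T)
    l0, l1, l2 = np.sort(np.linalg.eigvalsh(cov))[::-1]

    # Normalized saliencies: high "scatter" suggests vegetation-like clutter,
    # high "surface" a solid surface, high "linear" a linear structure.
    return {
        "scatter": l2 / l0,
        "surface": (l1 - l2) / l0,
        "linear": (l0 - l1) / l0,
    }
```

For a locally planar patch the two largest eigenvalues dominate and the "surface" saliency is highest; for a wire-like structure only the largest eigenvalue is significant and "linear" dominates. Because the eigenvalue estimates depend directly on how many neighbors fall inside `radius`, the features degrade when point density varies, which motivates data-driven scale selection.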

Keywords

scale selection, terrain classification, laser data

Notes

Sponsor: Army Research Laboratory, National Science Foundation
Grant ID: DAAD19-01-209912, IIS-0102272
Number of pages: 28