Abstract

In this paper we address the topic of feature extraction in 3D point cloud data for object recognition and pose identification. We present a novel interest keypoint extraction method that operates on range images generated from arbitrary 3D point clouds, and which explicitly considers the borders of objects identified by transitions from foreground to background. We furthermore present a feature descriptor that takes the same information into account. We have implemented our approach and present rigorous experiments in which we analyze the individual components with respect to their repeatability and matching capabilities, and evaluate their usefulness for point-feature-based object detection methods.

Photos and Video Results

This video shows examples of the border extraction procedure, the interest point extraction, and the descriptor computation for NARFs (Normal Aligned Radial Features):

Note regarding the code

Please note that the code presented below was tested on Ubuntu 9.10 and no longer works under newer Ubuntu versions (because some of the ROS packages it depends on are outdated). If you are interested in using NARFs, they are implemented in the Point Cloud Library (PCL): http://www.pointclouds.org. See also the tutorials on ros.org: http://wiki.ros.org/pcl/Tutorials
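As a starting point, the PCL implementation mentioned above can be used roughly as sketched below: build a range image from a point cloud, extract NARF keypoints (which internally use the border extraction), and compute NARF descriptors at those keypoints. This is a minimal sketch based on the PCL API, not the original ROS code from this page; the file name "scene.pcd" and the parameter values (angular resolution, support size) are placeholder assumptions you will need to adapt.

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/common/angles.h>
#include <pcl/range_image/range_image.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>
#include <pcl/features/narf_descriptor.h>

int main ()
{
  // Placeholder input file; replace with your own point cloud.
  pcl::PointCloud<pcl::PointXYZ> cloud;
  if (pcl::io::loadPCDFile ("scene.pcd", cloud) < 0)
    return 1;

  // Create a range image from the point cloud (1 degree angular resolution,
  // sensor pose at the origin). A border_size > 0 leaves room around the
  // image so that border extraction works at the edges.
  pcl::RangeImage range_image;
  range_image.createFromPointCloud (cloud, pcl::deg2rad (1.0f),
                                    pcl::deg2rad (360.0f), pcl::deg2rad (180.0f),
                                    Eigen::Affine3f::Identity (),
                                    pcl::RangeImage::CAMERA_FRAME,
                                    0.0f /*noise level*/, 0.0f /*min range*/,
                                    1 /*border size*/);

  // Keypoint detection; the detector uses the border extractor internally,
  // so foreground/background transitions influence the interest points.
  pcl::RangeImageBorderExtractor border_extractor;
  pcl::NarfKeypoint detector (&border_extractor);
  detector.setRangeImage (&range_image);
  detector.getParameters ().support_size = 0.2f;  // example value, in meters
  pcl::PointCloud<int> keypoint_indices;
  detector.compute (keypoint_indices);

  // NARF descriptors at the detected keypoints.
  std::vector<int> indices (keypoint_indices.points.begin (),
                            keypoint_indices.points.end ());
  pcl::NarfDescriptor descriptor (&range_image, &indices);
  descriptor.getParameters ().support_size = 0.2f;
  descriptor.getParameters ().rotation_invariant = true;  // rotationally invariant variant
  pcl::PointCloud<pcl::Narf36> descriptors;
  descriptor.compute (descriptors);
  return 0;
}
```

Setting `rotation_invariant` to false gives the non-invariant variant of the descriptor, mirroring the two analysis modes discussed in the experiment instructions below.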

Instructions for reproducing experiments

An open-source implementation of the features presented in the paper, written using the Robot Operating System (ROS), is available for download.

rosinstall File

If you know how to use rosinstall, you can use the following file to set up the code used for the experiments:

The first command runs the analysis without interest points, with interest points, and with RVPs. The second command runs the analysis for the rotationally invariant version. The program creates a folder "icra11_data_files" in which all results are saved.

The first command will take several hours because of the cross-comparison of all points. If you want it to take only a few minutes, use