= Registering pointclouds in ROS with PCL for [[Robot_Lab_(Spring_2012)]] =

Paths followed, lessons learned, code shared, next steps.

== Find some data ==

The data I used is dataset 14 at http://kos.informatik.uni-osnabrueck.de/3Dscans/

My python script, publish_from_file.py, can be easily adjusted to read any similar format, wherein each line gives the coordinates for a point. [http://en.wikipedia.org/wiki/PLY_(file_format) PLY] is another option; PLY pointclouds can be loaded and viewed in [http://meshlab.sourceforge.net/ Meshlab].
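
For reference, a minimal sketch of such a parser in Python. The assumptions here: whitespace-separated x y z on each line, with unparseable lines (headers, comments) silently skipped -- publish_from_file.py may differ in details.

```python
# Minimal parser for a "one point per line" file such as the .3d scans
# in the Osnabrueck dataset. Assumes whitespace-separated coordinates;
# lines that don't start with three numbers are skipped.
# (Sketch only; publish_from_file.py may differ in details.)

def read_points(path):
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip blank or short lines
            try:
                x, y, z = (float(v) for v in parts[:3])
            except ValueError:
                continue  # skip header or comment lines
            points.append((x, y, z))
    return points
```

Extra columns (intensity, color) are ignored, which is usually what you want when all you need is geometry.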

I didn't write any code to read .pcd files, but this is the format that the [http://pointclouds.org pointcloud library] reads and writes, so if you found some .pcd data and like C++ that would also work.

Ideally, it would be best to take your own data; then you know everything about it. It turns out that the better you know the data (how far did the robot move between images? what unit does your scanner use for measurements -- meters? pixels?), the better off you will be in fine-tuning the registration algorithms.

== Learn about registration ==

The pcl site has a lot of information. I would recommend reading the introductory material on a number of pcl modules, including registration, keypoint estimation, features, and correspondence. There are a number of ways to do this and it would be helpful to get a sense of some of them before starting on one.

== Try my scripts ==

Download a tgz of my scripts: [[File:Registration_code.tar.gz]]

=== pose_registration.py ===

Uses the pose data from the data set to transform each point cloud into a global frame of reference. Writes PLY files.
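
Conceptually this is one rigid transform per cloud. A minimal numpy sketch of that step -- the ZYX angle convention, and whether the dataset's angles are radians or degrees, are assumptions you must match to your data:

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles in radians, composed as Rz @ Ry @ Rx.
    The convention here is an assumption; match it to your dataset."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def to_global_frame(points, translation, angles):
    """Transform an (N, 3) cloud from the scanner frame into the global frame:
    rotate by the pose's orientation, then add the pose's translation."""
    R = euler_to_matrix(*angles)
    return points @ R.T + np.asarray(translation)
```

Getting the composition order and axis signs wrong is the classic failure mode here, which is why it pays to test this function on known rotations before trusting the output.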

To run:

# first set the global variables for file locations
$ ./pose_registration.py

=== pointclouds_ah/publish_from_file.py ===

Reads a pointcloud from a PLY or .3d file, in which each line gives the coordinates of a point. Publishes each pointcloud as a message to a ROS topic.

To run:

# first set the global variables for file locations
$ rosrun pointclouds_ah publish_from_file.py

=== register_pointclouds/registration.cpp ===

Reads a pointcloud off the ROS bus (the topic name is set to the same one publish_from_file.py writes to). Runs Iterative Closest Point on each pair of pointclouds.
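
The loop inside ICP is simple enough to sketch in a few lines of numpy: match each point to its nearest neighbour, solve for the best rigid transform (the Kabsch/SVD fit), apply it, repeat. This toy version has none of PCL's refinements (kd-tree search, correspondence rejection, convergence tests), but it is the same idea:

```python
import numpy as np

def best_rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iterations=20):
    """Align (N, 3) cloud src to dst; returns the transformed copy of src."""
    cur = src.copy()
    for _ in range(iterations):
        # nearest-neighbour correspondences (brute force; PCL uses a kd-tree)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_fit(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Like the real thing, this only converges when the initial misalignment is small relative to the cloud's structure -- which is exactly why feeding in the odometry as a starting guess matters so much.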

To run:

$ rosmake register_pointclouds

$ rosrun register_pointclouds register

== View results ==

The results never came out perfectly, but they do nicely show the effects of the different types of translation on the data at each stage of the process.

Running the original data set through pointcloud registration only:

[[File:RegistrationOriginal.png|500px]]

Pointclouds registered by translating them according to odometry data:

[[File:RegistrationOdometry.png|500px]]

Running the data set through odometry translation and then pointcloud registration:

[[File:RegistrationPrecomputed.png|500px]]

Illustration accompanying the data set I used:

[[File:RegistrationEt4.jpg]]

== Do something new ==

Things I would do if I were to keep working on this project.

* Working odometry into the registration process appears to be key. It would be nice to figure out the best way to do it.

** A new version of pcl should be coming out any day now as I finish this quarter. A new registration algorithm will take odometry data with each pointcloud; it's called Normal Distributions Transform. That would be worth trying out.

** I don't know whether / how it's possible to send odometry and pointclouds over the ROS bus and correlate them on the other side. Do they need to be combined into a single message? Can the callbacks on the two topics be tied together somehow? It would be easier to start by skipping the ROS bus altogether and just reading in the pointcloud and odometry data from file.

** Further research on Euler angles and rotations in 3d space would probably be useful. There may be mathematical errors in the script I wrote to do this.

* Cleaning noise from the data could be helpful. I believe the circular artifacts in my images come from noise in the pointclouds. Without them, registration could be more accurate.

* Experiment more with max correspondence and epsilon parameters to registration algorithms. These are highly dependent on the data set you use, and it seems they need to be discovered via trial-and-error, and then re-discovered if you use a different data set.
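
On the Euler-angle worry above, one cheap guard is to assert the invariants any rotation matrix must satisfy before using it. The rotation function below is a hypothetical stand-in for whatever your script constructs:

```python
import numpy as np

def rotation_z(theta):
    """Hypothetical stand-in; swap in your own Euler-angle construction."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def check_rotation(R, atol=1e-9):
    """Every proper rotation matrix is orthonormal with determinant +1."""
    assert np.allclose(R @ R.T, np.eye(3), atol=atol), "R is not orthonormal"
    assert np.isclose(np.linalg.det(R), 1.0, atol=atol), "R is not a proper rotation"
    return True
```

Checks like this catch sign and composition-order mistakes immediately, instead of letting them surface as mysteriously smeared registrations three steps later.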
