Navigation Stack - Robot Setups

The navigation stack in ROS

In order to understand the navigation stack, think of it as a set of algorithms that use the robot's sensors and odometry to move the robot, which you control using a standard message type. It can move your robot from one position to another without problems (for example, without crashing, getting stuck in some location, or getting lost).

You might assume that this stack can be used with any robot as it is. This is almost true, but it is necessary to tune some configuration files and write some nodes to use the stack.

The robot must satisfy some requirements before it uses the navigation stack:

The navigation stack can only handle differential-drive and holonomic wheeled robots, and the shape of the robot must be approximated by a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways.

It requires that the robot publishes information about the relationships between the positions of all its joints and sensors.

The robot must be controllable through messages with linear and angular velocities.

A planar laser must be mounted on the robot to build the map and perform localization. Alternatively, you can generate something equivalent to a planar laser from several lasers or sonars, or you can project the readings onto the ground plane if the sensors are mounted elsewhere on the robot.

The following diagram shows how the navigation stack is organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate stacks that are provided by ROS, and they have all the nodes needed to make your robot really autonomous:

In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt your platform to ROS and to the navigation stack.

Creating transforms

The navigation stack needs to know the position of the sensors, wheels, and joints.

To do that, we use the TF (which stands for Transform Frames) software library. It manages a transform tree. You could compute these relations with mathematics by hand, but with many frames to calculate, it quickly becomes complicated and messy.

Thanks to TF, we can add more sensors and parts to the robot, and the TF will handle all the relations for us.

If we put the laser 10 cm forward and 20 cm above the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets.

Once the frame is inserted in the tree, we can easily know the position of the laser with regard to base_link or to the wheels. The only thing we need to do is call the TF library and ask for the transformation.

Creating a broadcaster

Let's test it with some simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it:

This means that the point that you published on the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link.
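The arithmetic that tf performs in this example is just a translation by the laser's offset in base_link. A minimal, ROS-free sketch of that computation (the helper function name is hypothetical, and the offsets match the example above):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hypothetical helper: apply a pure-translation transform to a point
// expressed in base_laser, returning it expressed in base_link.
// The base_laser origin in base_link is (0.10, 0.00, 0.20), as in the text.
std::array<double, 3> laserToBaseLink(const std::array<double, 3>& p) {
    return { p[0] + 0.10, p[1] + 0.00, p[2] + 0.20 };
}
```

Feeding in the point (1.00, 2.00, 0.00) yields (1.10, 2.00, 0.20), the same result tf reports.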

As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point.

A transform tree defines offsets in terms of both translation and rotation between different coordinate frames.

Let us see an example to help you understand this. We are going to add another laser, say, on the back of the robot, relative to base_link:

The system needs to know the position of the new laser to detect collisions, such as those between the robot and walls. With the TF tree, this is very simple to set up and maintain, and it scales well: as we add more sensors and parts, the tf library handles all the relations for us. All the sensors and joints must be correctly configured on tf to permit the navigation stack to move the robot without problems and to know exactly where each of its components is.
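When a frame is rotated as well as offset, as a rear-facing laser would be, tf applies the rotation before the translation. A ROS-free planar sketch of that arithmetic, using an illustrative offset and yaw (these values are assumptions, not from the original listing):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Sketch of a planar (2D) frame transform: rotate a point by the frame's
// yaw, then translate by the frame's origin expressed in base_link.
// Offset (-0.15 m) and yaw (pi, i.e. facing backwards) are hypothetical.
std::array<double, 2> rearLaserToBaseLink(double px, double py) {
    const double yaw = std::acos(-1.0);    // pi: the laser faces backwards
    const double tx = -0.15, ty = 0.0;     // laser origin in base_link
    double x = std::cos(yaw) * px - std::sin(yaw) * py + tx;
    double y = std::sin(yaw) * px + std::cos(yaw) * py + ty;
    return {x, y};
}
```

A point 1 m in front of the rear laser therefore lies 1.15 m behind the origin of base_link.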

Before starting to write the code to configure each component, keep in mind that the geometry of the robot is already specified in the URDF file, so it is not necessary to configure the robot again. You may not have realized it, but you have already been using the robot_state_publisher package to publish the transform tree of your robot; therefore, the robot is already configured to be used with the navigation stack.

Watching the transformation tree

If you want to see the transformation tree of your robot, use the following command:

$ rosrun tf view_frames

And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code:

$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun tf view_frames

The resultant frame is depicted as follows:

Publishing sensor information

Your robot can have many sensors to see the world, and you can program many nodes to process their data, but the navigation stack is prepared to use only a planar laser sensor. So, your sensor must publish its data with one of these types: sensor_msgs/LaserScan or sensor_msgs/PointCloud.

We are going to use the laser located at the front of the robot to navigate in Gazebo. Remember that this laser is simulated in Gazebo, and it publishes data on the base_scan/scan topic.

In our case, we do not need to configure anything for our laser to use it with the navigation stack. This is because we have tf configured in the .urdf file, and the laser publishes data with the correct type.

If you use a real laser, ROS might already have a driver for it. If you are using a laser that has no ROS driver and want to write a node that publishes the data as sensor_msgs/LaserScan messages, you have an example template to do it, which is shown in the following section.

But first, remember the structure of the sensor_msgs/LaserScan message. Use the following command:

$ rosmsg show sensor_msgs/LaserScan

As you can see, we are going to create a new topic with the name scan and the message type sensor_msgs/LaserScan. The name of the topic must be unique; when you configure the navigation stack, you will select this topic to be used for navigation. The following code shows how to create the topic with the correct name:

It is important to publish the data with a correct header, filling in the stamp, frame_id, and the other fields; if not, the navigation stack could fail with such data:

scan.header.stamp = scan_time;
scan.header.frame_id = "base_link";

Another important field of the header is frame_id. It must be one of the frames defined in the .urdf file and published on the tf frame transforms. The navigation stack will use this information to know the real position of the sensor and to make transformations such as the one between the sensor data and the obstacles.
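To make this concrete, here is a minimal, ROS-free sketch of how such a scan message is typically filled. The struct below is a simplified stand-in for sensor_msgs/LaserScan (not the real ROS type), and the specific angles, ranges, and frequency are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <vector>

// Simplified stand-in for sensor_msgs/LaserScan, used only to show
// how the fields are typically filled for a fake 180-degree scan.
struct FakeLaserScan {
    std::string frame_id;
    double angle_min, angle_max, angle_increment;
    double time_increment, scan_time;
    double range_min, range_max;
    std::vector<float> ranges;
};

FakeLaserScan makeFakeScan(unsigned num_readings, double laser_frequency) {
    FakeLaserScan scan;
    scan.frame_id = "base_link";            // must match a frame published on tf
    scan.angle_min = -1.57;                 // start of the sweep (radians)
    scan.angle_max = 1.57;                  // end of the sweep
    scan.angle_increment = (scan.angle_max - scan.angle_min) / num_readings;
    scan.scan_time = 1.0 / laser_frequency; // time for one full sweep
    scan.time_increment = scan.scan_time / num_readings;
    scan.range_min = 0.0;
    scan.range_max = 100.0;
    scan.ranges.assign(num_readings, 1.0f); // fake data: everything at 1 m
    return scan;
}
```

With real hardware, you would replace the fake ranges with the measurements read from your laser.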

With this template, you can use any laser, even if it has no ROS driver. You only have to replace the fake data with the real data from your laser.

This template can also be used to create something that looks like a laser but is not. For example, you could simulate a laser using stereoscopy or using a sensor such as a sonar.

Publishing odometry information

The navigation stack also needs to receive data from the robot odometry. Odometry is an estimate of the robot's displacement relative to a starting point; in our case, it is the displacement of base_link relative to a fixed point in the odom frame.

The type of message used by the navigation stack is nav_msgs/Odometry. We are going to watch the structure using the following command:

$ rosmsg show nav_msgs/Odometry

As you can see in the message structure, nav_msgs/Odometry expresses the pose of child_frame_id relative to frame_id. It gives us the pose of the robot using the geometry_msgs/Pose message and the velocity with the geometry_msgs/Twist message.

The pose has two structures: the position of the robot in Cartesian coordinates and the orientation of the robot expressed as a quaternion. The orientation is the angular displacement of the robot.
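Since only the planar orientation matters for this robot, the quaternion can be built from the yaw angle alone. A minimal sketch of that conversion, mirroring what tf's yaw-to-quaternion helper computes (the function name here is our own):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Build an orientation quaternion {x, y, z, w} from a yaw angle (radians).
// For a planar robot, roll and pitch are zero, so x and y vanish.
std::array<double, 4> quaternionFromYaw(double yaw) {
    return {0.0, 0.0, std::sin(yaw / 2.0), std::cos(yaw / 2.0)};
}
```

For example, a yaw of zero gives the identity quaternion (0, 0, 0, 1).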

The velocity has two structures that show the linear velocity and the angular velocity. For our robot, we will use only the linear x velocity and the angular z velocity. We will use the linear x velocity to know whether the robot is moving forward or backward. The angular z velocity is used to check whether the robot is rotating towards the left or right.

As the odometry is the displacement between two frames, it is necessary to publish its transform as well. We already did so earlier; later in this section, I will show you an example that publishes both the odometry and the tf of our robot.

Now let me show you how Gazebo works with the odometry.

How Gazebo creates the odometry

As you have seen in other examples with Gazebo, our robot moves in the simulated world just like a robot in the real world. We use a driver for our robot, the diffdrive_plugin.

This driver publishes the odometry generated in the simulated world, so we do not need to write anything for Gazebo.

Execute the robot sample in Gazebo to see the odometry working. Type the following commands in the shell:

Then, with the teleop node, move the robot for a few seconds to generate new data on the odometry topic.

On the screen of the Gazebo simulator, if you click on robot_model1, you will see some properties of the object. One of these properties is the pose of the robot. Click on the pose, and you will see some fields with data. What you are watching is the position of the robot in the virtual world. If you move the robot, the data changes:

Gazebo continuously publishes the odometry data. Check the topic and see what data it is sending. Type the following command in a shell:

$ rostopic echo /odom

The publish_odometry() function is where the odometry is published. You can see how the fields of the structure are filled and the name of the topic for the odometry is set (in this case, it is odom). The pose is generated in the other part of the code that we will see in the following section.

Once you have learned how and where Gazebo creates the odometry, you will be ready to publish the odometry and tf for a real robot. The following code will make a robot do circles continuously. The goal itself does not really matter; the important thing is to know how to publish the correct data for our robot.

Creating our own odometry

Create a new file in chapter7_tutorials/src with the name odometry.cpp and put the following code in it:

First, create the transformation variable and fill it with the frame_id and child_frame_id values to define which frame moves relative to which. In our case, base_footprint will move relative to the frame odom:

On the rviz screen, you can see the robot moving over a grid of red arrows. The robot moves over the grid because you published a new tf frame transform for the robot, and the red arrows are the graphical representation of the odometry messages. You will see the robot moving in circles continuously, as we programmed in the code.
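The dead-reckoning arithmetic at the heart of such an odometry node can be sketched without any ROS dependencies: the commanded velocities are integrated over each time step. The struct and function names below are illustrative, not taken from the original listing:

```cpp
#include <cassert>
#include <cmath>

// 2D pose tracked by dead reckoning: position (x, y) and heading th.
struct Pose2D { double x = 0.0, y = 0.0, th = 0.0; };

// Integrate a linear velocity vx (m/s) and angular velocity vth (rad/s)
// over a time step dt (s), as an odometry node does on every cycle.
void integrate(Pose2D& p, double vx, double vth, double dt) {
    p.x  += vx * std::cos(p.th) * dt;
    p.y  += vx * std::sin(p.th) * dt;
    p.th += vth * dt;
}
```

Calling integrate repeatedly with constant nonzero vx and vth traces out a circle, which is exactly the motion the example robot performs.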

Creating a base controller

A base controller is an important element in the navigation stack because it is the only way to effectively control your robot. It communicates directly with the electronics of your robot.

ROS does not provide a standard base controller, so you must write a base controller for your mobile platform.

Your robot has to be controlled with the geometry_msgs/Twist message type. This is the same message type used within the Odometry message that we saw before.

So, your base controller must subscribe to a topic with the name cmd_vel and must generate the correct commands to move the platform with the correct linear and angular velocities.

We are now going to recall the structure of this message. Type the following command in a shell to see the structure:

$ rosmsg show geometry_msgs/Twist

The vector with the name linear indicates the linear velocity for the axes x, y, and z. The vector with the name angular is for the angular velocity on the axes.

For our robot, we will only use the linear velocity x and the angular velocity z. This is because our robot is on a differential wheeled platform, and it has two motors to move the robot forward and backward and to turn.
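For reference, the conversion a differential-drive base controller has to perform, from a Twist-style command to individual wheel velocities, can be sketched as follows (the function name and wheel-separation value are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Linear velocities (m/s) of the left and right wheels.
struct WheelSpeeds { double left, right; };

// Convert a Twist-style command (linear x, angular z) into wheel speeds
// for a differential-drive platform with the given wheel separation (m).
WheelSpeeds twistToWheels(double linear_x, double angular_z,
                          double wheel_separation) {
    WheelSpeeds w;
    w.left  = linear_x - angular_z * wheel_separation / 2.0;
    w.right = linear_x + angular_z * wheel_separation / 2.0;
    return w;
}
```

A pure linear command drives both wheels at the same speed; a pure angular command drives them at opposite speeds, turning the robot in place.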

We are working with a simulated robot in Gazebo, and the base controller is implemented in the driver used to move/simulate the platform. This means that we do not have to create a base controller for this robot.

Anyway, later in this chapter, you will see an example of how to implement a base controller for a physical robot. Before that, let's run our robot in Gazebo to see how the base controller works. Run the following commands in different shells:

When all the nodes are launched and working, open rxgraph to see the relation between all the nodes:

$ rxgraph

You can see that Gazebo subscribes automatically to the cmd_vel topic that is generated by the teleoperation node.

Inside the Gazebo simulator, the plugin of our differential wheeled robot is running and is getting the data from the cmd_vel topic. Also, this plugin moves the robot in the virtual world and generates the odometry.

Using Gazebo to create the odometry

To gain some insight into how Gazebo does this, we are going to take a sneak peek inside the diffdrive_plugin.cpp file:

$ rosed erratic_gazebo_plugins diffdrive_plugin.cpp

The Load(...) function performs the subscription to the topic, and when a cmd_vel message is received, the cmdVelCallback() function is executed to handle it:

And finally, it estimates the distance traversed by the robot using more formulas from the kinematic motion model of the robot. As you can see in the code, you must know the wheel diameter and the wheel separation of your robot:
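That estimation can be sketched, under the usual differential-drive kinematic model, as follows. The struct and function names are illustrative, not taken from the plugin:

```cpp
#include <cassert>
#include <cmath>

// 2D pose estimated from wheel motion: position (x, y) and heading th.
struct Pose { double x = 0.0, y = 0.0, th = 0.0; };

// Update the pose from the distances (m) traveled by each wheel during
// one step, given the wheel separation (m) of the platform.
void updateFromWheels(Pose& p, double d_left, double d_right,
                      double wheel_separation) {
    double d   = (d_left + d_right) / 2.0;               // center distance
    double dth = (d_right - d_left) / wheel_separation;  // heading change
    p.x  += d * std::cos(p.th + dth / 2.0);
    p.y  += d * std::sin(p.th + dth / 2.0);
    p.th += dth;
}
```

Equal wheel distances yield straight-line motion; unequal distances make the heading change in proportion to their difference.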

Notice that the equations are similar to those in diffdrive_plugin; this is because both robots are differential wheeled robots.

Do not forget to insert the following in your CMakeLists.txt file to create the executable of this file:

rosbuild_add_executable(base_controller src/base_controller.cpp)

This code is only a common example and must be extended with more code to make it work with a specific robot. It depends on the controller used, the encoders, and so on. We assume that you have the right background to add the necessary code in order to make the example work fine.

Creating a map with ROS

Getting a map can sometimes be a complicated task if you do not have the correct tools. ROS has a tool that will help you build a map using the odometry and a laser sensor: the slam_gmapping node (http://www.ros.org/wiki/slam_gmapping). In this example, you will learn how to use the robot that we created in Gazebo, as we did in the previous chapters, to create a map, save it, and load it again.

We are going to use a .launch file to make it easy. Create a new file in chapter7_tutorials/launch with the name gazebo_mapping_robot.launch and put in the following code:

With this .launch file, you can launch the Gazebo simulator with the 3D model, the rviz program with the correct configuration file, and slam_gmapping to build a map in real time. Launch the file in one shell and, in another shell, run the teleoperation node to move the robot:

When you start to move the robot with the keyboard, you will see the free and the unknown space on the rviz screen as well as the map with the occupied space; this is known as an Occupancy Grid Map (OGM). The slam_gmapping node updates the map state when the robot moves, or more specifically, when (after some motion) it has a good estimate of the robot's location and of the map. It takes the laser scans and the odometry and builds the OGM for you.

Saving the map using map_server

Once you have a complete map or something acceptable, you can save it to use it later in the navigation stack. To save it, use the following command:

$ rosrun map_server map_saver -f map

This command will create two files, map.pgm and map.yaml. The first is the map in the .pgm format (portable graymap format); the second is the configuration file for the map. If you open it, you will see the following output:
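The exact output is not reproduced here; a typical map.yaml generated by map_saver looks like the following, where the values shown are illustrative and depend on your map:

```yaml
image: map.pgm
resolution: 0.050000
origin: [-50.000000, -50.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
```

The resolution is the size of each cell in meters, the origin is the pose of the lower-left corner of the map, and the thresholds decide which cells are considered occupied or free.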

Now, open the .pgm image file with your favorite viewer, and you will see the map you built:

Loading the map using map_server

When you want to use the map built with your robot, it is necessary to load it with the map_server package. The following command will load the map:

$ rosrun map_server map_server map.yaml

But to make it easy, create another .launch file in chapter7_tutorials/launch with the name gazebo_map_robot.launch and put in the following code:

And now, launch the file using the following command and remember to put the model of the robot that will be used:

$ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"

Then you will see rviz with the robot and the map. In order to localize the robot, the navigation stack will use the map published by the map server together with the laser readings, running a scan-matching algorithm that estimates the robot's location with a particle filter implemented in the amcl (adaptive Monte Carlo localization) node.

Summary

In this chapter, you worked through the steps required to configure your robot to use it with the navigation stack. Now you know that the robot must have a planar laser, must be a differential wheeled robot, and must satisfy some requirements for the base controller and the geometry.

Keep in mind that we are working with Gazebo to demonstrate the examples and explain how the navigation stack works with different configurations. It is more complex to explain all of this directly on a real robotic platform because we do not know whether you have one or have access to one. In any case, depending on the platform, the instructions may vary and the hardware may fail, so it is safer and more useful to run these algorithms in simulation; later, we can test them on a real robot, as long as it satisfies the requirements described thus far.
