From unboxing RPLIDAR to running in ROS in 10 minutes flat

We received our RPLIDAR this morning and, just as kids on Christmas day, we were very eager to play with it right away.

But I’ll hold my horses, as I can hear you ask: “and what exactly is a RPLIDAR?”

An RPLIDAR is a low-cost LIDAR sensor (i.e., a light-based radar, a “laser scanner”) from Robo Peak, suitable for indoor robotic applications. Basically, it’s a cheaper version of that weird rotating thing you see on top of the Google self-driving cars. You can use it for collision avoidance and for the robot to quickly figure out what’s around it.

Google self-driving car from https://www.google.com/selfdrivingcar

We bought some sensors for our incoming robotic fleet that will take over the world (the true aim of our Cloud Robotics initiative), and this is the first arrival.

I didn’t have ROS installed on my system and I really wanted to get going, so while TMB proceeded with the HW unboxing and config I got started with the SW on my laptop.

Here’s the RPLIDAR in all its might:

RPLIDAR

Mind you, I run Ubuntu 14.04, so if your OS is different the process might need some adjustment.
The whole thing is quite straightforward. The RPLIDAR starts spinning as soon as you plug it into your laptop’s USB port. The software part involves installing ROS, giving it some basic configuration, creating a workspace, downloading the ROS node for the RPLIDAR, and building it with catkin. Although it looks like a lot of commands, it takes very little time.
Anyhow, here’s the quick and dirty list of commands I entered to get it working:
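A sketch of those commands, assuming ROS Indigo on Ubuntu 14.04 (trusty). The apt source line and key follow the standard ROS Indigo install instructions, and the workspace path `~/catkin_ws` is just a conventional choice; adjust both for your setup:

```shell
# Install ROS Indigo (Ubuntu 14.04 / trusty)
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo apt-get update
sudo apt-get install ros-indigo-desktop-full
sudo rosdep init && rosdep update
echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc

# Create a catkin workspace and fetch the RPLIDAR ROS node
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/robopeak/rplidar_ros.git

# Build it and source the workspace
cd ~/catkin_ws
catkin_make
source devel/setup.bash

# Plug in the RPLIDAR and launch the demo (opens rviz with the scan)
roslaunch rplidar_ros view_rplidar.launch
```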

Rviz will pop up and show a background grid. The “view” from the laser scanner will be marked in red. The laser scanner sits at the center of the grid and has a range of roughly 15 cm to 6 meters, so you’ll be able to see everything around it on its scanning plane within that range.

Troubleshooting
If you get permission errors accessing the USB device with ROS, take a look here:
http://question2722.rssing.com/browser.php?indx=42655234&last=1&item=4

I am working on my senior design project: a drone with six motors that can fly without a controller, using an RPLIDAR
and a flight control board. However, I am using a Raspberry Pi running Ubuntu, and I am wondering which ROS distribution I should install, and how I can modify the drone to hold a certain altitude.

There is no device ID with the data, but you’ll typically start 2 different ROS nodes reading the data from two different devices (e.g., /dev/ttyUSB0 /dev/ttyUSB1). You’ll then have the ROS nodes publishing on two different ROS topics (e.g., /front_scan /rear_scan).
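A sketch of that two-lidar setup; the node names, frame IDs, and device paths here are assumptions, and each node needs its own `__name` so the two instances don’t collide:

```shell
# Hypothetical two-RPLIDAR setup: one rplidar_ros node per device,
# each with its own node name, frame, and scan topic
rosrun rplidar_ros rplidarNode __name:=front_rplidar \
    _serial_port:=/dev/ttyUSB0 _frame_id:=front_laser scan:=/front_scan &
rosrun rplidar_ros rplidarNode __name:=rear_rplidar \
    _serial_port:=/dev/ttyUSB1 _frame_id:=rear_laser scan:=/rear_scan &
```

Note that /dev/ttyUSB0 and /dev/ttyUSB1 are assigned in plug-in order, so on a real robot you’d typically pin each device to a stable name with a udev rule.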

kazi ataul goni
In an industrial setting, a robot will pick up apples and sort them. The robot moves fast, so if any human is near it, it should slow down. For that purpose, I want to use an RPLIDAR A2 in a fixed position; using the RPLIDAR, I want to detect whether any human or other obstacle is approaching the danger zone. So far, using the RPLIDAR Python package, I have been able to extract data from it. As I am totally new to this, I do not know how to proceed.

I was thinking I could map the environment beforehand using Hector SLAM, which I have seen here, so that the robot can sense the environment; later, when the environment changes, it could decide whether a human or obstacle is near the robot. After I have the map of the environment, what would be the next step?

I will be so glad if you could give me an idea of how I can achieve this.

What you want to do is pretty standard and is implemented in the move_base ROS package of any (supported) robot.
Basically, you can navigate a map you have built with SLAM (e.g., gmapping or Google Cartographer) and use the lidar for obstacle avoidance.
I suggest you have a look at the TurtleBot tutorials to see how it’s done.
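On a TurtleBot under ROS Indigo, for instance, the whole flow looks roughly like this (launch file names follow the TurtleBot navigation tutorials; the map path is an arbitrary choice):

```shell
# Build a map by driving the robot around while gmapping runs
roslaunch turtlebot_navigation gmapping_demo.launch
rosrun map_server map_saver -f /tmp/my_map

# Later: localize with AMCL and navigate the saved map via move_base
roslaunch turtlebot_navigation amcl_demo.launch map_file:=/tmp/my_map.yaml
```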