Aerial Robot Estimation and Control with Onboard Sensors

Many recent works on aggressive flight maneuvers rely heavily on precise external tracking systems. This makes them very expensive and greatly limits their flexibility, as they cannot be deployed in unstructured environments such as natural disaster sites. We therefore focus on the use of cameras and on-board processing, making the robot independent of both transmission limitations to ground stations and the aforementioned tracking systems. In turn, we had to find efficient ways to cope with the restricted on-board computational power.

However, existing visual systems mostly rely either on prebuilt maps of known environments or build a map as the robot moves using simultaneous localization and mapping (SLAM) approaches. These methods typically fail once tracking is lost, even for short periods of time.

To avoid this problem, we propose a robust velocity controller, which requires only an estimate of the velocity instead of the full 3D configuration. Depending on the onboard sensors and the underlying assumptions, different strategies can be applied to estimate the velocity.
For this research, we have implemented two different platforms based on the MikroKopter UAV kit (500 g payload).

RGB-D Based Autonomous Velocity Control

In the development of this platform, we have not made any particular assumption on the environment, equipping the quadrotor with an RGB-D sensor. The extraction of velocity measurements relies on the integration of the DVO (Dense Visual Odometry) software from TUM, which computes an estimate of the camera position in space without requiring a full map of the environment. Although this position estimate is affected by an unavoidable cumulative error (because the position is not observable), a velocity can be geometrically derived from it, filtered, and fused with IMU information to obtain a reliable estimate.
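To make the fusion step concrete, the following is a minimal Python sketch of this kind of velocity estimation: visual-odometry poses are differentiated into velocity measurements and blended with the IMU-propagated velocity in a complementary-filter fashion. The class name, gain, and interfaces are illustrative assumptions, not the actual DVO/TeleKyb implementation.

    import numpy as np

    class VelocityFuser:
        """Complementary-filter style fusion of differentiated visual-odometry
        positions with IMU accelerations (illustrative sketch only)."""

        def __init__(self, gain=0.05):
            self.gain = gain          # blending gain for the VO correction
            self.vel = np.zeros(3)    # current velocity estimate [m/s]
            self.prev_pos = None
            self.prev_t = None

        def imu_update(self, accel_world, dt):
            # Propagate the velocity with gravity-compensated acceleration.
            self.vel += accel_world * dt

        def vo_update(self, pos, t):
            # Differentiate consecutive VO poses to obtain a velocity
            # measurement, then blend it into the running estimate; the
            # cumulative VO position drift cancels in the difference.
            if self.prev_pos is not None:
                dt = t - self.prev_t
                if dt > 0:
                    vel_meas = (pos - self.prev_pos) / dt
                    self.vel += self.gain * (vel_meas - self.vel)
            self.prev_pos, self.prev_t = pos.copy(), t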

In addition, the RGB-D sensor is used to compute a local map of the obstacles in the surroundings of the robot, and obstacle avoidance techniques are employed to improve the safety and autonomy of the system. The resulting platform relies only on on-board sensors, makes no particular assumption on the environment, and is suitable both for autonomous navigation and for teleoperation.
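As a rough illustration of how such a local obstacle map can be turned into an avoidance command, the sketch below computes a repulsive velocity from nearby obstacle points with a simple potential-field rule. The function, distances, and gains are hypothetical and much simpler than the probabilistic approach described in the next section.

    import numpy as np

    def repulsive_velocity(obstacle_points, safe_dist=1.5, gain=0.8):
        """Repulsive velocity command from obstacle points given in the
        robot frame (potential-field sketch, illustrative only).
        obstacle_points: (N, 3) array of obstacle positions [m]."""
        cmd = np.zeros(3)
        for p in obstacle_points:
            d = np.linalg.norm(p)
            if 1e-6 < d < safe_dist:
                # Push away from the obstacle, stronger when closer.
                cmd -= gain * (safe_dist - d) / safe_dist * (p / d)
        return cmd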

Since our current setup employs an ASUS Xtion Pro Live as the RGB-D sensor, the platform is currently suitable only for indoor environments. However, we are planning to move to outdoor applications by integrating a more conventional stereo camera and a GPS receiver.

Obstacle Tracking and Avoidance

The depth map from the RGB-D sensor can be used efficiently to track the obstacles in the vicinity of the robot using multi-target tracking techniques. In particular, a robot-centered bin-occupancy filter on a limited domain surrounding the robot allows obstacle avoidance in all directions of motion, and not only within the field of view of the sensor. Additionally, this comes with a constant computational cost that does not grow over time, and it exploits the benefits of a fully probabilistic approach.
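The following is a simplified Python sketch of a robot-centered bin-occupancy filter over a fixed local grid: the grid is shifted to stay centered on the robot, a survival/birth model propagates the per-bin occupancy probabilities, and depth detections are folded in with a Bayes update. All parameters and the exact model are illustrative assumptions, not the filter used on the platform.

    import numpy as np

    class BinOccupancyFilter:
        """Simplified robot-centered bin-occupancy filter on a fixed
        local grid (illustrative sketch only)."""

        def __init__(self, shape=(40, 40, 10), res=0.25,
                     p_birth=0.01, p_survive=0.99):
            self.p = np.full(shape, 0.01)   # per-bin occupancy probability
            self.res = res                  # bin edge length [m]
            self.p_birth = p_birth
            self.p_survive = p_survive

        def predict(self, shift_bins):
            # Shift the grid opposite to the robot's motion so it stays
            # robot-centered, then apply survival and birth probabilities.
            self.p = np.roll(self.p, shift=shift_bins, axis=(0, 1, 2))
            self.p = self.p_survive * self.p + self.p_birth * (1.0 - self.p)

        def update(self, detected, p_d=0.9, p_fa=0.05):
            # Bayes update with a boolean detection mask from the depth
            # map: detection probability p_d, false-alarm rate p_fa.
            like_occ = np.where(detected, p_d, 1.0 - p_d)
            like_free = np.where(detected, p_fa, 1.0 - p_fa)
            post = like_occ * self.p / (
                like_occ * self.p + like_free * (1.0 - self.p))
            self.p = np.clip(post, 1e-3, 1.0 - 1e-3)

The constant per-step cost follows from the grid having a fixed size: each predict/update touches every bin exactly once, regardless of how long the robot has been flying.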

Homography-Based Velocity Estimation

In our second platform, we have addressed the problem of motion estimation from two consecutive frames by making use of the continuous homography constraint in the presence of flat environments (as found in most indoor scenarios, as well as when flying at high altitudes), which allows a closed-form decomposition of the homography matrix. This reveals both the rotation and the translation of the UAV and allows for robust velocity control.
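As an illustration of homography-based motion estimation, the sketch below fits and decomposes the discrete homography between two consecutive frames using OpenCV. Our method uses the continuous homography constraint in closed form, so this is a related but simplified stand-in; the function name and feature-tracking parameters are assumptions.

    import cv2
    import numpy as np

    def motion_from_frames(prev_gray, curr_gray, K):
        """Estimate rotation and scaled translation between two frames of
        a planar scene via homography decomposition (sketch only)."""
        # Track sparse features from the previous to the current frame.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                           qualityLevel=0.01, minDistance=8)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       pts_prev, None)
        good = status.ravel() == 1
        p0, p1 = pts_prev[good], pts_curr[good]

        # Robustly fit the homography induced by the dominant plane.
        H, _ = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)

        # Decompose into rotation R, translation t (scaled by the plane
        # distance), and plane normal n; up to four candidate solutions
        # remain and must be disambiguated, e.g. with the IMU attitude.
        _, Rs, ts, ns = cv2.decomposeHomographyMat(H, K)
        return Rs, ts, ns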

As the metric scale of the observed velocities cannot be obtained from visual information alone, we fuse the camera measurements with acceleration readings from the on-board inertial measurement unit, using different scale estimation techniques.
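One simple instance of such a scale estimation technique is a recursive least-squares fit between changes of the unscaled visual velocity and the metric velocity change implied by the IMU acceleration. The sketch below is an illustrative assumption, not the exact estimator used on the platform.

    import numpy as np

    class ScaleEstimator:
        """Recursive least-squares estimate of the metric scale factor
        mapping the scale-free visual velocity to metric units, with the
        IMU acceleration as metric reference (illustrative sketch)."""

        def __init__(self):
            self.num = 0.0   # accumulates <accel * dt, dv_visual>
            self.den = 1e-9  # accumulates <dv_visual, dv_visual>
            self.prev_v = None

        def update(self, v_visual, accel_world, dt, forget=0.999):
            # Over dt, the metric velocity change is accel_world * dt and
            # the visual one is scale * (v_visual - prev_v); match them in
            # the least-squares sense with exponential forgetting.
            if self.prev_v is not None:
                dv = v_visual - self.prev_v
                self.num = forget * self.num + np.dot(accel_world * dt, dv)
                self.den = forget * self.den + np.dot(dv, dv)
            self.prev_v = v_visual.copy()
            return self.num / self.den  # current scale estimate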

The algorithms were shown to work reliably at around 35 frames per second on our evaluation setup, consisting of:

- MikroKopter UAV kit (500 g payload)
- Intel Atom 1.83 GHz dual-core CPU
- Single monochrome camera with VGA resolution
- Inertial measurement unit (IMU)

Our algorithms make use of the OpenCV library and ROS, the Robot Operating System. Furthermore, they are integrated into our control framework TeleKyb.
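For completeness, a minimal rospy node skeleton of the kind used to wire such a pipeline into ROS could look as follows; the node and topic names are hypothetical, not the actual TeleKyb interfaces.

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import TwistStamped
    from sensor_msgs.msg import Image

    # Minimal ROS node skeleton: subscribe to camera images, run the
    # velocity estimation, publish the result (topic names illustrative).

    def image_callback(msg):
        # ... run the homography-based velocity estimation on msg ...
        twist = TwistStamped()
        twist.header.stamp = msg.header.stamp
        vel_pub.publish(twist)

    rospy.init_node('velocity_estimator')
    vel_pub = rospy.Publisher('velocity_estimate', TwistStamped, queue_size=1)
    rospy.Subscriber('camera/image_raw', Image, image_callback)
    rospy.spin()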