Saliency Detection and Model-based Tracking: a Two-Part Vision System for Small Robot Navigation in Forested Environments

Abstract

Towards the goal of fast, vision-based autonomous flight, localization, and map building to support local planning and
control in unstructured outdoor environments, we present a method for incrementally building a map of salient tree trunks
while simultaneously estimating the trajectory of a quadrotor flying through a forest. We make significant progress in
a class of visual perception methods that produce low-dimensional geometric information, ideal for planning and
navigation on aerial robots, while directing computational resources using motion saliency, which selects the objects
most important to navigation and planning. By low-dimensional geometric information, we mean coarse geometric primitives,
which for the purposes of motion planning and navigation are suitable proxies for real-world objects. Additionally, we
develop a method for summarizing past image measurements that avoids expensive computations on a history of images
while maintaining the key non-linearities that make full map and trajectory smoothing possible. We demonstrate results
with data from a small, commercially available quadrotor flying in a challenging, forested environment.
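To make the idea of coarse geometric primitives as proxies concrete: a tree trunk can be approximated as a vertical cylinder, i.e. a circle in the ground plane, and its center and radius recovered from noisy point measurements by a linear least-squares (Kåsa) circle fit. The sketch below is illustrative only; the function name and the simulated data are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def fit_trunk_circle(points):
    """Kasa circle fit: model a tree trunk as a circle in the ground plane.

    Solves the linear system arising from
        x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    for the center (cx, cy) and radius r.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    return cx, cy, r

# Hypothetical example: noisy boundary points from a trunk of radius 0.3 m
# centered at (2.0, 1.0) in the ground plane.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 50)
pts = np.column_stack([2.0 + 0.3 * np.cos(theta),
                       1.0 + 0.3 * np.sin(theta)])
pts += rng.normal(scale=0.01, size=pts.shape)
cx, cy, r = fit_trunk_circle(pts)
```

Such a fit reduces each trunk to three numbers, which is exactly the kind of low-dimensional landmark that keeps joint map-and-trajectory smoothing tractable on a small aerial platform.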