Bill Smart is a Professor in the Robotics Program at Oregon State University. He is an Associate Director of the Collaborative Robotics and Intelligent Systems (CoRIS) Institute, and Director of the Robotics Program. Smart holds a Ph.D. and Sc.M. in Computer Science from Brown University, an M.Sc. in Intelligent Robotics from the University of Edinburgh, and a B.Sc. (hons) in Computer Science from the University of Dundee. His research focuses on human-robot interaction, long-term autonomy, assistive robotics, and the interactions between robotics, policy, and law.

Abstract:

Recently, there has been considerable interest in data-intensive machine learning techniques, such as Deep Neural Networks, for perception tasks. These techniques have achieved impressive results on image recognition and classification tasks, outperforming previous approaches on a variety of data sets. However, these techniques rely on large, labeled training and testing corpora, making them problematic for use on real mobile robots. Labeled images in traditional data sets are often not representative of images taken from robot platforms, where there are often severe constraints on the viewing angle and type of sensor used. However, collecting a large data set from a mobile robot platform is prohibitively time-consuming, since the robot must move through physical space to position its camera. Labeling these images imposes an additional burden on the human. In this paper, we present a framework for using data-intensive machine learning techniques on real, physical robots that directly addresses these concerns. The approach uses only minimal human input to identify objects of interest in the environment. The robot then autonomously curates a large, labeled data set on which the machine learning algorithms can run. We also describe ongoing work on extending the approach to identify novel objects in the world, and to refine recognition models for these and for the original set of objects over time. We summarize progress on this work to date, and discuss our plans to fly the system on the NASA AstroBee platform on the International Space Station, hopefully sometime in 2019.
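The core idea of amplifying minimal human input can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function name `curate_dataset`, the circular viewpoint pattern, and the `LabeledSample` record are all our own stand-ins for the robot driving around a human-identified object and auto-labeling each captured image with the known object identity.

```python
import math
from dataclasses import dataclass

@dataclass
class LabeledSample:
    image_id: str       # stand-in for a captured camera frame
    label: str          # object identity, supplied once by the human
    viewpoint: tuple    # (x, y, z) camera position used for the capture

def curate_dataset(objects, radius=1.0, n_views=12):
    """Given a few human-labeled object locations, visit viewpoints on a
    circle around each object and record one auto-labeled sample per view.
    One human label thus yields n_views labeled training examples."""
    samples = []
    for label, (ox, oy) in objects.items():
        for k in range(n_views):
            theta = 2 * math.pi * k / n_views
            cam = (ox + radius * math.cos(theta),
                   oy + radius * math.sin(theta),
                   0.5)  # fixed camera height, an arbitrary choice here
            samples.append(LabeledSample(f"{label}_{k:03d}", label, cam))
    return samples

# A single human-provided label ("mug" at the origin) becomes 12
# automatically labeled samples from distinct viewpoints.
data = curate_dataset({"mug": (0.0, 0.0)}, n_views=12)
```

The point of the sketch is the ratio: the human effort is one label per object, while the dataset size scales with the number of viewpoints the robot is willing to visit.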