Asimov: Middleware for Modeling the Brain on the iRobot Create

The iRobot Create provides a low-cost robotics platform ideal for students and hobbyists to explore simple robotic control. By adding a netbook running the Python-based Asimov middleware, the robot becomes an advanced tool for developing and integrating brain-based sensorimotor models for navigation and decision making.

Abstract

Developing and implementing brain-based models of sensory perception for navigation and decision making is an active area of research in robotics. This is because while robots can greatly exceed humans' ability to obtain sensory information about their environment, they are barely able to approach the data processing ability humans exercise almost effortlessly. Not only will better models increase the functionality of robots, but they may also provide insight into how our own brains work. Enter: Asimov. Asimov is a Python-based middleware application that facilitates the development and testing of brain-based sensorimotor models on the iRobot Create.

The Asimov system is composed of three essential components: a
physical platform, the switch daemon, and one or more compute clients.

The physical platform comprises the iRobot Create robot and a mounted netbook serving as the control computer. The control computer collects signals from all available sensors (robot, camera, and stereo microphone array), streams the sensory data to the switch daemon, listens for motor commands from the switch daemon, and delivers those commands to the robot. A pair of multi-threaded Python control scripts is responsible for all of these functions.
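The division of labor on the control computer can be sketched with two threads: one streaming sensor readings out, one delivering incoming motor commands to the robot. The sketch below is illustrative only; the sensor fields, queue-based transport, and function names are assumptions standing in for the real serial and network interfaces.

```python
import json
import queue
import threading
import time

# Hypothetical stand-in for polling the Create's sensor packet.
def read_sensors():
    return {"bump_left": False, "bump_right": False, "wall_signal": 12}

uplink = queue.Queue()    # sensory JSON bound for the switch daemon
downlink = queue.Queue()  # motor commands arriving from the daemon

def sensor_thread(stop, hz=10):
    """Stream JSON-encoded sensor readings at a fixed rate."""
    while not stop.is_set():
        uplink.put(json.dumps(read_sensors()))
        time.sleep(1.0 / hz)

def motor_thread(stop):
    """Listen for motor commands and hand them to the robot."""
    while not stop.is_set():
        try:
            cmd = json.loads(downlink.get(timeout=0.1))
        except queue.Empty:
            continue
        # In the real script this would write to the Create's serial port.
        print("drive", cmd["velocity"], cmd["radius"])

stop = threading.Event()
threads = [threading.Thread(target=sensor_thread, args=(stop,)),
           threading.Thread(target=motor_thread, args=(stop,))]
for t in threads:
    t.start()
downlink.put(json.dumps({"velocity": 200, "radius": 0}))
time.sleep(0.3)
stop.set()
for t in threads:
    t.join()
```

In the actual system the two queues would be replaced by sockets to the switch daemon, but the threading structure is the same: sensing and actuation never block each other.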

The switch daemon acts as a relay between the low-power physical platform and the high-power compute clients. It is based on the memcached protocol but adds a streaming UDP interface. Memcached was chosen as the starting point because it is fast, simple, and supports atomic transactions, while the added streaming communication model reduces packet-loss-induced latency spikes, a critical consideration in high-loss Wi-Fi environments.
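To make the memcached-plus-UDP idea concrete, the sketch below frames a sensor reading as a standard memcached text-protocol `set` command and streams it as a single UDP datagram to a local test listener. The key name and payload are illustrative; the framing itself follows the memcached text protocol.

```python
import json
import socket

def frame_set(key, value, flags=0, exptime=0):
    """Frame a memcached text-protocol 'set' command as bytes."""
    payload = value.encode()
    header = f"set {key} {flags} {exptime} {len(payload)}\r\n".encode()
    return header + payload + b"\r\n"

# Stand-in listener on loopback; in Asimov this would be the switch daemon.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(frame_set("sensors", json.dumps({"wall_signal": 12})),
          rx.getsockname())

datagram, _ = rx.recvfrom(4096)
tx.close()
rx.close()
```

Because each reading is a self-contained datagram, a lost packet simply means one stale reading; the sender never stalls waiting for retransmission, which is where the latency-spike reduction comes from.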

The compute clients handle all the heavy lifting of processing the received sensory data and executing the neural models. They give the robot the benefit of substantial CPU and RAM resources, as well as accelerators such as GPUs, that would otherwise be unavailable on the netbook. All that is required of a client is a means of communicating with the memcached-based switch daemon and the ability to read and write JSON data; JPEG support is additionally needed only when working with the visual inputs. At present, Python and MATLAB (through Java) interfaces are available.
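A single step of a compute client might then look like the following: read the latest sensory JSON from the daemon, run a model, and write back a motor command. The toy avoidance "model", the key names, and the `DictCache` stand-in (mimicking a memcached client's `get`/`set`) are all assumptions for illustration.

```python
import json

class DictCache(dict):
    """Stand-in for a memcached client exposing get()/set()."""
    def set(self, key, value):
        self[key] = value

def model_step(cache):
    """Read sensory JSON, apply a toy avoidance model, write a command."""
    sensors = json.loads(cache.get("sensors"))
    if sensors.get("bump_left") or sensors.get("bump_right"):
        command = {"velocity": -100, "radius": 0}      # back away
    else:
        command = {"velocity": 200, "radius": 32767}   # drive straight
    cache.set("motor", json.dumps(command))

cache = DictCache()
cache.set("sensors", json.dumps({"bump_left": True, "bump_right": False}))
model_step(cache)
```

Any environment that can speak the memcached protocol and parse JSON can replace `DictCache` here, which is why adding new client languages (as with the MATLAB-through-Java interface) requires no changes to the rest of the system.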