MIT’s robot VR rig lets researchers read the mind of a machine


What does the scanner see? A team at MIT is taking this question more literally than it was perhaps intended, and is trying to see the world from a robot’s perspective to better understand how robots think. That might sound odd, trying to understand the workings of a brain that can be broken down into easily readable lines of code, but as soon as you let this idealized creature out into the real world it starts to encounter situations you could never anticipate. Even simple trajectory questions, like how best to cross a room with a number of obstacles (call it the Frogger problem), can lead to complex decision trees. If one of those trees leads to a failure or a sub-optimal outcome, these researchers want to be able to figure out exactly where the problem began.
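
For a sense of what even a toy version of the Frogger problem involves, here is a minimal pathfinding sketch (my own illustration, not MIT’s code): a breadth-first search over a grid of free and blocked cells. Every choice it makes is buried inside the search loop, which is exactly the kind of opacity the MIT rig is built to expose.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a grid: 0 = free, 1 = obstacle.
    Returns the shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no route across the room

# A room with a wall blocking the direct route (the "Frogger problem"):
room = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(plan_path(room, (0, 0), (2, 0)))  # detours around the wall
```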

That’s harder than it sounds, because if a robot makes the wrong decision (say, a Roomba runs into your foot), it’s difficult to tell whether the fault lies in its thinking (programming) or in its perception. Did it not see you there, or did it see you and decide to go anyway? That’s an important distinction, and this team has an innovative way of checking: for research purposes, their robots actually project their thoughts into the world around them.
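
Here is a hedged sketch of how that distinction shows up in practice, with names of my own invention: if every control cycle records both what the robot perceived and what it decided, then "didn’t see you" and "saw you and went anyway" become distinguishable after the fact.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def control_step(sense, plan, act):
    """One sense-plan-act cycle that logs perception and decision separately."""
    percept = sense()             # what the robot believes is out there
    decision = plan(percept)      # what it chose to do about that belief
    logging.info("perceived=%s decided=%s", percept, decision)
    act(decision)

# Toy run: the robot sees the foot and (correctly) stops.
control_step(lambda: {"obstacle_ahead": True},
             lambda p: "stop" if p["obstacle_ahead"] else "forward",
             lambda d: None)
```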

Thanks to a complex motion-capture and projection rig, the team is able to literally see the decision process splayed out in easy-to-read technicolor. In the words of one researcher, they’ve found a way to turn almost any open lab space into a full-on virtual reality environment for robots.

The system uses motion-capture dots to keep track of both the position and orientation of its robot charges, then renders their perception on the floor via a projection rig hanging overhead. As a quadcopter drone moves through the experimental space, a circle follows it along the ground below, representing the patch of land the drone is currently weighing for pathfinding purposes.
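
A back-of-the-envelope sketch of that projection step, with made-up calibration numbers and a hypothetical draw_circle() hook (the real system presumably does something more sophisticated): convert the mocap fix from meters into projector pixels, then draw the drone’s attention circle at that spot.

```python
# Hypothetical calibration values, not MIT's actual setup:
PIXELS_PER_METER = 120.0    # projector scale (assumed)
ORIGIN_PX = (960, 540)      # pixel that maps to the mocap origin (assumed)

def world_to_pixels(x_m, y_m):
    """Convert a motion-capture position in meters to projector pixels."""
    px = ORIGIN_PX[0] + x_m * PIXELS_PER_METER
    py = ORIGIN_PX[1] - y_m * PIXELS_PER_METER  # screen y grows downward
    return (round(px), round(py))

def render_attention(drone_x_m, drone_y_m, radius_m, draw_circle):
    """Project the region the drone is considering onto the floor beneath it."""
    center = world_to_pixels(drone_x_m, drone_y_m)
    draw_circle(center, round(radius_m * PIXELS_PER_METER))

# e.g. render_attention(1.5, -0.75, 0.5, my_projector.draw_circle)
```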

A Roomba robot’s intention to go forward then turn left is relayed to the rig and projected before it in the form of a glowing dotted line, and if the Roomba encounters an unforeseen obstacle, the researchers can watch as it figures out the best alternate path. What makes this a robot VR sim is that the terrain the robots move through can be totally digital as well, meaning the experimenters can watch a drone respond to the topography of a ravine without leaving the comfort of their own lab.
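
The "totally digital" terrain could work as simply as the following sketch (again my own, hypothetical): OR together the physically sensed obstacle map and a virtual hazard map before handing the result to the planner, so the robot routes around a ravine that exists only in software.

```python
def merged_grid(sensed, virtual):
    """Cell-wise OR of physical obstacles and virtual hazards (1 = blocked)."""
    return [[1 if s or v else 0 for s, v in zip(srow, vrow)]
            for srow, vrow in zip(sensed, virtual)]

lab_floor = [[0, 0, 0, 0],
             [0, 0, 0, 0],
             [0, 0, 0, 0]]   # the real room is empty
ravine    = [[0, 1, 1, 0],
             [0, 1, 1, 0],
             [0, 0, 0, 0]]   # a digital ravine that narrows at one end

world = merged_grid(lab_floor, ravine)
# world can be fed to any planner, e.g. plan_path() from the earlier sketch:
# plan_path(world, (0, 0), (0, 3)) now detours around a hazard that isn't there.
```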

Seemingly simple predictions about pathfinding get much harder when you start adding multiple robots that need to interact with one another in a small physical space; in that case, the nested interactions of complex algorithms make it very difficult to tell why a particular failure occurred. A geographical surveying company, for instance, might employ this technology to bug-test a fleet of drones carrying downward-facing range-finding cameras. Before sending these expensive pieces of equipment into the field, it can run them through many simulations over a digital version of the terrain; a stranded drone, in this case, is still only a few feet away.

What MIT has done is provide a more intelligible way to debug robot AI, one that doesn’t require combing through opaque blocks of code but instead shows decision processes in real time. There are only a few types of robot that could benefit from this approach, but those robots could benefit enormously; or, more to the point, their creators could. Useful, everyday robots have been a long, long time in coming; any technology that streamlines the process of robotics streamlining our lives is a great idea in my book.