Staffing bars and restaurants with machines sure sounds convenient, but getting them to collaborate smoothly in such a frenzied environment poses significant hurdles. Their ability to interact with one another and with the world around them is just not quite at the level of your typical wait staff. But MIT researchers have made an impressive advance in this area, showcasing a team of three robots that work together to deliver beer and suggesting the technology responsible could translate to cooperative robotic systems not only for bars and restaurants, but also for hospitals and disaster zones.

The biggest problem facing the team at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) was finding a way for robots to cope with the uncertainties of the human world, which they say break down into three kinds: sensors that can't precisely determine a robot's own location and status, or those of the things around it; unpredictable outcomes, such as dropped items; and an inability for machines to communicate with one another, whether due to noise or to being out of range.

With these uncertainties in mind, the team came up with a new planning approach aimed at allowing the robots to see tasks more like humans do. Just as we can subconsciously walk to the corner store while daydreaming about dinner, the robots would be able to complete basic tasks without sweating every single step. The team describes these as "macro-actions," the idea being that the robots can be prepared for a general task and its various outcomes without needing someone to hold their hand the whole way.

This involves programming them to perform a series of macro-actions that each include multiple steps. For example, when a waiter Turtlebot enters the bar, it needs to be prepared for a variety of situations, such as the bartender being busy serving another waiter robot or not being observable at all.
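The idea can be illustrated with a small sketch. In the toy Python below (all names and the simulated robot are illustrative assumptions, not part of the MIT system), a single macro-action bundles several primitive steps and handles the outcomes the article mentions, such as a busy bartender, so whoever calls it never has to direct the individual moves:

```python
class SimRobot:
    """Toy simulated robot; the observation sequence is scripted for illustration."""
    def __init__(self, observations):
        self.observations = list(observations)
        self.log = []  # record of primitive steps taken

    def navigate_to(self, place):
        self.log.append(f"goto:{place}")

    def observe(self):
        return self.observations.pop(0) if self.observations else "nothing"

    def wait(self):
        self.log.append("wait")

    def request(self, item):
        self.log.append(f"request:{item}")

    def pick_up(self, item):
        self.log.append(f"pickup:{item}")


def macro_get_beverage(robot):
    """Hypothetical macro-action: travel to the bar, wait while the bartender
    is busy, then collect the drink. The caller only picks this macro-action;
    the steps inside run without external supervision."""
    robot.navigate_to("bar")
    obs = robot.observe()
    while obs == "bartender_busy":  # one foreseen outcome: bartender is serving
        robot.wait()
        obs = robot.observe()
    if obs == "bartender_ready":
        robot.request("beverage")
        robot.pick_up("beverage")


bot = SimRobot(["bartender_busy", "bartender_ready"])
macro_get_beverage(bot)
print(bot.log)  # shows the primitive steps the macro-action handled on its own
```

Running this, the robot waits out the busy bartender and then completes the pickup, all from one high-level command — a sketch of the hand-off the researchers describe.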

"You’d like to be able to just tell one robot to go to the first room and one to get the beverage without having to walk them through every move in the process," says MIT graduate student Ariel Anders. "This method folds in that level of flexibility."

Putting their new approach to the test, the team turned their workspace into a makeshift bar. With thirsty humans in different offices awaiting service, a pair of Turtlebots (open-source robots on wheels) wielding small coolers scooted around taking orders. The humans would push a button on the robot to request a drink, prompting the robot to return to the "bar" where a PR2 robot was waiting to fill the order.

Initially there were a few teething problems that bore out the uncertainties the researchers had forecast. The PR2 supply robot could only serve one Turtlebot at a time, and the robots were unable to communicate with one another from far away, forcing the team back to the drawing board.

"These limitations mean that the robots don’t know what the other robots are doing or what the other orders are," Anders says. "It forced us to work on more complex planning algorithms that allow the robots to engage in higher-level reasoning about their location, status, and behavior."

The method has been dubbed "MacDec-POMDPs," as it integrates macro-actions into a previous planning model referred to as decentralized partially observable Markov decision processes (Dec-POMDPs), which have generally been too complex to scale up for use in real-world applications.
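In a Dec-POMDP, each robot chooses its actions from its own local observations, with no shared global view; the macro-action extension lets those choices be multi-step behaviors with termination conditions, which cuts down the number of decision points the planner must consider. A minimal, hypothetical sketch of those ingredients (every name here is an assumption for illustration, not the paper's API):

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class MacroAction:
    """Option-like behavior: a local policy plus a termination test."""
    name: str
    policy: Callable[[str], str]       # maps a local observation to a primitive action
    terminates: Callable[[str], bool]  # True once the macro-action is finished


@dataclass
class RobotAgent:
    """One decentralized agent; it only ever sees its own observation history."""
    macro_actions: List[MacroAction]
    history: List[str] = field(default_factory=list)

    def step(self, macro: MacroAction, observation: str) -> str:
        # No global state is consulted: the primitive action depends solely
        # on this robot's local observation, as in a Dec-POMDP.
        self.history.append(observation)
        return macro.policy(observation)


# Example: a "deliver drink" macro that moves until the office is reached.
deliver = MacroAction(
    name="deliver_drink",
    policy=lambda obs: "drop_off" if obs == "at_office" else "move_forward",
    terminates=lambda obs: obs == "at_office",
)

waiter = RobotAgent(macro_actions=[deliver])
print(waiter.step(deliver, "hallway"))    # -> "move_forward"
print(waiter.step(deliver, "at_office"))  # -> "drop_off"
print(deliver.terminates("at_office"))    # -> True
```

The planner's job then reduces to choosing which macro-action each robot should run next, rather than scripting every primitive move — the flexibility Anders describes above.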

The researchers are now testing out the new planning algorithm on bigger search-and-rescue style problems and say it could inspire cooperative systems on a grander scale.

"Almost all real-world problems have some form of uncertainty baked into them," says the lead author of the research paper, Chris Amato. "As a result, there is a huge range of areas where these planning approaches could be of help."