
RoboBrain and the knowledge commons

Collaboration between robots is something I’ve touched upon a few times, as researchers attempt to create a shared repository of knowledge that any connected robot can instantly plug into.

For instance, an MIT team developed a group of robots that were capable of collaborating with one another in a bar-type environment. Researchers from the University of Maryland and Brown University aim to scale up this kind of learning.

The researchers hypothesize that if 300 robots examine objects and share what they learn with one another, 1 million objects could be ‘learned’ in just 11 days.
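A quick back-of-the-envelope calculation shows what that claim implies per robot. The figures below come straight from the article; the per-robot rate is simply derived from them:

```python
# Figures stated in the article: 300 robots learning 1,000,000 objects in 11 days.
ROBOTS = 300
OBJECTS = 1_000_000
DAYS = 11

# Implied workload if learning is spread evenly across the fleet.
objects_per_robot_per_day = OBJECTS / (ROBOTS * DAYS)
print(f"~{objects_per_robot_per_day:.0f} objects per robot per day")
# → ~303 objects per robot per day
```

In other words, each robot only needs to handle a few hundred objects a day; the orders-of-magnitude speedup comes entirely from pooling results rather than from any one robot learning faster.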

“By having robots share what they’ve learned, it’s possible to increase the speed of data collection by orders of magnitude,” they say.

Robotic commons

The potential for this is clear, but what is less clear is whether this learning will be proprietary to a particular institution or make of robot, or whether it will be open for free use.

One project that is attempting to make such knowledge freely available is the Cornell-led RoboBrain. The venture, which has also attracted support from Stanford and Brown, aims to develop a framework through which robots can easily share their own knowledge with the cloud, and then access the collective intelligence of others via the same platform.

This collective knowledge bank is achieved by harvesting knowledge from a range of sources, including symbols, shapes, natural language and motions.
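To make the idea concrete, here is a minimal sketch of such a shared knowledge bank, with one bucket per modality (symbols, shapes, natural language, motions). The `KnowledgeBank` class and its method names are purely illustrative assumptions, not RoboBrain's actual API:

```python
from collections import defaultdict

class KnowledgeBank:
    """Illustrative shared store: robots contribute knowledge by modality,
    and any connected robot can query what the others have learned."""

    def __init__(self):
        # One list of entries per modality, e.g. "shapes" or "motions".
        self._store = defaultdict(list)

    def contribute(self, robot_id, modality, fact):
        """A robot uploads one piece of knowledge under a given modality."""
        self._store[modality].append({"source": robot_id, "fact": fact})

    def query(self, modality):
        """Retrieve everything the fleet has learned in this modality."""
        return list(self._store[modality])

bank = KnowledgeBank()
bank.contribute("robot-1", "shapes", "mugs are roughly cylindrical")
bank.contribute("robot-2", "motions", "grasp a mug by the handle")
print(bank.query("shapes"))
# → [{'source': 'robot-1', 'fact': 'mugs are roughly cylindrical'}]
```

The key design point the sketch captures is that knowledge is written once by a single robot but readable by every robot on the platform, which is what makes the collective faster than the sum of its members.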

These collective repositories of knowledge offer tremendous potential for rapid, large-scale learning, as researchers no longer have to build the knowledge base for their robots from scratch each time.

This is especially powerful when researchers from divergent fields can pool their knowledge, thus making the whole much richer than its individual parts.

It will be fascinating to see how far this approach goes, especially as robotics becomes more of a commercial endeavor than an academic one.
