If we're going to create software that can think and speak for itself, we should at least know what it's saying. Right?

That was the conclusion reached by Facebook researchers who recently developed sophisticated negotiation software that started off speaking English. Two of the artificial intelligence agents, however, began conversing in a shorthand of their own, one that appeared to be gibberish but was perfectly coherent to the agents themselves.


A sample of their conversation:

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

Dhruv Batra, a Georgia Tech researcher working at Facebook's AI Research lab (FAIR), told Fast Co. Design that "there was no reward" for the agents to stick to English as we know it, and that the phenomenon has occurred multiple times before. The invented shorthand is more efficient for the bots, but it makes the software harder for developers to work with and improve.

"Agents will drift off understandable language and invent codewords for themselves," Batra said. “Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."
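The shorthand Batra describes, in which repeating a token encodes a quantity, can be sketched in a few lines of Python. Everything below (function names, the message format, counting every token as a quantity) is an illustration of the idea, not FAIR's actual protocol.

```python
from collections import Counter

def decode_shorthand(message):
    """Interpret token repetition as a quantity request.

    In this toy scheme, "ball ball ball hat" means the speaker
    wants 3 balls and 1 hat: each token's count is its quantity.
    """
    return dict(Counter(message.split()))

def encode_shorthand(wants):
    """Emit each item's token repeated once per unit wanted."""
    return " ".join(token for token, n in wants.items() for _ in range(n))

msg = encode_shorthand({"ball": 3, "hat": 1})
print(msg)                    # ball ball ball hat
print(decode_shorthand(msg))  # {'ball': 3, 'hat': 1}
```

Such a code is lossless and trivially parseable by both agents, which is exactly why, absent a reward for staying in English, there is nothing to stop them from settling on it.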

Convenient as it may have been for the bots, Facebook decided to require the AI to speak in understandable English.

"Our interest was having bots who could talk to people," FAIR scientist Mike Lewis said.

In a June 14 post describing the project, FAIR researchers said the project "represents an important step for the research community and bot developers toward creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant."