Dave, this conversation can serve no purpose anymore. Goodbye.

An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. Researchers shut the system down when they realized the AI was no longer using English.

The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI “agents.”

Our ability to think about abstract things sets us apart from other animals. It’s why we have big heads, big philosophies, big religions, and, often, big problems with absolutely no basis in the physical world.

We’re in the middle of a fascinating experiment in civilization that started around the time of the Industrial Revolution, but really got going in the second half of the 20th century, when computers enabled our abstract thinking to affect the physical world on a vastly larger scale.

We’ve already seen that mixing humans and advanced technology can have undesirable effects. The financial crisis of 2008 happened in large part because really smart people on Wall Street created financial structures that became too abstract for even their creators to fully understand—especially when set loose in the market to mix with human emotion and other financial structures.

The “good news” about failures of financial abstraction is that they can, apparently, be corrected by offsetting measures of abstraction, such as the creation of additional (abstract) money. Complicated financial structures also collapse when they are no longer believed in—like bad dreams.

AI is different in that it could well evolve into something that surpasses DNA-based organisms. Once fully viable, it may not collapse so easily, if at all.