Even the scientists who build our most complicated software are sometimes baffled by how it works, and, more frighteningly, by how it breaks.

Remember a few months ago when Microsoft produced a friendly AI chatbot named Tay, designed to interact like a 19-year-old? It was a social and marketing experiment that quickly morphed into a public-relations nightmare. Within a day, bombarded by hateful Twitter trolls, Tay had turned into a white supremacist, tweeting racist and offensive statements, and Microsoft had to shut it down.

The Tay failure was a bug in the system. The consequences weren’t major, but it was just one example of how AI can act in ways even its creators don’t anticipate.

It’s a story cited by Samuel Arbesman in his new book Overcomplicated: Technology at the Limits of Comprehension. In it, he elucidates how the same technological advances that have improved our lives are also making the world harder and harder to comprehend. We see this in the bugs, accidents, and “flukes” that often arise with little notice, such as the July 8 glitch on the New York Stock Exchange that suspended trading for several hours, or the failures in Toyota cars that caused vehicles to accelerate when drivers hit the brakes.

Arbesman, who is a “complexity scientist” and the scientist in residence at Lux Capital, argues that complexity is everywhere–in obvious places, like a powerful computer, but also in our household appliances, and in the tens of millions of lines of code in our cars.

Everyday consumers are often shielded from this complexity by intuitive user interfaces and may not be aware of it until something goes wrong. But to Arbesman, what’s scary is not that we don’t understand the systems and machinery that are at this point responsible for society’s function and our individual safety. It’s that even those who are supposed to understand them often don’t.

[Photo: Michael H/Getty Images]

“More and more, even the experts don’t really have a full understanding of their system, that they’ve worked on on a daily basis or that they’ve built. That’s really the more startling fact: that we’ve moved into this new era of incomprehensibility,” Arbesman tells Co.Exist.

He notes that there are only a “handful of people on the planet” who understand the software that makes sure airplanes don’t crash into each other in midair–and even they are sometimes surprised by how the software behaves.


The goal of Overcomplicated isn’t to scare anyone. Rather, it’s to suggest strategies for living in a complex world run by complex systems. One is to build technologies in closer accordance with the linear ways that human brains think. That could mean changing programming languages to align more closely with how people count and speak, thereby reducing the number of mistakes a programmer might make. Another is accepting that building complex technology, like self-driving cars, is an iterative process that needs constant improvement and room for error.

An intriguing part of Arbesman’s argument is that a “machine ecology” is evolving that looks much more like natural biological systems than the clear, rule-based systems that humans tend to design. He believes that this could be the key to better understanding the technologies we invent. Before scientists understood evolution, early naturalists used to collect animal specimens and learn through observation how they might be related. A similar approach could be taken to collecting software bugs–bits and pieces that we don’t fully understand–until we can see the bigger picture.

Arbesman dedicates his book to his two children, so I ask what education and career advice he would have for them, growing up in a world that is anything but cut-and-dried. He says he might advise them to become “generalists” of a sort, people capable of dabbling in different fields. The “Renaissance man” of yore may be dead, since it is no longer possible to make discoveries across so many fields the way Leonardo da Vinci did, but he thinks we need to pause on the path toward ever narrower specialization of knowledge.

As he writes in the book:

“The places where generalists can thrive best are the places where we understand the least, where the systems are so complicated and interconnected that the best we can do is hope for a chronicling of the miscellaneous. What this means is that the education of generalists will involve not just learning what is known, but also learning ways of exploring the unknown, the new, and the unexpected.”


About the author

Jessica Leber is a staff editor and writer for Fast Company's Co.Exist. Previously, she was a business reporter for MIT’s Technology Review and an environmental reporter at ClimateWire.