[The planet Earth] has its own external prosthetic devices, and these devices are helping us all to communicate and interact with each other. But when you actually visualize it, all the connections that we’re doing right now — [the main image on this post is] the mapping of the Internet — it doesn’t look technological. It actually looks very organic. This is the first time in the entire history of humanity that we’ve connected in this way. And it’s not that machines are taking over. It’s that they’re helping us to be more human, helping us to connect with each other.

The most successful technology gets out of the way and helps us live our lives. And really, it ends up being more human than technology, because we’re co-creating each other all the time. And so this is the important point that I like to study: that things are beautiful, that it’s still a human connection — it’s just done in a different way. We’re just increasing our humanness and our ability to connect with each other, regardless of geography. So that’s why I study cyborg anthropology.

To me, this is exactly what I aspire to – to work with a team of dedicated folks to make something that helps customers accomplish something more easily and efficiently, and that in the end, allows them to be more human. Something that helps them cull the data they have into the data they need. Something that gives them the freedom to exercise their own human judgment.

At the Tableau Conference in Las Vegas last week, Steven Levitt shared an insight aimed squarely at the crowd of 15,000 business analysts and data junkies at the conference:

You need to ask the right question. Bumbling around in data doesn’t lead to answers. You need insight.

I wrote it down – because it called to mind something that had just struck me in a podcast:

In mathematics, someone instinctively believes something, and then they set about to prove it or disprove it. But the mathematics they use to prove the theorem – that isn’t the mental process they use to generate the theorem in the first place.

This causes people to think there might be a process for doing this stuff, a process that, if followed, must work every time.

The thesis in both cases is that data is essential – but it’s not enough. It’s not what leads to insight and innovation. In each case, what leads us there is human imagination and judgment. This is a timely message as more and more of our largest companies focus on machine learning and data harvesting, and as the consequences of that begin to come home. Cathy O’Neil wrote about the dangers of big data and algorithms last year:

The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.

The conclusion is inescapable. Data isn’t enough. Just like machine learning isn’t enough. And technology isn’t enough. All of these things are tools. We can become Luddites, trying to ignore the data economy around us, unable to figure out how to use Apple EarPods, or…we can use our uniquely human moral imagination. Quoting Cathy O’Neil again:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.
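To make O’Neil’s point concrete, here is a minimal sketch (my own illustration, not from her book) of what “embedding better values into our algorithms” can look like in practice. The `score`, `group`, and `approve` names are hypothetical: a pure profit rule would simply take the top scores, while this version caps approvals per group so that no group is shut out by the score alone.

```python
def approve(applicants, quota_per_group):
    """Approve the top-scoring applicants, but cap approvals per group.

    This is a toy fairness constraint: rather than optimizing the raw
    score alone, we deliberately trade some score for balance across
    groups. `score` and `group` are illustrative fields, not a real API.
    """
    approved = []
    counts = {}  # approvals granted so far, per group
    for a in sorted(applicants, key=lambda a: a["score"], reverse=True):
        g = a["group"]
        if counts.get(g, 0) < quota_per_group:
            approved.append(a["id"])
            counts[g] = counts.get(g, 0) + 1
    return approved

applicants = [
    {"id": 1, "group": "A", "score": 0.9},
    {"id": 2, "group": "A", "score": 0.8},
    {"id": 3, "group": "B", "score": 0.5},
    {"id": 4, "group": "A", "score": 0.7},
]

# With a cap of 2 per group, applicant 3 is approved even though
# applicant 4 scores higher — fairness put ahead of raw score.
print(approve(applicants, quota_per_group=2))
```

The interesting part is not the code but the choice it encodes: the quota is a human value judgment written down explicitly, where it can be inspected and debated, rather than buried implicitly in the scoring model.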