Feb 28, 2017

Humanity's Upgrade To Irrelevance

From "we think, therefore we are" to "we are the data we generate"? JL

Olivia Solon comments in Wired:

Humanity has had astonishing success alleviating famine, disease, and war. (It might not always seem that way, but it’s true.) Now, Homo sapiens is on the brink of an upgrade—sort of. As we become increasingly skilled at deploying artificial intelligence, big data, and algorithms to do everything from easing traffic to diagnosing cancer, we’ll transform into a new breed of superhuman, says historian and best-selling author Yuval Harari in his new book, Homo Deus: A Brief History of Tomorrow. Which is great, except that we might also become so dependent on these tools that our species will become irrelevant—our value determined only by the data we generate. WIRED spoke to Harari about this coming life in the Matrix just before he left for his annual 45-day tech-free meditation retreat.

Wired: In your book you predict the emergence of two completely new religions. What are they?

Harari: Techno-humanism aims to amplify the power of humans, creating cyborgs and connecting humans to computers, but it still sees human interests and desires as the highest authority in the universe.
Dataism is a new ethical system that says, yes, humans were special and important because up until now they were the most sophisticated data processing system in the universe, but this is no longer the case. The tipping point is when you have an external algorithm that understands you—your feelings, emotions, choices, desires—better than you understand them yourself. That’s the point when there is the switch from amplifying humans to making them redundant.

How so?

Take Google Maps or Waze. On the one hand they amplify human ability—you are able to reach your destination faster and more easily. But at the same time you are shifting the authority to the algorithm and losing your ability to find your own way.

What does this mean for Homo sapiens?

We become less important, perhaps irrelevant. In the humanist age the value of an experience came from within yourself. In a Dataist age, meaning is generated by the external data processing system. You go to a Japanese restaurant and have a wonderful dish, and the thing to do is take a picture with your phone, put it on Facebook, and see how many likes you get. If you don’t share your experiences, they don’t become part of the data processing system, and they have no meaning.

Does the shift toward Dataism matter for politics?

In the 20th century, politics was a battleground between grand visions about the future of humankind. The visions were grounded in the Industrial Revolution and the big question was what to do with new technologies like electricity, trains, and radio. Whatever you say about figures like Lenin or Hitler, you cannot accuse them of lacking vision. Today, nobody in politics has any kind of vision; technology is moving too fast, and the political system is unable to make sense of it.

Who can make sense of it?

The only place you hear broad visions about the future of humankind is in Silicon Valley, from Elon Musk or Mark Zuckerberg. Very few other people have competing visions. The political system is not doing its job.

So tech companies become our new rulers, even gods?

When you talk about God and religion, in the end it’s all a question of authority. What is the highest source of authority that you turn to when you have a problem in your life? A thousand years ago you’d turn to the church. Today, we expect algorithms to provide us with the answer—who to date, where to live, how to deal with an economic problem. So more and more authority is shifting to these corporations.

Can we opt out?

The simplest answer is no. It will become extremely difficult to unplug, and it has to do with health care, which will increasingly rely on internet-connected sensors. People will be willing to give up privacy in exchange for medical services that tell you the first day cancer cells start spreading in your body. So we might reach a point when it will be impossible to disconnect.

What can we be hopeful about?

There’s a lot to be hopeful about. In 20 to 30 years the hundreds of millions of people who have no health care will have access to AI doctors on their mobile phones offering better care than anyone gets now. Driverless cars won’t eliminate accidents, but they will drastically reduce them.

Phew … so we’re not doomed?

Humanity has proven its ability to rise to the challenge posed by dangerous new technologies—in the 1950s and ’60s many people expected the Cold War to end in a nuclear holocaust. That didn’t happen. After thousands of years in which war seemed to be an inevitable part of human nature, we changed how international politics functioned. I hope we’ll also be able to rise to the challenge of technologies like AI and genetic engineering, but we don’t have any room for error.

As a Partner and Co-Founder of Predictiv and PredictivAsia, Jon specializes in management performance and organizational effectiveness for both domestic and international clients. He is an editor and author whose works include Invisible Advantage: How Intangibles are Driving Business Performance.