Homo Deus: A Brief History of Tomorrow

From the author of the international bestseller Sapiens: A Brief History of Humankind comes an extraordinary new book that explores the future of the human species.

In Homo Deus, Yuval Noah Harari envisions a not-too-distant world in which we face a new set of challenges, examining our future with his trademark blend of science, history, philosophy and every discipline in between.

Homo Deus explores the projects, dreams and nightmares that will shape the twenty-first century – from overcoming death to creating artificial life. It asks the fundamental questions: Where do we go from here? And how will we protect this fragile world from our own destructive powers? This is the next stage of evolution. This is Homo Deus.

War is obsolete: you are more likely to commit suicide than be killed in conflict.

Famine is disappearing: you are at more risk of obesity than starvation.

Death is just a technical problem: equality is out – but immortality is in.

Contents
Cover
Other Titles
Title Page
Copyright
Dedication
Contents
1: The New Human Agenda
Part I: Homo Sapiens Conquers the World
2: The Anthropocene
3: The Human Spark
Part II: Homo Sapiens Gives Meaning to the World
4: The Storytellers
5: The Odd Couple
6: The Modern Covenant
7: The Humanist Revolution
Part III: Homo Sapiens Loses Control
8: The Time Bomb in the Laboratory
9: The Great Decoupling
10: The Ocean of Consciousness
11: The Data Religion
Notes
Acknowledgements
Image credits
5
The Odd Couple
Stories serve as the foundations and pillars of human societies. As history unfolded, stories about gods, nations and corporations grew so powerful that they began to dominate objective reality. Believing in the great god Sobek, the Mandate of Heaven or the Bible enabled people to build Lake Fayum, the Great Wall of China and Chartres Cathedral. Unfortunately, blind faith in these stories meant that human efforts frequently focused on increasing the glory of fictional entities such as gods and nations, instead of bettering the lives of real sentient beings.
Does this analysis still hold true today? At first sight, it seems that modern society is very different from the kingdoms of ancient Egypt or medieval China. Hasn’t the rise of modern science changed the basic rules of the human game? Wouldn’t it be true to say that despite the ongoing importance of traditional myths, modern social systems rely increasingly on objective scientific theories such as the theory of evolution, which simply did not exist in ancient Egypt or medieval China?
We could of course argue that scientific theories are a new kind of myth, and that our belief in science is no different from the ancient Egyptians’ belief in the great god Sobek. Yet the comparison doesn’t hold water. Sobek existed only in the collective imagination of his devotees. Praying to Sobek helped cement the Egyptian social system, thereby enabling people to build dams and canals that prevented floods and droughts. Yet the prayers themselves didn’t raise or lower the Nile’s water level by a millimetre. In contrast, scientific theories are not just a way to bind people together. It is often said that God helps those who help themselves. This is a roundabout way of saying that God doesn’t exist, but if our belief in Him inspires us to do something ourselves – it helps. Antibiotics, unlike God, help even those who don’t help themselves. They cure infections whether you believe in them or not.
Consequently, the modern world is very different from the premodern world. Egyptian pharaohs and Chinese emperors failed to overcome famine, plague and war despite millennia of effort. Modern societies managed to do it within a few centuries. Is this not the fruit of abandoning intersubjective myths in favour of objective scientific knowledge? And can’t we expect this process to accelerate in the coming decades? As technology allows us to upgrade humans, overcome old age and find the key to happiness, people will care less about fictional gods, nations and corporations, and focus instead on deciphering the physical and biological reality.
In truth, however, things are far more complicated. Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before. Thanks to computers and bioengineering, the difference between fiction and reality will blur, as people reshape reality to match their pet fictions.
The priests of Sobek imagined the existence of divine crocodiles, while pharaoh dreamt about immortality. In reality, the sacred crocodile was a very ordinary swamp reptile dressed in golden fineries, and pharaoh was as mortal as the simplest of peasants. After death, his corpse was mummified using preservative balms and scented perfumes, but it was as lifeless as one can get. In contrast, twenty-first-century scientists might be able to really engineer super-crocodiles, and to provide the human elite with eternal youth here on earth.
Consequently the rise of science will make at least some myths and religions mightier than ever. To understand why, and to face the challenges of the twenty-first century, we should therefore revisit one of the most nagging questions of all: how does modern science relate to religion? It seems that people have already said a million times everything there is to say about this question. Yet in practice, science and religion are like a husband and wife who after 500 years of marriage counselling still don’t know each other. He still dreams about Cinderella and she keeps pining for Prince Charming, while they argue about whose turn it is to take out the rubbish.
Germs and Demons
Most of the misunderstandings regarding science and religion result from faulty definitions of religion. All too often, people confuse religion with superstition, spirituality, belief in supernatural powers or belief in gods. Religion is none of these things. Religion cannot be equated with superstition, because most people are unlikely to call their cherished beliefs ‘superstitions’. We always believe in ‘the truth’. It’s only other people who believe in superstitions.
Similarly, few people put their faith in supernatural powers. For those who believe in demons, demons aren’t supernatural. They are an integral part of nature, just like porcupines, scorpions and germs. Modern physicians blame disease on invisible germs, and voodoo priests blame disease on invisible demons. There’s nothing supernatural about it: you make some demon angry, so the demon enters your body and causes you pain. What could be more natural than that? Only those who don’t believe in demons think of them as standing apart from the natural order of things.
Equating religion with faith in supernatural powers implies that you can understand all known natural phenomena without religion, which is just an optional supplement. Having understood perfectly well the whole of nature, you can now choose whether to add some ‘super-natural’ religious dogma or not. However, most religions argue that you simply cannot understand the world without them. You will never comprehend the true reason for disease, drought or earthquakes if you do not take their dogma into account.
Defining religion as ‘belief in gods’ is also problematic. We tend to say that a devout Christian is religious because she believes in God, whereas a fervent communist isn’t religious, because communism has no gods. However, religion is created by humans rather than by gods, and it is defined by its social function rather than by the existence of deities. Religion is anything that confers superhuman legitimacy on human social structures. It legitimises human norms and values by arguing that they reflect superhuman laws.
Religion asserts that we humans are subject to a system of moral laws that we did not invent and that we cannot change. A devout Jew would say that this is the system of moral laws created by God and revealed in the Bible. A Hindu would say that Brahma, Vishnu and Shiva created the laws, which were revealed to us humans in the Vedas. Other religions, from Buddhism and Daoism to Nazism, communism and liberalism, argue that the superhuman laws are natural laws, and not the creation of this or that god. Of course, each believes in a different set of natural laws discovered and revealed by different seers and prophets, from Buddha and Laozi to Hitler and Lenin.
A Jewish boy comes to his father and asks, ‘Dad, why shouldn’t we eat pork?’ The father strokes his long white beard thoughtfully and answers, ‘Well, Yankele, that’s how the world works. You are still young and you don’t understand, but if we eat pork, God will punish us and we will come to a bad end. It isn’t my idea. It’s not even the rabbi’s idea. If the rabbi had created the world, maybe he would have created a world in which pork was perfectly kosher. But the rabbi didn’t create the world – God did it. And God said, I don’t know why, that we shouldn’t eat pork. So we shouldn’t. Capeesh?’
In 1943 a German boy comes to his father, a senior SS officer, and asks, ‘Dad, why are we killing the Jews?’ The father puts on his shiny leather boots, and meanwhile explains, ‘Well, Fritz, that’s how the world works. You are still young and you don’t understand, but if we allow the Jews to live, they will cause the degeneration and extinction of humankind. It’s not my idea, and it’s not even the Führer’s idea. If Hitler had created the world, maybe he would have created a world in which the laws of natural selection did not apply, and Jews and Aryans could all live together in perfect harmony. But Hitler didn’t create the world. He just managed to decipher the laws of nature, and then instructed us how to live in line with them. If we disobey these laws, we will come to a bad end. Is that clear?!’
In 2016 a British boy comes to his father, a liberal MP, and asks, ‘Dad, why should we care about the human rights of Muslims in the Middle East?’ The father puts down his cup of tea, thinks for a moment, and says, ‘Well, Duncan, that’s how the world works. You are still young and you don’t understand, but all humans, even Muslims in the Middle East, have the same nature and therefore enjoy the same natural rights. This isn’t my idea, nor a decision of Parliament. If Parliament had created the world, universal human rights might well have been buried in some subcommittee along with all that quantum physics stuff. But Parliament didn’t create the world, it just tries to make sense of it, and we must respect the natural rights even of Muslims in the Middle East, or very soon our own rights will also be violated, and we will come to a bad end. Now off you go.’
Liberals, communists and followers of other modern creeds dislike describing their own system as a ‘religion’, because they identify religion with superstitions and supernatural powers. If you tell communists or liberals that they are religious, they think you accuse them of blindly believing in groundless pipe dreams. In fact, it means only that they believe in some system of moral laws that wasn’t invented by humans, but which humans must nevertheless obey. As far as we know, all human societies believe in this. Every society tells its members that they must obey some superhuman moral law, and that breaking this law will result in catastrophe.
Religions differ of course in the details of their stories, their concrete commandments, and the rewards and punishments they promise. Thus in medieval Europe the Catholic Church argued that God doesn’t like rich people. Jesus said that it is harder for a rich man to pass through the gates of heaven than for a camel to pass through the eye of a needle, and the Church encouraged the rich to give lots of alms, threatening that misers will burn in hell. Modern communism also dislikes rich people, but it threatens them with class conflict here and now, rather than with burning sulphur after death.
The communist laws of history are similar to the commandments of the Christian God, inasmuch as they are superhuman forces that humans cannot change at will. People can decide tomorrow morning to cancel the offside rule in football, because we invented that law, and we are free to change it. However, at least according to Marx, we cannot change the laws of history. No matter what the capitalists do, as long as they continue to accumulate private property they are bound to create class conflict and they are destined to be defeated by the rising proletariat.
If you happen to be a communist yourself you might argue that communism and Christianity are nevertheless very different, because communism is right, whereas Christianity is wrong. Class conflict really is inherent in the capitalist system, whereas rich people don’t suffer eternal tortures in hell after they die. Yet even if that’s the case, it doesn’t mean communism is not a religion. Rather, it means that communism is the one true religion. Followers of every religion are convinced that theirs alone is true. Perhaps the followers of one religion are right.
If You Meet the Buddha
The assertion that religion is a tool for preserving social order and for organising large-scale cooperation may vex many people for whom it represents first and foremost a spiritual path. However, just as the gap between religion and science is smaller than we commonly think, so the gap between religion and spirituality is much bigger. Religion is a deal, whereas spirituality is a journey.
Religion gives a complete description of the world, and offers us a well-defined contract with predetermined goals. ‘God exists. He told us to behave in certain ways. If you obey God, you’ll be admitted to heaven. If you disobey Him, you’ll burn in hell.’ The very clarity of this deal allows society to define common norms and values that regulate human behaviour.
Spiritual journeys are nothing like that. They usually take people in mysterious ways towards unknown destinations. The quest usually begins with some big question, such as who am I? What is the meaning of life? What is good? Whereas many people just accept the ready-made answers provided by the powers that be, spiritual seekers are not so easily satisfied. They are determined to follow the big question wherever it leads, and not just to places you know well or wish to visit. Thus for most people, academic studies are a deal rather than a spiritual journey, because they take us to a predetermined goal approved by our elders, governments and banks. ‘I’ll study for three years, pass the exams, get my BA certificate and secure a well-paid job.’ Academic studies might be transformed into a spiritual journey if the big questions you encounter on the way deflect you towards unexpected destinations, of which you could hardly even conceive at first. For example, a student might begin to study economics in order to secure a job in Wall Street. However, if what she learns somehow causes her to end up in a Hindu ashram or helping HIV patients in Zimbabwe, then we might call that a spiritual journey.
Why label such a voyage ‘spiritual’? This is a legacy from ancient dualist religions that believed in the existence of two gods, one good and one evil. According to dualism, the good god created pure and everlasting souls that lived in a wonderful world of spirit. However, the bad god – sometimes named Satan – created another world, made of matter. Satan didn’t know how to make his creation last, hence in the world of matter everything rots and disintegrates. In order to breathe life into his defective creation, Satan tempted souls from the pure world of spirit, and locked them up inside material bodies. That’s what humans are – a good spiritual soul trapped inside an evil material body. Since the soul’s prison – the body – decays and eventually dies, Satan ceaselessly tempts the soul with bodily delights, and above all with food, sex and power. When the body disintegrates and the soul has a chance to escape back to the spiritual world, its craving for bodily pleasures draws it back inside some new material body. The soul thus transmigrates from body to body, wasting its days in pursuit of food, sex and power.
Dualism instructs people to break these material shackles and undertake a journey back to the spiritual world, which is totally unfamiliar to us, but is our true home. During this quest we must reject all material temptations and deals. Due to this dualist legacy, every journey on which we doubt the conventions and deals of the mundane world and walk towards an unknown destination is called ‘a spiritual journey’.
Such journeys are fundamentally different from religions, because religions seek to cement the worldly order whereas spirituality seeks to escape it. Often enough, the most important demand from spiritual wanderers is to challenge the beliefs and conventions of dominant religions. In Zen Buddhism it is said that ‘If you meet the Buddha on the road, kill him.’ Which means that if while walking on the spiritual path you encounter the rigid ideas and fixed laws of institutionalised Buddhism, you must free yourself from them too.
For religions, spirituality is a dangerous threat. Religions typically strive to rein in the spiritual quests of their followers, and many religious systems were challenged not by laypeople preoccupied with food, sex and power, but rather by spiritual truth-seekers who wanted more than platitudes. Thus the Protestant revolt against the authority of the Catholic Church was ignited not by hedonistic atheists but rather by a devout and ascetic monk, Martin Luther. Luther wanted answers to the existential questions of life, and refused to settle for the rites, rituals and deals offered by the Church.
In Luther’s day, the Church promised its followers very enticing deals. If you sinned, and feared eternal damnation in the afterlife, all you needed to do was buy an indulgence. In the early sixteenth century the Church employed professional ‘salvation peddlers’ who wandered the towns and villages of Europe and sold indulgences for fixed prices. You want an entry visa to heaven? Pay ten gold coins. You want Grandpa Heinz and Grandma Gertrud to join you there? No problem, but it will cost you thirty coins. The most famous of these peddlers, the Dominican friar Johannes Tetzel, allegedly said that the moment the coin clinks in the money chest, the soul flies out of purgatory to heaven.1
The more Luther thought about it, the more he doubted this deal, and the Church that offered it. You cannot just buy your way to salvation. The Pope couldn’t possibly have the authority to forgive people their sins, and open the gates of heaven. According to Protestant tradition, on 31 October 1517 Luther walked to the All Saints’ Church in Wittenberg, carrying a lengthy document, a hammer and some nails. The document listed ninety-five theses against contemporary religious practices, including against the selling of indulgences. Luther nailed it to the church door, sparking the Protestant Reformation, which called upon any human who cared about salvation to rebel against the Pope’s authority and search for alternative routes to heaven.
[image: ]
Credit 1.25
25. The Pope selling indulgences for money (from a Protestant pamphlet).
From a historical perspective, the spiritual journey is always tragic, for it is a lonely path fit for individuals rather than for entire societies. Human cooperation requires firm answers rather than just questions, and those who rail against stultified religious structures end up forging new structures in their place. It happened to the dualists, whose spiritual journeys became religious establishments. It happened to Martin Luther, who after challenging the laws, institutions and rituals of the Catholic Church found himself writing new law books, founding new institutions and inventing new ceremonies. It happened even to Buddha and Jesus. In their uncompromising quest for the truth they subverted the laws, rituals and structures of traditional Hinduism and Judaism. But eventually more laws, more rituals and more structures were created in their name than in the name of any other person in history.
Counterfeiting God
Now that we have a better understanding of religion, we can go back to examining the relationship between religion and science. There are two extreme interpretations of this relationship. One view says that science and religion are sworn enemies, and that modern history was shaped by the life-and-death struggle of scientific knowledge against religious superstition. With time, the light of science dispelled the darkness of religion, and the world became increasingly secular, rational and prosperous. However, though some scientific findings certainly undermine religious dogmas, this is not inevitable. For example, Muslim dogma holds that Islam was founded by the prophet Muhammad in seventh-century Arabia, and there is ample scientific evidence supporting this.
More importantly, science always needs religious assistance in order to create viable human institutions. Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave. Science tells us that humans cannot survive without oxygen. However, is it okay to execute criminals by asphyxiation? Science doesn’t know how to answer such a question. Only religions provide us with the necessary guidance.
Hence every practical project scientists undertake also relies on religious insights. Take, for example, the building of the Three Gorges Dam over the Yangtze River. When the Chinese government decided to build the dam in 1992, physicists could calculate what pressures the dam would have to withstand, economists could forecast how much money it would probably cost, while electrical engineers could predict how much electricity it would produce. However, the government needed to take additional factors into account. Building the dam flooded huge territories containing many villages and towns, thousands of archaeological sites, and unique landscapes and habitats. More than 1 million people were displaced and hundreds of species were endangered. It seems that the dam directly caused the extinction of the Chinese river dolphin. No matter what you personally think about the Three Gorges Dam, it is clear that building it was an ethical rather than a purely scientific issue. No physics experiment, no economic model and no mathematical equation can determine whether generating thousands of megawatts and making billions of yuan is more valuable than saving an ancient pagoda or the Chinese river dolphin. Consequently, China cannot function on the basis of scientific theories alone. It requires some religion or ideology, too.
Some jump to the opposite extreme, and say that science and religion are completely separate kingdoms. Science studies facts, religion speaks about values, and never the twain shall meet. Religion has nothing to say about scientific facts, and science should keep its mouth shut concerning religious convictions. If the Pope believes that human life is sacred, and abortion is therefore a sin, biologists can neither prove nor refute this claim. As a private individual, each biologist is welcome to argue with the Pope. But as a scientist, the biologist cannot enter the fray.
This approach may sound sensible, but it misunderstands religion. Though science indeed deals only with facts, religion never confines itself to ethical judgements. Religion cannot provide us with any practical guidance unless it makes some factual claims too, and here it may well collide with science. The most important segments of many religious dogmas are not their ethical principles, but rather factual statements such as ‘God exists’, ‘the soul is punished for its sins in the afterlife’, ‘the Bible was written by a deity rather than by humans’, ‘the Pope is never wrong’. These are all factual claims. Many of the most heated religious debates, and many of the conflicts between science and religion, involve such factual claims rather than ethical judgements.
Take abortion, for example. Devout Christians often oppose abortion, whereas many liberals support it. The main bone of contention is factual rather than ethical. Both Christians and liberals believe that human life is sacred, and that murder is a heinous crime. But they disagree about certain biological facts: does human life begin at the moment of conception, at the moment of birth or at some middle point? Indeed, some human cultures maintain that life doesn’t begin even at birth. According to the !Kung of the Kalahari Desert and to various Inuit groups in the Arctic, human life begins only after the person is given a name. When an infant is born people wait for some time before naming it. If they decide not to keep the baby (either because it suffers from some deformity or because of economic difficulties), they kill it. Provided they do so before the naming ceremony, it is not considered murder.2 People from such cultures might well agree with liberals and Christians that human life is sacred and that murder is a terrible crime, yet they support infanticide.
When religions advertise themselves, they tend to emphasise their beautiful values. But God often hides in the small print of factual statements. The Catholic religion markets itself as the religion of universal love and compassion. How wonderful! Who can object to that? Why, then, are not all humans Catholic? Because when you read the small print, you discover that Catholicism also demands blind obedience to a pope ‘who never makes mistakes’ even when he orders us to go on crusades and burn heretics at the stake. Such practical instructions are not deduced solely from ethical judgements. Rather, they result from conflating ethical judgements with factual statements.
When we leave the ethereal sphere of philosophy and observe historical realities, we find that religious stories almost always include three parts:
1. Ethical judgements, such as ‘human life is sacred’.
2. Factual statements, such as ‘human life begins at the moment of conception’.
3. A conflation of the ethical judgements with the factual statements, resulting in practical guidelines such as ‘you should never allow abortion, even a single day after conception’.
Science has no authority or ability to refute or corroborate the ethical judgements religions make. But scientists do have a lot to say about religious factual statements. For example, biologists are more qualified than priests to answer factual questions such as ‘Do human fetuses have a nervous system one week after conception? Can they feel pain?’
To make things clearer, let us examine in depth a real historical example that you rarely hear about in religious commercials, but that had a huge social and political impact in its time. In medieval Europe, the popes enjoyed far-reaching political authority. Whenever a conflict erupted somewhere in Europe, they claimed the authority to decide the issue. To establish their claim to authority, they repeatedly reminded Europeans of the Donation of Constantine. According to this story, on 30 March 315 the Roman emperor Constantine signed an official decree granting Pope Sylvester I and his heirs perpetual control of the western part of the Roman Empire. The popes kept this precious document in their archive, and used it as a powerful propaganda tool whenever they faced opposition from ambitious princes, quarrelsome cities or rebellious peasants.
People in medieval Europe had great respect for ancient imperial decrees. They strongly believed that kings and emperors were God’s representatives, and they also believed that the older the document, the more authority it carried. Constantine in particular was revered, because he turned the Roman Empire from a pagan realm into a Christian empire. In a clash between the desires of some present-day city council and a decree issued by the great Constantine himself, it was obvious that people ought to obey the ancient document. Hence whenever the Pope faced political opposition, he waved the Donation of Constantine, demanding obedience. Not that it always worked. But the Donation of Constantine was an important cornerstone of papal propaganda and of the medieval political order.
When we examine the Donation of Constantine closely, we find that this story is composed of three distinct parts:
Ethical judgement
People ought to respect ancient imperial decrees more than present-day popular opinions.
Factual statement
On 30 March 315, Emperor Constantine granted the popes dominion over Europe.
Practical guideline
Europeans in 1315 ought to obey the Pope’s commands.
The ethical authority of ancient imperial decrees is far from self-evident. Most twenty-first-century Europeans think that the wishes of present-day citizens trump the diktats of long-dead kings. However, science cannot join this ethical debate, because no experiment or equation can decide the matter. If a modern-day scientist time-travelled to medieval Europe, she couldn’t prove to our ancestors that the decrees of ancient emperors are irrelevant to contemporary political disputes.
Yet the story of Constantine’s Donation was based not just on ethical judgements. It also involved some very concrete factual statements, which science is highly qualified to either verify or falsify. In 1441 Lorenzo Valla – a Catholic priest and a pioneer linguist – published a scientific study proving that Constantine’s Donation was forged. Valla analysed the style and grammar of the document, and the various words and terms it contained. He showed that the document included words which were unknown in fourth-century Latin, and that it was most probably forged about 400 years after Constantine’s death. Moreover, the date appearing on the document is ‘30 March, in the year Constantine was consul for the fourth time, and Gallicanus was consul for the first time’. In the Roman Empire, two consuls were elected each year, and it was customary to date documents by their consulate years. Unfortunately, Constantine’s fourth consulate was in 315, whereas Gallicanus was elected consul for the first time only in 317. If this all-important document was indeed composed in Constantine’s days, it would never have contained such a blatant mistake. It is as if Thomas Jefferson and his colleagues had dated the American Declaration of Independence 34 July 1776.
Today all historians agree that the Donation of Constantine was forged in the papal court sometime in the eighth century. Even though Valla never disputed the moral authority of ancient imperial decrees, his scientific study did undermine the practical guideline that Europeans must obey the Pope.3
—
On 20 December 2013 the Ugandan parliament passed the Anti-Homosexuality Act, which criminalised homosexual activities, penalising some activities by life imprisonment. It was inspired and supported by evangelical Christian groups, which maintain that God prohibits homosexuality. As proof, they quote Leviticus 18:22 (‘Do not have sexual relations with a man as one does with a woman; that is detestable’) and Leviticus 20:13 (‘If a man has sexual relations with a man as one does with a woman, both of them have done what is detestable. They are to be put to death; their blood will be on their own heads’). In previous centuries, the same religious story was responsible for tormenting millions of people all over the world. This story can be briefly summarised as follows:
Ethical judgement
Humans ought to obey God’s commands.
Factual statement
About 3,000 years ago God commanded humans to avoid homosexual activities.
Practical guideline
People should avoid homosexual activities.
Is the story true? Scientists cannot argue with the judgement that humans ought to obey God. Personally, you may dispute it. You may believe that human rights trump divine authority, and if God orders us to violate human rights, we shouldn’t listen to Him. Yet there is no scientific experiment that can decide this issue.
In contrast, science has a lot to say about the factual statement that 3,000 years ago the Creator of the Universe commanded members of the Homo sapiens species to abstain from boy-on-boy action. How do we know this statement is true? Examining the relevant literature reveals that though this statement is repeated in millions of books, articles and Internet sites, they all rely on a single source: the Bible. If so, a scientist would ask, who composed the Bible, and when? Note that this is a factual question, not a question of values. Devout Jews and Christians say that at least the book of Leviticus was dictated by God to Moses on Mount Sinai, and from that moment onwards not a single letter was either added or deleted from it. ‘But,’ the scientist would insist, ‘how can we be sure of that? After all, the Pope argued that the Donation of Constantine was composed by Constantine himself in the fourth century, when in fact it was forged 400 years later by the Pope’s own clerks.’
We can now use an entire arsenal of scientific methods to determine who composed the Bible, and when. Scientists have been doing exactly that for more than a century, and if you are interested, you can read whole books about their findings. To cut a long story short, most peer-reviewed scientific studies agree that the Bible is a collection of numerous different texts composed by different people in different times, and that these texts were not assembled into a single holy book until long after biblical times. For example, whereas King David probably lived around 1000 BC, it is commonly accepted that the book of Deuteronomy was composed in the court of King Josiah of Judah, sometime around 620 BC, as part of a propaganda campaign aimed to strengthen Josiah’s authority. Leviticus was compiled at an even later date, no earlier than 500 BC.
As for the idea that the ancient Jews carefully preserved the biblical text, without adding or subtracting anything, scientists point out that biblical Judaism was not a scripture-based religion at all. Rather, it was a typical Iron Age cult, similar to many of its Middle Eastern neighbours. It had no synagogues, yeshivas, rabbis – or even a bible. Instead it had elaborate temple rituals, most of which involved sacrificing animals to a jealous sky god so that he would bless his people with seasonal rains and military victories. Its religious elite consisted of priestly families, who owed everything to birth, and nothing to intellectual prowess. The mostly illiterate priests were busy with the temple ceremonies, and had little time for writing or studying any scriptures.
During the Second Temple period a rival religious elite was formed. Due partly to Persian and Greek influences, Jewish scholars who wrote and interpreted texts gained increasing prominence. These scholars eventually came to be known as rabbis, and the texts they compiled were christened ‘the Bible’. Rabbinical authority rested on individual intellectual abilities rather than on birth. The clash between the new literate elite and the old priestly families was inevitable. Luckily for the rabbis, the Romans torched Jerusalem and its temple while suppressing the Great Jewish Revolt (AD 70). With the temple in ruins, the priestly families lost their religious authority, their economic power base and their very raison d’être. Traditional Judaism – a Judaism of temples, priests and head-splitting warriors – disappeared. Its place was taken by a new Judaism of books, rabbis and hair-splitting scholars. The scholars’ main forte was interpretation. They used this ability not only to explain how an almighty God allowed His temple to be destroyed, but also to bridge the immense gaps between the old Judaism described in biblical stories and the very different Judaism they created.4
Hence according to our best scientific knowledge, the Leviticus injunctions against homosexuality reflect nothing grander than the biases of a few priests and scholars in ancient Jerusalem. Though science cannot decide whether people ought to obey God’s commands, it has many relevant things to say about the provenance of the Bible. If Ugandan politicians think that the power that created the cosmos, the galaxies and the black holes becomes terribly upset whenever two Homo sapiens males have a bit of fun together, then science can help disabuse them of this rather bizarre notion.
Holy Dogma
In truth, it is not always easy to separate ethical judgements from factual statements. Religions have the nagging tendency to turn factual statements into ethical judgements, thereby creating terrible confusion and obfuscating what should have been relatively simple debates. Thus the factual statement ‘God wrote the Bible’ all too often mutates into the ethical injunction ‘you ought to believe that God wrote the Bible’. Merely believing in this factual statement becomes a virtue, whereas doubting it becomes a terrible sin.
Conversely, ethical judgements often hide within them factual statements that people don’t bother to mention, because they think they have been proven beyond doubt. Thus the ethical judgement ‘human life is sacred’ (which science cannot test) may shroud the factual statement ‘every human has an eternal soul’ (which is open for scientific debate). Similarly, when American nationalists proclaim that ‘the American nation is sacred’, this seemingly ethical judgement is in fact predicated on factual statements such as ‘the USA has spearheaded most of the moral, scientific and economic advances of the last few centuries’. Whereas it is impossible to scientifically scrutinise the claim that the American nation is sacred, once we unpack this judgement we may well examine scientifically whether the USA has indeed been responsible for a disproportionate share of moral, scientific and economic breakthroughs.
This has led some philosophers, such as Sam Harris, to argue that science can always resolve ethical dilemmas, because human values always hide within them some factual statements. Harris thinks all humans share a single supreme value – minimising suffering and maximising happiness – and all ethical debates are factual arguments concerning the most efficient way to maximise happiness.5 Islamic fundamentalists want to reach heaven in order to be happy, liberals believe that increasing human liberty maximises happiness, and German nationalists think that everyone would be better off if they only allowed Berlin to run this planet. According to Harris, Islamists, liberals and nationalists have no ethical dispute; they have a factual disagreement about how best to realise their common goal.
Yet even if Harris is right, and even if all humans cherish happiness, in practice it would be extremely difficult to use this insight to decide ethical disputes, particularly because we have no scientific definition or measurement of happiness. Consider the case of the Three Gorges Dam. Even if we agree that the ultimate aim of the project is to make the world a happier place, how can we tell whether generating cheap electricity contributes more to global happiness than protecting traditional lifestyles or saving the rare Chinese river dolphin? As long as we haven’t deciphered the mysteries of consciousness, we cannot develop a universal measurement for happiness and suffering, and we don’t know how to compare the happiness and suffering of different individuals, let alone different species. How many units of happiness are generated when a billion Chinese enjoy cheaper electricity? How many units of misery are produced when an entire dolphin species becomes extinct? Indeed, are happiness and misery mathematical entities that can be added or subtracted in the first place? Eating ice cream is enjoyable. Finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?
Consequently, although science has much more to contribute to ethical debates than we commonly think, there is a line it cannot cross, at least not yet. Without the guiding hand of some religion, it is impossible to maintain large-scale social orders. Even universities and laboratories need religious backing. Religion provides the ethical justification for scientific research, and in exchange gets to influence the scientific agenda and the uses of scientific discoveries. Hence you cannot understand the history of science without taking religious beliefs into account. Scientists seldom dwell on this fact, but the Scientific Revolution itself began in one of the most dogmatic, intolerant and religious societies in history.
The Witch Hunt
We often associate science with the values of secularism and tolerance. If so, early modern Europe is the last place you would have expected a scientific revolution. Europe in the days of Columbus, Copernicus and Newton had the highest concentration of religious fanatics in the world, and the lowest level of tolerance. The luminaries of the Scientific Revolution lived in a society that expelled Jews and Muslims, burned heretics wholesale, saw a witch in every cat-loving elderly lady and started a new religious war every full moon.
If you travelled to Cairo or Istanbul around 1600, you would find there a multicultural and tolerant metropolis, where Sunnis, Shiites, Orthodox Christians, Catholics, Armenians, Copts, Jews and even the occasional Hindu lived side by side in relative harmony. Though they had their share of disagreements and riots, and though the Ottoman Empire routinely discriminated against people on religious grounds, it was a liberal paradise compared with Europe. If you then travelled to contemporary Paris or London, you would find cities awash with religious extremism, in which only those belonging to the dominant sect could live. In London they killed Catholics, in Paris they killed Protestants, the Jews had long been driven out, and nobody in his right mind would dream of letting any Muslims in. And yet, the Scientific Revolution began in London and Paris rather than in Cairo and Istanbul.
It is customary to tell the history of modernity as a struggle between science and religion. In theory, both science and religion are interested above all in the truth, and because each upholds a different truth, they are doomed to clash. In fact, neither science nor religion cares that much about the truth, hence they can easily compromise, coexist and even cooperate.
Religion is interested above all in order. It aims to create and maintain the social structure. Science is interested above all in power. It aims to acquire the power to cure diseases, fight wars and produce food. As individuals, scientists and priests may give immense importance to the truth; but as collective institutions, science and religion prefer order and power over truth. They can therefore make good bedfellows. The uncompromising quest for truth is a spiritual journey, which can seldom remain within the confines of either religious or scientific establishments.
It would accordingly be far more correct to view modern history as the process of formulating a deal between science and one particular religion – namely, humanism. Modern society believes in humanist dogmas, and uses science not in order to question these dogmas, but rather in order to implement them. In the twenty-first century the humanist dogmas are unlikely to be replaced by pure scientific theories. However, the covenant linking science and humanism may well crumble, and give way to a very different kind of deal, between science and some new post-humanist religion. We will dedicate the next two chapters to understanding the modern covenant between science and humanism. The third and final part of the book will then explain why this covenant is disintegrating, and what new deal might replace it.
PART III
Homo Sapiens Loses Control
Can humans go on running the world and giving it meaning?
How do biotechnology and artificial intelligence threaten humanism?
Who might inherit humankind, and what new religion might replace humanism?
[image: title image]
PART I
Homo sapiens Conquers the World
What is the difference between humans and all other animals?
How did our species conquer the world?
Is Homo sapiens a superior life form, or just the local bully?
1
The New Human Agenda
At the dawn of the third millennium, humanity wakes up, stretching its limbs and rubbing its eyes. Remnants of some awful nightmare are still drifting across its mind. ‘There was something with barbed wire, and huge mushroom clouds. Oh well, it was just a bad dream.’ Going to the bathroom, humanity washes its face, examines its wrinkles in the mirror, makes a cup of coffee and opens the diary. ‘Let’s see what’s on the agenda today.’
For thousands of years the answer to this question remained unchanged. The same three problems preoccupied the people of twentieth-century China, of medieval India and of ancient Egypt. Famine, plague and war were always at the top of the list. For generation after generation humans have prayed to every god, angel and saint, and have invented countless tools, institutions and social systems – but they continued to die in their millions from starvation, epidemics and violence. Many thinkers and prophets concluded that famine, plague and war must be an integral part of God’s cosmic plan or of our imperfect nature, and nothing short of the end of time would free us from them.
Yet at the dawn of the third millennium, humanity wakes up to an amazing realisation. Most people rarely think about it, but in the last few decades we have managed to rein in famine, plague and war. Of course, these problems have not been completely solved, but they have been transformed from incomprehensible and uncontrollable forces of nature into manageable challenges. We don’t need to pray to any god or saint to rescue us from them. We know quite well what needs to be done in order to prevent famine, plague and war – and we usually succeed in doing it.
True, there are still notable failures; but when faced with such failures we no longer shrug our shoulders and say, ‘Well, that’s the way things work in our imperfect world’ or ‘God’s will be done’. Rather, when famine, plague or war break out of our control, we feel that somebody must have screwed up, we set up a commission of inquiry, and promise ourselves that next time we’ll do better. And it actually works. Such calamities indeed happen less and less often. For the first time in history, more people die today from eating too much than from eating too little; more people die from old age than from infectious diseases; and more people commit suicide than are killed by soldiers, terrorists and criminals combined. In the early twenty-first century, the average human is far more likely to die from bingeing at McDonald’s than from drought, Ebola or an al-Qaeda attack.
Hence even though presidents, CEOs and generals still have their daily schedules full of economic crises and military conflicts, on the cosmic scale of history humankind can lift its eyes up and start looking towards new horizons. If we are indeed bringing famine, plague and war under control, what will replace them at the top of the human agenda? Like firefighters in a world without fire, so humankind in the twenty-first century needs to ask itself an unprecedented question: what are we going to do with ourselves? In a healthy, prosperous and harmonious world, what will demand our attention and ingenuity? This question becomes doubly urgent given the immense new powers that biotechnology and information technology are providing us with. What will we do with all that power?
Before answering this question, we need to say a few more words about famine, plague and war. The claim that we are bringing them under control may strike many as outrageous, extremely naïve, or perhaps callous. What about the billions of people scraping a living on less than $2 a day? What about the ongoing AIDS crisis in Africa, or the wars raging in Syria and Iraq? To address these concerns, let us take a closer look at the world of the early twenty-first century, before exploring the human agenda for the coming decades.
The Biological Poverty Line
Let’s start with famine, which for thousands of years has been humanity’s worst enemy. Until recently most humans lived on the very edge of the biological poverty line, below which people succumb to malnutrition and hunger. A small mistake or a bit of bad luck could easily be a death sentence for an entire family or village. If heavy rains destroyed your wheat crop, or robbers carried off your goat herd, you and your loved ones may well have starved to death. Misfortune or stupidity on the collective level resulted in mass famines. When severe drought hit ancient Egypt or medieval India, it was not uncommon that 5 or 10 per cent of the population perished. Provisions became scarce; transport was too slow and expensive to import sufficient food; and governments were far too weak to save the day.
Open any history book and you are likely to come across horrific accounts of famished populations, driven mad by hunger. In April 1694 a French official in the town of Beauvais described the impact of famine and of soaring food prices, saying that his entire district was now filled with ‘an infinite number of poor souls, weak from hunger and wretchedness and dying from want, because, having no work or occupation, they lack the money to buy bread. Seeking to prolong their lives a little and somewhat to appease their hunger, these poor folk eat such unclean things as cats and the flesh of horses flayed and cast onto dung heaps. [Others consume] the blood that flows when cows and oxen are slaughtered, and the offal that cooks throw into the streets. Other poor wretches eat nettles and weeds, or roots and herbs which they boil in water.’1
Similar scenes took place all over France. Bad weather had ruined the harvests throughout the kingdom in the previous two years, so that by the spring of 1694 the granaries were completely empty. The rich charged exorbitant prices for whatever food they managed to hoard, and the poor died in droves. About 2.8 million French – 15 per cent of the population – starved to death between 1692 and 1694, while the Sun King, Louis XIV, was dallying with his mistresses in Versailles. The following year, 1695, famine struck Estonia, killing a fifth of the population. In 1696 it was the turn of Finland, where a quarter to a third of people died. Scotland suffered from severe famine between 1695 and 1698, some districts losing up to 20 per cent of their inhabitants.2
Most readers probably know how it feels when you miss lunch, when you fast on some religious holiday, or when you live for a few days on vegetable shakes as part of a new wonder diet. But how does it feel when you haven’t eaten for days on end and you have no clue where to get the next morsel of food? Most people today have never experienced this excruciating torment. Our ancestors, alas, knew it only too well. When they cried to God, ‘Deliver us from famine!’, this is what they had in mind.
During the last hundred years, technological, economic and political developments have created an increasingly robust safety net separating humankind from the biological poverty line. Mass famines still strike some areas from time to time, but they are exceptional, and they are almost always caused by human politics rather than by natural catastrophes. In most parts of the planet, even if a person has lost his job and all of his possessions, he is unlikely to die from hunger. Private insurance schemes, government agencies and international NGOs may not rescue him from poverty, but they will provide him with enough daily calories to survive. On the collective level, the global trade network turns droughts and floods into business opportunities, and makes it possible to overcome food shortages quickly and cheaply. Even when wars, earthquakes or tsunamis devastate entire countries, international efforts usually succeed in preventing famine. Though hundreds of millions still go hungry almost every day, in most countries very few people actually starve to death.
Poverty certainly causes many other health problems, and malnutrition shortens life expectancy even in the richest countries on earth. In France, for example, 6 million people (about 10 per cent of the population) suffer from nutritional insecurity. They wake up in the morning not knowing whether they will have anything to eat for lunch; they often go to sleep hungry; and the nutrition they do obtain is unbalanced and unhealthy – lots of starch, sugar and salt, and not enough protein and vitamins.3 Yet nutritional insecurity isn’t famine, and France of the early twenty-first century isn’t France of 1694. Even in the worst slums around Beauvais or Paris, people don’t die because they have not eaten for weeks on end.
The same transformation has occurred in numerous other countries, most notably China. For millennia, famine stalked every Chinese regime from the Yellow Emperor to the Red communists. A few decades ago China was a byword for food shortages. Tens of millions of Chinese starved to death during the disastrous Great Leap Forward, and experts routinely predicted that the problem would only get worse. In 1974 the first World Food Conference was convened in Rome, and delegates were treated to apocalyptic scenarios. They were told that there was no way for China to feed its billion people, and that the world’s most populous country was heading towards catastrophe. In fact, it was heading towards the greatest economic miracle in history. Since 1974 hundreds of millions of Chinese have been lifted out of poverty, and though hundreds of millions more still suffer greatly from privation and malnutrition, for the first time in its recorded history China is now free from famine.
Indeed, in most countries today overeating has become a far worse problem than famine. In the eighteenth century Marie Antoinette allegedly advised the starving masses that if they ran out of bread, they should just eat cake instead. Today, the poor are following this advice to the letter. Whereas the rich residents of Beverly Hills eat lettuce salad and steamed tofu with quinoa, in the slums and ghettos the poor gorge on Twinkie cakes, Cheetos, hamburgers and pizza. In 2014 more than 2.1 billion people were overweight, compared to 850 million who suffered from malnutrition. Half of humankind is expected to be overweight by 2030.4 In 2010 famine and malnutrition combined killed about 1 million people, whereas obesity killed 3 million.5
Invisible Armadas
After famine, humanity’s second great enemy was plagues and infectious diseases. Bustling cities linked by a ceaseless stream of merchants, officials and pilgrims were both the bedrock of human civilisation and an ideal breeding ground for pathogens. People consequently lived their lives in ancient Athens or medieval Florence knowing that they might fall ill and die next week, or that an epidemic might suddenly erupt and destroy their entire family in one swoop.
The most famous such outbreak, the so-called Black Death, began in the 1330s, somewhere in east or central Asia, when the flea-dwelling bacterium Yersinia pestis started infecting humans bitten by the fleas. From there, riding on an army of rats and fleas, the plague quickly spread all over Asia, Europe and North Africa, taking less than twenty years to reach the shores of the Atlantic Ocean. Between 75 million and 200 million people died – more than a quarter of the population of Eurasia. In England, four out of ten people died, and the population dropped from a pre-plague high of 3.7 million people to a post-plague low of 2.2 million. The city of Florence lost 50,000 of its 100,000 inhabitants.6
[image: ]
Credit 1.2
2. Medieval people personified the Black Death as a horrific demonic force beyond human control or comprehension.
The authorities were completely helpless in the face of the calamity. Except for organising mass prayers and processions, they had no idea how to stop the spread of the epidemic – let alone cure it. Until the modern era, humans blamed diseases on bad air, malicious demons and angry gods, and did not suspect the existence of bacteria and viruses. People readily believed in angels and fairies, but they could not imagine that a tiny flea or a single drop of water might contain an entire armada of deadly predators.
[image: ]
Credit 1.3
3. The real culprit was the minuscule Yersinia pestis bacterium.
The Black Death was not a singular event, nor even the worst plague in history. More disastrous epidemics struck America, Australia and the Pacific Islands following the arrival of the first Europeans. Unbeknown to the explorers and settlers, they brought with them new infectious diseases against which the natives had no immunity. Up to 90 per cent of the local populations died as a result.7
On 5 March 1520 a small Spanish flotilla left the island of Cuba on its way to Mexico. The ships carried 900 Spanish soldiers along with horses, firearms and a few African slaves. One of the slaves, Francisco de Eguía, carried on his person a far deadlier cargo. Francisco didn’t know it, but somewhere among his trillions of cells a biological time bomb was ticking: the smallpox virus. After Francisco landed in Mexico the virus began to multiply exponentially within his body, eventually bursting out all over his skin in a terrible rash. The feverish Francisco was taken to bed in the house of a Native American family in the town of Cempoallan. He infected the family members, who infected the neighbours. Within ten days Cempoallan became a graveyard. Refugees spread the disease from Cempoallan to the nearby towns. As town after town succumbed to the plague, new waves of terrified refugees carried the disease throughout Mexico and beyond.
The Mayas in the Yucatán Peninsula believed that three evil gods – Ekpetz, Uzannkak and Sojakak – were flying from village to village at night, infecting people with the disease. The Aztecs blamed it on the gods Tezcatlipoca and Xipe, or perhaps on the black magic of the white people. Priests and doctors were consulted. They advised prayers, cold baths, rubbing the body with bitumen and smearing squashed black beetles on the sores. Nothing helped. Tens of thousands of corpses lay rotting in the streets, without anyone daring to approach and bury them. Entire families perished within a few days, and the authorities ordered that the houses were to be collapsed on top of the bodies. In some settlements half the population died.
In September 1520 the plague had reached the Valley of Mexico, and in October it entered the gates of the Aztec capital, Tenochtitlan – a magnificent metropolis of 250,000 people. Within two months at least a third of the population perished, including the Aztec emperor Cuitláhuac. Whereas in March 1520, when the Spanish fleet arrived, Mexico was home to 22 million people, by December only 14 million were still alive. Smallpox was only the first blow. While the new Spanish masters were busy enriching themselves and exploiting the natives, deadly waves of flu, measles and other infectious diseases struck Mexico one after the other, until in 1580 its population was down to less than 2 million.8
Two centuries later, on 18 January 1778, the British explorer Captain James Cook reached Hawaii. The Hawaiian islands were densely populated by half a million people, who lived in complete isolation from both Europe and America, and consequently had never been exposed to European and American diseases. Captain Cook and his men introduced the first flu, tuberculosis and syphilis pathogens to Hawaii. Subsequent European visitors added typhoid and smallpox. By 1853, only 70,000 survivors remained in Hawaii.9
Epidemics continued to kill tens of millions of people well into the twentieth century. In January 1918 soldiers in the trenches of northern France began dying in their thousands from a particularly virulent strain of flu, nicknamed ‘the Spanish Flu’. The front line was the end point of the most efficient global supply network the world had hitherto seen. Men and munitions were pouring in from Britain, the USA, India and Australia. Oil was sent from the Middle East, grain and beef from Argentina, rubber from Malaya and copper from Congo. In exchange, they all got Spanish Flu. Within a few months, about half a billion people – a third of the global population – came down with the virus. In India it killed 5 per cent of the population (15 million people). On the island of Tahiti, 14 per cent died. On Samoa, 20 per cent. In the copper mines of the Congo one out of five labourers perished. Altogether the pandemic killed between 50 million and 100 million people in less than a year. The First World War killed 40 million from 1914 to 1918.10
Alongside such epidemical tsunamis that struck humankind every few decades, people also faced smaller but more regular waves of infectious diseases, which killed millions every year. Children who lacked immunity were particularly susceptible to them, hence they are often called ‘childhood diseases’. Until the early twentieth century, about a third of children died before reaching adulthood from a combination of malnutrition and disease.
During the last century humankind became ever more vulnerable to epidemics, due to a combination of growing populations and better transport. A modern metropolis such as Tokyo or Kinshasa offers pathogens far richer hunting grounds than medieval Florence or 1520 Tenochtitlan, and the global transport network is today even more efficient than in 1918. A Spanish virus can make its way to Congo or Tahiti in less than twenty-four hours. We should therefore have expected to live in an epidemiological hell, with one deadly plague after another.
However, both the incidence and impact of epidemics have gone down dramatically in the last few decades. In particular, global child mortality is at an all-time low: less than 5 per cent of children die before reaching adulthood. In the developed world the rate is less than 1 per cent.11 This miracle is due to the unprecedented achievements of twentieth-century medicine, which has provided us with vaccinations, antibiotics, improved hygiene and a much better medical infrastructure.
For example, a global campaign of smallpox vaccination was so successful that in 1979 the World Health Organization declared that humanity had won, and that smallpox had been completely eradicated. It was the first epidemic humans had ever managed to wipe off the face of the earth. In 1967 smallpox had still infected 15 million people and killed 2 million of them, but in 2014 not a single person was either infected or killed by smallpox. The victory has been so complete that today the WHO has stopped vaccinating humans against smallpox.12
Every few years we are alarmed by the outbreak of some potential new plague, such as SARS in 2002/3, bird flu in 2005, swine flu in 2009/10 and Ebola in 2014. Yet thanks to efficient counter-measures these incidents have so far resulted in a comparatively small number of victims. SARS, for example, initially raised fears of a new Black Death, but eventually ended with the death of fewer than 1,000 people worldwide.13 The Ebola outbreak in West Africa seemed at first to spiral out of control, and on 26 September 2014 the WHO described it as ‘the most severe public health emergency seen in modern times’.14 Nevertheless, by early 2015 the epidemic had been reined in, and in January 2016 the WHO declared it over. It infected 30,000 people (killing 11,000 of them), caused massive economic damage throughout West Africa, and sent shockwaves of anxiety across the world; but it did not spread beyond West Africa, and its death toll was nowhere near the scale of the Spanish Flu or the Mexican smallpox epidemic.
Even the tragedy of AIDS, seemingly the greatest medical failure of the last few decades, can be seen as a sign of progress. Since its first major outbreak in the early 1980s, more than 30 million people have died of AIDS, and tens of millions more have suffered debilitating physical and psychological damage. It was hard to understand and treat the new epidemic, because AIDS is a uniquely devious disease. Whereas a human infected with the smallpox virus dies within a few days, an HIV-positive patient may seem perfectly healthy for weeks or even months, yet go on infecting others unknowingly. In addition, HIV itself does not kill. Rather, it destroys the immune system, thereby exposing the patient to numerous other diseases. It is these secondary diseases that actually kill AIDS victims. Consequently, when AIDS began to spread, it was especially difficult to understand what was happening. When two patients were admitted to a New York hospital in 1981, one ostensibly dying from pneumonia and the other from cancer, it was not at all evident that both were in fact victims of HIV, which may have infected them months or even years previously.15
However, despite these difficulties, after the medical community became aware of the mysterious new plague, it took scientists just two years to identify it, understand how the virus spreads and suggest effective ways to slow down the epidemic. Within another ten years new medicines turned AIDS from a death sentence into a chronic condition (at least for those wealthy enough to afford the treatment).16 Just think what would have happened if AIDS had erupted in 1581 rather than 1981. In all likelihood, nobody back then would have figured out what caused the epidemic, how it moved from person to person, or how it could be halted (let alone cured). Under such conditions, AIDS might have killed a much larger proportion of the human race, equalling and perhaps even surpassing the Black Death.
Despite the horrendous toll AIDS has taken, and despite the millions killed each year by long-established infectious diseases such as malaria, epidemics are a far smaller threat to human health today than in previous millennia. The vast majority of people die from non-infectious illnesses such as cancer and heart disease, or simply from old age.17 (Incidentally, cancer and heart disease are of course not new illnesses – they go back to antiquity. In previous eras, however, relatively few people lived long enough to die from them.)
Many fear that this is only a temporary victory, and that some unknown cousin of the Black Death is waiting just around the corner. No one can guarantee that plagues won’t make a comeback, but there are good reasons to think that in the arms race between doctors and germs, doctors run faster. New infectious diseases appear mainly as a result of chance mutations in pathogen genomes. These mutations allow the pathogens to jump from animals to humans, to overcome the human immune system, or to resist medicines such as antibiotics. Today such mutations probably occur and disseminate faster than in the past, due to human impact on the environment.18 Yet in the race against medicine, pathogens ultimately depend on the blind hand of fortune.
Doctors, in contrast, count on more than mere luck. Though science owes a huge debt to serendipity, doctors don’t just throw different chemicals into test tubes, hoping to chance upon some new medicine. With each passing year doctors accumulate more and better knowledge, which they use in order to design more effective medicines and treatments. Consequently, though in 2050 we will undoubtedly face much more resilient germs, medicine in 2050 will likely be able to deal with them more efficiently than today.19
In 2015 doctors announced the discovery of a completely new type of antibiotic – teixobactin – to which bacteria have no resistance as yet. Some scholars believe teixobactin may prove to be a game-changer in the fight against highly resistant germs.20 Scientists are also developing revolutionary new treatments that work in radically different ways to any previous medicine. For example, some research labs are already home to nano-robots that may one day navigate through our bloodstream, identify illnesses and kill pathogens and cancerous cells.21 Microorganisms may have 4 billion years of cumulative experience fighting organic enemies, but they have exactly zero experience fighting bionic predators, and would therefore find it doubly difficult to evolve effective defences.
So while we cannot be certain that some new Ebola outbreak or an unknown flu strain won’t sweep across the globe and kill millions, we will not regard it as an inevitable natural calamity. Rather, we will see it as an inexcusable human failure and demand the heads of those responsible. When in late summer 2014 it seemed for a few terrifying weeks that Ebola was gaining the upper hand over the global health authorities, investigative committees were hastily set up. An initial report published on 18 October 2014 criticised the World Health Organization for its unsatisfactory reaction to the outbreak, blaming the epidemic on corruption and inefficiency in the WHO’s African branch. Further criticism was levelled at the international community as a whole for not responding quickly and forcefully enough. Such criticism assumes that humankind has the knowledge and tools to prevent plagues, and if an epidemic nevertheless gets out of control, it is due to human incompetence rather than divine anger.
So in the struggle against natural calamities such as AIDS and Ebola, the scales are tipping in humanity’s favour. But what about the dangers inherent in human nature itself? Biotechnology enables us to defeat bacteria and viruses, but it simultaneously turns humans themselves into an unprecedented threat. The same tools that enable doctors to quickly identify and cure new illnesses may also enable armies and terrorists to engineer even more terrible diseases and doomsday pathogens. It is therefore likely that major epidemics will continue to endanger humankind in the future only if humankind itself creates them, in the service of some ruthless ideology. The era when humankind stood helpless before natural epidemics is probably over. But we may come to miss it.
Breaking the Law of the Jungle
The third piece of good news is that wars too are disappearing. Throughout history most humans took war for granted, whereas peace was a temporary and precarious state. International relations were governed by the Law of the Jungle, according to which even if two polities lived in peace, war always remained an option. For example, even though Germany and France were at peace in 1913, everybody knew that they might be at each other’s throats in 1914. Whenever politicians, generals, business people and ordinary citizens made plans for the future, they always left room for war. From the Stone Age to the age of steam, and from the Arctic to the Sahara, every person on earth knew that at any moment the neighbours might invade their territory, defeat their army, slaughter their people and occupy their land.
During the second half of the twentieth century this Law of the Jungle was finally broken, if not rescinded. In most areas wars became rarer than ever. Whereas in ancient agricultural societies human violence caused about 15 per cent of all deaths, during the twentieth century violence caused only 5 per cent of deaths, and in the early twenty-first century it is responsible for about 1 per cent of global mortality.22 In 2012 about 56 million people died throughout the world; 620,000 of them died due to human violence (war killed 120,000 people, and crime killed another 500,000). In contrast, 800,000 committed suicide, and 1.5 million died of diabetes.23 Sugar is now more dangerous than gunpowder.
Even more importantly, a growing segment of humankind has come to see war as simply inconceivable. For the first time in history, when governments, corporations and private individuals consider their immediate future, many of them don’t think about war as a likely event. Nuclear weapons have turned war between superpowers into a mad act of collective suicide, and therefore forced the most powerful nations on earth to find alternative and peaceful ways to resolve conflicts. Simultaneously, the global economy has been transformed from a material-based economy into a knowledge-based economy. Previously the main sources of wealth were material assets such as gold mines, wheat fields and oil wells. Today the main source of wealth is knowledge. And whereas you can conquer oil fields through war, you cannot acquire knowledge that way. Hence as knowledge became the most important economic resource, the profitability of war declined and wars became increasingly restricted to those parts of the world – such as the Middle East and Central Africa – where the economies are still old-fashioned material-based economies.
In 1998 it made sense for Rwanda to seize and loot the rich coltan mines of neighbouring Congo, because this ore was in high demand for the manufacture of mobile phones and laptops, and Congo held 80 per cent of the world’s coltan reserves. Rwanda earned $240 million annually from the looted coltan. For poor Rwanda that was a lot of money.24 In contrast, it would have made no sense for China to invade California and seize Silicon Valley, for even if the Chinese could somehow prevail on the battlefield, there were no silicon mines to loot in Silicon Valley. Instead, the Chinese have earned billions of dollars from cooperating with hi-tech giants such as Apple and Microsoft, buying their software and manufacturing their products. What Rwanda earned from an entire year of looting Congolese coltan, the Chinese earn in a single day of peaceful commerce.
In consequence, the word ‘peace’ has acquired a new meaning. Previous generations thought about peace as the temporary absence of war. Today we think about peace as the implausibility of war. When in 1913 people said that there was peace between France and Germany, they meant that ‘there is no war going on at present between France and Germany, but who knows what next year will bring’. When today we say that there is peace between France and Germany, we mean that it is inconceivable under any foreseeable circumstances that war might break out between them. Such peace prevails not only between France and Germany, but between most (though not all) countries. There is no scenario for a serious war breaking out next year between Germany and Poland, between Indonesia and the Philippines, or between Brazil and Uruguay.
This New Peace is not just a hippie fantasy. Power-hungry governments and greedy corporations also count on it. When Mercedes plans its sales strategy in eastern Europe, it discounts the possibility that Germany might conquer Poland. A corporation importing cheap labourers from the Philippines is not worried that Indonesia might invade the Philippines next year. When the Brazilian government convenes to discuss next year’s budget, it’s unimaginable that the Brazilian defence minister will rise from his seat, bang his fist on the table and shout, ‘Just a minute! What if we want to invade and conquer Uruguay? You didn’t take that into account. We have to put aside $5 billion to finance this conquest.’ Of course, there are a few places where defence ministers still say such things, and there are regions where the New Peace has failed to take root. I know this very well because I live in one of these regions. But these are exceptions.
There is no guarantee, of course, that the New Peace will hold indefinitely. Just as nuclear weapons made the New Peace possible in the first place, so future technological developments might set the stage for new kinds of war. In particular, cyber warfare may destabilise the world by giving even small countries and non-state actors the ability to fight superpowers effectively. When the USA fought Iraq in 2003 it brought havoc to Baghdad and Mosul, but not a single bomb was dropped on Los Angeles or Chicago. In the future, though, a country such as North Korea or Iran could use logic bombs to shut down the power in California, blow up refineries in Texas and cause trains to collide in Michigan (‘logic bombs’ are malicious software planted in peacetime and activated from a distance; networks controlling vital infrastructure facilities in the USA and many other countries are quite likely already crammed with such code).
However, we should not confuse ability with motivation. Though cyber warfare introduces new means of destruction, it doesn’t necessarily add new incentives to use them. Over the last seventy years humankind has broken not only the Law of the Jungle, but also the Chekhov Law. Anton Chekhov famously said that a gun appearing in the first act of a play will inevitably be fired in the third. Throughout history, if kings and emperors acquired some new weapon, sooner or later they were tempted to use it. Since 1945, however, humankind has learned to resist this temptation. The gun that appeared in the first act of the Cold War was never fired. By now we are accustomed to living in a world full of undropped bombs and unlaunched missiles, and have become experts in breaking both the Law of the Jungle and the Chekhov Law. If these laws ever do catch up with us, it will be our own fault – not our inescapable destiny.
[image: ]
Credit 1.4
4. Nuclear missiles on parade in Moscow. The gun that was always on display but never fired.
What about terrorism, then? Even if central governments and powerful states have learned restraint, terrorists might have no such qualms about using new and destructive weapons. That is certainly a worrying possibility. However, terrorism is a strategy of weakness adopted by those who lack access to real power. At least in the past, terrorism worked by spreading fear rather than by causing significant material damage. Terrorists usually don’t have the strength to defeat an army, occupy a country or destroy entire cities. Whereas in 2010 obesity and related illnesses killed about 3 million people, terrorists killed a total of 7,697 people across the globe, most of them in developing countries.25 For the average American or European, Coca-Cola poses a far deadlier threat than al-Qaeda.
How, then, do terrorists manage to dominate the headlines and change the political situation throughout the world? By provoking their enemies to overreact. In essence, terrorism is a show. Terrorists stage a terrifying spectacle of violence that captures our imagination and makes us feel as if we are sliding back into medieval chaos. Consequently states often feel obliged to react to the theatre of terrorism with a show of security, orchestrating immense displays of force, such as the persecution of entire populations or the invasion of foreign countries. In most cases, this overreaction to terrorism poses a far greater threat to our security than the terrorists themselves.
Terrorists are like a fly that tries to destroy a china shop. The fly is so weak that it cannot budge even a single teacup. So it finds a bull, gets inside its ear and starts buzzing. The bull goes wild with fear and anger, and destroys the china shop. This is what happened in the Middle East in the last decade. Islamic fundamentalists could never have toppled Saddam Hussein by themselves. Instead they enraged the USA by the 9/11 attacks, and the USA destroyed the Middle Eastern china shop for them. Now they flourish in the wreckage. By themselves, terrorists are too weak to drag us back to the Middle Ages and re-establish the Jungle Law. They may provoke us, but in the end, it all depends on our reactions. If the Jungle Law comes back into force, it will not be the fault of terrorists.
—
Famine, plague and war will probably continue to claim millions of victims in the coming decades. Yet they are no longer unavoidable tragedies beyond the understanding and control of a helpless humanity. Instead, they have become manageable challenges. This does not belittle the suffering of hundreds of millions of poverty-stricken humans; of the millions felled each year by malaria, AIDS and tuberculosis; or of the millions trapped in violent vicious circles in Syria, the Congo or Afghanistan. The message is not that famine, plague and war have completely disappeared from the face of the earth, and that we should stop worrying about them. Just the opposite. Throughout history people felt these were unsolvable problems, so there was no point trying to put an end to them. People prayed to God for miracles, but they themselves did not seriously attempt to exterminate famine, plague and war. Those arguing that the world of 2016 is as hungry, sick and violent as it was in 1916 perpetuate this age-old defeatist view. They imply that all the huge efforts humans have made during the twentieth century have achieved nothing, and that medical research, economic reforms and peace initiatives have all been in vain. If so, what is the point of investing our time and resources in further medical research, novel economic reforms or new peace initiatives?
Acknowledging our past achievements sends a message of hope and responsibility, encouraging us to make even greater efforts in the future. Given our twentieth-century accomplishments, if people continue to suffer from famine, plague and war, we cannot blame it on nature or on God. It is within our power to make things better and to reduce the incidence of suffering even further.
Yet appreciating the magnitude of our achievements carries another message: history does not tolerate a vacuum. If incidences of famine, plague and war are decreasing, something is bound to take their place on the human agenda. We had better think very carefully what it is going to be. Otherwise, we might gain complete victory in the old battlefields only to be caught completely unaware on entirely new fronts. What are the projects that will replace famine, plague and war at the top of the human agenda in the twenty-first century?
One central project will be to protect humankind and the planet as a whole from the dangers inherent in our own power. We have managed to bring famine, plague and war under control thanks largely to our phenomenal economic growth, which provides us with abundant food, medicine, energy and raw materials. Yet this same growth destabilises the ecological equilibrium of the planet in myriad ways, which we have only begun to explore. Humankind has been late in acknowledging this danger, and has so far done very little about it. Despite all the talk of pollution, global warming and climate change, most countries have yet to make any serious economic or political sacrifices to improve the situation. When the moment comes to choose between economic growth and ecological stability, politicians, CEOs and voters almost always prefer growth. In the twenty-first century, we shall have to do better if we are to avoid catastrophe.
What else will humanity strive for? Would we be content merely to count our blessings, keep famine, plague and war at bay, and protect the ecological equilibrium? That might indeed be the wisest course of action, but humankind is unlikely to follow it. Humans are rarely satisfied with what they already have. The most common reaction of the human mind to achievement is not satisfaction, but craving for more. Humans are always on the lookout for something better, bigger, tastier. When humankind possesses enormous new powers, and when the threat of famine, plague and war is finally lifted, what will we do with ourselves? What will the scientists, investors, bankers and presidents do all day? Write poetry?
Success breeds ambition, and our recent achievements are now pushing humankind to set itself even more daring goals. Having secured unprecedented levels of prosperity, health and harmony, and given our past record and our current values, humanity’s next targets are likely to be immortality, happiness and divinity. Having reduced mortality from starvation, disease and violence, we will now aim to overcome old age and even death itself. Having saved people from abject misery, we will now aim to make them positively happy. And having raised humanity above the beastly level of survival struggles, we will now aim to upgrade humans into gods, and turn Homo sapiens into Homo deus.
The Last Days of Death
In the twenty-first century humans are likely to make a serious bid for immortality. Struggling against old age and death will merely carry on the time-honoured fight against famine and disease, and manifest the supreme value of contemporary culture: the worth of human life. We are constantly reminded that human life is the most sacred thing in the universe. Everybody says this: teachers in schools, politicians in parliaments, lawyers in courts and actors on theatre stages. The Universal Declaration of Human Rights adopted by the UN after the Second World War – which is perhaps the closest thing we have to a global constitution – categorically states that ‘the right to life’ is humanity’s most fundamental value. Since death clearly violates this right, death is a crime against humanity, and we ought to wage total war against it.
Throughout history, religions and ideologies did not sanctify life itself. They always sanctified something above or beyond earthly existence, and were consequently quite tolerant of death. Indeed, some of them have been downright fond of the Grim Reaper. Because Christianity, Islam and Hinduism insisted that the meaning of our existence depended on our fate in the afterlife, they viewed death as a vital and positive part of the world. Humans died because God decreed it, and their moment of death was a sacred metaphysical experience exploding with meaning. When a human was about to breathe his last, this was the time to call priests, rabbis and shamans, to draw out the balance of life, and to embrace one’s true role in the universe. Just try to imagine Christianity, Islam or Hinduism in a world without death – which is also a world without heaven, hell or reincarnation.
Modern science and modern culture have an entirely different take on life and death. They don’t think of death as a metaphysical mystery, and they certainly don’t view death as the source of life’s meaning. Rather, for modern people death is a technical problem that we can and should solve.
How exactly do humans die? Medieval fairy tales depicted Death as a figure in a hooded black cloak, his hand gripping a large scythe. A man lives his life, worrying about this and that, running here and there, when suddenly the Grim Reaper appears before him, taps him on the shoulder with a bony finger and says, ‘Come!’ And the man implores: ‘No, please! Wait just a year, a month, a day!’ But the hooded figure hisses: ‘No! You must come NOW!’ And this is how we die.
In reality, however, humans don’t die because a figure in a black cloak taps them on the shoulder, or because God decreed it, or because mortality is an essential part of some great cosmic plan. Humans always die due to some technical glitch. The heart stops pumping blood. The main artery is clogged by fatty deposits. Cancerous cells spread in the liver. Germs multiply in the lungs. And what is responsible for all these technical problems? Other technical problems. The heart stops pumping blood because not enough oxygen reaches the heart muscle. Cancerous cells spread because a chance genetic mutation rewrote their instructions. Germs settled in my lungs because somebody sneezed on the subway. Nothing metaphysical about it. It is all technical problems.
[image: ]
Credit 1.5
5. Death personified as the Grim Reaper in medieval art.
And every technical problem has a technical solution. We don’t need to wait for the Second Coming in order to overcome death. A couple of geeks in a lab can do it. If traditionally death was the speciality of priests and theologians, now the engineers are taking over. We can kill the cancerous cells with chemotherapy or nano-robots. We can exterminate the germs in the lungs with antibiotics. If the heart stops pumping, we can reinvigorate it with medicines and electric shocks – and if that doesn’t work, we can implant a new heart. True, at present we don’t have solutions to all technical problems. But this is precisely why we invest so much time and money in researching cancer, germs, genetics and nanotechnology.
Even ordinary people, who are not engaged in scientific research, have become used to thinking about death as a technical problem. When a woman goes to her physician and asks, ‘Doctor, what’s wrong with me?’ the doctor is likely to say, ‘Well, you have the flu,’ or ‘You have tuberculosis,’ or ‘You have cancer.’ But the doctor will never say, ‘You have death.’ And we are all under the impression that flu, tuberculosis and cancer are technical problems, to which we might someday find a technical solution.
Even when people die in a hurricane, a car accident or a war, we tend to view it as a technical failure that could and should have been prevented. If the government had only adopted a better policy; if the municipality had done its job properly; and if the military commander had taken a wiser decision, death would have been avoided. Death has become an almost automatic reason for lawsuits and investigations. ‘How could they have died? Somebody somewhere must have screwed up.’
The vast majority of scientists, doctors and scholars still distance themselves from outright dreams of immortality, claiming that they are trying to overcome only this or that particular problem. Yet because old age and death are the outcome of nothing but particular problems, there is no point at which doctors and scientists are going to stop and declare: ‘Thus far, and not another step. We have overcome tuberculosis and cancer, but we won’t lift a finger to fight Alzheimer’s. People can go on dying from that.’ The Universal Declaration of Human Rights does not say that humans have ‘the right to life until the age of ninety’. It says that every human has a right to life, period. That right isn’t limited by any expiry date.
An increasing minority of scientists and thinkers consequently speak more openly these days, and state that the flagship enterprise of modern science is to defeat death and grant humans eternal youth. Notable examples are the gerontologist Aubrey de Grey and the polymath and inventor Ray Kurzweil (winner of the 1999 US National Medal of Technology and Innovation). In 2012 Kurzweil was appointed a director of engineering at Google, and a year later Google launched a sub-company called Calico whose stated mission is ‘to solve death’.26 Google has recently appointed another immortality true-believer, Bill Maris, to preside over the Google Ventures investment fund. In a January 2015 interview, Maris said, ‘If you ask me today, is it possible to live to be 500, the answer is yes.’ Maris backs up his brave words with a lot of hard cash. Google Ventures is investing 36 per cent of its $2 billion portfolio in life sciences start-ups, including several ambitious life-extending projects. Using an American football analogy, Maris explained that in the fight against death, ‘We aren’t trying to gain a few yards. We are trying to win the game.’ Why? Because, says Maris, ‘it is better to live than to die’.27
Such dreams are shared by other Silicon Valley luminaries. PayPal co-founder Peter Thiel has recently confessed that he aims to live for ever. ‘I think there are probably three main modes of approaching [death],’ he explained. ‘You can accept it, you can deny it or you can fight it. I think our society is dominated by people who are into denial or acceptance, and I prefer to fight it.’ Many people are likely to dismiss such statements as teenage fantasies. Yet Thiel is somebody to be taken very seriously. He is one of the most successful and influential entrepreneurs in Silicon Valley with a private fortune estimated at $2.2 billion.28 The writing is on the wall: equality is out – immortality is in.
The breakneck development of fields such as genetic engineering, regenerative medicine and nanotechnology fosters ever more optimistic prophecies. Some experts believe that humans will overcome death by 2200, others say 2100. Kurzweil and de Grey are even more sanguine. They maintain that anyone possessing a healthy body and a healthy bank account in 2050 will have a serious shot at immortality by cheating death a decade at a time. According to Kurzweil and de Grey, every ten years or so we will march into the clinic and receive a makeover treatment that will not only cure illnesses, but will also regenerate decaying tissues, and upgrade hands, eyes and brains. Before the next treatment is due, doctors will have invented a plethora of new medicines, upgrades and gadgets. If Kurzweil and de Grey are right, there may already be some immortals walking next to you on the street – at least if you happen to be walking down Wall Street or Fifth Avenue.
In truth they will actually be a-mortal, rather than immortal. Unlike God, future superhumans could still die in some war or accident, and nothing could bring them back from the netherworld. However, unlike us mortals, their life would have no expiry date. So long as no bomb shreds them to pieces or no truck runs them over, they could go on living indefinitely. Which will probably make them the most anxious people in history. We mortals daily take chances with our lives, because we know they are going to end anyhow. So we go on treks in the Himalayas, swim in the sea, and do many other dangerous things like crossing the street or eating out. But if you believe you can live for ever, you would be crazy to gamble on infinity like that.
Perhaps, then, we had better start with more modest aims, such as doubling life expectancy? In the twentieth century we almost doubled life expectancy from forty to seventy, so in the twenty-first century we should at least be able to double it again to 150. Though falling far short of immortality, this would still revolutionise human society. For starters, family structure, marriages and child–parent relationships would be transformed. Today, people still expect to be married ‘till death us do part’, and much of life revolves around having and raising children. Now try to imagine a person with a lifespan of 150 years. Getting married at forty, she still has 110 years to go. Will it be realistic to expect her marriage to last 110 years? Even Catholic fundamentalists might baulk at that. So the current trend of serial marriages is likely to intensify. Bearing two children in her forties, she will, by the time she is 120, have only a distant memory of the years she spent raising them – a rather minor episode in her long life. It’s hard to tell what kind of new parent–child relationship might develop under such circumstances.
Or consider professional careers. Today we assume that you learn a profession in your teens and twenties, and then spend the rest of your life in that line of work. You obviously learn new things even in your forties and fifties, but life is generally divided into a learning period followed by a working period. When you live to be 150 that won’t do, especially in a world that is constantly being shaken by new technologies. People will have much longer careers, and will have to reinvent themselves again and again even at the age of ninety.
At the same time, people will not retire at sixty-five and will not make way for the new generation with its novel ideas and aspirations. The physicist Max Planck famously said that science advances one funeral at a time. He meant that only when one generation passes away do new theories have a chance to root out old ones. This is true not only of science. Think for a moment about your own workplace. No matter whether you are a scholar, journalist, cook or football player, how would you feel if your boss were 120, his ideas were formulated when Victoria was still queen, and he was likely to stay your boss for a couple of decades more?
In the political sphere the results might be even more sinister. Would you mind having Putin stick around for another ninety years? On second thoughts, if people lived to 150, then in 2016 Stalin would still be ruling in Moscow, going strong at 138, Chairman Mao would be a middle-aged 123-year-old, and Princess Elizabeth would be sitting on her hands waiting to inherit from the 121-year-old George VI. Her son Charles would not get his turn until 2076.
Coming back to the realm of reality, it is far from certain whether Kurzweil’s and de Grey’s prophecies will come true by 2050 or 2100. My own view is that the hopes of eternal youth in the twenty-first century are premature, and whoever takes them too seriously is in for a bitter disappointment. It is not easy to live knowing that you are going to die, but it is even harder to believe in immortality and be proven wrong.
Although average life expectancy has doubled over the last hundred years, it is unwarranted to extrapolate and conclude that we can double it again to 150 in the coming century. In 1900 global life expectancy was no higher than forty because many people died young from malnutrition, infectious diseases and violence. Yet those who escaped famine, plague and war could live well into their seventies and eighties, which is the natural life span of Homo sapiens. Contrary to common notions, seventy-year-olds weren’t considered rare freaks of nature in previous centuries. Galileo Galilei died at seventy-seven, Isaac Newton at eighty-four, and Michelangelo lived to the ripe age of eighty-eight, without any help from antibiotics, vaccinations or organ transplants. Indeed, even chimpanzees in the jungle sometimes live into their sixties.29
In truth, so far modern medicine hasn’t extended our natural life span by a single year. Its great achievement has been to save us from premature death, and allow us to enjoy the full measure of our years. Even if we now overcome cancer, diabetes and the other major killers, it would mean only that almost everyone will get to live to ninety – but it will not be enough to reach 150, let alone 500. For that, medicine will need to re-engineer the most fundamental structures and processes of the human body, and discover how to regenerate organs and tissues. It is by no means clear that we can do that by 2100.
Nevertheless, every failed attempt to overcome death will get us a step closer to the target, and that will inspire greater hopes and encourage people to make even greater efforts. Though Google’s Calico probably won’t solve death in time to make Google co-founders Sergey Brin and Larry Page immortal, it will most probably make significant discoveries about cell biology, genetic medicines and human health. The next generation of Googlers could therefore start their attack on death from new and better positions. The scientists who cry immortality are like the boy who cried wolf: sooner or later, the wolf actually comes.
Hence even if we don’t achieve immortality in our lifetime, the war against death is still likely to be the flagship project of the coming century. When you take into account our belief in the sanctity of human life, add the dynamics of the scientific establishment, and top it all with the needs of the capitalist economy, a relentless war against death seems to be inevitable. Our ideological commitment to human life will never allow us simply to accept human death. As long as people die of something, we will strive to overcome it.
The scientific establishment and the capitalist economy will be more than happy to underwrite this struggle. Most scientists and bankers don’t care what they are working on, provided it gives them an opportunity to make new discoveries and greater profits. Can anyone imagine a more exciting scientific challenge than outsmarting death – or a more promisin