
Science and Religion


Controlling the world

It is often argued that science and religion are enemies, because both seek the truth, yet each finds a different truth.

The fact is that science and religion are allies. Science is interested above all in power. Religion is interested above all in order. Together, they are a winning team.

Science is a very expensive affair, and it has managed to achieve wonders thanks only to the willingness of governments and businesses to channel billions into research and development. Governments and businesses have funded science not out of pure curiosity, but because they believe it can help them gain more power and attain some cherished goals. And who sets these goals? Not science – but religions and ideologies.

Our religious and ideological beliefs are thus the ultimate reason why science gets funded, and in return, they get to shape the scientific agenda and to determine what to do with the resulting discoveries.


Religion without God

We often assume that religions and gods go hand in hand. This seems obvious to Westerners, who are familiar mainly with monotheistic and polytheistic creeds. Yet the religious history of the world does not boil down to the history of gods. During the first millennium BC, religions of an altogether new kind began to spread through Afro-Asia. The newcomers, such as Jainism and Buddhism in India, Daoism and Confucianism in China, and Stoicism, Cynicism and Epicureanism in the Mediterranean basin, were characterized by their disregard of gods.

These creeds maintained that the superhuman order governing the world is the product of natural laws rather than of divine wills and whims. Some of these natural-law religions continued to espouse the existence of gods, but their gods were subject to the laws of nature no less than humans, animals, and plants were. Gods had their niche in the ecosystem, just as elephants and porcupines had theirs, but could no more change the laws of nature than elephants can. A prime example is Buddhism, the most important of the ancient natural-law religions, which remains one of the major faiths.

The central figure of Buddhism is not a god but a human being, Siddhartha Gautama. According to Buddhist tradition, Gautama was heir to a small Himalayan kingdom sometime around 500 BC. The young prince was deeply affected by the suffering he saw all around him. He saw that men and women, children and old people, all suffer not just from occasional calamities such as war and plague, but also from anxiety, frustration, and discontent, all of which seem to be an inseparable part of the human condition. People pursue wealth and power, acquire knowledge and possessions, beget sons and daughters, and build houses and palaces. Yet no matter what they achieve, they are never content. Those who live in poverty dream of riches. Those who have a million want two million. Those who have two million want ten. Even the rich and famous are rarely satisfied. They too are haunted by ceaseless cares and worries, until sickness, old age, and death put a bitter end to them. Everything that one has accumulated vanishes like smoke. Life is a pointless rat race. But how to escape it?

At the age of 29 Gautama slipped away from his palace in the middle of the night, leaving behind his family and possessions. He traveled as a homeless vagabond throughout northern India, searching for a way out of suffering. He visited ashrams and sat at the feet of gurus but nothing liberated him entirely—some dissatisfaction always remained. He did not despair. He resolved to investigate suffering on his own until he found a method for complete liberation. He spent six years meditating on the essence, causes, and cures for human anguish. In the end he came to the realization that suffering is not caused by ill fortune, by social injustice, or by divine whims. Rather, suffering is caused by the behavior patterns of one’s own mind.

Gautama’s insight was that no matter what the mind experiences, it usually reacts with craving, and craving always involves dissatisfaction. When the mind experiences something distasteful it craves to be rid of the irritation. When the mind experiences something pleasant, it craves that the pleasure will remain and will intensify. Therefore, the mind is always dissatisfied and restless. This is very clear when we experience unpleasant things, such as pain. As long as the pain continues, we are dissatisfied and do all we can to avoid it. Yet even when we experience pleasant things we are never content. We either fear that the pleasure might disappear, or we hope that it will intensify. People dream for years about finding love but are rarely satisfied when they find it. Some become anxious that their partner will leave; others feel that they have settled cheaply, and could have found someone better. And we all know people who manage to do both.

Great gods can send us rain, social institutions can provide justice and good healthcare, and lucky coincidences can turn us into millionaires, but none of them can change our basic mental patterns. Hence even the greatest kings are doomed to live in angst, constantly fleeing grief and anguish, forever chasing after greater pleasures.

Gautama found that there was a way to exit this vicious circle. If, when the mind experiences something pleasant or unpleasant, it simply understands things as they are, then there is no suffering. If you experience sadness without craving that the sadness go away, you continue to feel sadness but you do not suffer from it. There can actually be richness in the sadness. If you experience joy without craving that the joy linger and intensify, you continue to feel joy without losing your peace of mind.

But how do you get the mind to accept things as they are, without craving? To accept sadness as sadness, joy as joy, pain as pain? Gautama developed a set of meditation techniques that train the mind to experience reality as it is, without craving. These practices train the mind to focus all its attention on the question, “What am I experiencing now?” rather than on “What would I rather be experiencing?” It is difficult to achieve this state of mind, but not impossible.

Gautama grounded these meditation techniques in a set of ethical rules meant to make it easier for people to focus on actual experience and to avoid falling into cravings and fantasies. He instructed his followers to avoid killing, promiscuous sex, and theft, since such acts necessarily stoke the fire of craving (for power, for sensual pleasure, or for wealth). When the flames are completely extinguished, craving is replaced by a state of perfect contentment and serenity, known as nirvana (the literal meaning of which is “extinguishing the fire”). Those who have attained nirvana are fully liberated from all suffering. They experience reality with the utmost clarity, free of fantasies and delusions. While they will most likely still encounter unpleasantness and pain, such experiences cause them no misery. A person who does not crave cannot suffer.

According to Buddhist tradition, Gautama himself attained nirvana and was fully liberated from suffering. Henceforth he was known as “Buddha,” which means “The Enlightened One.” Buddha spent the rest of his life explaining his discoveries to others so that everyone could be freed from suffering. He encapsulated his teachings in a single law: Suffering arises from craving; the only way to be fully liberated from suffering is to be fully liberated from craving; and the only way to be liberated from craving is to train the mind to experience reality as it is.

This law, known as Dharma or Dhamma, is seen by Buddhists as a universal law of nature. That “suffering arises from craving” is always and everywhere true, just as in modern physics E always equals mc². Buddhists are people who believe in this law and make it the fulcrum of all their activities. Belief in gods, on the other hand, is of minor importance to them. The first principle of monotheist religions is “God exists. What does He want from me?” The first principle of Buddhism is “Suffering exists. How do I escape it?”

Buddhism does not deny the existence of gods—they are described as powerful beings who can bring rains and victories—but they have no influence on the law that suffering arises from craving. If the mind of a person is free of all craving, no god can make him miserable. Conversely, once craving arises in a person’s mind, all the gods in the universe cannot save him from suffering.

The War Against Death

Of all mankind’s ostensibly insoluble problems, one has remained the most vexing, interesting, and important: the problem of death itself. Before the late modern era, most religions and ideologies took it for granted that death was our inevitable fate. Moreover, most faiths turned death into the main source of meaning in life. Try to imagine Islam, Christianity, or the ancient Egyptian religion in a world without death. These creeds taught people that they must come to terms with death and pin their hopes on the afterlife, rather than seek to overcome death and live forever here on earth. The best minds were busy giving meaning to death, not trying to escape it.

That is the theme of the most ancient myth to come down to us—the Gilgamesh myth of ancient Sumer. Its hero is the strongest and most capable man in the world, King Gilgamesh of Uruk, who could defeat anyone in battle. One day, Gilgamesh’s best friend, Enkidu, died. Gilgamesh sat by the body and observed it for many days, until he saw a worm dropping out of his friend’s nostril. At that moment Gilgamesh was gripped by a terrible horror, and he resolved that he himself would never die. He would somehow find a way to defeat death. Gilgamesh then undertook a journey to the end of the universe, killing lions, battling scorpion-men, and finding his way into the underworld. There he shattered the stone giants of Urshanabi, the ferryman of the river of the dead, and found Utnapishtim, the last survivor of the primordial flood. Yet Gilgamesh failed in his quest. He returned home empty-handed, as mortal as ever, but with one new piece of wisdom. When the gods created man, Gilgamesh had learned, they set death as man’s inevitable destiny, and man must learn to live with it.

Disciples of progress do not share this defeatist attitude. For men of science, death is not an inevitable destiny, but merely a technical problem. People die not because the gods decreed it, but due to various technical failures—a heart attack, cancer, an infection. And every technical problem has a technical solution. If the heart flutters, it can be stimulated by a pacemaker or replaced by a new heart. If cancer rampages, it can be killed with drugs or radiation. If bacteria proliferate, they can be subdued with antibiotics. True, at present we cannot solve all technical problems. But we are working on them. Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal, and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments, and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself.

Until recently, you would not have heard scientists, or anyone else, speak so bluntly. “Defeat death?! What nonsense! We are only trying to cure cancer, tuberculosis, and Alzheimer’s disease,” they insisted. People avoided the issue of death because the goal seemed too elusive. Why create unreasonable expectations? We’re now at a point, however, where we can be frank about it. The leading project of the Scientific Revolution is to give humankind eternal life. Even if killing death seems a distant goal, we have already achieved things that were inconceivable a few centuries ago. In 1199, King Richard the Lionheart was struck by an arrow in his left shoulder. Today we’d say he incurred a minor injury. But in 1199, in the absence of antibiotics and effective sterilization methods, this minor flesh wound became infected and gangrene set in. The only way to stop the spread of gangrene in twelfth-century Europe was to cut off the infected limb, impossible when the infection was in a shoulder. The gangrene spread through the Lionheart’s body and no one could help the king. He died in great agony two weeks later.

As recently as the nineteenth century, the best doctors still did not know how to prevent infection and stop the putrefaction of tissues. In field hospitals doctors routinely cut off the hands and legs of soldiers who received even minor limb injuries, fearing gangrene. These amputations, as well as all other medical procedures (such as tooth extraction), were done without any anesthetics. The first anesthetics—ether, chloroform, and morphine—entered regular usage in Western medicine only in the middle of the nineteenth century. Before the advent of chloroform, four soldiers had to hold down a wounded comrade while the doctor sawed off the injured limb. On the morning after the Battle of Waterloo (1815), heaps of sawn-off hands and legs could be seen adjacent to the field hospitals. In those days, carpenters and butchers who enlisted in the army were often sent to serve in the medical corps, because surgery required little more than knowing your way with knives and saws.

In the two centuries since Waterloo, things have changed beyond recognition. Pills, injections, and sophisticated operations save us from a spate of illnesses and injuries that once spelled an inescapable death sentence. They also protect us against countless daily aches and ailments, which pre-modern people simply accepted as part of life. The average life expectancy jumped from around 25–40 years to around 67 years worldwide, and to around 80 years in the developed world.

How long will the Gilgamesh Project—the quest for immortality—take to complete? A hundred years? Five hundred years? A thousand years? When we recall how little we knew about the human body in 1900, and how much knowledge we have gained in a single century, there is cause for optimism. Genetic engineers have recently doubled the average life expectancy of Caenorhabditis elegans worms. Could they do the same for Homo sapiens? Nanotechnology experts are developing a bionic immune system composed of millions of nano-robots that would inhabit our bodies, open blocked blood vessels, fight viruses and bacteria, eliminate cancerous cells, and even reverse aging processes. A few serious scholars suggest that by 2050, some humans will become a-mortal (not immortal, because they could still die of some accident, but a-mortal, meaning that in the absence of fatal trauma their lives could be extended indefinitely).

Whether or not Project Gilgamesh succeeds, from a historical perspective it is fascinating to see that most late-modern religions and ideologies have already taken death and the afterlife out of the equation. Until the eighteenth century, religions considered death and its aftermath central to the meaning of life. Beginning in the eighteenth century, religions and ideologies such as liberalism, socialism, and feminism lost all interest in the afterlife. What, exactly, happens to a communist after he or she dies? What happens to a capitalist? What happens to a feminist? It is pointless to look for the answer in the writings of Marx, Adam Smith, or Simone de Beauvoir. The only modern ideology that still awards death a central role is nationalism. In its more poetic and desperate moments, nationalism promises that whoever dies for the nation will forever live in its collective memory. Yet this promise is so fuzzy that even most nationalists do not really know what to make of it.

Enslaved by timetables

While Sapiens have grown increasingly impervious to the whims of nature, they have become ever more subject to the dictates of modern industry and government. The Industrial Revolution opened the way to a long line of experiments in social engineering and an even longer series of unpremeditated changes in daily life and human mentality. One example among many is the replacement of the rhythms of traditional agriculture with the uniform and precise schedule of industry.

Traditional agriculture depended on cycles of natural time and organic growth. Most societies were unable to make precise time measurements, nor were they terribly interested in doing so. The world went about its business without clocks and timetables, subject only to the movements of the sun and the growth cycles of plants. There was no uniform working day, and all routines changed drastically from season to season. People knew where the sun was, and watched anxiously for portents of the rainy season and harvest time, but they did not know the hour and hardly cared about the year. If a lost time traveler popped up in a medieval village and asked a passerby, “What year is this?” the villager would be as bewildered by the question as by the stranger’s ridiculous clothing.

In contrast to medieval peasants and shoemakers, modern industry cares little about the sun or the season. It sanctifies precision and uniformity. For example, in a medieval workshop each shoemaker made an entire shoe, from sole to buckle. If one shoemaker was late for work, it did not stall the others. However, in a modern footwear factory assembly line, every worker mans a machine that produces just a small part of a shoe, which is then passed on to the next machine. If the worker who operates machine no. 5 has overslept, it stalls all the other machines. In order to prevent such calamities, everybody must adhere to a precise timetable. Each worker arrives at work at exactly the same time. Everybody takes their lunch break together, whether they are hungry or not. Everybody goes home when a whistle announces that the shift is over—not when they have finished their project.

The Industrial Revolution turned the timetable and the assembly line into a template for almost all human activities. Shortly after factories imposed their timeframes on human behavior, schools too adopted precise timetables, followed by hospitals, government offices, and grocery stores. Even in places devoid of assembly lines and machines, the timetable became king. If the shift at the factory ends at 5 p.m., the local pub had better be open for business by 5:02.

A crucial link in the spreading timetable system was public transportation. If workers needed to start their shift by 8:00 a.m., the train or bus had to reach the factory gate by 7:55. A few minutes’ delay would lower production and perhaps even lead to the layoffs of the unfortunate latecomers. In 1784 a carriage service with a published schedule began operating in Britain. Its timetable specified only the hour of departure, not arrival. Back then, each British city and town had its own local time, which could differ from London time by up to half an hour. When it was 12:00 in London, it was perhaps 12:20 in Liverpool and 11:50 in Canterbury. Since there were no telephones, no radio or television, and no fast trains—who could know, and who cared?

The first commercial train service began operating between Liverpool and Manchester in 1830. Ten years later, the first train timetable was issued. The trains were much faster than the old carriages, so the quirky differences in local hours became a severe nuisance. In 1847, British train companies put their heads together and agreed that henceforth all train timetables would be calibrated to Greenwich Observatory time, rather than the local times of Liverpool, Manchester, or Glasgow. More and more institutions followed the lead of the train companies. Finally, in 1880, the British government took the unprecedented step of legislating that all timetables in Britain must follow Greenwich. For the first time in history, a country adopted a national time and obliged its population to live according to an artificial clock rather than local ones or sunrise-to-sunset cycles.

This modest beginning spawned a global network of timetables, synchronized down to the tiniest fractions of a second. When the broadcast media—first radio, then television—made their debut, they entered a world of timetables and became its main enforcers and evangelists. Among the first things radio stations broadcast were time signals, beeps that enabled far-flung settlements and ships at sea to set their clocks. Later, radio stations adopted the custom of broadcasting the news every hour. Nowadays, the first item of every news broadcast—more important even than the outbreak of war—is the time. During World War II, BBC News was broadcast to Nazi-occupied Europe. Each news program opened with a live broadcast of Big Ben tolling the hour—the magical sound of freedom. Ingenious German physicists found a way to determine the weather conditions in London based on tiny differences in the tone of the broadcast ding-dongs. This information offered invaluable help to the Luftwaffe. When the British secret service discovered this, they replaced the live broadcast with a set recording of the famous clock.

Running the timetable network required cheap but precise portable clocks, which became ubiquitous. In Assyrian, Sassanid, or Inca cities there might have been at most a few sundials. In European medieval cities there was usually a single clock—a giant machine mounted on top of a high tower in the town square. These tower clocks were notoriously inaccurate, but since there were no other clocks in town to contradict them, it hardly made any difference. Today, a single affluent family generally has more timepieces at home than an entire medieval country. You can tell the time by looking at your wristwatch, glancing at your Android, peering at the alarm clock by your bed, gazing at the clock on the kitchen wall, staring at the microwave, catching a glimpse of the TV or DVD player, or taking in the taskbar on your computer out of the corner of your eye. You need to make a conscious effort not to know what time it is.

The typical person consults these clocks several dozen times a day, because almost everything we do has to be done on time. An alarm clock wakes us up at 7 a.m., we heat our frozen bagel for exactly 50 seconds in the microwave, brush our teeth for 3 minutes until the electric toothbrush beeps, catch the 7:40 train to work, run on the treadmill at the gym until the beeper announces that half an hour is over, sit down in front of the TV at 7 p.m. to watch our favorite show, get interrupted at preordained moments by commercials that cost $1,000 per second, and eventually unload all our angst on a therapist who restricts our prattle to the now standard 50-minute therapy hour.

The Marriage of Science and Religion

We are living in a technical age. Many are convinced that science and technology hold the answers to all our problems. We should just let the scientists and technicians go on with their work, and they will create heaven here on earth. But science is not an enterprise that takes place on some superior moral or spiritual plane above the rest of human activity. Like all other parts of our culture, it is shaped by economic, political, and religious interests.

Science is a very expensive affair. A biologist seeking to understand the human immune system requires laboratories, test tubes, chemicals, and electron microscopes, not to mention lab assistants, electricians, plumbers, and cleaners. An economist seeking to model credit markets must buy computers, set up giant databanks, and develop complicated data processing programs. An archaeologist who wishes to understand the behavior of archaic hunter-gatherers must travel to distant lands, excavate ancient ruins, and date fossilized bones and artifacts. All of this costs money.

During the past 500 years modern science has achieved wonders thanks largely to the willingness of governments, businesses, foundations, and private donors to channel billions of dollars into scientific research. These billions have done much more to chart the universe, map the planet, and catalogue the animal kingdom than did Galileo Galilei, Christopher Columbus, and Charles Darwin. If these particular geniuses had never been born, their insights would probably have occurred to others. But had the proper funding been unavailable, no intellectual brilliance could have compensated for it. If Darwin had never been born, for example, we’d today attribute the theory of evolution to Alfred Russel Wallace, who came up with the idea of evolution via natural selection independently of Darwin and just a few years later. But if the European powers had not financed geographical, zoological, and botanical research around the world, neither Darwin nor Wallace would have had the necessary empirical data to develop the theory of evolution. It is likely that they would not even have tried.

Why did the billions start flowing from government and business coffers into labs and universities? In academic circles, many are naïve enough to believe in pure science. They believe that government and business altruistically give them money to pursue whatever research projects strike their fancy. But this hardly describes the realities of science funding.

Most scientific studies are funded because somebody believes they can help attain some political, economic, or religious goal. For example, in the sixteenth century, kings and bankers channeled enormous resources to finance geographical expeditions around the world but not a penny for studying child psychology. This is because kings and bankers surmised that the discovery of new geographical knowledge would enable them to conquer new lands and set up trade empires, whereas they couldn’t see any profit in understanding child psychology.

In the 1940s the governments of America and the Soviet Union channeled enormous resources to the study of nuclear physics rather than underwater archaeology. They surmised that studying nuclear physics would enable them to develop nuclear weapons, whereas underwater archaeology was unlikely to help win wars. Scientists themselves are not always aware of the political, economic, and religious interests that control the flow of money; many scientists do, in fact, act out of pure intellectual curiosity. However, only rarely do scientists dictate the scientific agenda.

Even if we wanted to finance pure science unaffected by political, economic, or religious interests, it would probably be impossible. Our resources are limited, after all. Ask a congressman to allocate an additional million dollars to the National Science Foundation for basic research, and he’ll justifiably ask whether that money wouldn’t be better used to fund teacher training or to give a needed tax break to a troubled factory in his district. To channel limited resources we must answer questions such as “What is more important?” and “What is good?” And these are not scientific questions. Science can explain what exists in the world, how things work, and what might be in the future. By definition, it has no pretensions to knowing what should be in the future. Only religions and ideologies seek to answer such questions.

Consider the following quandary: Two biologists from the same department, possessing the same professional skills, have both applied for a million-dollar grant to finance their current research projects. Professor Slughorn wants to study a disease that infects the udders of cows, causing a ten percent decrease in their milk production. Professor Sprout wants to study whether cows suffer mentally when they are separated from their calves. Assuming that the amount of money is limited, and that it is impossible to finance both research projects, which one should be funded?

There is no scientific answer to this question. There are only political, economic, and religious answers. In today’s world, it is obvious that Slughorn has a better chance of getting the money. Not because udder diseases are scientifically more interesting than bovine mentality, but because the dairy industry, which stands to benefit from the research, has more political and economic clout than the animal rights lobby.

Perhaps in a strict Hindu society, where cows are sacred, or in a society committed to animal rights, Professor Sprout would have a better shot. But as long as she lives in a society that values the commercial potential of milk and the health of its human citizens over the feelings of cows, she’d best write up her research proposal so as to appeal to those assumptions. For example, she might write that “Depression leads to a decrease in milk production. If we understand the mental world of dairy cows, we could develop psychiatric medication that will improve their mood, thus raising milk production by up to ten percent. I estimate that there is a global annual market of 250 million dollars for bovine psychiatric medications.”

Science is unable to set its own priorities. It is also incapable of determining what to do with its discoveries. For example, from a purely scientific viewpoint it is unclear what we should do with our increasing understanding of genetics. Should we use this knowledge to cure cancer, to create a race of genetically engineered supermen, or to engineer dairy cows with super-sized udders? It is obvious that a liberal government, a Communist government, a Nazi government, and a capitalist business corporation would use the very same scientific discovery for completely different purposes, and there is no scientific reason to prefer one usage over others.

In short, scientific research can flourish only in alliance with some religion or ideology. The ideology justifies the costs of the research. In exchange, the ideology influences the scientific agenda and determines what to do with the discoveries. Hence in order to comprehend how humankind has reached Alamogordo and the Moon—rather than any number of alternative destinations—it is not enough to survey the achievements of physicists, biologists, and sociologists. We have to take into account the ideological, political and economic forces that shaped physics, biology, and sociology, pushing them in certain directions while neglecting others.