Teaching staff

François Dermange

Professeur d'éthique

Ghislain Waterlot

Professeur d'éthique et de philosophie

Transcript

Today, I'm going to continue discussing utilitarianism but in a new context for us: that of neuroscience. But first, let's quickly go over the principal aspects of today's ethical debate -- and by today, I mean approximately since the beginning of this century. Although there is a diversity of ethical currents today, two in particular are considered the leading, and most important, theories: utilitarianism and deontological, or Kantian, ethics. Deontological ethics asserts there are moral duties characterized by a certain degree of absoluteness. Think, for instance, of the Ten Commandments. "You shall not kill" is given as an absolute duty. Likewise, the theory of human rights confers a certain absoluteness and moral predominance onto the concept of rights. On the other hand, utilitarianism seeks to maximize the happiness of the greatest number of people, and sees things like rights as important, but secondary, notions.

Now, there is a neuropsychologist named Joshua Greene who believes that neuroscience can help us determine which of these two competing ethical doctrines is right. At first, the idea that the correctness of a theory can be determined empirically may seem strange and counter-intuitive. Yet if it is indeed possible, it would have to be considered a major advance in the history of ethics and philosophy, and a turning point in our moral history.

So how does Greene proceed? He starts with a moral dilemma. Greene set up an experiment in which he measured how subjects -- in this case, his students -- respond to a famous moral thought experiment known as the "Trolley Problem." The point of the experiment was to measure, using functional magnetic resonance imaging (fMRI), the brain's response to a moral conundrum. There are two versions of the Trolley Problem. The first, the original version, is set up as follows. A runaway trolley is hurtling down a track, out of control, towards a place where five people are working.
There is no way to alert them of the imminent danger. But there is a switch you can pull to send the trolley down a side track, where only a single person is working. Participants in the experiment are asked: do you have the right to pull the switch and divert the trolley onto the side track? About 90% of participants answered yes, on the grounds that such a decision would lead to only one death rather than five. So that's the original version of the problem.

The second version is known as the "Fat Man" variant. The setup is the same: a runaway trolley heading for a group of five people. This time, however, there is no switch. This time, you're standing on a bridge straddling the tracks, and next to you is a very fat man. If you push the fat man onto the tracks, this will stop the trolley and save the lives of the five workers. The fat man, of course, will die. The question, this time, is: do you have the moral right to push the fat man onto the tracks in order to save five other lives? And here -- surprise, surprise -- only about 10% of subjects say yes. Note that they are not being asked to push the man, only whether one has the right to do so.

Here's the interesting thing: the actions in each version seem more or less the same. They present the same possible outcomes: either one person dies, or five. But in the first version, almost everyone agrees on minimizing loss of life, whereas the second version yields a diametrically opposed response. The differing responses, in fact, correspond with the two moral theories we've been comparing. Minimizing loss of life is the utilitarian approach; respecting the moral commandment not to kill -- an essential moral value in our tradition -- is the duty ethics position. How, then, are we to resolve the impasse? According to Greene, brain imaging provides us with the information required to come to a definitive conclusion.
Indeed, in the Fat Man version of the problem, the parts of the brain associated with emotions are recruited, whereas the original version recruits the cerebral areas associated with rational thought. In other words, when we use our reason, we follow the guidelines of utilitarianism, and when we use our emotions, or when our emotions come into play, we tend to follow the principles of deontological, or Kantian, ethics. This is perhaps a little paradoxical insofar as Kant sought to establish practical reason as the basis for morality.

Greene says Kant was mistaken. He was wrong to believe that reason tells us to respect the rights of each individual, and not to kill even if it means saving a large number of people. For Greene, reason tells us to sacrifice one individual to save many. Emotion is what prevents us from pushing the fat man off the bridge. And emotions, for Greene, do not constitute morally sufficient grounds to justify ethical decisions. This is not to say that Greene characterizes the emotions as psychological elements that incite us to act wrongly. The belief that following one's emotions in making moral decisions is a mistake has been with us for a long, long time. But this isn't what Greene contends. His contention is that there are some emotions that are morally "correct," so to speak, and others that prevent us from acting morally. Greene calls these emotions, which are mobilized in the brain by deontological-type problems, "alarm bell emotions."

One question comes to mind almost immediately: why should we follow reason rather than emotion? After all, don't our emotions sometimes prevent us from doing harm, as seems to be the case in the Fat Man scenario? Greene answers that the "alarm bell emotions" were probably useful, psychologically speaking, back when we were hunter-gatherers.
Any emotion that stops you from committing a direct and immediate harmful act, like pushing the man off the bridge, was undoubtedly very beneficial when primitive humans (and protohumans) lived in small groups. Today, however, these emotions no longer serve their purpose; they represent archaic psychological and behavioral remnants no longer suited to social systems as technologically advanced and complex as our 20th- and 21st-century civilization. In particular, one feature of our civilization is that it enables us to do great harm at a distance. The reason we are less uncomfortable doing harm at a distance -- the original version of the Trolley Problem asks you to pull a switch, an indirect act very different from pushing a man to his death -- is that such a dilemma doesn't trigger the "alarm bell" emotions. Which is not to say, of course, that doing harm at a distance is a good thing. Greene's argument is that each situation requires that we weigh the consequences of our action -- exactly what utilitarianism tells us. We need to make sure we don't rely on our immediate emotions, because they are liable to lead us into moral error. We are rational beings, says Greene, and thus able to self-consciously carry out the decision to use reason and set emotion aside. Greene's ultimate conclusion, then, is that utilitarianism is ethically superior to Kantian duty ethics because of its ability to sidestep archaic and morally obsolete emotional patterns.

Now, as you can imagine, proponents of duty ethics disagree. Let's look at their response to Greene's position. They point out an especially problematic aspect of Greene's theory: the fact that sociopaths -- individuals whose social behavior is extremely harmful to others -- tend to behave in a much more utilitarian manner than normal people. Interestingly, sociopathic behavior is sometimes the result of brain lesions. A sociopath will find it much easier to push the fat man off the bridge.
Isn't it strange, ask the deontologists, that your supposedly morally superior theory is the one embraced by those individuals whose behavior is at best asocial, and at worst, immoral? Clearly, then, the debate is far from over. Greene may not have succeeded in bringing the discussion to an end, but his ideas have certainly stirred the pot in a most fertile way.