Tuesday, February 25, 2014

I’m going through a bit of a blogging lull at the moment due to the intervention of “real life” pressures. But since I don’t like to go without posting for too long I thought I would do something today to fill the gap until I can recommence the more serious stuff. And what better way to do that than to return to one of the predominant themes of this blog: morality and the philosophy of religion. This post is drawn from my reading of Louise Antony’s article “The Failure of Moral Arguments” in the book Debating Christian Theism. The article makes many excellent points about the alleged relationship between God and morality; I want to home in on one of those points in this post.

1. Craig and the Nothing But Argument
In some of his debates and scholarly writings, Christian-apologist-extraordinaire William Lane Craig has been known to make disparaging remarks about the implications of materialist/naturalist ontologies for meaning and value. Several of these remarks take the form of a “Nothing But”-argument. Consider, for example, the following passage:

After all, on the naturalistic view, there’s nothing special about human beings. They’re just accidental byproducts of nature which have evolved relatively recently on an infinitesimal speck of dust called the planet Earth…

The key moves in this passage are the claims that (if naturalism is true) there is nothing special about human beings because we are simply byproducts of random evolution. Elsewhere in the same book, he clarifies that the “nothing special” claim has to do with the moral value/status of human beings.

Similar instances of “nothing buttery” litter his writings. For example:

On the atheistic view, humans are just animals, and animals have no moral obligations to each other.

But if man has no immaterial aspect to his being (call it soul or mind or what have you), then we’re not qualitatively different from other animal species. On a materialistic anthropology there’s no reason to think that human beings are objectively more valuable than rats. When a terrorist bomb rips though a market in Baghdad, all that happens is a rearrangement of the molecules that used to be a little girl.

As Antony notes in her analysis, the argument uniting these passages is somewhat opaque, being stated in rhetorically pleasing, enthymematic terms. If we subject the rhetoric to some critical analysis, we can see more clearly what is going on, and use this to determine the true strengths and weaknesses of the argument.

Doing so, we observe the same “nothing but” logic, albeit with three different variations on what exactly it is that humans are nothing but. The three variations are: (i) humans are nothing but animals; (ii) humans are nothing but accidental byproducts of nature; and (iii) humans are nothing but collections of molecules arranged in a particular sequence. Plugging these three variations into a formal argumentative framework, we end up with the following take on the “nothing but” argument:

(1) If materialism (or naturalism) is true, then human beings are nothing but [animals/accidental byproducts of nature/collections of molecules].

(2) [Animals/accidental byproducts of nature/collections of molecules] are devoid of moral value.

(3) Therefore, if materialism is true, human beings are devoid of moral value.

On particular variants of this basic argument, the premises can seem quite objectionable. For example, Paul Draper has criticised the second premise of the “just animals” argument, noting that there is good reason to think that animals have moral status/value and that any belief in the “special” moral status of humans is prejudicial. Antony makes a similar observation but is more ambitious in her critique. She tries to identify a general structural problem shared by all versions of the argument.

2. The Three Meanings of “Nothing But”
The general problem has to do with the meaning of the phrase “nothing but” and its impact on the plausibility of the argument. As she points out, the argument needs an extra premise in order to be valid. As follows:

(4) If Qs are nothing but Ps, then Qs are not qualitatively different from Ps.

At first glance, we might think this principle is too obvious to be worth stating, but in reality whether this principle is true or not depends on the meaning we attach to the phrase “nothing but” and the consistency with which we apply that meaning to each premise in the argument. In other words, to make the argument work, we need to ensure that premise 4 is true in virtue of the meaning of “nothing but”, and that we apply the same meaning of “nothing but” to premise one and premise two. Antony suggests that there are at least three ways in which to cash out the meaning of “nothing but”.

The first is to view it as a compositional claim, i.e., to say that Qs are nothing but Ps is to say that Qs are made up of Ps. This take on the argument sits most comfortably with the “collections of molecules” variation. The problem is that it is patently false. Consider how it affects the key principle:

(4*) If human beings are made up of collections of molecules, then they are not qualitatively different from other collections of molecules.

It’s hard to imagine anyone seriously endorsing this principle. It is obviously true that different collections of molecules have different properties, and these properties could well affect moral value. A river is different from a rock, a block of ice is different from liquid water, a car is different from a horse, and so on. These things are all composed of the same stuff, but it would be pretty silly to swim in a rock, pour a block of ice into a bucket, or put a key in a horse. Composition matters to some degree, but organisation matters even more.

A second take on the meaning of “nothing but” is to view it as a contrastive claim, i.e. to use it to compare one type of thing to another. For example, if I claim that a particular politician is “nothing but” a shill for the pharmaceutical industry, I am implicitly drawing attention to something else that the politician might have been. Thus, he might have been an honest spokesperson for patients’ interests. How would a contrastive version of Craig’s argument work? Well, it could be used to draw attention to the immaterial beings or souls that humans might be on the theistic view:

(4**) If human beings are animals/collections of matter as opposed to immaterial souls, then they are not qualitatively different from other animals/collections of matter.

The problem is that this creates a non-sequitur: the mere fact that humans are not immaterial does not imply that they are not qualitatively different from other animals or collections of matter. Likewise, the fact that the contrast remains implicit is, I think, significant. It allows the proponent of the argument to sidestep the important question of why immaterial beings would have moral value. Implicit assumptions of this sort are always bothersome to me. It reminds me of naive indeterminists in the free will debate who assume that if they can prove that human behaviour is indeterministic they can also prove that it is “free”. Clearly this is not the case: indeterminism is insufficient for freedom. Similarly, immateriality is insufficient for moral value.

The final take on the meaning of “nothing but” is to view it as a deflationary claim, i.e. to use it to reduce or eliminate value from Qs. Antony argues that this is the only take on the argument that allows it to work in favour of Craig’s conclusion. The problem is that the argument is then question-begging in the extreme. It simply disparages or criticises things with property Q in a particularly intransigent and blinkered manner. Shelly Kagan put the point rather nicely in his debate with Craig. In a crucial phase, Craig had asked how something could possess intrinsic value solely in virtue of having a complex nervous system. To which Kagan replied:

If you put it as “complex nervous systems” it sounds pretty deflationary. What’s so special about a complex nervous system? But of course, that complex nervous system allows you to do calculus. It allows you to do astrophysics… to write poetry… to fall in love. Put under that description, when asked “What’s so special about humans…?”, I’m at a loss to know how to answer that question. If you don’t see why we’d be special… because we can do poetry [and] think philosophical thoughts [and] we can think about the morality of our behavior, I’m not sure what kind of answer could possibly satisfy you at that point.

Part one clarified what was meant by the term “sousveillance”, and considered an initial economic argument in its favour. To briefly recap, “sousveillance” refers to the general use of veillance technologies (i.e. technologies that can capture and record data about other people) by persons who are not in authority. This is to be contrasted with “surveillance” which is explicitly restricted to the use of veillance technologies by authorities. The initial economic argument for sousveillance was based on the notion that it could smooth the path to efficient economic exchanges by minimising the risk involved in such exchanges.

As it happens, this economic argument is really the main argument offered by Ali and Mann. They simply restate it in a couple of different ways. This is not to denigrate their efforts. Restating or rephrasing an argument can often be highly beneficial by drawing attention to different qualities or features of the argument. One of the goals of today’s post is to see whether this holds true in the case of Ali and Mann’s argument. This requires us to look at two further economic arguments for sousveillance. The first is based on the claim that sousveillance technologies can reduce information asymmetries and thereby reduce inefficiencies in economic markets; the second is based on the claim that sousveillance technologies minimise the scope for opportunism in economic exchanges (as often occurs in principal-agent transactions).

Additionally, this post will look at one other argument defended by Ali and Mann. This argument shifts attention away from economic exchanges and onto exchanges between ordinary citizens and government bureaucracies. The claim is that sousveillance helps to correct for the inequalities of power that are common in such exchanges.

Are these arguments any good? Do they make a persuasive case for sousveillance? Let’s see.

1. Sousveillance and Information Asymmetries
Friedrich Hayek’s famous argument for the free market (and against central planning) was based on the notion that the free market was an information processing and signalling system par excellence. Every society needs to make decisions about what goods should be made and services provided. It’s difficult for a central planner to make those decisions in an efficient and timely manner. They have to collate information about the preferences of all the people in the society, they have to work out the costs of the various goods and services, and they then have to implement production plans and supply schedules. The system is slow and cumbersome, and those within the planning administrations are often improperly incentivised.

It’s much easier for a distributed network of producers and suppliers to do this by responding to changes on the ground, and adjusting their production and supply to match local demand. They will be facilitated in doing this by the prices that are charged for goods and services on various markets. The prices are a signal, telling them which goods and services are worth providing, and which aren’t worth the hassle. By responding to subtle fluctuations in prices, this distributed network of agents will be able to coordinate on a schedule of production and supply that is maximally efficient.

The Hayekian argument turns on the value of the price signal. Provided that all relevant information is reflected in price signals, the free market system should indeed be the most efficient. The problem is that market prices often fail to reflect all the relevant information. This happens for a variety of reasons. For example, businesses might fail to incorporate long-term environmental costs into their short-term production costs because those costs are borne by society as a whole, not the producer. This has a knock-on effect on the market price. Similarly, and more importantly for present purposes, certain markets are infected by information asymmetries between buyers and sellers. These arise when one of the parties to an economic exchange has more exchange-relevant information than the other. This gives rise to a number of problems.

A classic example comes from George Akerlof’s paper “The Market for ‘Lemons’”. In it, Akerlof suggests that the market for second-hand cars is characterised by information asymmetry. The person selling the car knows far more about the quality of the car than the buyer. This puts the buyer at a disadvantage, which will be reflected in the price s/he is willing to offer for the car. The net effect of this information asymmetry is that sellers with good second-hand cars are driven out of the market — they won’t be offered a price they are willing to accept — and hence bad second-hand cars (“lemons”) predominate. This is all down to the fact that the price signal cannot incorporate all exchange-relevant information in this particular market.

The second-hand car example is an illustration of one of two problems with information asymmetries (and it should be noted that it’s not clear that the second-hand car market does exemplify the problem discussed by Akerlof). Since these problems are central to Ali and Mann’s argument, it is worth defining them a little more precisely:

Adverse Selection: This is the problem at the heart of Akerlof’s market for lemons. It refers to the notion that “bad” customers or products tend to predominate in certain markets, due to information asymmetries. Another classic illustration of this is the customer self-selection effect in markets for insurance. It is sometimes felt that those who demand insurance (e.g. health insurance) are those who know that their lifestyle is such that they are more likely to need it. But, of course, it is difficult for an insurance company to be better informed than the customer about such lifestyles. So the insurance companies err on the side of caution, and charge higher premiums to compensate for the potential risk. This means that low-risk customers are put at a disadvantage: they can’t credibly distinguish themselves from the high-risk customers.

Moral Hazard: This is a problem that arises from the fact that the costs of certain activities are not borne by the agents who carry them out. This is fuelled by information asymmetries between the party bearing the costs and the agent carrying out the acts (the latter have more information about their activities than the former). Insurance is again a classic example of a transaction involving moral hazard: the insured knows more about what they are going to do than the insurer. The bailout of major financial institutions post-2008 was also believed to give rise to moral hazard, the reason being that “too big to fail” institutions could now engage in high-risk activities, safe in the knowledge that if things got too bad, the government would come to their rescue.

Both problems devalue price signals by altering prices from what they would have been had the transactions been undertaken in conditions of perfect information, and by incentivising undesirable behaviour.
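The unravelling dynamic behind Akerlof’s lemons market can be made concrete with a toy calculation. This is only a sketch under simplified assumptions of my own (qualities uniform on [0, 1], a seller values a car at its quality q, a buyer at 1.5q, and the buyer knows only the distribution, not the individual car); none of the numbers come from Akerlof’s paper.

```python
# Toy sketch of adverse selection: buyers can't observe individual quality,
# so their offer reflects only the *average* quality of cars on the market.

def lemons_price(rounds: int = 50) -> float:
    """Iterate the buyers' best offer given which sellers would accept it."""
    p = 1.0  # start with buyers offering the maximum plausible price
    for _ in range(rounds):
        # Only sellers whose car quality q <= p are willing to sell, so the
        # average quality of cars actually offered for sale is p / 2.
        avg_quality = p / 2
        # Buyers will pay at most 1.5 times the expected quality on offer.
        p = 1.5 * avg_quality
    return p

if __name__ == "__main__":
    # After one round the offer drops to 0.75; iterating, it collapses
    # toward zero -- good cars are priced out and only lemons remain.
    print(lemons_price(rounds=1))
    print(lemons_price())
```

The collapse happens because each price cut drives the best remaining sellers out of the market, which lowers average quality, which justifies a further price cut.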

Now, you may well be wondering: what has all this got to do with sousveillance? Well, Ali and Mann argue that sousveillance can help to solve the problems of adverse selection and moral hazard by minimising information asymmetries. In other words, they argue (numbering continuing from part one):

(4) The problems of adverse selection and moral hazard reduce the efficiency of economic exchanges.

(5) Sousveillance can help to minimise the problems of adverse selection and moral hazard.

(6) Therefore, sousveillance can increase the efficiency of economic exchanges.

The key to this argument is premise (5). Ali and Mann make the case in favour of it in two parts. First, they note that adverse selection can be minimised through pre-transaction screening of “bad” customers or sellers, and through credible signalling. Sousveillance helps to facilitate both. For instance, an insurance customer who has carefully documented his life up until the point that he needs insurance can credibly signal to the insurance company that he is low-risk (if he is); or the owner of the second-hand car can provide meticulous records of his personal use of the car to demonstrate its quality. The same is true for moral hazard. In that case, the problem really has to do with post-transaction monitoring of the active party by the party who bears the costs. Sousveillance can, of course, facilitate such post-transaction monitoring.

Here’s my problem with all of this: Although I have no doubt that constant monitoring of activities with veillance technologies — both pre and post-transaction — could reduce (some) information asymmetries, I find it hard to see how this wouldn’t simply give rise to sur-veillance of a highly coercive and insidious nature, rather than sousveillance of a positive and autonomy-enhancing nature. As you’ll recall from part one, surveillance is when de facto authorities monitor ordinary people; sousveillance is when everybody uses veillance technologies. Whenever there are inequalities of power, there is a potential de facto authority. If those authorities can insist upon monitoring our activities, it seems to me that we have the conditions for liberty-undermining surveillance. I suspect this is what would happen in the case of things like insurance contracts.

Consider, in the first instance, that the signalling powers of sousveillance technologies could indeed be quite autonomy-enhancing. The early adopters could credibly signal that they are low-risk customers, and reap all the benefits of reduced insurance premiums. But this could easily set up a slippery slope to the compulsory use of such technologies. After all, given the benefits, why wouldn’t the insurance company insist upon monitoring every customer’s waking move before agreeing to give them insurance? And since the exchange between insurance providers and customers is characterised by inequalities of bargaining power (e.g. we are often legally obliged to buy insurance), it is hard to see why this wouldn’t amount to a kind of coercive surveillance. You would be dominated by the company: subtly encouraged to bring your behaviour in line with their preferences, whatever those preferences happen to be. And since similar inequalities of bargaining power are present in other markets, I think this is a general problem for the economic case for sousveillance. (There are possible counterarguments to this domination-style argument. I’d be interested in hearing about them in the comments section.)

2. Sousveillance, Opportunism and Bargaining with Bureaucracies
Now we’ve spent a long time looking at the information asymmetry example. This is because it is indicative of Ali and Mann’s style of argument, and because it gave me the opportunity to raise one of my main concerns about their claims. Fortunately, this means that the remaining arguments can be dealt with in a more cursory manner. They are simply variations on a theme.

The next argument Ali and Mann offer is based on an analysis of economic opportunism. This is the idea that certain economic transactions create conditions in which parties can engage in opportunistic, self-serving, and socially costly behaviour. Ali and Mann identify three types of opportunism, each of which they claim can be combatted by sousveillance:

First-degree Opportunism: This arises from the imperfect enforceability of contracts, i.e. people reneging on their contractual promises. Opportunism of this nature can be legally remedied, provided that the breach of contract can be proved. Obviously, the idea is that sousveillance can facilitate this. It can also allow for contractees to offer credible gestures that will encourage people to enter into otherwise risky contracts. Ali and Mann give the example of a painter who agrees that his work can be sousveilled by those availing of his service, so that they can see that he followed their instructions.

Second-degree Opportunism: This arises from unanticipated eventualities in long-term contracts, e.g. employment contracts. The idea is that no contract can cover every possible eventuality. If unanticipated eventualities come to pass, one or more of the parties to the contract could exploit the ambiguities in the contract that fail to cover those eventualities. Sousveillance can apparently minimise this in two ways. First, by providing a perfect record of the original negotiations, and hence a basis for arguing about implicit understandings. Second, by providing a cheap way in which the ongoing execution of the contract can be measured and enforced.

Third-degree Opportunism: This arises from discretions in relational contracts. The classic example here is the principal-agent contract, where a principal hires an agent to perform a certain task, and grants the agent discretionary powers in carrying out that task. The agent (partly due to information asymmetries; partly due to misaligned incentives) can sometimes exploit those discretionary powers. Again, sousveillance comes to the rescue by providing the principal with the means of monitoring the use of those discretionary powers.

I have three brief objections to this. First, on the notion that records of negotiations could help to minimise opportunism arising from unforeseen eventualities, I worry that Ali and Mann are being slightly naive about the complexity of negotiations and the vagueness and ambiguity of language. We often have meticulous records of the contexts in which various legal instruments (e.g. constitutions and statutes) are drafted, but that doesn’t eliminate the uncertainty, or reduce the scope for opportunistic interpretations of those instruments. I see no reason why sousveillance would change things in this regard.

Second, I return to my earlier worry about liberty-undermining surveillance. It may be true that an enterprising sole trader — like a painter — can take advantage of veillance technologies and make himself a more attractive commodity. But that’s to ignore other contexts in which there are inequalities of bargaining power. For example, suppose (as is already the case in some industries) that constant veillance becomes a compulsory part of all employee contracts. In that case, the monitoring is imposed by a de facto authority, and we are back to coercive surveillance rather than autonomy-enhancing sousveillance.

Third, there is the possibility that the monitoring of parties to some economic exchanges is counterproductive. I’m not sure whether it is an entirely credible theory of worker-motivation, but I’ll use the example anyway: Dan Pink’s book Drive argues that carrot-and-stick style incentives for workers are often counterproductive, particularly when it comes to non-routine, creative, and problem-solving forms of work. In those types of work, what is needed is not the threat of punishment or the allure of reward, but rather a sense of autonomy, mastery and purpose. If we follow Ali and Mann’s logic, however, sousveillance is advantageous precisely because it provides a monitoring tool that makes threats of punishment or reward more credible. That is to say: sousveillance only really helps to enhance the system of carrot-and-stick incentives. It may actually detract from creating a sense of autonomy, mastery and purpose, as workers may feel they are not being trusted.

Turning then to the last of Ali and Mann’s arguments. This one has to do with the positive impact of sousveillance on negotiations with bureaucracies. The mechanics of the argument should be familiar to us by now. The claim is that bureaucratic decision-making can be impersonal, and often based on incomplete or imperfect information. This induces feelings of terror and helplessness among those affected by bureaucratic decision-making (think Kafka!). Sousveillance can improve things. Those of us forced to deal with bureaucracies will be able to provide full documentary evidence about ourselves and our actions. This will put us on a firmer basis when it comes to challenging the decisions that affect our lives.

I think some of the problems mentioned above apply to this argument as well. There is also something unrealistic about it all. Our ability to challenge bureaucratic decision-making will depend largely on whether such decisions are rational (not arbitrary) and whether we can know their rational basis. Ali and Mann suggest that Freedom of Information protocols will help us in this regard by giving us access to the internal regulations and guidelines of bureaucracies. But, of course, mere access to information is not always helpful, as anyone who has dealt with FOI documents will attest. There is still the fact that the internal regulations might be exceedingly complex, couched in vague and ambiguous language, or replete with discretionary powers.

3. Conclusion
So that brings us to the end of this aspect of my series on Mann’s arguments for sousveillance. The next post on the topic will be more concerned with the general concept of sousveillance and different types of veillance society. Consequently, it’s worth briefly recapping the arguments discussed so far.

As we have seen, Ali and Mann’s primary case in favour of sousveillance is based on its potential economic advantages. They present this argument in three different ways. The first being a general argument about trust and the risks of economic exchange; the second being about information asymmetries; and the third being about opportunism. They then add to this economic case the claim that sousveillance will help to reduce feelings of terror/helplessness in the face of bureaucratic decision-making.

In each case, I’ve suggested that Ali and Mann may have overstated the arguments for sousveillance. Although there may be some benefits, it’s possible that several of the examples discussed by Ali and Mann would give rise to liberty-undermining surveillance, rather than autonomy-enhancing sousveillance. This is due to their underappreciation of inequalities of bargaining power in economic exchanges. Furthermore, although sousveillance may encourage some kinds of good behaviour, its widespread use may be counterproductive. This is because many people might perceive it as an affront to their autonomy, or a sign of a lack of trust.

Sunday, February 9, 2014

Steve Mann has been described as the world’s first cyborg, and as a pioneer in wearable computing. He is certainly the latter. I’m not so sure about the former (I believe Mann rejects the title himself). He is also one of the foremost advocates for sousveillance in the contemporary era. Sousveillance is the inverse of surveillance. Instead of recording equipment solely being used by those in authority to record data about the rest of us, sousveillance advocates argue for a world in which ordinary citizens can turn the recording equipment back onto the authorities (and one another). This is thought to be beneficial in numerous ways.

I’m interested in whether there is a strong case to be made for sousveillance, particularly in light of the increasingly prominent role of data-monitoring in our lives. Fortunately for me, Mann has recently released two papers that develop the case in several ways. One of the papers sets out a series of economic and social justice arguments for sousveillance; the other develops a framework for thinking about different types of “veillance” in society. I want to analyse both papers in this series of posts.

I do so with some trepidation and with a forewarning to the reader. I wouldn’t usually say this — I prefer to see the good in everything — but on this occasion I fear I must: Mann’s papers are not of the highest quality. They are strangely written and poorly focused, sometimes engaging in opaque and incomplete argumentation, and sometimes going off on strange etymological and historical tangents. I’m going to try my best to focus on the main arguments, and to reconstruct them in as charitable a way as I can. Nevertheless, I will occasionally point out some lacunae and weaknesses.

With that warning out of the way, I shall proceed. This post (and the next) will deal with Mann’s economic/social justice arguments for sousveillance. It starts off with some definitions and clarifications. It then looks at Mann’s claim that sousveillance can facilitate beneficial economic exchanges.

1. Definitions and Clarifications: The Wrong of Surveillance?
Mann seems to love etymology. Both of his papers are riddled with odd excursions into the etymology of particular words (surveillance, sousveillance, terrorism, economics and so on). There is no clear sense of why this is done. I find etymology pretty interesting myself, but I’m inclined to think it is something of a distraction in this instance: I don’t think it gives us any real insight into the phenomena in question. Nevertheless, if we are going to consider the case for sousveillance, and if sousveillance is introduced by way of contrast with surveillance, we need some definitions in place at the outset (even if they are purely stipulative in nature).

Fortunately, Mann (and his co-author Mir Adnan Ali) oblige. They define veillance as the watching (or recording) of a person. This can be done through video cameras, but the definition of “watching” is not restricted to the visual sphere. The collection of any personal data that can be recorded and transmitted will count. From this core concept of veillance, the definitions of sur- and sous-veillance arise. As follows (from p. 243):

Surveillance: Monitoring undertaken by an entity in a position of authority, with respect to the intended subject of the veillance, that is transmitted, recorded or creates an artifact (e.g. a digital recording or video).

Sousveillance: Monitoring undertaken by an entity not in a position of authority, with respect to the subject of the veillance, that is transmitted, recorded or creates an artifact.

Mann and Ali are clear that “authority” here is understood in terms of ability and legitimacy. In other words, a person possesses authority over another if they have the ability and the legitimacy to impose their will on that other.

This is the first slip-up in the argument for me. They explicitly say that legitimacy is understood in a “normative sense”, but I don’t see why they say that. Indeed, many of the most problematic cases of surveillance — ones that sousveillance may be able to counteract — arise precisely because the person doing the watching can illegitimately impose their will on another. Furthermore, depending on how you define legitimacy, this definition of “authority” risks foreclosing much of the ethical debate about surveillance and sousveillance. If legitimacy entails the moral right to enforce your will, it’s difficult to see how or why the use of surveillance equipment would be of major ethical concern. I would suggest, then, that we drop “legitimacy” from the definition of authority.

This raises the next issue. It seems obvious that in making a case for sousveillance you must, implicitly or explicitly, believe that there is something morally problematic or sub-optimal about a world in which surveillance dominates. But what is that something? Well, first of all, let’s consider the advantages of surveillance. Clearly, surveillance has advantages from the perspective of authority. It can be used to police and enforce behavioural norms (e.g. street cameras and laws against vandalism) or to prevent the breach of such norms. Consequently, to the extent that these norms are morally valid, surveillance is of benefit to us all. The obvious disadvantages of surveillance arise when it goes too far, and personal rights such as the right to privacy are traded off against the good of enforcement, or when the norms being enforced are not morally valid.

Another, perhaps more subtle, problem with a surveillance culture is best expressed using republican conceptions of liberty and non-domination. One thing that constant surveillance seems to carry with it is the implicit threat that if you do something to displease the de facto authority figure, you risk punishment or sanction. You live in the permanent shadow of a threat. This will force you to engage in ingratiating, self-censoring and extra-cautious acts. The position strikes me as being similar to that of the happy slave in neo-republican political theory. The happy slave is happy only to the extent that he or she doesn’t step out of line. That’s not real freedom (according to the republican theory). Those of us living under the domination of surveilling authorities might have a similarly restricted type of freedom.

This problem of surveillance and domination needs further exploration, but it seems important to me. Indeed, I think it can be used to great effect when evaluating the strengths and weaknesses of Mann’s case for sousveillance. Let’s turn to that case next.

2. Trust, Exchange and the Case for Sousveillance
The main argument that Ali and Mann make for sousveillance is based on the value of efficient economic exchange. As any classical economist will tell you, free and fully-informed exchanges between rational agents should increase societal well-being. The idea being that such a system of exchange ensures that resources are distributed to their highest expected value uses. Now, there are many problems with this model, particularly in terms of the idealistic assumptions one needs to make in order for the conclusion to hold. Nevertheless, Mann and Ali’s arguments are based on the notion that sousveillance gets us closer to those idealistic assumptions.

Central to this argument is an analysis of the conditions for efficient economic exchange. It has long been clear that social cooperation can be mutually advantageous. It has also long been clear that such cooperation carries risks. Hume’s story of the two corn farmers illustrates the point rather nicely. Imagine that there are two farmers, A and B, both with crops of corn. These crops will ripen at different times. Each farmer will need the help of the other to ensure that they can harvest their crops, and put them in storage in good time. Without such help, a portion of the crops will start rotting in the field. In this case, cooperation would be mutually beneficial. The problem is that a purely rationalistic analysis suggests that they won’t help each other: if farmer A helps B harvest first, then B, whose crop will by then be safely in storage, will have no real incentive to help A when A’s crops ripen later. Reasoning backwards, A will expect B to betray him and so won’t bother helping B. This is sometimes referred to as the Farmers’ Dilemma.
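To make the backward-induction reasoning concrete, here is a toy sketch in Python. The payoff numbers and function names are my own invention, purely for illustration; nothing here comes from Hume or from Ali and Mann.

```python
# Toy model of the backward-induction reasoning in Hume's Farmers' Dilemma.
# All payoff values are assumed for illustration.

def b_repays(a_helped, cost_of_helping=1):
    """Stage 2: B's crop is already harvested. Does B repay A's help?"""
    if not a_helped:
        return False
    benefit_to_b = 0  # B gains nothing further; the harvest is already in
    return benefit_to_b - cost_of_helping > 0

def a_helps():
    """Stage 1: A helps only if A expects B to repay later,
    since A's own harvest depends on that repayment."""
    return b_repays(a_helped=True)

print(a_helps())  # False: A foresees B's betrayal, so cooperation unravels
```

The point of the sketch is just that the dilemma falls out of pure payoff-maximisation: once B’s harvest is in, repaying A is all cost and no benefit, and A can see that in advance.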

There are various solutions to this dilemma. A legal system that enforces promises is one: if the breach of promise carries with it a risk of legal sanction, then more people might be inclined to keep their promises. So too is trust: simply voluntarily committing yourself to help another, in spite of the risk. Some people argue that trust is a social emotion that evolved so that we could solve the problem of social cooperation. And some people argue that trust of this sort is incredibly virtuous, something that society should be keen to promote and protect. Indeed, trust plays a considerable role in many important social exchanges. People can feel offended if you don’t trust them, and may back out of an exchange if you seem to lack trust. Think, for example, of how betrayed you might feel if you caught your partner snooping through your text messages just to make sure you were being faithful.

Ali and Mann argue that sousveillance can facilitate beneficial social exchanges. They say it does so by making the parties to an exchange less vulnerable to exploitation. The mechanism for this is not stated, but I assume the authors are imagining something like the following: There is a system in place that will enforce promises if they are breached (this system could be a formal legal system or an informal social one). This system relies on proof of claim before enforcement. Those who use sousveillance will record every detail of every negotiated promise. In this way, they will always be able to prove a claim should they need to rely on the social system of enforcement. Consequently, every exchange with a sousveiller will carry an implicit threat of enforcement. If both parties are sousveillers (as they should be according to Ali and Mann), they can keep each other honest, and thereby clear the path to beneficial exchange. In this sense, sousveillance is a trust-substitute: it overrides the vulnerabilities inherent in exchange without forcing us to voluntarily assume the risk of exploitation.
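One way to see the mechanism at work is to turn Hume’s farmers story into a toy Python model with and without a recorded promise. The penalty and payoff values are my own assumptions; Ali and Mann don’t specify any of this.

```python
# Toy model of sousveillance as a trust-substitute: a recorded promise
# attaches a penalty to betrayal, changing B's stage-two calculation.
# All payoff and penalty values are assumed for illustration.

def b_repays(a_helped, cost_of_helping=1, recorded=False, penalty=3):
    """B repays only if defecting costs more than helping."""
    if not a_helped:
        return False
    payoff_if_repays = -cost_of_helping
    payoff_if_defects = -penalty if recorded else 0
    return payoff_if_repays > payoff_if_defects

def a_helps(recorded):
    # Backward induction: A helps only if A expects repayment.
    return b_repays(a_helped=True, recorded=recorded)

print(a_helps(recorded=False))  # False: the unrecorded promise unravels
print(a_helps(recorded=True))   # True: the implicit threat keeps B honest
```

Note that cooperation here is secured by the threat of enforcement, not by any voluntary assumption of risk, which is exactly why sousveillance functions as a substitute for trust rather than an instance of it.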

To lay this out more formally:

(1) If mutually advantageous social exchanges carried less risk of exploitation, people would be more likely to undertake them.

(2) Sousveillance helps to reduce the risk of exploitation inherent in mutually advantageous social exchanges.

(3) Therefore, sousveillance increases the likelihood of people undertaking mutually advantageous social exchanges.

We’ve already considered the case for premise (2) in the preceding paragraph. I want to dwell on premise (1). It seems to me that this is potentially vulnerable to a counterargument. The counterargument brings us back to the virtue of trust. Although some exchanges could be facilitated by sousveillance, it could also be the case that people insist on the voluntary assumption of risk as a gesture of good faith prior to entering into an exchange. We could imagine, for example, two CEOs negotiating a merger deal, one of whom asks the other to “switch off” his sousveillance equipment before they agree on the final terms of the deal. After all, if this merger is going to work, they need to trust each other, and they can’t do that if they are constantly monitoring and recording one another’s words.

This might, of course, be terribly naive, and agreeing to this gesture of good faith may be costly, but humans are sometimes irrational and one could imagine this kind of insistence taking place. What matters is whether the number of valuable exchanges in which people insist upon trust will be larger than the number of valuable exchanges facilitated by sousveillance. In their discussion, Ali and Mann break the possible exchanges down into three categories: (i) those that are unaffected by the presence of sousveillance; (ii) those that are facilitated by sousveillance; and (iii) those that are discouraged or prevented by sousveillance. As long as categories (i) and (ii) are larger and more valuable than category (iii), a case for sousveillance can be made.
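The underlying accounting is simple enough to sketch. Every number below is invented for illustration; whether category (ii) actually outweighs category (iii) is the open empirical question.

```python
# Toy accounting for Ali and Mann's three categories of exchange.
# All counts and values are assumed for illustration.

value_per_exchange = 10  # assumed average value of one successful exchange

unaffected = 100   # (i) exchanges that happen with or without sousveillance
facilitated = 30   # (ii) exchanges that only happen thanks to sousveillance
prevented = 12     # (iii) exchanges that sousveillance deters

total_without = (unaffected + prevented) * value_per_exchange
total_with = (unaffected + facilitated) * value_per_exchange

print(total_with > total_without)  # True, but only because (ii) > (iii) here
```

One thing the sketch makes visible: category (i) cancels out of the comparison, since those exchanges happen either way. The case really turns on whether (ii) outweighs (iii).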

So which is it? Ali and Mann try to argue that the number of exchanges in category (iii) will be minimal. They do so on the grounds of desensitisation. As sousveillance becomes more widespread, people will adjust their expectations to accommodate it. They will be less “creeped out” or offended by its use. Arguably, this has already happened with surveillance technologies. Whenever I get the train, I see little signs reminding me that my every movement is being recorded by CCTV. It doesn’t bother me. I’ve become so used to it. Why wouldn’t the same thing happen with sousveillance?

I think there are some problems with this argument. While it is true that we could become desensitised to sousveillance if it achieved sufficient social penetration, that seems to assume what needs to be proved, namely: that sousveillance will achieve sufficient social penetration. Elsewhere in their article, Ali and Mann defend this on the grounds of economic inevitability: sousveillance will be so economically beneficial that it will become widespread. But, again, that seems to assume what needs to be proved: that sousveillance really is economically beneficial. If the economic benefit of technology depends on whether it facilitates voluntary exchange between parties, and if a sufficient number of parties are offended by the use of sousveillance technologies, then they won’t be economically beneficial. People won’t consent to their use. This is markedly different from the surveillance case. Since surveillance technologies are imposed from the top down — by de facto authorities — they don’t require the immediate consent of those being watched. Economic exchanges arguably do.

Admittedly, this is a technical objection, based more on how Ali and Mann make their argument than on what I think the reality is going to be. The fact is that the marketplace is currently characterised by inequalities of bargaining power between parties to economic exchange. It is perfectly possible that those inequalities create conditions in which veillance technologies will get a foothold. This may clear the path to widespread sousveillance.

Okay, that’s it for part one. In part two, I’ll consider the remainder of Ali and Mann’s economic arguments for sousveillance. I’ll then turn to their social justice argument, which is based on an analysis of the role of bureaucratic power in modern society.

Thursday, February 6, 2014

In this post I’m going to take a look at Joshua Greene’s modular myopia hypothesis (MMH), as detailed in his recent book Moral Tribes. The MMH is both an attempt to explain our anomalous responses to the multiple variants of the trolley problem, and an attempt to account for other aspects of our moral decision-making. To fully understand the significance of the MMH, you will need to read the previous entry on advanced trolleyology and the doctrine of double effect. I’m not going to restate all the details from that post here.

Nevertheless, one detail does need to be restated at the outset. As you’ll recall, an argument was presented at the end of the previous post. This argument purported to “debunk” our commitment to the intuitively compelling doctrine of double effect. It did so by showing how that commitment was contaminated by the presence of morally irrelevant factors.

One thing that the MMH is designed to do is to build upon this contamination argument. Only this time, instead of showing how one intuitively compelling moral principle is questionable, the goal is to show how several of the outputs of our moral decision-making faculties are questionable. This is a point that will be re-emphasised at the end of this post.

In the meantime, we will preoccupy ourselves with the following four topics. First, we’ll get a bird’s eye view of the MMH, paying particular attention to its reliance on the “dual process” theory of moral reasoning. Second, we’ll look at Greene’s evolutionary explanation for the existence of the MMH. Third, we’ll get into the details of the MMH by considering exactly why it gives rise to anomalous results in the trolley cases. Then fourth and finally, we’ll see what the debunking potential of the MMH really is.

1. The Modular Myopia Hypothesis, in brief
One of the key inferences from the experimental analysis of trolley problems is that our intuitive moral responses are sensitive to two factors: (i) whether harm is used as a means to an end or whether it is a mere side effect; and (ii) whether the harm was administered personally or impersonally. If harm occurs as a side effect, or if it is brought about impersonally, then we tend to think little of it; if it occurs as a means to an end, and we personally cause the harm, then we tend to think a lot of it. Why is this?

That’s what the MMH tries to answer. The central plank of the hypothesis is that our moral reasoning is modular. In other words, that we have different brain modules that are responsible for different styles of moral reasoning. Two such modules have emerged from Greene’s experimental work (this is the “dual process” aspect of the theory). The first is a “fast” or automatic module, which issues moral responses on essentially emotive grounds: “that feels wrong”, “that feels right” etc. The second is a “slow” or manual module, which is much more dispassionate and rationalistic, focusing primarily on the costs and benefits of our actions.

The slow, manual module is stolidly consequentialist in nature. Across all versions of the trolley problem it simply weighs the costs and benefits and, ceteris paribus, comes down in favour of saving five by killing one. The fast, automatic module is much more erratic in nature — at least, erratic in the sense that the principles it adheres to are initially opaque. Analysis of the experimental results, however, reveals that this module is simply “myopic” in the principles it applies. It attaches strong negative moral emotions to acts that are personal and which use harm as a means to an end, but ignores other morally salient factors.

This then is the essence of the MMH: our automatic moral reasoning is myopic in nature. As Greene sees it, once we understand the essence of the MMH, there are two questions to ask about it. Why do we have myopic moral modules? And why is our module myopic in that particular way? The first question takes us into the evolutionary origins of the module. The second question forces us to confront the mechanisms of the myopic module.

2. An Evolutionary Account of the MMH
The evolutionary account offered by Greene has all the hallmarks of a good “just so” story. Such stories are often criticised due to their lack of empirical content. Nevertheless, Greene maintains that his story generates predictions that can be tested. This makes it more scientifically satisfying.

The just so story works something like this. At some point in our evolutionary history (probably at a pre-human point), we developed a brain that was capable of advance planning. It could take internal goals and develop action plans that could be used to realise those goals. With this capacity came a problem: our ancestors could now plan premeditated acts of violence. These acts of violence could be used for achieving their goals. While this capacity might be beneficial in many organisms (e.g. solitary predators), it created particular problems for human beings. Human beings are social animals, and any member of human society who repeatedly and wantonly used violence to get what they wanted would quickly find themselves on the receiving end of retaliatory attacks and the like. The result is that violence would exclude them from the benefits of social cooperation.

In order to overcome this problem evolution programmed our decision-making modules to be more discerning in their penchant for violence. To be precise, it evolved an internal monitoring system that sounded an “alarm” whenever our ancestors thought about performing a socially counter-productive act of violence. This is what our automatic moral module does. The problem is that it does so in a myopic way, by ignoring many features of our actions.

We’ll get back to those features in a minute. What is important for now is that this hypothesis, according to Greene, generates some predictions:

Predictions of the MMH

(a) The system didn’t evolve to respond to artificial thought dilemmas like the trolley problem; so what should really get it going is real-world violence.

(b) The system should respond to certain cues of violence, irrespective of whether those cues actually mean that someone is being violently harmed. In other words, it should respond to simulated violence. This is down to the “myopia” of the system.

(c) Since the system evolved as an “internal” monitor of violence, it should respond less strongly to simulated acts performed by others than to simulated acts performed by oneself.

Greene argues that these predictions have been confirmed by a series of experiments performed by Fiery Cushman, Wendy Mendes and their colleagues. The experiments involved real-world simulations of violent acts. For example, in one experiment subjects were asked to strike someone’s leg with a fake (but real-looking) hammer; in another they were asked to smash a fake baby’s head off a table. The experimenters found that people had a very strong negative emotional reaction when they performed these simulated acts of violence themselves, but not when they watched others perform them. This was all in spite of the fact that the experimental subjects were fully aware that their actions would not really cause harm to anyone (it’s safe to say the experiment would never have received ethics approval if they were kept in the dark about this!).

I guess my one difficulty with all this is that I don’t know how predictive those predictions really are. It’s possible that Greene is simply retrospectively cherry-picking the experimental data to find results that fit his hypothesis. This might be okay as a starting point, but further confirmation and experimentation is surely needed (perhaps this is being done). And since I don’t have mastery of the experimental literature myself, it’s possible that there is disconfirming evidence out there that is simply ignored by Greene. I’m not well-positioned to say. For the time being, I remain somewhat sceptical of the evolutionary story being told here.

3. Action Plans and Moral Myopia
Leaving the evolutionary bit to one side, the second question to ask of the MMH relates to the actual mechanisms underlying it. Why is it that our automatic module is sensitive to some features of our actions but not to others? To answer this, Greene tries to combine his own, dual-process theory of moral reasoning with an alternative theory, defended by John Mikhail.

I haven’t read Mikhail’s defence of this theory, though I did read some of his older papers. As I recall, Mikhail is a moral grammarian. He proposes that human brains have an innate moral grammar: from a few simple components they can morally evaluate an infinite range of actions and outcomes. This is similar to the way in which they have an innate linguistic grammar, from which they can evaluate an infinite set of sentences. The analogy to Chomsky’s theory of language is immediate and direct. Fortunately, we don’t need to worry about the nuances of Mikhail’s theory in this post. We just need to focus on one of his ideas.

The idea in question is that of the action plan. This is something originally developed by Alvin Goldman and Michael Bratman. The proposal is that human brains represent actions in terms of branching action plans. Each plan has a primary “trunk” that begins with some bodily movement and terminates in the agent’s goal. Every point along the primary trunk is an event that is necessary (in a weak, empirical sense) for the realisation of the goal. From the primary trunk a number of additional branches (secondary, tertiary and so on) emerge. Along these branches we find alternative routes to the same goal or foreseen side effects of the primary action. The action plans for the Switch and Footbridge trolley dilemmas (see previous entry) are illustrated below.

Greene argues that these action plan diagrams can be used to understand the myopia of our automatic moral module. In essence, his claim is that the module simply inspects the primary branches of our moral action plans. If it finds some morally troubling feature on that primary branch (such as the use of personal force, or the infliction of harm) it will sound the alarm. If it doesn’t find such features, it will give the action plan the all clear. It ignores all sub-branches and their outcomes (including the oft-neglected side effects in the footbridge case).

Greene believes that this explains some of the particularly odd results we find in the trolley experiments. Take the results of the Loop experiment (discussed previously). In this case, subjects are asked whether they would divert a trolley onto a sidetrack that loops back onto the main track. The catch being that this diversion will only save the lives of the five people on the main track if it collides with one worker who happens to be on the sidetrack. In this case, the death of one person is being used as a means to an end, and so is contrary to the doctrine of double effect. Nevertheless, experiments suggest that there is high approval for diverting the trolley onto the sidetrack.

Action Plan for the Loop Case

Why is this? Greene holds that the MMH has the answer. The Loop case is odd in that there is a secondary branch off the primary trunk.* It is along that secondary branch that harm is being used as a means to an end. The automatic module doesn’t see it though. It simply inspects the primary branch of the action plan, doesn’t find anything morally troubling along that branch, and so approves of the action. It ignores the secondary branch.

This isn’t the complete picture. There is still the role of the slower manual moral module to factor in. This module is not so myopic. It can “see” the secondary branch. But it is stolidly consequentialist in nature. Remember? So it inspects the secondary branch and gives it the thumbs up. This is why we get such high approval for diversion in the Loop case. This is a key point. Greene’s theory is that our moral reactions in trolley cases result from the combined effect of both modules: the manual one, which weighs costs and benefits, and the automatic one, which is myopic in the various ways described.
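To see how the two modules are supposed to interact, here is a crude Python caricature of Greene’s picture. The class and function names are mine, and the “modules” are drastic simplifications of what Greene actually proposes; this is a sketch of the logic, not his model.

```python
# Toy model: action plans as trunk-plus-branches, inspected by a myopic
# automatic module and a consequentialist manual module.
from dataclasses import dataclass, field

@dataclass
class Event:
    harm: bool = False
    personal_force: bool = False

@dataclass
class ActionPlan:
    trunk: list                      # events on the primary branch
    side_branches: list = field(default_factory=list)  # side effects
    lives_saved: int = 0
    lives_lost: int = 0

def automatic_module(plan):
    """Myopic: alarms only at personal harm on the primary trunk,
    ignoring all secondary branches."""
    return not any(e.harm and e.personal_force for e in plan.trunk)

def manual_module(plan):
    """Consequentialist: weighs the overall costs and benefits."""
    return plan.lives_saved > plan.lives_lost

def verdict(plan):
    # Approval requires no alarm from the automatic module
    # and approval of the trade-off by the manual module.
    return automatic_module(plan) and manual_module(plan)

switch = ActionPlan(trunk=[Event()],                  # flipping the switch
                    side_branches=[Event(harm=True)], # death as side effect
                    lives_saved=5, lives_lost=1)
footbridge = ActionPlan(trunk=[Event(harm=True, personal_force=True)],
                        lives_saved=5, lives_lost=1)
loop = ActionPlan(trunk=[Event()],                    # turning the trolley
                  side_branches=[Event(harm=True)],   # harm on a sub-branch
                  lives_saved=5, lives_lost=1)

print(verdict(switch), verdict(footbridge), verdict(loop))
# True False True
```

On this caricature, Switch and Loop get the all-clear because nothing alarming sits on the primary trunk, while Footbridge trips the automatic alarm, which matches the experimental pattern Greene reports.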

A final question arises: why is the automatic module so very myopic? Why does it only focus on the primary branch? Greene submits that there are sound evolutionary (and indeed a priori) reasons for this. Any action we pursue will have a huge number of side effects (both foreseen and unforeseen). To trace out each branch of the primary action plan would be massively cognitively costly. It makes sense that evolution would try to minimise this cost by focusing solely on the primary branch.

4. The MMH and the Debunking Argument
So how does all this tie in to Greene’s debunking project? Well, with the MMH we get a more “evolutionary”-flavoured debunking argument. This is a type of argument I’ve addressed in detail before, with some comments on Greene’s own work. The basic idea is that our myopic moral module is the product of a causal and evolutionary history that we are not warranted in trusting.

Guy Kahane’s template for understanding causal/evolutionary debunking arguments is helpful in this regard. He suggests that all such arguments fit within the following mold:

Causal Premise: S’s evaluative belief that P is caused by process Y.

Epistemic Premise: Process Y does not track the truth of evaluative propositions of type P.

Conclusion: S’s belief that P is unjustified, or unwarranted.

Greene’s argument can be made to fit this mold too. Greene is claiming that some of our moral beliefs are products of an evolved psychological mechanism which may not give rise to warranted moral beliefs.

(1) Some of our moral beliefs (e.g. the belief that killing is wrong in the Footbridge case) are caused by the myopic automatic moral module in our brains.

(2) The myopic automatic moral module does not reliably track the truth of (all) moral propositions.

(3) Therefore, we should not trust, at least some of, our moral beliefs.

Premise (1) rests on the truth of Greene’s MMH. Moral psychologists might critique that hypothesis, but I’m willing to grant it for the time being. Premise (2) is more interesting to me. Greene never explicitly defends it in his book — indeed, he never explicitly outlines the argument he is defending at all — but there are implicit defences of it scattered throughout the text. And it must be conceded that it has a degree of plausibility. If Greene is right, then the myopic module evolved to solve a very particular kind of problem: the problem of premeditated violence in small social groups. A module that is designed to solve this problem may work well for some moral problems (Greene concedes as much) but may be inapplicable to a broad range of other moral problems (this is the major thesis of Greene’s book). Furthermore, simple rational reflection suggests that we shouldn’t always trust the module. If it is myopic in the way Greene describes, then it does ignore a lot of factors that might be relevant to moral decision-making, namely all the unexplored branches of our action plans.

The problem, however, is how do we get from this argument to the endorsement of utilitarianism? That’s ultimately where Greene is trying to lead us, and by itself the debunking of some of our moral beliefs — however persuasive that debunking might be — wouldn’t seem to be sufficient for us to embrace utilitarianism. Alas, a fuller treatment of this issue is beyond the scope of this post. To see what Greene has to say, I recommend reading his book.

* There is a technical objection one could make to this, viz. why is it a secondary branch at all? Isn't there just a single causal pathway to the goal and hence isn't the claim that there is a branch simply arbitrary? Greene tries to address this objection in a footnote. He argues that the secondary branch pathway is parasitic on the primary trunk because "the turning of the trolley away from the five makes sense as a goal-directed action all by itself, without reference to the secondary causal chain, that is, to what happens after the trolley is turned. But the secondary chain cannot stand alone."

Tuesday, February 4, 2014

Most people are familiar with the trolley problem and the influence it has had on contemporary applied ethics. Originally formulated by Philippa Foot in 1967, and subsequently analysed by virtually every major philosopher in the latter half of the 20th Century, the trolley problem has provoked debates about the merits of utilitarianism and deontology, and provided the basis for a whole sub-branch of moral psychology: trolleyology. Since the original formulation and its two variants, experimenters have created multiple variations on the basic trolley dilemma, each one tweaking and adjusting the conditions in order to provoke a different moral response.

But what has all this scrutiny really achieved? Joshua Greene thinks it has achieved quite a lot. Greene suggests that it can illuminate the underlying psychological mechanisms of moral choice, and cast doubt upon some traditional and much-beloved ethical principles. That, at least, is one of the central arguments in his recent book Moral Tribes.

I agree with Greene, in part. I think the results of the psychological experiments are fascinating, and I think some of the proposed psychological mechanisms of moral choice are informative, but I’m less sure about the broader philosophical implications. Nevertheless, in the spirit of educating myself in public, I thought I might do a couple of blog posts dealing with some of the themes and ideas from Greene’s work. The primary advantage of this is that it gives me an excuse to cover the experimental findings and theoretical models; but I’ll try not to avoid the deeper moral questions either.

The remainder of this post is divided into four parts. First, I consider (very briefly) the classic trolley problem and one of the proposed solutions to that problem. Second, I look at some variants of the classic problem that examine the role of personal/impersonal force in explaining people’s moral responses. Third, I look at some variants of the classic problem that examine the role of the means/side-effect distinction in explaining the different responses to the problem. Fourth, I turn to Greene’s analysis of these variants and try to reconstruct what I think his argument is.

(Note on sources: I take this from Chapter 9 of Moral Tribes; many of the experimental results are taken from Greene, Cushman et al 2009)

1. Classic Trolleyology and the Doctrine of Double Effect
Apart from trolleys and train tracks, the multiple variants of the trolley problem all have one thing in common: they each ask us to imagine a scenario in which we can (a) perform some action that will result in one person being killed and five people being saved; or (b) do nothing, which will result in five people being killed and one person remaining alive. They then ask us whether we would perform that action or not. The variations in the trolley problem relate to the causal connection between our actions and the end result.

The classic presentation involved two cases:

Switch: A trolley car is hurtling out of control down a train track. If it continues on its current course, it will collide with (and kill) five workers who are on the track. You are standing beside the track, next to a switch. If you flip the switch, the trolley will be diverted onto a sidetrack, where it will collide with (and kill) one worker. Do you flip the switch?

Footbridge: A trolley car is hurtling out of control down a train track. If it continues on its current course, it will collide with (and kill) five workers who are on the track. You are standing on a footbridge over the track, next to a very fat man. If you push him off the footbridge, he will collide with the trolley car, slowing it down sufficiently to save the five workers. He, however, will die in the process. Do you push the fatman?

These two scenarios have been presented to innumerable experimental subjects over the years. The reactions seem pretty consistent. In the set of experiments discussed by Greene, 87% of respondents said they would flip the switch; but only 31% said that they would push the fatman. Why are the reactions to these two cases so different? Especially given that the utilitarian calculus is similar in both.

One common suggestion is that these experiments attest to a non-consequentialist prohibition against causing harm as a means to an end (versus causing harm as a side effect). This is the so-called doctrine of double effect, which has had many supporters over the years:

Doctrine of Double Effect (DDE): It is impermissible to cause harm as a means to a greater good; but it may be permissible to cause harm as a side effect of bringing about a greater good.

Experimental data from the two scenarios above suggests that the DDE is a robust, widely-shared, moral intuition. And given the role of robust intuitions in moral argument, this is good enough for many people. But should it be? Greene argues that it shouldn’t. He does so by asking us to consider experiments dealing with other variants of trolley problem. These experiments, it is argued, collectively point to the irrationality of our moral intuitions.

2. Trolley Problems involving Personal/Impersonal Force
The difficulty with simply accepting the DDE as the explanation of, and justification for, the different responses to Switch and Footbridge is that there are other potential explanations that seem less morally compelling. Consider the fact that Switch involves the impersonal administration of force (the flipping of the switch), whereas Footbridge involves the personal administration of force (pushing the fatman).

If this distinction accounted for the different responses, it might give us pause. After all, we generally don’t think that the personal/impersonal nature of the lethal force is all that relevant to our moral calculations. Greene has an illustration of this. He asks us to imagine that one of our friends has landed in a real-world version of the trolley problem. This friend then phones us up asking whether he should kill one to save five. We wouldn’t ask this friend whether he was administering the lethal force personally or not, would we? If we wouldn’t, it suggests that the impersonal/personal distinction is morally irrelevant. (This might beg the question ever-so-slightly)

But if that’s right we have a problem. Some experimental results suggest that the personal or impersonal nature of the force does make a difference to how people react. Consider the following variants on the original cases (along with the percentage of experimental subjects who approved of killing in each case):

Remote Footbridge: The set up is similar to the original footbridge case, only this time you are not standing alongside the fatman. Instead, you are standing next to the track, beside a switch which would release a trapdoor that the fatman is standing on. Do you flip the switch? 63% of experimental subjects said “Yes”.

Footbridge Switch: This was a control for the previous scenario which tested to see whether “remoteness” from the victim was a decisive factor. The set-up was the same, only this time you were standing next to a switch on the footbridge, i.e. you were standing in close proximity to the fatman. In this case, 59% of experimental subjects said they would flip the switch and release the trapdoor.

Footbridge Pole: This time you are standing at the far end of the footbridge from the fatman. You cannot reach him and push him off with your own hands. You can, however, use a long pole to knock him off. Should you use it? Only 33% of experimental subjects said “yes” in this case.

Taken together, these experimental results suggest that the personal application of force, even when it is administered via a long pole, makes a difference to people’s intuitive reactions. If such a seemingly morally irrelevant distinction can drive our responses in this way, proponents of the DDE should be less sanguine about their beloved principle.

3. Trolley Problems and the Means/Side Effect Distinction
This is not to say that our intuitive judgments fail to track the difference between harm as a means and harm as a side effect. The experimental evidence suggests that they do track it. But they do so in an odd manner. Careful manipulation of the variables within the trolley problem highlights this fact. Consider:

Obstacle Collide: You are standing on a narrow footbridge. The footbridge is over the sidetrack, not the main track. At the far end, there is a switch. If you get to it in time, you can flip it and divert the out-of-control trolley car onto the sidetrack. Doing so will save the lives of the five workers. The problem is that to get to the switch in time you will have to deal with an obstacle: a very fat man (whom, we assume, cannot be warned within the relevant timeframe). The only thing to do is to run into him and knock him off the bridge. This will lead to his death. Should you do it? 81% of experimental subjects approved.

Loop: The set-up is like the original Switch case, only this time the sidetrack loops back onto the main track. If there were nothing on the sidetrack, flipping the switch would not save the five workers (the trolley would eventually collide with them anyway). Fortunately (or unfortunately), there is a single worker on the sidetrack. So if you flip the switch, the trolley will collide with (and kill) him, and therefore stop before it loops back onto the main track. Do you flip the switch? 81% of experimental subjects said “yes”.

Collision Alarm: This one is complicated as it involves two separate, parallel tracks. On the first track there is a trolley hurtling out of control, about to collide with five workers. On the second track, there is another trolley, not hurtling out of control into anything. But there is a sidetrack to this second track on which we find a single worker and an alarm sensor. You are standing next to a switch that can divert the trolley onto the sidetrack. If you do so, the trolley will collide with (and kill) the worker, but will also trigger the railway alarm system. This will automatically shut down the trolley on the first track (thereby saving the five). Do you flip the switch? 87% approved in this case.

These three cases all play around with the means/side-effect distinction. In Obstacle Collide, you need to push the fatman off in order to get to the switch. His death is a (foreseeable) side effect of your primary intention. You’d prefer it if he didn’t die. Contrariwise, in Loop, you need to kill the one worker: if he wasn’t on the sidetrack there would be no point in flipping the switch. And in Collision Alarm you also need to kill the worker, although the mechanism of causation is the same as in the original Switch case.

The fact that there is widespread approval of killing one to save five in both the Loop and Collision Alarm cases, even though they involve killing as a means to an end, suggests that our intuitive commitment to the DDE may not be that consistent after all.

4. The Contamination-Debunking Argument
So what’s going on here? What accounts for the different responses to the different dilemmas? Greene argues for something like a contamination-effect (the language is mine): our commitment to the DDE is contaminated by our intuitive response to the personal/impersonal distinction. This can be seen if we array the results of the various experiments on a two-by-two matrix, with personal vs. impersonal force on one axis and harm as a means vs. harm as a side effect on the other.

What inferences can we draw from this diagram? Well, there seems to be some agreement that if you cause harm as a side effect of doing good, it is okay. That is consistent with the DDE. Furthermore, there is agreement that if you personally cause harm as a means of doing good, it is not okay. That too is consistent with the DDE. What is not consistent with the DDE is the lower right-hand box: it suggests that if you impersonally cause death as a means to a positive end (as in Loop and Collision Alarm), it is okay. In fact, it gets a very high approval rating.
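To make the pattern concrete, here is a minimal sketch that tabulates the approval rates reported above into the two-by-two matrix. The cell assignments are my own reading of the cases, and since the original Switch case’s approval rate is not reported here, the impersonal/side-effect cell is left empty:

```python
# Approval rates reported in the text, tagged by the two factors Greene
# manipulates: how the force is applied (personal vs. impersonal) and the
# role of the harm (means vs. side effect).
scenarios = {
    "Footbridge Pole":   ("personal",   "means",       33),
    "Obstacle Collide":  ("personal",   "side effect", 81),
    "Remote Footbridge": ("impersonal", "means",       63),
    "Footbridge Switch": ("impersonal", "means",       59),
    "Loop":              ("impersonal", "means",       81),
    "Collision Alarm":   ("impersonal", "means",       87),
}

def cell_average(force, role):
    """Mean approval rate for all scenarios falling in one cell of the matrix."""
    rates = [r for f, ro, r in scenarios.values() if (f, ro) == (force, role)]
    return sum(rates) / len(rates) if rates else None

# Print the two-by-two matrix of average approval rates.
for force in ("personal", "impersonal"):
    for role in ("means", "side effect"):
        avg = cell_average(force, role)
        print(f"{force:10s} / {role:11s}:", "-" if avg is None else f"{avg:.1f}%")
```

Running this reproduces the pattern the text describes: harm as a side effect gets high approval, personal harm as a means does not, and the anomalous cell, impersonal harm as a means, averages above 70% approval.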

The argument in all this is somewhat opaque. Greene never sets it out explicitly. It is clearly some species of debunking argument. Greene means to debunk our commitment to the DDE by revealing its psychological quirks. I’ve covered debunking arguments on the blog before. Indeed, I once discussed Guy Kahane’s template for understanding these arguments which used Greene’s work as an exemplar of this style of argument. But it seems to me that this particular argument about the DDE is not easily subsumed within Kahane’s template.

The best I can do for now is suggest something like the following:

(1) If our only basis for endorsing a normative principle is our intuitive commitment to that principle, and if our intuitive commitment to that principle is sensitive to the presence of irrelevant factors (i.e. is contaminated by irrelevant factors), we should not endorse that principle.

This has an air of plausibility about it. I think the contamination of our intuitive responses should give us pause. But whether that is enough to ditch the moral principle completely is another question. I certainly don’t think it gives us the right to embrace utilitarianism (which seems to be Greene’s argumentative goal), since these cases also suggest that our commitment to utilitarianism can be contaminated by irrelevant factors. Furthermore, I suppose one could come back at Greene and argue that the personal/impersonal force distinction is not morally irrelevant (Greene himself concedes that it can be relevant when it comes to the assessment of moral character; that might be the wedge needed to pry open his argumentative enterprise).

Still, to be fair to the guy, he doesn’t rest everything on this one argument. In fact, his discussion of the DDE and these particular experiments is just a warm-up. His main argument develops a more detailed explanation of the psychological mechanisms underlying intuitive moral judgments. Once he reveals the details of those mechanisms, he thinks he has a more persuasive debunking argument. I’ll try to cover it in another post.