Thursday, December 31, 2015

This post is the second in a short series looking at the arguments against the use of fully autonomous weapons systems (AWSs). As I noted at the start of the previous entry, there is a well-publicised campaign that seeks to pre-emptively ban the use of such systems on the grounds that they cross a fundamental moral line and fail to comply with the laws of war. I’m interested in this because it intersects with some of my own research on the ethics of robotic systems. And while I’m certainly not a fan of AWSs (I’m not a fan of any weapons systems), I’m not sure how strong the arguments of the campaigners really are.

That’s why I’m taking a look at Purves, Jenkins and Strawser’s (‘Purves et al’) paper ‘Autonomous Machines, Moral Judgment, and Acting for the Right Reasons’. This paper claims to provide more robust arguments against the use of AWSs than have traditionally been offered. As Purves et al point out, the pre-existing arguments suffer from two major defects: (i) they are contingent upon the empirical realities of current AWSs (i.e. concerns about targeting mechanisms, dynamic adaptability) that could, in principle, be addressed; and (ii) they fail to distinguish between AWSs and other forms of autonomous technology like self-driving cars.

To overcome these deficiencies, the authors try to offer two ‘in principle’ objections to the use of AWSs. I covered the first of those in the last post. It was called the ‘anti-codifiability objection’. It argued against the use of AWSs on the grounds that AWSs could not exercise moral judgment and the exercise of moral judgment was necessary in order to comply with the requirements for just war. The reason that AWSs could not exercise moral judgment was because moral judgment could not be codified, i.e. reduced to an exhaustive set of principles that could be programmed into the AI.

I had several criticisms of this objection. First, I thought the anticodifiability thesis about moral judgment was too controversial and uncertain to provide a useful basis for a campaign against AWSs. Second, I thought that it was unwise to make claims about what is ‘in principle’ impossible for artificially intelligent systems, particularly when you ignore techniques for developing AIs that could circumvent the codifiability issue. And third, I thought that the objection ignored the fact that AWSs could be better at conforming with moral requirements even if they themselves didn’t exercise true moral judgment.

To be fair to them, Purves et al recognise and address the last of these criticisms in their paper. In doing so, they introduce a second objection to the use of AWSs. This one claims that mere moral conformity is not enough for an actor in a just war. The actor must also act for the right reason. Let’s take a look at this second objection now.

1. The Right Reason Objection
The second objection has a straightforward logical structure (I’m inferring this from the article; it doesn’t appear in the suggested form in the original text):

(1) An actor in a just war cannot simply conform with moral requirements; they must act for the right moral reasons (‘Right Reason Requirement’).

(2) AWSs cannot act for the right moral reasons.

(3) Therefore, the use of AWSs is not in compliance with the requirements of just war theory.

This is a controversial argument. I’ll have a number of criticisms to offer in a moment. For the time being, I will focus on how Purves et al defend its two main premises.

They spend most of their time on the first premise. As best I can tell, they adduce three main lines of argument in favour of it. The first is based around a thought experiment called ‘Racist Soldier’:

Racist Soldier: “Imagine a racist man who viscerally hates all people of a certain ethnicity and longs to murder them, but he knows he would not be able to get away with this under normal conditions. It then comes about that the nation-state of which this man is a citizen has a just cause for war: they are defending themselves from invasion by an aggressive, neighboring state. It so happens that this invading state’s population is primarily composed of the ethnicity that the racist man hates. The racist man joins the army and eagerly goes to war, where he proceeds to kill scores of enemy soldiers of the ethnicity he so hates. Assume that he abides by the jus in bello rules of combatant distinction and proportionality, yet not for moral reasons. Rather, the reason for every enemy soldier he kills is his vile, racist intent.”

You have got to understand what is going on in this thought experiment. The imagined soldier is conforming with all the necessary requirements of just war. He is not killing anybody who it is impermissible to kill. It just so happens that he kills to satisfy his racist desires. The intuition they are trying to pump with this thought experiment is that you wouldn’t want to allow such racist soldiers to fight in a war. It is not enough that they simply conform with moral requirements. They also need to act for the right moral reasons.

The second line of argument is a careful reading and analysis of the leading writers on just war theory. These range from classical sources such as Augustine to more modern writers such as Michael Walzer and Jeff McMahan. There is something of a dispute in this literature as to whether individual soldiers need to act for the right moral reasons or whether it is only the states who direct the war that need to act for the right moral reasons (the jus ad bellum versus jus in bello distinction). Obviously, Purves et al favour the former view and they cite a number of authors in support of this position. In effect then, this second argument is an argument from authority. Some people think that arguments from authority are informally fallacious. But this is only true some of the time: often an argument from authority is perfectly sound. It simply needs to be weighted accordingly.

The third line of argument expands the analysis to consider non-war situations. Purves et al argue that the right reason requirement is persuasive because it also conforms with our beliefs in more mundane moral contexts. Many theorists argue that motivation makes a moral difference to an act. Gift-giving is an illustration of this. Imagine two scenarios. One in which I give flowers to my girlfriend in order to make her feel better and another in which I give her flowers in order to make a rival for her affections jealous. The moral evaluation of the scenarios varies as a function of my reasons for action. Or so the argument goes.

So much for the first premise. What about the second? It claims that AWSs cannot act for moral reasons. Unsurprisingly, Purves et al’s defence of this claim follows very much along the lines of their defence of the anticodifiability objection. They argue that artificially intelligent agents cannot, in principle, act for moral reasons. There are two leading accounts of what it means to act for a reason (the belief-desire model and the taking as a reason model) and on neither of those is it possible for AWSs to act for a reason:

Each of these models ultimately requires that an agent possess an attitude of belief or desire (or some further propositional attitude) in order to act for a reason. AI possesses neither of these features of ordinary human agents. AI mimics human moral behavior, but cannot take a moral consideration such as a child’s suffering to be a reason for acting. AI cannot be motivated to act morally; it simply manifests an automated response which is entirely determined by the list of rules that it is programmed to follow. Therefore, AI cannot act for reasons, in this sense. Because AI cannot act for reasons, it cannot act for the right reasons.

2. Evaluating the Right Reason Objection
Is this objection any good? I want to consider three main lines of criticism, some minor and some more significant. The criticisms are not wholly original to me (some elements of them are). Indeed, they are all at least partly addressed by Purves et al in their original paper. I’m just not convinced that they are adequately addressed.

I’ll start with what I take to be the main criticism: the right reason objection does not address the previously-identified problem with the anticodifiability objection. Recall that the main criticism of the anticodifiability objection was that the inability of AWSs to exercise moral judgment may not be a decisive mark against their use. If an AWS was better at conforming with moral requirements than a human soldier (i.e. if it could more accurately and efficiently kill legitimate targets, with less deleterious side effects), then the fact that it was not really exercising moral judgment would be inconclusive. It may provide a reason not to use it, but this reason would not be decisive (all things considered). I think this continues to be true for the right reason objection.

This is why I do not share the intuition they are trying to pump with the Racist Soldier example. I take this thought experiment to be central to their defence of the objection. Their appeals to authority are secondary: they have some utility but they are likely to be ultimately reducible to similar intuitive case studies. So we need to accept their interpretation of the racist soldier in order for the argument to really work. And I’m afraid it doesn’t really work for me. I think the intentions of the racist soldier are abhorrent but I don’t think they are decisive. If the soldier really does conform with all the necessary moral requirements — and if, indeed, he is better at doing so than a soldier who does act for the right reasons — then I fail to see his racist intentions as a decisive mark against him. If I had to compare two equally good soldiers — one racist and one not — I would prefer the latter to the former. But if they are not equal — if the former is better than the latter on a consequentialist metric — then the racist intentions would not be a decisive mark against the former.

Furthermore, I suspect the thought experiment is structured in such a way that other considerations are doing the real work on our intuitions. In particular, I suspect that the real concern with the racist soldier is that we think his racist intentions make him more likely to make moral mistakes in the combat environment. That might be a legitimate concern for AWSs that merely mimic moral judgment, but then the concern is really with the consequences of their deployment and not with whether they act for the right reasons. I also suspect that the thought experiment is rhetorically effective because it works off the well-known Knobe effect. This is a finding from experimental philosophy suggesting that people asymmetrically ascribe moral intentions to negative and positive behaviours. In short, the Knobe effect says I’m more likely to ascribe a negative intention to a decision that leads to a negative outcome than to one that leads to a positive outcome.

To be fair, Purves et al are sensitive to some of these issues. Although they don’t mention the Knobe effect directly, they try to address the problem of negative intentions by offering an alternative thought experiment involving a sociopathic soldier. This soldier doesn’t act from negative moral intentions; rather, his intentions are morally neutral. He is completely insensitive to moral reason. They still think we would have a negative intuitive reaction to this soldier’s actions because he fails to act for the right moral reasons. They argue that this thought experiment is closer to the case of the AWS. After all, their point is not that an AWS will act for bad moral reasons but that it will be completely incapable of acting for moral reasons. That may well be correct but I think that thought experiment also probably works off the Knobe effect and it still doesn’t address the underlying issue: what happens if the AWS is better at conforming with moral requirements? I would suggest, once again, that the lack of appropriate moral intentions would not be decisive in such a case. Purves et al eventually concede this point (p. 867) by claiming that theirs is a pro tanto, not an all things considered, objection to the use of AWSs. But this is to concede the whole game: their argument is now contingent upon how good these machines are at targeting the right people, and so not inherently more robust than the arguments they initially criticise.

As I say, that’s the major line of criticism. The second line of criticism is simply that I don’t think they do enough to prove that AWSs cannot act for moral reasons. Their objection is based on the notion that AWSs cannot have propositional attitudes like beliefs, desires and intentions. This is a long-standing debate in the philosophy of mind and AI. Suffice to say I’m not convinced that AIs could never have beliefs and desires. I think this is a controversial metaphysical claim and I think relying on such claims is unhelpful in the context of supporting a social campaign against killer robots. Indeed, I don’t think we ever know for sure whether other human beings have propositional attitudes. We just assume they do based on analogies with our own inner mental lives and inferences from their external behaviour. I don’t see why we couldn’t end up doing the same with AWSs.

The third line of criticism is one that the authors explore at some length. It claims that their objection rests on a category mistake, i.e. the mistaken belief that human moral requirements apply to artificial objects. We wouldn’t demand that a guided missile or a landmine act for the right moral reason, would we? If so, then we shouldn’t demand the same from an AWS. Purves et al respond to this in a couple of ways. I’m relatively sympathetic to their responses. I don’t think the analogy with a guided missile or a landmine is useful. We don’t impose moral standards on such equipment because we don’t see it as being autonomous from its human users. We impose the moral standards on the humans themselves. The concern with AWSs is that they would be morally independent from their human creators and users. There would, consequently, be a moral gap between the humans and the machines. It seems more appropriate to apply moral standards in that gap. Still, I don’t think that’s enough to save the objection.

That brings me to the end of this post. I’m going to do one final post on Purves et al’s paper. It will deal with two remaining issues: (i) can we distinguish between AWSs and other autonomous technology? and (ii) what happens if AWSs meet the requirements for moral agency? The latter of these is something I find particularly interesting. Stay tuned.

Wednesday, December 30, 2015

I’ve been struggling with blogging over the holiday period. Writing is a strange compulsion for me. I never quite feel satisfied if I close out a day without writing something. But people keep telling me I should ‘switch off’ and relax now and then. So I’ve tried to step back from it over the past two weeks. I think this has had the opposite of the desired effect. The dissatisfaction grows with each passing day and I feel frustrated by the various social and family obligations which block my return to writing.

This has led me to reflect on issues of psychological suffering and the causes of anxiety, which, in turn, has led me back to one of my first philosophical loves: Stoicism. Like many in the modern world, I have long been a fan of the pragmatic branches of Stoic philosophy. Its somewhat gloomy realism and psychological coping mechanisms have been a source of solace over the years. I have developed a more critical stance towards its central tenets in more recent times but continue to be attracted to the writings of Seneca and Epictetus (less so Marcus Aurelius) as well as their more modern equivalents.

Anyway, I thought I would write a short post on what I take to be the central message of stoicism and one of the paradoxes to which it gives rise. This will serve the dual purpose of combatting my frustration with the lack of writing and encouraging me to think about ways to cope with that frustration. In writing this I draw, in particular, on the work of Epictetus, along with the relevant chapters in Jules Evans’s book Philosophy for Life, which is an enjoyable, pragmatically-oriented overview of classical philosophy.

1. The Central Teaching of Stoicism: Understand Zones of Control
For me, the central teaching of stoicism is that there are some things that are under our control and some things that are not. Learning to distinguish between the two is the key to psychological health and well-being. This teaching is summed up in Epictetus’s Enchiridion:

Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.
The things in our control are by nature free, unrestrained, unhindered; but those not in our control are weak, slavish, restrained, belonging to others. Remember, then, that if you suppose that things which are slavish by nature are also free, and that what belongs to others is your own, then you will be hindered. You will lament, you will be disturbed, and you will find fault both with gods and men. But if you suppose that only to be your own which is your own, and what belongs to others such as it really is, then no one will ever compel you or restrain you. Further, you will find fault with no one or accuse no one. You will do nothing against your will. No one will hurt you, you will have no enemies, and you will not be harmed.

As with much of classic philosophy, this has to be translated and modified in order to be palatable to modern ears. For instance, I don’t think I could quite agree with Epictetus that our bodies are not within our control, and I could only agree that ‘desire’ and ‘aversion’ are within our control if we first had a long conversation about what is meant by that word ‘control’. But these are the tedious concerns of someone who is too well-versed in the intricacies of analytical argumentation to deem anything to be uncontroversial. The basic insight seems to be correct. There are things within our control and things without. Most of the former involve things that take place within our bodies and minds; most of the latter involve things in the world around us.

For simplicity’s sake, let’s distinguish between the two realms by calling the former Zone 1 and the latter Zone 2 (I take these terms from Evans’s book). Zone 1 is our sovereign domain and it contains the things that are within our control. Zone 2 is the external world and it contains things that are beyond our control. This is illustrated in the diagram below.

The Stoic route to psychological well-being comes from two key insights about these zones. The first key insight is to realise that Zone 1 is much smaller than Zone 2. Indeed, in some classical Stoic works Zone 1 has remarkably spartan furnishings, consisting solely of our beliefs about the world around us. A more luxuriant furnishing of Zone 1 may be possible, but as a starting point we can accept that Zone 1 will be sparsely populated. The second key insight then comes from realising that much of our anxiety, anger and frustration stems from (a) thinking that we can control things in Zone 2 and (b) failing to take control over things in Zone 1. Evans sums this up pretty nicely so I’ll just quote from him:

A lot of suffering arises, Epictetus argues, because we make two mistakes. Firstly, we try to exert absolute sovereign control over something in Zone 2, something external which is not in our control. Then, when we fail to control it, we feel helpless, out of control, angry, guilty, anxious or depressed. Secondly, we don’t take responsibility for Zone 1, our thoughts and beliefs…Instead, we blame our thoughts on the outside world, on our parents, our friends, our lover, our boss, the economy, the environment, the class system, and then we end up, again, feeling bitter, helpless, victimised, out of control, and at the mercy of external circumstances.

I think there is wisdom in this. Whenever I find myself veering towards anger and frustration I try to take the Stoic shift in perspective, re-focusing on what I can control and dissolving my frustration with what I cannot. This is the approach I have taken toward my recent frustration with the lack of writing. I have realised that there are ways in which to accommodate my desire to write within the social calendar of the holiday period, and that I cannot force myself to relax by conforming to others’ expectations.

2. The Paradox of Stoicism: A Philosophy of Empowerment or Resignation?
Despite the practical wisdom I see in this central Stoic teaching, I can’t help but criticise certain elements of it. The problems are right there in the passage I quoted from Jules Evans. The advice seems to encourage a kind of passive disengagement from the world around us. The larger social forces that affect our lives (the economy, the class system etc) are deemed beyond our control. The source of misery lies in thinking we can control those forces. We need to step back and focus on what we can control: our beliefs and psychological reactions to the world. The implication seems to be that we should accommodate those to our social reality and not constantly strive to change the world. This seems to be a philosophy of resignation. How can a concern for social justice and technological progress find a foothold within this worldview?

At this point we have to confront the central paradox of stoicism. Even though there is something within the central teaching that lends itself to this resigned and disengaged point of view, the fact is that many of the founding fathers of the Stoic tradition, and many of its contemporary adherents, are deeply ambitious and active people. Many of them do strive to make the world a better place. They don’t collapse into a form of learned helplessness. Why is this? What accounts for the seeming paradox?

The answer is that the paradox is more superficial than real. Stoicism can be a philosophy of empowerment, not simply one of resignation. Taking the Stoic shift in perspective enables empowerment. You don’t waste time on that which you cannot control; you focus on what is within your control and you realise that what is within your control can have some effect on what happens in Zone 2. It is limited and attenuated, to be sure, but it is real nevertheless. By carefully leveraging this attenuated form of control you can still engage with the world around you. You can still care about social justice and progress. But you can do so in a better way, maintaining your enthusiasm and stamina without burning out and feeling frustrated and let down when you don’t achieve all your initial goals.

That’s not to say that resignation is not part of stoicism. It still lurks in the background. Rather, it’s to say that there are two pathways to Stoic contentment:

Path of Resignation: accepting that you can change very little in the world and resigning yourself to this by focusing on accommodating your beliefs to the broader reality.

Path of Empowerment: accepting that you can change very little in the world but realising that you do have some power to change things and focusing on what you can change, not what you can’t.

The difference between the two pathways is marginal. The second pathway is the tougher of the two. To make positive changes in the broader reality you need good evidence about the causal influence of your actions, and you need to make good decisions about how to exercise your causal power. This requires far more patience, dedication and care than most people are able to muster. But it’s better than perpetual frustration.

For what it is worth, I think that the effective altruist (EA) movement is a good example of a social movement that encourages people to take the second pathway. I appreciate this is controversial but, at its heart, the EA movement is about demonstrating how individual decision-making can have profound positive effects on the world. It is also about focusing on what the individual can control, not what they cannot. It is often criticised for this limited perspective (e.g. for not trying to change social institutions), and I have no doubt that members of the movement feel frustrated and angry when they confront a perceived lack of progress, but I still think there is something quintessentially Stoic about the outlook of the EA movement as a whole. I might write about this topic more in the future.
Anyway, that’s all for this post.

As the year winds to a close I thought I would provide links to the most-viewed posts on this blog over the past year. These aren't my own particular favourites but they may say something about the internet's viewing preferences and the place of this blog within that world of preferences.

The third of these was not actually published in 2015 but ranked third in 2015 views thanks to a mention in Bill Gates's reddit AMA back in January. Because of this anomaly I've made this a 'top 11' rather than a 'top 10' (insert Spinal Tap joke here). Here's the full list:

I also publish most of my stuff on the IEET webpage and I get more views over there than I do on here. Indeed, I was surprised/pleased to learn that I was the most viewed writer of the year on the IEET. Because of this I thought I would provide a 'top 10' list for my posts on that website too:

Tuesday, December 29, 2015

For the next in my year-in-review series I thought I'd provide links to all the interviews, talks and debates I did in 2015. Turns out I did more than I remember doing:

'Enhanced!' - This was an interview I did on the Robot Overlordz podcast. This one covered the topics of technological unemployment, human enhancement and the extended mind thesis.

'Sex Machina!' - This was another interview on the Robot Overlordz podcast. This one looked at the ethics of sex robots and artificial intelligence. It also included a brief discussion of the movie Ex Machina, hence the title.

'Superintelligence' - A long video conversation with Adam Ford about the arguments in Nick Bostrom's book Superintelligence. This was a live broadcast and unfortunately we lost our link-up for a few minutes slightly after the half hour mark. So there is some awkward silence at that point. Just skip ahead to the point where we start talking again.

'The Case for Commercial Surrogacy' - Audio of my opening speech at a debate about the legalisation of commercial surrogacy. I present three reasons for thinking we should legalise the practice.

'Does life have meaning in a world without work?' - My most recent podcast with Jon Perry and Ted Kupper from the excellent Review the Future podcast. This one looks at the meaning of life in a future world of rampant technological unemployment.

Sunday, December 27, 2015

It's that time of year when all good bloggers indulge in some end-of-year review. Here's the first of several from me. This one lists all the academic papers I had accepted for publication in 2015. I've included abstracts and links below:

Why AI doomsayers are like sceptical theists and why it matters? - Minds and Machines 25(3): 231-246 - An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers’ position, or an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.

Common Knowledge, Pragmatic Enrichment and Thin Originalism - Jurisprudence, forthcoming, DOI: 10.1080/20403313.2015.1065644 - The meaning of an utterance is often enriched by the pragmatic context in which it is uttered. This is because in ordinary conversations we routinely and uncontroversially compress what we say, safe in the knowledge that those interpreting us will “add in” the content we intend to communicate. Does the same thing hold true in the case of legal utterances like “This constitution protects the personal rights of the citizen” or “the parliament shall have the power to lay and collect taxes”? This article addresses this question from the perspective of the constitutional originalist — the person who holds that the meaning of a constitutional text is fixed at some historical moment. In doing so, it advances four theses. First, it argues that every originalist theory is committed to some degree of pragmatic enrichment; the debate is about how much (enrichment thesis). Second, that in determining which content gets “added in”, originalists typically hold to a common knowledge standard for enrichment, protestations to the contrary notwithstanding (common knowledge thesis). Third, that the common knowledge standard for enrichment is deeply flawed (anti-CK thesis). And fourth, that all of this leads us to a thin theory of original constitutional meaning — similar to that defended by Jack Balkin and Ronald Dworkin — not for moral reasons but for strictly semantic ones (thinness thesis). Although some of the theses are extant in the literature, this article tries to defend them in a novel and perspicuous way. (Official; Philpapers; Academia.edu)

Human Enhancement, Social Solidarity and the Distribution of Responsibility - Ethical Theory and Moral Practice, forthcoming, DOI: 10.1007/s10677-015-9624-2 - This paper tries to clarify, strengthen and respond to two prominent objections to the development and use of human enhancement technologies. Both objections express concerns about the link between enhancement and the drive for hyperagency (i.e. the ability to control and manipulate all aspects of one’s agency). The first derives from the work of Sandel and Hauskeller and is concerned with the negative impact of hyperagency on social solidarity. In responding to their objection, I argue that although social solidarity is valuable, there is a danger in overestimating its value and in neglecting some obvious ways in which the enhancement project can be planned so as to avoid its degradation. The second objection, though common to several writers, has been most directly asserted by Saskia Nagel, and is concerned with the impact of hyperagency on the burden and distribution of responsibility. Though this is an intriguing objection, I argue that not enough has been done to explain why such alterations are morally problematic. I try to correct for this flaw before offering a variety of strategies for dealing with the problems raised. (Official; Philpapers; Academia.edu)

The Threat of Algocracy: Reality, Resistance and Accommodation, Philosophy and Technology, forthcoming - One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithm-driven decision-making does pose a significant threat to the legitimacy of such processes. Modeling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat, and addresses two possible solutions (named, respectively, “resistance” and “accommodation”). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation. (Official; Philpapers; Academia.edu)

Thursday, December 24, 2015

There is a well-publicised campaign to Stop Killer Robots. The goal of the campaign is to pre-emptively ban the use of fully autonomous weapons systems (AWSs). This is because:

Allowing life or death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war.

But is this really true? Couldn’t it be the case that sufficiently advanced AWSs would be morally superior to human beings and that their deployment could be safer and more morally effective than the deployment of human beings? In short, is there really a fundamental moral line that is being crossed with the development of AWSs?

These are questions taken up by Purves, Jenkins and Strawser (‘Purves et al’) in their paper ‘Autonomous Machines, Moral Judgment, and Acting for the Right Reasons’. As they see it, existing objections to the use of AWSs suffer from two major problems. First, they tend to be contingent. That is to say, they focus on shortcomings in the existing technology (such as defects in targeting systems) that could be overcome. Second, they tend not to distinguish between weaponised and non-weaponised autonomous technology. In other words, if the objections were sound they would apply just as easily to other forms of autonomous technology (e.g. driverless cars) that are generally encouraged (or, at least, not so actively opposed).

They try to correct for these problems by presenting two objections to the use of AWSs that are non-contingent and that offer some hope of distinguishing between AWSs and non-weaponised autonomous systems. I want to have a look at their two objections. The objections are interesting to me because they rely on certain metaphysical and ethical assumptions that I do not share. Thus, I think their case against AWSs is weaker than they might like. Nevertheless, this does lead to an interesting conclusion that I do share with the authors. I’ll try to explain as I go along. It’ll take me two posts to cover all the relevant bits and pieces. I’ll start today by looking at their first objection and the criticisms thereof.

1. The Anti-Codifiability Objection
Purves et al’s paper must be appreciated in the appropriate context. That context is one in which just war theory plays a significant role. They refer to this and to leading figures in the field of just war theory throughout the paper. Many of their arguments presume that the permissible deployment of AWSs in warfare requires compliance with the principles of just war theory. I’m willing to go along with that with two caveats: (i) I cannot claim to be an expert in just war theory — I have a very sketchy knowledge of it, but many of the principles that I am aware of have their equivalents in mundane moral contexts (e.g. the principle of right intention, discussed in part two); (ii) we shouldn’t assume that the principles of just war theory are beyond criticism — several of them strike me as being questionable and I’ll indicate some of my own qualms as I go along.

Some of the relevant principles of just war theory will be introduced when I outline their arguments. One basic background principle should be made clear at the outset. It is this: the actors in a war should be moral agents. That is to say, they should be capable of appreciating, responding to and acting for moral reasons. They should be capable of exercising moral judgment. In addition to this, moral accountability should be present in military operations: there must be someone acting or controlling the actions of soldiers that can be rightfully held to moral account. The major concern with AWSs is that they will circumvent these requirements. AWSs operate without the direct control and guidance of human commanders (that’s why they are ‘autonomous’). So the only way for them to be legitimately deployed is if they themselves have the appropriate moral agency and/or accountability. Both of the objections mounted by Purves et al dispute this possibility.

The first of these objections is the anti-codifiability objection. This one is based on the notion that an AWS must be programmed in such a way that it can replicate human moral judgment. In order for this to happen, human moral judgment must be codifiable, i.e. capable of being reduced to an exhaustive list of principles. This is because robots are programmed using lists of step-by-step instructions that are specified (in some fashion) by their designers. The problem for AWSs is that moral judgment may not be codifiable. Indeed, several moral philosophers explicitly argue that it is not codifiable. If they are right, then an AI could not, in principle, be a moral actor of the appropriate kind. Their deployment in war would not comply with the relevant principles of just war theory.

To put this a little more formally:

(1) In order for AWSs to exercise moral judgment, moral judgment must be codifiable.

(2) Moral judgment is not codifiable.

(3) Therefore, AWSs cannot exercise moral judgment.

You could tack on further premises to this simple version of the argument. These further premises could lead you to the conclusion that AWSs should not be used in a military context.

Interestingly, Purves et al don’t spend that much time defending the first premise of this argument. They seem to take it to be a relatively obvious truth about the nature of robot construction. I guess this is right in a very general sense: contemporary programming systems do rely on step-by-step instructions (algorithms). But these instructions can take different forms, some of which may not require the explicit articulation of all the principles making up human moral judgment. Furthermore, there may be alternative ways of programming a robot that avoid the codifiability issue. I’ll say a little more about this below.
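To make the codifiability assumption concrete, here is a toy sketch of what a fully ‘codified’ targeting decision might look like. The rule names, fields and thresholds are my own inventions for illustration — they come from neither Purves et al’s paper nor any real system — but the structure captures what premise (1) assumes: that moral judgment can be exhausted by an explicit, ordered list of principles.

```python
# A toy sketch of a 'codified' targeting decision. The rules and the
# numeric comparison are invented for illustration only; they are not
# drawn from any real targeting system or from Purves et al's paper.

def permissible_to_engage(target):
    """Apply an explicit, exhaustive list of principles in order."""
    rules = [
        # Principle of discrimination: only combatants may be targeted.
        lambda t: t["is_combatant"],
        # Principle of proportionality: expected collateral harm must not
        # exceed the expected military advantage (toy numeric comparison).
        lambda t: t["expected_collateral_harm"] <= t["military_advantage"],
        # Principle of necessity: no less harmful alternative available.
        lambda t: not t["nonlethal_alternative_available"],
    ]
    return all(rule(target) for rule in rules)

target = {
    "is_combatant": True,
    "expected_collateral_harm": 2,
    "military_advantage": 5,
    "nonlethal_alternative_available": False,
}
print(permissible_to_engage(target))  # True
```

The anti-codifiability theorist’s claim, in these terms, is that no finite list of `rules` like this could ever be exhaustive: McDowell’s point (quoted below) is that cases would inevitably turn up in which the mechanical application of such a list strikes us as wrong.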

Purves et al dedicate most of their attention to the second premise, identifying a number of different moral theorists who believe that moral judgment is not codifiable. Two famous examples would be John McDowell and Jonathan Dancy. McDowell famously argued against codifiability in his 1979 paper ‘Virtue and Reason’ (he was focusing in particular on moral virtue, which we can view as part of moral judgment more generally):

[T]o an unprejudiced eye it should seem quite implausible that any reasonably adult moral outlook admits of any such codification. As Aristotle consistently says, the best generalizations about how one should behave hold only for the most part. If one attempted to reduce one’s conception of what virtue requires to a set of rules, then, however subtle and thoughtful one was in drawing up the code, cases would inevitably turn up in which a mechanical application of the rules would strike one as wrong.

Dancy has a similar view, arguing that general moral principles are not part of human moral judgment. Instead, human moral judgment is particularised, i.e. always tailored to the particularities of a real or imagined moral decision problem. This has led to a school of moral theory known as moral particularism. What’s more, even those who don’t explicitly endorse McDowell or Dancy’s views have argued that moral judgment is highly context sensitive, or that to make use of articulable moral principles one must already have a faculty of moral judgment. The latter view would pose a problem for defenders of AWSs since it would rule out the construction of a faculty of moral judgment from the articulation of moral principles.

If moral judgment is not capable of being codified, what is required for it to exist? Purves et al remain agnostic about this. They highlight a range of views, some suggesting that phenomenal consciousness is needed, others suggesting a capacity for wide reflective equilibrium is the necessary precursor, and yet others focusing on practical wisdom. It doesn’t matter which view is right. All that matters for their argument is that none of these things is, in principle, within the reach of a computer-programmed robot. They think they are on firm ground in believing this to be true. They cite the views of Hubert Dreyfus in support of this belief. He argues that a computer would have to rely on a discrete list of instructions and that these instructions could never replicate all the tacit, intuitive knowledge that goes into human decision-making. Similarly, they note that only a minority of philosophers of mind think that a robot could replicate human phenomenal consciousness. So if that is necessary for moral judgment, it is unlikely that an AWS will ever be able to exercise moral judgment.

That is the essence of their first objection to AWSs. Is it any good?

2. Assessing the Anti-Codifiability Objection
Purves et al consider two main criticisms of their anti-codifiability objection. The first argues that they fail to meet one of their own criteria for success — non-contingency — because AWSs could potentially exercise moral judgment if some of their claims are false. The second argues that even if AWSs could never exercise moral judgment, their deployment might still be morally superior to the deployment of humans. Let’s look at both of these criticisms in some detail.

The first criticism points out that two things must be true in order for Purves et al’s objection to work: moral judgment must not be codifiable; and AWSs must (in principle) be incapable of exercising such non-codifiable moral judgment. Both of these things might turn out to be false. This would render their argument contingent upon the truth of these two claims. Hence it fails to meet their non-contingency criterion of success for an objection to AWS.

Purves et al respond to this criticism by accepting that the two claims could indeed be false, but arguing that if they are true, they are true in principle. In other words, they are making claims about what is necessarily* true of morality and AWSs. If they are correct, these claims will be correct in a non-contingent way. It would be a hollow victory for the opponent to say the objection was contingent simply because these two in-principle claims could be false. Many in-principle objections would then share a similar fate:

While it is possible that what we say in this section is mistaken, unlike other objections, our worries about AWS are not grounded in merely contingent facts about the current state of AI. Rather, if we are correct, our claims hold for future iterations of AI into perpetuity, and hold in every possible world where there are minds and morality.

To some extent, I agree with Purves et al about this, but largely because this is a particularly unfavourable way to frame the criticism. I think the problem with their argument is that the in-principle claims upon which they rely are much weaker than they seem to suppose. Both claims seem to me to rest on metaphysical suppositions that are deeply uncertain. Although there is some plausibility to the anti-codifiability view of moral judgment, I certainly wouldn’t bet my life on it. And, more importantly, I’m not sure anyone could prove it in such a way that it would encourage everyone to reject the deployment of AWSs (the context for the paper is, after all, the need for a campaign against killer robots). The same goes for the inability of AWSs to replicate moral judgment. In general, I think it is a bad idea to make claims about what is or is not in principle possible for an artificial intelligence. Such claims have a tendency to be disproven.

More particularly, I think Purves et al ignore techniques for automating moral judgment that could get around the codifiability problem. One such technique is machine learning, which would get machines to infer their own moral principles from large sets of sample cases. Here, the programmer is not instructing the machine to follow an articulated set of moral principles; rather, they are instructing it to follow certain statistical induction rules to come up with its own. There are definitely problems with this technique, and some have argued that it faces principled limitations, but it is getting more impressive all the time. I wouldn’t be sure that it is limited in the way that Purves et al require. Another technique that is ignored in the paper is human brain-emulation, which could also get around the problem of having to articulate a set of moral principles by simply emulating the human brain on a silicon architecture. This technique is in its infancy, and perhaps there are limits to what it is possible to emulate on a silicon architecture but, once again, I’d be inclined to avoid making absolute claims about what is or is not possible for such techniques in the future.
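The machine-learning route can be illustrated with a deliberately minimal sketch. The features, labels and the nearest-neighbour method below are all invented toy choices of mine (real systems would use far richer learners), but the structural point stands: the ‘judgment’ is induced from labelled cases, and at no point does anyone write down an exhaustive list of moral principles.

```python
# A minimal illustration of learning judgments from cases rather than
# from articulated principles. The feature vectors and labels are toy
# data invented for illustration, not a serious model of moral learning.

def nearest_neighbour_judgment(cases, query):
    """Return the label of the stored case closest to the query.

    No moral principle is ever written down: the 'judgment' is induced
    entirely from previously labelled examples.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_features, best_label = min(cases, key=lambda c: distance(c[0], query))
    return best_label

# Each case: ((combatant?, collateral_risk, urgency), permissible?)
labelled_cases = [
    ((1, 0.1, 0.9), True),
    ((1, 0.9, 0.2), False),
    ((0, 0.1, 0.9), False),
    ((1, 0.2, 0.8), True),
]

print(nearest_neighbour_judgment(labelled_cases, (1, 0.15, 0.85)))  # True
```

Whether this kind of induction from cases amounts to (or could ever amount to) genuine moral judgment is, of course, precisely what is in dispute; the sketch only shows that codifying principles by hand is not the only programming route available.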

To be fair, Purves et al do concede the bare possibility of a machine emulating human moral judgment. They argue towards the end of their paper that if a machine could do this it would throw up some deeply troubling questions for humanity. I happen to agree with them about this but I’ll address it in part two.

The second major criticism of the anti-codifiability objection holds that even if it is true, it could be that AWSs are morally superior to human moral agents on other grounds. Purves et al develop this criticism at some length in their paper. If you think about it, there are three main types of mistake that an AWS could make: (i) empirical mistakes, i.e. the system might fail to detect empirical facts that are relevant to moral decision-making; (ii) genuine moral mistakes, i.e. the system might fail to use the right moral judgments to guide its actions; and (iii) practical mistakes, i.e. the system might implement a morally sound decision in the wrong manner (e.g. reacting too late or too soon). They concede that AWSs might be less inclined to make empirical and practical mistakes. In other words, the AWS might be better at selecting appropriate targets and implementing morally sound decisions. We could call this kind of behaviour ‘moral conformism’. Their objection is simply that even if the AWS was better at conforming with moral requirements it would not be acting for the right moral reasons. Hence it would continue to make moral mistakes. The question is whether the continued presence of moral mistakes is enough to impugn the use of the AWS.

Let’s put it in practical terms. Suppose you have a human soldier and a conforming AWS. The human soldier is capable of exercising sound moral judgment, but they are slow to act and make mistakes when trying to discover morally salient targets in the field. Thus, even though the soldier knows when it is morally appropriate to target an enemy, he/she selects a morally inappropriate target 20% of the time. The AWS is much faster and more efficient: it is more likely to select morally salient targets in the field, but it is incapable of using moral reasons to guide its decisions. Nevertheless, the AWS selects morally inappropriate targets only 5% of the time. Should we really favour the use of the human soldier simply because they are acting in accordance with sound moral judgment? The critics say ‘no’ and their view becomes even more powerful when you factor in the moral imperfection of many human agents.
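The force of the conformism point can be seen with a toy simulation. The 20% and 5% error rates are the ones from the example above; the number of engagements and the fixed random seed are arbitrary choices of mine for illustration.

```python
import random

# A toy simulation of the trade-off described above: a human soldier who
# exercises genuine moral judgment but errs 20% of the time when selecting
# targets, versus an AWS that merely conforms to moral requirements but
# errs only 5% of the time. The error rates come from the example in the
# text; the engagement count and seed are invented for illustration.

def mistakes(error_rate, engagements, rng):
    """Count how often a decision-maker with the given error rate errs."""
    return sum(1 for _ in range(engagements) if rng.random() < error_rate)

rng = random.Random(42)  # fixed seed for reproducibility
n = 10_000

human_mistakes = mistakes(0.20, n, rng)
aws_mistakes = mistakes(0.05, n, rng)

print(human_mistakes > aws_mistakes)  # True: the conforming AWS errs less
```

Over any realistic number of engagements the conforming AWS produces far fewer wrongful targetings, which is exactly why the critic thinks the mere absence of ‘acting for the right reasons’ is not enough to impugn its use.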

Purves et al concede some of the force of this objection. Their main reply is to argue that, in order to comply with the requirements of just war theory, agents in a combat environment must act in accordance with right moral reason. This leads to their second, in principle, objection to the use of AWSs. I’ll take a look at that in part two.

* I say nothing here about the particular brand of necessity at work in the argument. It could be logical or metaphysical necessity, but I suspect Purves et al only need physical necessity for their argument to work (at least when it comes to what is or is not possible for AWSs).

Wednesday, December 23, 2015

I have a new paper coming out in the journal Philosophy and Technology. The final version won't be out for a couple of months but you can access the most up to date pre-print version below. Here are the full details:

Abstract: One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithm-driven decision-making does pose a significant threat to the legitimacy of such processes. Modeling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat, and addresses two possible solutions (named, respectively, “resistance” and “accommodation”). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.

Friday, December 18, 2015

I recently had the pleasure of being a repeat guest on the Review the Future podcast. I spoke with the hosts (Jon Perry and Ted Kupper) about a topic close to my own heart: the meaning of life in a world without work. You can listen at the link below:

The set-up for the discussion was a simple one: suppose the predictions about technological unemployment come true. Humans are no longer required for economically productive work. Suppose further that the gains from technology are shared among the general population. In other words, technological displacement from work does not result in hardship. Instead, we live in an age of abundance: everyone can have what they want thanks to machine labour. Would it still be possible to live a meaningful life in such a world? That's the question we explore in this podcast.

The discussion is rich, with many interesting diversions and offshoots. It's really a conversation between all three of us, not an interview in the traditional sense. Four main themes emerge:

The Anti-Work Critique - I introduce the arguments from various left-wing critics of capitalism suggesting that we would be better off if we didn't have to work. These arguments are applied to the debate about technological unemployment.

Philosophical Theories of Meaning of Life - We discuss different philosophical theories of meaning in life, their strengths and weaknesses, and how they may or may not survive in a postwork world.

Games, Art and the Good Life - We consider the possibility that the best life of all is the life of games (triumphing over unnecessary obstacles) and art. This offers some hope because these things could survive (and flourish) in an era of technological unemployment.

Racing Against Machines and Integration with Technology - I close by suggesting that increased integration with technology may be the best way to address the 'meaning deficit' that could arise in an era of technological unemployment.

Thursday, December 17, 2015

Symbols are valuable. People debate the ethical status of racial slurs, obscene paintings and blasphemous films. In these debates symbols (words, artworks and films) are deemed to have all sorts of (negative) value-laden connotations. In an earlier post, I looked at Andrew Sneddon’s work on the nature of these symbolic values. He argued that there were two main types of symbolic value. The first involved symbols being valued in virtue of what they represented (what he called ‘symbols as a mode of valuing’). The second involved symbols being valued, at least in part, for their own sake (what he called ‘symbols as a ground of value’).

In this post, I want to continue my analysis of Sneddon’s work by focusing on two further questions: (i) Why is it that symbols have value? and (ii) How much value do they have, or to put it another way, how much weight should they be accorded in our practical reasoning? Sneddon offers some programmatic assistance in answering both questions. Let’s see what he has to say.

1. Why do symbols have value?
To answer the first question we need to consider the general nature of value. In philosophical terms, this means we must conduct an axiological investigation into the grounds of value (‘axiology’ being the study of value). This is a controversial issue. Some people deny that there is any unifying ground of value. They argue that certain values are basic, incommensurable and sui generis. This, for example, is the view of natural law theorists like John Finnis who argue that one can identify approximately seven basic goods that determine what is valuable in human life.

Sneddon thinks it is possible to be a little bit more unificatory. He suggests that there are several basic value-laden concepts that we employ in our ethical discussions. These include things like harm/benefit, rights, and virtues. These concepts have distinct conditions of application, but they are unified insofar as they are all concerned with ensuring that human beings have a reasonable prospect of living a decent life. Thus, we worry about harms and celebrate benefits because they enhance individual well-being; we endorse rights because they protect individuals from unwarranted interference and entitle them to benefits; and we cherish virtues because we think they make possible a prosperous and fulfilling life. In all cases, the moral concepts seem reducible to concerns about the ‘contours of minimally acceptable human living’ (Sneddon 2015, s.4). Symbolic values fit within this general matrix as they too are concerned with conditions of acceptable human living but they are concerned with them in a distinctive way.

What is this distinctive way? To answer that, we need to consider the differences between the various value-laden concepts. In particular, we need to consider their differences along two dimensions: (i) the constitutive dimension, i.e. what is the concept constituted by? and (ii) the evaluative dimension, i.e. why do we care about it? Take harm/benefit as a first example. Sneddon argues that harms and benefits are constitutively individualistic. That is to say: harms/benefits are things that happen to or accrue to an individual human being. So, being physically assaulted is harmful because it negatively impacts on an individual’s life. Harms and benefits are also evaluatively individualistic. That is to say, we care about them because of what they do to individuals. This should be contrasted with rights. Rights are constitutively relational because they set conditions on how individuals can relate to one another, but they are evaluatively individualistic because these conditions are valued in virtue of how they improve individual lives.

Symbols are distinctive because they are both constitutively and evaluatively relational. Thus, Sneddon thinks they are important because of their centrality to the relational aspects of human life. Symbols are constitutively relational because they are objects, signs, practices (etc) that represent or stand for something else. Hence, they always stand in relations to both human interpreters and that which is being represented in symbolic form. This is clear from the Peircean account of symbols that I outlined in a previous post. Furthermore, they are evaluatively relational because they are important in virtue of how they mediate the relationship we have with others and the world around us. Thus, a racial slur is (negatively) value-laden because of what it says about the relationship between the user of the slur and the person or race in question.

I have tried to illustrate this categorisation of symbolic value in the diagram below. It shows how symbolic values differ from harms/benefits and rights.

Before I move on, I should say that Sneddon’s views about axiology and the categorisation of different value-laden concepts are disputable. For instance, one could challenge the notion that value-laden concepts are ultimately reducible to concerns about human living. Some people argue that there are genuine impersonal goods, i.e. goods that do not depend or rely upon human beings for their existence. Certain arguments in environmental ethics try to make the case for such goods. Likewise, one could dispute the claim that rights are constitutively relational but evaluatively individualistic. It is possible that there are group rights that are valued in virtue of how they affect relations between groups. The same could be true for certain harms and benefits. That said, I’m not sure that any of these criticisms would undermine the fundamental point, which is that symbols are valuable in virtue of how they mediate the relational aspects of human life.

2. How much value do symbols have?
This brings us to the second question. We can grant that symbols have some sort of value in virtue of their relational properties. The question is how much value do they have? It is impossible to answer that question in the abstract. The value of symbols will vary across contexts and cases. The symbolic disvalue of the N-word is relatively high and relatively fixed. It should probably be afforded a significant amount of weight in our practical reasoning. Other symbols might be more flaky and less weighty. Nevertheless, despite the fact that there is no abstract answer to the question, there are certain general guidelines for thinking about the status of symbolic values in ethical reasoning.

One such guideline concerns the polysemous nature of symbols. ‘Polysemy’ is a fancy word meaning simply that symbols often have more than one legitimate interpretation. This is true even of something as contentious as the N-word. Although this word usually has significant disvalue and so should be avoided at all costs, in certain contexts (e.g. rap music) it could have more positive connotations and could be used for more positive ends. The polysemous nature of symbols means that claims about symbolic values should often be viewed as being essentially contestable. This is a point I made at greater length in a previous post about Brennan and Jaworski’s paper on symbolic objections to markets. The upshot of that post was that symbolic practices are themselves subject to ethical scrutiny and can be overridden by other ethical considerations. Sneddon makes pretty much the same point, adding that it is rarely going to be worthwhile to dispute particular interpretations of symbols. Instead, one is more likely to make progress by navigating through ‘webs of competing symbolic claims’.

Other guidelines in relation to reasoning about symbolic values come in the shape of models for thinking about such values. Sneddon offers two such models in his paper. The first suggests that we think about symbolic values as risks and opportunities. The second suggests that we think about symbolic values as insults and compliments. Both models serve to highlight the relational aspects of symbolic values and utilise analogies with more familiar patterns of reasoning. I’ll try to briefly explain both.

The risk/opportunity model builds upon how we think about harms and benefits. Harms and benefits are positive and negative outcomes that accrue directly to individuals. Risks and opportunities are different. Risks are potential harms. They are events that presage or increase the likelihood of a harm accruing to an individual. Opportunities are the opposite: events that presage or increase the likelihood of a benefit accruing to an individual. Risks and opportunities thus have representational properties: they represent potential harms and benefits. This makes it relatively easy to fit them within the framework of symbolic values. We can understand symbols as things that represent potential harms and benefits — i.e. things that are risks and opportunities. Sneddon illustrates this with the examples of Serrano’s Piss Christ and the flag of the USA. The former could be deemed disvaluable because of the risk it presents (of harm to particular communities of belief etc.); the latter could be deemed valuable because of the opportunity it represents (USA as a land of hope and opportunity). The weight that is given to these risks and opportunities would then depend on their relative magnitude and the context in which we are making decisions. In some cases, minimal risks are worth taking seriously. In others, the risk would have to cross some threshold before it was taken seriously. In every case, it would need to be weighed against actual or potential benefits.

I’m not sure that the risk/opportunity model is the most natural way to think about symbolic value, though it may be useful in some instances. One problem is that it is primarily consequentialist in nature. The second model is more deontological and focuses on symbols as insults or compliments. We are familiar with insults and compliments in our everyday lives. If I insult someone, I fail to respect them in the appropriate manner. This may result in some emotional harm to that individual and this harm might explain why the insult is problematic. But sometimes insults do not result in any direct harm. They are deemed problematic because they violate some duty that we owe to a person or group of persons. The same goes for compliments, albeit in the positive direction. Compliments are above and beyond the call of duty. Both insults and compliments are relational in nature: they are valued in virtue of, and constituted by, relationships between persons. Sneddon argues that we can view symbols as types of insult and compliment. So, for instance, a racial slur can be deemed problematic because it insults a person or group of persons. It may not result in any direct harm to that person or group, but it nevertheless violates a duty that was owed. The significance of these duties in our practical reasoning depends on how one approaches moral duties more generally. Must one comply with all duties? Can one duty be overridden by another? Can duties be abandoned if following them results in significant consequential harm to another person? These questions are beyond the scope of this post. They have to do with the clash between consequentialist and deontological theories more generally.

Okay, that’s it. To briefly recap, Sneddon argues that symbols are valuable in virtue of their relational properties, specifically in virtue of how they mediate and guide our relationship with others and the world around us. The importance of symbolic values will vary depending on the context, but it should be remembered that the interpretation of many symbols is essentially contestable. We can also use the models of risk/opportunity and insult/compliment to think about the role of symbolic values in our practical reasoning.

Tuesday, December 15, 2015

13th November 2015 is a date that will live in infamy. It was, of course, the day of the Paris terrorist attacks in which 130 people lost their lives. It was the deadliest attack in France since WWII, and the deadliest in Europe since the Madrid bombings in 2004. One interesting feature of the attacks was the response they engendered on various social media outlets. In the aftermath many people took to sites like Twitter and Facebook to proudly display images of the Eiffel Tower (mocked up as a peace symbol) and the Tricolour on their profiles. Indeed, Facebook explicitly offered people the option of adding a Tricolour-overlay to their profile pictures as a mark of solidarity with the people of France.

These gestures were not without controversy. While most accepted that they were well-meaning, some lamented the fact that equivalent gestures were not made in response to similar attacks that occurred in Beirut just prior to the Paris attacks. Whatever the merits of that particular argument, the whole debate itself is testimony to the power of symbols in human life. It seems clear that symbols like the Tricolour are taken to have value, and that the use of those symbols in certain contexts has additional value (if viewed as a symbol of solidarity) and, indeed, disvalue (if viewed as a symbol of exclusion or a lack of solidarity with non-Western peoples).

Why is it that symbols take on such value-laden connotations? And what is the precise nature of symbolic value? These are questions that Andrew Sneddon tries to address in his article ‘Symbolic Value’, which I recently stumbled across in the Journal of Value Inquiry. The article is an exercise in expository clarification. It is synoptic and abstract in nature. It does not concern itself with particular debates about the moral value of symbols but, rather, with the preliminary question of why it is that symbols feature so prominently in such debates.

I liked the article a lot. It helped me to clarify my own thinking about symbolic value. Consequently, I want to share some of the key insights here, starting with Sneddon’s general characterisation of symbols and his delineation of two distinct types of symbolic value.

1. What is a symbol anyway?
Sneddon adopts C.S. Peirce’s characterisation of symbols. According to this characterisation, symbols have three main components: (i) the symbol itself, i.e. some object, practice, word (etc.) that stands for or represents something else; (ii) an interpreter who determines what it stands for; and (iii) a ground of representation, i.e. something that justifies some particular interpretation.

An example will help. There is a painting on the wall in front of me. This painting is a symbol. In addition to its aesthetic merits, it represents or stands for something else. In this case, the painting is a representation of the city of Galway (where I currently live). It was purchased by someone with whom I am particularly close, so it could also be said to represent their love and affection for me. I am the interpreter: I am the one who imbues the painting with this representational meaning. There are grounds for my particular interpretations. One of these grounds is the resemblance between the painting and the actual city of Galway. The painting depicts buildings and geographical landmarks that are distinctive of the city. Another ground is causal history, i.e. the fact that it was purchased by a particular person at a particular time. This causal history is arguably what makes it a symbol of love and affection. The grounds of interpretation are interesting as they can come in several different forms. Sneddon mentions resemblance, convention, stipulation and causal connection as prominent grounds of interpretation in the article.

There are three points that are worth emphasising about symbols before we address the values that attach to them. First, symbols can be communicative but they need not be. Communication involves someone (or some group of people) trying to communicate with another person (or group of people) via a symbol of some sort. In communicative contexts, the attitudes and intentions of the communicator are often a relevant factor in the interpretation of the symbol. But symbols do not always have communicators. All you need for a symbolic practice is an interpreter with some ground of interpretation. This is important because it means that objects or practices could be taken to have symbolic value, even if no one creates them for a symbolic purpose. The second point is that symbols can be (but need not always be) polysemous. That is to say, the same symbol can legitimately be taken to represent several different things. This also has ethical significance because it affects how strong claims to symbolic value are in specific contexts.

The final point is that symbols are hugely important in human society. This is obvious. Some anthropologists and historians have even referred to us as the symbolic species. Harari’s recent and popular overview of human history, Sapiens, provides an interesting twist on this view. It argues that symbolic representations play a decisive role in human history. For better or worse, human societies are marked by the fact that they create imaginative representations (religious origin myths, social hierarchies and prejudices, money, scientific theories) that are then overlaid onto the reality they experience. These imaginative representations blend with that reality, at least in our experiences of it. This is all mediated and maintained through symbols.

2. Two Types of Symbolic Value
That’s all by way of introduction. One of Sneddon’s primary contributions to the analysis of symbolic value is his attempt to delineate two main types of symbolic value, with the second type breaking down into two sub-types. They are:

Symbols as a mode of valuing: The symbol has value in virtue of that which it represents. So in order to understand the value of the symbol you must understand the value of that which it represents.

Symbols as a ground of value: The symbol has value in itself, apart from that which it represents. There are two ways in which this can happen:

Hybrid cases: The symbol has value in virtue of that which it represents, but this doesn’t explain all the value that attaches to the symbol (i.e. the symbol itself must ground some value)

Pure cases: The symbol alone has value, in and of itself, apart from that which it represents (though to be a symbol it must still be taken to represent something).

This taxonomy may make little sense without some concrete examples. So let’s go through a few.

We’ll start with symbols as a mode of valuing. This is probably the most common and intuitive case. Here, there is something in the world that we take to be valuable and we construct a symbol to represent and remind us of that value. In an earlier post, I discussed the famous example, from Herodotus, of symbolic rituals demonstrating respect for the dead. The Greeks symbolised their respect for the dead by burning the bodies on a funeral pyre; the Callatians symbolised their respect for the dead by eating the bodies. Both found the other’s rituals bizarre, but both agreed on the underlying value: that the dead deserved respect. They merely represented that value in different ways. The symbol had value in virtue of what it represented.

In his paper, Sneddon uses a different example. He considers the various statues around Canada that commemorate the victories of the women’s suffrage movement. Here, it is the victories that are deemed important and valuable. The statues have value in virtue of the fact that they represent and mark those victories. If someone defaced the statues, it would usually be taken as a lack of respect for those victories (unless we had some evidence to suggest the attack was motivated by some other factor, e.g. hatred for the artist). So, again, it is that which is being represented that confers value on the symbol.

This is to be contrasted with cases in which the symbol itself is a ground of value. Sneddon notes that there are relatively few ‘pure’ cases of this type. The far more common type is the hybrid case. Such cases often follow a pattern: the symbol starts off by having value in virtue of that which it represents, but over time, due to complex historical and social factors, the symbol itself starts to have some independent value. Sneddon provides one good example of this. Indeed, it is so good that I need to preface my discussion of it with a warning. In the next paragraph I am going to mention (not use) a word for black people that is deemed so incendiary that one usually has to refer to it by using a harmless euphemism. However, I am going to drop that typical convention because it illustrates the point that Sneddon is trying to make.

The word, of course, is ‘nigger’. The word is a symbol, as are all words: it is used to refer to black people (particularly, though not exclusively, black people in the USA). It has tremendous disvalue. Part of this is because the word is, for historical and cultural reasons, a highly derogatory and dehumanising way of referring to black people. But this history does not explain all of the disvalue that attaches to the word. The word itself now has its own disvalue. This is clear from the fact that people cannot even mention the word without provoking a negative reaction. Instead, they have to mention the word indirectly by using the euphemism ’N-word’.

This suggests an interesting test for whether a symbol grounds value. The test relies on the philosopher’s use/mention distinction. When one uses a word, one tries to get the listener to look past the word itself to that which it represents. Consider a sentence like ‘there is an apple on the tree’. In that sentence, I use the word ‘apple’ to describe something on a tree. I try to get the listener to see past the word to the object in the real world. Contrast that with a sentence like ‘the word ‘apple’ has five letters’. In that sentence, I mention the word ‘apple’ and try to draw the reader’s attention to the word itself, irrespective of that which it represents. One’s reactions to the use and mention of symbols can say a lot about the value that attaches to them. If the symbol is merely a mode of valuing, then we would expect only uses of the symbol to provoke a value-laden reaction. But if the symbol itself grounds value, then we would expect mentions of the symbol to provoke such reactions too. This is clearly what happens in the case of a word like ‘nigger’. So let’s formalise this into a test:

The Use/Mention Test: One way of testing to see whether a symbol itself grounds value is to see whether mentions (as opposed to uses) of the symbol provoke a value-laden reaction. If they do, then it could indicate that the symbol has value independent of that which it represents.

Sneddon does discuss this idea in his article, but doesn’t formulate it into a test as I have done. This might be because the use/mention test works well in the case of words, but not so well in the case of other symbols. I’m not sure about that though. I think it could work in other contexts. Consider visual images. The Danish cartoons controversy suggests that at least some (I know this is disputed to an extent) visual representations of the prophet Muhammad are highly offensive. This is presumably because of the value that attaches to that which is being represented. But some of the media reaction to that event suggested that the symbol itself may ground disvalue. Media outlets refused to even show the cartoons as part of their news coverage of the controversy. In other words, they refused not only to use the symbols but to mention them as well. This might be indicative of disvalue attaching directly to the symbol. That said, there is a good competing explanation of the media reaction: they were afraid to ‘mention’ the cartoons for fear of reprisal.

Thus far, I have been talking about hybrid cases in which the symbol has value in virtue of that which it represents, but where that doesn’t account for all the value. Are there any ‘pure’ cases, i.e. cases in which all the value is grounded in the symbol? Sneddon admits that examples of this sort are harder to come across, but he does suggest one: Serrano’s Piss Christ. This was a photograph created by Andres Serrano which depicted a crucifix submerged in a jar of his own urine. The work courted considerable controversy, largely because of its symbolism. Now, it is possible that much of the (dis)value in this case is attributable to what is being represented. But Sneddon suggests that someone could object to the work on purely symbolic grounds. In other words, they might think that the work harms no real person, nor violates anyone’s rights, nor undermines anyone’s virtues, but is nevertheless morally problematic. I imagine that the hypothetical objector here is a purely secular one, who does not accept the Christian story in any way, but is worried about the meaning of the symbol itself. I’m not sure if this is a perfect example of a purely symbolic ground of value, but it may gesture in the general direction of one.

Anyway, that’s all for this particular post. The goal has been to consider different forms of symbolic value. Nothing in this post should be construed as making a claim about the general importance of symbolic value in human life. I’ll do a follow up post looking at that issue.

Friday, December 11, 2015

I nearly forgot about it this year. The 9th of December 2015 was the sixth anniversary of this blog. It is interesting to see how my style has changed over the years. I'd like to think that practice improves writing, but I wonder whether that's true. It seems to me that I've gotten more long-winded over the years. Here's one post from every December in the life of this blog (which amounts to seven posts, since there have been seven Decembers):