Friday, December 28, 2012

This year was a busy one on the blogging front, though perhaps less busy than previous years. I'll just scrape past the 100-post barrier this year, despite having cleared it fairly comfortably in the past. Still, it's quality not quantity that counts, and the majority of my posts this year have been pretty substantive.

They have also covered a diverse range of topics, including: the ethics of the death penalty, incest, abortion and infanticide, blackmail, neuroscience-based lie detection, immortality, theism and the meaning of life, enhancement in sports and education, new natural law, the same-sex marriage debate, skeptical theism, the ontological foundations of morality and more.

Anyway, here are some of my favourites from the past twelve months. They are not arranged hierarchically, but chronologically. I've tried to include one post or group of posts from each month, with February being the sole exception (not enough blogging was going on back then). I haven't focused on the popularity of the posts when compiling this list, but you may like to know that the most popular posts were the two I did on the epistemic objection to torture. They got around 8,000 individual hits. That's not a great indicator of the number of people who read them, though, for two reasons: (i) most people probably don't read what they click through to; and (ii) the front page of this blog is set up in such a way that you can read posts in full without clicking through to them.

Two Italian ethicists, Alberto Giubilini and Francesca Minerva, got into trouble earlier this year for an article arguing for an equivalency between abortion and infanticide. In these two posts, I took a look at what their controversial article actually said.

William Lane Craig argues that it's impossible for life to have any meaning in the atheistic worldview; conversely he argues that it's possible on the theistic view. In this series of posts, I look at his arguments.

Some people think that torture is morally permissible in cases of extreme emergency. Typically, this is said to be because torture can help us to obtain valuable information. But is this correct? Roger Koppl argues that it isn't, for reasons related to the formal structure of an epistemic system. In these posts, I see what he has to say.

Skeptical theism is the major contemporary response to the evidential problem of evil. But its foundations are often poorly defended. This post looks at Trent Dougherty's recent article about this important topic.

Nothing provokes the "yuck" response more readily than incest, but are the arguments for its criminalisation any good? Following an article by Vera Bergelson, I consider five traditional rationales for the criminalisation of incest.

Bernard Williams argued for the tedium of immortality in the 1970s in his famous article "The Makropulos Case: Reflections on the Tedium of Immortality". This series of posts looks at some recent developments in this debate.

New natural lawyers frequently argue against the legalisation of same-sex marriage on the grounds that it violates the intrinsic good of marriage. They often claim that their arguments can be embraced for purely secular reasons. These two posts consider the opposing view: that their arguments are inherently religious.

One major objection to the death penalty is that it is not revocable. If a person is wrongly imprisoned, they can be released from jail, but if they are wrongly killed they can never be resuscitated. Is this a good objection to the death penalty? These posts try to find out.

In everyday conversation, our utterances often contain more meaning than is present in their linguistic structure. This is due to the phenomenon of conversational implicature. Can this phenomenon affect how we interpret the law as well?

Monday, December 24, 2012

Roughly (I’ll refine later on), the “technological singularity” (or “singularity” for short, and in the right context) is the name given to the point in time at which greater-than-human superintelligent machines are created. The concept (and name) was popularised by the science fiction author Vernor Vinge in the 1980s and 90s, though its roots can be traced further back in time to the work of John von Neumann and I.J. Good.

The notion of a technological singularity is treated with derision in some quarters. But that attitude seems inapposite. As David Chalmers points out in his long (and excellent) discussion, the singularity is worthy of serious philosophical and intellectual consideration. The arguments for its occurrence are themselves intrinsically fascinating, raising all sorts of philosophical and scientific questions. Furthermore, if it ever did happen, it would have serious practical, ethical and metaphysical consequences. Why should their consideration warrant derision?

Many now seem to agree. The California-based Singularity Institute has long been calling for (and more recently delivering) serious research on this topic. The Oxford-based Future of Humanity Institute has several researchers who dedicate themselves either full or part-time to the issue. And the recently-launched Cambridge Centre for the Study of Existential Risk includes the possible creation of superintelligent AI as one of the four greatest threats to humanity, each of which it will be studying.

I also agree. Like Chalmers, I think there are some intrinsically and instrumentally important philosophical issues to consider here, but I’m not entirely sure where I come down on any of those issues. One of the reasons for this is that I’m not entirely sure what the main issues are. I’m a philosopher (sort of - not officially!). I work best when complex topics are broken down into a set of theses, each of which can be supported by an argument (or set of arguments), and each of which is a potential object of further research and contestation. Unfortunately, I have yet to come across a general philosophical overview of the singularity that breaks it down in a way that I like. Chalmers’s article certainly attempts to do this, and succeeds in many areas, but his focus is both too narrow, and too complex for my liking. He doesn’t break down the core theses in a way that clearly and simply emphasises why the singularity is a matter of serious practical concern, something that may pose a genuine existential threat to humanity.

This post is my attempt to correct for this deficiency. I write it in full or partial awareness of other attempts to do the same thing, and with full awareness of the limitations of any such attempt. Breaking a complex topic down into discrete theses, while often useful, runs the risk of overlooking important connections, de-emphasising some issues, and obscuring others. I’m happy to entertain critiques of what I’m about to do that highlight failings of this sort. This is very much a first pass at this topic, one that I’m sure I’ll revisit in the future.

So what am I about to do? I’m going to suggest that there are three main theses to contend with when one thinks about the technological singularity. They are, respectively: (i) the Explosion thesis; (ii) the Unfriendliness thesis; and (iii) the Inevitability thesis. Each one of these theses is supported by a number of arguments. Identifying and clarifying the premises of these arguments is an important area of research. Some of this work has been done, but more can and should be done. Furthermore, each of the three main theses represents a point on a scale of possibilities. These other possibilities represent additional points of argumentation and debate, ones that may be taken up by opponents of the core theses. As a consequence, one’s view on the technological singularity could be thought to occupy a location within a three-dimensional space of possible views. But I’ll abjure this level of abstraction and focus instead on a couple of possible positions within this space.

With that in mind, the remainder of this post has the following structure. First, I’ll present each of the three main theses in order. As I do so, I’ll highlight the conceptual issues associated with these theses, the arguments that support them, and the scale on which each thesis lies. Second, I’ll consider which combinations of beliefs about these theses are most interesting. In particular, I’ll highlight how the combination of all three theses supports the notion of an AI-pocalypse, but that weaker versions of the theses might support AI-husbandry or AI-topia. And third, I’ll briefly present my own (tentative) views on each of the theses (without supporting argumentation).

1. The Explosion Thesis
The core thesis of singularitarian thinking is the Explosion Thesis. It might be characterised in the following manner.

Explosion Thesis: There will be an intelligence explosion. That is: a state of affairs will arise in which for every AIn that is created, AIn will create AIn+1 (where AIn+1 is more intelligent than AIn) up to some limit of intelligence or resources.
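The recursive structure of the thesis can be sketched with a toy model. This is purely illustrative and is not anything singularitarians have themselves proposed: assume each AI's "intelligence" is a single number, each generation amplifies it by some constant factor, and a resource ceiling caps the process. All parameter values are invented.

```python
# Toy model of the Explosion Thesis: each AI_n designs AI_(n+1),
# amplifying "intelligence" by some factor, up to a resource limit.
# All numbers here are illustrative assumptions, not estimates.

def intelligence_explosion(start=1.0, amplification=1.5, ceiling=1000.0):
    """Return the sequence of intelligence levels AI_0, AI_1, ..."""
    levels = [start]
    # AI_n creates AI_(n+1) so long as the resource ceiling permits.
    while levels[-1] * amplification <= ceiling:
        levels.append(levels[-1] * amplification)
    return levels

levels = intelligence_explosion()
# With these toy parameters the process runs for a number of
# generations and then halts at the ceiling, rather than diverging.
```

The point of the sketch is that the "explosion" is just a recursion, and that the "limit of intelligence or resources" in the thesis corresponds to the loop's stopping condition.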

Here, “AI” refers to an artificial intelligence with human-level cognitive capacities. Sometimes the label AGI is used. This refers to an Artificial General Intelligence, and its use can help us to avoid confusing the phenomenon of interest to singularitarians with computer programs that happen to be highly competent in a narrow domain. Still, I’ll use the term AI throughout this discussion.

The first human level AI will be numerically designated as “AI0”, with AI1 used to designate the first AI with greater than human capacities. Some useful shorthand expressions were introduced by Chalmers (2010), and they will be followed here. Chalmers used “AI+” to refer to any AI with greater than human intelligence, and “AI++” to refer to vastly more intelligent AIs of the sort that could arise during an intelligence explosion.

Conceptual and practical issues related to the Explosion Thesis include: (a) the nature of intelligence and (b) the connection between “intelligence” and “power”. Intelligence is a contested concept. Although many researchers think that there is a general capacity or property known as intelligence (labelled g), others challenge this notion. Their debate might seem important here because, if there is no such general capacity, we might not be able to create an AI. But this is a red herring. As Chalmers (2010) argues, all that matters is that we can create machines with some self-amplifying capacity or capacities that correlate with other capacities we are interested in and that are of general practical importance: for example, the capacity to do science, to engage in general means-end reasoning, to do philosophy, and so on. Whether there is some truly general capacity called “intelligence” matters not. The relationship between “intelligence” and “power” is, however, more important. If “power” is understood as the capacity to act in the real world, then one might think it is possible to be super-smart or intelligent without having any power. As we shall see, this possibility is an important topic of debate.

The original argument for the Explosion Thesis was I.J. Good’s. He said:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. (LINK)

Good’s argument has, in recent years, been refined considerably by Chalmers and Hutter (there may be others too whose work I have not read). I hope to look at Chalmers’s argument in a future post so I’ll say no more about it now.

As noted at the outset, the Explosion Thesis is a point on a scale. The scale is illustrated below. Below the Explosion Thesis one finds the AI+ Thesis. Defenders of this thesis agree that greater-than-human-level AI is likely, but do not think this will lead to the recursive intelligence explosion identified by Good. In terms of practical implications, there may be little difference between the two positions. Many of the potentially negative (and positive) consequences of the Explosion Thesis would also follow from the AI+ Thesis, though the levels of severity might be different. Next on the scale is the AI Thesis, and below that is the No AI Thesis. I think these speak for themselves. Proponents and opponents of the singularity take up positions along this scale.

The Explosion Thesis is the core of singularitarianism. Unless one accepts the Explosion Thesis, or at a minimum the AI+ Thesis, the other theses will be irrelevant. But what does it mean to “accept” the thesis? I adopt a Bayesian or probabilistic approach to the matter. This means that even if I think the probability of an intelligence explosion is low, it might still be worth dedicating serious effort to working out its implications. For example, I think a 10% chance of an intelligence explosion would be enough for it to warrant attention. Where the threshold lies for theses that are not worthy of consideration is another matter. I couldn’t give a precise figure, but I could probably give analogous probabilities, e.g. the probability of my car suddenly morphing into a shark seems sufficiently low to me not to warrant my attention. If the probability of an intelligence explosion was similar to that, I wouldn’t care about it.
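The probabilistic point here can be made concrete with a toy expected-value calculation. The probabilities and "disvalue" figures below are invented purely for illustration; nothing hangs on their exact values.

```python
# Toy expected-disvalue calculation: what makes a thesis worth
# attention is probability times stakes, not probability alone.
# All figures below are invented for illustration.

def expected_disvalue(probability, disvalue):
    """Expected disvalue of an outcome, in arbitrary units."""
    return probability * disvalue

stakes = 1_000_000  # arbitrary units of disvalue if the bad outcome occurs

explosion = expected_disvalue(0.10, stakes)   # the "10% chance" case
shark = expected_disvalue(1e-15, stakes)      # car-morphs-into-shark territory

# Equal stakes, wildly different expected disvalue: on this way of
# thinking, the first warrants attention and the second does not.
```

The sketch shows why a low probability alone does not settle the matter of attention-worthiness: the threshold depends on the product of probability and stakes.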

2. The Unfriendliness Thesis
This thesis may be characterised in the following manner:

Unfriendliness Thesis: It is highly likely that any AI+ or AI++ will be unfriendly to us. That is: it will have goals that are antithetical to those of us human beings, or will act in a manner that is antithetical to our goals.

This thesis, and its complement, the Friendliness Thesis, form an obvious spectrum of possible outcomes: at one end of the scale it is highly likely that AI+ will be unfriendly, and at the other end it is highly likely that AI+ will be friendly. One’s beliefs can then be represented as a probability distribution over this spectrum.

There are a couple of conceptual issues associated with this thesis. First, there is the question of what is meant by “friendliness” or “unfriendliness”. If the concept were introduced by a philosopher, it would probably be couched in terms of objective and subjective value, not friendliness. So, in other words, they would view the important issue as being whether an AI+ would act in a manner that promotes or undermines objective or subjective value. Objective value would concern what is best from an agent-neutral perspective; subjective value would concern what is best from the perspective of one or more agents. The use of the term “us” in the definition of the thesis suggests that it is some form of subjective value that is being invoked — perhaps an aggregate form — but that may not be the case. What really seems to be at issue is whether the creation of AI+ would be “good” or “bad”, all things considered. Those terms are fuzzy, for sure, but their fuzziness probably enables them to better capture the issues at stake.

If the thesis does invoke some form of (aggregate) subjective value, it raises another conceptual question: who is the “us” that is being referred to? It might be that the “us” refers to us human beings as we currently exist, thus invoking a species relative conception of value, but maybe that’s not the best way to think about it. Within the debate over human enhancement, similar issues arise. Some philosophers — such as Nicholas Agar — argue that human enhancement is not welcome because enhanced human beings are unlikely to share the values that current unenhanced human beings share. But so what? Enhanced human beings might have better values, and if we are the ones being enhanced we’re unlikely to care too much about the transition (think about your own shift in values as you moved from childhood to adulthood). Something similar may be true in the case of an intelligence explosion. Even if an AI has different values to our own, “we” may not care, particularly if our intelligence is enhanced as part of the process. This may well be the case if we integrate more deeply with AI architectures.

The main argument in favour of the Unfriendliness Thesis would be the Vastness of Mindspace Argument. That is: the multidimensional space of possible AI+ minds is vast; most locations within that space are taken up by minds that are antithetical to our values; therefore, it is highly probable that any AI+ will have values that are antithetical to our own. Of course, this argument is flawed because it assumes that all locations within that mindspace are equally probable. This may not be the case. Additional arguments are needed.
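The flaw just noted can be made vivid with a toy calculation. Suppose (purely for illustration; every figure here is invented) that 99% of possible minds are unfriendly by count, but that actual AI design processes are heavily biased toward the friendly region of mindspace.

```python
# Toy illustration of why the Vastness of Mindspace Argument needs a
# premise about probabilities, not just a premise about counting.
# All figures below are invented for illustration.

fraction_unfriendly = 0.99   # share of mindspace, by count
weight_friendly = 50.0       # hypothetical design bias toward friendly minds
weight_unfriendly = 1.0      # no comparable bias toward unfriendly minds

# Probability mass actually landing on unfriendly minds under the
# biased (non-uniform) prior:
mass_unfriendly = fraction_unfriendly * weight_unfriendly
mass_friendly = (1 - fraction_unfriendly) * weight_friendly
p_unfriendly = mass_unfriendly / (mass_unfriendly + mass_friendly)

# Under a uniform prior p_unfriendly would be 0.99; with the biased
# prior it drops to roughly 0.66. Counting alone settles nothing.
```

The design choice is deliberate: the count of unfriendly minds is held fixed, and only the prior over mindspace changes, which is exactly the premise the argument leaves undefended.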

The Orthogonality Argument (which I’ve looked at before) is one such argument. It has been developed by Armstrong and Bostrom and holds that, because intelligence and final goals are orthogonal, a high level of intelligence is compatible with (nearly) any final goal. In other words, there is no reason to think that intelligence correlates with increased moral virtue. Similarly, others (e.g. Muehlhauser and Helm) argue that values are incredibly complex and difficult to specify and stabilise within an AI architecture. Thus, there is no reason to think we could create an AI with goals that match our own.

The Friendliness Thesis might be held by a certain type of moral cognitivist: one who believes that as intelligence goes up, so too does the likelihood of having accurate moral beliefs. Since these beliefs are motivating, it would follow that AI+ is highly likely to be morally virtuous. Even many who reject this claim may point to the possibility of a friendly AI+ as a welcome one, and give this as a reason to be favourably disposed to the project of creating AI+. More on this later.

3. The Inevitability Thesis
The third thesis may be characterised in the following manner:

Inevitability Thesis: The creation of an unfriendly AI+ is inevitable.

The term “inevitable” is being used here in the same sense in which Daniel Dennett used it in his book Freedom Evolves. It means “unavoidable” and can be contrasted with the opposing concept of “evitability” (avoidability). One may wonder why I don’t simply use the terms “unavoidable” and “avoidable” instead. My response is that I like the idea of drawing upon the rich connotations of the “inevitability-evitability” distinction that arise from Dennett’s work.

In any event, the Inevitability Thesis obviously represents a point on a scale of possibilities that includes the Evitability Thesis at its opposite end. Those who subscribe to the latter will believe that it is possible to avoid the creation of unfriendly AI+. But we must be careful here. There are different ways in which unfriendly AI+ might be inevitable and different ways in which it might be evitable. Indeed, I identify at least two types of each, as follows:

Inevitability1: The creation of an unfriendly AI+ with the ability to realise its goals is inevitable. (Strong Inevitability Thesis)

Inevitability2: The creation of an unfriendly AI+ is inevitable, but it need not have the ability to realise its goals. (Containment/Leakproofing Thesis)

Evitability1: It is possible to avoid the creation of an unfriendly AI+ and to realise the creation of a friendly AI+. (The AI-Husbandry Thesis)

Evitability2: It is possible to avoid the creation of both unfriendly and friendly AI+. (Strong Evitability Thesis)

The two types of inevitability draw upon the distinction between intelligence and power to which I alluded earlier. Those who believe in strong inevitability think that intelligence carries power with it. In other words, any sufficiently intelligent AI will automatically have the ability to realise its goals. Those who believe in containment/leakproofing think it is possible to disentangle the two. Proponents of Oracle AI or Tool AI take this view.

The first type of evitability is the view adopted by the likes of Kurzweil. They are generally optimistic about the benefits of technology and intelligence explosions, and think we should facilitate them because they can be of great benefit to us. It is also the view that people in the Singularity Institute (and the FHI) would like to believe in. That is: they think we should try to control AI research in such a way that it brings about friendly rather than unfriendly AI+. Hence, I call it the AI-husbandry thesis. Its proponents think we can corral AI research in the direction of friendly AI+.

The arguments in this particular area are complex and multifarious. They range over a huge swathe of intellectual territory including international politics, the incentive structures of academic research, human psychology and foresightedness, technological feasibility and so on. It would be difficult to give a precis of the arguments here.

4. The AI-pocalypse or the AI-topia?
So there we have it: three main theses, each of which is supported by a number of arguments, and each of which can be viewed as a point on a scale of possible theses which in turn are supported by a variety of arguments. In other words, a complex web of premises and conclusions that is ample fodder for philosophical and scientific investigation.

Some combinations of views within this complex web are significant. The most obvious is the one in which the Explosion Thesis (or the AI+ Thesis), the Unfriendliness Thesis, and the Strong Inevitability Thesis are accepted. The combination of these three views is thought to imply the AI-pocalypse: the state of affairs in which AI+ take over and do something very bad. This could range from enslaving the human population, to exterminating it, to destroying the world, to many other imaginable horrors. The precise form of the AI-pocalypse is not important; what matters is that it is implied (or suggested) by the combination of the three theses.

Other combinations of views are more benign. For example, if you don’t think that AI+ is possible, then there’s nothing particularly significant to worry about. At least, nothing over and above what we already worry about. If you accept the likelihood of AI+, and the Unfriendliness Thesis, and the AI-Husbandry Thesis, then you have some reason for cautious optimism. Why? Because you think it is possible for us to create a friendly AI+, and a friendly AI+ could help us to get all sorts of things that are valuable to us. And if you accept the likelihood of AI+ and the Friendliness Thesis, you should be very optimistic indeed. Why? Because you think that the more intelligence there is, the more likely it is that valuable states of affairs will be brought about. So we’d better ramp-up intelligence as quickly as we can. This is the view of AI-topians.

I suspect, though I can’t say for sure, that cautious optimism is probably the prevalent attitude amongst researchers in this area, though this is modified by a good dose of AI-pocalypticism. This is useful for fundraising.

5. What do I think?
Honestly, I have no idea. I’m not sure I could even estimate my subjective probabilities for some of the theses. Still, here are my current feelings about each:

The Explosion Thesis: Good’s argument, along with Chalmers’s more recent refinement of it, seems plausible to me, but it relies crucially on the existence of self-amplifying capacities and the absence of intelligence and resource limitations. I couldn’t estimate a probability for all of these variables. I’d be a little bit more confident about the potential creation of AI+ for the simple reason that (a) I think there are many skills and abilities that humans have improved and will improve in the future (some through collective action), (b) artificial agents have several advantages over us (processing speed etc.) and (c) I see no obvious reason why an artificial agent could not (some day) replicate our abilities. So I would put my subjective probability for this at approximately 0.75. I should also add that I’m not sure that the creation of a truly general AI is necessary for AI to pose a significant threat. A generally naive AI that was particularly competent at one thing (e.g. killing humans) would be bad enough (though some may argue that to be truly competent at that one thing, general intelligence would be required; I’ll not comment on that here).

The Unfriendliness Thesis: I have criticised Bostrom’s orthogonality thesis in the past, but that was mainly due to some dubious argumentation in Bostrom’s paper, not so much because I disagree with the notion tout court. I am a moral cognitivist, and I do believe that benevolence should increase with intelligence (although it would depend on what we mean by “intelligence”). Still, I wouldn’t bet my house (if I owned one) on this proposition. I reckon it’s more likely than not that a highly competent AI+ would have goals that are antithetical to our own.

The Inevitability Thesis: This is where I would be most pessimistic. I think that if a particular technology is feasible, if its creation might bring some strategic advantage, and if people are incentivised to create it, then it probably will be created. I suspect the latter two conditions hold in the case of AI+, and the former is likely (see my comments on the Explosion Thesis). I’m not sure that there are any good ways to avoid this, though I’m open to suggestions.

At no point here have I mentioned the timeframe for AI+ because I’m not sure it matters. To paraphrase Chalmers, if it’s a decade away, or a hundred years, or a thousand years, who cares? What’s a thousand years in the scale of history? If it’s probable at some point, and its implications are significant, it’s worth thinking about now. Whether it should be one’s main preoccupation is another matter.

Saturday, December 22, 2012

This is the second part in my series looking at pornography qua speech act. The series is working from Mary Kate McGowan’s article “Conversational Exercitives and the Force of Pornography”. Before getting into the topic of this post, let’s review some of the material covered so far.

The feminist legal theorist Catharine MacKinnon has argued that pornography qua speech act silences and subordinates women. In other words, she argues that pornography does not merely cause the silencing and subordination of women, but is in fact itself an act of silencing and subordination. The distinction is subtle but important because it could take pornography out of the class of legally protected speech.

MacKinnon’s argument, such as it is, lacks the clarity and logical force of a typical philosophical argument. But then again, MacKinnon isn’t a philosopher (not primarily anyway). Luckily, some prominent philosophers have tried to recast MacKinnon’s argument in more perspicuous philosophical garb. One of them is Rae Langton. She has suggested that MacKinnon’s argument can be understood in terms of exercitive speech acts.

An exercitive speech act is one that sets the permissibility conditions on particular kinds of actions in particular domains. For instance, a club president who says “There shall be no more smoking on club premises!” performs an exercitive speech act. Taking this concept on board, Langton’s version of MacKinnon’s argument has two stages. First, she argues that pornographic speech could be exercitive in nature, specifically it could determine the permissibility conditions in the heterosexual sociosexual arena. Second, she argues that the kinds of permissibility conditions set out by pornographic speech could be such that women are silenced and subordinated.

Both stages of Langton’s argument are controversial, but McGowan criticised the first. She argued that Langton’s argument was flawed because it relied on the Austinian concept of the exercitive. According to this concept, in order to successfully and non-defectively perform an exercitive, a variety of conditions must be met. To be precise: (i) the speaker must intend (directly or otherwise) for their utterance to have an exercitive effect; (ii) the utterance must somehow convey the desired exercitive content; (iii) the listeners must be able to appreciate the exercitive nature of the utterance; and (iv) the speaker must have the requisite authority. The problem is that few, if any, of these conditions are met in the case of pornographic speech.

But McGowan is not deterred by this. She thinks it is possible for Langton’s argument to be salvaged. This is because there is an alternative type of exercitive speech act, one which is not beholden to the same success conditions as the Austinian exercitive, and which may justify the conclusion that pornographic speech silences and subordinates women. This is the “conversational exercitive”.

In the remainder of this post, I will explain what a conversational exercitive is and show how it might salvage Langton’s argument. It should be noted at the outset, however, that no strong conclusion about the actual effectiveness of the revised argument will be reached. Instead, we will be considering a proposal, an intriguing one for sure, but one with many crucial details yet to be worked out.

1. What is a Conversational Exercitive?
David Lewis said that a conversation is like a baseball game. Both are rule-governed enterprises, though the conversation is rather more loosely so (more on this anon). In both, the permissibility of future behaviour depends on what has gone on before. And both have a “score”, where this is understood broadly to include all those facets of the game that are relevant to its assessment and proper play.

I don’t know enough about baseball to explain this analogy coherently, so I’ll focus purely on conversations. Two examples will help to flesh out the three features discussed by Lewis (both are taken from McGowan’s article, as are all other examples in this post):

Mike’s Dog: Mike and I are having a conversation. He mentions that his dog has been to the vet. I later ask whether “the” dog is okay.

Italian Boot?: Donal and several of his friends are talking about geography. Donal says that Ireland looks like a teddy bear lying on its side. Nobody questions his claim. But later on Seamus says that Italy doesn’t look like a boot because “it’s squiggly on both sides and boots usually aren’t”.

In the first example, Mike’s mentioning of his dog alters the salience of certain facts in the conversation. Although there are many dogs in the world, the fact that he has specifically referred to his dog means it is okay for me to later refer to “the dog” without there being any confusion as to which dog I am referring to. This is interesting because it suggests that what I am entitled to say within the conversation is altered by what Mike has said. In other words, the conversational “score” is adjusted by Mike’s initial utterance.

Something similar is true in the second example. When Seamus says that Italy looks nothing like a boot because of its squigglyness, he alters the standards of accuracy that operate within the conversation. His claim is true enough, but only because the standards of accuracy have been raised. This affects future contributions to the conversation. Once again, the score has been adjusted.

Both examples show how conversations are loosely rule-governed enterprises in which the permissibility of future behaviour depends on what has gone before. In my conversation with Mike, I could not have used the expression “the dog” to refer to my own dog, or some dog other than Mike’s. Well, that’s not quite true. Of course, I could have done so, but confusion would certainly have ensued. If I wanted to introduce a new dog into the conversation, I should have done so and changed the salience facts myself. The same is true of Seamus. His assertion may be challenged on the grounds that the standards of accuracy are not that high in this conversation, but, equally, the participants could adjust to the higher standards of accuracy. That would make it impermissible to say that Ireland was like a sideways teddy bear later on.

Now we come to the crux of the matter. McGowan’s central claim is this: conversational contributions like those in Mike’s Dog and Italian Boot? are exercitive in nature. They are conversational exercitives. That is to say, like the Austinian exercitive, they enact permissibility conditions for future conversational contributions, but unlike the Austinian exercitive, they do not rely on the four conditions of success mentioned earlier.

McGowan argues for this in depth in her article; I’ll only sketch the argument here. First, conversational exercitives, unlike their Austinian brethren, don’t have to express their exercitive content. When the club president bans smoking, his utterance has to say as much, but Mike does not have to say anything like “all future references to ‘the dog’ must be references to my dog” for his conversational contribution to have its exercitive effect. Second, this implies that conversational exercitives are not sensitive to speakers’ intentions in the same way that Austinian exercitives are. Rather, they function in a largely covert and automatic way. Third, listeners need not consciously recognise the exercitive intent or purpose of the utterance. And fourth, authority is not an issue here. All conversational participants have the authority to alter the permissibility conditions in this manner.

2. Pornography as a Conversational Exercitive
The conversational exercitive is certainly an intriguing concept, one worthy of deeper analysis and consideration, but what good does it do here? Remember, the ultimate goal is to defend something akin to MacKinnon’s argument and we haven’t really begun to approach that goal yet. Several further steps must be taken. For starters, it needs to be shown that pornographic speech is a conversational contribution. Then, it needs to be shown that it is exercitive in nature. And finally, it needs to be shown that as an exercitive it serves to silence and subordinate women (at least some of the time). What for Langton was a two-stage argument is now a three-stage argument.

So let’s start with stage one. Is pornographic speech a conversational contribution? McGowan argues that it is. Echoing Langton, her claim is the following:

(1) Pornographic “speech” is part of the ongoing “conversation” in the sociosexual arena.

The inverted commas are intended to highlight the fact that we are not dealing with a paradigmatic conversation here; rather, we are dealing with something that is akin to a conversation in all important respects. Thus, the supporting argument for (1) is analogical in nature. It suggests that the properties typical of paradigmatic conversations are also typical of what goes on in the heterosexual sociosexual arena.

Like a paradigmatic conversation, the sociosexual arena is loosely rule-governed. At any given time, in a given sexual context, certain kinds of behaviour are permissible and certain kinds are not. This is what happens in conversation: certain things can be said and certain things cannot. Further, the rules that operate within the sociosexual arena adapt to fit the behaviour of the participants. If two sexual partners are turned on by yodeling, then yodeling becomes an appropriate form of foreplay, even if this would not be appropriate in many other sexual encounters. This suggests that the sociosexual arena is, in general (and ideally), a cooperative one: one in which the individuals coordinate their activities for mutual gain. Many linguistic theorists, such as Grice, hold that this is true of conversations too. If we accept that the sociosexual arena is akin to a conversation, the notion that pornography is a contribution to that conversation becomes much more palatable.

That brings us to the next stage of the argument. Once we accept that pornographic speech is a contribution to the sociosexual “conversation”, we need to show that it can be exercitive in nature. At this point McGowan’s argument gets rather sketchy (as she herself acknowledges). But one could well imagine that pornography has an exercitive function. Suppose that two sexual partners watch pornography together, that the pornography repeatedly depicts people engaging in sexual act X, and that neither partner objects to this act (perhaps they even signal approval). One could then say that the pornography enacts permissibility conditions for sexual behaviour. The pornography says that act X is acceptable or permissible, and it becomes difficult for the two partners to deny this in the future. Thus:

(2) Pornographic speech, on at least some occasions, functions as a conversational exercitive within the sociosexual arena, i.e. it enacts permissibility conditions for sexual behaviour.

But even allowing for this possibility, does it enact permissibility conditions that silence or subordinate women? The argument gets even sketchier at this point, but again McGowan suggests that it is possible that, on at least some occasions, it does. For instance, pornography might repeatedly “say” that whenever women say “no” to certain kinds of sexual advance they are really coyly signalling sexual acceptance. It may thus become impossible for some women to refuse sexual advances in some contexts. This would silence them because they would be unable to perform speech acts that they ought to be able to perform.

This could also hold true for subordination. Again, pornographic material might repeatedly signal that women are permissibly treated in a subordinating and dehumanising manner, and this might become part of the “score” of the sociosexual conversation. This gives us:

(3) The permissibility conditions enacted by pornographic speech may, on at least some occasions, silence and subordinate women.

Which allows us to reach the conclusion:

(4) It is possible that pornography, qua speech act, serves to silence and subordinate women.

Note how weak the conclusion here actually is. It does not say that pornography does silence and subordinate women, it merely says it is possible for it to do this. A lot more would need to be done to satisfy the anti-porn position held by the likes of MacKinnon. But that’s in keeping with the “modest” (her words) aim of McGowan’s article. She is merely introducing a proposal that might work in MacKinnon’s favour. She is not actually defending MacKinnon’s view, nor is she saying that pornography should be banned or otherwise restricted.

Still, McGowan thinks there might be some flaws in her argument even when understood in this weak, proposal-like form. She addresses three of them. Let’s close by looking at these.

3. Is the Proposal Flawed?
The first objection is the mere-fiction objection. It has been pointed out by others that whenever a conversational participant tells a story or joke, the normal illocutionary force of the utterances that make up that joke seems to be suspended. Thus, if I say “A rabbi, a priest and a nun walked into a bar…” as the preface to a joke, I am not taken to be genuinely asserting this to be the case. In other words, I am not claiming that three such people did walk into the bar, and I cannot be held responsible if the claim turns out to be false.

But if the illocutionary force of an utterance is suspended when telling a joke or story, why is this not true of the exercitive force of pornographic speech? If pornography is mere fiction, then maybe it doesn’t have the exercitive force of other non-fictional contributions to the sociosexual conversation.

(5) Pornography is mere fiction; hence it does not have the illocutionary force typical of non-fictional contributions to the sociosexual conversation.

McGowan says this is wrong. Although it is true to say that some illocutionary forces are suspended during the telling of a story, the exercitive force need not be. Furthermore, the exercitive force can extend beyond the fictional domain in which it is first presented. She gives a long example of this, which I’ll abbreviate here. Suppose I start telling a joke about a chicken pecking at a Guinness tap in a bar. By doing so, I have altered the salience facts of the conversation. This renders certain future conversational contributions permissible. For example, it is suddenly okay for my conversational partner(s) to start talking about chickens and their like or dislike of them. The joke enacts permissibility conditions for future conversational contributions and these permissibility conditions hold outside of the joke-telling portion of the conversation.

(6) Purely fictional utterances can still have an exercitive effect within a conversation, and that exercitive effect can extend beyond the fictional realm of the utterance.

Of course, that’s just one example. McGowan never shows how the same is true of pornographic speech. Indeed, this is a general problem with her proposal. Even if it is sketchy, and intended merely to provoke further debate and discussion, it is really opaque about the mechanisms through which pornographic speech achieves its exercitive effect. Or, as in this case, how it manages to avoid having a purely fictional effect.

A second, and arguably more serious, problem is that the permissibility conditions enacted by conversational exercitives are extraordinarily weak. Go back to the Mike’s Dog example from earlier on. If Mike first talks about his dog, and I later refer to “the dog” without intending it to refer to Mike’s dog, the claim is that I have violated some permissibility condition in the conversation. But so what? Mike could easily adjust to or accommodate my utterance. For instance, if I say “the dog slept in my room last night”, Mike will realise that I’m not referring to his dog since he knows his dog slept in the shed last night. Like any good conversational partner, Mike will be inclined to cooperate with me and adjust the rules accordingly. Hence, I don’t really violate any rule. Not in a serious way anyway. Couldn’t the same be true of pornography?

(7) The permissibility conditions enacted by conversational exercitives are extremely weak: it is very difficult to violate them in a serious manner because the rules of the conversation constantly adjust to accommodate the behaviour of the parties.

McGowan responds by saying that, although this may sometimes be true, it doesn’t rule out the serious violation of permissibility conditions in some conversational contexts. This is because the violation may go unnoticed at first, which then creates tensions or problems downstream. For instance, when Mike and I start talking, respectively, about “the dog” and its various antics, we might not realise that we are talking about different dogs. In this case, the violation of the rule goes unnoticed and problems will ensue at a later stage as we become more and more confused about the dog and what it got up to. How this might work in the case of pornography is another question, but this is an interesting idea nevertheless.

(8) It is possible for there to be a serious violation of some of the permissibility conditions enacted by conversational exercitives, e.g. as when the initial violation goes unnoticed.

The final, and related, problem is that the kinds of permissibility conditions enacted by conversational exercitives are easily reversed. Thus, to use the dog example again, though our repeated use of the phrase “the dog” to refer to different dogs might violate permissibility conditions within the conversation we are having, Mike could easily rectify the problem by taking some time out to clarify that the phrase actually refers to his dog. Again, couldn’t the same be true of pornography? In other words, even if the pornography changes the permissibility conditions within the sociosexual arena, couldn’t one of the participants reverse those changes by enacting a new set of permissibility conditions?

(9) Conversational exercitives are easily reversed, thus any exercitive effect that pornographic speech might have can be overturned by future conversational contributions.

In response to this, McGowan argues that some conversational exercitives might be hard to reverse. For example, in the Italian Boot? case given earlier, Seamus changed the standards of accuracy within the conversation by asserting that Italy didn’t look like a boot because of its squiggly outline. It has been observed by other philosophers of language (Lewis is specifically mentioned) that once standards of accuracy are raised within a conversation, it is difficult to reverse them. (I don’t know why.)

In this instance, McGowan argues that something similar is true in the sociosexual arena. Specifically, she says that once a formerly taboo sexual practice (e.g. anal or oral sex) becomes an accepted part of the sociosexual conversation, it becomes difficult to revert to the taboo. Pornographic speech may play an important role in breaking down these sexual taboos. Similarly, if it really is true that pornographic speech makes it difficult for some women (in at least some contexts) to refuse sexual advances, this might be very difficult to reverse. If “no” is really taken to mean “yes”, then it’s difficult to know what someone could do to reverse that state of affairs. Repeatedly saying “no” doesn’t solve the problem.

4. Concluding Thoughts
That brings us to the end of McGowan’s article. To briefly recap, McGowan has argued that it is possible that pornography qua speech act silences and subordinates women. This is because pornographic speech might be exercitive in nature: it might enact permissibility conditions within the sociosexual arena. To support this claim, McGowan introduced the concept of the conversational exercitive, which is distinct from the Austinian exercitive, and can more plausibly be used in this debate.

Two concluding thoughts occur to me. First, although McGowan’s proposal is pretty sketchy in this article — particularly about the mechanisms through which pornographic speech replicates the effect of the conversational exercitive — she has not been idle since it was originally published. She has written several other articles that expand upon these basic ideas. I have not read those articles. It could well be that the necessary detail is found in them.

Second, if McGowan is right to view the sociosexual arena as akin to a conversation, and to view pornography as a contribution to that conversation, it is difficult to see why pornography should be singled out for opprobrium. Many other contributions to sociosexual conversations — e.g. actual conversations about sex, the publication of sex tips and advice columns etc. — could be exercitive. What’s more, those contributions could well have the silencing and subordinating effect alluded to by the likes of MacKinnon. Whether that’s a sensible thought or not, I leave to the reader to decide.

Thursday, December 20, 2012

Pornography is typically viewed as a form of legally protected speech. But could it, as a form of speech, actually constitute a harmful act? Before even attempting to answer that question a brief divagation is required.

If you’ve ever studied feminist legal theory (and really, who hasn’t?), the name of Catherine MacKinnon should be familiar to you. She is one of the so-called “radical” feminists, famous for reshaping sexual harassment laws, developing a Marxist theory of feminism, and highlighting the use of forced impregnation in genocide. She is also infamous for her campaign against pornography in the 1980s and 1990s, a campaign waged alongside fellow radical feminist Andrea Dworkin.

As part of this campaign, MacKinnon tried to argue that pornography should not be legally protected speech. On the contrary, pornography, as a form of speech, was itself harmful and discriminatory to women. The argument is most clearly presented in her 1993 book Only Words. In it, MacKinnon essentially argues that the “speaking” of pornography is itself harmful and discriminatory, not merely something that contributes to, causes or encourages harm and discrimination (though it may do those things too).

MacKinnon’s argument is a legal/political one, not a true philosophical one, and her work tends to provoke negative reactions due to its often sensationalistic claims and turgid academic style. I have certainly never enjoyed reading anything she has written. But other philosophers have attempted to develop her argument with greater sophistication and philosophical clarity. I want to look at some of those attempts here.

In doing so, I will be guided by Mary Kate McGowan’s article “Conversational Exercitives and the Force of Pornography”. In the article, McGowan considers Rae Langton’s version of MacKinnon’s argument, one that takes advantage of the tools of speech act theory. Though ultimately finding Langton’s argument lacking, McGowan uses it as a springboard for developing her own argument.

I want to take you through the stages of McGowan’s analysis in this series of posts. I do so for two reasons. First, the creative use of speech act theory in this debate fascinates me as I’m interested in the use of speech act theory in legal philosophy. Second, the ethics of pornography is a controversial topic, one that’s always sure to provoke debate.

In the remainder of this post, I will do three things. First, I’ll give a brief primer on speech act theory, giving particular attention to the concept of an Austinian exercitive. Second, I’ll outline and explain Langton’s version of MacKinnon’s argument. And third, I’ll present McGowan’s critique of that argument.

1. Speech Acts and Austinian Exercitives
The basic presumption of speech act theory is that words and sentences don’t merely report how the world is, they also do things to the world. So, for example, if I say “I accept your offer” to someone offering me a car for sale, I thereby create a legally binding contract. My utterance has done something: it has changed the nature of the relationship between me and the person offering the car for sale. How does this happen?

In his classic discussion, How to do Things with Words, J.L. Austin helped to answer this question by distinguishing between the three forces of an utterance: (i) the locutionary force; (ii) the illocutionary force; and (iii) the perlocutionary force. The locutionary force is simply the proposition asserted. The illocutionary force is the action constituted by the utterance. And the perlocutionary force is the effect that the utterance actually has on its audience. To give an example, if I say to my friend “I promise to pick you up at the train station”, then the proposition asserted is that I promise to pick them up, the action constituted by the utterance is that of promising, and the effect could be any number of things, e.g. that they believe me, or that they are gratified/reassured.

For our purpose, it is the illocutionary force that is key. For if it is true that utterances are themselves a type of action, MacKinnon’s argument has a foundation on which to build. Consider an example. Suppose I hire a hitman to kill my wife, saying to him “I hereby hire you to kill my wife”. The illocutionary force of that utterance is a promise to pay him in return for killing my wife. This is itself immoral and illegal: it is immoral/illegal to promise to pay someone to do something like this. That is true irrespective of the actual causal effect of my utterance. The hitman may or may not kill my wife. That would not make my speech act any more or less immoral. This is the kind of claim MacKinnon is trying to make about pornography.

But the foundation is not enough. We need a plausible reason to think that the pornographic speech act is immoral. This is where Langton’s argument comes in. She claims that pornography is an exercitive speech act and that as an exercitive speech act it constitutes a form of harm to and discrimination against women. This raises the question: what the heck is an exercitive speech act?

An exercitive is a particular type of utterance which has the effect of setting out the permissibility conditions on actions in a particular kind of environment. It is almost impossible to understand this idea in the abstract, so an example is in order. Imagine that I am the president of a private club. During one of the club meetings, I declare that “smoking shall not be permitted on club premises any more”. I have just performed an exercitive speech act. I have made it the case that smoking is no longer permitted in the club. (Call these “Austinian exercitives” to differentiate them from another type of exercitive that will be introduced later in the discussion).

Note, however, that things are not that straightforward. In order for an exercitive speech act to be successfully performed, a number of conditions must be satisfied. First, the utterance must somehow express the exercitive content. It need not expressly say that “X is no longer permitted”, but it needs to communicate the message that X is no longer permitted. Thus, I could say “No more smoking” or “I am against smoking” and this may communicate the same exercitive content, without expressly saying that “Smoking is no longer permitted”.

Second, in order to successfully perform an exercitive, the speaker must have the authority to determine what is permissible and impermissible. That is why my being president of the club is important in the example just given. If I were not president, I would not have the authority to render smoking impermissible. Official title is not always needed for authority. Parents, for example, have the authority to set permissibility conditions for their children, without needing officially recognised authority.

In addition to these success conditions, a number of conditions must be met in order to prevent the exercitive from being defective. This success/defectiveness distinction is common in speech act theory. If I say “There is a pink unicorn in my garden”, then I have successfully performed an assertive speech act. But the act is defective because the assertion is not true. Likewise, an exercitive may be defective if, for example, no one understood what the speaker was trying to communicate, or if it never has the intended effect.

We’ll return to some of these ideas later when looking at McGowan’s critique of Langton. For now, let’s move on and see exactly what Langton’s argument is.

2. Langton’s Speech Act argument against Pornography
Langton’s argument — as filtered through the lens of McGowan’s article — is that pornography unjustly subordinates and silences women. In reality then, Langton is presenting two separate arguments, the first defending the subordination conclusion, and the second defending the silencing conclusion. Furthermore, she divides each of the arguments into two phases. In the first phase, she argues that pornography is an exercitive speech act i.e. one that enacts a set of permissibility conditions in a particular domain. In the second phase, she argues that these permissibility conditions have the effect of silencing or subordinating women.

Let’s look at the subordination argument first. As McGowan describes it, the argument runs roughly like this:

(1) Pornographic “speech” enacts permissibility conditions within the heterosexual sociosexual arena (i.e. it is an Austinian exercitive).

(2) The content of those permissibility conditions is such that women are ranked as inferior, discriminated against, and deprived of important powers.

(3) Any practice that unfairly ranks women as inferior, discriminates against them, or deprives them of important powers subordinates them.

(4) Therefore, pornography unjustly subordinates women.

The first premise is contentious. Indeed, it is what McGowan explicitly criticises in the remainder of her article. But something can be said in its favour. The idea is that heterosexual pornographic material, through its depiction of male-female or female-female sexual contact, stipulates what kind of conduct is appropriate in heterosexual sexual relationships. In other words, it is effectively saying “This is how things are to be done! No other way is permissible”. It is thus an Austinian exercitive, covering the permissibility conditions within the heterosexual sociosexual arena.

The second premise then takes up the baton by saying that the actual content of those permissibility conditions is such that women are ranked as inferior, discriminated against, and deprived of important powers. The idea here is that the actual depiction of sexual relationships in heterosexual pornography is such that it is deemed permissible to treat women inferiorly or to discriminate against them. This too is a contentious claim, depending as it does on an interpretation of what heterosexual pornographic materials (typically) “say”. No doubt many would dispute the interpretation, but I won’t be getting into that issue here.

The third premise is simply a general principle classifying certain kinds of conduct as subordinative. I don’t see anything particularly contentious about this, although a word or two must be said about the use of “unfairly” in this principle. It is possible that we can fairly subordinate or discriminate against certain people (psychopaths or criminals, for example). Certain conditions are met in those cases that render such treatment acceptable. But I think everyone would agree that those conditions could not be met in the case of women. If that’s right, and if premises (1) and (2) are acceptable, then the argument as a whole goes through.

What about the silencing argument? It follows the same general pattern as the subordination argument, but is rather more complex. Instead of focusing directly on permissibility conditions for the treatment of women, it claims that pornography is such that it enacts the success conditions for particular kinds of speech. These conditions work in such a way that they prevent women from saying the kinds of things they would like to say.

To set out the argument in full:

(5) Pornographic “speech” sets out the success conditions for certain kinds of speech act (“success conditions” being a type of permissibility condition specifically related to speech, they determine what it is permissible or possible for certain people to say).

(6) These success conditions stipulate that there are certain kinds of speech act that women cannot perform.

(7) Women ought to be able to perform these speech acts.

(8) Therefore, pornography unjustly silences women.

This argument is a lot trickier to explain, but its starting premise (5) is broadly equivalent to that found in the subordination argument. The idea, once again, is that the depiction of women in pornography enacts a set of permissibility conditions concerning what they can say and do.

The specific claim, in premise (6), is that these permissibility conditions are such that women are not allowed to perform some kinds of speech act. The classic example, and one used by Langton, is that of saying “no” to certain types of sexual advance. In at least some pornographic material, women are depicted in such a way that it is deemed impermissible (perhaps impossible) for them to deny these advances. Or so the argument goes. Again, this is highly contentious, depending on an interpretation of what pornography does or does not say. But if one can get over that contentiousness, the remainder of the argument should go through.

The net result is that MacKinnon’s overarching conclusion — that pornography unjustly silences and subordinates women — is defended. This is depicted in the diagram below.

3. McGowan’s Critique
Is any of this remotely persuasive? As I say, the second premise of each argument is highly contentious. But since there is such a huge volume of pornographic material out there, one could probably make a good case that at least some (maybe a lot) of it depicts women in such a manner that they are deemed inferior or incapable of denying sexual advances. The more philosophically interesting premise, and the one McGowan focuses on, is the first: is it really true to say that pornographic speech performs an Austinian exercitive?

An initial worry would be that pornography is not really a form of speech, or rather not classifiable as an utterance. This can probably be dismissed on both pragmatic and conceptual grounds. Pragmatically, one can simply argue that pornography is legally classified as a form of speech by many of its advocates. This allows them to avail of free speech exemptions from regulation. Langton and MacKinnon are merely playing on the advocates’ territory in this regard. Conceptually, one can argue that it does make sense to view certain kinds of artistic or cinematographic output as “speech”, something that is “spoken” by its producers and distributors to an audience or group of listeners.

A more serious set of worries is that pornography, even though it is a type of speech, cannot be exercitive in nature. This is the argument that McGowan pushes. To make it, she returns to the notion of success and defectiveness conditions in the analysis of speech acts. Take the earlier example in which I, as club president, banned smoking on club property. In that example, I was clearly performing an exercitive speech act. But this was only because: (a) I directly or indirectly intended for my speech to be exercitive in nature; (b) the semantic content of my utterance conveyed my intended meaning; (c) the relevant audience would have been aware that this was my intention; and (d) I had the requisite authority to perform the exercitive (I was club president after all).

The problem is that these conditions are not typically met in the case of pornography. Maybe some producers of pornography do intend to silence and subordinate women, but many may not. Even if they did have that intention, the consumers of pornography would probably not recognise it, largely because the semantic content of pornography is exceptionally opaque: it is not an efficient means of conveying the intended exercitive. Finally, and perhaps most fatally, producers and distributors of pornography do not have the authority to enact permissibility conditions concerning heterosexual behaviour or women’s speech acts. And if someone thinks that they do, they are severely mistaken. (Note: this leaves open the possibility that pornography may indirectly or subconsciously have the effects identified by MacKinnon. This, however, transforms the argument into a purely causal one, not one based on the intrinsic flaws in the pornographic speech act.)

Although a failure to meet one or two of these conditions might be tolerable, and might still result in an exercitive speech act being performed, the failure to meet all of them would be fatal to Langton’s analysis. That gives us the following:

(10) In order to successfully and non-defectively perform an Austinian exercitive, a series of conditions must be met: (i) the speaker must intend (directly or indirectly) for their utterance to have the exercitive effect; (ii) the semantic content of the utterance must convey that intention; (iii) the audience must be able to appreciate the exercitive intention; and (iv) the speaker must have the requisite authority.

(11) In most cases, these conditions are not met in the case of pornographic speech.

(12) Therefore, it is unlikely that pornographic speech enacts the permissibility conditions for heterosexual behaviour or for the kinds of speech act women can perform.

This has been added to the argument diagram below.

Where does that leave us? As McGowan sees it, the flaw in Langton’s argument is its reliance on the Austinian conception of the exercitive. She thinks it is possible for the argument to be made with an alternative conception of the exercitive, something she calls the “conversational exercitive”. This would avoid the problems just highlighted, but may have some problems of its own. We’ll consider McGowan’s proposal in part two.

Tuesday, December 18, 2012

This series is about implicature, and the role it might play in the interpretation of the law. The necessary theoretical background was sketched in part one. In this part, I turn to consider the legal ramifications. A nice way to introduce this topic is to consider a couple of examples. So here goes.

The Ninth Amendment to the U.S. constitution says the following:

The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.

The use of the phrase “others retained by the people” seems to imply that people have other rights and that the constitution can’t be used to override them. Needless to say, much debate has ensued over the years as to what these other rights might be, and how they should be used, if at all, in constitutional jurisprudence. Originalists such as Randy Barnett argue that the rights in question are natural libertarian-esque rights; but liberal justices, such as William O. Douglas, have used the implication to support their preferred flavour of rights too.

Consider another example, this time drawn from the Irish Constitution (Bunreacht na hEireann). Articles 40.3.1 and 40.3.2 state:

40.3.1° - The State guarantees in its laws to respect, and, as far as practicable, by its laws to defend and vindicate the personal rights of the citizen.

40.3.2° - The State shall, in particular, by its laws protect as best it may from unjust attack and, in the case of injustice done, vindicate the life, person, good name, and property rights of every citizen.

The implicature here is somewhat more subtle, but the phrases “personal rights” and “in particular” suggest that (a) the constitution protects personal rights and (b) these rights are not exhaustively listed in Article 40.3.2°. In other words, the implication here is similar to that in the Ninth Amendment of the U.S. Constitution: there are more rights protected by the constitution than explicitly mentioned. This implication became a cornerstone of Irish constitutional jurisprudence in the latter half of the 20th Century when Irish courts identified and protected a set of “unenumerated rights”.

These two examples have much in common. They both involve constitutional provisions that imply more than they say. That is: they suggest that there are more legal rights and correlative duties than are explicitly listed in the constitutional text itself. But should such implicatures really play a role in the interpretation and application of the law? Should the U.S. courts really make use of the unstated rights in the Ninth Amendment? Should the Irish courts really have developed a doctrine of unenumerated rights? These are significant questions and the answers to them make a big difference to the life of the law.

In her article, “Law and Conversational Implicatures”, Francesca Poggi argues that implicatures have a limited role to play in the interpretation and application of authoritative legal acts such as constitutions and statutes, but that they may have a more extensive role to play in the interpretation of “private acts of autonomy” such as are found in contract law. In the remainder of this post I want to discuss her arguments.

1. Implicature and Authoritative Legal Acts
Let’s start with ordinary conversation, something explored in more detail in part one. In ordinary conversation, implicature is both common and necessary. We imply more than we say because we adhere (in general) to Grice’s cooperative principle: we say no more than needs to be said, in keeping with the purpose and aims of the conversation in which we are engaging. In this context, it is both right and proper for the speaker and listener to appeal to the implications of their speech when trying to figure out what is really being said.

But what about in the law? A simple argument in favour of implicature is that the production and interpretation of authoritative legal acts is directly analogous to an ordinary conversation. The legislatures and drafters of legal texts are the “speakers” and the interpreters and appliers of the law are the “listeners” (who, in turn, pass on the message to the people affected by the law). Thus, why shouldn’t the Irish or American courts appeal heavily to implied meaning when interpreting their respective constitutions? They would do so if they heard the same utterances in everyday conversation, and the law is like those conversations.

This suggests the following analogical argument:

(1) In ordinary conversations, implicature should play a significant role in the interpretation of what has been said.

(2) The production and interpretation of authoritative legal acts is like an ordinary conversation in all important respects.

(3) Therefore, probably, implicature should play a significant role in the interpretation of authoritative legal acts.

Simple, right? Not so fast. A tsunami of objections has probably just flooded into your brain. One of them is particularly obvious. If the production of legal texts is like an ordinary conversation, it is a very unusual conversation, since the “listeners” don’t seem to play any role in it. In an ordinary conversation, the listener gets to respond to the speaker, sometimes asking them follow-up questions. This can be a boon when it comes to clarifying the background context and the relevant implicatures. But this is not possible in the case of legal texts, at least not directly. They are simply spoken, and the listeners have to make of them what they will. Indeed, sometimes the speaker is temporally distant from the listener, typically by a measure of years, and occasionally by a measure of centuries. (Note: there may be some scope for “back-and-forth” in the legal conversation: courts may highlight problems with a legal text and legislatures may respond, but this isn’t necessarily the same thing since even if there are problems, the text still needs to be interpreted and applied.)

There are ready responses to this objection. One could argue that there is no need for a listener to talk back before implicature becomes acceptable. Monologues and directives are features of ordinary conversation, and no one doubts that implicature has a role to play in their interpretation. But this is too easy. As Poggi points out, because judges and litigants are not immediate participants in the legal conversation, they must rely heavily on imperfect secondary information to work out what the background context to the legal texts is/was. But this background context is the soil in which implicatures grow. If they don’t know what it is, then it’s not right for them to make use of implicatures.

She explains with an analogy of her own. Imagine I run up to you on the street and say “I’m really stressed out, I need to calm down”. You know nothing about me, or about what has stressed me out. But I am an inveterate smoker; everyone who knows me knows this. When I say “I need to calm down” I expect you to understand the implication: “I want a cigarette”. Surely, in this context, my expectation is unreasonable? You and I do not share the necessary background context that makes such an implicature both obvious and reasonable. Fair enough? But then isn’t the legal “conversation” basically like this? The drafters of the constitution can’t reasonably expect temporally distant listeners to know all the details of the context in which they speak. And if they can’t expect this, then they can’t expect implicature to play a significant role in how their texts are interpreted.

That gives us the following:

(4) The production and interpretation of authoritative legal acts is not like an ordinary conversation because (a) the listeners (judges and litigants) are not true parties to the conversation and thus rely on imperfect secondary information to flesh out the background context; (b) knowledge of this context is needed for working out implicatures; and (c) this being so, it is unreasonable for legal “speakers” to expect the “listeners” to understand the implicatures of what they say.

This is an interesting argument, and it has a lot of similarities with some of the arguments proffered by constitutional originalists in the U.S. I looked at those arguments in a previous post.

Although interesting, I think Poggi has a more decisive objection to the use of implicatures in the interpretation of authoritative legal texts. This objection also relies on a disanalogy between the ordinary conversational context and the law, and to understand it properly we need to go back to Grice’s account of implicature. As noted, Grice felt that implicatures arose from the fact that the cooperative principle governed everyday conversation. The participants in those conversations want to communicate and want to be understood. Consequently, they need not always say everything they mean: they can rely on the other party to fill in the blanks. So, for example, when I say “Could you pass the salt?” you don’t, unless you want to be smarmy, reply by saying “I certainly have the physical capacity to do so”. You understand that I was not really asking about your ability to pass the salt; rather, I was making an indirect request.

But as Poggi points out, this is definitely not true of the conversations between law-makers and their subjects. That context is not a purely cooperative one, but rather a highly strategic one, one in which cooperation is just one of many strategies available to the subject. Legal subjects don’t necessarily want to be nice, and cooperate by following the law. Indeed, they will often look for ways to avoid the reach of the law. As Oliver Wendell Holmes said in his famous lecture “The Path of the Law”, to truly understand the law, we need to look at it from the perspective of the “Bad Man”, the one who wants to break the law and get away with it.

This being so, the “speakers” of authoritative legal texts can’t rely on the good faith of their “listeners”. If the background context is uncertain, as it typically is, they can expect their listeners to twist the evidence to support whichever version of that context is consistent with their interests. So we have another objection to (2):

(5) The production and interpretation of legal texts is not like an ordinary conversation because the speakers cannot rely on the good faith of their listeners: the context is a strategic one, not a purely cooperative one.

I think this is an appealing argument, but it leaves me somewhat cold. Although I agree that legal subjects are often non-cooperative, and that lawyers will present evidence that supports whichever view of the context most suits their client, I’m not sure that this reduces the role for implicature. It seems to me that, despite their best efforts, authoritative legal texts will imply more than they say (as seems clear in the case of the US and Irish constitutions). Are courts simply to ignore this? Or will they just have to muddle along, figuring out the most appropriate implicatures they can, based on relevant normative arguments and historical evidence?

2. Implicature and Private Acts of Autonomy
So much for constitutions and statutes, but what about private acts of autonomy such as contract? Here, the analogy with ordinary conversations is cleaner and less contentious. But there are still some difficulties that need to be worked out. Consider first simple, everyday contractual agreements, such as those created when you go into a store, pick an item off a shelf, and purchase it at the till. Some contract theorists don’t even view these as contracts, but assuming they are, they seem like the kinds of contracts in which the rules of implicature would apply (if they apply at all). Such contracts are negotiated face to face and in conversation.

Things get trickier when contracts are put in writing. When this happens it is typically to create greater certainty. One might be inclined to think that this desire for certainty stems from the realisation that the cooperative principle does not always govern such relationships. Sometimes people are trying to “pull a fast one” or take advantage of one another. Thus, perhaps the kinds of conversations that lead to the creation of contracts are more like the conversation between the law-makers and their subjects, than they are like ordinary everyday conversations.

Perhaps. But Poggi has another interesting argument to make here. As she puts it, one of the key elements of the background context to the contractual conversation is the legal system in which that conversation takes place. Many of those legal systems appeal to something they call the principle of bona fides or good faith. According to this principle, which derives from Roman Law, contracts are to be interpreted on the assumption that the parties negotiated with one another in good faith. Thus, they are assumed not to be “pulling a fast one” on each other. This principle effectively overrides any empirical concerns we may have about strategic manipulation in contract negotiation, and instead demands that we assume good faith. This may be unrealistic, but the norm dominates reality, and the parties will suffer if they try to be manipulative.

Poggi gives an example. Suppose the following dialogue took place during the negotiation of a contract for a horse:

Offeror: I would like to buy your horse, but first I want to know if there’s anything wrong with it?

Offeree: Well, the horse does suffer from weak hooves.

Following the principle of good faith, we would say that the offeree’s statement implies that there is nothing else wrong with the horse. This is despite the fact that the statement is consistent with there being many other things wrong with the horse. And despite the fact that the offeree may be deliberately using this phrase with this in mind. The principle of bona fides will override any mala fides on the part of the offeree. If it turns out that other things are wrong with the horse, and they were known to the offeree, he or she will be liable for misrepresentation. The court will assume that their statement implied that nothing else was wrong with the horse.

3. Conclusion
To sum up, implicature may have some role to play in the interpretation of legal texts, but the role could be limited. Authoritative legal texts could be analogised to ordinary conversations, but there are some crucial differences. First, the “listeners” (judges and litigants) are not true participants in the conversation. Consequently, the “speakers” cannot reasonably expect the listeners to share the background context needed to flesh out the relevant implicatures. Second, the “conversation” between law-makers and subjects is not a cooperative one, and cooperation is often needed to make implicatures work.

Things are different when it comes to private acts of autonomy such as contracts. Here, the principle of good faith may be part of the legal context in which the contract is negotiated. That essentially makes Grice’s cooperative principle part of the context of contractual conversation, which in turn facilitates implicature.

A gangster walks into a local restaurant. The restaurant has been doing well recently, and the local criminal gangs are aware of this fact. The gangster walks over to the restaurant owner, stares conspicuously around the room, and says “This is real nice place you got here. It would be a shame if something happened to it.”

Ostensibly, the gangster’s statement is one of fact: depending on what the “something” in question is, it may indeed be a shame if it happened to the restaurant. But of course no one reading the statement really thinks it is as innocuous as that. Everyone knows that it constitutes a thinly-veiled threat. Why is this?

The answer lies in something known as conversational implicature, which is the fancy label given to the mundane phenomenon whereby the semantic content of a particular utterance or sentence is not exhausted by the meaning of the words that make it up. Which is to say: it is possible for an utterance to have an implied meaning, which is just as important and just as readily understood as its explicit meaning. Indeed, sometimes it is more important than the explicit meaning, as in the case of the gangster’s veiled threat: if the restaurant owner didn’t pick up on the implied meaning, he could create problems for himself.

While the phenomenon of implicature is mundane, it can lead to problems in particular contexts. One of those contexts is the law. In a certain sense, laws are created through speech acts. Legislatures and legal officials “speak” the law in the form of both written and oral utterances. Is it possible for those utterances to imply more than they explicitly say? And if so, is it acceptable for judges to appeal to those implied meanings when interpreting and applying the law?

Over the next two posts I want to look at these questions, and I do so with the help of Francesca Poggi’s article “Law and Conversational Implicatures” (which appears in the impressively obscure International Journal for the Semiotics of Law). In this post, I kick things off by outlining Grice’s classic theory of conversational implicature, before then considering the distinction between generalised and particularised implicatures. In the next post, I’ll address the application of these concepts to the law. As we’ll see, Poggi thinks that implicature has a limited role to play in the interpretation of statutes and other “authoritative legal acts”, but it could have a more expansive role to play in the interpretation of contracts and other “private acts of autonomy”.

1. Grice on Conversational Implicature
The classic model for understanding how conversational implicature works was developed by the philosopher Paul Grice. His model is built around something he calls the cooperative principle. This principle allegedly governs most ordinary conversational exchanges, and is constituted by a number of maxims. Let’s work our way through Grice’s account in a bit more detail.

Let’s start with a model utterance which will illustrate the phenomenon of implicature:

(a) “I am reading John’s book.”

Although perfectly natural as a linguistic construct, this utterance is ambiguous. If we focused purely on the semantic content of the words that it contains, we would be left with at least two plausible interpretations. Either I am saying that I am reading a book that was written by John, or I am saying that I am reading a book that is owned or possessed by John. Nothing in the words tells us which of the two meanings should apply.

If the utterance really is ambiguous in this manner, then one is left with the burning question: why say things this way? Why is an utterance like this perfectly natural even though it has two possible meanings? The answer lies in the cooperative principle. According to Grice, in ordinary conversational exchanges, we all tend to adhere to the following principle:

Cooperative Principle (CP): Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.

In essence, the cooperative principle holds that whenever you make a contribution to a conversation, you should say whatever is required to convey your intended meaning, but no more than is required. In other words, if the context in which the conversation takes place makes it clear that utterance (a) is a reference to a book that John has written, then utterance (a) is the acceptable way in which to convey that meaning, despite the latent linguistic ambiguity. The participants in the conversation will be able to work out the implication for themselves; no more needs to be said. For example, suppose we are attending a book launch, to celebrate John’s recently published book. You find me thumbing through the pages of a book, and ask me what I am reading. I reply by saying “I am reading John’s book”. In this context, it’s perfectly clear which of the two possible meanings applies.

Grice unpacks the cooperative principle by breaking it down into a series of maxims. They are as follows:

Maxims of Quantity:

Be as informative as is required.
But be no more informative than is required.

Maxims of Quality:

Do not say what you believe to be false.
Do not say that for which you lack evidence.

Maxim of Relation:

Be relevant.

Maxims of Manner:

Avoid obscurity.
Avoid ambiguity.
Be brief.
Be orderly.

Now some of these maxims seem a little unhelpful, particularly those counseling against ambiguity, since the phenomenon of implicature, at least as illustrated by the example of utterance (a), seems to arise even though they are violated. But in many ways that’s the whole point. As Poggi notes, implicature depends both on the meaning of the words used and on the maxims that apply to the particular conversational context. But which maxims apply in which context is variable. Thus, in some contexts the avoidance of ambiguity is trumped by the efficiency and brevity of communication.

A good example of this is sarcasm. If I say to you “that was a real funny joke”, it’s likely that I’m being sarcastic. This will usually be obvious, thanks to both the context (no one laughed) and the manner of speech (inflection and tone). This is a classic example of implicature, since the implied meaning of what I say diverges considerably (indeed, orthogonally) from the linguistic meaning of what I say. But this is made possible by the deliberate and obvious violation of the first maxim of quality: do not say what you believe to be false. We both know that this maxim usually applies to our conversations, but in this context its deliberate violation creates a dramatic effect without leading to any confusion about the intended meaning.

All of which leaves us wondering about the precise status of the cooperative principle and the associated maxims. Are they prescriptive? In other words, should we follow them? Or are they descriptive? Do they merely describe what is typically happening when people communicate successfully?

Poggi opts for a quasi-prescriptive interpretation of the principle and the maxims. She views them as “customary hermeneutical technical rules”, which means:

Poggi’s Rule: If you follow the CP and its associated maxims then you will (in general) cooperate, understand what others are saying, and be understood.

The “in general” clause is key here (and is my addition) since, as we have just seen, it is possible to be understood even when you do violate the maxims. But that is only because (and if) the context makes clear what the implicature really is. If I send you a text message saying “I am reading John’s book”, and there is no preceding context in which my utterance is situated, then ambiguity becomes a problem. It’s highly likely that you’ll need to ask me to clarify the intended meaning. All of which brings us to the next issue: the distinction between particularised and generalised implicatures.

2. Generalised and Particularised Implicatures
The basic idea of implicature is straightforward: utterances can often mean more than what they say. But its manifestations are many and complex. One of the complexities arises from the fact that there can be generalised and particularised implicatures. That is to say: implicatures that hold true across all contexts, and implicatures that only arise in specific contexts. Here’s an example of the former:

(b) “I went into a house”

This carries the general implicature that the house was not mine. Thus, the utterance could be construed as “I went into a house and the house was not mine”, but the second clause is left unsaid. The reason is that referring to the house using the indefinite article is generally understood as the way to refer to a house that is not yours. The normal way of referring to one’s own house would be to say “I went into my house”.

According to Poggi, generalised implicatures are made possible by the maxim of quantity — one says no more than needs to be said — and are partially (if not entirely) independent of the speaker’s intentions. In other words, the implicature arises even if the speaker did not directly intend it. This could be a problem in some instances, for a sentence could carry a generalised implicature that actually defeats the speaker’s intentions. For example, I could say “I went into a house” and intend for it to be understood that the house was mine, but unfortunately listeners would not pick up on this due to the generalised implicature. This, however, would be my fault since I chose an inappropriate string of words to convey my intended meaning.

The situation is very different when it comes to particularised implicatures. These only arise in a specific context, and they only work when that context is shared by both the speaker and the listener. Consider the following conversation in a restaurant after the bill has been paid:

(c) Andy: “I’m sorry I made Paul pay the bill.”

Barry: “Paul owns four houses.”

Here, the implied meaning of Barry’s utterance is that Andy should not feel sorry for Paul, since Paul owns four houses and is thus wealthy enough to pay for the meal. The implied meaning is understood by both the speaker and the listener in the specific context. But if we detached Barry’s utterance from the specific context, no such implicature would arise.

Furthermore, Barry’s implied meaning might not be appreciated by Andy if the background context of the conversation is not fully shared. For instance, Andy might know that Paul is in a lot of financial trouble because of his properties, but Barry might not. Thus, Andy might take Barry to be reinforcing the grounds for remorse by pointing to the source of Paul’s financial troubles. But Barry might intend the exact opposite, because he knows nothing of those troubles. In this instance, there is a communication failure, and it is attributable to the lack of a shared context. This is a significant point, and one we shall return to in part two when discussing implicature in the law.

All of which brings us back to the example at the start of this blog post. As we saw, when the gangster says to the restaurant owner “this is a real nice place, it would be a shame if something happened to it”, the implication is that this is a threat. But is this implication generalised or particularised? The obvious answer is to say that it is particularised. After all, detached from the story of the gangster and the restaurant owner, that string of words carries with it no obvious implication.

Or does it? This is an interesting case. If I saw those words strung together in that particular order, but detached from a specific conversational context, I would still be inclined to think they contained an implied threat. This is because this linguistic form of the implied threat is so common in popular culture. Thus, it might be that the gangster’s utterance has a generalised implicature. Pinker points to a similar phenomenon in relation to the request “Would you like to come up and see my etchings?”, which, in our culture, is almost always understood to imply an invitation to sexual congress. These examples suggest that the line between the particularised and the generalised implicature might be a fuzzy and somewhat fluid one. Something could start out life as a particularised implicature, but if it becomes widely known, it may end up a generalised implicature.

Anyway, we shall leave it there for now. As we have seen, utterances often contain implicatures. That is: they imply more than they actually say. This is made possible, according to Grice, by the cooperative principle of ordinary conversation, and its associated maxims, although the applicability of these maxims may vary depending on the context. Furthermore, implicatures can be generalised or particularised. If generalised, they always arise whenever the relevant utterance is made. If particularised, they only arise in a specific context, provided that the context is shared by both the speaker and the listener. This raises all sorts of interesting questions for the law. We’ll look at these in part two.