
Saturday, April 30, 2011

I'm sure most people reading this will be familiar with the concept of a cognitive bias. There are innumerable papers in experimental psychology highlighting and exploring the various biases that human beings exhibit.

Well, with the term "bias" floating around so often, it's worth stepping back for a moment and asking: what exactly is a bias, and how are different kinds of biases related to one another? It's here that I take my lead from Ryan McKay and Charles Efferson. They offer a useful definition of behavioural and cognitive biases that I want to outline in this post.

Before getting into their definitions, it's worth calling up the commonsense understanding of a bias, which I understand to be the following:

Bias: For any X, if X can take on a range of values (1...n), then the actual value of X can be said to be biased if it departs systematically from the expected distribution of those values.

That's a little clunky, but I think it captures in an abstract form what we usually mean by "bias".

1. Behavioural Biases
Moving on then to the concept of a behavioural bias. McKay and Efferson note that there are trivial and non-trivial understandings of a behavioural bias. We'll look first at the trivial sense:

Trivial Behavioural Bias: Assuming there are N possible behaviours that X could engage in, X displays a trivial behavioural bias when the probability distribution over these N behaviours is not uniform.

So, if you are guessing which side of a coin will show after it has been flipped, your guesses can be said to be trivially biased if you choose heads more often than tails.
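The trivial definition is easy to operationalise. Here is a minimal Python sketch; the `tolerance` threshold is my own crude stand-in for a proper statistical test of uniformity, so treat it as illustrative only:

```python
from collections import Counter

def is_trivially_biased(behaviours, options, tolerance=0.05):
    """A crude check: does the empirical distribution over the possible
    behaviours depart from uniform by more than `tolerance`?"""
    counts = Counter(behaviours)
    uniform = 1 / len(options)
    freqs = [counts[b] / len(behaviours) for b in options]
    return any(abs(f - uniform) > tolerance for f in freqs)

# Guessing heads 70% of the time is trivially biased; a 50/50 guesser is not.
print(is_trivially_biased(["heads"] * 70 + ["tails"] * 30, ["heads", "tails"]))  # True
print(is_trivially_biased(["heads"] * 50 + ["tails"] * 50, ["heads", "tails"]))  # False
```

Note that nothing here says the 70/30 guesser is making a mistake; that is the point of the trivial definition.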

What about a non-trivial behavioural bias? This concept incorporates the notion of error (something that was absent from the trivial definition). To understand this definition, one must assume that the world can take on one of N possible states. One must then assume that X can exhibit one of N possible behaviours. For each possible state of the world, there is one (and only one) correspondingly optimal type of behaviour, which implies there are N(N-1) possible errors that X could make.

This gives us the following definition:

Non-trivial Behavioural Bias: Assuming there are N(N-1) possible behavioural errors that X could make, X exhibits a non-trivial behavioural bias if, over a sufficiently large period of time, the empirical distribution over these errors is not uniform.

Going back to the coin example, your habit of guessing heads would be non-trivially biased if the coin more often shows tails (and vice versa): your errors would then cluster on one of the two possible error types.
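To make the error-based definition concrete, here is a small sketch for the coin case (N = 2, so there are N(N-1) = 2 possible error types). I assume, purely for illustration, that the optimal behaviour is guessing the side that actually shows:

```python
from collections import Counter

def error_distribution(states, guesses):
    """Tally errors by type: an error occurs whenever the guess does not
    match the state, and its type is the (state, guess) pair."""
    return Counter(
        (state, guess)
        for state, guess in zip(states, guesses)
        if guess != state  # assumed: optimal behaviour = matching the state
    )

states  = ["heads", "tails", "tails", "heads", "tails"]
guesses = ["heads"] * 5  # always guess heads

# All the errors fall on one of the two possible types, so the empirical
# distribution over error types is non-uniform: a non-trivial bias.
print(error_distribution(states, guesses))  # Counter({('tails', 'heads'): 3})
```

The same tally would expose the George Clooney case below: a man who approaches 50% of women is trivially unbiased, but if his errors are overwhelmingly of one type (failing to approach receptive women), the distribution over error types is non-uniform.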

Note, interestingly, that a behaviour that is trivially biased need not be non-trivially biased; and a behaviour that is non-trivially biased need not be trivially biased. The authors (writing as they do in the context of behavioural biology) appeal to the following example.

Consider the interaction between single men and women at a bar. It is (evolutionarily, anyway) appropriate for a man to approach a woman if she is receptive to his advances; it is embarrassing for him to approach her if she is not. Now imagine a man who approaches 50% of the women in a bar. His behaviour is trivially unbiased according to the earlier definition. But now imagine that the man in question is George Clooney. If more than 50% of the women are receptive to his advances, his behaviour is non-trivially biased: his errors cluster on failures to approach receptive women. Or, to put it another way, it is biased because it is systematically suboptimal.

2. Cognitive Biases
Behavioural biases relate to the actual patterns of behaviour that agents engage in; cognitive biases relate to the beliefs that agents have about the world. Again, cognitive biases have trivial and non-trivial forms. A trivial cognitive bias can be defined as follows:

Trivial Cognitive Bias: Assuming there are N possible states of the world, X exhibits a trivial cognitive bias when their subjective probability distribution over these N states is non-uniform.

To continue with George Clooney's travails at the bar: there are two possible states for a woman to be in {receptive to his advances; not receptive to his advances} and so two possible beliefs he can have about her receptivity. He would exhibit a trivial cognitive bias if he attached a probability of greater than 0.5 to her being receptive to his advances.

Again, as might be obvious with this example, there is no notion of error being incorporated into the trivial definition. George Clooney might be perfectly justified in having a non-uniform subjective probability distribution over the possibility that women are receptive to his advances. We need to bring error into the picture before we get a non-trivial form of cognitive bias.

So how do we do this? Well, if you look at the definition of the trivial form given above you will see that it appeals to subjective probability distributions. We can deal with such probabilities using Bayes' theorem. What's more, we can deal with error by appealing explicitly to the beliefs we would expect a Bayesian rational agent to have. This gives us:

Non-trivial Cognitive Bias: An agent X exhibits a non-trivial cognitive bias whenever his or her beliefs depart from the beliefs of a Bayesian rational agent.

A Bayesian rational agent is one that updates his or her subjective probabilities in light of the available evidence (following Bayes' Rule). Bayesian rationality is the typical standard in epistemic game theory.
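For concreteness, here is Bayes' rule in miniature. The numbers are entirely hypothetical; the point is just that they fix the benchmark belief a Bayesian rational agent would hold after one piece of evidence, and an agent whose posterior departs from that benchmark exhibits a non-trivial cognitive bias:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Hypothetical numbers: prior belief that a woman is receptive is 0.5;
# a smile is observed, which is three times as likely if she is receptive.
posterior = bayes_update(prior=0.5, likelihood_if_true=0.6, likelihood_if_false=0.2)
print(posterior)  # 0.75
```

If Clooney's actual subjective probability after seeing the smile were, say, 0.95, the gap between 0.95 and the Bayesian 0.75 would be the non-trivial cognitive bias.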

So those are the definitions. McKay and Efferson go on to show how these definitions might affect findings in behavioural biology and evolutionary psychology. In particular, they worry about the problems arising from the conflation of behaviour with cognition. You can read their paper for all the details on this.

Friday, April 29, 2011

A couple of weeks ago I started a series on game theory. Although I have written a few more posts for this series now, things have stalled somewhat due to my inability to write more complex mathematical formulas (equations, functions, etc.) in a blog-friendly format. I'm sure I could look around myself, but I thought I might ask: does anyone know of any good online tutorials or resources on this issue? Or is the only way to do it to import image files with the equations displayed on them?

As we learned in part one, an evolutionary debunking argument (EDA) is an argument that attempts to undermine the warrant or justification for a particular belief by pointing out its evolutionary origins. All such arguments begin with a causal premise which specifies how evolution brings about the belief in question; they follow it up with an epistemic premise arguing that evolutionary processes do not track truth; and they thereby conclude that the belief is unwarranted.

We saw in part two how such arguments are sometimes employed in disputes in normative ethics. The example given came from the work of Joshua Greene and Peter Singer. Both of these authors seemed to argue that deontological intuitions could be undermined by an EDA. This fact could be marshalled in support of utilitarian principles.

In response, it was argued that Singer and Greene’s argument is difficult to sustain since it needs to show that their preferred utilitarian principles do not draw upon other debunked intuitions. In other words, we need to be given some reason for thinking that global EDAs are not possible.

In this entry we consider whether global EDAs are possible.

1. Joyce and Street
In recent times, two authors in particular have pushed the idea of a global EDA. One of them is Richard Joyce; the other is Sharon Street. (I’ve discussed Street’s work at considerable length elsewhere on this blog, should you want more detail than you’ll be getting here). Michael Ruse should probably get an honourable mention as well.

Joyce argues that all our moral judgments can trace their origin to cultural and environmental influences affecting the hominid line. If we had, say, evolved from social insects, we would have come with a completely different set of pre-packaged moral commitments.

Joyce thinks this won’t do. On semantic grounds, Joyce maintains that moral discourse is committed to a type of absolutism, i.e. our moral discourse purports to provide us with a set of reasons for action that apply to all times, places and subjective dispositions. The contingency implied by modern evolutionary theory is diametrically opposed to this kind of absolutism. Thus we are forced to embrace a form of error theory about morality. (Joyce thinks we can still be happy with pragmatic, subjective reasons for action).

Street makes similar claims, but arrives at a different implication. She thinks that moral realists (particularly of the non-natural variety) should be deeply troubled by evolutionary history.

This history implies that many of our evaluative beliefs are directly moulded by the pressures of survival and reproduction. For example, altruism towards kin can be readily explained through evolutionary game theory. Despite this, realists must still believe that somehow these beliefs line up with abstract moral truths. But surely this is incredible? Wouldn’t it be too much to think that the selective pressures of evolution just happened to coincide with the abstract, causally inert moral truth?

Street thinks this argument provides good reason for rejecting metaethical realism and embracing some form of antirealism (constructivism in particular). This position is not nihilistic or sceptical about moral truth. It just thinks that moral truths are not mind-independent.

Note that neither Joyce nor Street quite goes “all the way” with their debunking. Joyce still thinks it is rational to act in accordance with our subjectively perceived self-interest; and Street thinks moral truth can still exist. It might be possible to go even further with the debunking and point out that all normative beliefs (including beliefs about epistemic norms) are undermined by evolution. This is, effectively, what Alvin Plantinga does in his argument against evolutionary naturalism.

2. Responding to the Global EDA
At this stage its worth identifying the potential responses to EDAs by proponents of ethical objectivism/realism. There are three of them, and they should be unsurprising to anyone familiar with epistemological debates of this sort:

They can say that no evaluative beliefs are affected by the argument.

They can say that some evaluative beliefs are affected by the argument.

They can say that all evaluative beliefs are affected by the argument.

The third option seems unattractive for a variety of reasons. As noted above, if the proposed scepticism leaks into other normative domains then it’s basically impossible to rationally justify anything. The first option looks equally unappealing. Someone wishing to make this response would need to argue that evolutionary processes really do track moral truth (see here for a version of that response).

The second option is probably the most attractive but it is precariously balanced. Its defender needs to show why certain beliefs are unaffected. Basically, this requires that they show how the evaluative belief they wish to protect originates in or is supported by considerations that override evolutionary history. It is this kind of position that interests Kahane since it is maintained by the likes of Singer and Greene.

Consider once more Singer’s position. He thinks that an EDA can undermine deontological intuitions but not utilitarian ones. How can he be so sanguine? Because he thinks utilitarianism is supported by rational reflection that is not the outcome of our evolutionary past.

Does this kind of response work? Here is where reflective equilibrium (RE) in normative reasoning might be important. RE is a coherentist test for ethical beliefs: it begins with a set of moral principles, tests them against a range of scenarios, and then modifies the principles in accordance with what seems reasonable, usually appealing to intuition when doing so.

Such an approach to normative reasoning might be uniquely susceptible to an EDA. Why? Because the equilibrium could be based on debunked intuitions. If that is how Singer ultimately justifies his utilitarian principle then he could be in trouble.

Wednesday, April 27, 2011

I don't usually share videos on this blog, but I decided to make an exception for the following. I think it is a nice exposé of the dangers of circular reasoning. I doubt Craig is the only one who is guilty of such a thing. It's difficult to maintain a coherent web of beliefs....

As we learned in part one, Kahane’s article sets out to examine the use of evolutionary debunking arguments (EDAs) in normative ethics and metaethics. An EDA is an argument which attempts to show that the evolutionary origin of a particular belief undermines the warrant for that belief.

We’ve only really considered the use of such arguments in the abstract to this point. In this post we’ll look at an actual example of such an argument being used in a normative debate.

1. Deontology vs. Utilitarianism in the Trolley Problem
Most people reading this will be aware of the widespread use of trolley-problem thought experiments in ethics. The experiments are designed to show how people’s moral intuitions can vary as a result of seemingly insignificant changes in circumstances.

In the classic example, the trolley problem involves deciding whether to kill one person in order to save five, or to let the five die. In one scenario, the killing of the one person is an indirect consequence of another action (flipping a switch or a lever); in the other scenario, it is a direct consequence of your own actions (e.g. pushing someone off a bridge).

Most people, when asked what they would do in these scenarios, say they would be willing to indirectly kill the one in order to save the five in the first scenario, but they would not be willing to directly kill the one in order to save the five in the second scenario.

From a utilitarian perspective, these responses are difficult to explain. The utilitarian calculation is surely the same in both: one person dies, five people live. Killing the one person is clearly merited in both cases, isn’t it?

From a deontological perspective, the responses might be a little easier to explain. There is an absolute injunction against an action which directly kills another; there is no such injunction against an action which indirectly kills another. No problem.

For some reason, the majority of humanity seems to intuitively back the deontological position. Are they right to do so?


2. Singer and Greene: Debunking Deontology
Peter Singer and Joshua Greene think not. They reckon that the deontological intuition can be debunked using an EDA. Greene in particular seems to reason as follows:

(1) Our commitment to the deontological intuition in the trolley problem is merely due to the fact that “up close and personal” violence was common in our environments of evolutionary adaptation (EEAs) and so an aversion to direct violence was selected for; indirect methods of killing are more evolutionarily recent and have not been selected against.

(2) Evolution does not track the truth of evaluative propositions.

(3) Therefore, we are not justified (or warranted) in our commitment to the deontological intuition.

There are several things that could be said about this argument. For one, the causal premise (1) could easily be challenged (as such explanations often are). For another, as noted in part one, the argument seems to assume metaethical realism/objectivism. If the person who is committed to deontology is not a realist, then they will be unswayed by this argument.

We will not focus on these points here. Instead, we’ll address the supposed implications of this argument. No doubt, both Singer and Greene take it that this argument somehow supports utilitarianism. But clearly this is not a straightforward implication, however seductive it may seem.

To successfully infer that, one would have to show that the commitment to utilitarian principles is not also undermined by this argument. In other words, one needs to provide some reason for thinking that the EDA does not spread to all of our evaluative principles.

This raises the potentially unwelcome possibility of global debunking arguments. We’ll discuss these in part three.

I must not have read the abstract closely enough before downloading this one, since I was expecting Kahane to offer a fuller critique of the use of such arguments, but it turns out that wasn’t really what he was interested in.

Nevertheless, the article is a good one and I thought I might share some of its details here. One thing I particularly liked about it was Kahane’s attempt to elucidate the general structure of such arguments and to show how they are used in normative ethics and metaethics.

In this first part, I’ll cover Kahane’s general analysis of debunking arguments. In the next part I'll consider their use in normative ethics.

1. An Introduction to Debunking Arguments
First things first, we need to be clear about the nature of a debunking argument (DA). Obviously, “debunking” is, in its everyday usage, intended to denote the practice of undermining or exposing some set of beliefs as being false. Given that this seems to be a common goal in philosophy, there could be many philosophically interesting DAs worth considering.

For present purposes, the focus is on what might be called causal-DAs. These are arguments that try to undermine some belief or set of beliefs by explaining its causal origins. Such arguments are, in fact, widespread. You probably use them all the time.

To consider an obvious example, anyone who has studied Marxism will know that Marx seemed to think (or at least, can be interpreted as thinking) that offering a causal-historical explanation of an ideology could be an effective way of undermining that ideology.

Now, expressed in these terms, such an argument is patently unsuccessful. It is an example of the genetic fallacy: just because an ideology has a particular causal origin does not mean that the ideology is false. But this in itself doesn’t mean that causal DAs are without merit. They just need to be reinterpreted in a more subtle way - as arguments that don’t undermine the truth of a particular belief, but do undermine the justification (or warrant) that the believer might have for holding that belief.

Here’s an example. Suppose Bob believes that there is a particular object (X) outside his house. Now suppose further that we learn that Bob has a rather peculiar process for determining what to believe is outside his house. Every morning he flips a coin, and if the coin comes up heads, he believes there is an X outside his house. Surely, such a causal explanation for a belief undermines the warrant or justification for holding that belief?

Contrast this with an alternative causal explanation. According to this explanation Bob acquires the belief that X is in front of his house through visual perception of the object. Surely this causal explanation (assuming no additional defeaters) supports the justification or warrant for Bob’s belief?

We can refer to the difference between these causal explanations in terms of processes that are “on track” (i.e. track the truth of whatever proposition is under consideration), and processes that are “off track” (i.e. don’t track the truth of whatever proposition is under consideration). This is illustrated below.

Note that in the example given, the “off track” process is purely random. A more common variety of “off track” process might be one that is biased in a particular direction.

2. The General Structure of (Evolutionary) Debunking Arguments
Building upon the preceding example, we can specify the general template or structure that is shared by all causal DAs. It looks something like the following:

Causal Premise: S’s belief that P is caused by causal process X.

Epistemic Premise: Causal process X is off track.

Conclusion: The belief that P is unjustified (or unwarranted).

As you can see, this is an argument that could work with any type of belief and any allegedly off track causal process. The concern in this article is to specifically examine arguments that focus on the evolutionary process and its implications for our evaluative beliefs.

Such arguments will have the following structure:

Causal Premise: S’s evaluative belief that P is caused by the process of evolution.

Epistemic Premise: The process of evolution does not track the truth of evaluative propositions of type P.

Conclusion: S’s belief that P is unjustified, or unwarranted.

Before we consider specific examples, there are three things to note about this style of argument.

First, the causal premise refers to the process of evolution in general. It does not refer to any specific mechanism of evolutionary change. Although it is fair to say that most attention has been paid to adaptive explanations that appeal to the mechanism of natural selection, genetic drift or other processes could also be targeted.

Second, the epistemic premise needs to be supported in a particular way. It must be shown that the off track process (in this case evolution) completely removes the influence of any potentially on-track processes. If there are on-track processes that also contribute to the causal explanation of the belief, then they might restore justification or, at the very least, not completely undermine it.

Third, and this is one of Kahane’s key observations, the argument assumes that S is an ethical realist/objectivist. To be more precise, it assumes that S believes in mind-independent moral properties. Someone who believes that moral properties are constructed out of whatever evaluative attitudes we happen to have will be unpersuaded by this argument.

Okay, that’s enough for now. In part two we’ll consider an example of an EDA drawn from the field of normative ethics.

Sunday, April 24, 2011

You should probably read part one before attempting this, but if that seems like too much to ask, then be reassured that I’ll try my best to summarise the important parts of that discussion where necessary.

Anyway, I’ve been trying to work out exactly what Craig means when he says that God makes objective moral values and duties possible. As noted in part one, this is an important exercise since imprecision in what is meant by objectivity can be a way to both confuse and be confused.

Craig dedicates two paragraphs in his book Reasonable Faith to clarifying what he means by objectivity. Last time out, I subjected the first of these paragraphs to a close analysis and concluded that Craig seems to mean at least two things when he claims that an entity or property is "objective": (i) that it is mind-independent; and (ii) that it is non-relative. It is important to keep these two concepts distinct since they need not always go together.

In this entry, I’ll first look at paragraph two in more detail and then consider moral platitudes and metaethical theories in general.

“To say that there are objective moral values is to say that something is good or bad independently of whether any human believes it to be so. Similarly to say that we have objective moral duties is to say that certain actions are right or wrong for us independently of whether any human being believes them to be so. For example, to say that the Holocaust was objectively wrong is to say that it was wrong even though the Nazis who carried it out thought that it was right, and it would still have been wrong even if the Nazis had won World War II and succeeded in exterminating or brainwashing everybody who disagreed with them so that it was universally believed that the Holocaust was right. The [claim] of premise (1) [of Craig’s moral argument] is that if there is no God, then moral values and duties are not objective in this sense.”

In many ways, this paragraph just further elaborates on the distinctions and clarifications we discussed in part one. For instance, the concept of mind-independency is once again clearly to the fore, as is the idea that moral truths are not relative to particular sets of cultural beliefs and practices (e.g. those of the Nazis).

What is interesting about this paragraph is what it has to say about the truth value of moral propositions. It is clear that Craig thinks, in addition to being mind-independent and non-relative, moral propositions such as “the Nazis were wrong to carry out the Holocaust” have a truth value and, indeed, that that truth value is positive in this instance.

In some ways this observation probably ties in better with a discussion of premise (2) of Craig’s argument (in which he affirms the existence of objective moral values and duties), but I think it is worth making here for a few reasons.

First, not all metaethical theories accept that moral statements like “X is good” or “I have a duty to do A” have truth value. Such theories are non-cognitivist in nature. Examples include universal prescriptivism, expressivism, and emotivism. These theories usually argue that a non-cognitivist approach is more plausible on semantic and ontological grounds, and that it can actually give us a good deal of what we want from morality in return for sacrificing the possibility of truth value.

Second, it is possible to believe that moral propositions are intended to refer to mind-independent phenomena, to be non-relative in their scope, and to have truth value, and still deny that they have positive truth value. In other words, it is possible to be a moral nihilist or error theorist and still accept Craig’s other claims about what is needed to have an objective morality. This would really be a rejection of premise (2) of Craig’s argument. We call theories that accept the positive truth value of moral propositions “realist”.

Third, and most importantly, although it might be correct to say that cognitivism follows straightforwardly from mind-independence and non-relativity -- and so, one who accepts those claims automatically accepts cognitivism -- it is not necessarily true to say that one who rejects these properties rejects cognitivism. In other words, it could be argued that moral propositions are mind-dependent but still capable of having a positive truth value.

This is a significant observation since it seems to me that Craig’s argument as a whole is intended to appeal to the fact that people really want to be able to affirm that certain things are good and bad (or right and wrong). Like Craig, I think we all really want to be able to say that what the Nazis did was just wrong and that no amount of cultural brainwashing will make it otherwise. But if that’s what we want to be able to say, then it seems like all we really care about is positive truth value - mind-independency and non-relativity might be purely incidental.

2. What are the key moral platitudes?
So far I’ve been analysing what Craig has to say about objectivity. I now want to step back from this analysis and offer some more general comments on metaethics and successful metaethical theories.

Let's ask the question: What is so important about moral objectivity? Why does Craig latch onto objectivity in his moral argument? I hinted at this above, but to be more explicit now, I think Craig’s discussion is only interesting to the extent that it tells us something about the kinds of things we demand from a successful moral theory. These are sometimes referred to as the "platitudes" of moral discourse.

Craig clearly demands objectivity, but as we have seen his use of this term is imprecise. He could be referring to mind-independence, non-relativity, or cognitivism, or some combination of the three. It could also be that he has other demands that are more implied than explicit.

So let's not be too wedded to what Craig wants, let's focus instead on the kinds of things we all might want from a successful moral theory. I don’t know if I can do justice to all the demands we might place on a successful moral theory, but the following list seems to me to be fairly typical. I’ll define the terms as I go along (there’s going to be some overlap with what I’ve already said):

(a) Mind-Independence: We hope that moral properties exist independently of the desires and beliefs of human beings. If we wish, we can be even more discriminating in how we define mind-independency. We could, for example, distinguish between desire-independent and belief-independent theories. I won’t do that here, but it’s worth noting anyway. We could also distinguish between theories that make moral properties independent of our own personal minds and theories that make moral properties independent of all minds.

(b) Absolutivity: We want moral propositions and claims to hold true for all people, at all times and all places. How absolute is “absolute”? That depends. We may include a number of contextual factors in our moral theories (e.g. “X is morally obliged not to kill another human being (Y) provided that Y is not trying to kill X”), but we could hold that anyone who finds themselves in a similar context is bound to affirm the truth of such a proposition. This would be a kind of absolute morality. To distinguish it from an even more absolute morality, we might like to refer to "strong" and "weak" forms of absolutivity.

(c) Practicality: We want moral properties to actually affect how people behave. In other words, if someone recognises that they have a moral obligation not to kill another human being (unless certain contextual factors apply), then we would expect them to acknowledge that they must act in a way that is compliant with this obligation. We could add an “overridingness” condition to this if we wish. This condition would stipulate that we expect the recognition of a particular moral status to supply a reason for action that overrides any other reasons for action that a particular agent may think he has.

(d) Positive Truth Value: We want it to be the case that at least some moral propositions are true and at least some are false. We don’t want to be moral nihilists.

(e) Non-trivial truth value: This might be a slightly unusual condition and so requires some explication. I’m adding it in here to exclude a certain type of moral theory. The theory I have in mind is one which satisfies our demand for positive truth value, but does so in a manner that is trivial or vacuous. As it turns out, this might be one ground upon which to object to a theistic metaethics, but it is not prejudicial against this position. It can apply equally well to non-theistic theories. Here’s an example:

In Plato's Republic, Cleitophon proposes at one point (340b) that “justice” is whatever the strongest believes to be to his advantage. Suppose we drop the restriction to the strongest and apply the formula to every individual: “justice” becomes whatever each individual believes to be to his or her advantage. A proposition like “X is just” would then have a positive truth value. But that truth value is surely trivial in nature: the individual could never be mistaken about what is just, since it is relativised to their occurrent beliefs about what is to their advantage.

(f) Non-Scepticism: We want it to be possible to find out which moral propositions are true and which are false; we don’t want to be moral sceptics. This is an epistemic constraint and Craig seems to think such constraints are irrelevant in metaethical debates (at least in the ones he has had). I disagree (see here for more). It seems to me that a moral theory which satisfies the other conditions but which forces us to be moral sceptics would be unwelcome.

(g) Fit with Background Knowledge: We want the theory to be consistent with other parts of our worldview. Obviously this might be difficult to apply in some cases: theists are likely to think that morality must be consistent with God’s existence, and naturalists are likely to think that it would be nice if moral properties were somehow reducible to natural properties. Despite its difficulty, it still seems like a reasonable condition since it applies to all theories, not just moral ones.

These are the platitudes or conditions of success I’ll work with for the remainder of this entry. I could probably add more, and refine them a little bit, but I don’t want this to become too tedious. If you have any complaints about what I’ve said, or think certain refinements are necessary, feel free to comment below.

I doubt very much that any metaethical theory can satisfy all of these demands. The history of moral philosophy would seem to support this observation. Still, I think it is possible for some metaethical theories to satisfy a good number of these demands and so get us a good deal of what we want from morality. Indeed, I think the way to assess competing theories is on the basis of which ones get us the most of what we want.

This suggests that when deciding which is to be our preferred metaethical theory, we may be forced to compromise. We may have to give up one of the things we want in order to have more of the others.

All of which raises the question: which of these platitudes would we be willing to give up?

I’ll lay my own cards on the table at this point. I’d be willing to give up mind-independence. If I could formulate a metaethical theory which allowed for some degree of absolutivity, practicality, non-triviality, positive truth value, non-scepticism, and is consistent with my background knowledge, I’d be delighted. It would be like all my Christmases come at once. It would be bordering on petulance if I were to hold out for mind-independence after getting all of these things. Wouldn’t it?

3. The Scorecard Exercise
Now that we have some idea of what we want from morality, we can put it to use in the analysis of Craig’s theory. As was the case in part one, I am going to propose an exercise here. Its completion is left to the reader.

First, I suggest that we construct a scorecard with which we can evaluate different metaethical theories. On the "row" side of the scorecard we will have the moral platitudes outlined above. On the "column" side of the scorecard we will align all the different metaethical theories. Obviously, the goal is to see how well the different theories do with respect to the platitudes.

To perform this evaluation properly, we will have to have some idea of the available competitor theories. This is one area in which Craig’s discussion in Reasonable Faith is sorely lacking. He only seriously considers non-natural realism (or Platonic realism, as he calls it) as an alternative to theistic metaethics. I think we need to be far more expansive in our search of the available theoretic space.

Here is an abbreviated list of the available moral theories to get the ball rolling (I’m going to exclude non-cognitive theories):

(1) Non-natural realism: Moral properties are real, but they exist in some abstract, non-natural realm.

(2) Theological voluntarism: Moral properties are real, but they depend for their existence on one or more of God’s voluntary acts (willings, desirings, or commands).

(3) Paradigmatic Exemplarism: Moral properties are real, but only in virtue of the fact that there exists a being that is a paradigmatic exemplar of those properties (I confess, you probably won’t find this theory mentioned in the available textbooks. As far as I know, the name is of my own invention. Nevertheless, it comes close to Craig’s own views with respect to how God grounds the existence of certain moral properties; and it is distinct from theological voluntarism).

(4) Naturalistic Realism: Moral properties are real, but they are *reducible* to natural properties. I use *reducible* somewhat reluctantly since there are several different theories here and some of their proponents object to the label of reductionism (Luke Muehlhauser has been doing a series on these recently, which is well worth reading).

(5) Extreme Moral Subjectivism: Moral properties are real but their content is directly determined by the beliefs and desires of each individual.

(6) External Moral Subjectivism: Moral properties are real but they depend for their existence on the subjective attitudes of one or more individuals external to oneself.

(7) Constructivism: Moral properties are real, but they depend for their existence on some appropriately construed constructive procedure (e.g. a social contract). This comes in a variety of forms (Kantian, Humean and Decision-theoretic) and has been discussed on this blog before.

(8) Nihilism: Moral properties are not real. We might think they are, but they’re really not.

Here then is the proposed scorecard.

Now all we have to do is determine how each of the available theories does in relation to each of the proposed platitudes. We could place a tick or an “x” in each box for comparative purposes, and whichever theory satisfies most of these demands “wins” (gets the most ticks), in the sense that it is the best we can hope for. I've kick-started the process by marking the score for nihilism.
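For those who like to tinker, the scorecard exercise can be sketched in code. This is a minimal illustration only: the platitude and theory names come from the lists above, the marks entered for nihilism are my own tentative reading (a theory on which no moral proposition is true plainly fails positive truth value, and arguably non-scepticism too), and every other cell is left open for the reader to fill in.

```python
# A minimal sketch of the scorecard: platitudes as rows, theories as
# columns, cells marked True (tick), False (x), or None (not yet judged).
# The marks for nihilism are illustrative, not a settled verdict.

PLATITUDES = [
    "mind-independence", "absolutivity", "practicality",
    "positive truth value", "non-trivial truth value",
    "non-scepticism", "fit with background knowledge",
]

THEORIES = [
    "non-natural realism", "theological voluntarism",
    "paradigmatic exemplarism", "naturalistic realism",
    "extreme moral subjectivism", "external moral subjectivism",
    "constructivism", "nihilism",
]

# Start with every cell unevaluated.
scorecard = {t: {p: None for p in PLATITUDES} for t in THEORIES}

# Kick-start the process with nihilism: if moral properties are not
# real, no moral proposition is true (tentative marks for illustration).
scorecard["nihilism"]["positive truth value"] = False
scorecard["nihilism"]["non-scepticism"] = False

def score(theory):
    """Count the platitudes a theory is marked as satisfying."""
    return sum(1 for v in scorecard[theory].values() if v is True)

def best_theories():
    """Return the theories with the most ticks so far."""
    top = max(score(t) for t in THEORIES)
    return [t for t in THEORIES if score(t) == top]

print(score("nihilism"))  # → 0: nihilism gets us almost nothing we want
```

At this stage every theory trivially ties for "best" because no ticks have been entered; the point of the exercise is to fill in the remaining cells only after evaluating the argument associated with each criterion.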

I suggest that if we perform this exercise, Craig’s theory might not do nearly as well as some of its competitors. Furthermore, even if it does only as well as some of its competitors, Craig’s moral argument is undercut. Why? Because he claims that God’s existence is essential for getting what we want from morality, and if there are other theories that get us the same amount (or more), then this claim is false.

We must be careful when doing the necessary comparison, for two reasons. First, it is probably best to construct a separate scorecard for each of the moral properties under consideration, i.e. one for moral values and another for moral obligations. Second, we should only reach a determination on a particular criterion after having evaluated the associated argument.

I’m not going to go ahead and actually fill in the remainder of the scorecard here since it requires a good deal more careful analysis to do this properly. Still, I think this is a good methodology to adopt when addressing Craig’s (and others’) arguments about metaethics.

4. Conclusion
The discussion in this post has taken us away somewhat from the original concerns with Craig’s definition of objectivity. I’ll try to tie it all back together.

As we have seen, Craig thinks the existence of God is the only way in which to secure the objectivity of morality. The problem is that it is not entirely clear what he means by objectivity. He might mean mind-independence, or non-relativity or positive truth value, or some combination of all three.

In light of his lack of clarity, I suggest we are better off interpreting Craig’s claims about objectivity as being akin to those of someone placing demands or constraints on successful moral theories. I further suggest that there are many such demands and that those employed by Craig may not even be that important.

In order to properly evaluate proposed metaethical theories, I think we should construct a kind of scorecard on which we determine how many of our moral demands are satisfied by the different theories and compare them with one another.

Saturday, April 23, 2011

[Note: I’m hoping that this might end up being part of a longer series, but I don’t want to get too ambitious at this stage. I would appreciate any comments and suggestions on improving my analysis, but do bear in mind that I have two more entries on the objectivity issue planned]

“So many debates in philosophy revolve around the issue of objectivity versus subjectivity that one may be forgiven for assuming that someone somewhere understands this distinction.”

I like Richard Joyce. He has written some lucid and highly engaging stuff on moral philosophy over the years. Now I don’t ultimately agree with him, but I like the above quote because it captures one of the key difficulties in all philosophical arguments: the imprecision of terminology.

Many concepts that we use in everyday speech, like “objectivity” or “subjectivity”, have been subjected to highly intricate and elaborate philosophical theorisations. So to say that something is objective, or to affirm the existence of objective entities, can be both uninformative and misleading.

I mention all this because, of course, the centrepiece of Craig’s moral argument is the claim that “objective moral values and duties exist”, and that they can only do so if God exists as well. What does Craig mean by this? In the text of Reasonable Faith (p. 173, to be precise), Craig dedicates two paragraphs to directly answering this question (further indications of what he means by “objective” are peppered throughout his discussion). Let’s see if things become clearer if we subject these paragraphs to close scrutiny.

1. Paragraph One: Objectivity and Relativity
For ease of reference, I’ll quote the first paragraph in its entirety here (emphasis added):

“Let me say something to clarify the distinction between something’s being objective and something’s being subjective. To say that something is objective is to say that it is independent of what people think or perceive. By contrast, to say that something is subjective is just to say that it is not objective; that is to say, it is dependent on what human persons think or perceive. So, for example, the distinction between being on Mars and not being on Mars is an objective distinction; a particular rock’s being on Mars is in no way dependent on our beliefs. By contrast the distinction between “here” and “there” is not objective: whether a particular event, at a certain spatial location occurs here or there depends upon a person’s point of view.”

Noting the highlighted phrases, I think it becomes pretty clear that Craig makes two interesting clarifications here about the nature of objectivity. I want to discuss these in some detail.

The first clarification is the idea that in order for something to be objective, it must be mind-independent. In other words, like the proverbial tree falling in the forest, an objective entity continues to exist even in the absence of a mind which perceives its existence.

I like to think of this as the core, or privileged, sense of “objectivity”. In other words, as the sense that is closest to that implied in the paradigmatic applications of the term. But because I think objectivity is a confusing term, I will use the more precise term “mind-independence” to refer to this sense of objectivity. Similarly, I’ll use the term “mind-dependent” to refer to that which is typically considered under the term “subjective”.

The second interesting clarification comes from Craig’s use of the indexical terms “here” and “there”. When I see this it immediately sets certain alarm bells off in my head. Indexicals are probably the distinguishing mark of relativistic claims (moral or otherwise). So I take it that Craig’s use of the terms here is intended to exclude the relative from the domain of the objective.

But this raises the question: what does it mean for something to be relative? Roughly, a proposition or state of affairs is relative when its truth/falsity (or existence/non-existence) cannot be determined on its own terms, but only relative to some other proposition or phenomenon.

To modify Craig’s example, the claim “there is a desk ten feet in front of me” can only be assessed for its truth value relative to my current spatial location. It cannot be assessed in the absolute. It is perfectly possible for this claim to be true for me and not for you; no contradiction is entailed by this possibility (but note: it can still be true or false).

2. Do we need to distinguish the objective from the relative?
If Craig is conflating non-relativity and mind-independence in the course of clarifying the nature of the “objective” in paragraph one, then there might be a problem. While it might generally be true that mind-dependency and relativity are present at the same time, it is not necessarily true. Consider the following two cases:

Mind-Independent Relativism: Michael Jordan might be tall relative to most other men, but he might be average in height relative to other basketball players. Thus the truth of any claim about his tallness is going to be relatively determined, but at the same time there is nothing particularly mind-dependent about it.

Mind-Dependent Absolutivity: We might decide that “X is good” means “My grandmother approves of X”. This would make goodness a mind-dependent property, but would it make the claim “X is good” a relativistic one? No; the truth conditions of that statement would hold true for all individuals at all spatial and temporal locations (it might be epistemically difficult for some of those individuals to gain access to those truth conditions, but that’s another matter).

This suggests that we can construct the following two-by-two matrix, with mind-dependence and mind-independence along one axis, and relativity and absolutivity along the other.

In future entries, I’m going to suggest that this two-by-two matrix is inadequate because there are even more dimensions to be considered when assessing metaethical theories. But for the time being we can use it to perform two interesting exercises.

First, considering the two dimensions we have identified, ask yourself: what is it that we would like from a metaethical theory? Is mind-independency really that important? Or is absolutivity the most important thing? I’m not going to propose any answers here. But it’s worth thinking about since one of the most common ways to do metaethics is to first specify what we demand from our moral terms and then look around to see if any of the available theories satisfy these demands. It could well be that no single theory satisfies all the demands and so when it comes to actually figuring out which theory is the most acceptable we have to compromise.

The second exercise we can perform is to try to position some moral theories on the grid. I have provided an illustration below. I put “extreme moral subjectivism” in the top left. This is the theory that makes moral truth entirely dependent on your own subjective whims, which can vary from time to time and place to place. I put “external moral subjectivism” in the top right. This is the theory that makes moral truth dependent on the subjective whims of another. This would be akin to the grandmother subjectivism discussed above. I put nothing in the bottom left because I am unsure what belongs there. And I tentatively place Craig’s preferred metaethics in the bottom right.

The placement is tentative for a couple of reasons. Although it is true that -- based on what he has said in paragraph one -- Craig wants his theistic metaethics to belong in the bottom right quadrant, his desires are obviously not going to determine the issue. We need to see whether his actual theory merits this placement. It may not.

For instance, Luke Muehlhauser has been arguing for some time that Craig’s theory really belongs in the upper right quadrant since he makes moral truth dependent on God’s mind (much like I did with my grandmother earlier in this post).

We need to be somewhat careful about this claim since it raises issues relating to the Euthyphro dilemma and Craig has some responses to this (or, rather, he adopts the responses of others) which I’ll have to comment on later (even though I have discussed them before).

But I should note that I have similar concerns with Craig’s theory myself. Elsewhere in his discussion, Craig seems to endorse the idea that moral obligations are relativistic. To be precise, he endorses the claim that moral obligations arise from relations between human beings and God. This would seem to make obligations (if not values) relativistic. Now maybe he could argue in response that the obligations we have towards God never change and are hence (contingently) absolute. The problem with this is that it is not obviously or necessarily true. For instance, I am sure a Christian (who believes in Hell) would have to believe that our obligations changed in the post-Jesus era. If this is right, then our obligations are not absolute.

Anyway, that’s enough for this post. I’ll look into paragraph two next time round.

Saturday, April 16, 2011

There’s been much internet chatter over the past week or so about the William Lane Craig/Sam Harris debate. For instance, Luke, over at commonsenseatheism, posted a two-part review earlier this week. It has been suggested that Harris failed in his response to Craig because he brought up issues in moral epistemology when the debate was really about moral ontology. In the comments to Luke's review, I suggested one way in which epistemological issues could be relevant to a debate about moral ontology.

Some people responded, and I promised to provide some follow-up comments but, being somewhat busy at the moment, and not being inclined to rush anything, I decided to think about it for awhile and post my further thoughts on this blog. That’s the purpose of this post.

As I started writing these further thoughts I found that I couldn’t make the points I wanted to without getting into some of the more general features of theological voluntarism. As a result these comments are far more expansive than originally intended and probably longer than most are willing to tolerate. (I mean it’s taken me three paragraphs just to clear my throat, so imagine what it’ll be like when I start saying something substantive!)

Anyway, it’ll come as no surprise to learn that I’ve broken the discussion down into a number of sections. First, I look at moral properties and metaethics in general. Second, I look at the structural features of the metaethical theory known as theological voluntarism. Third, I consider the basic problems associated with this position. Fourth, I outline the modified Divine Command Theory (DCT). Fifth, I consider whether moral values are dependent on God. Sixth, I consider whether obligations are dependent on God, and try to explain the motivation behind Adams's version of the modified DCT. And seventh, I outline my own argument concerning epistemology and the modified DCT and consider how the dialectic is likely to play out after that argument is made.

Those who are interested in my follow-up comments to what I said on CSA should just skip ahead to section 7.

If you want commentary on the Harris/Craig debate, and refutations of Craig’s arguments, you should probably look elsewhere. You'll be spoiled for choice.

One last thing before I get started: this is an attempt to clarify my own thinking about these issues. I’m not proffering myself as an authority on theistic metaethics and I’m certainly not entirely convinced by my own arguments.

1. Moral Properties and Metaethics
There are lots of moral terms that we use in our everyday lives. We talk of things being “good”, “bad”, “virtuous”, “vicious”, “permissible”, “obligatory” and so on. Obviously, these terms all mean different things. One fairly typical way of breaking them down along theoretically motivated lines is the following:

Values: These are the things -- states of affairs, character traits and events -- that are morally good. They are the kinds of things toward which our moral lives should be directed or oriented. A classical hedonic utilitarian might say that states of affairs in which people experience conscious pleasure are good; a natural law theorist might say that there are many basic goods (knowledge, friendship etc), all of which contribute to the virtuous and flourishing life. Our systematic specification of what is valuable can be called our theory of the good.

Right Actions: It is not enough to simply specify which states of affairs or events are valuable, we must also specify how we should act in relation to that which is valuable. Must we always act so as to promote that which is valuable, as the consequentialist might argue? Or should we act so as to honour that which is valuable, as the deontologist would argue? A theory of morally right action must answer these questions.

Moral theorists usually identify four morally significant types of action:

(a) Mere Permissions: these are actions that we are allowed to perform and that have no impact upon the moral value of the world.

(b) Acts of Supererogation: these are actions that we are allowed to perform and that have a positive impact upon the moral value of the world.

(c) Forbidden acts: these are actions that we must not perform. They can also be referred to as negative obligations.

(d) Obligatory acts: these are actions that we must perform. They can also be referred to as positive obligations.

Before going any further, I should note that some theorists also distinguish between perfect and imperfect obligations. Roughly, the idea is that perfect obligations can only be fulfilled by performing a particular type of act; whereas imperfect obligations can be fulfilled in a number of different ways.

Metaethics as a whole asks a number of different questions about the moral properties that we have just identified. For present purposes, the question concerning the ontological grounding of these moral properties is the relevant one.

We want to know: do things like values and obligations actually exist, and if so what is it that makes them exist? Many theists think that values and obligations can only exist if there is a God. Why do they think this? How do they imagine that the grounding of these moral properties takes place? This leads us to consider the position known as theological voluntarism.

2. The Structure of Theological Voluntarism
Many readers will be familiar with the Divine Command Theory (DCT) of metaethics. The idea behind this theory is that moral properties such as obligations and values can only exist if God issues commands specifying what they are.

Theological voluntarism is a slightly broader notion. Where DCT only focuses on God’s commands, theological voluntarism can include any voluntary act on the part of God. If you want to learn more, it’s really worth checking out the article over on the SEP (I draw heavily from that article in what follows).

Theological voluntarism can be said to have the following abstract structure:

Theological Voluntarism: For any event, act or state of affairs X, X has the moral property Y if and only if God φ-s that it have Y.

There are two key variables in this theory: (i) the moral property Y; and (ii) the divine act φ. Although theological voluntarism could technically cover all moral properties (values and right actions), it usually covers only a particular subset of them. As we shall see, the trend nowadays is to focus on obligations. As for the relevant divine act, this could be an intention, a desire, a command, a willing and so on.

For the remainder of this discussion I’ll focus solely on command-based versions of theological voluntarism for reasons that will become apparent.

Euthyphro Dilemma: For any event, act or state of affairs X with moral status Y, does X have Y because God φ-s that it have Y, or does God φ that X have Y because X already, and independently, has Y?

The dilemma is usually supposed to confront the defender of theological voluntarism with two equally unpleasant conclusions. In fact, however, there are at least three unpleasant outcomes to contend with (Joyce, 2002). So it's more like a trilemma, although two of the horns are closely related:

The Independence Problem: If we accept that God only φ-s X because X has a particular moral status, then we seem to be claiming that the ontological grounding of moral statuses lies outside of God. No one who thinks God is necessary for the existence of morality likes this conclusion.

The Arbitrariness or Modal Vulnerability Problem: If we accept that X only has the moral status that it does because God φ-s that it have that moral status, and if we assume God is unconstrained in his voluntary acts, then it seems like X could have had any moral status at all. This contradicts the assumption that moral statuses hold true across all possible worlds.

The Vacuousness Problem: This one is often overlooked. It follows from the arbitrariness problem. The idea is that believers often use moral predicates when describing God (e.g. God is good, or God is just). But if God voluntarily decides what the appropriate application of these predicates is, then aren’t such descriptions of his moral status vacuous? I’ve discussed this before, over here.

4. The Modified DCT
Joyce (2002) suggests that each of these problems is not as serious as it first appears. I have some issues with Joyce’s arguments and I might discuss them another time; for now, I would simply note that some theists concede the force of some of these problems.

Consider for instance our moral obligations, the actions that we must perform if we are to be moral beings. Many of these obligations have a fairly fixed status. As Craig often says in his debates, it’s pretty obviously true that we have an obligation not to rape, torture or murder children, and pretty difficult to see how that obligation could change.

But acknowledging that obligations have this seemingly fixed status sends us on a collision course with the second horn of the Euthyphro dilemma. After all, if our obligation not to rape, murder or torture children only derives its existence from a voluntary divine command, then there’s a chance that that command could have been other than what it was. This seems contrary to what we believe about these particular obligations.

One way of resolving this problem is to suggest that God’s commands are constrained by his moral nature. The idea here is that God has certain core virtues (e.g. justice, lovingness, mercy, charity etc.) and that these prevent him from commanding anything that seems morally outrageous. So when the theist is confronted with a claim like:

“If God commanded us to torture children, then torturing children would be obligatory.”

they can respond by saying that the claim has an impossible antecedent: God could never command such things.

Leaving aside the complication that arises from the fact that the God in whom some people believe is supposed to have commanded some of these things (more on this anon), this is certainly one way to resolve the arbitrariness problem. But it raises a couple of additional problems. We’ll consider two here.

5. Are Values Dependent on God?
First, doesn’t it imply that at least some moral properties are ontologically independent of God? Specifically, the moral virtues? The objection here is that in trying to constrain God’s commands by pointing to his virtues, the religious believer is simply appealing to a set of abstract moral properties that God has, but are metaphysically prior to or more basic than he. It is these properties, not God, that do all the heavy metaethical lifting.

This is the position of the Moral Platonist, i.e. the person who believes that moral properties are abstract, non-natural and metaphysically basic. Although I have some issues with this position, I think it makes a good deal of sense. I’m willing to accept that there are some abstract properties (such as mathematical or logical truths) and that these properties can have some normative force (e.g. on belief formation).

Craig rejects Moral Platonism. He thinks that values and virtues have to be grounded in a being or person, that they cannot exist in a detached or abstract manner. I don’t see why this has to be the case. Indeed, it often seems like a form of special pleading. As long as one accepts at least some detached, or abstract metaphysical entities there seems to be no good reason to think moral values must be excluded from the abstract realm.

Furthermore, I think a theist has to accept some such entities. For example, I can’t see how they could seriously believe that logical truths depend on God for their existence. God surely doesn’t have to will or command or intend the truth of modus ponens? It is simply true, part of the basic metaphysical furniture of existence. (If I recall correctly, Craig argues against mathematical Platonism as well.)

6. Are Obligations Dependent on God?
Anyway, as far as I can tell, some theists are willing to accept the idea that not all moral properties depend on God for their existence. But they still hold out hope for obligations. Why is this?

Well consider the argument of Robert Adams. He maintains that commands are necessary for the existence of obligations. His reasoning is that obligations are essentially concerned with the relationships between persons (think of your familial, or social obligations). In such relationships, demands are made by the participants on one another. These demands constitute the basis of the obligations.

The social, legal and political obligations that we all have are probably too flimsy, and too subject to social upheaval and change, to count as truly moral (or so the argument might go). We need a firmer foundation to explain the existence of moral obligations. Unsurprisingly, it is argued that God can provide this foundation by issuing commands telling us what we can and cannot do.

Here’s where things get interesting. Adams says that commands -- actual direct communications to conscious agents -- are necessary for the existence of obligations; divine intentions or desires just won’t fit the bill. His argument is that without the communication, there’s nothing to distinguish an obligatory act from a supererogatory act.

Here’s an analogy to help clarify this idea:

Suppose you and I draw up a contract stating that you must supply me with a television in return for a sum of money. By signing our names to this contract we create certain obligations: I must supply the money; you must supply the TV. Now suppose that I would really like it if you delivered the TV to my house, rather than forcing me to pick it up. However, it was never stipulated in the contract that you must deliver it to my door. As it happens, you actually do deliver it to my door. What is the moral status of this? The argument here would be that it is supererogatory (above and beyond the call of duty), not obligatory. My wishing or desiring that you do something is not enough to create an obligation.

Applying this analogy to our relationship with God, Adams would argue that unless God actually communicates the content of our positive and negative obligations to us, they cannot exist. His desiring, intending or willing that something be obligatory is not enough to create obligations.

Adams’s point is controversial. Some would argue that we can have obligations without communicated demands. Perhaps, for example, a direct apprehension of the good supplies us with obligations; or even an awareness of the structure of our relationships. This could well be the case, but it would seem unattractive for the theist since it suggests, once again, that moral properties such as obligations can be grounded independently of God. In this case they would be directly grounded in the kinds of abstract properties discussed above, or in a process of deduction or inference from those abstract properties. I’ll need to return to this point.

7. Obligations and Epistemology
If we accept Adams’s claim for the sake of argument, then it seems like epistemological issues (i.e. those pertaining to knowledge of God’s commands) could very well become relevant in a debate over moral ontology. In particular, it seems like one could make something like the following argument:

(1) Moral obligations exist if and only if God communicates his will to us in the form of a command.

(2) God has not communicated his will to us in the form of a command.

(3) Therefore, there are no moral obligations.
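The argument has the form of a simple biconditional modus tollens. Letting O stand for “moral obligations exist” and C for “God has communicated his will to us in the form of a command” (my own shorthand), it can be rendered as:

```latex
\begin{align*}
(1) &\quad O \leftrightarrow C \\
(2) &\quad \neg C \\
(3) &\quad \therefore\ \neg O
\end{align*}
```

Since a biconditional entails both conditionals, denying C suffices to deny O.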

The conclusion, as you see, forces the believer to a form of nihilism or (weaker) scepticism about the existence of a core moral property, “the obligatory”. One could extend the argument by saying that, if the believer still thinks that moral obligations exist, then they must provide another grounding for their existence and reject premise (1), but let’s leave that to one side for the moment.

The argument as presented will no doubt seem suspect to many. In particular, premise (2) will seem dubious. First, let me consider what kind of evidence could be marshalled in support of it, and then the obvious objections.

As regards the support for premise (2), I think one can appeal to inconsistent commands (different religions have competing commands and competing obligations); and apparently morally abhorrent commands (e.g. genocide, child sacrifice etc.). Reference to scripture or to the claims made by individual believers who think God has spoken to them could be used to back this up.

Of course, inconsistencies and abhorrent commands do not necessarily imply that God has not communicated his will to us. In other words, they do not necessarily falsify (2). But they do, I think, give reason to doubt that we actually have epistemic access to his commands. Indeed, I think they might give us pretty strong reason to doubt this since in responding to them we tend to fall back on our own judgments about what would and would not count as a morally appropriate command. In this respect, the commands are clearly not self-authenticating. This would seem to increase the warrant for premise (2).

I think there are two directions in which the dialectic could go after this. We could end up in a debate about the inconsistencies or abhorrent commands themselves. In other words, the believer could argue that things are not so bad as they first appear, that scripture is not inconsistent with our beliefs about our obligations, and so we do not have as much reason to endorse premise (2). The non-believer could respond by saying that the believer’s responses are ad hoc or examples of special pleading, responses that they themselves could not sincerely believe. This kind of debate could get quite messy, but it might be fun.

Alternatively, it could be argued that premise (2) works with too narrow a conception of communication. This observation was made by two commenters on commonsenseatheism (Rufus and Ajay) when I originally offered the argument. The suggestion, I take it, is that God could be communicating to us through means other than direct revelation. For example, by writing his commands on our hearts, in which case obligations might just become known to us through our moral intuitions. Or else, maybe, it is through the use of our own faculty of practical reason that we come to know what God demands of us.

There are a few things that can be said in response to this.

First, it’s not at all clear to me that we don’t get into similar problems of inconsistency and abhorrence by appealing to intuitions or practical reason. I’m not sure that people don’t arrive at competing, inconsistent and abhorrent conclusions when using these faculties. Still, I accept that there is considerable evidence suggesting a relative uniformity in moral intuitions, so maybe this first point isn’t very strong.

Second, the response assumes that our intuitions and our practical reason are created by God and are media through which he can communicate to us. It’s not clear to me that we are entitled to make that assumption here. We’d have to provide additional argumentation for it, e.g. through design arguments or through an appeal to Plantinga’s model of warranted belief. Then we really would be in a debate about epistemology.

Still, even then, I think the claim is a stretch. It seems easy to support the idea that words written in books or voices emanating from the heavens are divine communications. It seems much less easy to support the idea that intuitions or practical reason are forms of communication. The former are, at least, analogous to human communications and thus can be supported through an argument from analogy. The latter cannot be similarly supported.

When I engage my faculty of practical reason, it sure doesn’t seem like I’m receiving a divine communication. It seems like I’m directly contemplating that which is valuable or virtuous and drawing out its implications for my actions. When I act on my moral intuitions things are slightly more mysterious, but at most it feels like a trained response to certain states of affairs (kind of like the trained unconscious response of a tennis player to a fast serve). Neither intuition nor practical reason carries the usual indicia of communication.

This brings me to the final point. When I actually engage in moral practical reasoning or use my moral intuitions, and it seems like direct contemplation of, awareness of, or access to the good or the valuable is what dictates the appropriate moral response, then it also seems like communications are, contra Adams, not needed for obligations. It seems like the reasons for action provided by contemplation or analysis of the good are what ground my obligations.

If this is right, then the modified DCT is improperly motivated, and premise (1), above, is false.

This is part 3 of my series on game theory. For an index, see here. The series follows the lectures of Ben Polak which are available on the Open Yale Courses website.

In the previous entry, we introduced some of the formal notation needed for game theory and used it to give a formal definition of the concept of strict dominance. In this entry, we continue by first examining the concept of weak dominance and by then exploring the iterated deletion of dominated strategies, as well as the concept of common knowledge.

1. The Hannibal Game
Hannibal Barca was a famous Carthaginian military commander and tactician. He is most renowned for marching an army, complete with war elephants, through the Pyrenees and the Alps into Northern Italy, during the Second Punic War. Following his arrival in Italy he won a number of notable victories over the Roman army.

His choice of invasion route is widely held up as an example of shrewd military planning. But was it really that shrewd? We can’t provide a definitive analysis here, but we can construct a simple game-theoretic model that provides some insight.

Here’s the set up: An invader is thinking of invading a country and there are two passes through which they can choose to invade. One of the passes is hard and one is easy (from the invader's perspective). The defender must defend but only has enough troops to defend one pass.

Payoffs in this game will be measured in terms of the number of battalions the invader arrives with (or, for the defender, the number that fail to arrive). There is a maximum of two battalions. Suppose that if both players choose the easy pass, the defender can expect to win one battalion from the invader. Suppose that if both choose the hard pass, the defender will win both battalions. Finally, suppose that the hard pass is so difficult that, even if the invader is unimpeded, he can expect to lose one battalion.

The following is the game matrix for this game (reconstructed from the description above; in each cell the invader’s payoff, battalions arriving, is listed first, and the defender’s payoff, battalions that fail to arrive, second):

                    Defend easy    Defend hard
  Easy pass (e)        1, 1           2, 0
  Hard pass (h)        1, 1           0, 2

If you were the defender in this game, what would you choose to do? Think about it for a moment or two and then come back to me....

What did you decide? The suggestion is that you should choose to defend the easy pass, even though doing so is not a strictly dominant strategy. Why make this suggestion? Well, consider the following:

(i) If you defend the easy pass, then the invader is indifferent between the easy pass and the hard pass. In other words, he could choose either since they both yield the same payoff for him.

(ii) If you defend the hard pass, then the invader definitely prefers the easy pass.

Technically, what we say is that for the invader, the easy pass weakly dominates the hard pass. This gives us the following definition:

Weak Dominance: A strategy s weakly dominates another strategy s′ if s yields a payoff at least as high as that of s′ against every possible combination of the other players’ strategies, and a strictly higher payoff against at least one such combination.

Clearly, this holds true for the invader’s strategy e (the easy pass) relative to strategy h (the hard pass): e does at least as well as h whichever pass is defended, and strictly better when the defender guards the hard pass.

According to this model, Hannibal’s decision to invade via the Alps seems irrational. But the model is only as good as the assumptions that go into it. In our case, we simplified massively from the original set of circumstances. In reality, it is likely that there were uncertainties about the payoffs associated with the different routes. These uncertainties could have made Hannibal’s decision more rational.
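The weak dominance check at the heart of this analysis is mechanical enough to sketch in code. The following Python snippet is my own illustration: the payoff numbers are reconstructed from the verbal description above, and the function and variable names are invented for the example.

```python
# Payoffs to the invader (battalions that arrive), reconstructed from the
# description of the Hannibal game above. Keys are (invader's pass,
# defended pass) pairs.
invader_payoffs = {
    ("easy", "easy"): 1,  # easy pass defended: one battalion lost
    ("easy", "hard"): 2,  # easy pass undefended: both battalions arrive
    ("hard", "easy"): 1,  # hard pass undefended: terrain costs one battalion
    ("hard", "hard"): 0,  # hard pass defended: both battalions lost
}

def weakly_dominates(s, t, payoffs, opponent_moves):
    """True if strategy s weakly dominates strategy t: s is never worse
    than t, and strictly better against at least one opponent move."""
    never_worse = all(payoffs[(s, d)] >= payoffs[(t, d)] for d in opponent_moves)
    sometimes_better = any(payoffs[(s, d)] > payoffs[(t, d)] for d in opponent_moves)
    return never_worse and sometimes_better

print(weakly_dominates("easy", "hard", invader_payoffs, ["easy", "hard"]))  # True
print(weakly_dominates("hard", "easy", invader_payoffs, ["easy", "hard"]))  # False
```

Running the same check on the defender's payoffs would show that neither of his strategies dominates the other, which is why the defender has to reason about what the invader will do.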

2. The Numbers Game
One of the key ideas in game theory is dominance solvability. This is when you solve a game through the iterated deletion of dominated strategies. Here’s a simple game that illustrates this phenomenon:

Suppose you are in a class of 50 students (the precise number doesn’t matter) and you are all asked to play the following game. You are given a sheet of paper on which you must write a number between 1 and 100. You are told that the average number for the class will be calculated and the person who writes the number that is closest to being 2/3 of that average will win a prize of some kind. Assuming you would like to win, what number should you write on the sheet of paper?

This game forces you to make use of one of the lessons from part one: namely, putting yourself in other people’s shoes, imagining what they are likely to do, and then determining your own strategy in response to your assumptions about the other player.

To solve the game, I suggest picking an expected average number (pretty much at random) and seeing whether writing a number that is two thirds of that average holds up to scrutiny. As follows:

(1) If everyone were to write a number at random, then we might expect the average number in the class to be 50, thus if you wrote a number that was roughly two thirds of 50, you could expect to win. Therefore, you should write 33 or 34.

(2) The problem is that people don’t choose at random. If they follow the same reasoning pattern as you do, then 33-34 would be the expected average. So you should write a number that is two thirds of this average, i.e. approx. 22.

(3) But, of course, this reasoning process is available to all players, and if they follow it, then 22 would be the expected average. So you should write a number that is two thirds of this.

(4) This reasoning process continues on and on until you reach the number 1.

What’s happening in this game? The answer: an iterated deletion of dominated strategies. To see this in more detail, start the analysis once again from scratch. Note that any number chosen above 67 is going to be weakly dominated by 67, so you can remove any number above 67 from the set of viable strategies. Once you do this, any number above 45 becomes weakly dominated and so must be removed from the set of viable strategies. This process of elimination continues until you reach the number one.

Of course, if you really did have to play this game, you should take into account how strategically savvy your opponents are.
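The shrinking of the strategy set can also be sketched numerically. The following Python snippet is my own illustration; for simplicity it treats the choices as continuous rather than whole numbers, and it tracks the upper bound on the surviving strategies as each round of deletion removes everything above two thirds of the current maximum possible average:

```python
def iterated_upper_bounds(upper=100.0, lowest=1.0, tol=1e-6):
    """Return the successive upper bounds on the surviving strategies.
    Each round deletes every number above two thirds of the current
    maximum possible average, until only (roughly) 1 is left."""
    rounds = []
    while upper - lowest > tol:
        upper = max(lowest, upper * 2 / 3)
        rounds.append(round(upper, 2))
    return rounds

print(iterated_upper_bounds())
# [66.67, 44.44, 29.63, 19.75, 13.17, 8.78, 5.85, 3.9, 2.6, 1.73, 1.16, 1.0]
```

After about a dozen rounds the bound collapses to 1, mirroring the chain of reasoning in steps (1) to (4) above.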

3. Common Knowledge
The numbers game illustrates another important phenomenon in game theory: common knowledge. Two examples will help us to understand this phenomenon.

Consider first the following diagram. It depicts two people wearing pink hats. Person X can see that person Y is wearing a pink hat; person Y can see that person X is wearing a pink hat; but neither knows the colour of their own hat. In this case, the fact that both are wearing pink hats is not common knowledge, it is only mutual knowledge.

This example suggests that common knowledge is a pretty subtle thing. Formally, it is defined as follows:

Common Knowledge: Proposition P is common knowledge between X and Y, iff X knows P and Y knows P, X knows that Y knows P and Y knows that X knows P, X knows that Y knows that X knows P and so on ad infinitum.
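The layered structure of this definition can be made concrete with a short Python sketch (my own illustration; the function name is invented) that enumerates the statements which must all hold for P to be common knowledge, up to a given nesting depth:

```python
from itertools import product

def common_knowledge_conditions(agents, p, depth):
    """List the 'X knows that Y knows that ... P' statements required for
    P to be common knowledge, up to the given nesting depth."""
    conditions = []
    for d in range(1, depth + 1):
        for chain in product(agents, repeat=d):
            # Skip chains that repeat an agent twice in a row, since
            # 'X knows that X knows' adds nothing new here.
            if any(a == b for a, b in zip(chain, chain[1:])):
                continue
            conditions.append(" knows that ".join(chain) + " knows " + p)
    return conditions

print(common_knowledge_conditions(["X", "Y"], "P", 2))
# ['X knows P', 'Y knows P', 'X knows that Y knows P', 'Y knows that X knows P']
```

Genuine common knowledge requires every such statement at every depth, ad infinitum, which is what makes it so much more demanding than mere mutual knowledge.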

Common knowledge is thought to underlie much of social life and can create enormous problems. This is humorously illustrated by our second example: a famous scene from the movie The Princess Bride.

Thursday, April 14, 2011

I recently discovered (amazing really, after all this time) the journal Analysis. It is fast becoming one of my favourites. “Why?” you ask. Well, for one simple reason: brevity. Many of the pieces in Analysis are short (under 10 pages), highly relevant and to the point. You have no idea how refreshing this is, particularly for someone like me who comes from a legal background, where articles can often be over 100 pages long (check out US Law Reviews if you don’t believe me).

(Philosophical Trivia: Analysis is the journal in which Edmund Gettier’s famous 3-page paper, which revolutionised 20th-century epistemology, was published.)

Anyway, I thought I might do a post on the following piece that I recently read in Analysis:

1. Frankfurt defeats PAP
According to one of the standard positions in the philosophical debate, an agent (X) can only be held responsible for an action (A) if they were able to do something else (~A). In other words, the agent is only responsible for A if they could have done otherwise. This position is captured by something called the principle of alternative possibilities (PAP).

Harry Frankfurt suggested that PAP can be undermined by certain counterexamples, of which the following is one:

Frankfurt Case: Black wants Jones to perform a certain action A. Suppose Black is an amazingly good reader of body-language cues, such that he can tell, in advance, what Jones has decided to do. If Jones decides to perform A, then Black will do nothing; if Jones does not decide to perform A, then Black will intervene and force him to do A. Now imagine that, as it happens, Jones decides to perform A and Black never has to intervene.

Question: Is Jones responsible for A?

Most say “yes”. This seems to create a problem for the defender of PAP because, in the scenario described by Frankfurt, Jones could not have done otherwise. So there appears to be a dilemma: either we accept that Jones is responsible for A and discard PAP, or we retain PAP and discard the belief that Jones is responsible for A.

More formally:

(1) In the Frankfurt Case, Jones is responsible for A.

(2) Jones is responsible for A only if Jones could have done otherwise (PAP).

(3) Jones could not have done otherwise in the Frankfurt Case.

(4) Therefore, either Jones is responsible for A and PAP is false; or PAP is true and Jones is not responsible for A.

2. Larvor Defuses Frankfurt
All is not lost for the defender of PAP. Assuming they do not wish to give up on (1), they can always try to challenge (3). In other words, they can argue that Jones can actually do otherwise in the Frankfurt case because alternative possibilities are available to him.

This is exactly what Larvor argues in a previous article in Analysis. Larvor points out that in the counterfactual scenario, i.e. the scenario in which Black intervenes and forces Jones to perform A, it is not actually the case that Jones performs A. In the counterfactual scenario it is Black who performs A. The fact that he does so through the medium of Jones’s body is incidental.

Larvor then argues that if it is not the case that Jones performs A in the counterfactual scenario, then it is the case that Jones can do otherwise in the factual scenario. Why is this? Because Jones actually does face two possibilities in the Frankfurt case: (i) he can perform A of his own volition or (ii) he can get Black to perform A through the medium of his own body.

More formally, we can say that the conjunction of the following two premises defeats (3):

(5) In the counterfactual scenario, Jones does not perform A, Black does.

(6) If Jones does not perform A in the counterfactual scenario, then Jones could have done otherwise in the actual scenario.

3. Nucci Defends Frankfurt
We’ve now reached the point in the dialectic at which Nucci’s article actually becomes relevant. Nucci, you see, tries to respond to Larvor’s argument. He does so by drawing a distinction between:

(a) Not A-ing; and

(b) Avoiding to A.

He claims that the former does not entail the latter, and that this entailment is crucial to the success of Larvor’s objection. To put this in slightly less abstract terms: suppose in the Frankfurt scenario, Black wants Jones to kill a man named Smith. In the actual scenario, Jones decides to kill Smith without any interference from Black. This means that, following Larvor, in the counterfactual scenario Jones does not kill Smith, because Black ends up doing the killing through the medium of Jones’s body.

Nucci’s point is that not killing Smith is a very different thing from avoiding to kill Smith. Avoiding to kill Smith implies that it is somehow up to Jones whether or not Smith is killed. This is akin to Jones having some kind of power or ability to prevent Smith’s death. Clearly, in the Frankfurt scenario, Jones lacks this ability. After all, Black’s intervention in the counterfactual scenario is not up to Jones; it is something within Black’s control.

How does this save the Frankfurt counterexample from defeat? Well, the idea is that in order for PAP to truly hold, it must be the case that not killing Smith is up to Jones. Since even on Larvor’s interpretation this is not the case, PAP does not hold true in the Frankfurt counterexample. Thus Frankfurt’s original dilemma is preserved.

More formally:

(7) To not A is not the same thing as to avoid A-ing.

(8) To be able to do otherwise is to be able to avoid A-ing.

(9) In the Frankfurt scenario, Jones cannot avoid A-ing.

So it goes for Nucci’s defence of Frankfurt. I am unsure whether it is successful or not.