In the article, Gooskens asks the question: can the moral predicates “right” and “wrong” be applied to virtual acts? And if not, where does that leave us in relation to our attitude towards such virtual acts? In answering those questions, Gooskens defends three related claims:

Claim 1: A necessary condition for the application of the moral predicates “right” and “wrong” to virtual acts is that the game player must have the requisite freedom to act in the virtual world.

Claim 2: The satisfaction of this necessary condition is not sufficient for the application of those predicates. This is because the phenomenology of virtual acts is not suited to the application of those predicates.

Claim 3: We can, nevertheless, rightly feel moral discomfort at certain virtual acts, namely those in which the player blurs the boundary between their real self and their in-game self.

Before considering Gooskens’s defence of each of those claims, it’s worth clarifying that he is only interested in what he calls “purely virtual acts”. These are actions that real-world people perform, in a virtual environment, through the medium of their character or avatar, against virtual (i.e. computer simulated) characters or agents, without causal implications in the real world. This excludes, for instance, actions in multi-player games where what I do with my character might affect a character being controlled by another human agent. The reason for focusing on purely virtual acts is straightforward: they seem to be the ones most insulated from typical, real-world, moral concerns. So if it turns out that there is something wrong with them, it would be an interesting philosophical result.

1. Freedom in a Virtual World
If we are going to label certain virtual acts as “right” or “wrong”, then it seems like we must make certain assumptions about the responsibility of the agent performing those acts. The term “wrong” carries with it some admonishment or blame; the term “right” carries with it some praise or exaltation. But if that’s the case, then it would also seem to follow that the freedom of the virtual agent is a necessary precondition for the application of those predicates. Why? Because freedom is a precondition for responsibility.

Or so, at least, Gooskens believes. I’m not sure I agree. I think the labels “right” and “wrong” can be applied without any moral appraisal of the agent who performed the acts. Gooskens acknowledges this in a footnote, and tries to correct for it by talking about rightness and wrongness in the “strong sense” (i.e. in the sense requiring blame or responsibility). I’m willing to accept that modification since the dispute is largely a semantic one.

In addition to this, Gooskens never actually defends his claim that freedom is a necessary precondition for responsibility. He simply states it as a fact. One could quibble with this on the grounds that what is and what is not necessary for responsibility is a contested matter. To be fair, however, I think we can say that Gooskens is using the term “freedom” as a general label for whatever kind of practical rationality is needed for morally evaluable action. Thus, I assume that nothing he says requires any particular position on compatibilist or libertarian theories of responsibility. (Of course, it may exclude certain forms of hard determinism).

Despite all this, I think Gooskens does make some good points about the nature of freedom in the virtual environment. Specifically, he draws attention to the fact that games don’t always allow their players a great degree of freedom. Indeed, classically, games only allowed players to exercise a very narrow form of instrumental rationality. The games would have clearly-defined goals or objectives, and a limited set of pathways (maybe even only one pathway) to achieving those goals or objectives. Arguably, this doesn’t allow for the necessary kind of freedom.

This raises obvious questions about the virtual rape in a game like Custer’s Revenge. In this game there was a clearly defined objective — rape of the Native American woman tied to a stake — and to succeed in the game you had to manipulate your character to avoid attacks and reach the woman. As such, this game only allowed the player to exercise a restricted form of instrumental rationality. This might raise questions as to whether the necessary precondition for moral appraisal is met in this case. The problem with this analysis, however, is that it ignores the decision by the player to play a game with such a clearly-defined goal in the first place. Surely freedom over that choice would meet the necessary condition for responsibility?

In any event, more recent video games allow for a much more robust type of ethical rationality to be exercised. Increasingly, there are multiple ways of achieving a given game objective. For example, in Grand Theft Auto you don’t usually have to mow down pedestrians or kill prostitutes in order to complete the missions. Also, there are some games with open-ended environments, in which you choose your own objectives. These games would seem to meet the necessary condition.

2. Morality and the Phenomenology of the Image-World
Nevertheless, Gooskens thinks that the sufficient conditions for the application of the predicates “right” or “wrong” are not met in virtual environments. To defend this conclusion, he makes use of certain ideas from phenomenology, particularly those of Edmund Husserl.

Phenomenology is a branch of philosophy that deals with the contours of subjective experience. Husserl was one of the leading phenomenologists and he developed a particular phenomenology of the “image world”, i.e. the world of representations, both real and fictional. These representations would include photographs or paintings of real or fictional events and people, and also video games involving real or fictional agents and worlds.

Gooskens argues that acting in a virtual world is a kind of image-consciousness (roughly: an intention to do things through the medium of an image). Image-consciousness is characterised by the “as-if”-modifier, i.e. the situation is such that you are actually perceiving or intending something of type X, but it is as-if you are perceiving or intending something of type Y. Consider the following example:

Grandfather’s Photograph: I am holding in my hand a small black-and-white photograph of my grandfather and looking directly at it. What am I perceiving? I am actually perceiving a grey two-dimensional figure on a piece of card; but it is as-if I am perceiving my grandfather.

That example should be pretty straightforward. The point is that the things that are perceived or intended through image-consciousness are “beings-as-if”, i.e. they are not really there, they are only virtually there.

The key Husserlian move is to argue that image-consciousness not only covers the objective correlates of our perceptions and intentions, but also those mental states themselves. In other words, when looking at my grandfather’s photograph, it is not just that the photograph is only an “as-if” representation of my grandfather, it is also that my subjective state of perception is an “as-if” state of perception. Husserl illustrates the extreme implications of this view with the example of a picture of human suffering. He argues that we cannot really feel real pity by viewing this photograph, we can only feel “as-if” pity.

I have problems with this Husserlian move (assuming Gooskens is fair to Husserl in his description of it). I just don’t see why the emotional states that accompany the perception of a representation cannot be real. This seems particularly true if the image represents something in the real world. If I saw a picture or documentary about real suffering children, I think I would experience a real, genuine form of pity, not just an “as-if” form of pity. Admittedly, the situation is trickier with purely fictional forms of representation, but even then I think there is some room for genuine emotional (or other) responses. Oddly enough, Gooskens seems to accept this when he defends his third claim. We’ll get to that in a moment.

For the time being, we’ll just have to accept the point about the mental states also being affected by the “as-if” modifier. It is central to Gooskens’s argument against the application of moral predicates like “right” or “wrong” to virtual acts. That argument runs something like this: There are two ways in which to assess the morality of virtual acts, the consequentialist way and the deontological way. The consequentialist way assesses rightness and wrongness in relation to the consequences of an act. But since purely virtual acts have no real consequences, they cannot be judged right or wrong in the consequentialist way.

The deontologist fares no better. According to his method, we must assess rightness or wrongness by focusing on the acts and the intentions accompanying them. The problem with this is that — following the Husserlian move — these intentions are affected by the as-if modifier: they are not genuine intentions to do something wrong; they are merely as-if intentions. This is further compounded by the fact that in the virtual world the player acts through the medium of their character. This means that it’s not just that the player is performing acts with “as-if” intentions, it is that the player is acting “as-if” s/he was the game character who had those as-if intentions. Gooskens puts it this way: there is an important distinction between the real-I and the image-world-I. When I play a video game, I occupy the perspective of the image-world-I not that of the real-I.

To set this out more formally:

(1) The moral predicates “right” and “wrong” can only be applied to purely virtual acts using either the consequentialist or deontological methods of assessment.

(2) The consequentialist method cannot work because purely virtual acts have no real consequences by which they can be assessed.

(3) The deontological method cannot work because virtual acts are a type of image-consciousness and are consequently affected by the “as-if”-modifier (they are neither acts nor real intentions: they are merely “as-if” acts and intentions).

If we accept the image-world phenomenology underlying it, the argument would thus seem to go through. One obvious omission, however, is any mention of virtue ethics in the first premise (that being the third branch of normative ethics). One could argue that the omission is fair insofar as virtue ethics is not concerned with the rightness or wrongness of actions per se but, rather, with the goodness or badness of characters. Still, that seems like a stretch since virtue ethics does have implications for the assessment of actions. And as was suggested in my previous post about this topic, the virtue ethical approach could allow us to make moral assessments of those who engage in certain virtual acts.

3. Moral Discomfort and Virtual Acts
As it happens, Gooskens seems to be aware of this. He just doesn’t couch it in terms of “rightness” or “wrongness”. For in the third section of his paper, he goes on to defend the view that we can rightly feel moral discomfort at certain kinds of virtual acts.

In broad outlines, his argument is very straightforward. It is simply that we rightly feel moral discomfort whenever a performer of a virtually immoral act blurs the boundary between their real-self and image-world-self. In other words, when the “as-if” modifier begins to lose its grip. (But, of course, if this can happen it casts some doubt on the preceding argument, doesn’t it?)

Though the argument is straightforward, Gooskens uses some nice examples to flesh it out. First, he takes us away from the hi-tech world of video games and back to the more genteel and low-tech world of the stage play. After all, actors are the quintessential performers of virtual acts and usually we have no problem with them performing acts that would otherwise be highly immoral. Thus, we do not balk at Anthony Hopkins playing the sadistic cannibal Hannibal Lecter.

But there is some complexity to this too. Usually, upon learning that an immoral act is part of a play or performance, we feel a great sense of relief. Imagine walking into your flat to find your actor roommate apparently forcing himself on a woman, but then seeing the director of the play your roommate is in issuing various directions from the couch. In this case, what would initially be a feeling of grave moral concern would shift to one of relief. (Gooskens also gives the example of the performance of Julius Caesar in the British comedy series Blackadder III, which is rather amusing but I won’t recount it here).

Can the emotions ever run in the other direction, i.e. from a feeling of relief (or ease) to a feeling of discomfort? Gooskens argues that they can. Imagine you are watching the stage performance of your roommate’s play. Everyone is captivated by his intense and realistic performance during the rape scene. Afterwards, you congratulate him on this and he tells you that the reason why he was so intense and realistic is that he was actually sexually aroused during the scene. Gooskens suggests that in this case you would feel moral discomfort. Your roommate appears to have blurred the boundary between his real-self and his image-world-self in a most disturbing manner.

Can this ever happen in relation to video games? Yes, it can. Indeed, Gooskens thinks that the case of someone who frequently plays a game like RapeLay is possibly more discomforting. There are certain features of video game performances that differentiate them from stage performances and that render the motives of the performers even more questionable. As he puts it himself:

Users of a virtual environment, however, are not playing for a public but only for themselves. The player of the Japanese rape game [RapeLay], for example, does not portray a rapist to convey something to an audience, and this makes it very probable that his or her only reason to engage in artificial rape is that he or she is actually aroused by it. This causes the distinction between his or her actual person and his or her image-world-I to collapse.

If we accept this analysis, however, don’t we allow ourselves to start applying moral predicates like “wrongness” to such players’ actions? Gooskens insists that we cannot. We can only feel moral discomfort because there are still no intrinsically wrong acts being performed. I find this conclusion somewhat disingenuous. Given that the distinction between the real-self and the image-world-self was central to his argument against the application of moral predicates to virtual acts, the suggestion that this distinction can collapse in certain cases would seem to undermine that argument.

Still, there is something attractive about Gooskens’s analysis. I’m not a fan of over-moralising everything we do, and the idea of holding video game players morally responsible for purely virtual acts seems like a case of moralistic overreach to me. Gooskens allows us to resist this overreach while still acknowledging occasional instances of moral discomfort.

Saturday, April 26, 2014

The notorious 1982 video game Custer’s Revenge requires the player to direct their crudely pixellated character (General Custer) to avoid attacks so that he can rape a Native American woman who is tied to a stake. The game, unsurprisingly, generated a great deal of controversy and criticism at the time of its release. Since then, video games with similarly problematic content, but far more realistic imagery, have been released. For example, in 2006 the Japanese company Illusion released the game RapeLay, in which the player stalks and rapes a mother and her two daughters.

The question I want to explore in this post is the morality of such representations. One could, of course, argue that they are extrinsically wrong, i.e. that they give rise to behaviour that is morally problematic and so should be limited or prohibited for that reason. This is like the typical “violent video games cause real violence”-claim, and I suspect it would be equally hard to prove in practice. The more interesting question is whether there is something intrinsically wrong with playing (and perhaps enjoying) such video games. Prima facie, the answer would seem to be “no”, since no one is actually harmed or wronged in the virtual act. But maybe there is more to it than this?

To answer that question, I am going to enlist the help of Stephanie Partridge. She has written a number of interesting articles over the years about the ethics of virtual and fictional representations. She argues that we should have a modestly moralistic attitude toward such representations. In other words, she argues that there is something intrinsically wrong with enjoying such representations. The “it’s only a game”-response doesn’t always work.

1. The Virtue Theoretical Approach and the Challenge of Video Games
You might be inclined to think that your aesthetic enjoyment of something (e.g. video games, movies, books, jokes) is an intrinsically amoral thing. To the extent that each of those representations involves fictional entities and events (and for the purposes of this discussion we are assuming that they only involve such entities and events), they don’t seem to have any intrinsic moral relevance. No agent is harmed, no wrong is done, it’s all just a matter of subjective enjoyment.

That seems right at a first pass. But there is, however, one thing about the aesthetic reaction to fictional representations that is real, namely: the aesthetic reactions themselves. If you find a racist joke funny, then you are really amused; if you are angered or frustrated by your lack of success in a video game, then you are really angered and frustrated; and if you enjoy engaging in virtual acts of rape, then you are really in a state of enjoyment while engaging in those acts.

The reality of these aesthetic reactions suggests that the fictional world isn’t entirely beyond the moral realm. In particular, it suggests that a virtue ethicist can easily make an argument against games like Custer’s Revenge and RapeLay. For the virtue ethicist, what matters when it comes to morality is the cultivation of positive character traits (the “virtues”). A person who enjoys fictional representations of the sort described is expressing something negative about their character. Consequently, there is something unvirtuous (“vicious”) about that individual. They are rightly the subject of moral criticism.

In very broad terms, the virtue theoretical approach looks like the best one to take when it comes to arguing that there is something intrinsically morally wrong with enjoying certain fictional representations. Nevertheless, there are two challenges to this position that must be overcome.

The first challenge forces us to confront the fictional nature of the representations. Imagine the following case:

Colosseum Spectator: Suppose you and a friend are spectators at the Colosseum in ancient Rome. You watch all the events, but the only one that really excites your friend, and elicits laughter and other expressions of joy from him, is when the Christians are fed to the lions. What should you think of him?

That he’s a bad guy, right? That there is something morally vicious about his character. That seems straightforward enough, but that’s probably because there is real human suffering underlying his aesthetic enjoyment. He would have to be morally corrupt to enjoy that kind of thing. But what if it was a video game in which he fed Christians to the lions? And what if you have no other reason to suspect that your friend is morally corrupt? Outside of this one video game he does not seem to derive any enjoyment from human suffering. So is enjoying a purely fictional representation of such suffering enough to say something bad about his character?

This brings us to the second challenge. It’s not that all fictional representations of immoral conduct raise questions about the moral character of those who enjoy them, but rather that a particular subset do, specifically the subset involving things like Custer’s Revenge, RapeLay, and maybe virtual paedophilia, virtual genocide and so forth. This challenge is something I covered previously when I looked at Morgan Luck’s “Gamer’s Dilemma” (which asked: why is virtual murder acceptable but virtual paedophilia is not?).

Any satisfactory defence of virtue theoretical approach to fictional representations will need to address these two challenges. Partridge thinks she is up to this task. Let’s see why.

2. The Incorrigible Social Meaning of (Some) Fictional Representations
In brief outline, Partridge’s argument is this: there are certain fictional representations that have incorrigible social meanings. That is to say: there are fictional representations that have a limited range of reasonable interpretations, and that range may require us to see the representation as saying something prejudicial (or otherwise morally problematic) about the society in which we live. It is, consequently, not reasonable for any member of our moral community to interpret those representations in another way. Thus, if they neglect or ignore the incorrigible social meaning of the representations, we are entitled to draw negative inferences about their character.

We can state this in slightly more formal terms:

(1) If a fictional representation has an incorrigible (and morally problematic) social meaning, then there is a limit on the range of reasonable interpretations of that representation.

(2) Certain fictional representations — e.g. the representations of rape and sexual violence in games like Custer’s Revenge and RapeLay — have incorrigible and morally problematic social meanings.

(3) Therefore, there is a limit on the range of reasonable interpretations of these representations.

(4) If a member of the relevant moral community fails to notice the limited range of reasonable interpretations — i.e. if they interpret outside of that range — then we are entitled to draw negative inferences about their moral character.

(5) Therefore, if a person interprets fictional representations such as those found in Custer’s Revenge or RapeLay outside of the limited range of reasonable interpretations, we are entitled to draw negative inferences about their moral character.

This formalisation is messy; I’m sure it could be cleaned up. Nevertheless, it will suffice for present purposes. What we need to do now is go through some of the key premises in more detail.

We start with premise (1). What does it mean to say that fictional representations can have incorrigible and morally problematic social meanings? And why does that restrict the range of reasonable interpretations? Partridge starts with a simple observation: fictional representations can be more or less fictional. That is to say: they can be more or less connected to the real world. Some fictional representations are intended to closely map onto the real world, some are more fantastical. Still, virtually all representations require us to bring our knowledge of the real world to bear upon our interpretations of those representations. If I am reading a fantasy novel, I will accept certain fantastical premises and suspend my disbelief in accordance with those premises, but I won’t be willing to completely suspend disbelief. This is why we often get critiques of such novels in terms of the “realism” of their world-building or characterisations.

This degree of connection to the real world can give rise to the phenomenon of incorrigible social meanings. Partridge gives the example of fictional representations of African-Americans eating watermelons. I have to confess that this rings no alarm bells for me (coming from an Irish background) but Partridge informs me that such representations have an incorrigible social meaning in the United States. The U.S. has a morally problematic history with slavery and racism, and images of African-Americans eating watermelons have apparently played a part in that history. Specifically, they have been used to insult and dehumanise.

Partridge’s point is that within the U.S. such representations have an incorrigible and morally problematic meaning. Anyone who is a member of that community should be aware of that meaning. Partridge also gives other examples of representations with such incorrigible social meanings. For instance, an image of a simianised black person would have a racist meaning across most (probably all) cultures. Likewise, the representations in Custer’s Revenge and RapeLay have incorrigible social meanings, given the treatment of the Native American population in the U.S. and the systematic social oppression and sexual objectification of women. (This, incidentally, gives us a defence of premise (2) as well).

Let’s grant that such representations have morally problematic meanings. Is it really true that there is consequently a limit on the range of reasonable interpretations? Partridge’s answer is a nuanced one. Sometimes such representations could have reasonable alternative meanings but, she argues, this will almost always be when they are explicitly used by people to draw attention to the moral problems they raise. For example, it may be possible for a black artist to use racist imagery in order to make an anti-racist statement. But that is a limited range of cases. In most instances, the morally problematic meanings will be the only reasonable ones.

Moving on then to premise (4). Is it really fair to say that someone who fails to heed the incorrigible social meaning deserves moral criticism? If our game-playing friend laughs off the crude depiction of rape in Custer’s Revenge and insists that it is only a game, are we still entitled to chastise him? Partridge wants to say “yes”, but in saying that she needs to be sensitive to the second challenge outlined above. Why is moral criticism deserved in relation to some kinds of immoral representation and not others?

Partridge admits that there is no definitive test. Nevertheless, she wants to argue that representations that play upon social prejudice and oppression are almost always going to be problematic. Anyone who fails to pay heed to their social meaning is likely to be worthy of moral criticism. Why? Because oppression stems from a denial of full respect to certain groups of people. It is something that one group (the elite) denies to another group. Hence, if the problem of oppression is to be corrected, it requires changes in social attitudes. This, coupled with the fact that the denial of respect to certain groups has frequently been facilitated by fictional representations, is what makes the failure to attend to the incorrigible social meaning of such representations worthy of criticism. To quote from Partridge herself:

This denial has been achieved partly through the kinds of imaginative entertaining that the games in question invite us to adopt. This is what makes the images cited here particularly incorrigible, so that a friend who responds to our criticism of Custer’s Revenge by claiming, ‘‘Come on, it’s only a game; I’m not sexist.’’ sees his imagining as just some random imagery detached from his own moral commitments, and detached from the moral facts on the ground. Such a failure is a failure both of sensitivity and of sympathy—sensitivity to the social meaning of the imagery, and sympathy with those who are the targets of such imagery.

A failure of sensitivity in this instance, given the need to correct social attitudes, is what warrants the moral criticism.

With this defence of premise (4) complete, the argument as a whole seems moderately plausible.

3. Implications
So what are the implications of this for how we treat fictional representations? Must we now approach video game playing with a moralistic mindset? Will this radically alter the way in which we regulate fictional representations? A few points are worth mentioning here.

First, it’s important to realise that Partridge’s arguments are relatively modest and her position quite complex. She is not claiming that enjoyment of all immoral fictional representations is worthy of opprobrium, nor is she claiming that it is easy to say when it is worthy of opprobrium. The creators of fictional worlds can insulate their fictional representations from social meanings. But this can be a tricky process. Partridge discusses, in particular, the example of Resident Evil 5, a game set in Africa in which the character, Chris, kills hordes of marauding African zombies. Some complained about the representation — a white character was, after all, being required to kill African zombies — but one could argue that within the context of that fictional universe, the representation made sense and was hence insulated from its racist social meaning.

Second, presumably because of its modesty and complexity, Partridge doesn’t really consider the social implications of her argument. To be precise, she doesn’t really consider the form that the moral criticism should take, or the regulatory impact on video games and other fictional media. Do we just need to gently admonish our friends and the creators of those representations? Encourage them to be more sensitive in the future? Or do we need a more intrusive, more punitive, social response? Those are questions I’d like to see answered.

A final point that’s worth mentioning is that Partridge’s argument also forces us to draw a distinction between certain kinds of response to fictional media. The person who thinks that games involving virtual rape are a “bit of a laugh” is different from the person who actively enjoys those kinds of representation, who seeks them out, and seems to derive some (sexual) pleasure from them. The former is demonstrating a lack of requisite moral sensitivity; the latter is demonstrating a deeply troubling moral character. This suggests an additional layer of complexity in the analysis. In order to make a proper assessment of moral character, we must pay attention not only to the meanings of the representations but also to the precise nature of the aesthetic response to those representations.

Okay, so that brings us to the end of this post. I should emphasise that this is a very brief overview of Partridge’s arguments. In her published work, she uses many more examples and elaborates her views in additional ways. To be honest, it’s not always easy going, but if you are interested in the topic, I’d suggest checking it out.

Thursday, April 24, 2014

This is going to be the final part in my series on Nicholas Agar’s book Truly Human Enhancement. In the most recent entry, I went through the first part of the argument in chapter 4. To briefly recap, that argument contends that radical enhancement may lead to the disintegration of personal identity (in either a metaphysical or evaluative sense).

Agar supports this argument with three main claims. The first is that our autobiographical memories — the medium through which we situate and organise our life experiences — are crucial to our personal identities. The second is that the process of autobiographical remembering is a reconstructive one that relies on present cognitive resources. And the third is that by radically enhancing ourselves we risk altering those cognitive resources to such an extent that we become unable to remember ourselves. This is because our past lives will lack significance from the perspective of our future, radically enhanced selves.

In this post, I am going to work through the second half of chapter 4. This half continues the argument from the first half, and makes considerable use of an analogy between the relationship of childhood to adulthood and the relationship of normal adults to radically enhanced adults. I’m going to explain why that analogy comes into the discussion; how Agar supports his argument through that analogy; and then I’m going to offer some general criticisms of the argument from chapter 4.

1. Why wouldn’t we want to grow up to be posthumans?

I’d give all the wealth that years have piled,
The slow result of life’s decay,
To be once more a little child,
For one bright summer’s day.

(From Solitude by Lewis Carroll)

If Agar is right, we should avoid radical enhancement because of the risk it poses to identity. Transhumanists are likely to disagree. In order to provide a reasoned basis for their disagreement, they might like to fall back on the argument proffered by Nick Bostrom and Toby Ord. This argument — parts of which I’ve covered before — holds that opposition to enhancement often stems from an irrational bias toward the status quo.

Bostrom and Ord introduce several thought experiments and tests which are designed to defuse this bias. One of them draws an analogy between the transition from childhood to adulthood and the transition from the unenhanced self to the enhanced self. The idea is this: As children, we have certain interests, desires, goals and aspirations. We want to play hide-and-seek with our friends; we want to climb trees and explore the woods; we want to watch cartoons and read comic books; and so on. In growing up, we physically and intellectually mature to such an extent that some of these desires and interests seem less and less important. We may even disparagingly refer to them as being “childish”.

The point, as Agar notes, is that there is something slightly “tragic” about growing up, about putting away childish things. We lose our childish identities. Nevertheless, as Bostrom and Ord argue, this is almost always a worthwhile process. When we become adults, new interests and desires can emerge; we begin to see the world in a different light. With some exceptions (e.g. Lewis Carroll in the quote given above), most of us are happy that we became adults and moved beyond our childhoods.

But if that’s right — if one phase of physical and intellectual maturation is, all-in-all, a “good thing” — why wouldn’t a second phase also be a good thing? Why should we call a halt to our physical and intellectual maturation and deny ourselves the opportunity for radical enhancement? That, at any rate, is the question posed by Bostrom and Ord. We can think of this in terms of different patterns of development: the traditional pattern and the enhanced pattern:

Agar is trying to argue that the traditional pattern is better (or, rather, the “safer” bet). So is there anything more to Agar’s loyalty to the traditional pattern than an irrational bias?

2. The Role of Adulthood in Human Life
Agar, unsurprisingly, thinks that there is. He offers two reasons for thinking that Bostrom and Ord’s analogy is misleading. The first is that the discontinuity between the unenhanced self and the radically enhanced self is likely to be far greater than that between childhood and adulthood. The second is that just because it is good to undergo one transition from childhood to adulthood does not mean that it will be good to undergo another. In other words, there are important disanalogies between the two cases.

As regards the first reason, Agar only explores this briefly in this part of chapter 4 (though, in some ways, the entire book is about those discontinuities). He speaks specifically about the gradual nature of the transition from childhood to adulthood, how we acquire skills and abilities through persistent practice and effort, and how radical enhancement will allow us to acquire new skills much more rapidly and without much effort. He gives the example of a math neuroprosthesis which effectively replaces the parts of your brain responsible for developing mathematical ability.

The concerns Agar raises about effort and gradual change are interesting, but I am not going to discuss them at any length. I discussed them previously when looking at Tom Douglas’s paper on moral effort and moral enhancement. There are differences, of course, between moral enhancement and other forms of enhancement, but I still think that my previous discussion maps out many of the issues one could raise about Agar’s argument.

So I’m going to focus on Agar’s second reason instead. This centres on the claim that undergoing a second transition from human adulthood to posthuman adulthood would not be a good thing. To defend this, Agar proposes an analysis of adulthood and childhood in terms of their role in an individual’s life. He rejects alternative analyses — e.g. in terms of physical or mental properties — on the grounds of speciesism.

What, then, are the roles of childhood and adulthood in an individual’s life? Here I quote directly from Agar. On adulthood:

I define adulthood in terms of the role it occupies in the life of a certain kind of being…For a species to have a stage properly recognized as adulthood, it must be capable of forming complex, all-encompassing desires about the direction its life should take…[Adulthood] is, nevertheless, a stage characterised by the physical, cognitive, and emotional resources to arrive at final and decisive plans about one’s life. There should be no higher authority in respect of questions about an individual’s own basic values and interests.

(Agar 2013, p. 72)

And on childhood:

This approach to adulthood suggests a matching view of childhood. This approach does not identify childhood with a list of characteristics typical of human children. Rather, childhood is defined in relation to adulthood…Childhood is a stage that anticipates and prepares for the later stage of adulthood…the mere fact that a desire about the direction a life should take is expressed by a child necessarily deprives it of authority.

(Agar 2013, p. 73)

These quotes suggest that the following is a fair characterisation of Agar’s analysis:

Adulthood: The stage at which a person has the physical, cognitive and emotional resources to arrive at final and decisive plans about their life, i.e. the stage at which final authority over a life plan arises.

Childhood: The stage at which a person does not have the physical, cognitive and emotional resources to arrive at final and decisive plans about their life, i.e. the stage at which there is no final authority over a life plan.

Somewhat annoyingly, Agar goes on to suggest that adults can, of course, make bad decisions about life plans and some children can be quite mature and settled, but this still doesn’t mean that the adults lack the requisite authority or that the children have it. I say this is “annoying” because it makes it unclear to me whether the analysis of childhood and adulthood rests primarily on the capacities of the individuals at the respective stages or on the notion of “authority”. If it’s the latter, as I suspect it may be, then I think the analysis is problematic.

Anyway, what does all this definitional work do for us? Agar thinks it allows us to see more clearly why the transition from human adulthood to posthuman adulthood is prudentially problematic. The argument has two parts to it. The first is the claim that when one is due to undergo a transition to a more mature state, one must realise that one’s current plans and desires are ultimately beholden to the future plans and desires of that more mature being (it is here that the concept of “authority” is significant). The second is the claim that one of the key functions of adult parents is to help children undergo the transition, to make sure they don’t prematurely commit to plans and decisions that will negatively impact on their more mature states.

And this is where we encounter the big problem with radical enhancement. We lack posthuman parents who can help us transition to posthuman adulthood. This, Agar argues, has a corrosive effect on our current projects and commitments. We need to bear in mind our future radically enhanced selves, but we don’t quite know what they will value. The projects and plans we currently value may not seem valuable to them. We are faced with a dilemma.

Agar adds to this dilemma by suggesting a potential regress problem for the proponent of radical enhancement. Once they reach their first radically enhanced state, they may demand more radical enhancement, and once they reach their second radically enhanced state they may demand even more. The result is that they are forever in a state of childhood: awaiting transition to a more mature state of being. Agar specifically cites Kurzweil’s suggested goal of saturating the cosmos with our intelligence as an example here. Though that goal will eventually come to an end (assuming the cosmos is finite), it does potentially set up a very long-term state of childhood.

3. Thoughts and Criticisms
I think there is some interesting food for thought in Agar’s discussion of personal identity and radical enhancement, and I’m intrigued by the analyses he proposes of personal identity and the concepts of childhood and adulthood. That said, I find myself unpersuaded by these particular arguments against radical enhancement. There are several reasons for this. What follows is a disjointed and off-the-cuff list.

First, and a somewhat minor point, I find Agar’s definitions of adulthood and childhood argumentatively problematic. It seems to me that he has essentially defined adulthood as being that stage of life from which no further augmentation or alteration of values or life plans is desirable (or “authoritative”). That comes pretty close to definitionally stacking the deck against the proponent of enhancement.

Second, as mentioned above, I find the notion of “authority” over desires and life plans problematic. This is because I’m not sure that any particular temporal slice from an individual’s life should have authority over another. This comes up all the time in discussions of autonomy in healthcare (e.g. advance directives). I think there certainly are some cases where such an authority relationship can exist. A good example would be where someone loses decision-making capacity due to illness. In such a case, we should probably refer back to an earlier stage with such capacity. Obviously, that doesn’t arise in the case of radical enhancement and, to be clear, Agar isn’t claiming that it does — the whole point of his argument is that the radically enhanced self would have the authority. Nevertheless, to rest an analysis of the concept of adulthood on such a contested normative concept seems odd to me.

Third, I don’t really see why we couldn’t transition gradually to a radically enhanced, posthuman state. Agar seems to assume that any such change would be radical and discontinuous (e.g. full upload of the mind to a computer). And that it is this radical discontinuity that leads to the disintegration of identity. But why couldn’t the change be much more gradual and partial than that, with each stage in the transition retaining a strong link to the previous one? Sure, you will end up as a very different being, but that’s really no different from previous transitions like the transition from zygote to infant and infant to adult. Agar doesn’t give any reason to think that gradual transitioning is not possible.

Fourth, and opposing this previous point, Agar also doesn’t give weight to radically discontinuous forms of enhancement that would bypass the concerns he raises. I speak here, in particular, of enhancement achieved through germ-line genetic manipulations. In those cases, children would be born into a posthuman form of existence. They would not have to worry about their future radically enhanced selves having different life plans and desires, or about the troubling second childhood that Agar outlines. (There might still, of course, be the problem of perpetual childhood, but that’s different).

Fifth, and perhaps most problematic, is the fact that Agar’s analysis seems to ignore relevant comparators. In other words, in his arguments he compares a somewhat idealistic state of human adulthood (in which the adult has commitments and life plans that they presently find valuable) with a future radically enhanced state of being. But that’s not really a fair comparison. As Angra Mainyu pointed out in the comments section of the previous post, we might need to compare the possibility of a radically enhanced future self with the possibility of one’s current adult self dying. That is to say: radical enhancement might be the only way of avoiding death. If that’s the case, then whatever the risks to identity, the decision to radically enhance will seem a lot more prudentially wise than Agar makes it out to be.

Agar may quibble here that, according to his argument, radical enhancement would be as good as death due to the disintegration it involves. But Agar himself acknowledges that the disintegration he defends is probabilistic only: it may or may not happen. Indeed, his argument rests on several controversial normative and factual premises (e.g. theories of autobiographical memory and personal identity). If there’s even a small chance of survival via radical enhancement, it could be the rational thing to do. Ironically, Agar discusses this type of objection earlier in the chapter when recounting a dispute between Walter Glannon and John Harris, but he seems to ignore it when it comes to his own argument.

Sixth, and finally, his point about the lack of radically enhanced guardians to help us transition to posthumanhood looks pretty weak to me. As long as someone chooses to undergo the transition — by whatever means or for whatever reason — we will acquire such a guardian. And as more choose to undergo the transition, we will acquire more.

Okay, so that brings me to the end of this series of posts. Unfortunately, I have only managed to cover a small fraction of Agar’s book in this series. Hopefully, this is enough to give you a flavour of its contents and Agar’s overall style of argumentation. Anyone with an interest in the enhancement debate should probably check out the full thing. Subsequent chapters deal with enhancement for the purposes of scientific discovery; the morality of life extension; the case for truly human enhancement; and the enhancement of moral status. All interesting topics.

Friday, April 18, 2014

If we extended our lives by 200 years, or if we succeeded in uploading our minds to an artificial substrate, would we undermine our sense of personal identity? If so, would it be wiser to avoid such radical forms of enhancement? These are the questions posed in chapter 4 of Nicholas Agar’s book Truly Human Enhancement. Over the next two posts I’ll take a look at Agar’s answers. This is all part of my ongoing series of reflections on Agar’s book.

Agar’s main contention is that radical enhancement could indeed pose a serious threat to our personal identity and that this is something we should care about. Arguments about what does or does not pose a threat to identity often take the following form:

(1) Condition X is a necessary condition for personal identity to obtain.

(2) Y undermines or cancels condition X.

(3) Therefore, Y necessarily undermines personal identity.

Such arguments are part of the game played by philosophers to identify the necessary and sufficient conditions for the realisation or exemplification of certain properties and concepts. Agar does not wish to play this game. He is clear that he is not arguing that radical enhancement will necessarily undermine personal identity. He is arguing that it could, and that we would be wise to avoid that risk. This is really Agar’s preferred mode of argumentation, as mentioned in the previous post.

To get a handle on Agar’s argument, we will need to do three things. First, we’ll need a (very) brief primer on the concept of personal identity and the sense in which that concept is invoked in Agar’s argument. Second, we’ll need to look at Agar’s argument that radical enhancement threatens autobiographical memory. And then third, we’ll need to consider how Agar’s argument can be interpreted in terms of a game we play with our future selves.

1. What is personal identity and why does it matter?
Who we are is a matter of great importance to most of us. We spend our lives developing a sense of self, a sense of purpose and direction, a sense of identity. Our identity is what binds us together, what makes us whole. But as Agar notes, there are at least two different senses of the word ‘identity’ in the philosophical debate:

The Metaphysical Sense: Identity is what makes me the same person as I was ten minutes ago. Identity is a one-to-one relationship. So X and Y are identical if and only if they are one and the same thing (hence why this is sometimes referred to as numerical identity).

The Evaluative Sense: Identity is what makes my continued existence meaningful, valuable or worthwhile. When asking questions about identity I am interested in what conditions or mishaps might make continued existence worthless or devoid of meaning. Identity, under this definition, is what matters to us in our survival (and, as Parfit famously argued, this need not require a one-to-one relationship).

There are many interesting philosophical questions about identity. And we could go back and forth forever on which sense of identity is the important one. Those are debates worth having. But we don’t need to have them here. Agar’s argument against radical enhancement works with either sense in mind.

In addition to the different senses of the word, there are also different accounts of what constitutes our identity (in either sense). The two leading ones are the psychological continuity account and the animalist account. According to the psychological continuity account, our identities are constituted by a set of temporally overlapping mental states. In other words, the reason why I am the same person I was ten minutes ago (or the reason why my self from ten minutes ago should care about who I am now) is that we share certain mental states: we have the same beliefs, desires, memories and so on. According to the animalist account, our identities are constituted by a continuity relationship between the biological organism that we are now and the one we will be later. (This is often thought to allow for identity to be preserved in troubling cases like that of the person in a PVS).

Agar works with the psychological continuity account in his argument. This seems appropriate to me since that seems like the most plausible theory of identity (particularly in the evaluative sense). Still, one might wonder whether his argument would work as well against the animalist account. I think it probably would. Indeed, depending on the nature of the radical enhancement, it might be easier to argue that identity is undermined on the animalist account. For example, if radical enhancement involves the destruction of the biological human form, and the uploading of the mind to a digital medium, then I think it is safe to say that biological continuity has been undermined.

2. How Radical Enhancement Might Undermine Personal Identity
So much for the conceptual framework. Now we must deal with Agar’s actual argument: how exactly would radical enhancement undermine or threaten our identities? To answer that, Agar appeals to the notion of autobiographical memory. This is the memory of personal events and details from our lives. It is like the narrative tale we tell ourselves about who we are, what has happened and why it is important.

According to modern theories of memory, remembering is a reconstructive process. My brain does not record my past life experiences like a video-recorder. Instead, it creates schemata which encode salient information. The act of remembering then fills out these schemata. But in order to fill them out, other cognitive resources must be drawn upon. For example, my ability to remember riding a bike this morning might rely on my having the requisite learned skill and background knowledge. It might also rely on my evaluative beliefs and desires at the present moment (e.g. my ongoing interest in good health, better bike riding etc.).

Disease can affect the reconstructive processes of autobiographical memory. Alzheimer’s is the example discussed by Agar. He refers, in particular, to the case of Ronald Reagan. It was said that, in his final years, Reagan forgot that he had been President of the US, a devastating loss of autobiographical memory. Agar speculates that this probably wasn’t because all the schemata for his presidential life were destroyed, but rather because he lost a lot of the background knowledge that would be needed to reconstruct those memories.

How does this relate to radical enhancement? Well, Agar wants to argue that radical enhancement could have a disruptive effect on the reconstructive process. If we radically enhance ourselves (either through biological or technological manipulation), our future radically enhanced selves are unlikely to actually forget about us. Indeed, modern recording technologies will probably make that impossible: the past will always be recoverable if they wish it to be.

The problem, instead, is that our future radically enhanced selves are likely to have very different evaluative frameworks. Things that seemed important or significant to us will seem trivial and inconsequential to them. My 120km bike ride this morning — a significant and meaningful achievement to me right now — will look like a walk in the park to my radically enhanced future self. He (or she or it) won’t deem it important enough to remember. We experience this to some extent ourselves right now: think about how rarely you have a good memory for the mundane details of your life.

Agar illustrates the problem by discussing a fictional example. The example comes from the late Iain M. Banks’s novel Matter. The novel was part of his Culture series, which frequently engaged with themes of radical enhancement. In the novel, a character named Anaplian — who comes from a world with sixteenth century technology and culture — undergoes a series of radical enhancements. She is made significantly stronger and faster; she can sense radio waves; she can operate machinery through thought alone; and she can switch pain and fatigue on and off at will. After she undergoes all this, she develops a very ambivalent relationship to her past self. She cares less and less about who she was.

Within the novel, there are probably good reasons for this: her people were backward and patriarchal. But Agar suggests that even if they were not, radical enhancement will just tend to produce that feeling of disconnect because of the different evaluative frameworks. So his argument works a little bit like this:

(4) Our autobiographical memories are integral to our identities (metaphysical/evaluative): they are like a record of important events and experiences in our lives.

(5) Autobiographical remembering relies on an array of background cognitive resources to reconstruct the memories.

(6) Radical enhancement will so alter those background cognitive resources that we may no longer be able (or, rather, willing) to reconstruct those memories.

(7) Therefore, radical enhancement will undermine our identities.

3. A Dangerous Game with our Future Selves
I won’t say too much about the merits of this argument for now (I leave that until the next post), but here is a way to think about it that might be helpful.

Game theorists often look at the decision to start smoking as a game you play with your future self. When given the opportunity to reflect, broadly, on the shape of their lives, many people would prefer not to smoke, despite the short-term rewards they experience. This is because they prioritise and prefer their long-term health and well-being over their short-term desire to smoke. The problem is that we aren’t very good at prioritising long-term goals over short-term desires. We often undergo a process known as preference reversal: a point where the short-term desire to smoke rises above the long-term desire for health and well-being.

Because of this phenomenon of preference reversal, it pays to think about the decision to start smoking as being akin to a game you play with your future self. This is depicted in the game tree below. Your present self gets a payoff of 0 for not smoking in the short-term, and a small reward from smoking in the short-term (say 1, but it doesn’t really matter for this game). In the long-term, your present self gets a payoff of 1 for not smoking, and a payoff of −1 for smoking (due to the health effects). Your future self, on the other hand, gets a payoff of 1 for smoking and −1 for not smoking (he, after all, is addicted and suffers the loss much more).

The Smoking Game

When you look at the decision to start smoking like this, you realise that your future self is not on your side. He is a competitor in this game. If you care about your long-term health, you need to do something to make sure he doesn’t get a chance to act out his preferences. You can do this either by not smoking at all in the short-term (and thereby never developing the addiction) or you can develop some commitment strategy that will limit the options open to your future self (think Ulysses and the Sirens).
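The backward-induction logic of this game can be sketched in a few lines of Python, using the payoff numbers given above (0 or 1 in the short term and 1 or −1 in the long term for the present self; 1 or −1 for the addicted future self). The function names are mine, and the sketch deliberately ignores commitment strategies:

```python
# A minimal sketch of the smoking game, solved by backward induction.
# Payoffs are taken from the description above; the structure is my own
# illustration, not a formal model from Agar's book.

def future_self_choice():
    # The addicted future self prefers to keep smoking (payoff 1)
    # over quitting (payoff -1).
    payoffs = {"smoke": 1, "quit": -1}
    return max(payoffs, key=payoffs.get)

def present_self_choice():
    # The present self anticipates the future self's move and chooses
    # on the basis of its own long-term payoffs.
    if future_self_choice() == "smoke":
        # If the future self will keep smoking, starting now yields a
        # long-term payoff of -1, while never starting yields 1.
        outcomes = {"start smoking": -1, "never start": 1}
    else:
        # If the future self would quit anyway, the long-term cost of
        # starting disappears.
        outcomes = {"start smoking": 1, "never start": 1}
    return max(outcomes, key=outcomes.get)

print(future_self_choice())   # the addicted future self chooses to smoke
print(present_self_choice())  # so the present self should never start
```

The sketch makes the competitive structure explicit: because the future self can be predicted to act against the present self’s long-term interests, the only winning move for the present self is to deny the future self the choice altogether.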

The relevance of this here is that Agar’s argument about radical enhancement and personal identity involves a similar game. Agar is saying that your present self has certain life interests and experiences that are important to it right now. It would like to see those interests and experiences preserved in the long term. But your present self can’t rely on your radically enhanced future self caring about those things. It may have very different preferences. Agar emphasises this by noting the asymmetrical attitude we have towards our past and future. Since we tend to care more about our future than our past, we can expect our radically enhanced future self to have a similar bias. This increases the likelihood of him/her disconnecting from our present selves.

Okay, so I’ll leave it there for now. In the second part, I’ll look at Agar’s childhood-adulthood analogy (which he uses to further underscore his point about radical enhancement and identity), and consider some weaknesses in this argument.

Tuesday, April 15, 2014

I recently published an article in the Journal of Evolution and Technology on the topic of sex work and technological unemployment (available here, here and here). It began by asking whether sex work, specifically prostitution (as opposed to other forms of labour that could be classified as “sex work”, e.g. pornstar or erotic dancer), was vulnerable to technological unemployment. It looked at contrasting responses to that question, and also included some reflections on technological unemployment and the basic income guarantee.

I hate to say this myself, but I thought the arguments in the paper were interesting, and I’d like to hear what other people think about them. But since people are busy, and may not be inclined to read the full 8,000 words, I thought I would provide a brief precis of the main arguments here. That might persuade some to read the full thing, and others to offer their opinions. So that’s what I’m going to do. I’m going to focus solely on the arguments relating to the replacement of sex workers by robots, leaving the basic income arguments out.

This is the first time I’ve ever tried to summarise my own work on the blog — I usually focus on the work of others — and it comes with the caveat that there is much more detail and supporting evidence in the original article. I’m just giving the bare bones of the arguments here. No doubt everyone else whose work I’ve addressed on this blog wishes I added a similar caveat before all my other posts. In my defence, I hope that such a caveat is implied in all these other cases.

1. The Case for the Displacement Hypothesis
Those who think that prostitutes could one day be rendered technologically unemployed by sophisticated sexual robots are defenders of something I call the “displacement hypothesis”:

Displacement Hypothesis: Prostitutes will be displaced by sex robots, much as other human labourers (e.g. factory workers) have been displaced by technological analogues.

As I note in the article, a defence of the displacement hypothesis is implicit in the work of several writers. The most notable of these is, perhaps, David Levy, whose 2007 book Love and Sex with Robots remains the best single-volume work on this topic. In the article, I try to clarify and strengthen the defence of the displacement hypothesis.

I argue that it depends on two related theses:

The Transference Thesis: All the factors driving demand for human prostitutes can be transferred over to sex robots, i.e. the fact that there is demand for the former suggests that there will also be demand for the latter.

The Advantages Thesis: Sex robots will have advantages over human prostitutes that will make them more desirable/more readily available.

I then proceed to consider the arguments in favour of both.

The argument for the transference thesis depends on a close analysis of the factors driving demand for human prostitution. Extrapolating from several empirical studies of human demand, these factors can be reduced to four general categories: (i) people demand prostitutes because they are seeking the kind of emotional connection/attachment that is typical in romantic human sexual relationships; (ii) people demand prostitutes because they are seeking sexual variety (both in terms of partners and types of sex act); (iii) people demand prostitutes because they desire sex that is free from the complications and expectations of non-commercial sex (basically, the inverse of the first reason); and (iv) people demand prostitutes because they are unable to find sexual partners through other means.

To defend the transference thesis, one simply needs to argue that sex robots can cater to all of these demands. So you must argue that it will be possible to create sex robots that develop emotional bonds with their users (or refrain from doing so, if the user prefers complication-free sex); that it will be possible to create sex robots that cater to the need for variety; and that it will be possible to supply sex robots to those who are unable to find sexual partners by other means.

The argument for the advantages thesis depends on identifying all the ways in which sex robots could be more desirable and more readily available than human prostitutes. In the article, I list four types of advantage that sex robots could have over human sex workers. First, there are the legal advantages: prostitution is illegal in several countries whereas the production of sex robots is not (I also suggested that sex robots could cater to currently illegal forms of sexual deviance, though this is more controversial). Second, there are the ethical advantages: less need to worry about trafficking or objectification. Third, there are the health risk advantages: less risk of contracting STDs (though this depends on sanitation). And fourth, and finally, there are the advantages of production and flexibility: it might be easier to produce sex robots en masse to cater for demand, and to re-programme them to cater to new desires.

When combined, I suggest that the transference thesis and the advantages thesis present a good case for the displacement hypothesis. An argument diagram summarising what I have said and clarifying the logical connections is provided below.

2. The Case for the Resiliency Hypothesis
Although I accept that there is a reasonable case for the displacement hypothesis, one of my primary goals in the article is to suggest that there is also a case to be made for the contrasting view. Thus, I introduce something I call the “resiliency hypothesis”:

Resiliency Hypothesis: Prostitution is likely to be resilient to technological unemployment, i.e. demand for and supply of human sexual labour is likely to remain competitive in the face of sex robots.

As with the displacement hypothesis, the case for the resiliency hypothesis rests on two theses:

The Human Preference Thesis: Ceteris Paribus, if given the choice between sex with a human prostitute or a robot, many (if not most) humans will prefer sex with a human prostitute.

The Increased Supply Thesis: Technological unemployment in other industries is likely to increase the supply of human prostitutes.

In retrospect, I possibly should have called the second of these the “Increased Supply and Competitiveness Thesis”, since the claim is not just that there will be an increased supply, but that those drawn into sex work will do everything they can to remain competitive against sex robots (thereby countering some of the advantages robots have over humans). I think this is clear in how I defend the thesis in the article, just not in the name I gave it.

Anyway, I rested my defence of the human preference thesis on three arguments and bits of evidence. The first was largely an argument from philosophical intuition. I suggested that it seems plausible to suppose that we would prefer human sex partners to robotic ones. I based this on the belief that ontological history matters to us in matters both related and unrelated to sex. Thus, for example, we care about where food or fine art comes from: it’s more valuable if it has the right ontological history (not just because it looks or tastes better). We also seem to care about where our sexual partners come from: witness, for example, the reaction to transgendered persons, who are sometimes legally obliged to disclose their gender history. (I’m not saying that this reaction is a good thing, just that it is present).

It has been pointed out to me — by Michael Hauskeller — that my ontological history argument may simply beg the question. It assumes that sex robots will have an ontological history that fails to excite us as much as the ontological history of human sex workers, but that is the very issue under debate: whether we would prefer humans to robots. On reflection, Hauskeller looks to be right about this. Additional evidence is needed to show that the ontological history we desire is a human one. I would also add that if our concern with ontological history is irrational or prejudiced, it may be possible to overcome it. Thus, even if humans are preferred in the short term, they may not be in the long term.

Fortunately, there were two other arguments for the human preference thesis. One was based on some polling data suggesting that humans were not all that willing to have sex with a robot (though I did critique the poll as well). The other was based on the uncanny valley hypothesis. I reviewed some of the recent empirical literature suggesting that this is a real effect, and argued that it might not even be a valley.

The defence of the increased supply thesis rested on a simple argument (the numbering may look a bit weird here but remember that’s because everything I’ve said is going into an argument diagram at the end):

(16) An increasing number of jobs, including highly skilled jobs, are vulnerable to technological unemployment.

(17) If an increasing number of jobs are vulnerable to technological unemployment, people will be forced to seek other forms of employment (all else being equal).

(18) When making decisions about which form of employment to seek, people are likely to be attracted to forms of employment: (i) in which there is a preference for human labour over robotic labour; (ii) with low barriers to entry; and (iii) which are comparatively well-paid.

(19) Prostitution satisfies all three of these conditions (i) - (iii).

(11) Therefore, there is likely to be an increased supply of human prostitution.

I looked at each of the premises of this argument in the paper, though I focused most attention on premise (19). In support of this, I considered evidence from economic studies of prostitution. I also followed this with some argumentation on the way in which human prostitutes could address the advantages of sex robots.

That gives us the following argument diagram.

That’s it then. I hope this clarifies the case for the displacement and resiliency hypotheses. For more detail and supporting evidence please consult the original article. There is also some follow-up in the article about the implications of all this for the basic income guarantee.

Monday, April 14, 2014

This is the third part of my series on Nicholas Agar’s book Truly Human Enhancement. As mentioned previously, Agar stakes out an interesting middle ground on the topic of enhancement. He argues that modest forms of enhancement — i.e. up to or slightly beyond the current range of human norms — are prudentially wise, whereas radical forms of enhancement — i.e. well beyond the current range of human norms — are not. His main support for this is his belief that in radically enhancing ourselves we will lose certain internal goods. These are goods that are intrinsic to some of our current activities.

I’m offering my reflections on parts of the book as I read through it. I’m currently on the second half of Chapter 3. In the first half of Chapter 3, Agar argued that humans are (rightly) uninterested in the activities of the radically enhanced because they cannot veridically engage with those activities. That is to say: because they cannot accurately imagine what it is like to engage in those activities. I discussed this argument in the previous entry.

As Agar himself notes, the argument in the first half of the chapter only speaks to the internal goods of certain human activities. In other words, it argues that we should keep enhancements modest because we shouldn’t wish to lose goods that are intrinsic to our current activities. This ignores the possible external goods that could be brought about by radical enhancement. The second half of the chapter deals with these.

1. External Goods and the False Dichotomy
It would be easy for someone reading the first half of chapter 3 to come back at Agar with the following argument:

Trumping External Goods Argument: I grant that there are goods that are internal and external to our activities, and I grant that radical enhancement could cause us to lose certain internal goods. Still, we can’t dismiss the external goods that might be possible through radical enhancement. Suppose, for example, that a radically enhanced medical researcher (or team of researchers) could find a cure for cancer. Wouldn’t it be perverse to forgo this possibility for the sake of some internal goods? Don’t certain external goods (which may be made possible by radical enhancement) trump internal goods?

The proponent of this argument is presenting us with a dilemma, of sorts. He or she is saying that we can stick with the internal and external goods that are possible with current or slightly enhanced human capacities, or we can go for more and better external goods. It would seem silly to opt for the former when the possibilities are so tantalising, especially given that Agar himself acknowledges that new internal goods may be possible with radically enhanced abilities.

The problem with this argument is that it presents us with a false dilemma. We don’t have to pick and choose; we can have the best of both worlds. How so? Well, as Agar sees it, we don’t have to radically enhance our abilities in order to secure the kinds of external goods evoked by the proponent of the trumping argument. We have other kinds of technology (e.g. machines and artificial intelligences) that can help us to do this.

What’s more, as Agar goes on to suggest, these other kinds of technology are far more likely to be successful. Radical forms of enhancement need to be integrated with the human biological architecture. This is a tricky process because you have to work within the constraints posed by that architecture. For example, brain-computer interfaces and neuroprosthetics, currently in their infancy, face significant engineering challenges in trying to integrate electrodes with neurons. External devices, with some user-friendly interface, are much easier to engineer, and don’t face the same constraints.

Agar illustrates this with a thought experiment:

The Pyramid Builders: Suppose you are a Pharaoh building a pyramid. This takes a huge amount of back-breaking labour from ordinary human workers (or slaves). Clearly some investment in worker enhancement would be desirable. But there are two ways of going about it. You could either invest in human enhancement technologies, looking into drugs or other supplements to increase the strength, stamina and endurance of workers, maybe even creating robotic limbs that graft onto their current limbs. Or you could invest in other enhancing technologies such as machines to sculpt and haul the stone blocks needed for construction.

Which investment strategy do you choose?

The question is a bit of a throwaway since, obviously, Pharaohs are unlikely to have the patience for investment of either sort. Still, it seems like the second investment strategy is the wiser one. We’ve had machines to assist construction for a long time now that aren’t directly integrated with our biology. They are extremely useful, going well beyond what is possible for a human. This suggests that the second option is more likely to be successful. Agar argues that this is all down to the integration problem.

2. Gambling on radical enhancement: is it worth it?
I think it’s useful to reformulate Agar’s argument using some concepts and tools from decision theory. I say this because many of Agar’s arguments against radical enhancement seem to rely on claims about what we should be willing (or unwilling) to gamble on when it comes to enhancement. So it might be useful to have one semi-formal illustration of the decision problems underlying his arguments, which can then be adapted for subsequent examples.

We can do this for the preceding argument by starting with a decision tree. A decision tree is, as the name suggests, a tree-like diagram that represents the branching possibilities you confront every time you make a decision. The nodes in this diagram either depict decision points or points at which probabilities affect different outcomes (sometimes we think of this in terms of “Nature” making a decision by determining the probabilities, but this is just a metaphor).

Anyway, the decision tree for the preceding argument works something like this. At the first node, there is a decision point: you can opt for radical enhancement or modest (or no) enhancement. This then branches out into two possible futures. In each of those futures there is a certain probability that we will secure the kinds of external goods (like cancer cures) alluded to by the proponent of the trumping argument, and a certain (complementary) probability that we won’t. So this means that either of our initial decisions leads to two further possible outcomes. This gives us four outcomes in total:

Outcome A: We radically enhance, thereby losing our current set of internal goods, and fail to secure trumping external goods.

Outcome B: We radically enhance, thereby losing our current set of internal goods, but succeed in securing trumping external goods.

Outcome C: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, and fail to secure trumping external goods through other technologies.

Outcome D: We modestly enhance (or don’t enhance at all), keeping our current set of internal goods, but succeed in securing trumping external goods through other technologies.

This is all depicted in the diagram below.

With the diagram in place, we have a clearer handle on the decision problem confronting us. Even without knowing what the probabilities are, or without even having a good estimate for those probabilities, we begin to see where Agar is coming from. Since radical enhancement always seems to entail the loss of internal goods, modest enhancement looks like the safer bet (maybe even a dominant one). This is bolstered by Agar’s argument that we have good reason to suppose that the probability of securing the trumping external goods is greater through the use of other technologies. Hence, modest enhancement really is the better bet.
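For those who like to see this made concrete, the dominance reasoning can be sketched in a few lines of code. The utilities and probabilities below are purely illustrative assumptions of mine (they appear nowhere in Agar's book); the point is only that whenever modest enhancement keeps the internal goods and has an equal or better chance at the external goods, it comes out ahead regardless of the specific numbers.

```python
# A minimal sketch of the decision problem, with made-up numbers.
# Assumption: internal goods are worth I, trumping external goods
# are worth E, and any values with E > I > 0 tell the same story.
I, E = 1.0, 5.0

def expected_utility(keeps_internal_goods, p_external):
    """Expected utility of an option: the internal goods (if kept)
    plus the probability-weighted value of the external goods."""
    internal = I if keeps_internal_goods else 0.0
    return internal + p_external * E

# Agar's claim is that other (non-integrated) technologies make the
# external goods *more* likely, so the probability for the modest
# branch is at least as high as for the radical branch.
p_radical, p_modest = 0.5, 0.6  # assumed probabilities, for illustration

radical = expected_utility(False, p_radical)  # covers Outcomes A/B
modest = expected_utility(True, p_modest)     # covers Outcomes C/D

print(modest > radical)  # True under these assumptions
```

Note that modest enhancement wins here even if the two probabilities are equal, since it keeps the internal goods either way; that is the "dominant bet" intuition in the paragraph above.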

There are a couple of problems with this formalisation. First, the proponent of radical enhancement may argue that it doesn’t accurately capture their imagined future. To be precise, the proponent could argue that I haven’t factored in the new forms of internal good that may be made possible with radically enhanced abilities. That’s true, and that might be a relevant consideration, but bear in mind that those new internal goods are, at present, entirely unknown. Is it not better to stick with what we know?

Second, I think I’m being a little too coarse-grained in my description of the possible futures involved. I think it’s odd to suggest, as the decision tree does, that there could be a future in which we never achieve certain trumping external goods. That would suppose that there could be a future in which there is no progress on significant moral problems at our current level of technology. That seems unrealistic to me. Consequently, I think it might be better to reformulate the decision tree with a specific set of external goods in mind (e.g. a cure for cancer, an end to world hunger, reduced childhood mortality, and so on).

3. The External Mind Objection
There is another objection to Agar’s argument that is worth addressing separately. It is one that he himself engages with. It is the objection from the proponent of the external mind thesis. This thesis can be characterised in the following manner:

External Mind Thesis: Our minds are not simply confined to our skulls or bodies. Instead, they spill out into the world around us. All the external technologies and mechanisms (e.g. calculators, encyclopedias) we use to help us think and interact with the world are part of our “minds”.

The EMT has been famously defended by Andy Clark (and David Chalmers). Clark argues that the EMT implies that we are all cyborgs because of the way in which technology permeates our lives. The EMT can be seen to follow from a functionalist theory of mind.

The thing about the EMT is that it might also suggest that the distinction Agar draws between different kinds of technological enhancement is an unprincipled one. Agar wants to argue that technologies that enhance by being integrated with our biology are different from technologies that enhance by providing us with externally accessible user interfaces. An example would be the difference between a lifting machine like a forklift and a strength enhancing drug that allows us to lift heavier objects. The former is external and non-integrated; the latter is internal and integrated. The defender of the EMT argues that this is a distinction without a difference. Both kinds of technological assistance are part of us, part of how we interact with and think about the world.

Agar could respond to this by simply rejecting the EMT, but he doesn’t do this. He thinks the EMT may be a useful framework for psychological explanation. What he does deny, however, is its usefulness across all issues involving our interactions with the world. There may be some contexts in which the distinction between the mind/body and the external world count for something. For example, in the study of the spread of cancer cells, the distinction between what goes on in your body, versus what goes on in the world outside it, is important (excepting viral forms of cancer). Likewise, the distinction between what goes on in our heads and what goes on outside, might count for something. In particular, if we risk losing internal goods through integrated enhancement, why not stick with external enhancement? This doesn’t undermine Clark’s general point that we are “cyborgs”; it just says that there are different kinds of cyborg existence, some of which might be more valuable to us than others.

I don’t have any particular issue with this aspect of Agar’s argument. It seems correct to me to say that the EMT doesn’t imply that all forms of extension are equally valuable.

That brings us to the end of chapter 3. In the next set of entries, I’ll be looking at the arguments in chapter 4, which have to do with radical enhancement and personal identity.

Sunday, April 13, 2014

This is the second post in my series on Nicholas Agar's new book Truly Human Enhancement. The book offers an interesting take on the enhancement debate. It tries to carve out a middle ground between bioconservatism and transhumanism, arguing that modest enhancement (within or slightly beyond the range of human norms) is prudentially valuable, but that radical enhancement (well beyond the range of human norms) may not be.

As noted in the previous entry, the purpose of this series is to share my reflections on the book as I work my way through the chapters. Today's post is the first of two on the contents of chapter 3. To follow that chapter, you need to familiarise yourself with the conceptual framework set out in chapter 2. Fortunately, I covered that in the previous entry. I recommend reading that before proceeding with this post. I'm serious about this: if you don't know what is meant by terms like "prudential value", "intrinsic value" or "internal goods", then you will miss out on aspects of this discussion.

Anyway, assuming you are familiar with these concepts, we can proceed. Chapter 3 is entitled "What interest do we have in superhuman feats?". It is an appropriate title. The chapter itself looks at two related arguments that respond to that question. The first holds that we have little interest in superhuman feats, at least in terms of their relationship to intrinsically valuable internal goods. The second holds that we might have great interest in them, if they were the only way of bringing about certain external goods, but as it happens they aren't the only way of doing this.

I'm going to look at each of these arguments over the next two posts, starting today with the first.

1. Are we uninterested in superhuman sports and games?
To support the first argument, Agar uses some illustrations from the world of human sports and games. The illustrations supposedly demonstrate that we do as a matter of fact lack an interest in superhuman versions of these activities. This is then used as the springboard for an argument about why we lack this interest.

The first example is that of the marathon, specifically Haile Gebrselassie's victory in the Berlin marathon in 2008. Gebrselassie ran that marathon in 2hr 03mins 59secs, which then sparked a debate about whether we would soon see a sub-two-hour marathon. Agar suggests that Gebrselassie's achievement and the subsequent debate are interesting to us; that we can relate to and value these possibilities.

Contrast this with a (for now) hypothetical superhuman marathon. Agar refers to Robert Freitas's idea of the respirocyte. This is a one-micron-wide nanobot that could be used to replace human haemoglobin. This could massively increase the oxygen-carrying capacity of our blood, allowing us to run at sprint speed for 15 minutes or more. If we enhanced ourselves with respirocytes, the traditional 26.2 mile marathon would no longer be of interest. We would have to invent a new race, perhaps a 262 mile marathon, to create a challenge worthy of our abilities. Agar's suggestion is that we are less interested and less excited by this possibility.

That example might not work for you. So here's another, with a much starker contrast. Consider the game of chess. As you all know, Garry Kasparov -- probably the greatest human chess player of all time -- was defeated by the IBM computer Deep Blue in 1997. Since then, computers have been decisively better than humans at chess (though teams of computers and humans are still better than computers alone).

Nevertheless, despite the clear superiority of computers over human beings, we are not interested in or engaged by the prospect of computer-against-computer competitions (unless, perhaps, we are computer programmers). Human competitions still take place and still dominate the popular imagination. Why is this?

2. Veridical Engagement and Simulation Theory
Agar answers this question by appealing to the concept of veridical engagement. We can define this in the following manner:

Veridical Engagement: We veridically engage with an activity or state of being when we can (more or less) accurately imagine ourselves performing that activity or being in that state.

This definition is mine, not Agar's. I based it on what he wrote but there may be some differences. He speaks solely to activities since the two examples he uses (marathon running and chess) are activities, but I've broadened it out to cover states of being since they would also seem to fit with his argument, and to be relevant to the enhancement debate. I've also added the "more or less" bit before "accurately imagine". When he initially introduces the concept, Agar only refers to "accurately imagine", but later he acknowledges that this comes in degrees. So I think, for him, the imagining does not need to be perfect, just close to reality.

Why is the concept helpful? In essence, Agar argues that our lack of interest in superhuman feats can be explained by our inability to veridically engage with those feats. We have no interest in the achievements of Deep Blue because we cannot think like a computer. To think like Deep Blue would require us to compute 200,000,000 positions per second. We could at best perform a very poor facsimile of this. That's very different from how we engage with Kasparov's achievements. As Agar himself puts it:

No matter how soundly Deep Blue beats Kasparov, a human player will always play chess in ways that interests human spectators to a greater degree than Deep Blue and its successors. Human chess players of modest aptitude can read Kasparov's annotations and thereby gain insight into his stratagems. Kasparov's chess play is vastly superior to that of his fans. But he, presumably as a very young player, passed though a stage in his development [that was]...similar to that of his fans.

Agar offers us a psychological theory that accounts for our ability to veridically engage with certain activities and states of being. This is simulation theory, which argues that the way in which we understand the behaviour of other human beings is by performing a simulation of the mental processes that lie behind that behaviour. Gregory Currie has used this to explain how we engage with fiction. It also helps to explain why we resort to anthropomorphism when imagining non-human animal behaviour.

So the upshot here for Agar is that we don't care about superhuman endeavours because we can't veridically engage with them. Agar is quick to point out that this doesn't mean that superhuman feats are devoid of intrinsic value. It could be that once we become superhuman, we will find our new capacities thrilling and begin to appreciate a whole new set of goods (like how we appreciate new things when we transition from childhood to adulthood). Nevertheless, it does suggest, to him at least, that superhuman activities and states of being lack intrinsic value to us, right now, as ordinary human beings. It'd be better to stick with the intrinsic goods that currently excite our imaginations.

3. Some thoughts and criticisms
I can appreciate what Agar is trying to do in this part of chapter 3. He is trying to flesh out his anthropocentric ideal of enhancement. He is trying to explain how it could be that enhancement up to, or slightly beyond, the current range of human norms is prudentially valuable, but enhancement outside of that range is not. I do, however, have a couple of critiques and queries.

The first has to do with the nature of the argument being presented. I take it that Agar is trying to present an argument from prudential axiology. That is: from premises about what we ought to prudentially value to conclusions about how radical enhancement might negatively impact on those values. That would be consistent with his stated aims from earlier chapters. The problem is that the argument he presents doesn't seem to be like that. It seems to be a purely factual argument about what interests or excites us and why. It's an explanation of one of our psychological quirks, not a defence of a principled normative distinction. At least, it reads that way to me.

Agar could perhaps respond by suggesting that his argument is based on intuitions about particular cases. In other words, he could argue that we intuitively find superhuman feats less prudentially valuable, as is obvious from our reaction to these cases. Arguments from intuition are certainly venerable in axiological debates, but he doesn't seem to adopt this approach directly. Furthermore, if this is what he is doing, it renders the explanation in terms of veridical engagement somewhat superfluous, however interesting it may be. Or, at least, it does so provided that Agar doesn't think that the notion of veridical engagement is itself axiologically significant. Might he believe that? I'm not sure, and I'm not sure why it would be.

This brings me to another point, which has to do with making claims about our capacity to veridically engage with certain activities. This is a dangerous game since what seems experientially out of reach to some may seem less so to others. I certainly have this feeling in relation to the superhuman marathon runners that Agar imagines. I just don't see what's so difficult to imagine about their experiences. I can imagine running at sprint speed; and I can imagine running for a very long time. Why couldn't I imagine both together? Seems like it just requires adding together experiences that I'm already capable of veridically engaging with. It just requires more of the same.

Now, you may respond by saying that this is just one example: Agar's case doesn't stand or fall on this one example. And I happen to think that this is right (I certainly think Agar hits the nail on the head with respect to computer chess: I don't think we can veridically engage with that style of chess-play). My only point is that my reaction to the superhuman marathon could indicate that cases of truly radical enhancement are harder to find than we might think. For example, hyperextended lifespans might be deemed "radical" enhancements by some, but it would seem possible to veridically engage with them: they are longer versions of what we already have. Admittedly, Agar has a chapter on this later in his book where he will no doubt argue that this view of hyperextended lifespan is wrong. I haven't read that yet.

Anyway, that's what I'm thinking so far. In the next post, I'll look at the second argument from chapter three. That argument claims that not only would radical enhancement deprive us of certain intrinsic goods, it would also be unnecessary for achieving certain external goods.