Friday, April 28, 2017

Patrick Lin started it. In an article entitled ‘The Ethics of Autonomous Cars’ (published in The Atlantic in 2013), he considered the principles that self-driving cars should follow when they encounter tricky moral dilemmas on the road. We all encounter these situations from time to time. Something unexpected happens and you have to make a split-second decision. A pedestrian steps onto the road and you don’t see him until the last minute: do you slam on the brakes or swerve to avoid him? Lin made the obvious point that no matter how safe they were, self-driving cars would encounter situations like this, and so engineers would have to design ‘crash-optimisation’ algorithms that the cars would use to make those split-second decisions.

In a later article Lin explained the problem by using a variation on the famous ‘trolley problem’ thought experiment. The classic trolley problem asks you to imagine a trolley car hurtling out of control down a railroad track. If it continues on its present course, it will collide with and kill five people. You can, however, divert it onto a sidetrack. If you do so, it will kill only one person. What should you do? Ethicists have debated the appropriate choice for the last forty years. Lin’s variation on the trolley problem worked like this:

Imagine in some distant future, your autonomous car encounters this terrible choice: it must either swerve left and strike an eight-year old girl, or swerve right and strike an 80-year old grandmother. Given the car’s velocity, either victim would surely be killed on impact. If you do not swerve, both victims will be struck and killed; so there is good reason to think that you ought to swerve one way or another. But what would be the ethically correct decision? If you were programming the self-driving car, how would you instruct it to behave if it ever encountered such a case, as rare as it may be?

There is certainly value to thinking about problems of this sort. But some people worry that, in focusing on individualised moral dilemmas such as this, the framing of the ethical challenges facing the designers of self-driving cars is misleading. There are important differences between the moral choice confronting the designer of the crash optimisation system (whether it be programmed from the top-down with clearly prescribed rules or the bottom-up using some machine-learning system) and the choices faced by drivers in particular dilemmas. Recently, some papers have been written drawing attention to these differences. One of them is Hin-Yan Liu’s ’Structural Discrimination and Autonomous Vehicles’. I just interviewed Hin-Yan for my podcast about this and other aspects of his research, but I want to take this opportunity to examine the argument in that paper in more detail.

1. The Structural Discrimination Problem
Liu’s argument is that the design of crash optimisation algorithms could lead to structural discrimination (note: to be fair to him, Lin acknowledged the potential discriminatory impact in his 2016 paper).

Structural discrimination is a form of indirect discrimination. Direct discrimination arises where some individual or organisation intentionally disadvantages someone because they belong to a particular race, ethnic group, gender, class (etc). Once upon a time there were, allegedly, signs displayed outside pubs, hotels and places of employment in the UK saying ‘No blacks, No Irish’. The authenticity of these signs is disputed, but if they really existed, they would provide a clear example of direct discrimination. Indirect discrimination is different. It arises where some policy or practice has a seemingly unobjectionable express intent or purpose but nevertheless has a discriminatory impact. For example, a hairdressing salon that had a policy requiring all staff to show off their hair to customers might have a discriminatory impact on (some) potential Muslim staff (I took this example from Citizens Advice UK).

Structural discrimination is a more generalised form of indirect discrimination whereby entire systems are set up or structured in such a way that they impose undue burdens on particular groups. How might this happen with crash optimisation algorithms? The basic argument works like this:

(1) If a particular rule or policy is determined with reference to factors that ignore potential forms of discrimination, and if that rule is followed in the majority of circumstances, it is likely to have an unintended structurally discriminatory impact.

(2) The crash optimisation algorithms followed by self-driving cars are (a) likely to be determined with reference to factors that ignore potential forms of discrimination and (b) are likely to be followed in the majority of circumstances.

(3) Therefore, crash optimisation algorithms are likely to have an unintended structurally discriminatory impact.

The first premise should be relatively uncontroversial. It is making a probabilistic claim. It is saying that if so-and-so happens it is likely to have a discriminatory impact, not that it definitely will. The intuition here is that discrimination is a subtle thing. If we don’t try to anticipate it and prevent it from happening, we are likely to do things that have unintended discriminatory effects. Go back to the example of the hairdressing salon and the rule about uncovered hair. Presumably, no one designing that rule thought they were doing anything that might be discriminatory. They just wanted their staff to show off their hair so that customers would get a good impression. They didn’t consciously factor in potential forms of bias or discrimination. This is what created the potential for discrimination.

The first part of premise one is simply saying that what is true in the case of the hair salon is likely to be true more generally. Unless we consciously direct our attention to the possibility of discriminatory impact, it will be sheer luck whether we avoid it. That might not be too problematic if the rules we designed were limited in their application. For example, if the rule about uncovered hair for staff only applied to one particular hairdressing salon, then we would have a problem, but it would fall far short of structural discrimination. There would be discrimination in the particular salon, but that discrimination would not spread across society as a whole. Muslim hairdressers would not be excluded from work at all salons. It is only when the rule is followed in the majority of cases that we get the conditions in which structural discrimination can breed.

This brings us to premise two. This is the critical one. Are there any reasons to accept it? Looking first to condition (a), there are indeed some reasons to believe that this will be the case. The reasons have to do with the ‘trolley problem’-style framing of the ethical challenges facing the designers of self-driving cars. That framing encourages us to think about the morally optimal choice in a particular case, not at a societal level. It encourages us to pick the least bad option, even if that option contravenes some widely-agreed moral principle. A consequentialist, for example, might resolve the granny vs. child dilemma in favour of the child based on the quantity of harm that will result. They might say that the child has more potentially good life years ahead of them (possibly justifying this by reference to the QALY standard) and hence it does more good to save the child (or, to put it another way, less harm to kill the granny). The problem with this reasoning is that in focusing purely on the quantity of harm we ignore factors that we ought to consider (such as the potential for ageism) if we wish to avoid a discriminatory impact. As Liu puts it:

[A]nother blind spot of trolley problem ethics…is that the calculus is conducted with seemingly featureless and identical “human units”, as the variable being emphasised is the quantity of harm rather than its character or nature.

We could try to address this problem by getting the designers of the algorithms to look more closely at the characteristics of the individuals that might be affected by the choices made by the cars, but this will then lead us to the second problem, namely the fact that whatever solution we hit upon is likely to be multiplied and shared across many self-driving cars, and that multiplication and sharing is likely to exacerbate any potentially discriminatory effect. Why is this? Well, presumably car manufacturers will standardise the optimisation algorithms they offer on their cars (not least because the software that actually drives the car is likely to be cloud-based and to adapt and learn based on the data collected from all cars). This will result in greater homogeneity in how cars respond to trolley-problem-like dilemmas, which will in turn increase any potentially discriminatory effect. For example, if an algorithm does optimise by resolving the dilemma in favour of the child, we get a situation in which all cars using that algorithm favour children over grannies, and so an extra burden is imposed on grannies across society as a whole. They face a higher risk of being killed by a self-driving car.

There are some subtleties to this argument that are worth exploring. You could reject it by arguing that there will still presumably be some diversity in how car manufacturers optimise their algorithms. So, for example, perhaps all BMWs will be consequentialist in their approach whereas all Audis will be deontological. This is likely to result in a degree of diversity, but perhaps much less diversity than we currently have. This is what I think is most interesting about Liu’s argument. In a sense, we are all running crash-optimisation algorithms in our heads right now. We use these algorithms to resolve the moral dilemmas we face while driving. But as various experiments have revealed, the algorithms humans use are plural and messy. Most people have intuitions that make them lean in favour of consequentialist solutions in some cases and deontological solutions in others. Thus the moral choices made at an individual level can shift and change across different contexts and moods. This presumably creates great diversity at a societal level. The differences across the different car manufacturers are likely to be more limited.

This is, admittedly, speculative. We don’t know whether the diversity we have right now is so great that it avoids any pronounced structural discrimination in the resolution of moral dilemmas. But this is what is interesting about Liu’s argument: It makes an assumption about the current state of affairs (namely that there is great diversity in the resolution of moral dilemmas) that might be true but is difficult to verify until we enter a new state of affairs (one in which self-driving cars dominate the roads) and see whether there is a greater discriminatory impact or not. Right now, we are at a moment of uncertainty.

Of course, there might be technical solutions to the structural discrimination problem. Perhaps, for instance, crash optimisation algorithms could be designed with some element of randomisation, i.e. they randomly flip back-and-forth between different moral rules. This might prevent structural discrimination from arising. It might seem odd to advocate moral randomisation as a solution to the problem of structural discrimination, but perhaps a degree of randomisation is one of the benefits of the world in which we currently live.
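To make the randomisation idea concrete, here is a minimal sketch of what such a policy might look like. Everything here is an illustrative assumption: the rule names, the option fields (`requires_swerve`, `expected_harm`) and the uniform sampling are mine, not Liu's or any manufacturer's.

```python
import random

# Sketch: the car does not commit to a single moral rule. At decision
# time it samples one, so no group bears a systematically higher risk
# across the whole fleet of cars running this policy.

def consequentialist_choice(options):
    # Pick the option with the lowest expected harm.
    return min(options, key=lambda o: o["expected_harm"])

def deontological_choice(options):
    # Never actively swerve into someone: prefer staying the course.
    stay = [o for o in options if not o["requires_swerve"]]
    return stay[0] if stay else random.choice(options)

def crash_decision(options, rng=random):
    # Sample a moral rule uniformly at random for this dilemma.
    rule = rng.choice([consequentialist_choice, deontological_choice])
    return rule(options)

dilemma = [
    {"name": "swerve_left", "requires_swerve": True, "expected_harm": 1.0},
    {"name": "stay_course", "requires_swerve": False, "expected_harm": 2.0},
]
print(crash_decision(dilemma)["name"])
```

Note that the two rules disagree about this dilemma (the consequentialist swerves, the deontologist stays the course), which is exactly why randomising between them re-introduces the plurality that individual human drivers currently supply.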

2. The Immunity Device Thought Experiment
There is another nice feature to Liu’s paper. After setting out the structural discrimination problem, he introduces a fascinating thought experiment. And unlike many philosophical thought experiments, this is one that might make the transition from thought to reality.

At the core of the crash optimisation dilemma is a simple question: how do we allocate risk in society? In this instance the risk of dying in a car accident. We face many similar risk allocation decisions already. Complex systems of insurance and finance are set up with the explicit goal of spreading and reallocating these risks. We often allow people to purchase additional protection from risk through increased insurance premiums, and we sometimes allocate/gift people extra protections (e.g. certain politicians or leaders). Might we end up doing the same thing when it comes to the risk of being struck by a self-driving car? Liu asks us to imagine the following:

Immunity Device Thought Experiment: ‘It would not be implausible or unreasonable for the manufacturers of autonomous vehicles to issue what I would call here an “immunity device”: the bearer of such a device would become immune to collisions with autonomous vehicles. With the ubiquity of smart personal communication devices, it would not be difficult to develop a transmitting device to this end which signals the identity of its owner. Such an amulet would protect its owner in situations where an autonomous vehicle finds itself careening towards her, and would have the effect of deflecting the car away from that individual and thereby divert the car to engage in a new trolley problem style dilemma elsewhere.’

(Liu 2016, 169)

The thought experiment raises a few important and interesting questions. First, is such a device technically feasible? Second, should we allow for the creation of such a device? And third, if we did, how should we allocate the immunity it provides?

On the first question, I agree with what Liu says. It seems like we have the underlying technological infrastructure that could facilitate the creation of such a device. It would be much like any other smart device and would simply have to be in communication with the car. There may be technical challenges but they would not be insurmountable. There is a practical problem if everybody managed to get their hands on an immunity device: that would, after all, defeat the purpose. But Liu suggests a workaround for this: have a points-based (trump card) rating system attached to the device. So people don’t get perfect immunity; they get bumped up and down a ranking order. This changes the nature of the allocation question. It’s no longer who should get such a device but, rather, how the points should be allocated.
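The points-based variant is easy to sketch. The following is purely illustrative: the registry structure, the names and the point values are all my assumptions, used only to show how relative ranking (rather than absolute immunity) would work.

```python
# Sketch of Liu's points-based variant of the immunity device: instead
# of absolute immunity, each bearer has a score, and when a car must
# choose whom to strike, the lower-ranked bearer carries the risk.

def adjust_points(registry, person, delta):
    # Points move people up and down the ranking; e.g. a criminal
    # conviction might apply a negative delta as a form of punishment.
    registry[person] = registry.get(person, 0) + delta

def risk_bearer(registry, candidates):
    # The candidate with the fewest points bears the risk; anyone not
    # in the registry counts as having zero points.
    return min(candidates, key=lambda p: registry.get(p, 0))

registry = {}
adjust_points(registry, "alice", 10)
adjust_points(registry, "bob", -5)   # e.g. punished for an offence
print(risk_bearer(registry, ["alice", "bob"]))  # -> bob
```

The sketch makes the zero-sum character of the scheme visible: raising one person's score does nothing except push someone else to the bottom of the ranking, which is the feature discussed below.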

On the second question, I have mixed views. I feel very uncomfortable with the idea, but I can’t quite pin down my concern. I can see some arguments in its favour. We do, after all, have broadly analogous systems nowadays whereby people get additional protection through systems of social insurance. Nevertheless, there are some important disanalogies between what Liu imagines and other forms of insurance. In the case of, say, health insurance, we generally allow richer people to buy additional protection in the form of higher premiums. This can have negative redistributive consequences, but the gain to the rich person does not necessarily come at the expense of the poorer person. Indeed, in a very real sense, the rich person’s higher premium might be subsidising the healthcare of the poorer person. Furthermore, the protection that the rich person buys may never be used: it’s there for peace of mind. In the case of the immunity device, it seems like the rich person buying the device (or the points) would necessarily be doing so at the expense of someone else. After all, the device provides protection in the event of a self-driving car finding itself in a dilemma. The dilemma is such that the car has to strike someone. If you are buying immunity in such a scenario it means you are necessarily paying for the car to be diverted so that it strikes someone else. This might provide the basis for an objection to the idea itself: this is something that we possibly should not allow to exist. The problem with this objection is that it effectively applies the doctrine of double effect to this scenario, which is not something I am comfortable with. Also, even if we did ban such devices, we would still have to decide how to allocate the risk burden: at some stage you would have to make a choice as to who should bear the risk burden (unless you adopt the randomisation solution).

This brings us to the last question. If we did allow such a device to be created, how would we allocate the protection it provides? The market-based solution seems undesirable, for the reasons just stated. Liu considers the possibility of allocating points as a system of social reward and punishment. So, for example, if you commit a crime you could be punished by shouldering an increased risk burden (by being pushed down the ranking system). That seems prima facie more acceptable than allocating the immunity through the market. This is for two reasons. First, we are generally comfortable with the idea of punishment (though there are those who criticise it). Second, according to most definitions, punishment involves the intentional harming of another. So the kinds of concerns I raised in the previous paragraph would not apply to allocation-via-punishment: if punishment is justified at all then it seems like it would justify the intentional imposition of a risk burden on another. That said, there are reasons to think that directly harming someone through imprisonment or a fine is more morally acceptable than increasing the likelihood of their being injured/killed in a car accident. After all, if you object to corporal or capital punishment you may have reason to object to increasing the likelihood of bodily injury or death.

Okay, that brings us to the end of this post. I want to conclude by recommending Liu's paper. We discuss the ideas in it in more detail in the podcast we recorded. It should be available in a couple of weeks. Also, I should emphasise that Liu introduces the Immunity Device as a thought experiment. He is definitely not advocating its creation. He just thinks it helps us to think through some of the tricky ethical questions raised by the introduction of self-driving cars.

Sunday, April 23, 2017

In this episode, I talk to Mark Coeckelbergh. Mark is a Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and President of the Society for Philosophy and Technology. He also has an affiliation as Professor of Technology and Social Responsibility at the Centre for Computing and Social Responsibility, De Montfort University, UK. We talk about robots and philosophy (robophilosophy), focusing on two topics in particular. First, the rise of the carebots and the mechanisation of society, and second, Hegel's master-slave dialectic and its application to our relationship with technology.

Tuesday, April 18, 2017

Polynesian sailors developed elaborate techniques for long-distance sea travel long before their European counterparts. They mapped out the elevation of the stars; they followed the paths of migrating birds; they observed sea swells and tidal patterns. The techniques were often passed down from generation to generation through the medium of song. They are still taught to this day (in some locations). In 1976, there was a famous proof of their effectiveness when Mau Piailug, a practitioner of the techniques, steered a traditional sailing canoe nearly 3,000 miles from Hawaii to Tahiti without relying on more modern methods of navigation.

These Polynesian sailing techniques provide a perfect real-world illustration of distributed cognition theory. According to this theory, cognition is not something that takes place purely in the head. When humans want to perform cognitive tasks, they don’t simply represent and manipulate the cognition-relevant information in their brains, they also co-opt features of their environment to assist them in the performance of cognitive tasks. In the case of the Polynesian sailors, it was the migrational patterns of birds, the movements of the sea and the elevation of the stars that assisted the performance. It was also the created objects and cultural products (e.g. songs) that they used to help to offload the cognitive burden and transmit the relevant knowledge down through the generations. In this manner, the performance of the cognitive task of navigation became distributed between the individual sailor and the wider environment.

Generally speaking, there are three features of the external environment that can assist in the performance of a cognitive task:

Cognitive Artifacts: Intentionally designed objects that are used in the performance of the task, e.g. a map, a calendar, an abacus, or a textbook.

Naturefacts: Natural objects, events or states of affairs that get co-opted into the performance of a cognitive task, e.g. the paths of migrating birds and the elevation of the stars.

Other Cognitive Agents: Other humans (or, possibly, robots and AI) that can perform cognitive tasks in collaboration/cooperation with one another.

I think it is important to understand how all three of these cognitive-assisters function and to appreciate some of the qualitative differences between them. One thing that distributed cognition theory enables you to do is to appreciate the complex ecology of cognition. Because cognition is spread out across the agent and its environment, the agent becomes structurally coupled to that environment. If you tamper with or alter one part of the external cognitive ecology it can have knock-on effects elsewhere within the system, changing the kinds of cognitive task that need to be performed, and altering the costs/benefits associated with different styles of cognition (I discussed this, to some extent, in a previous post). Understanding how the different cognitive assisters function provides insight into these effects.

In the remainder of this post, I want to take a first step towards understanding the complexity of our cognitive ecology by taking a look at Richard Heersmink’s proposed taxonomy of cognitive artifacts. This taxonomy gives us some insight into one of the three relevant features of our cognitive ecology (cognitive artifacts) and enables us to appreciate how this feature works and the different possible forms it can take.

The taxonomy itself is fairly simple to represent in graphical form. It divides all cognitive artifacts into two major families: (i) representational and (ii) ecological. It then breaks these major families down into a number of sub-types. These sub-types are labelled using a somewhat esoteric conceptual vocabulary. The labels make sense once you have mastered the vocabulary. The remainder of this post is dedicated to explaining how it all works.

1. Representational Cognitive Artifacts
Cognition is an informational activity. We perform cognitive tasks by acquiring, manipulating, organising and communicating information. Consequently, cognitive artifacts are able to assist in the performance of cognitive tasks precisely because they have certain informational properties. As Heersmink puts it, the functional properties of these artifacts supervene on their informational properties. One of the most obvious things a cognitive artifact can do is represent information in different forms.

’Representation’ is a somewhat subtle concept. Heersmink adopts CS Peirce’s classic analysis. This holds that representation is a triadic relation between an object, sign and interpreter. The object is the world that the sign is taken to represent, the sign is that which represents the world, and the interpreter is the one who determines the relation between the sign and the object. To use a simple example, suppose there is a portrait of you hanging on the wall. The portrait is the sign; it represents the object (in this case you); and you are the interpreter. The key thing about the sign is that it stands in for something else, namely the represented object. Signs can represent objects in different ways. Some forms of representation are straightforward: the sign simply looks like the object. Other forms of representation are more abstract.

Heersmink argues that there are three main forms of representation and, as a result, three main types of representational cognitive artifact. The first form of representation is iconic. An iconic representation is one that is isomorphic with or highly similar to the object it is representing. The classic example of an iconic cognitive artifact is a map. The map provides a scaled down picture of the world. The visual imagery on the map is supposed to stand in a direct, one-to-one relation with the features in the real world. A lake is depicted as a blue blob; a forest is depicted as a mass of small green trees; a mountain range is depicted as a series of humps, coloured in different ways to represent their different heights.

The second form of representation is indexical. An indexical representation is one that is causally related to the object it is representing. The classic example of an indexical cognitive artifact would be a thermometer. The liquid within the thermometer expands when it is heated and contracts when it is cooled. This results in a change in the temperature reading on the temperature gauge. This means there is a direct causal relationship between the information represented on the temperature gauge and the actual temperature in the real world.

The third form of representation is symbolic. A symbolic representation is one that is neither iconic nor indexical. There is no discernible relationship between the sign and the object. The form that the sign takes is arbitrary and people simply agree (by social convention) that it represents a particular object or set of objects. Written language is the classic example of a symbolic cognitive artifact. The shapes of letters and the order in which they are presented bears no direct causal or isomorphic relationship to the objects they describe or name (pictographic or ideographic languages are different). The word ‘cat’, for example, bears no physical similarity to an actual cat. There is nothing about those letters that would tell you that they represented a cat. You simply have to learn the conventions to understand the representations.

The different forms of representation may be combined in any one cognitive artifact. For example, although maps are primarily iconic in nature, they often include symbolic elements such as place-names or numbers representing elevation or distance.

2. Ecological Cognitive Artifacts

The other family of cognitive artifacts are ecological in nature. This is a more difficult concept to explain. The gist of the idea is that some artifacts don’t merely provide representations of cognition-relevant information; rather, they provide actual forums in which information can be stored and manipulated. The favourite example of this — one originally posed by the distributed cognition pioneer David Kirsh — is the game of Tetris. For those who are not familiar, Tetris is a game in which you must manipulate differently shaped ‘bricks’ (technically known as ‘zoids’) into sockets or slots at the bottom of the game screen so that they form a continuous line of zoids. Although you could, in theory, play the game by mentally rotating the zoids in your head, and then deciding how to move them on the game screen, this is not the most effective way to play the game. The most effective way to play the game is simply to rotate the shapes on the screen and see how they will best fit into the wall forming at the bottom of the screen. In this way, the game creates an environment in which the cognition-relevant manipulation of information is performed directly. The artifact is thus its own cognitive ecology.

Heersmink argues that there are two main types of ecological cognitive artifact. The first is the spatial ecological artifact. This is any artifact that stores information in its spatial structure. The idea behind it is that we encode cognition-relevant information into our social spaces, thereby obviating the need to store that information in our heads. A simple example would be the way in which we organise clothes into piles in order to keep track of which clothes have been washed, which need to be washed, which have been dried, and which need to be ironed. The piles, and their distribution across physical space, store the cognition-relevant information. Heersmink points out that the spaces in which we encode information need not be physical/real-world spaces. They can also be virtual, e.g. the virtual ‘desktop’ on your computer or phonescreen.

The other kind of ecological cognitive artifact is the structural artifact. I don’t know if this is the best name for it, but the idea is that some artifacts don’t simply encode information into physical or virtual space; they also provide forums in which that information can be manipulated, reorganised and computed. The Tetris gamescreen is an example: it provides a virtual space in which zoids can be rearranged and rotated. Another example would be scrabble tiles: constantly reorganising the tiles into different pairs or triplets makes it easier to spot words. The humble pen and paper can also, arguably, be used to create structures in which information can be manipulated and reorganised (e.g. writing out the available letters and spaces when trying to solve a crossword clue).
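Since Heersmink's taxonomy is, as noted above, simple to represent graphically, it is also simple to represent as a data structure. The sketch below encodes the two families and their sub-types using only the examples from the text; the structure and the `classify` helper are my own illustrative framing, not Heersmink's.

```python
# Heersmink's taxonomy of cognitive artifacts as a nested lookup
# structure, populated with the examples discussed in the text.
TAXONOMY = {
    "representational": {
        "iconic":    ["map"],
        "indexical": ["thermometer"],
        "symbolic":  ["written language"],
    },
    "ecological": {
        "spatial":    ["piles of laundry", "computer desktop"],
        "structural": ["Tetris gamescreen", "scrabble tiles", "pen and paper"],
    },
}

def classify(artifact):
    # Return every (family, subtype) pair the artifact falls under.
    # The categories are not mutually exclusive, so an artifact that
    # combines forms of representation may appear more than once.
    return [
        (family, subtype)
        for family, subtypes in TAXONOMY.items()
        for subtype, examples in subtypes.items()
        if artifact in examples
    ]

print(classify("map"))  # -> [('representational', 'iconic')]
```

A map with symbolic place-names would simply be listed under both `iconic` and `symbolic`, which reflects the point made below that the categories are not mutually exclusive.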

3. Conclusion
This then is Heersmink’s taxonomy of cognitive artifacts. One thing that is noticeable about it (and this is a feature, not a bug) is that it focuses on the properties of the artifacts themselves, not on human uses. It is, thus, an artifact-centred taxonomy, not an anthropomorphic one. Also, the taxonomy does not divide the world of cognitive artifacts into a set of jointly exhaustive and mutually exclusive categories. As is clear from the descriptions, particular artifacts can sit within several of the categories at one time.

Nevertheless, I think the taxonomy is a useful one. It sheds light on the different ways in which artifacts can figure in our cognitive tasks, it makes us more sensitive to the rich panoply of cognitive artifacts we encounter in our everyday lives, and it can shed light on the propensity of these artifacts to enhance our cognitive performance. For example, symbolic cognitive artifacts clearly have a higher cognitive burden associated with them. The user must learn the conventions that determine the meaning of the representations before they can effectively use the artifact. At the same time, the symbolic representations probably allow for more complex and abstract cognitive operations to be performed. If we relied purely on iconic forms of representation we would probably never have generated the rich set of concepts and theories that litter our cognitive landscapes.

Saturday, April 15, 2017

The lecture is much maligned. An ancient art form, practiced for centuries by university lecturers, writers and public figures, it is now widely regarded as an inferior mode of education. Lectures are one-sided information dumps. They are more about the ego of the lecturer than the experience of the audience. They are often dull, boring, lacking in dynamism. They need to be replaced by ‘flipped’ classrooms, small-group activities, and student-led peer instruction.

And yet lectures are persistent. In an era of mass higher education, there is little other choice. An academic with teaching duties simply must learn to lecture to large groups of (apathetic) students. The much-celebrated paradigm of the Oxbridge-style tutorial, whatever its virtues may be, is simply too costly to realise on a mass scale. So how can we do a better job lecturing? How can we turn the lecture into a useful educational tool?

I claim no special insight. I have been lecturing for years and I’m not sure I am any good at it. There are times when I think it goes well. I feel as if I got across the point I wanted to get across. I feel as if the students understood and engaged with what I was trying to say. Many times the evaluations I receive from them are encouraging. But these evaluations are global, not local, in nature: they assess the course as a whole, not particular lectures. Furthermore, I’m not sure that one-time, snapshot evaluations of this nature are all that useful. Not only is there a significant non-response rate, there is also the fact that the value of a particular lecture may take time to materialise. When I think back to my own college days, I remember few, if any, of the lectures I attended. It’s the odd one or two that have stuck in my mind and proven useful. It would have been impossible for me to know this at the time.

So the sad reality is that most of the time we lecture in the dark. We try our best (or not) and never know for sure whether we are doing an effective job. The only measures we have are transient and immediate: how did I (qua lecturer) act in the moment? Was I fluent in my exposition? Did the class engage with what I was saying? Did they ask questions? Was their curiosity piqued? Did any of the students come up to me afterwards to ask more questions about the topic? Did I create a positive atmosphere in the class?

Despite this somewhat pessimistic perspective, I think there are things that a lecturer can do to improve the lecturing experience, both for themselves and for their students. To this end, I created a poster with four main tips on how to lecture more effectively. I created this some time ago, after reading James Lang’s useful book On Course: A Week-by-Week Guide to Your First Semester of College Teaching, and by reflecting on my own classroom experiences. You can view the poster below; I elaborate on its contents in what follows.

1. Cultivate the Right Attitude
The first thing to do in order to improve the lecturing experience is simply to improve one’s own attitude towards it. If you read books on pedagogy or attend classes on teaching in higher education, you’ll come across a lot of anti-lecture writings. And if you do enough lectures yourself, you can end up feeling pretty jaded and cynical. The main critique of the lecture as a pedagogical tool is that it is antiquated. It may have had value at a time when students didn’t have easy access to the information being presented by the lecturer, but in today’s information rich society it makes no sense. Students can acquire all the information that is presented to them in the lecture through their own efforts — all the more so if you are providing them with class notes and lecture slides. So why bother?

The answer is that the lecture is still valuable and it’s important to appreciate its value before you start lecturing. For starters, I would argue that in today’s information-rich society, the lecture possibly has more value than ever before. The lecture is not just an information-dump; it is a lived experience. Just because students have easy access to the information contained within your lecture doesn’t mean they will actually access it. Most probably won’t, not unless they are cramming for their final exams. Not only is today’s society information-rich; it is also distraction-rich. When students leave the classroom they will have to exert exceptional willpower in order to avoid those distractions and engage with the relevant information. Thus, there is some value to the lecture as a ‘special’ lived experience in which students are forced to confront the information and ideas relevant to their educational programme. They can, of course, supplement this with their own reading and learning, but students who don’t avail of the ‘special time’ of the lecture face an additional hurdle.

On top of this, there are things that a lecture can do that cannot be easily replicated by textbooks and lecture notes and the like. First, a lecture can effectively summarise the most up-to-date research and synthesise complex bodies of information. This is particularly true if you are lecturing on your research interests and you keep abreast of the latest research in a way that textbooks and other materials do not. Lectures can also translate complex ideas for particular audiences. If you are lecturing to a group (in person) you can get a good sense of whether they ‘grok’ the material being presented by constantly checking in. This allows you to adjust the pace of presentation or the style of explanation to a manner that best suits the group. Another value of lectures is that they allow the lecturer to present themselves as an intellectual model to their students — to inspire them to engage with the world of ideas.

Finally, if all else fails, lectures have value for the lecturer because they learn more about their field of study through the process of preparing for lectures. It is an oft-repeated truism that you don’t really know something until you have to explain it to someone else. Lectures give you the opportunity to do that several times a week.

2. Organise the Material
The second thing to do is to organise the material effectively. It’s an obvious point, but if the lecture consists largely in you presenting information to students, it is important that the information is presented in some comprehensible and compelling format. There are many ways to do this effectively, but three general principles are worth keeping in mind:

(i) Less is more: Lecturers have a tendency to overstuff their lectures with material, often because they have done a lot of reading on the topic and don’t want it to go to waste. What seems manageable to the lecturer is often too much for the students. I tend to think 3-5 main ideas per fifty-minute lecture is a good target.

(ii) Coherency: The lecture should have some coherent structure. It should not be just one idea after another. Organising the lecture around one key argument, story, or research study is often an effective way to achieve coherency. I lecture in law or legal theory so I tend to organise lectures around legal rules and the exceptions to them, or policy arguments and the critiques of them. I’m not sure this is always effective. I think it might be better to organise lectures around stories. Fortunately, law is an abundant source of stories: every case that comes before the court is a story about someone’s life and how it was affected by a legal rule. I’m starting to experiment with structuring my lectures around the more compelling of these stories.

(iii) Variation: It’s always worth remembering that attention spans are short, so you should build some variation into the lecture. Occasionally pausing for question breaks or group activities is a good way to break up the monotony.

3. Manage the Performance
The third thing to do is to manage the physical performance of lecturing. This might be the most difficult part of lecturing when you are starting out. I know when I first started lecturing I never thought of it as a performance art. But over time I have come to learn that it is. Being an effective lecturer is just as much about mastering the physical space of the lecture theatre as it is about knowing the material. I tended to focus on the latter when I was a beginner; now I tend to focus more on the former.

The general things to keep in mind here are (i) your lecturing persona and (ii) the way in which you land your energy within the classroom.

When you are lecturing you are, to at least some extent, playing a character. Who you are in the lecture theatre is different from who you are in the rest of your life. I know some lecturers craft an intimidating persona, eager to impress their students with their learning and dismissive of what they perceive to be silly questions. Such personas tend to stem from insecurity. At the same time, I know other lecturers who try to be incredibly friendly and open in their classroom personas, while oftentimes being more insular and closed in the rest of their worklife. I try to land somewhere in between these extremes with my lecturing persona. I don’t like being overly friendly, but I don’t like being intimidating either.

’Landing your energy’ refers to the way in which you direct your attention and gaze within the classroom. I remember one lecturer I had who used to land his energy on a clock at the back of the lecture theatre. At the start of every lecture he would open up his powerpoint presentation, gaze at the clock on the back wall of the lecture theatre, tilt his head to one side, and then start talking. Never once did he look at the expressions on his students’ faces. Suffice it to say, this was not a very effective way to manage the physical space within the classroom. It wasn’t engaging. It didn’t make students feel like they were important to the performance.

A good resource for managing the physical aspects of lecturing is this video from the Derek Bok Center on ‘The Act of Teaching’.

4. Engage the Students
The final thing to do is to make sure that lectures are not purely one-way. This is the biggest criticism of lectures and it can be avoided by building in opportunities for genuine student engagement during the 50 or so minutes you have in the typical lecture. There are some standard methods for doing this. The most obvious is to encourage students to take notes. This might seem incredibly old-fashioned, but I always emphasise it to students in my courses. The note-taking process forces students to cognitively engage with what is being said and to translate it into a language that makes sense to them. To some extent, it doesn’t even matter if the students use the notes for revision purposes.

Other things you can do include: building discussion moments into the class, when you pause to ask questions, get students to think about them, and then ask follow-up questions; using in-class demonstrations of key ideas and concepts; and using the peer-instruction model (pioneered by Eric Mazur), where you pose conceptual tests during the lecture and get students to answer in peer groups. Of these, my favourites are the first two. I like to pause during lectures to get students to think about some question for a minute; get them to discuss it with the person sitting next to them for another minute; and then to develop this into a classroom discussion. I find this to be the most effective technique for stimulating classroom discussion — much more so than simply posing a question to the group as a whole. Demonstrations can also work well, but only for particular subjects or ideas. I use game theory in some of my classes and I find that demonstrating how certain legal, political and commercial ‘games’ work, using volunteers from the class, is an effective way to facilitate student engagement.

Monday, April 10, 2017

The most widely discussed argument against abortion focuses on the right to life. It starts from something like the following premise:

(1) If an entity X has a right to life, it is impermissible to terminate X’s existence.

This premise seems plausible but needs to be modified. It does not deal with the clash of rights. There are certain cases in which rights conflict and need to be balanced and traded off against each other. The most obvious case is the one in which one person’s right to life conflicts with another person’s right to life. In those cases (typically referred to as ‘self defence’ cases) it may be permissible for one individual to terminate another individual’s existence. Abortion may occasionally be permitted on these grounds. For example, the foetus may pose a genuine threat to the life of the mother and so her right to life might be taken to trump the foetus’s right to life (assuming, for the sake of argument, that it has such a right).

The more difficult case is where the foetus poses no threat to the life of the mother. The question then becomes whether the mother’s right to control what happens to her body trumps the foetus’ right to life. Judith Jarvis Thomson’s famous article ‘A Defense of Abortion’ tries to argue the affirmative answer to this question. It does so through a series of fanciful and ingenious thought experiments. The most widely-discussed of those thought experiments is the violinist thought experiment, which supposedly shows that the right to control one’s body trumps the right to life in cases of pregnancy resulting from rape. I presented a lengthy analysis of that thought experiment in a recent post.

Less widely-discussed is Thomson’s ‘People Seeds’ thought experiment, and it’s that thought experiment that I wish to discuss over the remainder of this post. I do so with some help from John Martin Fischer’s article ‘Abortion and Ownership’, as well as William Simkulet’s article ‘Abortion, Property and Liberty’.

1. People Seeds and Contraceptive Failure
Here is Thomson’s original presentation of the ‘People Seeds’ thought experiment.

[S]uppose it were like this: people-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets or upholstery. You don’t want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective; and a seed drifts in and takes root.

Now ask yourself two questions about this thought experiment: (1) Do you have a right to remove the seed if it takes root? and (2) What is this scenario like?

In answer to the first question, Thomson suggests that the answer is ‘yes’. You have no duty to allow the people-seed to gestate on the floor of your house just because one happened to get through your mesh screens. Your voluntary opening of the windows does not give the people-seeds an insurmountable right. In answer to the second question, the scenario is supposed to be like the case of pregnancy resulting from contraceptive failure. Arguing by analogy, Thomson’s claim is that the moral principle governing the ‘People-Seed’ case carries over to the case of pregnancy resulting from contraceptive failure. So just as the right to control what happens to one’s property trumps the people-seed’s right to life in the former, so too does the right to control what happens to one’s body trump the foetus’ right to life (assuming it has one) in the latter. I have tried to illustrate this reasoning in the diagram below.

This argument is significant, if it is right. Thomson’s violinist thought experiment could only establish the permissibility of abortion in cases of involuntary pregnancy (i.e. pregnancy resulting from rape). The ‘People-seeds’ thought experiment goes further and purports to establish the permissibility of abortion in cases of voluntary sexual intercourse involving contraceptive failure. Is the argument right?

2. Counter-Analogies to People-Seeds
I’m going to look at John Martin Fischer’s analysis of the ‘People-Seeds’ thought experiment. I’ll start with an important preliminary point. Whenever we develop and evaluate a thought experiment, we have to be careful to ensure that our intuitions about what is happening in the thought experiment are not being contaminated or affected by irrelevant variables.

Thomson’s stated goal in her article is to consider the permissibility of abortion if we take for granted that the foetus has a right to life. Obviously, this is a controversial assumption. Many people argue that the foetus does not have a right to life because the foetus is not a person (or other entity capable of having a right to life). Thomson is trying to set that controversy to the side. She is willing to accept that the foetus really does have a right to life. Consequently, it is important for her project that she uses thought experiments involving entities that clearly do have a right to life. The violinist thought experiment clearly succeeds in this regard. It involves a fully competent adult human being — an entity that uncontroversially has a right to life. It’s less clear whether the people-seeds thought experiment shares this quality. It could be that when people are imagining the scenario they don’t think of the people-seeds as entities possessing a right to life (perhaps they think of them as the equivalent to sperm cells getting lodged in your carpet - they will take a bit of time to become people). Consequently, their conclusion that there is nothing wrong with removing the people-seeds from the carpet might not be driven by intuitions regarding the trade off between the right to life and the right to control one’s property but rather by intuitions about the right to control one’s property simpliciter.

Fischer thinks there is some evidence for this interpretation of the thought experiment. If you run an alternative, but quite similar, thought experiment involving an entity that clearly does possess a right to life, the conclusion Thomson wishes to draw is much less compelling. Here’s one such thought experiment coming from the philosopher Kelly Sorensen:

Imagine you live in a high-rise apartment. The room is stuffy, and so you open a window to air it out. You don’t want anyone coming in…so you fix up your windows with metal bars, the very best you can buy. As can happen, though, the bars and/or their installation are defective, and the Spiderman actor [who is filming in the local area]…falls in, breaks his back in a special way, and cannot be moved, without ending his life, for nine months. Are you morally required to let him stay?

The suggestion from Fischer is that you might be under such an obligation. But if this is right, then it possibly provides a better analogy with the case of pregnancy resulting from contraceptive failure and a reason to think that the right to control one’s body does not trump the right to life.

Another point that Fischer makes is that your role in causing the entity in question to become dependent on you (your body or your property) might make a relevant difference to our moral beliefs. Thus, the fact that Thomson’s thought experiment asks us to suppose that the people-seeds are just out there already, floating around on the breeze, waiting to take up residency on somebody’s carpet, might be affecting our judgment. In this world, you are constantly in a defensive posture, trying to block the invasion of the people-seeds. If we changed the scenario so that you actually play some positive causal role in drawing them into your house/apartment we might reach a different conclusion. So here’s a slight variation on Thomson’s thought experiment:

Suppose that you can get some fresh air by simply opening the window (with the fine mesh screen), but still, you would get so much more if you were to use your fan, suitably placed and positioned so that it is sucking air from outside into the room. The only problem is that this sucks people-seeds into the room along with the fresh air.

The suggestion is that this is much closer to the case of pregnancy resulting from contraceptive failure. After all, voluntarily engaging in sexual intercourse (even with contraception) involves playing a positive causal role in drawing into your body the sperm cells that make pregnancy possible.

In sum, then, we have two counter-analogies to Thomson’s ‘People-Seeds’ thought experiment. The suggestion is that both of these thought experiments are closer to pregnancy resulting from contraceptive failure and so the moral principle that applies in both should carry over to that case. The right to control one’s body does not trump the right to life.

3. Analysis of the Counter-Analogies
There are two problems with these counter-analogies. The first is simply that they do not compare like with like. This is a problem with all thought experiments that are intended to provide analogies with pregnancy, including Thomson’s. Pregnancy is, arguably, a sui generis phenomenon: there are no good analogies with it, period. Consequently, it is very difficult to build a moral argument for (or against) abortion by simply constructing elaborate and highly artificial thought experiments that pump our intuitions about the right to life in various ways. Furthermore, even if you hold out some hope for the analogical strategy, there is something pretty obviously disanalogous about these scenarios: all of the thought experiments involve interference with the right to property, not with the right to control over one’s body. Perhaps one has a property right over one’s body. Even still, the degree of invasiveness and dependency involved in pregnancy is quite unlike someone taking up residency on your carpet.

Another problem with the thought experiments is the normative principles underlying them. The whole discussion about pregnancy and contraceptive failure is motivated by the belief that consent matters when it comes to determining the rights claims that others have over us. Pregnancy from rape is distinctive because it involves a lack of consent. One person impregnates another against their will. It seems intuitively plausible (irrespective of the ranking one has of different rights) to assume that duties cannot be easily imposed on someone without their consent. Pregnancy from contraceptive failure is different because (a) everyone knows that pregnancy is a possible (if not probable) result of sexual intercourse even when it takes place with contraceptive protection and (b) by consenting to the sexual intercourse it seems like you must be willing to run the risk of this possible result. Consequently, it doesn’t seem quite so far-fetched to suppose that you might be voluntarily incurring some duties by engaging in the activity.

This line of reasoning, as William Simkulet sees it, is motivated by the following consent principle:

Consent principle: When an agent A freely engages in action X, A consents to all possible foreseeable consequences of X.

At first glance, this seems like a plausible principle and if it is correct it would seem to imply that A incurs certain obligations or duties with respect to X. But according to Simkulet (and Thomson) this consent principle cannot possibly be correct because it entails absurd consequences. It entails that women are ‘on the hook’ (so to speak) for all the possible pregnancies that might befall them (irrespective of whether they consented to the sexual activity that led to the pregnancy) because rape is a possible foreseeable consequence of being alive and walking about in the world, and hence women who refuse to get hysterectomies must have consented to the possibility of pregnancy resulting from rape. Thomson put it like this in her original article:

…by the same token anyone can avoid a pregnancy due to rape by having a hysterectomy, or anyway by never leaving home without a (reliable!) army.

The circumstances that we face are, largely, outside of our control. But whether we have invasive surgery to remove our reproductive organs is, largely, within our control. It is uncontroversially true that any of us might be raped at some point in the future. Therefore, according to this argument, women who realize that rape is possible but who do not have a hysterectomy have consented to becoming pregnant from sexual assault.

Simkulet also suggests, along similar lines, that the consent principle, if true, would entail that we all consent to all the possible foreseeable misfortunes that befall us because we could have avoided them by committing suicide. It is, of course, absurd to assume that if we wish to avoid responsibility for what happens to us we must get hysterectomies or commit suicide, hence the consent principle must be wrong.

I’m not sure what to make of this. I agree with Simkulet and Thomson that the strong version of the consent principle — the one that holds that we are on the hook for all possible foreseeable consequences of what we do — must be wrong. But obviously some version of the consent principle must be correct (perhaps one that focuses on results that are reasonably foreseeable or probable). After all, it is essential to our systems of contract law and legal responsibility that we incur duties through our voluntary activity.

If this is correct, then maybe Thomson’s thought experiments succeed in showing that the right to control one’s body trumps the right to life of the foetus (assuming it has one) in cases of pregnancy resulting from contraceptive failure, but they do nothing to show whether the same result holds in cases of unprotected consensual sexual intercourse. Those cases might be covered by a suitably modified version of the consent principle. If we want to argue for a pro-choice stance in relation to those cases, we may need to focus once more on the question of who or what bears a right to life.

Abstract: Rape and sexual assault are major problems. In the majority of rape and sexual assault cases, consent is the central issue. Consent is, to borrow a phrase, the ‘moral magic’ that converts an impermissible act into a permissible one. In recent years, a handful of companies have tried to launch ‘consent apps’ which aim to educate young people about the nature of sexual consent and allow them to record signals of consent for future verification. Although ostensibly aimed at addressing the problems of rape and sexual assault on university campuses, these apps have attracted a number of critics. In this paper, I subject the phenomenon of consent apps to philosophical scrutiny. I argue that the consent apps that have been launched to date are unhelpful because they fail to address the landscape of ethical and epistemic problems that would arise in the typical rape or sexual assault case: they produce distorted and decontextualised records of consent which may in turn exacerbate the other problems associated with rape and sexual assault. Furthermore, because of the tradeoffs involved, it is unlikely that app-based technologies could ever be created that would significantly address the problems of rape and sexual assault.