Ethics in the News

Enhancement

Neurointerventions can be roughly described as treatments or procedures that act directly on the physical properties of the brain in order to affect the subject’s psychological characteristics. The ethics of using neurointerventions can be quite complicated, and much of the discussion has revolved around the use of neurointerventions to improve the moral character of the subjects. Within this debate, there is a sub-debate concerning the use of enhancement techniques on criminal offenders. For instance, some jurisdictions make use of chemical castration, intended to reduce the subjects’ level of testosterone in order to reduce the likelihood of further sexual offenses. One particularly thorny question regards the use of neurointerventions on offenders without their consent. Here, I focus on just one version of one objection to the use of non-consensual neurocorrectives (NNs).

According to one style of objection, NNs are always impermissible because they express a disrespectful message. To be clear, the style of objection I consider does not appeal to the potential consequences of expressing this message; rather, it relies on the claim that there is something intrinsic to the expression of such a message that gives us a reason (or reasons) for not performing an action that would express this message. For the use of non-consensual neurocorrectives, this reason (or set of reasons) is strong enough to make NNs impermissible. The particular version of this objection that I focus on claims that the disrespectful message is that the offender does not have a right to be listened to.

If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made for such neurotechnologies is that some can provide special access to normally hidden representations of consciousness. Through recording, processing, and operationalising brain signals, we are promised greater understanding of our own brain processes. Since every conscious process is thought to be enacted, subserved, or realised by a neural process, greater understanding of the brain promises greater understanding of our consciousness.

Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we can get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.

This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties?

Over the last 25 years there has been an explosion of psychological research investigating the influence of ‘moral identity’ on agency, with a recent meta-analysis of 111 studies concluding that people’s moral identity has as much of an effect on agency as either their moral emotions or their powers of moral reasoning (Hertz & Krettenauer, 2016). Although the mainstream view of moral psychology is that moral self-concept plays a significant role in moral agency, the practical ethical implications of this view remain underexplored. Here, I argue that one of those implications is that, in situations where we need to improve morality, such as decision-making in the boardroom, consumer behaviour, and the reform of criminal offenders, we should do so (in part) by developing people’s moral identities. Indeed, in many cases, changes to moral identity have the potential to deliver relatively large moral improvements efficiently.

The coffee you are having with your colleagues at a business meeting does more than keep you awake. Many of us know that caffeine can help with alertness and working memory – the first systematic study on caffeine and performance, sponsored by Coca-Cola, was published over 100 years ago. But did you know caffeine can also have “social” effects?

Around a decade ago, Facebook users were widely playing a game called ‘Scrabulous’ with one another. It was, in effect, Scrabble; close enough, in fact, to lead to a few legal issues.

Alongside Scrabulous, the popularity of Scrabble-assistance websites grew. Looking over the shoulders of work colleagues, you could often spy a Scrabulous window, as well as one for scrabblesolver.co.uk. The strange phenomenon of easy, online Scrabulous cheating seemed pervasive for a time.

The strangeness of this can hardly be overstated. Friends would be routinely trying to pretend to one another that they were superior wordsmiths, by each deploying algorithmic anagram solvers. The ‘players’ themselves would do nothing but input data to the automatic solvers. As Charlie Brooker reported back in 2007,

“We’d rendered ourselves obsolete. It was 100% uncensored computer-on-computer action, with two meat puppets pulling the levers, fooling no one but themselves.”

Back to the present, and online Scrabble appears to have lost its sheen (or lustre, patina, or polish). But in a possible near future, I wonder if some similar issues could arise.

The appellant in R v BM was a tattooist and body piercer who also engaged in ‘body modification’. He was charged with three offences of wounding with intent to do grievous bodily harm. These entailed: (a) Removal of an ear; (b) Removal of a nipple; and (c) Division of a tongue so that it looked reptilian. In each case the customer had consented. There was, said the appellant, no offence because of this consent.

Where an adult decides to do something that is not prohibited by the law, the law will generally not interfere.

“Every human being of adult years and sound mind has a right to determine what shall be done with his own body.”[1]

This principle has been fairly consistently recognised in English law.[2] Thus, for instance, in In re T (Adult: Refusal of Treatment), Butler-Sloss LJ cited with approval this section of the judgment of Robins JA in Malette v Shulman[3]:

‘The right to determine what shall be done with one’s own body is a fundamental right in our society. The concepts inherent in this right are the bedrock upon which the principles of self-determination and individual autonomy are based. Free individual choice in matters affecting this right should, in my opinion, be accorded very high priority.’

Novel gene editing technologies, such as CRISPR/Cas9, allow scientists to make very precise changes in the genome of human embryos. This could prevent serious genetic diseases in future children. But the use of gene editing in human embryos also raises questions: Is it safe? Should prospective parents be free to choose the genetic characteristics of their children? What if they want to use gene editing to have a deaf child, or a child with fair skin and blue eyes? Should gene editing be regulated globally, or should each country have its own legislation? In this interview with Katrien Devolder, John Harris (Professor Emeritus, University of Manchester & Visiting Professor in Bioethics, King’s College London) answers these and other questions, and defends the view that we have the strongest moral obligation to gene-edit human embryos, not only to prevent disease but also for the purpose of enhancement.

According to a story by Catherine Caruso published in STAT News this week, authorities at the International Association of Athletics Federations (IAAF) are getting set to debate whether or not women with hyperandrogenism, or higher-than-expected testosterone levels, should be restricted from competing against women with “normal” or “expected” levels. The debate over the IAAF rules began in 2011, when a rule was first created to prevent women with high testosterone levels from competing because of the belief that their hormone levels gave them an unfair advantage. The rule was challenged in 2015, and the IAAF was given two years to provide further justification for its position.

As Caruso writes, the main focus of the current controversy is the legal case of Dutee Chand, an Indian athlete whose testosterone levels exceed “the 10 nanomoles per liter limit, the level deemed to be the lower end of the ‘male range,’” i.e., the amount of testosterone in the blood typically exhibited by male athletes. Testosterone is widely considered a hormone that assists in athletic performance, given that it increases the rate of muscle development and bone mass, among other traits. The idea behind the IAAF’s position is that “unnaturally” high levels of testosterone that exceed levels typical of one’s gender would give such athletes an unfair advantage over other competitors. Insofar as the IAAF is concerned with creating the fairest competition possible, the presence of elevated testosterone levels in a select group of athletes, like Chand, presents a serious problem.

The problem with the IAAF’s position, however, is that it overlooks one of the central nuances of sporting ethics. It is true that sporting events are supposed to be fair in a wide sense: we would not consider the competition just if one athlete took some action that made it impossible for other athletes to win. This is why athletes are given certain rules to which they must conform. In basketball, for example, one is forbidden from reaching out and grabbing the opposing player’s arm to prevent them from dribbling; in hockey, players are forbidden from tripping each other; and soccer players cannot decide to randomly touch the ball with their hands (unless, of course, they are a goalie).

Four members of the Dawoodi Bohra sect of Islam living in Detroit, Michigan have recently been indicted on charges of female genital mutilation (FGM). This is the first time the US government has prosecuted an “FGM” case since a federal law was passed in 1996. The world is watching to see how the case turns out.

A lot is at stake here. Multiculturalism, religious freedom, the limits of tolerance; the scope of children’s—and minority group—rights; the credibility of scientific research; even the very concept of “harm.”

To see how these pieces fit together, I need to describe the alleged crime.

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment in order to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there would be a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, you get no feedback on your current seizure risk by the device and the decision when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and, in a sense, delegates some autonomy of decision-making to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and you retain your capacity to modify your behaviour accordingly, for example to step off a ladder or stop riding a bike when you are “in the red.”
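The contrast between the two settings can be illustrated with a toy decision loop. This is only a hypothetical sketch: the idea of a single scalar risk score, the threshold values, and the function names are all invented for illustration and are not taken from any actual device.

```python
# Toy sketch of the two device settings described above: a fully closed
# loop versus one with "traffic light" feedback to the patient.
# All thresholds and the scalar risk score are hypothetical.

def traffic_light(risk: float) -> str:
    """Map a seizure-risk probability to the feedback colour."""
    if risk < 0.3:
        return "green"   # low probability of a seizure
    if risk < 0.7:
        return "yellow"  # medium probability
    return "red"         # high probability

def device_step(risk: float, feedback_enabled: bool) -> dict:
    """One decision cycle: stimulate on high risk; optionally inform the patient."""
    action = {"stimulate": risk >= 0.7}  # the device's own closed-loop decision
    if feedback_enabled:
        action["signal"] = traffic_light(risk)  # keeps the patient in the loop
    return action
```

The ethically salient point is visible in the code: in both settings the device decides when to stimulate, and the only difference is whether the patient is shown the signal that would let them adjust their own behaviour.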