Ethics in the News

Brian D. Earp

Brian D. Earp is a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. He holds degrees from Yale, Oxford, and Cambridge Universities, and studies issues in psychology, philosophy, biomedicine, and ethics.

For a small country, Iceland has had a big impact on global media coverage recently, following its proposed ban on male circumcision before an age of consent.

Iceland’s proposed legislation seeks to criminalise circumcision on male minors that is unnecessary “for health reasons,” stating individuals who remove “part or all of the sexual organs shall be imprisoned for up to 6 years.”

The bill claims circumcision violates children’s rights to “express their views on the issues [concerning them]” and “protection against traditions that are harmful.”

According to bill spokesperson Silja Dögg Gunnarsdóttir, a key reason for the bill is that all forms of female genital cutting (FGC), no matter how minor, have been illegal in Iceland since 2005, but no similar legislation exists for males.

“If we have laws banning circumcision for girls,” she said in an interview, then for consistency “we should do so for boys.” Consequently, the bill is not specific to male circumcision, but adapts the existing law banning FGC, changing “girls” to “children.”

There is much to unpack here. We first discuss self-determination and informed consent, before addressing claims about potential health benefits and harms. We then explore the religious significance of circumcision for some groups, and ask what implications this should have.

“The urge to censor is greatest where debate is most disquieting and orthodoxy most entrenched…”

–Chief Judge Alex Kozinski

In September of last year, conservative speaker Ben Shapiro spoke at the UC Berkeley campus for approximately 90 minutes. The cost of security for the physical protection of Mr. Shapiro, an American citizen and Harvard-educated lawyer, came to approximately $600,000. Prior to Mr. Shapiro’s visit to the Berkeley campus, right-wing British speaker and internet provocateur Milo Yiannopoulos was prevented from speaking on campus due to security worries caused by approximately 150 masked agitators, who committed various acts of vandalism, arson, and violence, injuring several innocent Berkeley students and local citizens and causing around $100,000 in property damage.

In May of last year, at Evergreen State College in Olympia, Washington, biology professor Bret Weinstein refused to participate in the “Day of Absence,” in which “white students, staff, and faculty” were “invited to leave the campus for the day’s activities.” Weinstein’s refusal resulted in his being surrounded by an angry mob of 50 students who called him a racist and insisted that he resign. He was later advised by campus police to remain off campus indefinitely for his own physical safety. Weinstein and his wife, also a professor, did so, and eventually resigned from their positions at Evergreen in September. In March of 2017, at Middlebury College, demonstrators physically attacked libertarian author Charles Murray and his liberal host, professor Allison Stanger, pulling her hair and giving her whiplash that sent her to the ER.

Berkeley. Evergreen. Middlebury. Mizzou. Yale. Brown. McMaster. Wilfrid Laurier. The list goes on. One must wonder where this trend will ultimately take us. There have been several justifications given for this increasing rash of no-platforming, shaming, and, at times, physical violence on North American campuses. In essence, these justifications can be distilled into a triad of well-meaning but ultimately flawed theses, namely: (1) that all discourse is about power, and that any speech that renders a listener physiologically uncomfortable therefore rises to the level of a physical attack upon that individual, thereby justifying actual physical violence in response; (2) that for the sake of historically marginalized voices, persons who are members of historically privileged groups should forfeit their right to free speech, or ought to remain silent; and (3) that certain assertions, even if possibly true, are nonetheless morally impermissible to make, since making them will likely create conditions whereby bad-intentioned persons will inevitably and successfully advance their morally heinous projects.

This first thesis—that all discourse is fundamentally about power—finds its philosophical origins in the likes of post-modernists such as Jacques Derrida and Michel Foucault. To quote Foucault, “Discourses are tactical elements or blocks operating in the field of force relations.” Thus, on Foucault’s view, if all discourse is, at heart, really just veiled force relations between competing groups—if language isn’t fundamentally capable of being about objective truth or about the world in any meaningful sense—then the ink symbols written on the page and the shaped air emitted from one’s mouth in the forms of ‘rationality’, ‘facts’, ‘knowledge’, and ‘truth’ are just another set of weapons in a person’s overall arsenal to seize and maintain power, no different in kind from weapons of a physical sort. To speak, then, on Foucault’s view, is to wield a weapon, albeit a subtler and more refined one. The uncomfortable physiological feeling of hearing offensive speech, it would then seem, vindicates this view that one is being attacked. One might thus conclude, “Why not attack back with heavier, more effective, and more expedient weapons?”

This provoked a complacent smile followed by a quick look around to ensure that nobody else had seen this result on my monitor. After all, outright utilitarians still risk being thought of as profoundly disturbed, or at least deeply misguided. It’s easy to see why: according to my answers, there are at least some (highly unusual) circumstances where I would support the torture of an innocent person or the mass deployment of political oppression.

Choosing the most utilitarian responses to these scenarios involves great discomfort. It is like being placed on a debating team and asked to defend a position you abhor. The idea of actually torturing individuals or oppressing dissent evokes a sense of disgust in me—and yet the scenarios in these dilemmas compel me to say not only that such acts are permissible, but that they are obligatory. Biting bullets is almost always uncomfortable, which goes a long way toward explaining utilitarianism’s lack of popularity. But this discomfort largely melts away once we recognize three caveats relevant to the Oxford Utilitarianism Scale and to moral dilemmas more generally.

The first of these relates to the somewhat misleading nature of these dilemmas. They are set up to appear as though you are being asked to imagine just one thing, like torturing someone to prevent a bomb going off, or killing a healthy patient to save five others. In reality, they are asking two things of you: imagining the scenario at hand, and imagining yourself to be a fundamentally different being—specifically, a being that is able to know with certainty the consequences of its actions.

* Please note that this article is being cross-posted from the Journal of Medical Ethics Blog

Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.

This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.

Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”

Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?

Four members of the Dawoodi Bohra sect of Islam living in Detroit, Michigan have recently been indicted on charges of female genital mutilation (FGM). This is the first time the US government has prosecuted an “FGM” case since a federal law was passed in 1996. The world is watching to see how the case turns out.

A lot is at stake here: multiculturalism, religious freedom, and the limits of tolerance; the scope of children’s—and minority-group—rights; the credibility of scientific research; even the very concept of “harm.”

To see how these pieces fit together, I need to describe the alleged crime.

Readers of the Practical Ethics Blog might be interested in this series of short videos in which I discuss some of the major ongoing problems with research ethics and publication integrity in science and medicine. How much of the published literature is trustworthy? Why is peer review such a poor quality control mechanism? How can we judge whether someone is really an expert in a scientific area? What happens when empirical research gets polarized? Most of these are short – just a few minutes. Links below:

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment, in order to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have reduced the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, the device gives you no feedback on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and—in a sense—delegates some decision-making autonomy to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In the case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and you retain your capacity to modify your behavior accordingly, for example to step off a ladder or stop riding a bike when you are “in the red.”
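The decision logic of the two settings described above can be sketched in a few lines of code. This is a purely illustrative model, not the software of any real implanted device: the function names, the traffic-light cutoffs, and the stimulation threshold are all hypothetical assumptions chosen only to make the contrast between the closed-loop-only mode and the advisory mode concrete.

```python
def risk_to_light(seizure_probability):
    """Map an estimated seizure probability to a traffic-light level.

    The 0.2 and 0.6 cutoffs are illustrative assumptions, not values
    from any actual clinical system.
    """
    if seizure_probability < 0.2:
        return "green"   # low risk
    elif seizure_probability < 0.6:
        return "yellow"  # medium risk
    return "red"         # high risk


def device_step(seizure_probability, advisory_mode, stim_threshold=0.8):
    """One decision cycle of the hypothetical implant.

    Returns (stimulate, feedback): whether to deliver a small electric
    shock, and what (if anything) to display to the patient.
    """
    # The device itself decides when to stimulate in both settings.
    stimulate = seizure_probability >= stim_threshold

    if advisory_mode:
        # Second setting: patient stays in the loop via the traffic
        # light, plus an alarm tone when a seizure is evolving.
        light = risk_to_light(seizure_probability)
        alarm = stimulate
        return stimulate, (light, alarm)

    # First setting: closed loop only — the device acts, but the
    # patient receives no feedback at all.
    return stimulate, None
```

The ethically salient difference lies entirely in the second return value: in both modes the machine takes the stimulation decision, but only the advisory mode gives the patient the information needed to adjust their own behaviour.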

Alice Dreger, the historian of science, sex researcher, activist, and author of a much-discussed book of last year, has recently called attention to the loss of ambivalence as an acceptable attitude in contemporary politics and beyond. “Once upon a time,” she writes, “we were allowed to feel ambivalent about people. We were allowed to say, ‘I like what they did here, but that bit over there doesn’t thrill me so much.’ Those days are gone. Today the rule is that if someone—a scientist, a writer, a broadcaster, a politician—does one thing we don’t like, they’re dead to us.”

I’m going to suggest that this development leads to another kind of loss: the loss of our ability to work together, or better, learn from each other, despite intense disagreement over certain issues. Whether it’s because our opponent hails from a different political party, or voted differently on a key referendum, or thinks about economics or gun control or immigration or social values—or whatever—in a way we struggle to comprehend, our collective habit of shouting at each other with fingers stuffed in our ears has reached a breaking point.