Stipulated: Adults get to make whatever boneheaded medical decisions about themselves that they want.

Stipulated: Adults do not get to make whatever boneheaded medical decisions about their children that they want.

Question: Ought a 17-year-old be able to make a boneheaded medical decision about herself?

Cassandra C. is a 17-year-old with Hodgkin lymphoma, a disease which, when treated with chemotherapy, has a high (80-85%) survival rate. Cassandra initially underwent surgery, then two rounds of chemo, before deciding that while she wants to live, she wants to do so without, in the words of her mother, Jackie Fortin, putting “poison” in her body.

It’s not a stretch for a layperson to consider chemo a poison: the patient ingests the drugs with the idea that they will kill the cancer without killing her, and it is the lucky, lucky cancer patient who isn’t sickened by this treatment.

But it is a stretch to think that there exists some other, effective, non-poisonous treatment for Hodgkin’s, not least because there is no good evidence of its existence. Some (#notall. . .) alt-med folks may think oncologists are in league with pharma companies to hide cheap and easy cures to nasty diseases, but I highly doubt there is a conspiracy of cancer docs to keep effective treatments away from their patients just so they can profit from their suffering.

In any case, if Cassandra were 18, she could cease the chemo in search of those non-poisonous treatments, but at not-quite-17-and-a-half, she’s been confined to a medical ward by Connecticut state officials and forced to undergo treatment; the Connecticut Supreme Court just reaffirmed that decision by those officials.

Art Caplan (from whom I took a class when he was at Minnesota) wrote a brief editorial arguing that 17 is 17—that is, not 18, and therefore too young to make medical decisions on her own behalf. I get the technical point (17≠18), but I’m not so sure that the consequentialist argument Caplan goes on to make—Hodgkin lymphoma is treatable—ought to carry the day.

After all, if she turned 18 tomorrow, the lymphoma would remain just as treatable, and the absence of that treatment would leave her just as dead.

Per news reports, it disgusts her to have “such toxic harmful drugs” in her body and she’d like to explore alternative treatments. She said by text she understands “death is the outcome of refusing chemo” but believes in “the quality of my life, not the quantity.”

“Being forced into the surgery and chemo has traumatized me,” Cassandra wrote in her text. “I do believe I am mature enough to make the decision to refuse the chemo, but it shouldn’t be about maturity, it should be a given human right to decide what you want and don’t want for your own body.”

It is about maturity, actually; the difficulty lies in determining what counts as maturity.

Is it just about age? Reach 18 years and you’re mature; prior to that, not.

That has both the benefit and drawback of simplicity. It’s a straightforward standard, but one which, strictly applied, seems nonsensical, ascribing a substantive ethical property to the mere passage of time: “September 30 you’re immature, but October 1 you’re mature.”

Age matters—if Cassandra were 10, I’d think there was no ethical problem—but largely as a stand-in for other properties, including the ability to make decisions.

So is maturity about decision-making ability? Well, okay, but what does this mean? Is this about making good (by whatever metric) decisions? And what if someone repeatedly makes bad (b.w.m.) decisions?

If they’re adults, and those decisions are of a non-criminal nature, we say, Okay, but largely because most of us don’t want to live in a society where we don’t get to make decisions about our own lives. We assert the procedural right to decide, regardless of the content of the decision, because we’d rather make our own decisions (good and bad) than have others make them for us.

But teenagers, man, teenagers get to make some decisions and not others, and figuring out what decisions they get to make often does come down to the content of those decisions. If the kid makes good decisions (as determined by the parents), he’s given the leeway to make even more; if not, then not.

And thus the Connecticut Supreme Court has judged the procedural ability of Cassandra to make her own medical decisions on the content of those decisions: it thinks she’s decided badly, and as a result, ought not be able to decide at all.

I get this, I do, but I am made uneasy by it. What if she had a different disease, with a much lower (40 percent? 30?) survival rate? What if the treatment were more disabling over the long-term? Or what if she doesn’t respond to the treatment? Is there any amount of suffering from the treatment that would lead the hospital to stop?

Or will they only stop when Cassandra turns 18, and is free to decide for herself, whatever the content of that decision?

This is a tough case, and I don’t know that the Court got it wrong. I just don’t know if they got it right, either.

Kathy was in charge of the Biomedical Ethics Unit at McGill during my postdoc, and while I think I spoke to her on the phone before moving to Montréal, I hadn’t met her before then. I was going to fly up to Montréal to look for an apartment, but she’d assured me that I could find a place upon arrival.

In all my years of knowing her, that might have been the only time she gave me bad advice. (Well, that and suggesting that if I liked Montréal, I’d probably like Boston, too.)

In every other way, however, Kathy was as fine a guide into bioethics and Québec as I could have hoped for. She and her husband Leon invited me over for dinner more times than I could count—in fact, I stayed with them a good chunk of the time I was looking for an apartment—and took me hiking outside of the city, and to various festivals within it.

She also tried to convince me that Montréal bagels were as good as New York bagels, but that didn’t take. (Montréal bagels are fine—and, honestly, given how pillowy so many NY bagels have become of late, certainly the better size—but a bit too sweet for my taste.)

Mostly, though, I remember the many long conversations with her in her office, first in the old building on Peel, then in her corner office in the building on the other side of the street. I’d have been in my office at the end of the day and have wandered over to hers to say goodbye, then end up staying for an hour or two as we talked about ethics and genetics and politics and music and memory.

She was generous with her time and with herself.

She was kind as she worked her way through my thoughts and her own, but it was through these long conversations, as well as in our various BMU meetings, seminars, and colloquia, that her tough-mindedness revealed itself. It was so easy to skip past the basics, but Kathy always returned to them, and to the basic necessity of patient and subject protection.

That was Kathy’s abiding concern: how to take care of people, be they patients at the Children’s Hospital, where she served as a clinical ethicist, or subjects in the clinical trials she wrote about. She and her colleagues (including Stan Shapiro and Charles Weijer) returned again and again to the necessity of clinical equipoise in research trials, especially in regards to trials of psychoactive medications.

All too often psychiatric patients would be—are—offered fewer subject protections than other similarly seriously ill patient-subjects: instead of testing new treatments against current ones, researchers test the investigational drug against. . . nothing. Not only does this skew the results by inflating the apparent effects of the drug—which is bad enough—but subjects who might otherwise benefit from current treatments are denied them, and thus suffer as a direct and entirely predictable result of their participation in the trial.

This, as Kathy would note, is a textbook definition of unethical research.

She and Stan focused on psychiatric patients, but Kathy’s research ranged widely across bioethics and included considerations of genetic and stem cell research. She worked with Bartha Knoppers at the Université de Montréal and Françoise Baylis at Dalhousie in trying to come to grips with the then-novel human embryonic stem cell research.

Bartha and Françoise can be aggressive in argumentation—I am like them in that respect—but Kathy was not one to be flattened by fast-rolling words. She was too acute a thinker.

This is what I missed, at first. Her kindness, her gentleness, was so immediately apparent that I made the mistake I too often make: taking softness for weakness.

She was soft; she was also sharp. There was no contradiction.

That is a lesson I’m still learning.

I am so sorry that I will never be able to tell her how much she meant to me, personally and intellectually. I am a better thinker for having known her, and a better teacher for having taught alongside her. She is, and will remain, a touchstone. I will miss her for the rest of my life.

Some bioethicists who worry about enhancement don’t worry about normalization; some embrace enhancement precisely because they think it offers a way out of normalization.

Neither position makes sense insofar as enhancement and normalization are linked.

The enhancement-worriers fret about new techs or practices taking us away from a baseline normal human, yet don’t wonder about the creation of that baseline normal human. The enhancement-embracers think other-than-normal is just dandy, yet don’t consider that enhancement can lead to new normals.

This is not, I must say, the position of all those who write on enhancement and normalization; one of the things I like about Parens’s book Enhancing Human Traits is that it includes plenty o’ pieces by those who weigh both enhancement and normalization.

Me, I think the real issue is normalization, such that my concerns about enhancement are precisely that they might become the new norm. Enhancement leads to questions; normalization feeds off forgetting.

No, not anything brilliant on my part: I brought up an issue in my bioethics course that I’ve mentioned in previous courses—had thought I’d mentioned previously in this course—and a number of them lost it.

I told them that there were deaf people who didn’t think there was anything wrong with being deaf, and furthermore, they’d like you to keep your cochlear implants and whatnot to yourselves, thankyouverymuch.

That did not compute.

Now, the backdrop for this moment of brain splatter was a discussion of social coercion, normalization, enhancement, disability, and morality (among other things). Somewhere in this discussion I noted that devices which are promoted as aiding the disabled might be more about assuaging the discomforts of the non-disabled. This was one of Anita Silvers’s points in her essay “A Fatal Attraction to Normalizing” (in Enhancing Human Traits, ed. Erik Parens), as exemplified by the Canadian government’s decision to push children affected by thalidomide into prostheses and to forbid them to roll or crawl. “The direction of resources to fund artificial limb design and manufacture rather than wheelchair design was influenced by the supposition that walking makes people more socially acceptable than wheeling does.”

A number of them did not like where I was going with this. So how far do we go to accommodate those people, they said. If we’re the majority, shouldn’t they, you know, have to adapt? Are we just supposed to design everything around them?

One of them even complained about ramps: Why should I have to go around and around if I just want to take the stairs?

I pointed out that ramps rarely replace stairs, but are instead treated as an addition, meaning that the stairs remain. I also noted that crappy design is bad for everyone. The building in which the class is held, Carman Hall, is a terribly designed building—you have to go down a flight of steps just to enter it—and I suggested that being forced to think about accessibility for, say, wheelchair users might just lead to designs which are good for everyone. Curb cuts, I noted, are useful for those pushing strollers or, say, three weeks’ worth of laundry in a cart.

Besides, I noted, at some point we’re all, if we’re lucky, going to get old and frail, so designing for access is, in effect, designing for everyone.

In any case, my mind was a little blown by their sense that accommodating people who came in a model unlike themselves was unfair.

Okay, now back to their shorted neural circuits. Deafness, I noted, is a condition, and some who are deaf are also a part of the Deaf community. These Deaf members see themselves as distinct, not disabled, and their community as worth preserving; as such, they see cochlear implants as a way of eliminating members of that community. Furthermore, since cochlear implants are imperfect, these deaf people will not gain the full range of sound that hearing people have, and they will never gain full status as hearing people: they will be lesser “normals,” no longer fully Deaf and never fully hearing.

But why would they want to be deaf? they asked. Doesn’t that limit them? Why wouldn’t they want cochlear implants?

Well, I noted, we’re all hearing in our class, so if we lost our hearing we would, in fact, experience it as a loss. But while we might be able to see only the limitations of deafness, they see other capacities enabled by it.

They were dubious. What about contacts, one of the students asked. I’d be blind without my contacts. J., I said, you would not be blind, you would simply have bad sight, which is more akin to being hard of hearing than being deaf.

(That said, it was a provocative question: is there a Blind community akin to the Deaf community? And what would be the implications of that? What are the implications of a lack of a Blind community?)

I’m used to students gasping a bit at the thought that Deaf people might not have a problem with their own deafness, but I can usually get them to consider that the problem with deafness is the problem that hearing people have with deafness. No, I’m not trying to force them to accept the Deaf argument—I’m not quite sure what to make of it myself—but I do want to crowbar them out of their own defaults, their own unthinking attachments to normal.

There are streams within bioethics which maintain their own unthinking attachments to normal, as well as those who prefer to poke a stick into the concept. I’m more in the latter camp (big surprise), but as I think normalizing is impossible to avoid, my approach is simply to unsettle, and be unsettled by, the normal, and go from there.

The students weren’t so much unsettled as shocked, and given that shocking can lead to reaction rather than reflection, I guess I shouldn’t be shocked that they held ever tighter to their own normality.

I’m a big fan of science, and an increasingly big fan of science fiction.

I do, however, prefer that, on a practical level, we note the difference between the two.

There’s a lot to be said for speculation—one of the roots of political science is an extended speculation on the construction of a just society—but while I am not opposed to speculation informing practice, the substitution of what-if thinking for practical thought (phronēsis) in politics results in farce, disaster, or farcical disaster.

So too in science.

Wondering about a clean and inexhaustible source of energy can lead to experiments which point the way to cleaner and longer-lasting energy sources; it can also lead to non-replicable claims about desktop cold fusion. The difference between the two is the work.

You have to do the work, work which includes observation, experimentation, and rigorous theorizing. You don’t have to know everything at the outset—that’s one of the uses of experimentation—but to go from brain-storm to science you have to test your ideas.

Biologist George Church thinks synthesizing a Neandertal would be a good idea, mainly because it would diversify the “monoculture” of Homo sapiens.

My first response is: this is just dumb. The genome of H. sapiens is syncretic, containing DNA from, yes, Neandertals, Denisovans, and possibly other archaic species, as well as microbial species. Given all of the varieties of life on this planet, I guess you could make the case for a lack of variety among humans, but calling us a “monoculture” seems rather to stretch the meaning of the term.

My second response is: this is just dumb. Church assumes a greater efficiency for cloning complex species than currently exists. Yes, cows and dogs and cats and frogs have all been cloned, but over 90 percent of all cloning attempts fail. Human pregnancy is notably inefficient—only 20-40% of all fertilized eggs result in a live birth—so it is tough to see why one would trumpet a lab process which is even more scattershot than what happens in nature.

Furthermore, those clones which are successfully produced nonetheless tend to be less healthy than the results of sexual reproduction.

Finally, all cloned animals require a surrogate mother in which to gestate. Given the low success rates of clones birthed by members of their own species, what are the chances that an H. sapiens woman would be able to bring a Neandertal clone to term—and without harming herself in the process?

I’m not against cloning, for the record. The replication of DNA segments and microbial life forms is a standard part of lab practice, and replicated tissues and organs could conceivably have a role in regenerative medicine.

But—and this is my third response—advocating human and near-human cloning is at this point scientifically irresponsible. The furthest cloning has advanced in primates is the cloning of monkey embryos, that is, there has been no successful reproductive cloning of a primate.

To repeat: there has been no successful reproductive cloning of our closest genetic relatives. And Church thinks we could clone a Neandertal, easy-peasy?

No.

There are all kinds of ethical questions about cloning, of course, but in the form of bioethics I practice, one undergirded by the necessity of phronēsis, the first question I ask is: Is this already happening? Is this close to happening?

If the answer is No, then I turn my attention to those practices for which the answer is Yes.

Cloning is in-between: It is already happening in some species, but the process is so fraught that the inefficiencies themselves should warn scientists off of any attempts on humans. Still, as an in-between practice, it is worth considering the ethics of human cloning.

But Neandertal cloning? Not even close.

None of this means that Church can’t speculate away on the possibilities. He just shouldn’t kid himself that he’s engaging in science rather than science fiction.

No, what caught my eye was the reading list: Look at all of those books!

“General of the Army: George C. Marshall, Soldier and Statesman” by Ed Cray

“Leading Lives That Matter: What We Should Do and Who We Should Be” by Mark Schwehn and Dorothy Bass

“Pericles of Athens and the Birth of Democracy” by Donald Kagan

“Augustine of Hippo” by Peter Brown

“How to Live: Or A Life of Montaigne in One Question and Twenty Attempts at an Answer” by Sarah Bakewell

“Reflections on the Revolution in France” by Edmund Burke

“The Long Loneliness: The Autobiography of the Legendary Catholic Social Activist” by Dorothy Day

“The Irony of American History” by Reinhold Niebuhr

“Thinking, Fast and Slow” by Daniel Kahneman

“The Hedgehog and the Fox” by Isaiah Berlin

I’d rather have the students read Pericles (via, say, Thucydides—and hey, let’s toss in the Melian dialogue while we’re at it) than read about Pericles—ditto Augustine and Montaigne—but if the Kagan, Brown, and Bakewell books include large chunks of these thinkers’ words, it’s defensible.

I like the Dorothy Day (of course), think de Tocqueville would have been better than Burke (and, perhaps, Niebuhr), and while I have the Kahneman book on my to-read list, I wonder what Brooks will do with it. Berlin, eh, but perhaps fitting.

I also think “The Character Course” would be a better title than “The Humility Course”—I think a fair amount of the snark is due to the title itself (the other part, of course, due to Brooks himself)—but it’s the content that matters, and, again, the content is defensible.

That’s not a major endorsement, of course, but its minimalism isn’t meant as a slam. It’s hard to put together a syllabus, especially the first time, and what’s on the page and what’s in the classroom are not always in sync. Were I to teach a course on, say, political character, I’d probably keep Pericles (and the Melians) and Augustine and Day, add Plato and Machiavelli (of course), perhaps Voltaire, probably something from Foucault’s History of Sexuality, focusing on ethos and self-care. Something from Mandela. Portions of the Nixon tapes, perhaps. Some James Baldwin.

At least, that’s what I’d like to offer; I wouldn’t actually be able to do so: There is no way I could assign that many texts. My previous chair actively discouraged me from assigning too much reading (too much for a 200-level course: more than 25-50 pages a week), although the current chair might not have a problem with my overloading 300-level students.

More to the point, the students wouldn’t do the reading. I got my 100-level American government students to read the text by assigning near-weekly quizzes, and by requiring them to pull from the supplemental book (journalistic essays) for their take-home mid-terms. I’m wondering how to get my 100-level contemporary issues students to read their short-short pieces before class, and am tentatively planning to require them to hand in a brief summary of the readings before each and every class.

In other words, if they’re not being graded directly on the readings themselves, they will not do them.

I recognize this with my bioethics class, and while there is a fair amount of reading on the syllabus, I’d bet that more than half the class doesn’t bother to do all of the reading. Why would they? No final exam.

Given that, I’ve concentrated less on the answers the various authors provide and more on the questions. They won’t remember the readings, and may not need most of them for their papers, so if I want them to get anything out of the class, I have to find something that will stick to the roofs of their minds.

(Another image I’ve used? Questions-as-earwigs.)

I ask them questions, I poke their answers, turn them around and push ‘em right back at ‘em. Oh, you think this is settled? Well then, what about that? What, you say that that has nothing to do with this? What about p, q, r? If you approve of red, why not orange? On what basis do you disapprove of triangles?

I can do this because these kinds of troubles are inherent in the material itself; when I half-joke that I aim to trouble you, it’s less about what I come up with sui generis than what I can point to in the rumpled textures of, say, enhancement technologies. Having ranged over this ground for some years, I’ve become, to switch metaphors, pretty good at kicking up the artifacts half-buried in the dirt—and showing them how to do so, as well.

It’d be great if my students would read everything that I assign because they truly want to learn everything they can about the subject, but that ain’t gonna happen.

So I work around that, and try to get them to care enough to learn, anyway.

Celltex has responded by sending a letter to Eric Kaler, president of the University of Minnesota, alleging misdeeds by Turner and ending with the following:

Please inform us at your earliest convenience whether Associate Professor Turner’s February 21st letter, on the University’s letterhead, was authorized by the University. If it was not authorized, please inform us of what steps the University will take to disclaim any sponsorship of the Turner letter, retract the letter, remove the letter from the internet, prevent further distribution of the letter, and prevent recurrence of this type of action by Associate Professor Turner (or any other University professor). We wish to limit legal liability to those responsible for the wrongful acts and appreciate your cooperation in that regard.

Yeah, no.

Now, at this point I must admit that I know Leigh Turner—I worked with him at McGill—and like and greatly respect him. Leigh is a methodical thinker and researcher and, unlike your erratic and absurd host, not at all prone to popping off.

I also have to say that I found out about this SLAPP-suit at Carl Elliott’s blog, that I know, like, and greatly respect Carl, AND that I know, like, and greatly respect a number of the people who have also written to the FDA in support of Leigh.

(I also admit that I disclose these connections not just for reasons of honesty but because I think these people are terrific and am glad I know them.)

Anyway, read through the comments at Carl’s post and you’ll understand what I mean by “all hail Leigh Turner!” Note, for example, his patient and relentless responses to the evasive comments and personal attacks levelled by Laurence B. McCullough of the Center for Medical Ethics and Health Policy at Baylor College of Medicine. Leigh responds to every single point, respectfully requests additional information, and does. not. let. up.

Did I mention Leigh is methodical?

Okay, he does let one snipe go: McCullough at one point accuses Leigh of “American provincialism”; Leigh is Canadian.

In any case, Leigh has set a standard on how to respond to evasion, misdirection, and intimidation: know your stuff and don’t back down.