Last week I traveled to Atlanta, Georgia, where the American Society for Bioethics and Humanities (ASBH)* held its annual meeting. The better part of a thousand people participated in the four-day conference. The sessions drew a mix of nerdy physicians like me, nurses, professional bioethicists, philosophy professors, a few lawyers, historians and artists.

It was really a lot of fun. Fun, that is, if you’re into subjects like philosophy in medicine, literature in medicine, medicine in literature, ethics in medicine, technology and privacy, justice and parsimony in health care, etc. I hadn’t heard the word “epistemic” so many times since I was in college. I felt young and idealistic, talking seriously about philosophy, as though it matters. (For the record: it does.) This was, clearly, a medical society meeting unlike others. For instance, an academic named Woods Nash, of the University of Tennessee, gave a talk on David Foster Wallace’s story, “Luckily the Account Representative Knew CPR.”

original cover image (Wikipedia), publisher: Random House

On the first day, I walked into a provocative plenary talk by Julian Savulescu, an ethicist and Oxford professor. He presented an argument that using medical tools for the purpose of moral bioenhancement might be a good thing. (If this topic brings to mind A Clockwork Orange, you’re on track. Think also of Huxley’s soma, as one questioner noted.) All very serious. The next day, a packed ballroom of people heard from Amy Gutmann, President of the University of Pennsylvania and Chair of the Presidential Commission for the Study of Bioethical Issues. She spoke about the concept of deliberative democracy, and the value of teaching ethics. Toward the end, she entered into a humorous and seemingly candid discussion of men and women in the workplace, “having it all,” and common sense. “Time is finite,” she mentioned.

I could go on, and list all the lectures and smaller sessions, but this post would get dry. Besides, I couldn’t possibly attend each one, nor can I give all the speakers their due credit. Some talks were better than others, as is inevitably the case at meetings. But I can’t resist a plug for the presentation by Rosemarie Garland-Thomson, a professor of women’s studies and English at Emory, on perspective and disability. Another favorite had to do with technology and science. David Magnus, of Stanford University, considered whether research accomplished through gamification – a means of crowd-sourcing science – on platforms like FoldIt, EteRNA and EyeWire should be covered by the usual rules for biomedical research. “Are the players scientists?” he asked.

The tone, overall, was intense. Intellectual, brain-stimulating… In contrast to other medical meetings I’ve attended, there was little glitz, scant makeup and limited wireless access. Perhaps the most surprising aspect of the ASBH conference was the distribution of freebies at booths in a display area, where attendees gathered for an opening evening reception and, on other days, breakfasts. Of course it was all minor stuff handed out, like pens and candy, mainly from university departments seeking applicants for fellowships, and academic presses selling books. The most substantive, and useful, gift I received (or “accepted” – a term with greater moral accuracy, from my perspective) was a green umbrella from the Hastings Center – a bioethics stronghold where I’d love to spend some time learning and doing research, in the future.

On Sunday morning, I attended one of the last sessions, on decision aids in bioethics. We lingerers were treated to three terrific talks. I can’t cover them all. So to close this post, I’ll refer to the promising work of Michael Green, a physician and bioethicist at the Penn State College of Medicine. He and colleagues have been developing an on-line decision tool for advance care planning with grant support from the NIH, the American Cancer Society and elsewhere. The website, MakingYourWishesKnown.com, enables individuals to detail their wishes through an interactive questionnaire. Green and his colleagues collect and publish data on users’ experiences with the decision aid. They can measure, for instance, whether it gives people a sense of control, or reduces fear, and whether patients’ families and doctors find the “outputs” useful. I, for one, intend to try out the MYWK website.

And I do hope to attend another ASBH meeting. Next year’s is planned for October, in San Diego.

One advantage of blogging is that I can share my ideas, directly, with people who find them interesting, provocative or otherwise read-worthy. So for those who are curious, here is my general view on health care reform (HCR) by any name, in 3 points:

First, we need it. The U.S. health care system doesn’t work. It doesn’t serve doctors. Good physicians are few and far between in some geographical regions, in primary care and in needed specialties (like oncology and geriatrics). It doesn’t serve people who might be patients, unless they happen to work for a generous employer that offers a good plan (few do), they are rich enough to spend thousands each year out-of-pocket and out-of-network, or they are most fortunate of all, having no serious medical problems to contend with or pay for.

Second, although I wholeheartedly support the Affordable Care Act, because it’s a step in the right direction, I don’t think the legislation goes far enough. We need a simpler, single-payer solution, as in a national health care program, Medicare-style, for all. Why? Because the quasi-plan for state-based exchanges, each with competing offerings and terms of coverage that aren’t necessarily easy to interpret, is too complicated. There’s no reason to think a free market operating at the state level would match the public’s or many individuals’ medical needs. As long as each provider is trying to make a buck, or a billion, it won’t put patients’ access to good care first. Besides, there’ll be administrative costs embedded in each exchange that we could live better without. As for private insurers, well, I couldn’t care less about the well-being of those companies or their executives’ incomes.

Profit is not what medical care is about, or should be about. What we need is a simple, national health plan, Europe-style, available to everyone, with minimal paperwork and, yes, limits to care.

Third point – on rationing.

Some of my readers may wonder how I, who support some costly components of good medical care, like providing breast cancer screening for middle-aged women and sometimes giving expensive drugs to people with illness, can favor health care reform. New cancer meds cost around $100,000 a year, more or less, as do innovative treatments for cystic fibrosis, inflammatory bowel disease, rheumatoid arthritis and other conditions. I don’t think the sane solution is abandoning expensive but life-saving and quality-of-life-improving treatments.

The hardest part of this debate, and what’s so rarely discussed, is the appropriate limits of medical treatment – based not on costs, which we can certainly afford if we pull back on the administrative expenses of health care and insurers’ huge profits, but on factors like prognosis and age. So, for example, maybe a 45-year-old man should get a liver transplant ahead of an 80-year-old man. Screening for breast cancer, if it is valuable as I think it is, should perhaps be limited to younger women, maybe those under 70 or 75, based on the potential for life-years saved. Maybe we shouldn’t assign ICU beds to individuals who are over 85, or 95, or 100 years old.

The real issue in HCR, if you ask me, is who would decide on these kinds of questions. That conversation’s barely begun, and I would like to participate in that…

Meanwhile, the Supreme Court is busy doing its thing, sorting out whether the Affordable Care Act is constitutional or not. I’m glad they’re on the case, so that they might find that it stands and we can move on and forward.

The latest NEJM features a big story about a small trial, with only 39 patients in the end, on the potential for placebos to relieve patients’ experience of symptoms. This follows other recent reports on the subjective effectiveness of pseudo-pharmacology.

My point for today is that placebos are problematic in health care with few exceptions. First, in clinical trials, patients sometimes agree to take what might be a placebo so that researchers can measure effects of a drug, by comparison. A second instance is, possibly, when doctors treat children. Even then, I’m not sure it’s wise to “train” kids to take a pill and expect to feel better.

The relationship of an adult patient with a physician involves, or should involve, trust and mutual respect. A person cannot possibly give informed consent for a treatment he or she doesn’t know about. So if the doctor’s giving a placebo to the patient, and making the decision for the patient because it might help, that diminishes the patient’s autonomy, or self-determination. In simpler terms, it’s condescending.

You might argue that there’s nothing wrong with a treatment if it makes you, or someone else, feel better. But that’s kind of like saying the ends justify the means.

A placebo is, by definition, manipulative. I wouldn’t want any doctor to treat me that way.

According to the Washington Post coverage, the proposal comes from the United Network for Organ Sharing, a Richmond-based private non-profit group that the federal government contracts with to allocate donated organs. From the Times piece:

Under the proposal, patients and kidneys would each be graded, and the healthiest and youngest 20 percent of patients and kidneys would be segregated into a separate pool so that the best kidneys would be given to patients with the longest life expectancies.

I have to admit, I’m glad to see these stories in the media. Any reasoned discussion of policy and reform requires frank talk on health care resources which, even in the best of economic times, are limited.

An on-line friend, colleague and outspoken patient advocate, Trisha Torrey, has an ongoing e-vote about whether people prefer to be called a “patient,” a “consumer,” a “customer” or some other noun to describe a person who receives health care.

My vote is: PATIENT.

Here’s why:

Providing medical care is or should be unlike other commercial transactions. The doctor, or other person who gives medical treatment, has a special professional and moral obligation to help the person who’s receiving his or her treatment. This responsibility – to heal, honestly and to the best of one’s ability – overrides any other commitments, or conflicts, between the two.

The term “patient” constantly reminds the doctor of the specialness of the relationship. If a person with illness or medical need became a consumer like any other, the relationship – and the doctor’s obligation – would be lessened.

Some might argue that the term “patient” somehow demeans the health care receiver. But I don’t agree: From the practicing physician’s perspective, it’s a privilege to have someone trust you with their health, especially if they’re seriously ill. In this context, the term “patient” can reflect a physician’s respect for the person’s integrity, humanity and needs.

What makes this question so ripe, in my oncologist-patient-teacher-blogger’s way of thinking, is that we may never reach an objective conclusion on the matter – even if formal studies provide data on it ten years from now.

The problem is this: To prove that empowered patients are “better and healthier,” how would we design a trial? If we compared engaged patients – who almost by definition are more educated, or at least have Internet access, or are one way or another linked to people who can help them find needed information – with disconnected patients, the engaged would likely do better. But the outcome might be a function of confounding variables: their education, economic status, on-line connectivity, etc.

I think the answer is inherent in the goal of being engaged, and this has to do with the concept of patient autonomy – what’s essentially the capacity of a person to live and make decisions according to one’s own set of knowledge, goals and values.

Autonomy in medicine, which borders on the empowerment idea, can be an aim in itself, and therefore valuable regardless of any measured outcome. But for autonomy, or patient empowerment, to be meaningful and maybe even “better” in the strictly medical sense, as measured by outcomes like survival or quality of life, there needs to be stronger public education in the U.S. and everywhere.

You can read all you want on stem cells, gene therapy or rare forms of chronic leukemia driven by a turned-on oncogene. But if you don’t know the basics of science and math, or don’t have sufficient language skills to read and absorb new knowledge, or at least to ask pertinent questions, it’s easy to get lost in that information, overwhelmed or – worse – suckered by those who’d try to persuade you of something that’s not true, cloaked in pseudoscience. That sort of material is abundant and available on-line and, occasionally, in some doctors’ offices.

A front-page story on the Humanities and Medicine Program at the Mount Sinai School of Medicine, here in Manhattan, recently added to the discussion on what it takes to become a doctor in 2010. The school runs a special track for non-science majors who apply relatively early in their undergraduate years. Mount Sinai doesn’t require that they take MCATs or the usual set of premedical science courses – some college math, physics, biology, chemistry and organic chemistry – before admission.

The idea of the program is two-fold: first, that the traditional med school requirements are a turn-off, or barrier, to some young people who might, otherwise, go on to become fine doctors; second, that a liberal arts education makes for better, communicative physicians and, based on the numbers published in a new article, a greater proportion who choose primary care.

Today Orac, a popular but anonymous physician-scientist blogger, considers the issue in a very long post. His view, as I understand it, is that if doctors don’t know enough science they’ll be vulnerable to misinformation and even quackery.

On the other side of the spectrum, perhaps, is Dr. Pauline Chen, a surgeon who puts her name on her blog and essays. In a January column, “Do You Have the Right Stuff to Be a Doctor?” she challenged the relevance of most medical schools’ entry requirements.

I see merit on both sides:

It seems fine, even good, for some students to enter medical school with backgrounds in the humanities. Knowledge of history, literature, philosophy, art history, anthropology and pretty much any other field can enhance a doctor’s capability to relate to people coming from other backgrounds, to recognize and describe nonparametric patterns and, perhaps, deliver care. Strong writing and verbal skills can help a doctor be effective in teaching, get grants and publish papers and, first and foremost, communicate well with patients and colleagues.

Still, there’s value in a doctor’s having a demonstrated aptitude in math and science. Without the capacity to think critically in math and science, physicians may not really understand the potential benefits and limitations of new medical findings. What’s more, doctors should grasp numbers and speak statistics well enough so they can explain what often seems like jumbled jargon to a patient who’s about to make an important decision.

Thinking back on my years in medical school, residency, fellowship, research years and practice in hematology and oncology, I can’t honestly say that the general biology course I took – which included a semester’s worth of arcane plant and animal taxonomy – had much value in terms of my academic success or in being a good doctor. Chemistry and organic chemistry were probably necessary to some degree. Multivariable calculus and linear algebra turned out to be far less important than what I learned, later on my own, about statistics. As for physics and those unmappable s, p, d and f orbitals where electrons zoom, I have no idea how those fit in.

What I do think is relevant was an advanced cell biology course I took during my senior year. That, along with a tough, accompanying lab requirement, gave me what was a cutting-edge, 1981 view of gene transcription and the cell’s molecular machinery. Back then I took philosophy courses on ethical issues including autonomy – those, too, proved relevant in my med school years and later, as a practicing physician. If I could do it again, now, I’d prepare myself with courses (and labs) in molecular biology, modern genetics, and college-level statistics.

My (always-tentative) conclusions:

1. We need doctors who are well-educated, and gifted, in the humanities and sciences. But for more of the best and brightest college students to choose medicine, we (our society) should make the career path more attractive – in terms of lifestyle, and finances.

(To achieve this, we should have salaried physicians who do not incur debt while in school, ~European-style, and who work in a system with reasonable provisions for maternity leave, medical absences, vacation, etc. – but this is a large subject beyond the scope of this post.)

2. There may not be one cookie-cutter “best” when it comes to premedical education. Rather, the requirements for med school should be flexible and, perhaps, should depend on the student’s ultimate goals. It may be, for instance, that the ideal pre-med fund of knowledge of a would-be psychiatrist differs from that of a future orthopedist or oncologist.

3. We shouldn’t cut corners or standards in medical education to save money. As scientific knowledge has exploded so dramatically in the past 30 years or so, there’s more for students to learn, not less. Three years of med school isn’t sufficient, even and especially for training primary care physicians, who need to be familiar with many aspects of health care. If admission requirements are flexible, that’s fine, but they shouldn’t be lax.

Critical thinking is an essential skill for a good doctor in any field. But that kind of learning starts early and, ideally, long before a young person applies to college. To get that right, we need to go back to basics in elementary and high school education. If students enter college with “the right stuff,” they’ll have a better understanding of health-related topics whether they choose a career in medicine, or just go to visit the doctor with some reasonable questions in hand.

I like the idea that we can make smart choices, eat sensible amounts of whole foods and not the wrong foods, exercise, not smoke, maintain balance (whatever that means in 2010) and in doing so, be responsible for our health. Check, plus.

It’s an attractive concept, really, that we can determine our medical circumstances by informed decisions and a vital lifestyle. It appeals to the well – the sense that we’re OK, on the other side, doing something right.

There is order in the world. God exists. etc.

Very appealing. There’s utility in this outlook, besides. To the extent that we can influence our well-being and lessen the likelihood of some diseases – of course we can, and should – we ought to adjust our lack of dieting, our drinking, smoking, arms firing, boxing and whatever else damaging it is that we do to ourselves.

I’m all for people adjusting their behavior and knowing they’re accountable for the consequences. And I’m not keen on a victim’s mentality for those who are ill.

So far so good –

Last summer former Whole Foods CEO John Mackey offered an unsympathetic op-ed in The Wall Street Journal on the subject of health care reform. He provides the “correct,” i.e. unedited, version on the CEO’s blog:

“Many promoters of health care reform believe that people have an intrinsic ethical right to health care… While all of us can empathize with those who are sick, how can we say that all people have any more of an intrinsic right to health care than they have an intrinsic right to food, clothing, owning their own homes, a car or a personal computer? …

“Rather than increase governmental spending and control, what we need to do is address the root causes of disease and poor health. This begins with the realization that every American adult is responsible for their own health. Unfortunately many of our health care problems are self-inflicted…”

Now, here’s the rub. While all of us can empathize, not everyone does. And few citizens go to medical school. Some, uneducated or misinformed, might sincerely believe that illnesses are deserved.

So let’s set some facts straight on real illness and would-be uninsurable people like me:

Most people who are sick – with leukemia, diabetes, osteogenesis imperfecta, heart disease, multiple sclerosis, scoliosis, glycogen storage disease Type II, depression, Lou Gehrig’s disease, sickle cell anemia, rheumatoid arthritis or what have you – are not ill by choice. They didn’t make bad decisions or do anything worse, on average, than people who are healthy.

Rather, they became ill. Just like that.

The idea of an insurance pool is that when everyone in the community participates, whoever ends up with large medical expenses is covered, explained Jonathan Cohn. When contributions come in from all, including those who are healthy, funds are sufficient to provide for the sick among us.

As things stand, the insurance industry divides us into likely profitable and unprofitable segments. “So you know if you’re one of the people born with diabetes, you have cancer, you had an injury that requires lengthy rehabilitation, tough luck, you’re going to end up in that pool of unhealthy people,” Cohn said.

Insurance is no cure-all, to be sure. It won’t take away my cousin’s cancer or fix Bill Clinton’s heart. That would require research and better medicines.

Denying insurance, or care, to those who need it most is inconceivable in a society as ours was intended. It’s uncivil.

Mrs. Henrietta Lacks died of metastatic cervical cancer in the colored ward at Johns Hopkins Hospital in Baltimore, MD in September 1951. She lived no more than 31 years and left behind a husband, five children and an infinite supply of self-replicating cancer cells for research scientists to study in years to come.

HeLa cells with fluorescent nuclear stain (Wikimedia Commons)

Like many doctors, I first encountered HeLa cells in a research laboratory. Investigators use these famous cells to study how cancer cells grow, divide and respond to treatments. I learned about Mrs. Lacks, patient and mother, just the other day.

Rebecca Skloot chronicles her short life in fascinating detail. She contrasts the long-lasting fate and productivity of Lacks’s cells with that of the woman who bore them. She connects those, and her human descendants’ unfortunate financial circumstances, to current controversies in bioethics.

In the years following their mother’s death, scientists repeatedly approached her husband and asked her young children for blood samples to check the genetic material, to see if their DNA matched that of cell batches, or clones, growing in research labs.

The issue is this: her husband had but a third-grade education. The children didn’t know what a “cell,” “HLA testing” or a “clone” was.

The family had essentially no idea what the doctors who’d taken, manipulated and cloned their mother’s cells were talking about, Skloot recounts. They thought the doctors were testing them for cancer.

Years later, when they learned that their mother’s cells were bought, sold and used at research institutions throughout the world, they became angry and distrustful. The problem, she suggests, was essentially one of poor communication.

“Even a basic education in science would have helped,” Skloot said. “Patients, they want to be asked, and they want to be told what’s going on.”