Testing What We Think We Know

BY 1990, many doctors were recommending hormone replacement therapy to healthy middle-aged women and P.S.A. screening for prostate cancer to older men. Both interventions had become standard medical practice.

But in 2002, a randomized trial showed that preventive hormone replacement caused more problems (more heart disease and breast cancer) than it solved (fewer hip fractures and colon cancer). Then, in 2009, trials showed that P.S.A. screening led to many unnecessary surgeries and had a dubious effect on prostate cancer deaths.

How would you have felt — after over a decade of following your doctor’s advice — to learn that high-quality randomized trials of these standard practices had only just been completed? And that they showed that both did more harm than good? Justifiably furious, I’d say. Because these practices affected millions of Americans, they are locked in a tight competition for the greatest medical error on record.

The problem goes far beyond these two. The truth is that for a large part of medical practice, we don’t know what works. But we pay for it anyway. Our annual per capita health care expenditure is now over $8,000. Many countries pay half that — and enjoy similar, often better, outcomes. Isn’t it time to learn which practices, in fact, improve our health, and which ones don’t?

To find out, we need more medical research. But not just any kind of medical research. Medical research is dominated by research on the new: new tests, new treatments, new disorders and new fads. But above all, it’s about new markets.

We don’t need to find more things to spend money on; we need to figure out what’s being done now that is not working. That’s why we have to start directing more money toward evaluating standard practices — all the tests and treatments that doctors are already providing.


There are many places to start. Mammograms are increasingly finding a microscopic abnormality called D.C.I.S., or ductal carcinoma in situ. Currently we treat it as if it were invasive breast cancer, with surgery, radiation and chemotherapy. Some doctors think this is necessary, others don’t. The question is relevant to more than 60,000 women each year. Don’t you think we should know the answer?

Or how about this one: How should we screen for colon cancer? The standard approach, fecal occult blood testing, is simple and cheap. But more and more Americans are opting for colonoscopy — over four million per year in Medicare alone. It’s neither simple nor cheap. In terms of the technology and personnel involved, it’s more like going to the operating room. (I know, I’ve had one.) Which is better? We don’t know.

Let me be clear: answering questions like these is not easy. The Veterans Affairs Cooperative Studies Program is in fact preparing to take on the colonoscopy versus fecal occult blood testing question. The trial, which will involve up to 50,000 patients, will last a decade and surely cost millions of dollars.

Research like this takes more than grant money. For starters, it takes a research infrastructure that monitors what standard practice is — data on what’s actually happening across the country. Because of Medicare, we have a clear view for patients age 65 and older, but it’s a lot cloudier for those under 65. Basic questions like how common annual physical exams are and what testing is part of them are unanswerable.

It also takes a research culture that promotes a healthy skepticism toward standard medical practice. That requires physician researchers who know what standard practice is, have the imagination to question it and the skills to study it. These doctors need training that's not yet part of any medical school curriculum; they need mentoring from senior researchers; and they need some assurance that investigating accepted treatments can be a viable career option, instead of career suicide.

We have to move quickly. The administrative demands of clinical care, on one side, and the competition for research funding, on the other, make it increasingly difficult for researchers to see patients. They become isolated from standard practice, and their ability to study it diminishes. Clinicians who are well positioned to study these issues are increasingly directed toward enhancing productivity — questions of how we can do this better, faster or more consistently — instead of questions about whether the practices are warranted in the first place.

Here’s a simple idea to turn this around: devote 1 percent of health care expenditures to evaluating what the other 99 percent is buying. Distribute the research dollars to match the clinical dollars. Figure out what works and what doesn’t. The Patient-Centered Outcomes Research Institute (created as part of the Affordable Care Act to study the comparative effectiveness of different treatments) is supposed to tackle questions of direct relevance to patients and could take on this role, but its budget — less than 0.03 percent of total spending — is far from sufficient.

A call for more medical research might sound like pablum. Worse, coming from a medical researcher, it might sound like self-interest (cut me some slack, that’s another one of our standard practices). But I don’t need the money. The system does. Or if you prefer, we can continue to argue about who pays for what — without knowing what’s worth paying for.

H. Gilbert Welch, a professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice, is a co-author of “Overdiagnosed: Making People Sick in the Pursuit of Health.”

A version of this op-ed appears in print on August 20, 2012, on page A19 of the New York edition with the headline: Testing What We Think We Know.