Effective March 2011, I am a reporter-at-large for Bloomberg News based in New York covering drugs, biotechnology, medicine, and science. You can reach me at rlangreth@gmail.com. Before that, I was a senior editor at Forbes in charge of health coverage. Before Forbes, I covered the drug industry as a staff reporter at The Wall Street Journal for five years and was an associate editor at Popular Science for three. I won the Evert Clarke Award and the CASW Science and Society Prize, was a co-winner of the Polk Award, and was recently a runner-up for the Association of Health Care Journalists Award. More interesting: I'm the only one in my family who is not a redhead.

9/16/2010 @ 1:43PM

The New York Times Makes Amazing Correction on Alzheimer's Disease Story

If you care about Alzheimer’s disease research, you should read the New York Times corrections page today (Sept. 16). Most of the corrections there are boring matters of a misspelled name, a wrong title or a wrong dollar figure. But today there is a lengthy three-paragraph correction on a front-page story from August 10 that seemed to imply that a perfectly accurate test for predicting Alzheimer’s disease years in advance is at hand.

Maybe not.

The original story seemed too good to be true when I briefly scanned it from vacation (the online story has been revised to reflect the correction, so you can’t see what it originally said). Some media critic bloggers argued the story went too far in the way it presented its conclusions. Stories by wire services were much more circumspect. Now–a full month later–the New York Times seems to be acknowledging as much.

Here’s the correction:

An article on Aug. 10 about spinal fluid tests in Alzheimer’s research left the incorrect impression that the test can predict the disease with 100 percent accuracy in all patients. (That impression was reinforced by the headline.) In fact, the test was found to be as much as 100 percent accurate in identifying a signature level of abnormal proteins in patients with memory loss who went on to develop Alzheimer’s — not in identifying patients who “are on their way” to developing the disease.

The article also misinterpreted an element of the researchers’ findings. Among a group of patients who had memory loss and developed Alzheimer’s within five years, every one had protein levels associated with the disease five years before; it was not the case that “every one of those patients with the proteins developed Alzheimer’s within five years.”

And the article misstated the source from which the finding of 100 percent accuracy was drawn. It came from a separate set of patients that the researchers examined to validate the protein signature they had identified in an initial group. (In the initial group, as the article noted, nearly every person with Alzheimer’s had the signature protein levels.)

One problem is that the 100% finding in the original study was from a subset of the study that used a tiny sample of just over 50 patients. Hardly definitive. Another problem is that the 100% figure referred only to a measure called sensitivity–the ability of a test to catch everyone who will get a disease. It doesn’t address a second crucial measure called specificity that relates to how many people who come up positive on the test won’t get Alzheimer’s at all.
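To see why sensitivity alone tells you little, here is a minimal sketch with made-up numbers (not from the study): a test can catch every single future Alzheimer’s case, yet still flag many people who will never develop the disease, so that most positive results are false alarms.

```python
# Hypothetical screening numbers for illustration only -- not from the study.
# Suppose 1,000 memory-loss patients are tested; 100 later develop Alzheimer's.
true_pos = 100   # tested positive and developed the disease
false_neg = 0    # tested negative but developed the disease (none: 100% sensitivity)
false_pos = 270  # tested positive but never developed the disease
true_neg = 630   # tested negative and never developed the disease

sensitivity = true_pos / (true_pos + false_neg)  # share of future cases caught
specificity = true_neg / (true_neg + false_pos)  # share of non-cases correctly cleared
ppv = true_pos / (true_pos + false_pos)          # chance a positive result is real

print(f"sensitivity: {sensitivity:.0%}")               # 100%
print(f"specificity: {specificity:.0%}")               # 70%
print(f"positive predictive value: {ppv:.0%}")         # 27%
```

With these hypothetical figures, the test never misses a future patient, yet nearly three out of four people it flags would never get the disease. That is exactly the gap the 100% headline figure leaves open.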

The current thinking is that Alzheimer’s drugs will work best in people with very early stages of the disease, or who have mild cognitive impairment and are at risk for getting Alzheimer’s. How to identify those patients for clinical trials has been a big question. If those patients are identified using a test that also snares too many people who will never go on to get Alzheimer’s, patients in drug company trials could be exposed to potentially dangerous chemicals for years with no possibility of benefiting from them. It is not a far-out concern. Eli Lilly last month halted a trial of one of its Alzheimer’s drugs because it appeared to make the disease worse.

Research into predictive tests is good and absolutely should continue. But researchers need to make sure the tests are fully validated in thousands of patients before they are used to make predictions about the futures of real people.
