Study on genetics of longevity comes under scrutiny

A study published in the prestigious journal Science earlier this month, suggesting that genes may hold a key to living to 100 or older, has since come under criticism from experts in the field of genetics. The study, led by Paola Sebastiani and Dr. Thomas Perls of the Boston University School of Public Health and School of Medicine, respectively, used genetic analysis to identify 150 gene variants that, taken together, predicted with 77% accuracy whether people would live to be centenarians.

The findings were widely reported — by TIME, the New York Times, the Los Angeles Times, and elsewhere — as yielding clues to the secrets of long life, and potentially paving the way for genetic tests for longevity. Yet, skepticism from colleagues in the field of genetics has since given way to more vocal criticism of the study, the editorial process that led to its publication, and the media environment that put it in the headlines.

After the paper was published, some fellow genetics researchers began scratching their heads. As they told Newsweek at the time, the small sample size and the varying technologies used to analyze the DNA could have undermined the results (which, for some, were surprising to the point of incredulity), but they couldn’t put their finger on exactly what the flaw might be. Then, pursuing the idea that the use of different technologies — specifically, different DNA-analysis chips — could sully the data, Kári Stefánsson, an Icelandic genetics researcher who founded deCode Genetics, took a closer look at which chips had been used for the study. He homed in on one known as the 610-Quad, with which he has experience. As Mary Carmichael first wrote for Newsweek:

“[Stefánsson] says [the 610-Quad] has a strange and relevant quirk regarding two of the strongest variants linked to aging in the BU study, called rs1036819 and rs1455311. For any given gene, a person will have two “alleles,” or forms of DNA. In the vast majority of people, at the rs1036819 and rs1455311 locations in the genome, these pairs of alleles consist of one “minor” form and one “major” form. But the 610-Quad chip tends to see the wrong thing at those particular locations. It always identifies the “minor” form but not the “major” form, says Stefánsson — even if the latter really is present in the DNA, which it usually is. If you use the error-prone chip in more of your case group than your control group — as the BU researchers did — you’re going to see more errors in those cases. And because what you’re searching for is unusual patterns in your cases, you could very well mistake all those errors (i.e., false positives) for a genetic link that doesn’t actually exist.”
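The mechanism Stefánsson describes — a chip-specific genotyping error applied unevenly to cases and controls masquerading as a real genetic association — can be illustrated with a small simulation. This is a hypothetical sketch with made-up numbers, not the study’s actual data, chips, or analysis pipeline:

```python
import random

random.seed(42)

N = 1000          # individuals per group (hypothetical)
TRUE_MAF = 0.20   # true minor-allele frequency, identical in both groups
MISCALL = 0.10    # chance the faulty chip reads a major allele as minor

def observed_minor_count(n, miscall_rate):
    """Observed minor-allele count for n individuals (2 alleles each)."""
    minor = 0
    for _ in range(2 * n):
        is_minor = random.random() < TRUE_MAF
        # the error-prone chip sometimes reports a major allele as minor
        if not is_minor and random.random() < miscall_rate:
            is_minor = True
        minor += is_minor
    return minor

cases = observed_minor_count(N, MISCALL)    # group run on the faulty chip
controls = observed_minor_count(N, 0.0)     # group run on an accurate chip

freq_cases = cases / (2 * N)
freq_controls = controls / (2 * N)
print(f"observed minor-allele frequency: cases={freq_cases:.3f}, "
      f"controls={freq_controls:.3f}")
# the true frequencies are identical, yet the cases look "enriched"
# for the variant -- a spurious association created by the chip error
```

Because the miscall rate only inflates the group run on the faulty chip, the cases appear to carry the variant more often even though the underlying allele frequencies are the same — exactly the kind of false positive Stefánsson warns about.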

As the L.A. Times health blog has since reported, the researchers likely used different analysis chips “reportedly because the tool they used at the beginning of the study was taken off the market midway through, so they had to switch to a comparable but not identical product.” Additionally, the blog points out, DNA information for the older participants and for the younger control subjects was collected differently, an inconsistency that may undermine the findings.

Yet, despite the criticism and potential study flaws, the authors remain confident that the broad findings will hold upon further analysis. As Perls told the New York Times in an email, the authors had been “made aware that there is a technical error in the lab test” used for some participants and that they were reviewing the findings, but initial results of the review suggest that “the apparent error would not affect the overall accuracy of the model.”

Still, critics of the study questioned not only the results, but the editorial process that led to its publication. As David Goldstein, a geneticist at Duke University, told the New York Times:

“I think it is very unlikely indeed that the findings in the Science paper are correct, or even mostly correct… I am pretty surprised that Science carried it.”

Goldstein didn’t reserve his condemnation for the study or the journal, however, but raised a broader question about how the media covers scientific research. As he told the Times:

“And I think this also raises an interesting, more general point, which is how the press ought to handle stuff when there are serious questions about the security of the finding.”

The New York Times summed up the multi-tiered nature of the problem this way:

“Journal editors know that press coverage can burnish a journal’s reputation. Scientists, in turn, like to have their work cited in the news media because it helps draw attention to their fields and raise money. And science journalists, competing for space with political and sports news, welcome astounding claims without always kicking the tires as hard as necessary. These factors sometimes combine to give substantial publicity to scientific claims that may not fully deserve such attention.”

What do you think? Does the pressure to publish — and make news — create an atmosphere for hurried or less rigorous science? And are we, as health reporters, too credulous of studies from major, reputable journals?