Research leaves consumers in a quandary

Some health studies carry more weight than others

Dec. 1, 2012

Sam Washington processes blood samples for a drug study before putting them in the freezer at the Greenville Hospital System Cancer Treatment Center. / Heidi Heilbrunn/Staff


Along with her grits, Fay Hart has an egg for breakfast every morning.

Even though a new study says yolks may be as bad as smoking for the cardiovascular system, the Greer woman says that doesn’t make sense to her.

“I just don’t buy their conclusions,” she says. “I know people going on 100 years old and have eaten eggs all their lives. There are a lot of question marks to all of this.”

Indeed. Over the years, research has alternately painted eggs as a healthy source of protein or a heart attack in a shell.

And the cholesterol in eggs is hardly the only issue bedeviled by conflicting research.

Hormone therapy was supposed to protect women from heart disease. Then researchers said it doesn’t.

Omega-3 fatty acid supplements were thought to reduce the risk of heart attacks. But according to a recent article in the Journal of the American Medical Association, an analysis of existing research concludes that they don’t.

And after years of reports that gum disease can lead to cardiovascular problems comes new research from the American Heart Association that refutes that claim.

All this back and forth has left Hart and others a bit suspicious. And even experts say consumers are right to take some research with a healthy dose of skepticism.

Confusion reigns

There are different types of studies and they are of varying quality, says Bill Hanage, associate professor of epidemiology at the Harvard School of Public Health.

In general, says Dr. Jerry Youkey, dean of the University of South Carolina School of Medicine-Greenville, there are epidemiological studies and clinical trials.

In epidemiological studies, researchers look for common factors to explain why one group of patients has some kind of condition and another group doesn’t, he says. Like eggs and heart disease.

Clinical trials typically compare different therapies, or a therapy against a placebo, in two groups of patients with a common disease to see which works better, he says.


Though there can be problems with both, Youkey says, the real culprit confusing the public is epidemiological studies.

“The amount of stock people put in them is sometimes unfortunate,” he said. “I’ve told medical students here to try and remember everything they can, but to recognize that probably half the things that are dogma now will be proven ineffective in the next decade.”

Studies that look backward, or retrospective studies, hold the least weight, especially if they involve only one institution, says Dr. Hal Croswell, director of clinical research at Bon Secours St. Francis Health System, who focuses on finding new cancer treatments.

“They usually have a memory-recall bias,” he says, “for example, asking individuals to remember how many eggs they ate a day.”

As a result, he says, drawing conclusions that implicate a causal effect is difficult.

Study bias

Other types of bias also come into play.

Sample size, or the number of participants in a study, is important, says Croswell. And more is generally better when trying to determine whether a drug is effective.

Using a biased group is also problematic, says Hanage, citing the yolk study.

Published in the August issue of the journal Atherosclerosis, it concluded egg consumption had about two-thirds the risk of smoking in development of clogged arteries, and suggested people with a risk of cardiovascular disease avoid eating egg yolks.

But Hanage says it focused on people over 60 with cardiovascular risk factors.

“That is already a very biased sample,” he says. “This doesn’t mean the work is completely wrong or they haven’t found out anything interesting. But you have to think about the reason they did that ... and you need to be careful about applying that to different age groups.”

“Many studies are underpowered from a size standpoint,” adds Youkey, “and ... you may be introducing selection bias simply by looking for what one group has in common compared to the other group.”

Other forms of bias include how the study population is defined, how a drug or device is used, and the statistics used to measure the response, all of which can impact the validity of the research, Croswell says. And studies often have variations in all of those aspects, he says.


“If one study uses all breast cancer patients as opposed to only breast cancer patients with a certain molecular subtype or certain amount of disease,” he says, “you have to weigh those studies differently.”

Confounding factors are also vital to consider when reviewing studies, Hanage says. A classic example is research into the health effects of eating granola, when chances are that granola eaters are already more fit and less likely to smoke or drink to excess.

“You need to realize there are other factors going on that they’ve not taken into consideration that are probably just as likely to be responsible for the differences as what they’ve chosen,” says Youkey.

“Scientists know this and try to control for it and that should be reported,” Hanage says. “But it’s not always reported.”

The egg yolk study authors acknowledged that their theory needs further testing, with more details about diet and other possible confounders.

Who’s funding the study is something else to look for.

“If a study says that all kinds of dangerous bacteria live on a keyboard, and it’s funded by a company that markets things you use to clean keyboards,” says Hanage, “bear in mind that company might stand to benefit.”

Hart says that’s something she always takes into account.

“People who are putting studies out believe it, and want you to believe it,” she says. “But there’s a bottom line called money. And I read something discriminately and question ... who funded the research.”

Publish or perish

Pressure to publish can also influence the research that is reported, says Dr. Tom Diller, vice president for quality and patient safety at Greenville Hospital System. It can result in the release of many smaller articles rather than larger encompassing ones or in some low-quality research efforts, he says.

“In the academic world, there is a tremendous pressure to publish. People’s careers are built on that,” he says. “There’s also an explosion of journals and portals to publish that stuff. So some of the research maybe isn’t as good or not as robust as it should be.”


And the media, confronted with hundreds of research papers, has a tendency to report on studies of interest to the public, and some may not be valid, Diller says.

Hanage says studies with interesting-sounding results are the ones most likely to be pitched to the media, and academic journals are more likely to publish them for the same reason.

“In some cases,” he says, “that is stuff which is just wrong, or the interpretation is wrong, or it’s taken too far.”

Few studies are perfect, says Croswell, and most findings need to be validated and examined in the appropriate context.

“Drawing conclusions from a lot of these epidemiological studies, with all the variables involved, gets confusing for everybody,” he says.

Hart, a psychotherapist, AARP volunteer and advocate for the elderly, says she’s particularly disquieted about the impact some of the research has on senior citizens.

“I’m concerned about all the conflicting information that we have,” she says. “It may lead a large group of people down a rosemary path.”

But some research is inherently better than others, says Hanage. A meta-analysis, for example, looks at all the existing studies, ranks their validity, and draws a conclusion based on that evidence, he says.

Some studies look to find effects over many years, or go toward providing evidence that will further secure earlier findings, Croswell says.

And the holy grail of studies, the randomized double-blind controlled trial, in which a therapy is given to only some participants and neither patients nor researchers know which ones, requires a lot of funding and broad scientific thinking, he says, but carries more weight.

“If you have a double-blind randomized multi-site clinical trial, it oftentimes can settle a question, such as whether to use cholesterol medicine or not,” adds Diller. “But those things are expensive and rare.”

Emerging therapeutic studies in genomics and proteomics, says Youkey, will allow science to identify individual patients or groups of patients and develop therapies likely to work specifically for them.


Until recently, he says, a treatment might be found to work in 30 percent of patients, which means it doesn’t work in the other 70 percent. But because the 30 percent can’t be identified in advance, everyone is treated and exposed to potentially toxic side effects.

“That has been the state of the art, the best we have in our lifetime,” he says. “But as we start being able to identify patients by genome or proteome, we can identify the 30 percent this will work on and we can do trials on smaller cohorts of patients with much better outcomes. That’s one of the most exciting things on the horizon in health care.”

A piece of the puzzle

But while the public tends to look at one study as the definitive work, scientists think of it as one more piece of the puzzle, says Diller.

“The reporting of individual papers is sometimes problematic. And you rarely can look at one study and say, yeah, now we know what to do,” he says. “There may be 100 articles that say eggs are good and another 100 that say they’re bad. The reality is it’s only over an extended period of time we start to get an idea.

“And relative to eggs,” he adds, “the body of evidence suggests they’re probably good for you.”

Youkey agrees that physicians and scientists typically view study findings more contextually. And he advises consumers to ask themselves if what they’re reading passes the common-sense test.

“Read them. Find them interesting,” he says. “And realize that there are some things that over time have proven absolutely legitimate and some that haven’t.”

Hanage says the most useful thing for the public to remember is to think about how many people are in the study, what ax the researchers may have to grind, whether the sample is biased in any way, and whether confounding factors were controlled for.

“And once you’re pretty sure they’ve done all those things and been responsible,” he says, “you can start paying it heed.”

As for Hart, she says her breakfast routine won’t change.

“I’ll have my egg every morning. And my grits,” she says, “no matter what the studies have said.”