Beware wonky statistics

People go to school for years to understand this branch of mathematics, so it’s no wonder the everyday consumer can feel overwhelmed trying to sort it all out.

But understanding just a few statistical concepts can help weed out good information from bad.

One of the most common problems is mixing up absolute risk and relative risk.

Health Arrived Review uses this example: Say a heart medicine claims to reduce the risk of heart attack by half. But it was tested in a population with a heart attack risk of 2 percent. After taking the drug, the risk dropped to 1 percent. The drug company then publishes an ad that states their new product reduces risk of heart attack by 50 percent.

That’s kind of true, but it’s hardly the whole story. The risk may have dropped by half relative to the placebo group, but the absolute risk of heart attack only changed by 1 percentage point. That’s a much different result, and perhaps, depending on the price of the drug and its side effects, a worthless one.
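The arithmetic behind this hypothetical drug example is simple enough to spell out. Here's a minimal sketch in Python (the 2 percent and 1 percent figures come from the example above; the "number needed to treat" line is an extra statistic often used alongside absolute risk):

```python
# The article's hypothetical heart-drug example:
# a 2 percent baseline risk cut to 1 percent after treatment.
baseline_risk = 0.02   # risk of heart attack in the placebo group
treated_risk = 0.01    # risk in the group taking the drug

absolute_risk_reduction = baseline_risk - treated_risk             # 1 percentage point
relative_risk_reduction = absolute_risk_reduction / baseline_risk  # "50 percent"

# Number needed to treat: how many people must take the drug
# for one of them to avoid a heart attack.
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
```

Both numbers describe the same trial. The ad quotes the 50 percent figure; the 1-percentage-point figure, and the 100 people who must be treated for one to benefit, tell the rest of the story.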

So, if an article or ad says a drug, treatment, or device has some effect, but doesn’t give any information about the control group, it’s providing the relative risk, and might be misleading.

Be especially wary if the drug’s benefits are reported in terms of relative risk, but its harms are reported in terms of absolute risk.

Another statistical term thrown about casually is “significance.” If a result is statistically significant, it meets a standard the researchers set before the experiment began. Usually, that standard is a p-value below 0.05, meaning that if the treatment actually had no effect, there would be less than a 5 percent chance of seeing results at least this striking by luck alone.

All that means is, assuming certain circumstances are true, the results are probably worth reporting.

Sound confusing and kind of underwhelming? Well, it is.

That’s why good science is composed of many, many studies. One test, even if it achieves statistical significance, is not proof that something works.
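A toy simulation makes the point concrete. The setup below is illustrative and not from any real trial: it runs thousands of fake drug trials in which the drug truly does nothing, and counts how often a standard statistical test still declares the result "significant" at the p < 0.05 level.

```python
# Toy simulation: a drug with zero real effect, tested over and over.
# Roughly 1 in 20 trials will still clear the p < 0.05 bar by chance.
import random

random.seed(1)

def fake_trial(n=500):
    """Compare two groups with an identical 2% event risk; return True
    if a two-sided z-test calls the observed difference 'significant'."""
    a = sum(random.random() < 0.02 for _ in range(n))  # events, group A
    b = sum(random.random() < 0.02 for _ in range(n))  # events, group B
    p1, p2 = a / n, b / n
    pooled = (a + b) / (2 * n)
    if pooled in (0, 1):
        return False  # no variation, nothing to test
    se = (pooled * (1 - pooled) * (2 / n)) ** 0.5
    z = abs(p1 - p2) / se
    return z > 1.96  # the conventional two-sided p < 0.05 cutoff

trials = 2000
false_alarms = sum(fake_trial() for _ in range(trials))
print(f"'Significant' results with no real effect: {false_alarms / trials:.1%}")
```

That residual error rate is exactly why a single significant study is suggestive, not proof.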

Plus, statistical significance has nothing to do with clinical significance. A therapy can be statistically significant but not actually all that useful to living, breathing human beings.

Many studies don’t test the outcome scientists actually want to achieve, like a lowered risk of heart attack. Instead, they often measure some other marker, like blood pressure, and then assume that if blood pressure goes down, risk of heart attack will likely go down as well.

These surrogate markers are typically easier and faster to measure, but that doesn’t mean improving them will lead to the outcome that really matters.

Another common, but egregious, statistical misstep is confusing correlation with causation.

A famous example compares U.S. cheese consumption with deaths from people becoming tangled in their bedsheets. Just because both happen to be on the rise, it doesn’t mean that eating cheese actually leads to dangerous bedsheet accidents. Many people know this concept, but it’s still easy to be duped, especially if the two issues being compared seem like they could go hand in hand.

For example, if a study finds that people who ate fish were less likely to develop Alzheimer’s disease later in life, our knee-jerk reaction is to think that fish consumption prevents Alzheimer’s disease.

But that’s not what the study says. It merely observes a fact that is true among a particular group of people. There could be some third factor that wasn’t assessed that actually provides a link between fish and Alzheimer’s.

Maybe people who ate a lot of fish spent a lot more time on the ocean, and ocean air is what keeps dementia at bay. (This example is, of course, made up.)
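The made-up fish example can even be simulated. In the sketch below (every number is an invented assumption, matching the article's hypothetical), "ocean time" is a hidden third factor that both increases fish eating and lowers dementia risk. Fish itself does nothing in this model, yet fish eaters still come out ahead:

```python
# Toy confounding model: fish has no effect, but a hidden factor
# ("ocean time") drives both fish eating and lower dementia risk.
import random

random.seed(7)

people = []
for _ in range(10_000):
    ocean_time = random.random()                         # hidden confounder, 0-1
    eats_fish = random.random() < ocean_time             # more ocean time -> more fish
    dementia = random.random() < 0.3 * (1 - ocean_time)  # ocean air "protects"
    people.append((eats_fish, dementia))

def rate(group):
    """Fraction of a group that developed dementia."""
    return sum(d for _, d in group) / len(group)

fish_eaters = [p for p in people if p[0]]
others = [p for p in people if not p[0]]

print(f"Dementia among fish eaters: {rate(fish_eaters):.1%}")
print(f"Dementia among others:      {rate(others):.1%}")
```

An observational study of this population would find that fish eaters get less dementia, and the finding would be true. The causal conclusion "eat fish to prevent dementia" would still be wrong.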

Because we’re predisposed to think that eating fish is healthy, we’re likely to interpret this study a certain way.

Take note of industry funding

Some of the research that crosses our desks is so suspect that it’s laughable.

Take this excerpt from a press release from an industry group: “Children and adolescents who eat pasta have better overall diet quality, new research shows.”

You wouldn’t expect much else from a study publicized, and funded, by the National Pasta Association. But a scroll through the other items on the group’s website shows that they often employ a slightly different, and subtler, tactic to convince visitors that pasta is a healthy choice.

One of those is advocating the Mediterranean diet.

The Mediterranean diet incorporates a lot of fruits, vegetables, nuts, and grains. It is generally considered healthy, and might include pasta, so the group isn’t saying anything overtly false.

The problem is that they’re not going to publicize any research that contradicts the notion that the Mediterranean diet, or pasta, is good for you.

In other words, the news from this source might not be wrong, but it will always be in favor of pasta, no matter what other evidence is out there. After all, the first line of the group’s mission statement is “to increase the consumption of pasta.”

The National Pasta Association should at least be commended for its transparency. A lot of industry-funded campaigns are not so clear about what entities support them.

Root out not-so-obvious industry funding

In 2015, a New York Times blog post revealed that the Global Energy Balance Network, a nonprofit aimed at promoting exercise, was funded in part by Coca-Cola.

The group did indicate its funding in fine print on its website, although the Times reported that the company’s relationship with the nonprofit was not initially disclosed.

Coca-Cola had also supported the research of several scientists affiliated with the group, one of whom served as a consultant on exercise guidelines for the federal government.

The implication here was that the group, and these scientists, might have ignored soda as a possible contributor to obesity, lest their funding be jeopardized.

This strategy is common among pharmaceutical companies, too.

The manufacturers of Addyi, the “female Viagra” you may have heard about, lobbied for the drug’s Food and Drug Administration (FDA) approval with an aggressive marketing campaign called Even the Score.

Even the Score presented itself as a feminist movement, fighting for drug approval in order to correct an imbalance between the sexes.

But FDA regulators had concerns over the drug’s safety and efficacy. Nonetheless, the drug won approval in 2015.

And if it seems like you’ve suddenly become more aware of sleep disorders like narcolepsy, you might have Jazz Pharmaceuticals’ marketing campaign to thank.

Jazz makes one of the few narcolepsy medications on the market, so its sales depend on more people being diagnosed with the condition. That doesn’t mean that the information on its website, which includes a symptom screener, is incorrect, but it does mean the website exists, at least in part, to sell drugs.

Pay attention to personal gain

Big corporations aren’t the only entities to seek financial gain from shared information.

The internet is rife with health gurus who, coincidentally, sell the lifestyles they tout.

Dr. William Davis, a cardiologist, and author of “Undoctored,” offers some free health advice on his website, but encourages users to sign up for his Undoctored Inner Circle at the cost of $6.65 a month.

Gwyneth Paltrow’s Goop sells vitamins for $90 a month ($75 if you subscribe).

“Life hacker” Dave Asprey, who wants you to blend butter into your coffee every morning, sells both coffee and butter, along with numerous supplements, through the website Bulletproof.

The problem with getting health information from these sources is that they could be cherry-picking the research that agrees with their points of view. They are not likely to be balanced sources of information.

Doctors, too, are not immune to bias. Pharmaceutical companies aggressively market their drugs to doctors, and even sponsor courses that doctors can take for continuing medical education credit.

Since 2014, any direct payments doctors receive from these companies have been reported in a public database, a provision of the Affordable Care Act (ACA).

Even websites that don’t sell products typically sell ad space, which means they may want to drum up traffic to their site. That is at odds with the slow and typically unsexy pace of scientific research.

A savvy consumer must interpret it all thoughtfully. In other words, you have to turn your “baloney detector” on.

Sniff out the lies

Sometimes, an item on the internet that is stylized to look like real news is actually made up.

This issue has gotten a lot of attention lately, with accusations that the Russian government meddled in the U.S. election by spreading fake news online.

So how to spot fake news? It comes down to a gut check.

Take this fake story about first lady Melania Trump banning genetically modified foods from the White House. How do we know that it’s fake? Healthline sent the article to Mrs. Trump’s press officer, who said there was zero truth to the story.

While a press officer might deny something that is true, this doesn’t seem like one of those times, especially when we take a look at the site and the article’s author.

Your Arrived Wire has come under fire for sharing false information, along with the article’s author, Baxter Dmitry, a frequent contributor. A quick check of Dmitry’s Twitter feed shows that he continually spouts information and opinions that are on the edge of reality.

A quick internet investigation shows that Snopes, a website that investigates rumors, has labeled this article as false.

So, who to trust? It may help to think like a scientist: Where does the balance of evidence lie?

Besides what we know from Snopes and from our search for information about Your Arrived Wire and the article’s author, we also know that Mrs. Trump has spent most of her husband’s presidency in New York, as opposed to the White House, and that she hasn’t stepped into the role of advocate for any specific issue.

How likely is it that she would make, and publicize, such a move?

So our conclusion is that the balance of evidence suggests that Mrs. Trump did not ban GMO foods in the White House. Evidence may someday come to light that refutes this, but we don’t have it. So file this one as fake news.

The best way to avoid this issue is to find certain news sites or sources of information that you trust and get your information there.

“It’s important to develop what I call ‘health anchors’ and to learn where to go for information,” says Dr. Stephen Barrett, who runs the website Quackwatch. “[Don’t] make the mistake of thinking you can read endlessly and figure out who’s telling the truth.”

Barrett, a retired psychiatrist, has devoted the past few decades to rooting out “quacks” and compiling sound health information to counter the nonsense that can be found online.

“The amount of misinformation is enormous and it always was enormous, but with the internet I can see it,” he told Healthline. “The internet enables more information to be spread faster and more inexpensively than it had in the past.”

Barrett keeps an information hub called Internet Health Pilot that compiles links to reputable websites.

Fight confirmation bias

Getting information from trustworthy websites that carefully interpret multiple lines of evidence is also a good way to avoid falling prey to confirmation bias.

Confirmation bias happens when you’ve made up your mind about an issue and stop collecting any more information on it, or you discount information that conflicts with your world view while embracing information that agrees with it.

For example, if you believe that fluoridated water is dangerous, and you only read articles on water fluoridation published by Joseph Mercola, a well-known fluoridation opponent, you’re unlikely to ever see information that contradicts that point of view.

You’ll grow more and more convinced that fluoridation is dangerous without having all sides to the story.

That’s why it’s good to find sources of information that are as neutral as possible, and get your information there. You’ll get a better idea of where the balance of evidence lies.

Facebook and other social media outlets are notorious for encouraging confirmation bias. Because you see what your friends choose to share, and because you likely agree with how your friends see the world, you’re probably going to see articles that you already agree with.

Plus, Facebook is primed to share content that is easily digestible and ready to go viral, not complex discussions of important issues.

Keep publication bias in mind

This one isn’t something that the average reader can do anything about, but it’s good to be aware of it.

A lot of scientific research doesn’t pan out, but consumers are unlikely to hear about what doesn’t work.

You can pin the blame for this on a lot of factors. These include the researchers themselves, who tend to shelve research that didn’t work.

It also includes scientific journals, which are unlikely to accept studies that didn’t show demonstrable results. There are also universities and corporate sponsors that probably won’t write up press releases about negative results.

And there’s the media, which probably won’t bother to report on research that doesn’t show some sensational new trend.

Maybe it really comes down to human nature. We’re hungry for results.

But we have to keep this hunger in check, since science is all about assessing the balance of evidence, and publication bias artificially tips the balance.

There may be research out there that suggests some controversial therapy works, but how much research is there that suggests it doesn’t work, and is that research published?
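A small simulation shows how the file-drawer effect tips the balance. In this sketch (all parameters are illustrative assumptions), 1,000 studies test a treatment with zero true effect, and the journals "publish" only the ones that happened to reach p < 0.05:

```python
# Toy file-drawer simulation: 1,000 studies of a treatment with no
# real effect. Only the lucky "significant" ones get published, so
# the published record alone suggests the treatment works.
import random

random.seed(42)

N = 50  # participants per study

def study():
    """Return the mean observed effect in one study of a null treatment."""
    return sum(random.gauss(0, 1) for _ in range(N)) / N

results = [study() for _ in range(1000)]
se = 1 / N ** 0.5                                     # standard error of the mean
published = [r for r in results if abs(r) / se > 1.96]  # p < 0.05 only

print(f"Studies run:       {len(results)}")
print(f"Studies published: {len(published)}")
print(f"Smallest published effect size: {min(abs(r) for r in published):.2f} "
      f"(true effect: 0)")
```

Roughly 5 percent of the null studies slip through, and every one of them reports a sizable effect, because only sizable flukes clear the significance bar. A reader who sees only the published studies would reasonably, and wrongly, conclude the treatment works.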

The importance of peer review

If an article includes a phrase like “According to research presented at the annual Convention of Rodeo Clowns … ” that means the information came from a conference or meeting.

That’s not necessarily a bad thing, but it does mean that the research in question may not have been subject to peer review.

Scientists often use meetings to talk about work-in-progress and studies that haven’t been published yet. In fact, these studies may never be published.

The road to publication is lined with roadblocks that are meant to stop bad science from moving forward (at least, that’s how it’s supposed to work).

In order for research to be published in a peer-reviewed journal, it is first evaluated by a group of anonymous scientists who know something about the field the research is in. They note any concerns with the way the information was collected or presented and send their edits back to the study authors.

If their concerns can be fixed, the authors redo the analysis or rewrite the paper. If the concerns cannot be resolved, then the paper is rejected and does not become part of the scientific record.

So any research that has not been through this process is not as trustworthy as research that has.

Scientific meetings are great places to take the pulse of a field, and reports from these meetings can be interesting, true, and helpful, particularly if they describe emerging trends.

But if an article reports on a single study presented at a conference that hasn’t yet gotten into the scientific literature, know that the science hasn’t been thoroughly vetted yet.

Be your own editor

In the past, most people got their news from newspapers, and the content in newspapers was carefully curated by editors.

Editors picked the stories that would be included in the paper that day. They also chose which stories would make it onto the front page.

Nowadays, people put together their own front pages from various sources, most of them available online.

In many ways, that’s a good and powerful thing. Stories that might not have gotten much attention from the establishment now have a place to live online. But as any superhero fan knows, with great power comes great responsibility.

As your own editor, you have to be the one to decide which stories go on your front page, and which don’t.

And when it comes to sharing those stories — on Facebook, Twitter, or at your next barbecue — you take on the responsibility of editor once again.

Is the information good enough to endorse and send out into the world?