Recently, amid much brouhaha, the journal Science retracted an article that may have contained fabricated data. Unfortunately, this was only the most recent in a long line of retractions by reputable scientific publications. They add fuel to the anti-science fire that’s raging in the United States, at a time when we need science more than ever to deal with the world’s problems.

The institutions governing publication in many scientific fields need reform; likewise, journalists need to value research quality over sensationalism. But with academics and journalists alike often giving shoddy science a pass, it has become ever more urgent that we, the readers, start applying higher standards.

In this age when misinformation can go viral in an instant, basic scientific literacy has never been more valuable. Fortunately, you don’t need to be a scientist to achieve it.

For students in my statistics course, I highlight four fundamental tenets of sound empirical research. Virtually anyone can figure out if a study can be trusted, simply by asking:

1) Were participants recruited at random?

2) Were they randomly assigned to a “treatment” condition (a particular diet, say), either by design or by historical accident?

3) Did large numbers of people participate?

4) Did researchers explain that even “statistically significant” effects may arise by chance and not actually exist in the population at large?

Let’s test one recent study — the chocolate diet, exposed as a hoax this spring.

To expose irresponsible media coverage of diet research (and our gullibility), a journalist and some documentarians implemented a poorly designed study with the intent of grabbing headlines. The study implied that eating dark chocolate caused weight loss. Numerous news outlets picked up the story, often running pictures of slender women eating chocolate.

However, like so many candy bars, the findings were junk. The answers to those four questions show why.

First, the study should have been based on a randomly selected group of participants; instead, it relied on volunteers recruited on Facebook.

Second, the chocolate diet should have been randomly assigned, to help rule out the possibility that other factors associated with eating lots of dark chocolate (say, economic status) caused the effects. Random assignment was in fact used. Yet the small number of study participants (just 15, a fail on question 3) means that chance differences between the dark-chocolate dieters and the other participants probably influenced the findings.
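To see why so few participants is a problem, consider a toy simulation (the numbers here are hypothetical, not the hoax's actual data). Draw two groups from the very same population, so the true difference between them is zero, and see how far apart their average weights land by luck alone:

```python
import random

random.seed(3)

def mean(xs):
    return sum(xs) / len(xs)

def avg_chance_gap(group_size, trials=2000):
    """Average chance difference in group means when both groups are
    drawn from the SAME population (so the true difference is zero)."""
    total = 0.0
    for _ in range(trials):
        # Hypothetical body weights in kilograms: mean 70, spread 10.
        a = [random.gauss(70, 10) for _ in range(group_size)]
        b = [random.gauss(70, 10) for _ in range(group_size)]
        total += abs(mean(a) - mean(b))
    return total / trials

print(f"8 per group:   average chance gap {avg_chance_gap(8):.1f} kg")
print(f"500 per group: average chance gap {avg_chance_gap(500):.1f} kg")
```

With groups of around eight people, the accidental gap between groups is measured in whole kilograms; with hundreds of participants it shrinks to a fraction of a kilogram. A tiny study can therefore "find" a diet effect that is nothing but the luck of the draw.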

Fourth, and finally, the researchers should have explained the limitations of their “statistically significant” findings. In particular, the study didn’t look at only one outcome — weight loss — but rather tracked many outcomes that didn’t appear to respond as favorably. This kind of “data fishing” makes it more likely that a study’s findings arise purely by chance.
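The danger of data fishing can also be shown with a simulation (again with made-up numbers, not the hoax's data). Suppose a study measures 20 unrelated outcomes on two small groups drawn from the same population, so no outcome has any real effect; how often does at least one outcome still cross the usual "statistically significant" threshold of 5 percent?

```python
import random

random.seed(7)

def mean(xs):
    return sum(xs) / len(xs)

def chance_p_value(n_a=8, n_b=7, n_perm=500):
    """Permutation-test p-value for a difference in means between two
    groups drawn from the SAME population (the true effect is zero)."""
    a = [random.gauss(0, 1) for _ in range(n_a)]
    b = [random.gauss(0, 1) for _ in range(n_b)]
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            hits += 1
    return hits / n_perm

def study_finds_something(n_outcomes=20):
    """One hoax-style 'study': many outcomes, 15 participants, no real effect."""
    return any(chance_p_value() < 0.05 for _ in range(n_outcomes))

runs = 100
hit_rate = sum(study_finds_something() for _ in range(runs)) / runs
print(f"Share of no-effect studies reporting a 'significant' result: {hit_rate:.0%}")
```

Each individual outcome has only about a 1-in-20 chance of a false alarm, but across 20 outcomes the odds that something comes up "significant" climb past one in two. Report only the outcome that happened to hit, and a study with no real effect looks like a discovery.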

The chocolate hoax flunked three of the four tests of sound empirical research. Sadly, research on diet rarely passes. Yet it is widely devoured by the media and the public, probably because so many of us are hungry for the secrets to living forever and looking great.

Today, many of the world’s most pressing problems can only be solved through science; basic scientific literacy is more urgently needed than ever. If we could all just test research with those four questions (who participated, how the treatment was assigned, how many people participated, and potential limits on the effect), we would be less likely to fall victim to untrustworthy research and more likely to make sound, scientifically informed choices for our lives.

Elizabeth U. Cascio is associate professor of economics at Dartmouth College and a Dartmouth Public Voices Fellow. She wrote this for this newspaper.