If you’ve read a recent science- or health-related article, you are probably aware of one inescapable fact: You’re dying. Or you’re lucky you’re not already dead.

It’s true. You are dying. The fact that you are still alive is qualified only by the fact that you will, one day, die. It’s a fundamental fact of biology: For life to exist, the possibility of death must be present.

But simple mortality doesn’t sell. The 24-hour news cycle today requires reporters to produce pieces that not only are quickly digestible, but also drive Web traffic and increase readership. Proper scientific analysis is the first victim of limited resources and pressure on reporters to produce click-bait: Headlines are often exaggerated, and news releases are used in place of proper interviews with researchers. The result is easy-to-read stories that are neither properly vetted nor interpreted with the appropriate degree of curiosity and skepticism.

The contemporary demand for a trifecta of virality, speed and definitive fact results in fully written scientific articles built on incomplete science. Articles suggesting that something is killing you or keeping you alive often lack in substance what they make up for in a viral propensity exceeding that of any modern illness.

Same means, different ends

While science and journalism are similar in process — both are based in research and investigation — their final products are inherently different. Science moves slowly; most studies take years, if not decades, to complete, and even then, they are rarely truly complete. The objective of science is not to arrive at definitive facts, but to constantly investigate uncertainties. Studies are rarely definitive, and articles that suggest definitive results should be met with at least a small degree of skepticism.

Journalists, on the other hand, are expected to deal primarily in definitive facts and to disseminate those facts swiftly. Contemporary journalism contends with a need for speed that conflicts with the way true scientific inquiry operates.

Thomas E. Patterson, author of Informing the News: The Need for Knowledge-Based Journalism, quotes Kevin Barnhurst, a communications professor at the University of Illinois-Chicago: “Although the vocabularies differ, the [journalistic and scientific] processes closely parallel each other. Both attend to occurrences out there, formulating guesses (which become events or hypotheses), both resolve issues to arrive at facts (or theories) and both seek to establish truth (or paradigm).”

So if the two are so similar, why is their merger so seemingly unreliable?

American author and journalist Walter Lippmann once wrote, “news and truth are not the same thing, and must be clearly distinguished.”

When we read science and health articles, there’s an implication that scientific research has uncovered some definitive and unchangeable truth. In reality, the purpose of scientific inquiry is to continuously break down existing truths to uncover new potential ones. Scientists often won’t provide definitive answers about their research because, given the limitations of their studies, their conclusions can often serve only as intelligent conjecture.

So when an article suggests that there is some substantial breakthrough that provides an answer to an age-old scientific mystery, readers must be skeptical. As scientists work to generate some theoretical truth, journalists attempt to translate that theory into something that resembles applicable information.

“Now the problem of securing attention … is a problem of provoking feeling in the reader, of inducing him to feel a sense of personal identification with the stories he is reading. News which does not offer this opportunity to introduce oneself into the struggle which it depicts cannot appeal to a wide audience,” Lippmann wrote.

This is one of several concerns regarding the way in which science articles are written. Scientists don’t “do science” for an audience. Journalists, more often than not, “do journalism” almost entirely to retain, maintain or acquire an audience.

Reading smarter

What better way do journalists have than to take meaty content dealing with health, death, life and limb, slather it in the generally unquestioned stickiness of statistics and present it to insatiable readers as hors d’oeuvres of gossipy goodness? When you read, “Doctors say coffee will make you live longer,” you likely find yourself devouring the next headline, “Scientists link chocolate to better brain function.”

Three days later, coffee in hand and a number of chocolate bars laid out before you like some ritualistic offering, you may find yourself sifting through headlines of a different nature: “Caffeine shown to cause tumors in mice,” or “Scientists discover toxins in 82% of chocolate.”

Inevitably and rightfully, doubt sets in, and distrust in science’s ability to make up its mind, or in journalists’ ability to discover theirs, blooms into full-blown indifference.

While there are reputable organizations that require dedicated science reporters to follow specific protocols to ensure that stories are as accurate as possible and can pass muster under high journalistic standards, there are also fewer major publications with science sections and dedicated science reporters. This leaves the majority of science-related content up to the discretion of often less prepared independent reporters, or general assignment reporters who lack specific training in handling complex scientific concepts.

According to the Columbia Journalism Review, in 1989, 95 weekly science sections appeared in newspapers. In 2005, that number dropped significantly to 34. And in 2012, only 19 weekly science sections remained in newspapers.

The reality is that many reporters rely solely on researchers’ news releases — marketing materials generally aimed at drawing attention, and thus potentially more funding, to a research project, not at sharing a newly discovered and definitive scientific truth.

When journalists use news releases to produce “news” articles, they are at greater risk of misinterpreting data. The quotes they use are sometimes taken straight from the releases themselves, not from personal interviews with experts in the field. The result is a skewed expression of what the study may actually be revealing, or a misinterpretation of whether its potential findings are truly generalizable or even consequential.

These releases are also often laden with purposely manufactured statistics that, for PR purposes, are genius. But for journalists and the public, who lack an understanding of the complexities of how they’re constructed, the statistics can be dumbfounding and thus impossible to ignore. Science reporting often goes wrong when it comes to statistics. Scientists use statistics to describe a probabilistic world, which becomes problematic in areas where probability and speculation don’t characteristically fit. While statistics give scientists the ability to understand and address large-scale patterns, the analysis requires both respect for the wide-scale potential of a statistic and skepticism of its reach.

The ability to generalize studies is important to note. In most cases, experiments are not initially conducted on humans. While conclusions drawn from experimentation on mice provide information that scientists can analyze in ways that may shed light on human health, these conclusions can’t always be immediately generalized and applied to a human population, at least not with the ease that most headlines would suggest.

Headlines themselves carry significant weight. Many readers will base their understanding of complex research material on a 10-word headline’s truncated summarization written not by the reporter who researched the story, but by an editor whose job is to draw attention to the piece. These constitute two different approaches to writing a headline, and it’s important to understand the distinction. Whereas one headline might read, “Doctors encourage drinking red wine every day for better health,” another will read, “Researchers find possible benefits in antioxidants commonly found in fermented beverages, most concentrated in red wine.”

But not all is lost

Not all science and health news is inaccurate or irresponsible. In some cases, articles are written extremely well and with great care, but they often don’t get the same dramatic attention. They report science in a way that leaves out the gimmick, and in doing so they make poor click-bait; they lack the sensational framing that makes the reader believe a 10-week study on two mice in a single lab sheds light on the origins of the obesity epidemic among urban youth.

It is also worth noting that misinterpretation isn’t always the issue. The apparent flip-flop of science news is sometimes the result of the quality of the record itself. Patterson writes in Informing the News: “The more precise the system of record, the more precise the news coverage.” To illustrate this point, the Harvard University professor cites obesity studies. Until the 1990s, journalists portrayed obesity largely as a personal issue and a result of genetics or eating disorders. In 1996, when the National Center for Health Statistics publicized findings suggesting a large body of systemic evidence on obesity (including data suggesting that half of all Americans were overweight and a quarter were obese), journalists began to frame obesity stories much differently; it began to be explained in “systemic terms.”

In other words, a shift in the availability of scientific data will inevitably shift the framing of journalistic works. As science progresses and greater sources of data and more robust records are produced, news articles will reveal differing frames on the same topic. This is another reason it is increasingly important to be skeptical of science articles that suggest definitive results to any study or research. Few well-seasoned professional reporters who are trained to handle scientific content will suggest that there are no alternative theories, no additional questions or no gaps in the research itself. The best science writing often leaves you with more questions than answers.

This, however, presents a challenge for news organizations, a challenge that many have opted not to undertake, instead opting out of science reporting altogether. Reputable news organizations are expected to maintain a certain position of factual certainty and to report science definitively — and often, by virtue of that, inaccurately — all while doing so quickly and in ways that suggest incomplete research is immediately applicable to daily life. When this fails, as it inevitably does, they must retract, repair or re-report findings. The risk of losing readers’ trust in their ability to tell the “truth” is perhaps too great in an already unstable news environment.

Lippmann’s writing speaks too well to this point: “The more passionately involved [the reader] becomes, the more he will tend to resent not only a different view, but a disturbing bit of news. That is why many a newspaper finds that, having honestly evoked the partisanship of its readers, it can not easily, supposing the editor believes the facts warrant it, change position. If a change is necessary, the transition has to be managed with the utmost skill and delicacy. Usually a newspaper will not attempt so hazardous a performance. It is easier and safer to have the news of that subject taper off and disappear, thus putting out the fire by starving it.”

Little debate surrounds the propensity of news organizations to promote content primarily to drive traffic. While the burden rests on journalists to provide substantive, well-vetted, intelligible scientific content that accurately conveys the progress of science without sensationalizing its impact, it is also increasingly important for readers to be able to distinguish true science news from fluff, filler and viral content.

In most cases, the solution is to navigate through science articles as a scientist would: with a great deal of curiosity and a responsible amount of skepticism.