Common sense tells us it’s impossible to evaluate the safety of a chemical without any data. We’ve repeatedly highlighted the scarcity of information available on the safety of chemicals found all around us (see for example, here and here). Much of this problem can be attributed to our broken chemicals law, the Toxic Substances Control Act of 1976 (TSCA).

But even for chemicals like formaldehyde and phthalates, which have been studied for decades, debate persists over what the scientific data tell us about their specific hazards and risks. Obtaining data on a chemical is clearly a necessary step for its evaluation, but interpreting and drawing conclusions from those data are equally critical steps – and arguably even more complicated and controversial.

How should we evaluate the quality of data in a study? How should we weigh data from one study against data from other studies? How should we handle discordant results across similar studies? How should we integrate data across different study designs (e.g., a human epidemiological study and a fruit fly study)? These are just a few of the key questions that must be grappled with when determining the toxicity or risks of a chemical. And they lie at the heart of the controversy and criticism surrounding chemical assessment programs such as EPA’s Integrated Risk Information System (IRIS).

Recently, a number of efforts have been made to systematize the process of study evaluation, with the goal of creating a standardized approach for the unbiased and objective identification, evaluation, and integration of the available data on a chemical. These approaches go by the name of systematic review.