Government authorities in the EU and U.S. continue to wrangle with how best to identify and regulate potential endocrine-disrupting chemicals (EDCs). Hoping to influence those deliberations, the Endocrine Society, an association representing endocrinologists and endocrine researchers, published a series of economic papers in 2015 and 2016 that estimated annual health care costs attributable to EDC exposure in the hundreds of billions of euros and dollars. Although these estimates were initially received with a fair amount of skepticism – both by scientists (see March 2015 BBC interview with Professor Richard Sharpe) and authorities, including the European Commission – they nevertheless received widespread media coverage and continue to be cited by activists in various public policy forums. For these reasons, we undertook a thorough and rigorous critique of the underlying methodology used to generate the cost estimates, which was published on 20 May 2017 in the influential, peer-reviewed journal Archives of Toxicology.

We found that the disease burden and cost estimates were as flawed and immaterial to public health decision-making as many experts first suspected. For example, the authors assumed causal relationships between putative exposures to EDCs and selected diseases, e.g., “loss of IQ” and “increased prevalence of intellectual disability,” but did not establish them through a thorough evaluation of the strengths and weaknesses of the underlying animal toxicology and human epidemiology evidence. Consequently, the assigned disease burden costs are highly speculative and should not inform weight-of-evidence deliberations in policy discussions aimed at protecting the public and regulating chemicals suspected of being EDCs.

Our analysis was structured to investigate the following aspects of the methodology employed to generate the cost estimates:

selection and roles of the steering committee and “expert” panels;

literature search and selection of underlying scientific studies;

weight-of-evidence analysis;

evaluation of animal toxicology evidence;

evaluation of human epidemiology evidence;

framework for assessing probability of causation;

how attributable fraction and exposure-response relationships were estimated and applied;

sources of and uses of biomonitoring data;

sources and uses of cost data; and

the cumulative effect of numerous assumptions inherent in each process step.
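For readers unfamiliar with the attributable-fraction step listed above, the following is a minimal illustrative sketch of the standard population attributable fraction (PAF) formula commonly used in burden-of-disease work; it is offered for context only, and we do not claim it is the exact formula the cost-estimate authors applied:

```latex
% Population attributable fraction (PAF): the share of disease burden
% that would, in principle, disappear if the exposure were eliminated.
%   p  = prevalence of exposure in the population
%   RR = relative risk of disease among the exposed
\[
  \mathrm{PAF} \;=\; \frac{p\,(RR - 1)}{1 + p\,(RR - 1)}
\]
% Attributable cases = PAF x total cases; multiplying by a per-case
% cost then yields an attributable cost. Any error or bias in p
% (e.g., from unrepresentative biomonitoring data) or in RR (e.g.,
% from a non-causal association) propagates directly into the final
% cost estimate.
```

The point of showing this is that each input to the formula is itself an estimate; the final cost figure can be no more reliable than the weakest of those inputs.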

We uncovered substantial and crippling flaws with the underlying methodology, including:

failure to use state-of-the-art systematic review methodology;

lack of transparency in reporting how the literature was searched and which studies were selected for review;

failure to achieve a balance of perspectives in selecting membership for the review panels;

deviation from best practices for applying the Delphi technique in an effort to gain consensus among panel members;

lack of serious critical discussion of the strengths and weaknesses of the individual studies relied upon;

failure to use a weight-of-evidence approach to assessing the evidence;

over-reliance on sparse and unrepresentative data for choosing dose-response relationships and on unrepresentative biomonitoring data for modeling the number of cases of disease attributable to EDC exposures; and

a flawed framework for assessing the probability of causation.

Since more than 75 percent of the total cost estimates for both the EU and U.S. were driven by the authors’ assessment that exposures to EDCs were causing lost IQ and increased prevalence of intellectual disability, we further scrutinized the underlying animal toxicology and human epidemiology studies they cited for these estimates. We found the evidence to be remarkably sparse and inconsistent.

We further noted that other scientific groups had also recently evaluated the same evidence base, using superior methods. In their own published findings, these groups reported that the “evidence” falls far short of what is required to establish a causal link between EDC exposures and lost IQ and intellectual disability. In a breach of standard scientific practice, the authors of the cost estimates neither cited those other published scientific reviews, nor explained why they arrived at different conclusions.

Although we didn’t examine in detail the underlying studies that generated the remaining roughly 25 percent of estimated total costs, we did note that other scientists are beginning to do so and have reported problems similar to the ones we exposed. Thus, the cost estimates in their entirety appear dubious at best and should not be given any weight in serious policy debates about EDCs.

Some readers may still have questions: What is the harm if the cost estimates are grossly exaggerated? Isn’t it better to be safe than sorry?

We argue, as have others, that everyone loses when the scientific method's demand for impartial objectivity is not met, as this series of economic papers demonstrates. Not only does such work do significant and lasting damage to the credibility of individual scientists and organizations; it further erodes public acceptance of science, which is already on shaky ground. Moreover, public policy choices based on poor science or on isolated findings from unreplicated studies, however well-intentioned, will have significant adverse consequences for individuals and society.

A free copy of the Archives of Toxicology critique is available for download before August 20, 2017. An English version of the University of Konstanz press release about the critique is also available.