Programs aren’t “magic bullets,” say researchers

Atlanta, December 16, 2016 – New research conducted by the U.S. Department of Health and Human Services (HHS) shows that many “evidence-based” sex education programs are not actually effective, even though they have been promoted as such. According to this in-depth evaluation, published in the American Journal of Public Health (AJPH), “most of the programs had small or insignificant impacts on adolescent behavior.” In fact, 80% of the students in these “evidence-based” programs fared no better and no worse than their peers who were not in the programs. Further, teens in some “evidence-based” programs were even more likely to begin having sex, more likely to engage in oral sex, and more likely to get pregnant than those who were not in the program.

The “evidence-based” programs evaluated were drawn from a list created under the “Teen Pregnancy Prevention” (TPP) program initiated in 2010. Schools and communities have been misled to believe that programs on this list offer some type of “magic bullet” solution with guaranteed success, and some have even been pressured to adopt programs only from this list. Nearly all the listed programs are contraceptive-centered, focusing on risk reduction, which has been problematic for communities that desire a more abstinence-centered approach emphasizing risk avoidance through sexual delay. Abstinence-centered programs also teach about the benefits and limitations of contraception, but always in the context of promoting sexual delay as the healthiest choice.

This latest research was prompted by the recognition that there were serious questions about the replicability of the so-called “evidence-based” programs. According to HHS researchers, all but one of the programs on the list had only a single study demonstrating effectiveness, often performed by the program developer rather than by an independent, unbiased researcher. Further, many of these studies showed only limited effectiveness (e.g., with boys but not girls, or at 3 months but not 6 months), and many were also not performed in a school setting. Thus, this latest research sought to determine whether these programs were actually effective when replicated as an independent study in “broader populations in real-world conditions.” After finding that most programs had small or insignificant impacts, the researchers stated that no program should be expected to be a “magic bullet” that is effective with anyone, anywhere. The researchers further concluded that, rather than picking a program from a list, “Communities need to spend more time selecting programs that are the best fit [based on community needs, organizational capacity, and intended outcomes] and ensuring quality implementation…thereby increasing the likelihood that the program will be effective.”