In 2007, the government earmarked £173 million (approximately $260 million U.S.) to train an army of new therapists. The money was allocated following an earlier report by Professor Richard Layard of the London School of Economics, which found that a staggering 38% of illness and disability claims were accounted for by “mental disorders.” The sticking point—and part of the reason for the article by Laurance—is that training was largely limited to a single treatment approach: cognitive-behavioral therapy (CBT). And research released this week indicates that the efficacy of the method has been seriously overestimated due to “publication bias.”

Combine such findings with evidence from multiple meta-analyses showing no difference in outcome between treatment approaches intended to be therapeutic, and one has to wonder why CBT continues to enjoy a privileged position among policy makers and regulatory bodies. Despite the evidence, the governmental body in the UK that is responsible for reviewing research and making policy recommendations—the National Institute for Health and Clinical Excellence (NICE)—continues to advocate for CBT. It’s not only unscientific, it’s bad policy. Alas, when it comes to treatment methods, CBT enjoys what British psychologist Richard Wiseman calls the “get out of a null effect free” card.

What would work? If the issue is truly guaranteeing effective treatment, the answer is measurement and feedback. The single largest contributor to outcome is who provides the treatment, not what treatment approach is employed. More than a dozen randomized clinical trials—the design of choice of NICE and SAMHSA—indicate that when ongoing measurement and feedback are used, outcomes and retention rates improve while costs decrease—in many cases dramatically so.