Wednesday, 30 October 2013

Don't show me the evidence. Show me how you weighed the evidence.

Sometimes we fool ourselves into thinking that if people just had access to all the relevant evidence, then the right decision - and better outcomes - would surely follow.

Of course we know that's not the case. A number of things block a clear path from evidence to decision to outcome. Evidence can't speak for itself (and even if it could, human beings aren't very good listeners).

It's complicated. Big decisions require synthesizing lots of evidence arriving in different (opaque) forms, from diverse sources, with varying agendas. Not only do decision makers need to resolve conflicting evidence, they must also balance competing values and priorities. (Which is why "evidence-based management" is useful as a concept but, as a tangible process, simply wishful thinking.) Later in this post, I'll describe a recent pharma evidence project as an example.

If you're providing evidence to influence a decision, what can you do? Transparency can move the ball forward substantially. But ideally it's a two-way street: Transparency in the presentation of evidence, rewarded with transparency into the decision process. However, decision-makers avoid exposing their rationale for difficult decisions. It's not always a good idea to publicly articulate preferences about values, risk assessments, and priorities when addressing a complex problem: You may get burned. And it's even less of a good idea to reveal proprietary methods for weighing evidence. Mission statements or checklists, yes, but not processes with strategic value.

The human touch. If decision-making were simply a matter of following the evidence, then we could automate it, right? In banking and insurance, they've created impressive technology to automate approvals for routine decisions: But doing so first requires a very explicit weighing of the evidence and design of business rules.

Where automation isn't an option, decision makers use a combination of informal methods and highly sophisticated models. Things like Delphi, efficient frontier, or multiple criteria decision analysis (MCDA); but let's face it, there are still a lot of high-stakes beauty contests going on out there.
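Of the methods above, MCDA is the most explicit about how evidence gets weighed: each criterion receives a numeric weight, options are scored against each criterion, and a weighted sum ranks the options. Here's a minimal sketch of that idea — all criteria, weights, drug names, and scores are hypothetical, chosen only to illustrate the mechanics:

```python
# Minimal weighted-sum MCDA sketch. The criteria, weights, and
# per-drug scores below are invented for illustration; a real
# committee would derive them through its own elicitation process.

CRITERIA_WEIGHTS = {   # weights must sum to 1.0
    "efficacy": 0.40,
    "safety":   0.35,
    "cost":     0.25,
}

def mcda_score(scores, weights=CRITERIA_WEIGHTS):
    """Combine per-criterion scores (0-10 scale) into one weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

# Two hypothetical drug candidates, scored per criterion:
drug_a = {"efficacy": 8, "safety": 6, "cost": 4}
drug_b = {"efficacy": 6, "safety": 9, "cost": 7}

ranked = sorted([("Drug A", drug_a), ("Drug B", drug_b)],
                key=lambda kv: mcda_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {mcda_score(scores):.2f}")
```

Notice what the exercise forces: the weights are the transparency. Writing down `0.40` for efficacy versus `0.25` for cost is exactly the kind of explicit value judgment that, as discussed above, decision makers are often reluctant to publish.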

What should transparency look like? Presenters can add transparency to their evidence in several ways. Here's my take:

Level 1: Make the evidence accessible. Examples: Publishing a study in conventional academic/science journal style. Providing access to a database.

Level 2: Show, don't tell: Supplement lengthy narrative with visual cues. Provide data visualization and synopsis. Demonstrate the dependencies and interactivity of the information. Example: Provide links to comprehensive analysis, but first show the highlights in easily digestible form - including details of the analytical methods being applied.

Level 3: Make it actionable: Apply the "So what?" test. Show why the evidence matters. Example: Show how variables connect to, or influence, important outcomes (supported by graph data and/or visualizations, rather than traditional tabular results).

On the flip side, decision makers can add transparency by explaining how they view the evidence: Which evidence carries the most weight? Which findings are expected to influence desired outcomes?

How are pharma coverage decisions made? Which brings me to transparency in health plan decision-making. Here you have complex evidence and important tradeoffs, compounded by numerous stakeholders (payers, providers, patients, pharma). When U.S. pharmaceutical manufacturers seek formulary approval, they present the evidence about their product; frequently they must follow a prescribed format such as AMCP dossier (there are other ways, including value dossiers). Then the health plan's P&T (Pharmacy and Therapeutics) committee evaluates that evidence.

“Right now, there is a bit of a ‘black box’ around the formulary decision-making process,” said Robert Dubois, MD, PhD, NPC’s chief science officer and an author of the study. “As a result, decisions about treatment access are often unpredictable to patients, providers and biopharmaceutical manufacturers. We sought to identify ways to clarify the process.”

Whose business is it, anyway? Understandably, manufacturers want to clarify what factors influence the level of access their products receive. And patients want more visibility into formularies: What coverage and co-pays can they expect from their health plan? How is safety weighed against effectiveness? Now that U.S. healthcare is becoming more consumer-driven, I expect something to change.

The process. Put simply, the project sponsors were asking payers to explain how they balance the evidence about drug efficacy, safety, and cost. Capturing that information systematically is a big challenge. In scenarios like this, you'll often end up with a big checklist, which is sort of what happened (snippet shown above). An evidence assessment tool was developed by surveying medical and pharmacy directors, who identified key factors by rating the level of access they would provide for drugs in various hypothetical scenarios.

And then sadness. The tool was validated, then pilot-tested in real-world environments where P&T committees used it to review new drugs. However, participants in the testing portion indicated that "the tool did not capture the dynamic and complex variables involved in the formulary decision-making process, and therefore would not be suitable for more sophisticated organizations." Once again, capturing a complex decision-making process seems out of reach.

Setting expectations. Traditional vendor/customer relationships don't lend themselves to openness. If pharma companies want more insight into payer expectations, they'll have to build strong partnerships with them. That's something they're now doing with risk-sharing and value-based reimbursement, but things won't change overnight. Developing the data infrastructure is one of the long-term challenges, but it seems to me - despite the unsuccessful result with the formulary tool - that more transparency could happen without substantial IT investments.