Medical innovation: Three questions and an observation

As a summer sequel to last winter’s popular “three tensions in medical innovation” piece, here are three medical innovation questions — and an observation.

Question 1: Is Medical Care Getting Better Or Worse?

Most senior physicians I know speak wistfully of the good old days, the days before the fifteen-minute office visit and confounding EMR systems, the days when they were doctors, not providers, respected more and metricized less.

In the eyes of many of these docs, the corporatization of medicine has robbed the profession of its vital soul, and turned a noble calling into just another job. As doctors have increasingly started to feel more like assembly-line workers than artisans, many lament that patient care has suffered.

On the other hand, most health economists and health policy experts point to data suggesting that patient care and outcomes have improved as a result of the very process “improvements” so many doctors despise. Physicians, say the critics, have resented the challenge to their autonomy and to the historic assumption that the doctor knows best. The evidence, experts say, suggests otherwise.

Part of the solution here — as outlined in Vinod Khosla’s 2012 white paper – must involve the robustification of clinical decision support systems in order to leverage existing data to guide physicians away from conspicuously bad choices. Critically (in my view), such a system must also be flexible enough — truly, humble enough — to tolerate a range of acceptable options (see “Phase 1,” here), and must learn continuously over time.

It’s also essential to recognize the profound potential of the patient-physician relationship, especially in the context of serious or chronic illness. The value of this deeply human connection goes far beyond the specific therapeutic recommendation the patient receives. Helping patients dynamically navigate illness in a way that sensitively incorporates both cutting-edge scientific understanding and highly personal patient preferences remains perhaps the most important aspect of a doctor’s job. In focusing so myopically on the discrete “selection of therapy” component, important as it is, many technologists may be misperceiving a key part of the broader “problem to be solved” here.

Question 2: Should Pharma Front-Load Risk in Clinical Development?

As I’ve previously discussed (see “Prioritization” section here), every management consultant I know is exasperated by pharma’s unwillingness to kill doomed projects. A good part of the reason drug development is so expensive, the consultants say, is that companies can’t stop plowing good money into bad projects.

The problem, of course, is figuring out whether a particular project is good or bad. Within every large pharma lie competing sets of narratives – stories of ultimately successful products that senior management had tried their best to kill, and stories of promising-sounding projects that have consumed millions of dollars (or more) yet led to nothing.

One area where the rubber hits the road here is figuring out how much risk to assume early in clinical development. Traditionally, the view has been to defer risk – such as head-to-head studies versus competitor drugs – for as long as possible, ostensibly to learn as much as you can about your drug before putting it to the acid test. Of course, many companies would probably prefer to avoid head-to-head studies entirely, concerned that the possibility of failure isn’t worth the risk.

More recently, in the context of increased payor pressure in the US (and the typical requirement for head-to-head studies abroad), companies have started to accept the need for comparison with existing drugs, but there’s still the question of when to do such studies.

Many on the business side urge development teams to do head-to-head studies as early as possible, ideally in phase 2. If you’re not better than the competition, the reasoning goes, let’s figure it out and move on. However, many experienced clinical developers argue that by subjecting your drug to such comparisons before you really understand how to dose it and use it, you’re unfairly increasing your chances of failure – and of killing what might be a promising drug in the process.

We had the chance to see this exact issue play out recently, when BMS announced they were going to kill a new drug they were developing for treatment-resistant depression, because it failed in phase 2 to demonstrate superiority to key competitors.

Many strategists presumably applauded BMS for the guts to do this sort of high-risk study. Yet others questioned the approach: “Dumb decision, MOA [mechanism of action] very compelling, should have waited and run larger HTH [head-to-head] trial,” a critic tweeted, reflecting the view of a number of experienced drug developers.

There’s probably not a single right answer here, and the development strategy will probably continue to be informed by the perceived market. As long as drug developers can avoid front-loading risk, they will; but if it gets to the point where early differentiation becomes a must-have, rather than a nice-to-have, someone will insist they do it. The question is whether the resulting quick kills will liberate resources for more productive programs or ultimately cost resources by prematurely terminating potential blockbusters.
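The trade-off just described can be made concrete with a toy expected-value sketch. Everything below – the success probabilities, trial costs, payoff figures, and function names – is an illustrative assumption invented for this sketch, not industry data:

```python
# Toy expected-value comparison of running a head-to-head (HTH) study
# early (in phase 2) versus late (in phase 3). All parameters are
# illustrative assumptions, not real development economics.

def ev_early_hth(p_superior, power_early, cost_p2, cost_p3, value):
    """Phase 2 HTH: only apparent winners pay for phase 3, but an
    underpowered, not-yet-optimally-dosed study produces false kills."""
    p_pass = p_superior * power_early
    return -cost_p2 + p_pass * (value - cost_p3)

def ev_late_hth(p_superior, power_late, cost_p2, cost_p3, value):
    """Phase 3 HTH: every program pays for both phases, but a
    well-dosed, truly superior drug is more likely to show it."""
    return -(cost_p2 + cost_p3) + p_superior * power_late * value

# Shared assumptions (figures in $M): 30% of candidates are truly
# superior; an early HTH detects this 60% of the time, a late HTH 90%.
args = dict(p_superior=0.3, cost_p2=50, cost_p3=300)

for value in (2000, 5000):  # modest drug vs. potential blockbuster
    early = ev_early_hth(power_early=0.6, value=value, **args)
    late = ev_late_hth(power_late=0.9, value=value, **args)
    print(f"value={value}: early EV={early:.0f}, late EV={late:.0f}")
```

Under these made-up numbers, the early study wins for the modestly valuable drug (quick kills conserve capital), while deferral wins for the potential blockbuster (false kills are too costly) – consistent with the concern above about prematurely terminating potential blockbusters.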

Question 3: Are Academic Hospitals Still The Good Guys?

Traditionally, academic medical centers (AMCs) were inefficient – notoriously so. The culture was built for reflection and teaching, not for throughput. AMCs have also been a source – essentially, the source – of medical innovation, as it’s difficult to think of a major clinical advance in the modern era that didn’t originate from an inquisitive academic physician.

However, as the external environment has changed, and placed more emphasis on cost and price, AMCs seem to be changing in two significant ways.

First, they are overtly focused on – obsessed with – billing and money. The result, say many of my former colleagues who’ve remained in academic medicine, has been a profound change in the AMC’s atmosphere. There is now a huge emphasis on production – RVUs, billable procedures, the works – with real penalties for those who come up short.

Second, the AMCs are aggressively consolidating and using their ever-growing market power to sustain unusually high prices – as AthenaHealth’s CEO Jonathan Bush lamented during his recent TED talk.

This emphasis on financial return seems to have coincided with profound challenges on the funding front – it seems absurdly hard for academic investigators, especially emerging ones, to scare up enough funds to support themselves, and intensifying clinical obligations make it extremely difficult for those with limited support to find time for meaningful research.

AMC executives, for their part, tend to argue that without aggressive financial management, AMCs wouldn’t be able to perform their vital, distinctive function.

The question is, how much can AMCs evolve without sacrificing this exact distinctive character? At what point does an academic center become just another hospital system, the neighborhood bully that Steve Brill recently described in Time?

Perhaps, in the long run, AMCs might be better off by focusing on their unique value proposition, and justifying the cost of inefficiency – e.g. by specializing in rare cases and difficult procedures, as Bush suggests – rather than persisting with their current approach of bullying competitors, and squeezing their talent more than they cultivate it.

However, there’s also a chance that, by focusing so intensively on subjects such as process optimization, AMCs could drive an important reconceptualization of academic medical research, broadening the focus from physiology and molecular biology to include care-system improvement and user engagement. These are exactly the sort of topics leaders such as Stanford’s Arnold Milstein are trying to pursue (see here). The hope is that such research occurs in addition to, rather than instead of, the still essential, more traditional work on disease pathophysiology (as I’ve argued here and here).

Yet perhaps the most significant consequence of the challenges facing AMC-based medical innovators is that they raise the question: why should AMC doctors have all the fun?

Especially in our hyperconnected age, in the era of open innovation, Joy’s Law, and the quantified self, there would seem to be a tremendous – and largely unrealized – opportunity to take advantage of “field discovery” (to use a term from MIT’s Eric von Hippel), the insight of real-world practitioners, and, more generally, of innovators outside of AMCs. Medicine might also benefit from a populist counterpart to the NEJM, and welcome an alternative publication – or platform – that more fully represents and leverages the voice and experience of practitioners in the trenches, helps better capture and reflect the value they bring, and both catalyzes and celebrates the democratization of medical research.

Observation: Digital Health’s Customers

In the space of several hours this week, I watched this video of TechCrunch’s Felicia Williams discussing her initial experience with the wearable activity monitor Misfit Shine, and listened to this podcast of Harvard Pilgrim’s Eric Schultz, discussing engagement and transparency, including Harvard Pilgrim’s relationship with Castlight Health.

It’s difficult to imagine two more different customers. To the extent that the TechCrunch video represents the Silicon Valley, consumer-focused perspective, and the Schultz interview captures the healthcare system’s results-oriented viewpoint, it’s jarring to recognize just how distinct these two mindsets are, and how different the expectations are for “health” technology – essentially, delight versus outcome.

Prospective digital health entrepreneurs will need to figure out at an early stage whether they are targeting Williams or Schultz. The most successful digital health entrepreneurs may be the ones who eventually figure out how to develop products that please both.