The Central Question of Medicine

Staring at landscapes made of pill containers amidst mountains of paperwork, patients contemplate their troubling present and their uncertain future. What is best for me? What is best for my family?

At least 3 challenges await those seeking an answer to this central question of medicine. The first challenge results from the paucity of trustworthy comparative effectiveness research that directly addresses patient dilemmas. The second challenge relates to what each person values and would consider best for their own situation. Fitting treatments within the context of each person’s life is the third challenge.

Ongoing treatments, to work, have to fit into people’s daily routines, weaving seamlessly and constantly into their day-to-day activities.1 It is hard to predict how a new intervention will interact with the existing care plan, and how the care plan will work with each day’s plan. Clinicians cannot discover patient values, preferences, and contexts without interacting meaningfully with patients. To find what is best, clinicians must partner with patients to think, talk, and feel through their situation, use the best research evidence, and craft together the best treatment plan while minimally disrupting their lives.2,3

What sort of evidence enterprise can support that work? Here, I propose that the evidence necessary to support patient-centered care requires a fundamental change in the culture of research. Finding what is best for patients requires generous collaboration for Big Science.

How Little Science Fails Patients

The available research produces evidence that clinicians and patients struggle to use to determine what is best for each patient. It is as if the research enterprise were not fit for this purpose. Consider trials determining the value of a treatment by measuring its relative impact on a primary end point. This end point is often a surrogate marker, a laboratory measurement devoid of patient experience, importance, or meaning.4 Or it is a composite end point comprising important outcomes (eg, mobility and mortality) combined with less important ones. The trivial components tend to capture most of the effect and cloud the interpretation of trial results.5 These end points are chosen to permit the briefest and smallest trial, usually with fewer than 1000 very high-risk participants. This choice is then argued beautifully in a sample size calculation, researchers’ closest foray into writing fiction. The budget for this Lilliputian project comes, somehow, only a few cents short of the maximum fundable budget. Even as these studies efficiently identify winning interventions, their narrow and skewed definition of success reduces their value to usual patients and their clinicians. Should a patient use a treatment because it can shift a laboratory marker? How to know the extent to which patients like this would be better off—for example, live longer, with less disability, or feel better—with this treatment? For patients already taking multiple medications, how to appreciate the incremental value vis-à-vis the incremental burden of treatment?6 Are these trials able to provide patients and clinicians with salient reasons to support their decisions, for example, the relative effect of alternatives on outcomes that people can experience and value?
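The arithmetic behind these design choices is easy to sketch. The following illustration uses standard 2-group sample size formulas with entirely hypothetical effect sizes and event rates (not drawn from any particular trial) to show why a responsive surrogate marker supports a far smaller trial than a patient-important event such as death:

```python
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

def n_continuous(delta, sd, alpha=0.05, power=0.8):
    """Per-group n to detect a mean difference `delta` with common SD `sd`
    (standard 2-sample formula for a continuous, eg, laboratory, end point)."""
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * ((za + zb) * sd / delta) ** 2)

def n_binary(p1, p2, alpha=0.05, power=0.8):
    """Per-group n to detect a difference between event proportions p1 and p2
    (standard formula for a binary end point such as mortality)."""
    za, zb = z(1 - alpha / 2), z(power)
    pbar = (p1 + p2) / 2
    num = za * sqrt(2 * pbar * (1 - pbar)) + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return ceil((num / (p1 - p2)) ** 2)

# Surrogate marker: a 0.5-SD shift in a lab value needs only dozens per arm.
print(n_continuous(delta=0.5, sd=1.0))

# Patient-important outcome: reducing mortality from 10% to 8% needs thousands.
print(n_binary(0.10, 0.08))
```

With these illustrative inputs, the surrogate end point requires roughly 60 participants per group while the mortality end point requires several thousand, which is why trials built to be brief and small gravitate toward laboratory markers.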

Many large drug and device trials are designed primarily to secure the approval of the US Food and Drug Administration. Approval hinges on successful comparisons against placebo controls rather than against another active option that patients may use instead. But in the frontlines of care, in deciding what is best for our patient, what we need are comparisons among the best, most sensible alternatives. Instead, placebo-controlled trials force reliance on unreliable indirect comparisons, contrasting each agent’s effect against placebo.

A recent development permits the comparison of alternatives using all available studies, including trials that directly compared agents against each other and placebo-controlled trials.7,8 Placed in a network of comparisons, we can analyze the scarce head-to-head direct comparisons and the more numerous indirect comparisons to better estimate the relative impact of each agent. Pooling published evidence, however, cannot overcome the limitations in the underlying evidence. Because some of this evidence was produced to position drugs as market and sales leaders, only partial and biased data sets and trial results are published and available for pooling.9,10
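The core of such an indirect comparison can be sketched simply. In the adjusted indirect (Bucher) method, the log effect of drug A versus drug B is obtained by subtracting each agent's log effect versus the common placebo, with the variances adding. The trial summaries below are hypothetical, chosen only to show how much uncertainty the indirect route carries:

```python
from math import exp, log, sqrt

def indirect_comparison(ln_or_ap, se_ap, ln_or_bp, se_bp):
    """Adjusted indirect comparison of drug A vs drug B through a common
    placebo comparator P: log odds ratios subtract, variances add."""
    ln_or_ab = ln_or_ap - ln_or_bp
    se_ab = sqrt(se_ap ** 2 + se_bp ** 2)  # always wider than either input
    lo, hi = ln_or_ab - 1.96 * se_ab, ln_or_ab + 1.96 * se_ab
    return exp(ln_or_ab), (exp(lo), exp(hi))

# Hypothetical placebo-controlled summaries:
#   A vs placebo: odds ratio 0.70 (SE of log OR, 0.15)
#   B vs placebo: odds ratio 0.85 (SE of log OR, 0.12)
or_ab, ci = indirect_comparison(log(0.70), 0.15, log(0.85), 0.12)
print(round(or_ab, 2), tuple(round(x, 2) for x in ci))
```

The point estimate (0.70/0.85, about 0.82) may look informative, but the 95% confidence interval spans well past 1.0, illustrating why indirect comparisons alone are a shaky basis for choosing among active treatments.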

We can hardly compensate in clinical care for inadequate or unreliable evidence. Consider efforts to bring the evidence about antidepressants to bear in helping primary care clinicians and their patients select an antidepressant. Combining what could be used of the published record with the expertise of clinicians and patients, a tool to support this process was produced.11,12 The tool compares available agents on their ability to affect patient weight, sexual function, and sleep quality; on their effects with drug discontinuation; and on their daily use and out-of-pocket cost. One domain is crucially absent from this tool because the evidence of any difference across agents was too unreliable: their impact on depression. The work and expense of conducting efficacy trials that were not designed, conducted, and published to meet the decision needs of patients and clinicians have instead supported fantastically effective marketing campaigns. Yet, this evidence cannot be used confidently to figure out what is best for our patients.

Big Data Is Not Great Data

The sexiest solution proposed to address the problem of inadequate evidence for clinical decision making is to harness the power of so-called Big Data. Big Data makes use of the unintentional and heterogeneous data byproducts of the provision of clinical care and of billing payers for this care. Mining these extremely large observational and administrative data sets requires sophisticated policies, informatics, statistics, and data processing to render the data useful and its use ethical.13 Many features of Big Data make it attractive. Investigators can propose myriad projects against the same data set, lowering the cost per question. Questions made feasible by mega–data sets explore the effects of treatments, including those used infrequently, on patient subgroups, on rare outcomes, on harms, or on outcomes that take a long time to appear. Because these data result from actual practice, they are less affected by the constraints typical of efficacy trials.14

Unfortunately, in my view, these data, no matter how big (and some data sets include >100 million people), can only provide us with estimates of association, albeit interesting and precise ones. These estimates of association about the relative value of the available options fall short of meeting decision makers’ needs to draw causal inferences. I do not trust instrumental variables and propensity scores and other fancy analytic tools designed to extract quasicausal inferences from these data sets. Their methodological sophistication cannot successfully overcome the limitations in the data set itself: big, yes, but riddled with error, incomplete (from key variables only available for some patients to massive silence about the biological, psychological, and socioeconomic context of each patient), and confounded. Although these hints can be useful, and sometimes very useful, too often Big Data is simply not Great Data.
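A small synthetic simulation (invented purely for illustration, not drawn from any real data set) shows how an unmeasured confounder can manufacture a strong association where no treatment effect exists:

```python
import random

random.seed(0)

# Simulate a "big" observational data set in which the treatment has NO
# true effect on death; an unrecorded severity variable drives both who
# gets treated and who dies.
n = 100_000
treated = untreated = treated_deaths = untreated_deaths = 0
for _ in range(n):
    severity = random.random()               # unmeasured confounder
    gets_drug = random.random() < severity   # sicker patients get treated
    dies = random.random() < severity        # sicker patients die more often
    if gets_drug:
        treated += 1
        treated_deaths += dies
    else:
        untreated += 1
        untreated_deaths += dies

risk_treated = treated_deaths / treated
risk_untreated = untreated_deaths / untreated
# The drug does nothing, yet the naive risk ratio suggests it roughly
# doubles mortality, an artifact of confounding by indication.
print(round(risk_treated / risk_untreated, 2))
```

Adjustment methods can remove this bias only when severity is recorded; when the confounder is absent from the data set, as is so often the case in administrative data, no analytic sophistication can recover the truth.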

Toward Big Science

Assessing the impact of sensible options on outcomes that matter requires very large randomized trials. These trials can only be assembled through massive participation. Here, it is not only the data that is big but also the scale of the collaboration across scientists and academic institutions, clinics and health systems, and patients and communities. This is not just Big Data. It is Big Science.

Big Science can help us characterize and estimate differences that matter not only with precision but also with credibility. This goes beyond determining whether the options are different, to estimating the magnitude and nature of their differences across diverse patients, outcomes, and contexts. To answer these questions, Big Science must take place across geographies, cultures, and models of care, a task that requires broad international collaboration in the conduct of large, multicenter randomized trials, prospective meta-analyses, and new designs we are yet to invent. The Table compares Big Science with other clinical care research approaches.

Comparison of Big Science With Other Clinical Care Research Approaches

Only now are we beginning to realize just how large Big Science has to be. For example, when the Patient-Centered Outcomes Research Institute (PCORI) started funding trials, these planned to enroll only a few hundred patients. When PCORI pivoted to fund practical comparative effectiveness trials designed to meet the needs of decision makers,14 the planned size of these trials grew into the tens of thousands. To be feasible, work of this magnitude and complexity must take place within existing care settings, including practice networks dedicated to participating in a learning health system.15 To serve as a scaffold for the conduct of very large trials, PCORI invested in and assembled PCORnet (the National Patient-Centered Clinical Research Network). The ADAPTABLE trial (Aspirin Dosing: A Patient-Centric Trial Assessing Benefits and Long-term Effectiveness), for example, was designed to use PCORnet to enroll 20 000 patients with coronary disease to compare 2 doses of aspirin,16 across outcomes important to patients and across pertinent subgroups. Big Science makes this scale of work possible.

From Competition to Collaboration

To get to Big Science, however, we must extirpate competition from clinical research. Private foundations, local research funding offices, and federal agencies all use competition to allocate resources and build their portfolios. Competition culls. Competition, mostly centered on obtaining research funding, produces very few winners and very many losers. Lack of funding is interpreted by academic institutions to mean that peers have judged your ideas as unworthy and funding your research as wasteful. I cannot find good evidence in support of the notion that this form of competition improves ideas, drives innovation, and fosters talent in science. Has research funded during periods of austerity yielded better science?

Consider the work necessary to produce a research proposal, and that 8 or 9 of every 10 of these proposals end up unfunded and buried in the unreadable hard drive of an outdated laptop. Some researchers, like lottery players, play their odds by submitting and resubmitting as many applications, even mediocre ones, as they can. This produces the illusion of productivity and keeps funding agency staff and reviewers busy. With their creativity, time, and effort wasted, trained talented researchers writing proposals instead of conducting new experiments burn out and give up, leaving behind unanswered questions and unexplored ideas. Perhaps these investigators, their ideas, and approaches were the weakest. Perhaps our system also discarded the brilliant, the generous, and the pathbreaking.

Because for every winner there are many losers, the winner learns not to share their secrets, their contacts, their approach, and their resources. When researchers are in the same institution—a situation that should facilitate collaboration and Big Science—sharing is discouraged when the promotion of one may require the failure of competitors. Transparency and generosity, key scientific characteristics critical for Big Science, languish devalued and displaced by competition.

What It Takes to Do Big Science

The nature and magnitude of collaboration required for Big Science must follow from a fundamental change in the culture of research. Institutions will have to reward the collaborative and generous scientist, one who excels at followership, fellowship, engagement, and inclusion. Big Science requires close partnerships within and between communities of research and clinical practice. The methods deployed must work at scale and cause minimal disruption in the process and experience of clinical care. They must balance rigor with adequate privacy protection. And we will have to invent new ways of funding to promote and support the work of the collaborative multidisciplinary teams that develop the best ideas and the practice networks that realize their plans.

Healthcare payers are exploring new care and payment models. We must consider the possibility that the biggest value proposition in healthcare involves caring for patients while we learn about how to improve care. Could payers reward care teams that collaborate with Big Science teams? It is possible that by breaking the budget silos of research and practice we can fund and sustain Big Science and improve evidence-based practice.

We need to share protocols and ideas, and we need to develop commons where that sharing happens freely and easily. The collaborative culture of Big Science should mitigate the fear that drives some researchers to claim ownership of ideas and data and facilitate collaboration in the secondary analyses of these complex and rich repositories, perhaps a better form of Big Data. Talented people sequestered in a room or thousands collaborating online could work toward the best possible research designs; communities of practice could pilot test and improve the feasibility of the protocol. The value proposition of Big Science, however, will be woefully incomplete unless it is able to fundamentally improve the lives of patients by translating evidence into practice. Some groups are leading the way through large-scale collaborations that configure a learning system able to generate new evidence and improve quality of care.17,18

Clinicians and patients will still need to struggle to carefully identify what is best for each patient at each juncture. They will have to consider the available options, including the option to participate in clinical trials, until the best way becomes evident. I do not think that science can ever answer the question of what is best for each patient; that answer depends on the values, preferences, and dynamic context of each one. But Big Science offers a promising chance to ease the challenging practice of patient-centered evidence-based care.

This integrated learning system—one that produces and uses Big Science to advance patient-centered care—will need broad engagement of stakeholders. The time is right as it has become increasingly difficult to imagine research without engaging patients and caregivers and other stakeholders in all aspects of clinical research.19 A similar shift is taking place in practice where care is increasingly imagined as being cocreated with patients.20 Leading healthcare institutions and individuals must give voice to the true magnitude of ignorance and uncertainty in which we seek to care and to improve care. The work of developing research-practice communities and their commons, of facilitating collaboration across ideas, protocols, work, and data, and of translating evidence into practice calls for inclusive and generous collaboration. The next big cultural challenge for biomedical science is to make generous collaboration fundamental.

Generous Collaboration

Science fairs sometimes inspire children to become scientists. What they learn in these fairs, unfortunately, is that a successful scientist is one who beats everyone else for recognition. Clinical researchers recognize that the same rules apply to grown-up medical science. Show up often with your not-so-innovative grantsmanship and compete to win. Keep your ideas, resources, and credit to yourself. These rules must change. To be successful, scientists focused on improving the care of patients must apply innovative craftsmanship. They must collaborate broadly while sharing generously and transparently without regard for credit. They must publish fully and liaise with others to ensure that science-based care reaches everyone who can benefit.

Big Science needs generous collaboration. The International Space Station, built and maintained by scientists from different countries, orbits above, its fast and luminous path a monument to generous collaboration. In the bowels of Europe, the Large Hadron Collider is rewarding multinational financial and scientific collaboration with magnificent discoveries from the subatomic world. More insights into space and matter are being revealed thanks to the ingenuity, hard work, and generosity of collaborating people, institutions, and countries. The article describing the discovery of the Higgs boson listed the names of its 5154 collaborators in its authorship byline in alphabetical order.21 Who discovered that particle? We all did. I hope the day will come when we can all celebrate the fruits of generous collaboration in medicine. A day in which Big Science helps patients and clinicians uncover what is best for the patient.

Acknowledgments

Dr Montori is grateful for the generous contribution to the ideas reflected here of his colleagues at the Knowledge and Encounter Research Unit at Mayo Clinic.

Disclosures

Dr Montori leads the Knowledge and Encounter Research (KER) Unit at Mayo Clinic; this research group has received over the last 12 years grant funding and contracts from nonprofit organizations for the conduct of systematic reviews and meta-analyses and for the formulation of practice guidelines and shared decision making tools based on these syntheses. Dr Montori and the KER Unit derive no other income from these activities. Dr Montori has no other financial relations to report.

Footnotes

Adapted from a Keynote Address at the First Patient-Centered Outcomes Research Institute (PCORI) Annual Meeting, Washington, DC, 2015. The ideas expressed here do not necessarily represent views held by PCORI.
