Thursday, July 2, 2015

Theory and empirics in cancer research

A couple years ago I read The Emperor of All Maladies, a fantastic book on the history of cancer research and treatment. I think a lot of economists would like this book because, among other things, it focuses on the differing roles of empirical and theoretical progress in a discipline that is constantly asked to serve the real world. In particular, both fields seem to display a constant tension between quick, "credible" fixes to problems and a deeper theoretical understanding of driving forces.

Reader beware--medicine and biology are way outside my wheelhouse, and it's been some time since I've read this book.

If you've known someone with cancer, there's a good chance that the treatment consisted of some combination of chemotherapy, radiation, and surgery. In a sense these are very brute-force ways to treat the problem, and as you might have noticed, the collateral damage can be massive. For the most part, the brute-force approach to treating cancer comes to us from an empirical (actually experimental) tradition.

For a long time, surgery was the preferred method of treating cancer. The tricky thing is that simply cutting out a visible tumor doesn't always eliminate cancer from the body. This fact, discovered empirically, led to improvements in surgical technique but also to some extreme practices among some doctors. William Halsted performed "radical mastectomies" which left breast cancer patients with severed collarbones and gaping holes in their bodies. "Halsted and his disciples would rather evacuate the contents of the body than be faced with cancer recurrences" (65). This was part of a broader milieu in the world of surgery. "By 1898, it had transformed into a profession booming with self-confidence" (66).

Over the short term, these extreme surgical methods seemed to work reasonably well. But they reflected a woefully incomplete understanding of cancer. They could often cure the problem in women with small, local cancer activity, but at the cost of destroying the body without cause. On the other hand, cancer that had metastasized throughout the body was not eliminated by the surgery. The approach did not save nearly as many lives as its proponents hoped, and it left a lot of people tragically disfigured.

Radiation was a more precise approach that allowed specific targeting (see page 75). It could often work well for localized tumors, saving many lives. Sometimes, though, it could actually cause cancer. And in any case, the collateral damage could still be huge, leaving patients "scarred, blinded, and scalded." It destroys cells indiscriminately.

Chemotherapy has roots in experiments with cloth dyes in the 1800s (85) and mustard gas in WWI (87). In the 1940s, treatments with combinations of chemicals were employed in increasingly well-designed experiments. This is chemotherapy, and while it's more advanced than surgery, it is similar in that it employs brute-force methods with huge collateral damage. Its advancement was driven by experiment rather than by a growing understanding of cancer. Treating each specific cancer was just a matter of finding the right combination of toxins. The approach could lengthen lives, but sometimes by only a few months (208) (and sometimes for much longer!). Collateral damage was often huge. Much of cancer research funding went toward these experiments as opposed to deeper research:

They wanted a Manhattan Project for cancer. Increasingly, they felt that it was no longer necessary to wait for fundamental questions about cancer to be solved before launching an all-out attack on the problem (121).

This approach had some success in treating cancer, with high costs (page 330 reviews cancer treatment's results to the mid-1980s). So there was some backlash:

As the armada of cytotoxic therapy readied itself for even more aggressive battles against cancer, a few dissenting voices began to be heard along its peripheries. These voices were connected by two common themes.

First, the dissidents argued that indiscriminate chemotherapy, the unloading of barrel after barrel of poisonous drugs, could not be the only strategy by which to attack cancer. Contrary to prevailing dogma, cancer cells possessed unique and specific vulnerabilities that rendered them particularly sensitive to certain chemicals that had little impact on normal cells.

Second, such chemicals could only be discovered by uncovering the deep biology of every cancer cell. Cancer-specific therapies existed, but they could only be known from the bottom up, i.e., from solving the basic biological riddles of each form of cancer, rather than from the top down, by maximizing cytotoxic chemotherapy or by discovering cellular poisons empirically.

This biology-based approach gained some traction, and there were some major breakthroughs in the 1980s that identified cancer-causing mechanisms at the molecular level. There has been a lot of progress since then. Understanding cancer better has made us better at early detection, which has significantly reduced mortality. More chemotherapy has played a role too. One cancer researcher looks back at a pioneer of experimental cancer treatment, Sidney Farber (from the 1940s), and writes,

Farber's generation had tried to target cancer cells empirically, but had failed because the mechanistic understanding of cancer was so poor. Farber had had the right idea, but at the wrong time. (433)

The author concludes that "an integrated approach" is needed (457). The case of breast cancer is particularly illustrative of this point (402).

So basically we spent a century treating cancer with methods discovered through experiment, with results that range from tragic to pretty good (the book reviews some studies). In a sense the approach served us pretty well, providing ways to treat a terrible disease without having to invest a lot of time and money in understanding it. Then we focused more on understanding the disease, which yielded pretty good progress. At any given point in time, focusing more on basic research may have denied effective treatment to existing patients, but it may have sped the process of finding better approaches.

I don't really have any big conclusions other than to say that I think the conflict between theory and empirics is complicated and may be unavoidable, particularly in the case of economics. The "credibility revolution" is a huge step forward for the field, with its ability to provide quick answers to policy questions. But what does it tell us about a $15 minimum wage? Nothing, really. Theory, meanwhile, is a messier business, and it can result in a lot of wasted effort. So neither the theorists nor the empiricists are in a position to feel overly important. We need both, and sometimes one will make progress faster than the other.

This book is a really good read with a lot of other insights relevant to economics (page 211 rings a bell, for example); recommended.

1 comment:

That was probably my favorite non-fiction book of all time. A fascinating read with all kinds of insights.

I've been thinking a lot about the theory vs. empirical debate recently as I try to determine where to lay down my roots. A big problem that I see is that the field of economics (at least micro) has been split into people who do empirical work and people who do deep theoretical work. Much of the theoretical work is not particularly useful in a practical sense, because it worries so much about the math that it misses the point. This seems to be the view of most mainstream econ PhDs coming out of grad school these days.

At the same time (and less widely recognized) it seems that much of the empirical work being done is also somewhat useless, but for a different reason. Most empirical work is (1) not generalizable and (2) provides the answers to important questions "too late," or after the answers would have been most useful. This is implicit in your comment about the $15 minimum wage. Economists have developed really good tools for estimating the effects of something like a $15 minimum wage, but only if someone actually implements it.

To me, the biggest tragedy seems to be the slow death of "applied theory," where one uses the tools of economics to answer important policy questions. Applied theory has the advantage that it allows the economist to provide useful results regarding policy questions prior to the policies being enacted. While there is some applied theory used among the best empiricists, such as Amy Finkelstein, Raj Chetty, and others, it is typically only used when there is some way to combine the theory and the empirics. There is a whole chunk of important policy questions being neglected by economists because there is no good way to apply data to the model, and without the combination of theory with empirics, the reward given by our field is minimal. This is true despite (1) the importance of these questions and (2) the potential insights that can come from applying a simple theoretical model to them. This was the model in the field for decades (think Akerlof, Stiglitz, etc.). While all of the econometric advances have been amazing, incredibly useful, and socially valuable, neglecting applied theory makes us vulnerable to becoming "causal" statisticians who don't understand our own discipline and who can provide little advice to policymakers on the many questions for which there is no empirical data.