Why Survival Rate Is Not the Best Way to Judge Cancer Spending

In 2012, a study published in Health Affairs argued that the big money we spend on health care in the United States is worth it, at least when it comes to cancer. The researchers found that the survival gains seen in the United States equated to more than $550 billion in additional value, more than the difference in spending.

This research depended on survival rates. A new study was recently published in the same journal, but using mortality rates. That study found that cancer care in the United States might provide significantly less value than that in Western Europe.

Which should you believe? It’s worth exploring these two studies, and their metrics of choice, to get a better understanding of whether what we are spending in the United States really is worth it.

Mortality rates are determined by taking the number of people who die of a certain cause in a year and dividing it by the total number of people in a population. For instance, the mortality rate for men with lung cancer in the United States, according to the SEER database, is 61.6 per 100,000 people.

Survival rates describe the number of people who live a certain length of time after a diagnosis. The five-year survival rate for people found to have lung cancer is 16.8 percent.
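The arithmetic behind the two metrics is simple, and the contrast is easier to see side by side. Here is a toy sketch in Python; the function names are mine, and the inputs are hypothetical numbers chosen only to echo the figures above:

```python
# Mortality rate: deaths from a cause in a year, per 100,000 people
# in the whole population (not just the diagnosed).
def mortality_rate_per_100k(deaths_in_year, population):
    return deaths_in_year / population * 100_000

# Survival rate: the share of diagnosed patients still alive
# a set number of years after diagnosis.
def five_year_survival(alive_at_5_years, diagnosed):
    return alive_at_5_years / diagnosed

# Hypothetical inputs chosen to echo the figures in the text.
print(round(mortality_rate_per_100k(61_600, 100_000_000), 1))  # 61.6
print(five_year_survival(168, 1_000))                          # 0.168
```

Note the different denominators: mortality divides by everyone in the population, survival divides only by the people who received a diagnosis. That difference is what the two biases below exploit.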

These numbers describe very different concepts. But almost all of the research you might find in this area uses survival rates as the metric. One reason is that they're much easier to measure. You enroll people upon diagnosis, follow them for a set number of years, and count how many survive. Mortality rates are more of a population metric. They describe the population as a whole, and they're much harder to measure accurately.

Moreover, the survival rate is the information patients want. When patients learn they have cancer, they want to know the likelihood that they will live a certain amount of time. That’s what a survival rate will tell them. Mortality rates won’t mean anything to them at all.

But there are two problems with survival rates. The first is what’s known as lead-time bias. In reality, you can decrease the mortality rate only by preventing people with the disease from dying, or preventing them from getting it in the first place.

You can improve the survival rate, however, by preventing death, preventing people from getting sick, or making the diagnosis earlier. That last factor can make all the difference.

Here’s the example I always use to explain this concept: Let’s consider a hypothetical illness, thumb cancer. We have no method to detect the disease other than feeling a lump. From that moment, everyone lives about four years with our best therapy. Therefore, the five-year survival rate for thumb cancer is effectively zero, because within five years of detection, everyone dies.

Now, let’s assume that we develop a new scanner that can detect thumb cancer five years earlier. We prevent no more deaths, mind you, because our therapy hasn’t improved. Everyone now dies nine years after detection instead of four. The five-year survival rate is now 100 percent.

But the mortality rate remains unchanged, because the same relative number of people are dying every year. We’ve just moved up the time of diagnosis and potentially subjected people to five more years of therapy, increased health care spending and caused more side effects. No real improvements were made.
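The thumb-cancer story can be reduced to a few lines of arithmetic. This is a minimal sketch of lead-time bias, with made-up ages, in which the scanner moves the date of diagnosis but not the date of death:

```python
death_age = 70                    # everyone dies at the same age either way

# Before the scanner: the lump is felt at 66, death comes 4 years later.
old_diagnosis_age = 66
old_survival_years = death_age - old_diagnosis_age   # 4 years

# After the scanner: diagnosis comes 5 years earlier; therapy is unchanged.
new_diagnosis_age = old_diagnosis_age - 5
new_survival_years = death_age - new_diagnosis_age   # 9 years

print(old_survival_years >= 5)   # False: no one counts as a 5-year survivor
print(new_survival_years >= 5)   # True: everyone counts as a 5-year survivor
# death_age never moved, so the mortality rate is exactly the same.
```

The five-year survival rate goes from 0 percent to 100 percent, yet every patient dies at the same age as before.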

But if we just looked at survival rates, we would think we made a difference. Unfortunately, that happens far too often in international comparisons, as the United States often does much more screening than other countries and then justifies it through improved survival rates.

The second problem with using survival rates is overdiagnosis bias. Let’s say that a certain number of cases of thumb cancer that are detectable by scan never progress to a lump. That means some subclinical cases that would never lead to death are now being counted as diagnoses.

Since they were never dangerous, and we’re now picking them up by scans, they’re improving our survival rates. But they do nothing for mortality rates because no fewer people are dying.
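Overdiagnosis inflates survival rates through the denominator. A toy sketch, with invented counts: suppose scans add a batch of indolent cases, all of whom survive by definition, to a pool of lethal cases whose outcomes haven't changed at all.

```python
lethal_cases = 100
lethal_survivors_at_5y = 10      # hypothetical: 10 percent survive 5 years

# Scans now also detect indolent cases that would never cause death.
indolent_cases = 50              # all alive at 5 years, by definition

before = lethal_survivors_at_5y / lethal_cases
after = (lethal_survivors_at_5y + indolent_cases) / (lethal_cases + indolent_cases)

print(f"{before:.0%} -> {after:.0%}")   # 10% -> 40%
# The number of deaths is 90 in both scenarios; only the denominator grew.
```

Survival quadruples while the same 90 people die. That is why a country that screens aggressively can post impressive survival rates without saving a single additional life.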

These two factors are important to consider when you compare ways of caring for cancer, especially when there are differences in the ways diagnosis and screening occur. For many cancers, we’ve been diagnosing significantly more cases, but making little headway in mortality rates.

The first Health Affairs study I mentioned compared survival rates for 13 cancers in the United States with those in 10 European countries. The researchers took the amount of money spent on cancer care and determined how much was spent to achieve the better survival rates seen in the United States. They concluded that the increased spending on care was less than the value achieved.

But that value was measured using survival rates. Moreover, almost all the gains came from just two cancers: breast cancer and prostate cancer. These are the two cancers most hotly debated in terms of whether the United States screens too aggressively and diagnoses too much. Aggressive screening and overdiagnosis both inflate survival rates, through lead-time and overdiagnosis bias, which makes survival an unappealing metric for exactly this comparison.

The more recent Health Affairs study went back to the drawing board and started over with mortality rates. It was also a wider study. The researchers included 20 countries in Western Europe. They also added lung cancer, which was left out of the 2012 study, but which is the largest cancer killer in the developed world.

The differences in mortality rates between the United States and Western Europe are nowhere near as large as the differences in survival rates. Even so, the United States often outperforms Europe. From 1982 to 2010, it’s estimated that we averted almost 67,000 deaths from breast cancer compared with Western Europe. We averted almost 60,000 deaths from prostate cancer and almost 265,000 deaths from colorectal cancer.

But at what cost? The researchers found that the incremental cost of each year of quality adjusted life, or QALY, gained for colorectal cancer was $110,000. For breast cancer, we spent more than $400,000 per QALY gained. For prostate cancer, we spent almost $2 million per QALY gained.
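The figures in that paragraph come from a standard cost-effectiveness ratio: incremental spending divided by incremental QALYs gained. A hedged sketch of the arithmetic, with illustrative inputs of my own rather than the study's actual data:

```python
# Incremental cost-effectiveness: extra dollars spent per extra
# quality-adjusted life-year (QALY) gained, relative to a comparator.
def cost_per_qaly(extra_spending, extra_qalys):
    return extra_spending / extra_qalys

# Illustrative: $1.1 billion of extra spending buying 10,000 extra QALYs
# works out to the study's colorectal-cancer figure of $110,000 per QALY.
print(cost_per_qaly(1_100_000_000, 10_000))  # 110000.0
```

When a country spends more but comes out with fewer QALYs, the denominator flips sign and the ratio turns negative, which is what happens with lung cancer in the next paragraph.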

We often focus on breast, colorectal and prostate cancer because we do better with those diseases. But we don't with all cancers. Over the same period, the United States had more than 1.1 million more deaths from lung cancer than Western Europe. Because we still spent more on care for this disease even as quality adjusted life-years were lost, we had a negative cost per QALY of about $19,000. We also had negative costs per QALY for other cancers, including melanoma (about $137,000) and cervical cancer (about $855,000).

As I’ve written before, discussions of cost effectiveness are difficult to have in the United States. I am sure there are many people who believe that $400,000 isn’t too much money to give a woman with breast cancer an additional year of quality adjusted life. But this is money we can’t then spend on other treatments or other therapies that might do more good for more people.

We should have these conversations, and we should ground them in the right data. When it comes to preventing death, we need to consider mortality rates, not survival rates, or we may be getting far less for our money than we think.

Aaron E. Carroll is a professor of pediatrics at Indiana University School of Medicine. He blogs on health research and policy at The Incidental Economist, and you can follow him on Twitter at @aaronecarroll.