Essay #5:

What do epidemiologists see as causes?

This essay will review the conceptual framework that epidemiologists use to decide whether a particular risk factor is causally associated with a health outcome. We will begin with the relatively simple scenario in which only one risk factor is under consideration and no other risk factors for the outcome exist. We will attempt to distinguish non-causal statistical associations from causal mechanisms. Although the notion of causality is simple at an intuitive level, a formal demonstration of causality proves rather difficult. Indeed, in the context of epidemiology, we may be faced with considerable complexity in deciding whether a risk factor is causal. This is partly because epidemiologists are not usually able to conduct experiments in the population, but also because other risk factors are usually in play, and because the underlying mechanisms by which exposures cause disease are not always clear. The further question of multiple causes for a disease will be developed in the final essay in this series.

Intuitive ideas about causality

Most people carry with them a relatively simple and intuitive interpretation of causality, but nevertheless one which is quite useful in practice. When they observe that an outcome follows some particular exposure or event, and they believe that the outcome would not have occurred without the prior exposure or event, they conclude that the outcome was “because of” the exposure, or that the exposure “caused” the outcome. For instance, if there is a head-on traffic collision and all the passengers are found dead at the scene, one would probably conclude that the accident “caused” the deaths of those individuals. Implicitly, we are saying that had the accident not occurred those people would not have died.

This kind of logic is known as counterfactual causation. We observe that A (the accident) preceded B (the deaths) and we imagine that failure to observe A would also have led to a failure to observe B. Unfortunately, simple conclusions like this may not be adequate. In the traffic accident example, suppose that the drivers had been drinking, or that the road surface was icy. Suppose one car was full of noisy teenagers playing the radio at high volume and distracting the driver, while the driver of the other car was trying to locate his cell phone in his pocket and answer it. While initially we thought that the accident caused the deaths, now we are not so sure. Perhaps it is a combination of risk factors that might be thought of as responsible for the accident itself and hence for the deaths. We will discuss the problems of separating the effects of several possible causes in the final essay of this series.

Causality for epidemiologists

Epidemiologists face these interpretational difficulties in much of their work, and they have long worked, at least intuitively, with ideas that touch upon the counterfactual approach. As early as 1957, for example, Lilienfeld wrote that epidemiologists needed a practical approach, and proposed that if taking away some presumed cause of a disease (or imagining that it is taken away) would lead to less disease in the population, that factor might indeed be seen as a cause.

The ideal approach to demonstrating a simple causal effect would be to observe both the fact and the counterfact. In the example above, we would observe what happened with the car, then go back in time, change a single factor (say, the drinking), and let time run again, to see whether changing that single factor prevents this particular accident or not.

That is, of course, impossible. The closest we can come is to design a suitable experiment. In the experiment, some individuals would be randomly assigned to be exposed to the potential risk factor (say, drinking alcohol), while the remaining participants would be randomly assigned to be unexposed (they would not be drinking). With an infinitely large sample of participants, we would expect all other potential explanations of the outcome to be equally balanced between the exposed and unexposed groups. If we then observe a difference in the frequency of fatal accidents between those groups, we can reasonably conclude that exposure to the potential risk factor indeed causally affected the risk of the outcome. Still, we would need all those people to adhere strictly to what we asked of them (drunk driving vs. sober driving), and we would need complete follow-up. Moreover, to single out such a factor, we need some good theory behind it. Unfortunately, epidemiologists are not usually at liberty to assign exposures at random, as such an experiment would require. It would be impossible to randomly assign some drivers to consume large amounts of alcohol and dispatch them in their cars to the highway, while keeping the other, sober drivers as controls. So, the experiment is impossible.
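The balancing role of random assignment can be illustrated with a small simulation. Everything here is hypothetical (the confounder, its prevalence, and the sample size are made up for illustration); the point is only that a coin-flip assignment leaves other risk factors nearly equally distributed between the two arms:

```python
import random

random.seed(42)

# Purely illustrative: each driver has a binary confounder, say
# "icy road on their route", present with probability 0.3.
n = 100_000
confounder = [random.random() < 0.3 for _ in range(n)]

# Random assignment to "exposed" (drinking) or "unexposed" (sober):
# a fair coin flip per driver, independent of the confounder.
exposed = [random.random() < 0.5 for _ in range(n)]

# With random assignment, the confounder's prevalence is nearly
# identical in the two arms - the balance that lets us read an
# outcome difference as the effect of the exposure itself.
n_exposed = sum(exposed)
prev_exposed = sum(c for c, e in zip(confounder, exposed) if e) / n_exposed
prev_unexposed = sum(c for c, e in zip(confounder, exposed) if not e) / (n - n_exposed)

print(f"Confounder prevalence, exposed arm:   {prev_exposed:.3f}")
print(f"Confounder prevalence, unexposed arm: {prev_unexposed:.3f}")
```

With a large sample the two prevalences agree closely; with a small one they may not, which is why chance imbalance remains a caveat even in randomised studies.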

Note that even in situations where an experiment is possible, for example in assessing the effect of a therapeutic drug, the ideal experiment (extremely large numbers, everyone sticking to their assigned treatment, no loss to follow-up) is still impossible. Even drug trials are approximations to the ideal, and are therefore always subject to diverse interpretation and discussion. In epidemiology, however, even an approximate trial can often not be done.

Causality and probabilities

A further difference between the epidemiologist’s world and the ideal experiment is that health outcomes have only a certain probability of occurring in any given individual. Completely deterministic links do exist (for instance, being bitten by a rabid dog and remaining untreated almost invariably leads to death by rabies), but these examples are rare. Usually the epidemiologist must compare the observed probabilities of an outcome between two or more comparison groups in his study. The inevitable statistical fluctuations that will occur in the estimated probabilities for each group will have to be taken into account.

In comparing the observed disease probabilities between groups, we are implicitly trying to decide whether the probability difference indeed represents the causal effect of the exposure. If so, we can calculate how much of the risk might have been prevented if all exposed individuals had been unexposed, under the assumption that these persons would then have experienced the disease risk of the unexposed members of the study, and vice versa. Of course, as became clear from the car accident example, study members can usually be observed only once, in whichever exposure group they happen to fall; the counterfactual outcome, or what “might have been”, is not observable. So, for example, if we are considering the risks of death in coal miners, we might compare them to workers in some other industry, such as an oil refinery. For many reasons, however, we cannot really be sure that if a coal miner had taken a different life path and worked instead at the oil refinery, he would then have experienced the risks observed among the oil refinery workers. As they say, “you only live once”, and so the miners and the oil refiners can only be observed in their lives as they actually took place, with all the inherent differences that implies.
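The calculation described above can be made concrete with made-up numbers (these figures come from no real study; they only illustrate the arithmetic). Under the counterfactual assumption that the unexposed group's risk is what the exposed group would have experienced without the exposure, the excess risk and the fraction of exposed cases attributable to the exposure follow directly:

```python
# Hypothetical figures: risk of a fatal outcome in an exposed group
# (e.g. coal miners) and an unexposed comparison group (e.g. oil
# refinery workers), per person over the study period.
risk_exposed = 4.0 / 1000    # observed risk among the exposed
risk_unexposed = 1.0 / 1000  # observed risk among the unexposed

# Risk difference: the excess risk laid at the door of the exposure,
# assuming the unexposed risk is the counterfactual for the exposed.
risk_difference = risk_exposed - risk_unexposed

# Attributable fraction among the exposed: the share of exposed cases
# that would not have occurred without the exposure, under the same
# counterfactual assumption.
attributable_fraction = risk_difference / risk_exposed

print(f"Risk difference: {risk_difference:.4f}")              # 0.0030
print(f"Attributable fraction: {attributable_fraction:.2%}")  # 75.00%
```

The arithmetic is trivial; the substantive question, as the essay stresses, is whether the counterfactual assumption behind it is believable for the groups actually compared.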

The probabilistic element of causal thinking appears in an interesting light in the legal arena. In compensation or personal injury cases, workers might allege that their cases of cancer were caused by exposures in the workplace. A frequently used criterion for the judgment at trial is the “balance of probabilities”: that it was more likely than not that the exposure caused the disease. In other words, the judge or jury is required to assess the probability that the exposure caused the disease, and to award compensation if they determine this probability to be more than 50%. Notice that the epidemiologic evidence will not apply directly to the individual plaintiffs, but comes in the form of aggregated information from groups of people observed in epidemiologic studies. The epidemiologic data can help to assess whether more than half the cases were caused by the exposure, but again this cannot be done at the level of the individual plaintiff. In other legal cases, the criterion is different. In murder trials, for instance, the jury is usually required to be convinced of the defendant’s guilt “beyond reasonable doubt”, or in other words, to hold a probability of causation in their minds very close to one. That is impossible when epidemiology is invoked. Moreover, the “balance of probabilities” can be a misleading use of epidemiology, since a very large number of people might be harmed by some environmental factor or adverse drug effect without the risk ever being doubled.
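The link between the group-level evidence and the “more likely than not” standard can be sketched in a few lines. One common (and debated) translation uses the relative risk RR: the fraction of exposed cases attributable to the exposure is (RR − 1)/RR, so it exceeds 50% only when the risk is more than doubled. The function name and cut-offs below are for illustration only:

```python
def probability_of_causation(relative_risk: float) -> float:
    """Fraction of exposed cases attributable to the exposure,
    (RR - 1) / RR, often loosely read as the 'probability of
    causation' for a randomly chosen exposed case."""
    return (relative_risk - 1.0) / relative_risk

# Illustrative relative risks: only an RR above 2 pushes the
# attributable fraction past the 50% "balance of probabilities" bar.
for rr in (1.5, 2.0, 3.0):
    pc = probability_of_causation(rr)
    verdict = "more likely than not" if pc > 0.5 else "not more likely than not"
    print(f"RR = {rr}: attributable fraction = {pc:.2f} -> {verdict}")
```

This is exactly the essay's closing caveat in numeric form: an exposure with RR = 1.5 harms a third of the exposed cases, potentially a very large number of people, yet no individual plaintiff clears the 50% threshold.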

Attempts to think about establishing causality

The problem of identifying causal links between exposures and diseases raises some very deep philosophical issues. These were heavily discussed in the 1950s by epidemiologists, who began to use a set of practical guidelines summarized in 1965 by Sir Austin Bradford Hill. Hill proposed some “causal viewpoints” for thinking about establishing causality which are often referred to as “Hill’s criteria” by epidemiologists. Although these criteria do form a useful framework, they do not establish causality by themselves - and in several instances, causality is accepted when one or more of the guidelines does not hold.

The first principle proposed by Hill is that the observed associations should be strong. The notion here is that strength is correlated with the likelihood that an association is causal. Strong associations are in general better candidates for causal links than weak associations, because it is difficult to imagine other factors that might have been the real causes: such other factors would have to show an even stronger association with the outcome, and would therefore either already be known or be unlikely to exist.

The second principle is consistency, referring to the idea that a causal association should lead to similar results being observed in various circumstances, such as when different populations are studied, or data from different time periods are compared. Unfortunately for epidemiology, the situation is complicated by the fact that health outcomes (e.g. fever and associated disease) may or may not occur following an exposure (e.g. infection). Because of the probabilistic nature of outcomes in most epidemiologic studies, there may be inconsistency and instability when the results of different studies are compared.

Hill also proposed a principle of specificity, this being the situation where a particular cause leads to only a single effect, and not to multiple effects. While this may be a useful criterion in some circumstances (for instance, the unique linkage between a viral infection and a particular disease), it certainly is not one that applies to all epidemiologic data. A counter-example is that smoking leads to a wide variety of disease outcomes, and is not confined to only one.

The next principle is temporality, which requires that the cause occurs before the effect in time. This particular principle seems eminently reasonable, and is in fact widely accepted by scientists. Note however that there may be some individuals for whom exposure actually occurs after the health outcome, for example, people who begin smoking after developing cancer. The existence of such individuals would not negate the importance of the temporality requirement in general, however.

The next principle is that of a biologic gradient: individuals with higher levels of exposure should have higher risks of disease than individuals with more modest or lower levels of exposure. This criterion is often reasonable (for example, the risk of death among burn victims increases with the percentage of body area affected by burns), but it is not necessarily true for all associations. A counter-example here is that modest consumption of alcohol may provide some protection against heart disease compared to no consumption at all, while heavy consumers of alcohol are at high risk of heart disease.

Hill’s next principle is one of biologic plausibility. This is a difficult criterion to put into practice, because one’s sense of plausibility about a particular association being causal will necessarily reflect one’s own background, experience, and prejudices. There are many examples where distinguished scientists have regarded new hypotheses about diseases as completely ridiculous, only to see those hypotheses subsequently proved correct. Still, in all instances, a study cannot be set up without at least having some general idea about potential causes.

The next principle is coherence, which requires the association to “fit” with current knowledge about the disease biology. So, for example, the coherence criterion for the association of smoking and lung cancer is satisfied by evidence documenting the mechanism of lung tissue damage from cigarette smoke, and the associated changes at the cellular level.

Hill also suggested that experimental evidence would be supportive of a causal interpretation. As mentioned earlier, the experimental approach may not be feasible for many epidemiologic problems, but even if experiments are possible, the results may be ambiguous. For instance, an educational campaign intended to increase the consumption of fruit and vegetables might achieve its apparent objective, but the real effect on food consumption patterns might be driven more by price changes over the time period of the study.

Hill’s final principle is one of analogy, which is arguing the plausibility of a causal association on the basis of similar observations from another context. Again, this is a difficult criterion to apply for epidemiologists, especially because an analogy can be found for almost any set of observations, given a sufficiently rich imagination. As such, therefore, the analogy criterion does not appear to be particularly helpful.

Although Hill’s principles have been very widely cited, Hill himself indicated that they did not constitute an absolute checklist of requirements to establish causality. Indeed, he framed them with many possible exceptions and caveats. Commentators on Hill’s work have concluded that there is no necessary or sufficient criterion to determine whether an observed association is causal.

As we have seen, epidemiologists are usually not able to adopt the ideal experimental approach to causality, nor even a very approximate experiment, because they cannot control or dictate who is or is not exposed to risk. And even if randomisation of exposure is possible, the results of randomised studies may still not provide adequate evidence of causality. Instead, epidemiologists must observe people in their day-to-day lives, but without artificially exposing them to risk. While this considerably complicates the causal interpretation of epidemiologic data, such a challenge is also one of epidemiology’s greatest strengths. By studying these issues in the real world outside the laboratory, epidemiologists are confronting health problems in the most relevant way for human populations.