Weak associations that we agree with are still weak associations

For decades we’ve been plagued by dietary guidelines based on studies that found weak associations generated from food frequency questionnaires.

Here’s how it works: some researchers want to look into whether something people eat affects their health, so they come up with a survey, send it to, say, 100 people, and ask questions like:

Q25: Over the last year, on average how many oranges did you eat a week?

- 0
- 1
- 2-6
- 1 daily
- more than 1 daily

And then they ask the same question about eggs, bacon, avocados, etc.

Then they ask these people’s physicians questions such as:

Q37: Considering Patient XXXX, over the past year, how many heart attacks did they suffer?

- None
- 1 non-fatal
- several non-fatal
- 1 fatal

When they get the information back, they look at the food frequency questions and match that with the health data and start to see patterns.

| Cohort   | Males | Females | Nonfatal MI | Fatal MI |
|----------|-------|---------|-------------|----------|
| Avocados | 50    | 0       | 0           | 0        |
| Bacon    | 40    | 0       | 1           | 0        |
| Oranges  | 60    | 0       | 0           | 0        |
| Everyone | 100   | 0       | 2           | 0        |

For example, they see that the entire population of 100 people had 2 nonfatal heart attacks, so the rate in the whole population is 2/100. But if you count just the 40 people who ate bacon at least once a week, one of them had a heart attack, so the rate in that cohort is 1/40.

| Cohort   | Heart attack rate | Risk |
|----------|-------------------|------|
| Avocados | 0/50              | 0%   |
| Bacon    | 1/40              | 2.5% |
| Oranges  | 0/60              | 0%   |
| Everyone | 2/100             | 2%   |

So the risk of a heart attack in the entire population is 2%, and the risk of a heart attack in the bacon-eating population is 2.5%. Doing the math: (2.5 - 2.0)/2.0 = 25%. Well, there is your juicy headline.

Eating bacon is associated with a 25% increase in heart attack risk
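The arithmetic behind that headline can be checked in a few lines. This is just a sanity check of the hypothetical survey numbers above, not real data:

```python
# Sanity-check the arithmetic above using the hypothetical survey numbers:
# absolute risk in each cohort, then the relative increase for bacon eaters.

def absolute_risk(events, cohort_size):
    """Events divided by the number of people in the cohort."""
    return events / cohort_size

everyone_risk = absolute_risk(2, 100)  # 2 heart attacks in 100 people = 2%
bacon_risk = absolute_risk(1, 40)      # 1 heart attack in 40 bacon eaters = 2.5%

# Relative increase over the whole-population baseline: (2.5 - 2.0) / 2.0 = 25%
relative_increase = (bacon_risk - everyone_risk) / everyone_risk

print(f"everyone: {everyone_risk:.1%}, bacon: {bacon_risk:.1%}, "
      f"relative increase: {relative_increase:.0%}")
# everyone: 2.0%, bacon: 2.5%, relative increase: 25%
```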

Then, in peer review, other scientists complain that all of the subjects were white males. So let’s say the researchers correct for that by mailing the survey to an additional 900 people, this time capturing responses from women and non-Caucasian people. Now the entire population had 12 heart attacks, so the general population rate has dropped to 1.2%; after all, they added a lot of women and people from groups that have less chance of a heart attack. When they review their data they find that the proportion of bacon eaters has dropped from 40 in 100 to 198 in 1,000, and that group had 3 heart attacks.

| Cohort   | Males | Females | Nonfatal MI | Fatal MI |
|----------|-------|---------|-------------|----------|
| Avocados | 205   | 210     | 5           | 0        |
| Bacon    | 140   | 58      | 3           | 0        |
| Oranges  | 290   | 320     | 5           | 0        |
| Everyone | 400   | 600     | 12          | 0        |

Now the heart attack risk among people who eat bacon comes out to 1.52%, and the relative risk increase is 26.26% [the math: (1.52 - 1.2)/1.2 = 26.26%].

| Cohort   | Heart attack rate | Risk  |
|----------|-------------------|-------|
| Avocados | 7/693             | 1.01% |
| Bacon    | 3/198             | 1.52% |
| Oranges  | 5/610             | 0.82% |
| Everyone | 12/1000           | 1.2%  |
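Recomputing both rounds side by side (again with the hypothetical numbers) makes the oddity explicit: the absolute risk among bacon eaters fell, yet the headline relative risk rose, because the enlarged sample lowered the baseline.

```python
# Both rounds of the hypothetical study, computed the same way. Note that the
# bacon eaters' absolute risk FELL (2.5% -> ~1.52%) while the relative
# increase over the baseline ROSE (25% -> ~26.26%), purely because the added
# lower-risk subjects diluted the whole-population rate.

def relative_increase(events, n, baseline_events, baseline_n):
    group_risk = events / n
    baseline_risk = baseline_events / baseline_n
    return (group_risk - baseline_risk) / baseline_risk

round_1 = relative_increase(1, 40, 2, 100)      # first study
round_2 = relative_increase(3, 198, 12, 1000)   # expanded study

print(f"round 1: {round_1:.2%}, round 2: {round_2:.2%}")
# round 1: 25.00%, round 2: 26.26%
```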

So now they publish that the latest science shows the relative risk of bacon giving you a heart attack has increased from 25% to over 26% with improved research. The popular press picks up on it, and you see headlines like:

“If you stop eating bacon you will lower your chance of dying from a heart attack by more than a quarter”

So you tell me: knowing the underlying data, is that a reasonable statement?

Let’s count all the problems with this scenario:

How reliable is the underlying data – can you recall what you ate in the past year?

Did we improve our accuracy with respect to bacon by adding relatively more non-bacon eaters?

Did adding more subjects make the study more accurate? Remember that the rate of heart attacks among bacon eaters dropped from 2.5% to 1.52%, even as the headline relative risk went up.

Can we say from this that eating bacon causes nonfatal heart attacks? Could the explanation also be that having a nonfatal heart attack causes people to begin eating bacon?

Is it possible there is a third factor, like habitually ignoring advice? Maybe not following advice to exercise caused the heart attack, and not following advice to reduce salt and saturated fat caused the bacon eating. In that case the bacon is not the cause; it is just another effect.

Is it possible that people who don’t eat bacon for religious reasons also have a protection against heart disease?

Or do people regularly eat bacon with some other food that causes heart disease?

Oh, and did you notice the flim-flam they pulled about increasing the risk of dying from a heart attack? No one in this study died, but the researchers were able to lean on another association: people who have a nonfatal heart attack are more likely to go on to have a fatal one.

We can keep finding ways in which this observation of an association in an epidemiological study does not, and cannot, establish cause. It is a weak association. It is only adequate to generate the hypothesis that bacon causes heart attacks. Then we can run a blinded, controlled clinical trial where we give a random half of the group bacon, and the other half something that looks like bacon but isn’t. We keep everything else constant and we test the hypothesis.
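The randomization step just described can be sketched minimally. The subject IDs here are hypothetical, and a real trial would also need blinding procedures, allocation concealment, and pre-registered endpoints:

```python
import random

# Minimal sketch of random assignment: shuffle the subjects, then split them
# into a bacon arm and a look-alike placebo arm of equal size.
subjects = [f"subject_{i:03d}" for i in range(100)]  # hypothetical IDs

rng = random.Random(42)  # fixed seed only so this sketch is reproducible
rng.shuffle(subjects)

bacon_arm = subjects[:50]    # receives real bacon
placebo_arm = subjects[50:]  # receives something that looks like bacon

print(len(bacon_arm), len(placebo_arm))  # 50 50
```

Because assignment is random, anything else that differs between people (exercise habits, religion, other foods) should end up roughly balanced across the two arms, which is exactly what the observational study cannot guarantee.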

The problem is that randomized clinical trials like that are prohibitively expensive, so well-meaning institutions, mindful of the expense to the public purse, have saved us all money by focusing on cheap epidemiological studies.

Take the Nurses’ Health Study, which started in 1976 with 121,700 married female registered nurses between the ages of 30 and 55. In 1989 the Nurses’ Health Study II added another 116,430 participants, and a third cohort is currently recruiting 280,000 nurses (this time male subjects will also be taken). In these studies, a food frequency survey is sent out every 2 years for participants to fill in.

They have produced hundreds of papers teasing out weak associations, one of which found a 13% reduction in risk of coronary heart disease associated with a 5% substitution of polyunsaturated fat for saturated fat.

https://www.ncbi.nlm.nih.gov/pubmed/19211817

How did they establish the risk of cardiovascular disease? They looked at the effect on cholesterol and used an association found in ANOTHER epidemiological study (the Framingham Study), which found a weak association between total cholesterol and heart disease in men (and an even weaker one in women).

Meanwhile, at least two randomized controlled trials actually tested replacing saturated fat with polyunsaturated fat, and their results refuted the claimed benefit. Why don’t we know about them? They were buried for 40 years. You decide why that is.

Either is sufficient to show that replacing saturated with polyunsaturated fats is not supported by the evidence. But we’re talking about associational studies.

Okay, so there are two useful inferences you can draw from associational studies like these. The first is causal, if the effect size is large enough. For example, if you see a rate of 10% in the general population but 20% in the exposed population, that is a 2x increase, and now you have a stronger case; at least you have met the first of the Bradford Hill criteria for causation, strength of association.

The second runs the other way: if you find no association at all, you can use the NON-CORRELATION to infer NON-CAUSATION. For example, let’s consider our hypothetical study into food that causes heart attacks. Let’s say we looked at people who ate avocados and found that it didn’t matter how many avocados you ate, because the chance of a heart attack remained pretty much the same. Then you could say that eating avocados can be shown not to cause heart attacks.

| Avocado use       | Cohort | Nonfatal MI | Rate  |
|-------------------|--------|-------------|-------|
| None              | 99     | 1           | 1.01% |
| 1/week            | 202    | 2           | 0.99% |
| 2-6/week          | 96     | 1           | 1.04% |
| Daily             | 198    | 2           | 1.01% |
| More than 1 daily | 98     | 1           | 1.02% |
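Recomputing the rates in the table above shows just how flat the dose-response curve is (hypothetical numbers, as before):

```python
# Recompute the avocado dose-response rates from the hypothetical table above.
# A causal effect would normally show risk rising (or falling) with dose;
# here the rate is essentially identical at every intake level.

dose_strata = {  # intake level: (cohort size, nonfatal MIs)
    "none": (99, 1),
    "1/week": (202, 2),
    "2-6/week": (96, 1),
    "daily": (198, 2),
    "more than 1 daily": (98, 1),
}

rates = {level: mi / n for level, (n, mi) in dose_strata.items()}
spread = max(rates.values()) - min(rates.values())

for level, rate in rates.items():
    print(f"{level:>18}: {rate:.2%}")
print(f"spread across all doses: {spread:.3%}")  # about 0.05 points
```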

Which brings me to the PURE Study, also known as, “Associations of fats and carbohydrate intake with cardiovascular disease and mortality in 18 countries from five continents (PURE): a prospective cohort study”

You might have seen this referenced by one of the principal investigators in this video on YouTube.

This study is a large epidemiological cohort study. Dietary intake of 135,335 individuals (aged 35–70, across 18 countries) was recorded using validated food frequency questionnaires. The primary outcomes were total mortality and major cardiovascular events (fatal cardiovascular disease, nonfatal myocardial infarction, stroke, and heart failure).

The summary from this study is:

High carbohydrate intake was associated with 28% higher risk of total mortality, whereas total fat (23%) and saturated fat (14%) were related to lower total mortality. Total fat and types of fat were not associated with cardiovascular disease, myocardial infarction, or cardiovascular disease mortality, whereas saturated fat had an inverse association with stroke. Global dietary guidelines should be reconsidered in light of these findings.

Sounds pretty good for our hypothesis that carbohydrate in the diet causes hyperinsulinemia. And hyperinsulinemia is sufficient and necessary to cause cardiovascular disease, right?

But here is the problem. It’s a weak association. A 28% relative difference is inadequate to infer that carbohydrates cause death from heart disease. It is adequate to say, “Hmmm, interesting hypothesis. Let’s test it in a randomized controlled experiment.”

If we accept this result because we like the results then we are no better than the people who keep torturing the Nurses Health Study data to produce studies supporting their dogma (and who hid the results of at least 2 RCTs that refuted their dogma).

I’m looking at you, Dietitians.

This should be a teachable moment: we don’t need to rely on weak associational results when we can point to actual RCTs showing low carbohydrate diets reversing type 2 diabetes, possibly the primary risk factor for cardiovascular disease.

What this study DOES add to the sum of human knowledge, however, is the non-correlation between saturated fat and cardiovascular disease, myocardial infarction, or cardiovascular disease mortality. And as we’ve already seen, NON-CORRELATION is adequate to infer NON-CAUSATION.

This study shows that our dietary guidelines (which are founded upon the principle of removing saturated fat from the diet because of a supposed link to cardiovascular disease) must be reviewed to remove that prohibition, because the science no longer supports that claim (if it ever did).

Notable Replies

Although the PURE study is of limited value, I did chuckle when I thought of the fact that sometimes people lie or otherwise skew their food diary responses toward what they think the researchers might want to hear. (Which, of course, is a near-fatal flaw for questionnaires.) But here, if that phenomenon was true, and bacon still won out? Wow…

Although not appropriate for the topic of your post, you touched upon another major distinction people need to understand. The difference between absolute and relative risk. Something that the press has no idea about, and completely messes up when reporting on things.

Amazing, Richard! I am still catching up on all of your podcasts and just got to the Conventional Wisdom show this morning, where you and Carl spoke on this exact topic: the problem with doctors and dietitians referencing epidemiological studies rather than randomized controlled studies, which are much more definitive. It really got me thinking. You specifically mentioned how someone responding to this survey might not even be 100% truthful; they may sway their answers toward what they believe are the healthy answers. I know I am guilty: taking a survey on myself, I tend to rate myself higher to look better. So the individuals in an epidemiological study may be doing the same thing (it really is human nature to want to make yourself look as good as possible). They get the survey and, thanks to the conventional wisdom that “bacon is bad, fruit is good”, they answer with a slight sway to make themselves more in line with that mentality, which can really disrupt any valid results a study like that could provide.

The difference between absolute and relative risk. Something that the press has no idea about, and completely messes up when reporting on things.

So true, Brian. One group has a very low absolute risk, let’s say 0.0015% chance of dying from cause X. Another group has 0.00263% chance. OHMIGOD, the second group has a 75% higher chance of dying, there!!
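That reply in code: a headline-grabbing relative increase can sit on top of a negligible absolute difference. The two percentages are the hypothetical ones from the comment:

```python
# Absolute vs relative risk, using the comment's hypothetical numbers.
group_a = 0.0015 / 100   # 0.0015% chance of dying from cause X
group_b = 0.00263 / 100  # 0.00263% chance of dying from cause X

relative_increase = (group_b - group_a) / group_a  # the "75% higher!" figure
absolute_increase = group_b - group_a              # the actual extra risk

print(f"relative: {relative_increase:.0%}, "
      f"absolute: {absolute_increase * 100:.5f} percentage points")
# relative: 75%, absolute: 0.00113 percentage points
```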

In research, particularly psychology, demand characteristics refers to an experimental artifact where participants form an interpretation of the experiment’s purpose and unconsciously change their behavior to fit that interpretation. Pioneering research on demand characteristics was conducted by Martin Orne. Typically, they are considered an extraneous variable, exerting an effect on behavior other than that intended by the experimenter.

In my opinion, observational studies are not science. Or at least not the science that they think they are studying. They are more along the lines of psychology and sociology than biology or nutrition.

What observational studies are is a starting point for the real science. Eating X or doing Y is associated with outcome Z. The first thing to ask is “is this plausible?”, i.e. does it make sense from an evolutionary and biological viewpoint that the behavior could cause the effect? For example, being a basketball player is positively associated with height. So does playing basketball cause people to be tall? If you can come up with a biological reason for that, then the real science would be to take a group of people, assign them to basketball-playing and non-basketball-playing groups, and see if one group gets taller than the other. I’m going to guess that the answer will be no, playing basketball does not make you taller, and that the association is backwards: taller people tend to play more basketball. But I don’t know which is true merely from the observation that basketball players tend to be tall. Or whether there is any causal relationship at all.