
One of the major problems in nutritional epidemiology is that we have a hard time measuring things that we are supposed to be measuring in order to say anything meaningful about relationships between diet and chronic disease. You know, things like how much people are actually eating or how much physical activity they really do get.

Today I had the opportunity to hear Walter Willett, king daddy of the field of nutritional epidemiology, speak on just this dilemma (and yes, he still has that sweet ‘stache). Introduced as having received “too many awards to mention,” W began his talk–“Energy Balance and Beyond: The Power and Limits of Dietary Data”–by addressing the recent unpleasantness raised by researchers who have suggested that the dietary data that we collect simply isn’t worth analyzing (Archer, Pavela, & Lavie 2015; Dhurandhar et al. 2015).

Having been recently immersed in my rhetoric of science readings, I noted that W started right off with some perfunctory boundary work, as a way of indicating who was “in” and who was “out” when it came to credibility: He noted that the investigators questioning the value of dietary self-reports are “funded by Coca-Cola,” and even the ones that are “pretty good scientists” are “not epidemiologists” and therefore “a little bit naive.” So much for evaluating the data and the arguments on their own merits.

Then he trotted out the “slippery slope” argument: If we throw out self-reported dietary data because it is wildly inaccurate, then not only do we have to “throw out the Dietary Guidelines” (heaven forfend!), but we’d have to throw out occupational safety and drug trial data also–because these are often based on self-reports. Certainly, there’s very little difference between reporting on events in your workplace or what side effects you might have in response to a pill you’re taking and reporting on what you remember eating over the course of the past year.

Then he got down to the nitty-gritty. The reason your Average American is fat is, to put it bluntly, because of math:

2500 kcals/day x 1% = 25 kcals/day

25 kcals/day x 365 days/year = 9125 kcals/year

9125 kcals/year = about 1 kg of weight gain/year
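For the arithmetic-inclined, the back-of-envelope above checks out in a few lines of Python. Note that the ~7,700 kcal-per-kilogram figure for body fat is a common rule of thumb I'm assuming here for illustration; it was not part of the talk:

```python
# Sketch of the "1% energy imbalance" arithmetic from the talk.
# KCAL_PER_KG is a rule-of-thumb energy density for body fat,
# assumed here for illustration; it was not on the slide.
DAILY_INTAKE_KCAL = 2500
IMBALANCE = 0.01          # "energy in" exceeds "energy out" by 1%
KCAL_PER_KG = 7700        # rough kcal stored per kg of body fat (assumption)

daily_surplus = DAILY_INTAKE_KCAL * IMBALANCE   # 25 kcal/day
yearly_surplus = daily_surplus * 365            # 9125 kcal/year
gain_kg = yearly_surplus / KCAL_PER_KG          # ~1.2 kg/year, i.e. "about 1 kg"

print(f"{daily_surplus:.0f} kcal/day -> {yearly_surplus:.0f} kcal/year -> ~{gain_kg:.1f} kg/year")
```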

Of course, the Average American only gains (on average) about 0.5 kg/year, but W easily explained the discrepancy: We gain weight, but then we have to expend more energy dragging our fat asses around–my words, not his–so we don’t gain as much weight as we would, but then the increased energy expenditure makes us hungry, so we eat more, so we gain weight, ad infinitum, only not, because we seem to plateau, but then, well, there’s that.

So here’s the $64,000 (or really a few hundred million in grant money) question: How good are we at measuring the 1% (or less) difference between “energy in” and “energy out” that we see expressed as weight gain in the population?

The answer, according to the man himself? Not very.

W then showed a comparison of a set of methods to measure “energy in,” along with their coefficient of variation (you can get the technical explanation here, but it is–simplistically–a measure of the amount of variability in your data; long story short: larger numbers = more variability, less precision and smaller numbers = less variability, more precision).
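To make the CV concrete, here's a toy illustration in Python. The five "repeat measurements" are numbers I invented purely for demonstration, not real dietary data:

```python
import statistics

# Coefficient of variation = standard deviation / mean.
# The "measurements" below are made up for illustration only.
def coefficient_of_variation(values):
    return statistics.stdev(values) / statistics.mean(values)

# Five hypothetical self-reports of daily intake (kcal) from one person:
reported_intakes_kcal = [2500, 2100, 2800, 2300, 2700]
cv = coefficient_of_variation(reported_intakes_kcal)
print(f"CV = {cv:.0%}")  # each report scatters this far, on average, around the mean
```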

It looked something like this:

Method                                  Coefficient of variation
Food frequency questionnaire (FFQ)      15%
Diet record                             13%
24-hour diet recall interview           28%
Doubly-labeled water (DLW)              9%
Weight                                  3%

What W made clear is that, if you want to know whether–or to what extent–Americans are indeed eating more and moving less, you can’t measure “energy in” using FFQs, diet records, and interviews and expect to get anywhere near that “1% of calories” accuracy you’re looking for. You can’t even rely on doubly-labeled water, typically considered to be a biomarker measurement with a high degree of precision. In fact, W made the point that DLW samples sent out to different labs would come back with results that differed by up to 50%.

W went on to explain that we have similar difficulties measuring “energy out,” with our very best measurements having a coefficient of variation of nearly 20%.

So in other words, or actually in W’s exact words:

“Weight is the best measure of energy balance.”

Wait? Weight?

As far as I can figure it, using weight as a way of “measuring” (and I use that term loosely) energy balance creates a theoretical–if not a methodological–situation that is, in a word, unfalsifiable. Or–in another word–bogus.

“Weight gain results when the things that we think cause weight gain happen, thus proving that those things have happened.”

At least one of the reasons for attempting to measure “energy in” and “energy out” is to find out whether or not it makes sense to attribute weight gain (or loss) to the differences between them. Instead, W is saying, we can know all we really need to know about how much people eat or move or both, because, voila, weight.

This is like saying, even though traces of tooth fairy fingerprints, footprints, or fiber samples are extremely difficult to obtain, we can reliably determine the existence of tooth fairies by the presence of quarters under your pillow.

Not only does this completely disregard the ever-growing list of things that may also contribute to weight gain/loss over time,* but it contradicts W’s own assertion just moments later that–even though we can’t do it very accurately–a good reason for attempting to measure total energy intake is that it can act as a “crude measure of physical activity.”

On the one hand, W is saying we can estimate how active you are by finding out how much you eat because people who eat more are also likely to be more active–that’s why they eat more–and people who eat less are typically less active–that’s why they don’t eat as much–with the implication being that “energy in” and “energy out” are positively related; as one changes, the other changes in the same direction.

On the other hand, he’s saying these two variables are completely independent of each other. Eating more doesn’t imply you move more; moving less doesn’t imply you eat less. And the way we know that these two variables are disconnected in any given Average American is, voila again, weight.

At this point, it was hard not to be completely distracted by the cognitive dissonance ringing through the room.

I did hang on long enough to hear W say that we shouldn’t really be worried about energy intake anyway because what really matters is diet quality, which, by the way, we can’t measure accurately either.

With all due respect, all I could think of is that while Emperor W may not be completely without clothes, he was definitely down to boxer shorts today.

In an auditorium full of really smart people, I cannot have been the only person thinking that W and his data looked a little over-exposed. But–as we saw with the circumstances revealed by the Ramsden and Zamora paper last week–it can be hard to contradict a famous colleague, and in nutritional epidemiology, no one is famouser than W. It may be even harder, I suppose, when he is an invited guest and an apparently nice fellow. The Q&A was respectful and polite. Difficult as it is to believe, even I kept my mouth shut.

I know science sometimes advances one funeral at a time, and I truly wish W a long and happy life. But maybe he’ll start to get a little chilly there in his boxers and start thinking about retiring someplace warm. Soon.


In case you missed it, in a recent article published in the American Journal of Preventive Medicine entitled Overstatement of Results in the Nutrition and Obesity Peer-Reviewed Literature (not making this up), the authors found that a lot of papers published in the field of obesity and nutrition have, shall we say, issues.

What’s more, nutrition policy recommendations are supposed to be based on observational data. Hello? Dietary Guidelines? (Seriously. You don’t expect public health nutrition people to do actual experiments now, do you? I mean, unless you are talking about our population-wide, no-control-group, 35-year experiment with low-fat diet recommendations, but that’s different.)

And we don’t mind generalizing conclusions to Everyone in the Whole Wide World based on data from a bunch of white health care professionals born before the atom bomb because, honestly, those are the only data we really care about.

Equating correlation and causation, over-generalizing observations, and then using these results as the basis of policy is the bread (whole wheat) and butter (substitute) of nutrition epidemiology of chronic disease (aka NECD – pronounced Southern-style as “nekked”). NECD has a long proud tradition of misinterpreting results this way, and dammit, nobody is going to take that away from us.

Early NECD researchers have in the past tried to tentatively misinterpret results by obliquely implying that observed nutritional patterns might perhaps have resulted in the disease under investigation. Wusses.

In 1990, Walter Willett and JoAnn Manson came along to show us how the pros do it. These mavericks were the ones who made bold inroads into the kind of overreaching conclusions that made NECD great. Their data come from an observational study of female registered nurses from 11 states in the US, born between 1921 and 1946, who were asked to remember and report what they ate 4 whole times between 1976 and 1984, plus remember and report what they weighed when they were 18 years old. From this dataset, which is clearly comprehensive, and this population, which is practically every female in the US, Willett, Manson and company naturally conclude that “obesity is a major cause of excess morbidity and mortality from coronary heart disease among women in the United States” (emphasis mine). None of this wimpy “associated with increased risk of” bullshooey, obesity CAUSES heart disease, they tell us, CAUSES IT!!!! BWHAAAHAAAAA!!!!!!!

It is on this foundation of intrepid willingness to misinterpret data that the science of NECD was built. This is why Walter Willett is the Big Kahuna at the Harvard School of Public Health. He has demonstrated the courage to misinterpret data in innovative and comprehensive ways, publishing articles throughout his career that indicate that even small increases in BMI—including BMI levels that are currently considered “normal”–cause chronic disease.

In 1999, in what is considered a landmark article in overstatement, one with which all NECD acolytes should familiarize themselves, he states unequivocally, in a review of observational data:

The rest of that sentence reads: ” . . . and is a growing problem in many countries.” His data is once again gathered mostly from American white health care professionals born before the atom bomb. Generalization from specific populations to the rest of the world? Ding ding.

And what should we do with this conclusion, according to Willett? “Preventing weight gain and overweight among persons with healthy weights and avoiding further weight gain among those already overweight are important public health goals.” Using observed associations to make policy recommendations? Ding ding ding. In one fell swoop, Willett dexterously manages to use all three designated methods of overstatement and misinterpretation in the nutrition epidemiology NECD toolbox, demonstrating why he is considered by most researchers to be “the ‘father’ of nutrition epidemiology.” This man overstates and misinterprets in ways that the rest of us can only dream of doing.

But that’s okay. Willett and the Harvard Family know how to deal with this sort of thing.

“Someday, and that day appears to have come, I will call upon you to ignore the work of other scientists when their results contradict my own.”

Let’s face it, in the world of NECD, you can’t just have people like Flegal refusing to infer causation from observed results, just because they don’t want to. When that sort of thing happens, well, let’s just say, if she won’t do it, the Harvard Family will have to do it for her. And so they did.

The Family get-together was held at the Harvard School of Public Health, a “neutral convening space” that is also ground zero for the Nurses’ Health Study I and II, the Physicians Health Study I and II, and the Health Professional Follow Up Study, three datasets that have generated many NECD articles that, unlike Flegal’s article, brilliantly illustrate the powers of misinterpreting observational data. That Flegal herself was invited, but “could not attend” tells us just how ashamed she must be of her inability to make over-reaching conclusions–or perhaps she was temporarily “incapacitated” if you know what I mean.

The webcast from the meeting shows us how NECD should be done, with dazzling examples of overstatement and marvelous feats of misinterpretation.

In the world of NECD, PowerPoint arrows are a scientifically-acceptable method of establishing causation.

The panelists highlighted the importance of maintaining clear standards of overstatement and expressed concern that Flegal’s research could undermine future attempts of more credible researchers to misinterpret data as needed to protect the health of the public.

Because that’s what it’s all about folks: protection. Someone needs to protect the science from renegades like Flegal, and someone needs to protect the public from science.

We should be thankful that we have Willett and the Harvard Family there. They know that data like Flegal’s can only confuse the poor widdle brains of Americans. Allowing us to be exposed to such “rubbish” might lead us to the risky conclusion that perhaps overweight and mild obesity won’t cause all of us to die badly, or to the even more dangerous notion that observational data should remark only upon association, not causation. And we sure don’t want that to happen.

We learned that there were some significant changes in those 40 years. We saw dramatic increases in vegetable oils, grain products, and poultry—the things that the 1977 Dietary Goals and the 1980 Dietary Guidelines told us to increase. We saw decreases in red meat, eggs, butter, and full-fat milk—things that our national dietary recommendations told us to decrease. Mysteriously, what didn’t seem to increase much—or at all—were SoFAS (meaning “Solid Fats and Added Sugars”) which, as far as the 2010 Dietary Guidelines for Americans are concerned, are the primary culprits behind our current health crisis. (“Solid Fats” are a linguistic sleight-of-hand that lumps saturated fat from natural animal sources in with processed partially-hydrogenated vegetable oils and margarines that contain transfats; SoFAS takes the trick a step further, by being not only a dreadful acronym in terms of implying that poor health is caused by sitting on our “sofas,” but by creating an umbrella term for foods that have little in common in terms of structure, biological function, or nutrition.)

Around the late 70s or early 80s, there were sudden and rapid changes in America’s food supply and food choices and similar sudden and rapid changes in our health. How these two phenomena are related remains a matter of debate. It doesn’t matter if you’re Marion Nestle and you think the problem is calories or if you’re Gary Taubes and you think the problem is carbohydrate—both of those things increased in our food supply. (Whether or not the problem is fat is an open debate; food availability data points to an increase in added fats and oils, the majority of which are, ironically enough, the “healthy” monounsaturated kind; consumption data points to a leveling off of overall fat intake and a decrease in saturated fat—not a discrepancy I can solve here.) What seems to continue to mystify people is why this change occurred so rapidly at this specific point in our food and health history.

Personally responsible or helplessly victimized?

At one time, it was commonly thought that obesity was a matter of personal responsibility and that our collective sense of willpower took a nosedive in the 80s, but nobody could ever explain quite why. (Perhaps a giant funk swept over the nation after The Muppet Show got cancelled, and we all collectively decided to console ourselves with Little Debbie Snack Cakes and Nickelodeon?) But because this approach is essentially industry-friendly (Hey, says Big Food, we just make the stuff!) and because no one has any explanation for why nearly three-quarters of our population decided to become fat lazy gluttons all at once (my Muppet Show theory notwithstanding) or for the increase of obesity among preschool children (clearly not affected by the Muppet Show’s cancellation), public health pundits and media-appointed experts have decided that obesity is no longer a matter of personal responsibility. Instead the problem is our “obesogenic environment,” created by the Big Bad Fast Processed Fatty Salty Sugary Food Industry.

Even though it is usually understood that a balance between supply and demand creates what happens in the marketplace, Michael Pollan has argued that it is the food industry’s creation of cheap, highly-processed, nutritionally-bogus food that has caused the rapid rise in obesity. If you are a fan of Pollanomics, it seems obvious that the food industry—on a whim?—made a bunch of cheap tasty food, laden with fatsugarsalt, hoping that Americans would come along and eat it. And whaddaya know? They did! Sort of like a Field of Dreams only with Taco-flavored Doritos.

As a result, obesity has become a major public health problem.

Just like it was in 1952.

Helen Lee, in her thought-provoking article, The Making of the Obesity Epidemic (it is even longer than one of my blog posts, but well worth the time), describes how our obesity problem looked then:

“It is clear that weight control is a major public health problem,” Dr. Lester Breslow, a leading researcher, warned at the annual meeting of the western branch of the American Public Health Association (APHA). At the national meeting of the APHA later that year, experts called obesity “America’s No. 1 health problem.”

The year was 1952. There was exactly one McDonald’s in all of America, an entire six-pack of Coca-Cola contained fewer ounces of soda than a single Super Big Gulp today, and less than 10 percent of the population was obese.

In the three decades that followed, the number of McDonald’s restaurants would rise to nearly 8,000 in 32 countries around the world, sales of soda pop and junk food would explode — and yet, against the fears and predictions of public health experts, obesity in the United States hardly budged. The adult obesity rate was 13.4 percent in 1960. In 1980, it was 15 percent. If fast food was making us fatter, it wasn’t by very much.

Then, somewhat inexplicably, obesity took off.

It is this “somewhat inexplicably” that has me awake at night gnashing my teeth.

And what is Government going to do about it?

I wonder how “inexplicable” it would be to Ms. Lee had she put these two things together:

(In case certain peoples have trouble with this concept, I’ll type this very slowly and loudly: I’m not implying that the Dietary Guidelines “caused” the rise in obesity; I am merely illustrating a temporal relationship of interest to me, and perhaps to a few billion other folks. I am also not implying that a particular change in diet “caused” the rise in obesity. My focus is on the widespread and encompassing effects that may have resulted from creating one official definition of “healthy food choices to prevent chronic disease” for the entire population.)

In a similar fashion, the 1977 Dietary Goals were the culmination of concerns about obesity that had begun decades before, joined by concerns about heart disease voiced by a vocal minority of scientists led by Ancel Keys. Declines in red meat, butter, whole milk, and egg consumption had already begun in response to fears about cholesterol and saturated fat that originated with Keys and the American Heart Association—which used fear of fat and the heart attacks they supposedly caused as a fundraising tactic, especially among businessmen and health professionals, whom they portrayed as especially susceptible to this disease of “successful civilization and high living.” The escalation of these fears—and declines in intake of animal foods portrayed as especially dangerous—picked up momentum when Senator George McGovern and his Select Senate Committee created the 1977 Dietary Goals for Americans. It was thought that, just as we had “tackled” smoking, we could create a document advising Americans on healthy food choices and compliance would follow. But the issue was a lot less straightforward.

To begin with, when smoking was at its peak, only around 40% of the population smoked. On the other hand, we expect that approximately 100% of the population eats.

In addition, the anti-smoking campaigns of the 1960s and 1970s built on a long tradition of public health messages—originating with the Temperance movement—that associated smoking with dirty habits, loose living, and moral decay. It was going to be much harder to fully convince Americans that traditional foods typically associated with robust good health, foods that the US government thought were so nutritionally important that in the recent past they had been “saved” for the troops, were now suspect and to be avoided.

Where the American public had once been told to save “wheat, meat, and fats” for the soldiers, they now had to be convinced to separate the “wheat” from the “meat and fats” and believe that one was okay and the others were not.

To do this, public health leaders and policy makers turned to science, hoping to use it just as it had been used in anti-smoking arguments. Frankly, however, nutrition science just wasn’t up to the task. Linking nutrition to chronic disease was a field of study that would be in its infancy after it grew up a bit; in 1977, it was barely embryonic. There was little definitive data to support the notion that saturated fat from whole animal foods was actually a health risk; even experts who thought the theory linking saturated fat to heart disease had merit didn’t think there was enough evidence to call for dramatic changes in Americans’ eating habits.

The scientists who were intent on waving the “fear of fat” flag had to rely on observational studies of populations (considered then and now to be the weakest form of evidence), in order to attempt to prove that heart disease was related to intake of saturated fat (upon closer examination, these studies did not even do that).

Nutrition epidemiology is a soft science, so soft that it is not difficult to shape it into whatever conclusions the Consistent Public Health Message requires. In large-scale observational studies, dietary habits are difficult to measure and the results of Food Frequency Questionnaires are often more a product of wishful thinking than of reality. Furthermore, the size of associations in nutrition epidemiological studies is typically small—an order of magnitude smaller than those found for smoking and risk of chronic disease.

But nutrition epidemiology had proved its utility in convincing the public of the benefits of dietary change in the 70s and since then has become the primary tool—and the biggest funding stream (this is hardly coincidental)—for cementing in place the Consistent Public Health Message to reduce saturated fat and increase grains and cereals.

There is no doubt that the dramatic dietary change that the federal government was recommending was going to require some changes from the food industry, and they appear to have responded to the increased demands for low-fat, whole grain products with enthusiasm. Public health recommendations and the food fears they engendered are (as my friend James Woodward puts it) “a mechanism for encouraging consumers to make healthy eating decisions, with the ultimate goal of improving health outcomes.” Experts like Kelly Brownell and Marion Nestle decry the tactics used by the food industry of taking food components thought to be “bad” out of products while adding in components thought to be “good,” but it was federal dietary recommendations focusing above all else on avoiding saturated fat, cholesterol, and salt that led the way for such products to be marketed as “healthy” and to become acceptable to a confused, busy, and anxious public. The result was a decrease in demand for red meat, butter, whole milk, and eggs, and an increase in demand for low-saturated fat, low-cholesterol, and “whole” grain products. Minimally-processed animal-based products were replaced by cheaply-made, highly-processed plant-based products, which food manufacturers could market as healthy because, according to our USDA/HHS Dietary Guidelines, they were healthy.

The problem lies in the fact that—although these products contained less of the “unhealthy” stuff Americans were supposed to avoid—they also contained less of our most important nutrients, especially protein and fat-soluble vitamins. We were less likely to feel full and satisfied eating these products, and we were more likely to snack or binge—behaviors that were also fully endorsed by the food industry.

Between food industry marketing and the steady drumbeat of media messages explaining just how deadly red meat and eggs are (courtesy of population studies from Harvard, see above), Americans got the message. About 36% of the population believe that UFOs are real; only 25% believe that there’s no link between saturated fat and heart disease. We are more willing to believe that we’ve been visited by creatures from outer space than we are to believe that foods that humans have been eating ever since they became human have no harmful effects on health. But while industry has certainly taken advantage of our gullibility, they weren’t the ones who started those rumors, and they should not be shouldering all of the blame for the consequences.

Fixing it until it broke

Back in 1977, we were given a cure that didn’t work for diseases that we didn’t have. Then we spent billions in research dollars trying to get the glass slipper to fit the ugly stepsister’s foot. In the meantime, the food industry has done just what we would expect it to do, provide us with the foods that we think we should eat to be healthy and—when we feel deprived (because we are deprived)—with the foods we are hungry for.

We can blame industry, but as long as food manufacturers can take any mixture of vegetable oils and grain/cereals and tweak it with added fiber, vitamins, minerals, a little soy protein or maybe some chicken parts, some artificial sweeteners and salt substitutes, plus whatever other colors/preservatives/stabilizers/flavorizers they can get away with and still be able to get the right profile on the nutrition facts panel (which people do read), consumers–confused, busy, hungry–are going to be duped into believing what they are purchasing is “healthy” because–in fact–the government has deemed it so. And when these consumers are hungry later—which they are very likely to be—and they exercise their rights as consumers rather than their willpower, who should we blame then?

There is no way around it. Our dietary recommendations are at the heart of the problem they were created to try to reverse. Unlike the public health approach to smoking, we “fixed” obesity until it broke for real.


Oh the drama! Some of the current hyperventilating in the alternative nutrition community–sugar is toxic, insulin is evil, vegetable oils give you cancer, and running will kill you–has, much to my dismay, made the alternative nutrition community sound as shrill and crazed as the mainstream nutrition one.

When you have self-appointed nutrition experts (read: food writers) like Mark Bittman agreeing feverishly with a pediatric endocrinologist with years of clinical experience like Robert Lustig, we’ve crossed over into some weird nutrition Twilight Zone where fact, fantasy, and hype all swirl together in one giant twitter feed of incoherence meant, I think, to send us into a dark corner where we can do nothing but nibble on organic kale, mumble incoherently about inflammation and phytates, and await the zombie apocalypse.

No, carbohydrates are not evil—that’s right, not even sugar. If sugar were rat poison, one trip to the county fair in 4th grade would have killed me with a cotton candy overdose. Neither is insulin, now characterized as the serial killer of hormones (try explaining that to a person with type 1 diabetes).

But that doesn’t mean that 35 years of dietary advice to increase our grain and cereal consumption, while decreasing our fat and saturated fat consumption has been a good idea.

I have gotten rather tired of seeing this graph used as a central rationale for arguing that the changes in total carbohydrate intake over the past 30 years have not contributed to the rising rates of obesity.

The argument takes shape on two fronts:

1) We ate 500 grams of carbohydrate per day in 1909 and 500 grams in 1997 and WE WEREN’T FAT IN 1909!

2) The other part of the argument is that the TYPE of carbohydrate has shifted over time. In 1909, we ate healthy, fiber-filled unrefined and unprocessed types of carbohydrates. Not like now.

Okay, let’s take a closer look at that paper, shall we? And then let’s look at what really matters: the context.

The data used to make this graph are not consumption data, but food availability data. This is problematic in that it tells us how much of a nutrient was available in the food supply in any given year, but does not account for food waste, spoilage, and other losses. And in America, we currently waste a lot of food.

According to the USDA, we currently lose over 1000 calories in our food supply–calories that don’t make it into our mouths. Did we waste the same percentage of our food supply across the entire century? Truth is, we don’t know and we are not likely to find out—but I seriously doubt it. My mother and both my grandmothers—with memories of war and rationing fresh in their minds—would be no more likely to throw out anything remotely edible than they would be to do the Macarena. My mother has been known to put random bits of leftover food in soups, sloppy joes, and—famously—pancake batter. To this day, should your hand begin to move toward the compost bucket with a tablespoon of mashed potatoes scraped from the plate of a grandchild shedding cold virus like it was last week’s fashion, she will throw herself in front of the bucket and shriek, “NOOOOOO! Don’t throw that OUT! I’ll have that for lunch tomorrow.”

You know what this means folks: in 1909, we were likely eating MORE carbohydrate than we are today. (Or maybe in 1909, all those steelworkers pulling 12 hour days 7 days a week, just tossed out their sandwich crusts rather than eat them. It could happen.)
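If you like your back-of-envelopes in code, the availability-versus-consumption gap looks like this. Fair warning: the 3500-kcal availability figure is a placeholder I made up for illustration; only the "over 1000 calories" of waste comes from the USDA point above:

```python
# Back-of-envelope: availability data overstates consumption by whatever
# is wasted. AVAILABLE_KCAL is a hypothetical per-capita figure chosen
# for illustration; the ~1000 kcal of waste is the figure cited above.
AVAILABLE_KCAL = 3500   # hypothetical daily per-capita availability
WASTED_KCAL = 1000      # calories lost to waste, spoilage, etc.

consumed_kcal = AVAILABLE_KCAL - WASTED_KCAL
waste_share = WASTED_KCAL / AVAILABLE_KCAL
print(f"~{consumed_kcal} kcal eaten; {waste_share:.0%} of the supply never reaches a mouth")
```

The point being: any century-long comparison of "availability" silently assumes the waste share held steady, which, see grandmothers, above.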

BUT–as with butts all over America including mine, it’s a really Big BUT: How do I explain the fact that Americans were eating GIANT STEAMING HEAPS OF CARBOHYDRATES back in 1909—and yet, and yet—they were NOT FAT!!??!!

Okay. Y’know. I’m up for this one. Not only is it problematic to the point of absurdity to compare food availability data from the early 1900s to our current food system, but life in general was a little different back then. At the turn of the century,

Primary occupations made up the largest percentage of male workers (42%)—farmers, fishermen, miners, etc.—what we would now call manual laborers. Another 21% were “blue collar” jobs, craftsmen, machine operators, and laborers whose activities in those early days of the Industrial Revolution, before many things became mechanized, must have required a considerable amount of energy. And not only was the work hard, there was a lot of it. At the turn of the century, the average workweek was 59 hours, or close to six 10-hour days. And it wasn’t just men working. As our country shifted from a rural agrarian economy to a more urban industrialized one, women and children worked both on the farms and in the factories.

This is what is called “context.”

In the past, nutrition epidemiologists have always considered caloric intake to be a surrogate marker for activity level. To quote Walter Willett himself:

It makes perfect sense that Americans would have a lot of carbohydrate and calories in their food supply in 1909. Carbohydrates have been—and still are—a cheap source of energy to fuel the working masses. But it makes little sense to compare the carbohydrate intake of the labor force of 1909 to the labor force of 1997, as in the graph at the beginning of this post (remember the beginning of this post?).

After decades of decline, carbohydrate availability experienced a little upturn from the mid 1960s to the late 1970s, when it began to climb rapidly. But generally speaking, carbohydrate intake was lower during that time than at any point previously.

I’m not crazy about food availability data, but to be consistent with the graph at the top of the page, here it is.

Data based on per capita quantities of food available for consumption:

| | 1909 | 1975 | Change |
| --- | ---: | ---: | ---: |
| Total calories | 3500 | 3100 | -400 |
| Carbohydrate calories | 2008 | 1592 | -416 |
| Protein calories | 404 | 372 | -32 |
| Total fat calories | 1098 | 1260 | +162 |
| Saturated fat (grams) | 52 | 47 | -5 |
| Mono- and polyunsaturated fat (grams) | 540 | 738 | +198 |
| Fiber (grams) | 29 | 20 | -9 |

To me, it looks pretty much like it should with regard to context. As our country went from pre- and early industrialized conditions to a fully-industrialized country of suburbs and station wagons, we were less active in 1970 than we were in 1909, so we consumed fewer calories. The calories we gave up were ones from the cheap sources of energy—carbohydrates—that would have been most readily available in the economy of a still-developing nation. Instead, we ate more fat.
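A quick back-of-the-envelope check on the table above makes the shift concrete: carbohydrate fell from well over half of available calories to about half, while fat went the other way. Here is a minimal sketch of that arithmetic in Python, using the availability figures from the table (which, remember, are rough estimates to begin with):

```python
# Per capita food-availability figures from the table above (kcal/day).
# Note: availability is not intake, and the macro rows don't sum exactly
# to the stated totals -- such are the limitations of the dataset.
macros = {
    "carbohydrate": {"1909": 2008, "1975": 1592},
    "protein":      {"1909": 404,  "1975": 372},
    "fat":          {"1909": 1098, "1975": 1260},
}
totals = {"1909": 3500, "1975": 3100}

for year in ("1909", "1975"):
    for name, kcal in macros.items():
        share = 100 * kcal[year] / totals[year]
        print(f"{year} {name}: {kcal[year]} kcal ({share:.0f}% of calories)")
```

Carbohydrate works out to roughly 57% of available calories in 1909 versus 51% in 1975, while fat climbs from about 31% to 41%.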

We can’t separate out “added fats” from “naturally-present fats” from this data, but if we use saturated fat vs. mono- and polyunsaturated fats as proxies for animal fats vs. vegetable oils (yes, I know that animal fats have lots of mono- and polyunsaturated fats, but alas, such are the limitations of the dataset), then it looks like Americans were making use of the soybean oil that was beginning to be manufactured in abundance during the 1950s and 1960s and was making its way into our food supply. (During this time, heart disease mortality was decreasing, an effect likely due more to warnings about the hazards of smoking, which began in earnest in 1964, than to dietary changes; although availability of unsaturated fats went up, that of saturated fats did not really go down.)

As for all those “healthy” carbohydrates that we were eating before we started getting fat? Using fiber as a proxy for level of “refinement” (as in the graph at the beginning of this post—remember the beginning of this post?), we seemed to be eating more refined carbohydrates in 1975 than in 1909—and yet the obesity crisis was still just a gleam in Walter Willett’s eye.

While our lives in 1909 differed greatly from our current environment, our lives in the 1970s were not all that much different than they are now. I remember. As much as it pains me to confess this, I was there. I wore bell bottoms. I had a bike with a banana seat (used primarily for trips to the candy store to buy Pixie Straws). I did macramé. My parents had desk jobs, as did most adults I knew. No adult I knew “exercised” until we got new neighbors next door. I remember the first time our new next-door neighbor jogged around the block. My brothers and sister and I plastered our faces to the picture window in the living room to scream with excitement every time she ran by; it was no less bizarre than watching a bear ride a unicycle.

Not too long ago, the 2000 Dietary Guidelines Advisory Committee (DGAC) recognized that environmental context—such as the difference between America in 1909 and America in 1970—might lead to or warrant dietary differences:

“There has been a long-standing belief among experts in nutrition that low-fat diets are most conducive to overall health. This belief is based on epidemiological evidence that countries in which very low fat diets are consumed have a relatively low prevalence of coronary heart disease, obesity, and some forms of cancer. For example, low rates of coronary heart disease have been observed in parts of the Far East where intakes of fat traditionally have been very low. However, populations in these countries tend to be rural, consume a limited variety of food, and have a high energy expenditure from manual labor. Therefore, the specific contribution of low-fat diets to low rates of chronic disease remains uncertain. Particularly germane is the question of whether a low-fat diet would benefit the American population, which is largely urban and sedentary and has a wide choice of foods.” [emphasis mine – although whether our population in 2000 was largely “sedentary” is arguable]

The 2000 DGAC goes on to say:

“The metabolic changes that accompany a marked reduction in fat intake could predispose to coronary heart disease and type 2 diabetes mellitus. For example, reducing the percentage of dietary fat to 20 percent of calories can induce a serum lipoprotein pattern called atherogenic dyslipidemia, which is characterized by elevated triglycerides, small-dense LDL, and low high-density lipoproteins (HDL). This lipoprotein pattern apparently predisposes to coronary heart disease. This blood lipid response to a high-carbohydrate diet was observed earlier and has been confirmed repeatedly. Consumption of high-carbohydrate diets also can produce an enhanced post-prandial response in glucose and insulin concentrations. In persons with insulin resistance, this response could predispose to type 2 diabetes mellitus.

The committee further held the concern that the previous priority given to a “low-fat intake” may lead people to believe that, as long as fat intake is low, the diet will be entirely healthful. This belief could engender an overconsumption of total calories in the form of carbohydrate, resulting in the adverse metabolic consequences of high carbohydrate diets. Further, the possibility that overconsumption of carbohydrate may contribute to obesity cannot be ignored. The committee noted reports that an increasing prevalence of obesity in the United States has corresponded roughly with an absolute increase in carbohydrate consumption.” [emphasis mine]

Hmmmm. Okay, folks, that was in 2000—THIRTEEN years ago. If the DGAC was concerned about increases in carbohydrate intake—absolute carbohydrate intake, not just sugars, but sugars and starches—13 years ago, how come nothing has changed in our federal nutrition policy since then?

I’m not going to blame you if your eyes glaze over during this next part, as I get down and geeky on you with some Dietary Guidelines backstory:

As with all versions of the Dietary Guidelines after 1980, the 2000 edition was based on a report submitted by the DGAC which indicated what changes should be made from the previous version of the Guidelines. And, as with all previous versions after 1980, the changes in the 2000 Dietary Guidelines were taken almost word-for-word from the suggestions given by the scientists on the DGAC, with few changes made by USDA or HHS staff. Although HHS and USDA took turns administering the creation of the Guidelines, in 2000, no staff members from either agency were indicated as contributing to the writing of the final Guidelines.

But after those comments in 2000 about carbohydrates, things changed.

Beginning with the 2005 Dietary Guidelines, HHS and USDA staff members have been in charge of writing the Guidelines, which are no longer considered to be a scientific document whose audience is the American public, but a policy document whose audience is nutrition educators, health professionals, and policymakers. Why and under whose direction this change took place is unknown.

The Dietary Guidelines process doesn’t have a lot of law holding it up. Most of what happens in regard to the Guidelines is a matter of bureaucracy, decision-making that takes place within USDA and HHS that is not handled by elected representatives but by government employees.

However, there is one mandate of importance: the National Nutrition Monitoring and Related Research Act of 1990, Public Law 445, 101st Cong., 2nd sess. (October 22, 1990), section 301. (P.L. 101-445) requires that “The information and guidelines contained in each report required under paragraph shall be based on the preponderance of the scientific and medical knowledge which is current at the time the report is prepared.”

Did HHS and USDA not like the direction that it looked like the Guidelines were going to take–with all that crazy talk about too many carbohydrates – and therefore made sure the scientists on the DGAC were farther removed from the process of creating them?

Hmmmmm again.

Dr. Janet King, chairwoman of the 2005 DGAC, had this to say, after her tenure creating the Guidelines was over: “Evidence has begun to accumulate suggesting that a lower intake of carbohydrate may be better for cardiovascular health.”

Dr. Joanne Slavin, a member of the 2010 DGAC, had this to say, after her tenure creating the Guidelines was over: “I believe fat needs to go higher and carbs need to go down,” and “It is overall carbohydrate, not just sugar. Just to take sugar out is not going to have any impact on public health.”

It looks like, at least in 2005 and 2010, some well-respected scientists (respected well enough to make it onto the DGAC) thought that—in the context of our current environment—maybe our continuing advice to Americans to eat more carbohydrate and less fat wasn’t such a good idea.

I think it is at about this point that I begin to hear the wailing and gnashing of teeth of those who don’t think Americans ever followed this advice to begin with, because—goodness knows—if we had, we wouldn’t be so darn FAT!

So did Americans follow the advice handed out in those early dietary recommendations? Or did Solid Fats and Added Sugars (SoFAS—as the USDA/HHS like to call them—as in “get up offa yur SoFAS and work your fatty acids off”) make us the giant tubs of lard that we are, just as the USDA/HHS says they did?

Stay tuned for the next episode of As the Calories Churn, when I attempt to settle those questions once and for all. And you’ll hear a big yellow blob with stick legs named Timer say, “I hanker for a hunk of–a slab or slice or chunk of–I hanker for a hunk of cheese!”


Sodium-Slashing Superheroes Low-Sodium Larry and his bodacious sidekick Linda “The Less Salt the Better” Van Horn team up to protect Americans from the evils lurking in a teaspoon of salt! (Drawings courtesy of Butcher Billy)

Larry and Linda KNOW that salt is BAD. Science? They don’t need no stinkin’ science.

Because the one thing everyone seems to be able to agree on is that the science on salt does indeed stink. The IOM report has had to use many of the same methodologically-flawed studies available to the 2010 Dietary Guidelines Advisory Committee, full of the same confounding, measurement error, reverse causation and lame-ass dietary assessment that we know and love about all nutrition epidemiology studies. But the 2010 Dietary Guidelines Advisory Committee didn’t actually bother to look at these studies.

Use this marker to establish some arbitrary clinical cutoffs for when it is “good” and “bad.” (Note to public health advocacy organizations: Be sure to frequently move those goalposts in whichever direction requires more pharmaceuticals to be purchased from the companies that sponsor you.)

Find some dietary factor that can easily and profitably be removed from our food supply, but whose intake is difficult to track (like saturated fat, sodium, calories).

Implicate the chosen food factor in the regulation of the arbitrary marker, the details of which we don’t quite understand. (How? Use observational data—see methodological flaws above—but hunches and wild guesses will also work.)

Create policy that insists that the entire population—including people who, by the way, are not (at least at this point) fat, sick or dead—attempt to prevent this chronic disease by avoiding this particular dietary factor. (Note to public health advocacy organizations: Be sure to offer food manufacturers the opportunity to have the food products from which they have removed the offensive component labeled with a special logo from your organization—for a “small administrative fee,” of course.)

Commence collecting weak, inconclusive, and inconsistent data to prove that yes indeedy this dietary factor we can’t accurately measure does in fact have some relationship to this arbitrary clinical marker, whose regulation and health implications we don’t fully understand.

Finally—here’s the kicker—measure the success of your intervention by whether or not people are willing to eat expensive, tasteless, chemical-filled food devoid of the chosen food factor in order to attempt to regulate the arbitrary clinical marker.

Whatever you do, DO NOT EVER measure the success of your intervention by looking at whether or not attempts to follow your intervention have made people fat, sick, or dead in the process.

Ooops. I think I just described the entire history of nutrition epidemiology of chronic disease.

Blood pressure is easy to measure, but we don’t always know what causes it to go up (or down). There is no real physiological difference between having a blood pressure reading of 120/80, which will get you a diagnosis of “pre-hypertension” and a fistful of prescriptions, and a reading of 119/79, which won’t. Blood pressure is not considered to be a “distinct underlying cause of death,” which means that, technically, no one ever dies of blood pressure (high or low). We certainly don’t know how to disentangle the effects of lowering dietary sodium on blood pressure from other effects (like weight loss) that may be related to dietary changes that are a part of an attempt to lower sodium (and we have an embarrassingly hard time collecting accurate dietary intake information from Food Fantasy Questionnaires anyway). We also know that individual response to sodium varies widely.

So doesn’t it make perfect sense that the folks at the USDA/HHS should ignore science that investigates the relationship between sodium intake and whether or not a person stayed out of the hospital, had a heart attack, or up and died? Well, it doesn’t to me, but nevertheless the USDA/HHS has remained obsessively fixated on one thing and one thing only, what effects reducing sodium has on blood pressure, and they pay not one whit of attention to what effects reducing sodium has on, say, aliveness.

So let’s just get this out there and agree to agree: reducing sodium in most cases will reduce blood pressure. But then, just to be clear, so will dismemberment, dysentery, and death. We can’t just assume that lowering sodium will only affect blood pressure or will only positively affect health (I mean, we can’t unless we are Larry or Linda). Recent research, which prompted the IOM review, indicates that reducing sodium will also increase triglyceride levels, insulin resistance, and sympathetic nervous system activity. For the record, clinicians generally don’t consider these to be good things.

This may sound radical but in their review of the evidence, the IOM committee decided to do a few things differently.

First, they gave more weight to studies that determined sodium intake levels through multiple high-quality 24-hour urine collections. Remember, this is Low-Sodium Larry’s favorite way of estimating intake.

Also, they did not approach the data with a predetermined “healthy” range already established in their brains. Because of the extreme variability in intake levels among population groups, they decided to—this is crazy, I know—let the outcomes speak for themselves.

Finally, and most importantly, in the new IOM report, the authors, unlike Larry and Linda, focused on—hold on to your hats, folks!—actual health outcomes, something the Dietary Guidelines Have. Never. Done. Ever.

In other words, there is no science to indicate that we all need to be consuming less than ¾ of a teaspoon of salt a day. Furthermore, while some subpopulations may benefit from sodium reduction, reducing sodium intake to 1500 mg/day may increase risk of adverse health outcomes for people with congestive heart failure, diabetes, chronic kidney disease, or heart disease. (If you’d like to wallow in some of the studies reviewed by the IOM, I’ve provided the Reader’s Digest Condensed Version at the bottom of the page.)
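That “¾ of a teaspoon” figure, by the way, is just arithmetic, and it checks out. A minimal sketch, assuming a teaspoon of table salt weighs roughly 5.7 grams (an approximation) and using the fact that salt is about 39% sodium by weight:

```python
# Salt (NaCl) is about 39% sodium by weight (molar masses: Na 22.99, Cl 35.45).
SODIUM_FRACTION = 22.99 / (22.99 + 35.45)

# Approximate weight of one teaspoon of table salt, in grams (an assumption).
TSP_SALT_GRAMS = 5.7

sodium_per_tsp_mg = TSP_SALT_GRAMS * SODIUM_FRACTION * 1000
teaspoons_for_1500_mg = 1500 / sodium_per_tsp_mg

print(f"1 tsp salt is about {sodium_per_tsp_mg:.0f} mg sodium")
print(f"1500 mg sodium is about {teaspoons_for_1500_mg:.2f} tsp salt")
```

So the 1,500 mg/day target works out to roughly two-thirds of a teaspoon of salt, i.e., less than ¾ of a teaspoon—and that’s before you count the sodium already in your food.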

No, folks that giant smacking sound you hear is not my head on my keyboard. That was the sound of science crashing into a giant wall of Consistent Public Health Message. Apparently, those public health advocates at the AHA seem to think that changing public health messages—even when they are wrong—confuses widdle ol’ Americans. The AHA—and the USDA/HHS team—doesn’t want us to have to worry our pretty little heads about all that crazy scientifical stuff with big scary words and no funny pictures or halftime shows.

Frankly, I appreciate that. I hate to have my pretty little head worried. But there’s one other problem with this particular Consistent Public Health Message. Not only is there no science to back it up; not only is it likely to be downright detrimental to the health of certain groups of people; not only is it likely to introduce an arsenal of synthetic chemical salt-replacements that will be consumed at unprecedented levels without testing for negative interactions or toxicities (remember how well that worked out when we replaced saturated fat with partially-hydrogenated vegetable oils?)—it is, apparently, incompatible with eating food.

While these researchers suggested that a feasibility study (this is a scientifical term for “reality check”) should precede the issuing of dietary guidelines to the public, I have a different suggestion.

How about we just stop with the whole 30-year-long dietary experiment to prevent chronic disease by telling Americans what not to eat? I hate to be the one to point this out, but it doesn’t seem to be working out all that well. It’s hard to keep assuming that the AHA and the USDA/HHS mean well when, if you look at it for what it is, they are willing to continue to jeopardize the health of Americans just so they don’t have to admit that they might have been wrong about a few things. I suppose if a Consistent Public Health Message means anything, it means never having to say you’re sorry for 30 years-worth of lousy dietary advice.

Weeeell, I got some bad news for you, Marion. Believe it. They have been delusional. They are making this up. And no, apparently there is no clinical or rational basis for the unanimity of these decisions.

But, thanks to the IOM report, perhaps we can no longer consider these decisions to be unanimous.

Praise the lard and pass the salt.

Read ’em and weep: The Reader’s Digest Condensed Version of the science from the IOM report. Studies marked with an asterisk (*) are studies that were available to the 2010 Dietary Guidelines Advisory Committee.

Studies that looked at Cardiovascular Disease, Stroke, and Mortality

*Cohen et al. (2006)

When intakes of sodium less than 2300 mg per day were compared to intakes greater than 2300 mg per day, the “lower sodium intake was statistically significantly associated with increased risk of all-cause mortality.”

*Cohen et al. (2008)

When a fully-adjusted (for confounders) model was used, “there was a statistically significant higher risk of CVD mortality with the lowest vs. the highest quartile of sodium intake.”

Gardener et al. (2012)

Risk of stroke was positively related to sodium intake when comparing the highest levels of intake to the lowest levels of intake. There was no statistically significant increase in risk for those consuming between 1500 and 4000 mg of sodium per day.

*Larsson et al. (2008)

“The analyses found no significant association between dietary sodium intake and risk of any stroke subtype.”

*Nagata et al. (2004)

“Among men, a 2.3-fold increased risk of stroke mortality was associated with the highest tertile of sodium intake.” That sounds bad, but the average sodium intake in the high-risk group was 6613 mg per day. The lowest risk group had an average intake of 4070 mg per day. “Thus, the average sodium intake in the US would be within the lowest tertile of this study.”

Stolarz-Skrzypek et al. (2011)

“Overall, the authors found that lower sodium intake was associated with higher CVD mortality.”

Takachi et al. (2010)

The authors found “a significant positive association between sodium consumption at the highest compared to the lowest quintile and risk of stroke.” As with the Nagata (2004) study, this sounds bad, but the average sodium intake in the high-risk group was 6844 mg per day. The lowest risk group had an average intake of 3084 mg per day. “Thus, the average sodium intake in the US would be close to the lowest quintile of this study.”

*Umesawa et al. (2008)

“The authors found an association between greater dietary sodium intake and greater mortality from total stroke, ischemic stroke, and total CVD.” However, as with the Nagata and the Takachi studies (above), lower quintiles—in this case, quintiles one and two—would be comparable to average US intake.

Yang et al. (2011)

Higher usual sodium intake was found to be associated with all-cause mortality, but not cardiovascular disease mortality or ischemic heart disease mortality. “However, the finding that correction for regression dilution increased the effect on all-cause mortality, but not on CVD mortality, is inconsistent with the theoretical causal pathway.” In other words, high sodium intake might be bad for health, but not because it raises blood pressure and leads to heart disease.

Studies in Populations 51 Years of Age or Older

*Geleijnse et al. (2007)

“This study found no significant difference between urinary sodium level and risk of CVD mortality or all-cause mortality.” Relative risk was lowest in the medium intake group, with an average estimated intake of 2,415 mg/day.

Other

“Five of the nine reported studies in the general population listed above also analyzed the data on health outcomes by age and found no interaction (Cohen et al., 2006, 2008; Cook et al., 2007; Gardener et al., 2012; Yang et al., 2011).”

Studies in Populations with Chronic Kidney Disease

Dong et al. (2010)

“The authors found that the lowest sodium intake was associated with increased mortality risk.”

Heerspink et al. (2012)

“Results from this study suggest that ARBs were more effective at decreasing CKD progression and CVD when sodium intake was in the lowest tertile” which had an estimated average sodium intake of about 2783 mg/day.

Studies on Populations with Cardiovascular Disease

Costa et al. (2012)

“Dietary sodium intake was estimated from a 62-item validated FFQ. . . . Significant correlations were found between sodium intake and percentage of fat and calories in daily intake. . . . Overall, for the first 30 days and up to 4 years afterward, total mortality was significantly associated with high sodium intake.”

Kono et al. (2011)

“Cumulative risk analysis found that a salt intake greater than the median (4,000 mg of sodium) was associated with higher stroke recurrence rate. Univariate analysis of lifestyle management also found that poor lifestyle, defined by both high salt intake and low physical activity, was significantly associated with stroke recurrence.”

O’Donnell et al. (2011)

“For the composite outcome, multivariate analysis found a U-shaped relationship between 24-hour urine sodium and the composite outcome of CVD death, MI, stroke, and hospitalization for CHF.” In other words, both higher (>7,000 mg per day estimated intake) and lower (<2,990 mg per day estimated intake) intakes of sodium were associated with increased risk of heart disease and mortality.

Studies on Populations with Prehypertension

*Cook et al. (2007)

In a randomized trial comparing a low sodium intervention with usual intake, lower sodium intake did not significantly decrease risk of mortality or heart disease events.

*Cook et al. (2009)

No significant increase in risk of adverse cardiovascular outcomes was associated with increased sodium excretion levels.

“Adjusted multivariate regression analysis found urinary sodium excretion was associated with incident CVD, with increased risk at both the highest [> 4,401 mg/day] and lowest [<2,346 mg/day] urine sodium excretion levels. When analyzed as independent outcomes, no significant associations were found between urinary sodium excretion and new CVD or stroke after adjustment for other risk factors.”

Other

“Two other studies discussed in this chapter analyzed the data on health outcomes by diabetes prevalence and found no interaction (Cohen et al., 2006; O’Donnell et al., 2011).”

“Results for event-free survival at a urinary sodium of ≥3,000 mg per day varied by the severity of patient symptoms.” In people with less severe symptoms, sodium intake greater than 3,000 mg per day was correlated with a lower disease incidence compared to those with a sodium intake less than 3,000 mg per day. Conversely, people with more severe symptoms who had a sodium intake greater than 3,000 mg per day had a higher disease incidence than those with sodium intakes less than 3,000 mg per day.

Parrinello et al. (2009)

“During the 12 months of follow-up, participants receiving the restricted sodium diet [1840 mg/day] had a greater number of hospital readmissions and higher mortality compared to those on the modestly restricted diet [2760 mg/day].”

*Paterna et al. (2008)

The lower sodium intake group [1840 mg/day] experienced a significantly higher number of hospital readmissions compared to the normal sodium intake group [2760 mg/day].

*Paterna et al. (2009)

A significant association was found between the low sodium intake [1,840 mg per day] and hospital readmissions. The group with the normal sodium diet [2760 mg/day] also had fewer deaths compared to all groups receiving a low-sodium diet combined.


Move over saturated fat and cholesterol. There’s a new kid on the heart disease block: TMAO.

TMAO is not, as I first suspected, a new internet acronym that I was going to have to get my kids to decipher for me, while they snickered under their collective breaths. Rather, TMAO stands for Trimethylamine N-oxide, and it is set to become the reigning king of the “why meat is bad for you” argument. Former contenders, cholesterol and saturated fat, have apparently lost their mojo. After years of dominating the heart disease-diet debate, it turns out they were mere poseurs, only pretending to cause heart disease, the whole time distracting us from the true evils of TMAO.

The news is, the cholesterol and saturated fat in red meat can no longer be held responsible for clogging up your arteries. TMAO, which is produced by gut bacteria that digest the carnitine found in meat, is going to gum them up instead. This may be difficult to believe, especially in light of the fact that, while red meat intake has declined precipitously in the past 40 years, prevalence of heart disease has continued to climb. However, this is easily accounted for by the increase in consumption of Red Bull—which also contains carnitine—even though it is not, as some may suspect, made from real bulls (thank you, BW).

Here to explain once again why we should all be afraid of eating a food our ancestors ignorantly consumed in scandalous quantities (see what happened to them? they are mostly dead!) is the Medical Media Circus! Ringleader for today is the New York Times’ Gina Kolata, who never met a half-baked nutrition theory she didn’t like (apparently Gary Taubes’ theory regarding carbohydrates was not half-baked enough for her).

Step right up folks and meet TMAO, the star of “a surprising new explanation of why red meat may contribute to heart disease” (because, frankly, the old explanations aren’t looking too good these days).

We know that red meat maybe almost probably for sure contributes to heart disease, because that wild bunch at Harvard just keeps cranking out studies like this one, Eat Red Meat and You Will Die Soon.

This study and others just like it definitely prove that if you are a white, well-educated, middle/upper-middle class health professional born between 1920 and 1946 and you smoke and drink, but you don’t exercise, watch your weight, or take a multivitamin, then eating red meat will maybe almost probably for sure increase your risk of heart disease. With evidence like that, who needs evidence?

Flying like the Wallenda family in the face of decades of concrete and well-proven assumptions that the reason we should avoid red meat is because of its saturated fat and cholesterol content, the daring young scientists who discovered the relationship between TMAO and heart disease “suspected that saturated fat and cholesterol made only a minor contribution to the increased amount of heart disease seen in red-meat eaters” [meaning, that is, red-meat eaters who are white, well-educated, middle/upper-middle class health professionals, who smoke and drink and don’t exercise, watch their weight, or take a multivitamin; emphasis mine].

Perhaps their suspicions were alerted by studies such as this one, that found that, in randomized, controlled trials, with over 65 thousand participants, people who reduced or changed their dietary fat intake didn’t actually live any longer than the people who just kept eating and enjoying the same artery-clogging, saturated fat- and cholesterol-laden foods that they always had. (However, this research was able to determine that a steady diet of broiled chicken breasts does in fact make the years crawl by more slowly.)

Exactly how TMAO increases the risk of heart disease, nobody knows. But, good scientists that they are, the scientists have a theory. (Just to clarify, in some situations the word theory means: a coherent group of tested general propositions, commonly regarded as correct. This is not one of those situations.) The researchers think that TMAO enables cholesterol to “get into” artery walls and prevents the body from excreting “excess” cholesterol. At least that’s how it works in mice. Although mice don’t normally eat red meat, it should be noted that mice are exactly like people except they don’t have Twitter accounts. We know this because earlier mouse studies allowed scientists to prove beyond the shadow of a doubt that dietary cholesterol and saturated fat cause heart disease mice definitely do not have Twitter accounts.

Look, just because the scientists can’t explain how TMAO does all the bad stuff it does, doesn’t mean it’s not in there doing, you know, bad stuff. Remember, we are talking about molecules that are VERY VERY small and really small things can be hard to find–unless of course you are on a scientific fishing expedition.

What will happen to the American Heart Association’s seal of approval now that saturated fat and cholesterol are no longer to be feared?

Frankly, I’m relieved that we FINALLY know exactly what has been causing all this heart disease. Okay, so it’s not the saturated fat and cholesterol that we’ve been avoiding for 35 years. Heck, everybody makes mistakes. Even though Frank Sacks and Robert Eckel, two scientists from the American Heart Association, told us for decades that eating saturated fat and cholesterol was just greasing the rails on the fast track to death-by-clogged-arteries, they have no reason to doubt this new theory. And even though they apparently had no reason to doubt the now-doubtful old theory, at least not until just now—as a nation, we can rest assured that THIS time, they got it right.

Now that saturated fat and cholesterol are no longer Public Enemies Number One and Two, whole milk, cheese, eggs, and butter—which do not contain red meat—MUST BE OKAY! I guess there’s no more need for the AHA’s dietary limits on saturated fat, or for the USDA Guidelines restrictions on cholesterol intake, or for those new Front of Package labels identifying foods with too much saturated fat. Schools can start serving whole milk again, butter will once again be legal in California, and fat-free cheese can go back to being the substance that mouse pads are made out of. Halle-freaking-lujah! A new day has dawned.

But–amidst the rejoicing–don’t forget: Whether we blame saturated fat or cholesterol or TMAO, meat is exactly as bad for you now as it was 50 years ago.

Nutritional epidemiology has many shortcomings when it comes to acting as a basis for public health nutrition policy. But you don’t have to take Walter Willett’s word for it. Apart from the weaknesses in the methodology, there is one great big elephant in the nutrition epidemiology room that no one really wants to talk about: our current culture-wide “health prescription.”

You don’t have to care about or read about nutrition to know that “fat is bad” and “whole grains are good” [1,2]. Whether or not you follow the nutrition part of the current “health prescription” is likely to depend on a host of other factors related to general “health prescription” adherence, which in turn may have a much larger impact on your health than your actual nutritional choices. This is especially true because variation in intake and/or variation in risk related to intake are frequently quite small.

For example, in a study relating French fry consumption to type 2 diabetes, the women who ate the fewest French fries ate 0 servings per day, while the women who ate the most ate 0.14 servings per day, or about 5 French fries per day (i.e. not a big difference in intake) [3]. The risk of developing type 2 diabetes among the 5-fries-a-day piggies was observed to be 21% greater–a relative risk of 1.21–than the risk among the no-fry-zone ladies (i.e. not a big variation in risk).
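The arithmetic behind a relative risk like that is simple division; here is a minimal sketch using hypothetical case counts (chosen only to produce a relative risk of 1.21–the actual counts are in [3] and are not reproduced here):

```python
# Relative risk from a 2x2 table: exposed vs. unexposed, cases vs. non-cases.
# The counts below are hypothetical, picked only to yield RR = 1.21;
# they are NOT the numbers from the study cited in [3].

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk (cases / group size) in the exposed group divided by
    risk in the unexposed group."""
    risk_exposed = cases_exposed / n_exposed
    risk_unexposed = cases_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical: 121 cases per 10,000 fry-eaters vs. 100 per 10,000 non-eaters.
rr = relative_risk(121, 10_000, 100, 10_000)
print(f"RR = {rr:.2f}")  # → RR = 1.21
```

Note what a relative risk hides: in these hypothetical numbers, the absolute difference is 21 extra cases per 10,000 women–which is part of why “not a big variation in risk” matters.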

Okay, everyone knows that French fries are “bad for you.” But these ladies ate them anyway. Were there other factors related to general “health prescription” adherence which may have had an impact on their risk of diabetes?

The French fry eaters also “tended to have a higher dietary glycemic load and higher intakes of red meat, refined grain, and total calories. They were more likely to smoke but were less likely to take multivitamins and postmenopausal hormone therapy.” (They also exercised less.) In other words, the French fry eaters, within a context of a known “health prescription” had chosen to ignore a number of healthy lifestyle recommendations, not just the ones related to French fries.

If you think of our current default diet recommendation as the “placebo” (although its effects may not be exactly benign), it is clear that people who fail to comply with dietary prohibitions against red meat, saturated fats, and “junk” food like French fries may also be more likely to have other poor self-care habits, like smoking and not exercising. That poor health care habits are related to poor health is of no surprise to anyone.

Statistical people

In their statistical manipulation of a dataset, nutritional epidemiologists attempt to “control” for confounding variables (confounders), such as differences in health behavior. A confounder is something that may be related to both the hypothesized cause under investigation (i.e. French fry eating) and the outcome (i.e. type 2 diabetes). As such, it muddies the water when you are trying to figure out exactly what causes what.

When statisticians “control” or “adjust” for these confounders in a data set, they essentially “pretend” (that’s the exact word my biostats professor used) that the other qualities that any given individual brings to a data set are now equalized and that the specific factor under investigation—diet—has been isolated. Well, it has and it hasn’t. The “statistical humans” created by computer programs that now have equalized risk factors are a mirage; these people do not exist. The people who contributed the data that ostensibly demonstrates that “French fries increase risk of type 2 diabetes” are the exact same people who had other behaviors that may also contribute to increased risk of diabetes. (Please note: I chose this example, rather than “red meat causes heart disease” because there are many plausible explanations for French fries causing type 2 diabetes, it is just that you aren’t going to find evidence for them using this approach.)

Most nutritional epidemiology articles contain some version of the following statement in their conclusions:

“We cannot rule out the possibility of unknown or residual confounding.”

Meaning: We cannot rule out the possibility that our results can be explained by factors that we failed to fully take into account. Like the elephant in the room.

That this is actually the case becomes apparent when hypotheses that seem iron-clad in observational studies are put to the test in experimental conditions.

Lack of experimental confirmation

If ever there was a field about which you could say “for every study there is an equal and opposite study,” it is nutritional epidemiology–although experimental results are generally considered “more equal” than observational data. Associations that link specific nutrients to the prevention of specific diseases can be (relatively) strong and consistent in the context of nutritional epidemiology observational data, yet absent in experimental situations. Epidemiological studies suggested that beta carotene could prevent cancer; experimental evidence suggested just the opposite, and in fact smokers given beta carotene supplements had an increased risk of cancer [6]. Epidemiological studies suggest that low-fat, high-carb diets are related to a healthy weight. This may be the case, but experimental evidence shows that reducing carbs and increasing fat is more effective for weight loss [7, 8]. In one study, when participants added carbs back into their diet (the increase in calories from 2 months to 12 months is entirely accounted for–and then some–by carbohydrate), they regained the weight they had lost.*

Data from [7]

Kenneth Rothman, in his book Epidemiology: An Introduction, emphasizes the importance of applying Karl Popper’s philosophy of refutationism to epidemiology:

“The refutationist philosophy postulates that all scientific knowledge is tentative in that it may one day need to be refined or even discarded. Under this philosophy, what we call scientific knowledge is a body of as yet unrefuted hypotheses that appear to explain existing observations.” [9]

Rothman makes the point that there is an asymmetry when it comes to refuting hypotheses based on observations: a single contrary observation carries more weight in judging whether or not a hypothesis is false than a hundred observations that suggest that it is true.

If the current nutrition paradigm needs to be “refined or even discarded,” how will we acquire the knowledge we need to create a better system? How can we move away from “statistical people” towards a perspective that encompasses the individual variations in genetics, culture, and lifestyle that have such a tremendous impact on health?

Tune in next time for the final episode of N of 1 nutrition when I ask the all-important question: What the heck does n of 1 nutrition have to do with public health?

*This doesn’t mean that carbs are evil–some of my best friends are carbs–but that the conditions in a population that are associated with a healthy weight and the conditions in an experiment that lead to increased weight loss are very different.