
In the nutrition blogosphere, perhaps no phrase is more incendiary than “a calorie is a calorie.” It’s become a pithy way of encapsulating the popular notion that the problem of obesity is nothing more than the consumption of excess energy, or eating too much. When the phrase is used in the context of online micturition contests and forum trolling, it is often done in an effort to provoke those who advocate a low carb or ancestral style diet as a means of losing weight. More specifically, it is usually intended as a rebuttal to the “carbohydrate hypothesis”, a mechanistic explanation for obesity championed perhaps most vociferously by Gary Taubes, author of Good Calories, Bad Calories. The hypothesis, which Taubes lays out in the book, is that consumption of refined carbohydrates leads to hormonal dysregulation at the level of the adipose tissue, ultimately leading to excess fat accumulation (i.e. obesity).

As Taubes himself readily admits, this remains a hypothesis, and as such is still unproven, though he believes that at this point it is well supported by the available evidence. He also argues that, if the carbohydrate hypothesis is correct, focusing on the number of calories eaten is less important than the “context” in which those calories are consumed (context here meaning the putative effects of differing macronutrient ratios on fat accumulation). This is sometimes misinterpreted as him believing that “calories don’t matter” or that he’s “denying thermodynamics.” But for those who advocate an ancestral style diet for preventing obesity, this focus on the carbohydrate hypothesis completely misses the point. And attacking it as a means of discrediting the ancestral view of obesity ends up creating a false debate, as it focuses on the problem at the wrong level of analysis (and, for many, is also a straw man, since plenty of folks in the ancestral health community are skeptical of the carbohydrate hypothesis).

Much of this rhetorical wheel spinning seems to stem from the fact that, in most ancestral style diets, calories are not counted, leading critics to assert that those advocating this way of eating believe that “calories don’t matter.” The same accusation has been launched ad nauseam at Taubes, typically arising out of a misunderstanding of his views on the misapplication of the laws of thermodynamics. For what it’s worth, Taubes doesn’t argue that calories don’t matter, but rather that the energy balance equation works in both directions — focusing on inputs only ignores half of it. Yet whether or not Taubes’s explanation for obesity turns out to be right has no bearing on whether an ancestral style diet is the right approach towards maintaining a healthy amount of fat tissue.

In light of this confusion, I think it’s worthwhile to clarify the ancestral view towards body fat regulation and how calorie counting fits into that framework, at least as I see it.

1. Accumulation of excess body fat is fundamentally a problem of disordered homeostasis.

The zillions of biochemical reactions that support a working human being only unfold properly within a narrow range of internal conditions (body temperature, pH, mineral concentrations, etc.). While the external environment of a human may vary considerably, our internal environment cannot. Homeostasis is the process by which the body maintains this vital internal equilibrium.

The central coordinator of homeostasis is the hypothalamus, a tiny, almond-sized structure nestled deep in the brain. Through its position at the helm of the endocrine and autonomic nervous systems and its widespread connections throughout the brain, it can manipulate virtually every aspect of our physiology and behavior on various timescales, all in the service of maintaining homeostasis.

Included within the body’s homeostatic responsibilities is the maintenance of a stable supply of energy, in which the regulation of the fat mass (a large repository of energy) plays an important role. The accumulation of detrimental excess body fat can thus be viewed as a disorder of energy homeostasis (as could the accumulation of excess sodium or the inability to clear glucose effectively from the bloodstream).

2. Animals can maintain homeostasis most easily in the environments to which their species has been adapting the longest.

The mechanisms that support homeostasis have evolved in and been shaped by particular environments, and as such are optimized to operate within an expected range of environmental conditions (i.e. ambient temperature, energy availability, day/night cycles, etc.). The greater the mismatch between present conditions and the conditions to which a species is adapted, the greater the homeostatic challenge. An ostrich can maintain a stable body temperature in the African savannah because, as a species, ostriches have been living in this climate for a very long time. For an ostrich in Antarctica, on the other hand, maintaining temperature homeostasis would present a formidable challenge, to say the least.

In evolutionarily discordant environments (i.e. those that differ considerably from the environment of one’s evolutionary ancestors), endogenous homeostatic mechanisms will likely be insufficient to maintain stable internal conditions. Maintaining homeostasis in these settings will require technological adaptations. Continuing with the example of body temperature regulation, the endogenous homeostatic mechanisms of a human alone are insufficient for survival in an Alaskan winter. Surviving these conditions requires technological adaptations such as fire, clothing, man-made shelter and insulation, indoor heating, etc.

Technological adaptation is what has allowed our species to inhabit and thrive in evolutionarily discordant environments. Unlike the rest of the animal kingdom, which cannot live outside the bounds of its homeostatic capabilities, many of us live outside of our species’ “ecological niche” – thanks to technology.

Just as with other domains of homeostasis, our endogenous mechanisms for body fat regulation are calibrated to the nutritional environments of our hominid ancestors. Thus, when eating an evolutionarily concordant diet (in the presence of intact mechanisms of energy homeostasis), body fat is easily maintained within a narrow range. In this case, the feelings of hunger and satiety (generated by the hypothalamus) are reliable indicators of the body’s energy needs. Likewise, the more evolutionarily novel a food is (read: twinkies, soda, bread, pasta,…), the greater the likelihood there’ll be a need for technological adaptation to maintain body fat homeostasis. In these instances, our endogenous mechanisms are insufficient, and as such feelings like hunger and satiety, no longer an accurate reflection of the body’s energy needs, are unreliable and misleading.

Calories of course matter, but, like animals in the wild, you need not have any specific knowledge of them to maintain body fat within a healthful range. If you are avoiding evolutionarily novel foods, endogenous, subconscious mechanisms for energy homeostasis are sufficient to prevent excessive accumulation of body fat. As has been said elsewhere, if you are eating an ancestral diet “calories count, but why count them?”

As stated, maintaining body fat homeostasis in the setting of an evolutionarily discordant diet requires technological adaptation. In the case of body fat regulation, the most commonly employed technological adaptation is to “count calories.” With this method, foods are burned in a bomb calorimeter to determine their energy content, which is then used as a crude surrogate for the energy they add to the organism when consumed. It is an imperfect solution, but at the present time the best available technology for those who wish to eat evolutionarily discordant foods and not accumulate excess body fat.

And that’s it.

Nowhere in this argument is it stated or implied that the energy content of food is irrelevant. That, of course, would be silly. Rather, the point is that energy intake need not be consciously regulated. As you can see, though, this argument is agnostic with respect to the specific mechanisms that disrupt body fat homeostasis. Whether it’s from subversion of food reward mechanisms by nutrient-poor, hyperpalatable foods, insulin-induced local dysregulation of adipose tissue, some combination of these, or as yet undiscovered mechanisms is immaterial. Furthermore, the prescription for how to maintain body fat homeostasis does not rest on any understanding of the specific mechanisms that lead to obesity, only on the premise that an evolutionarily concordant diet is the most sensible path to achieving it. The mechanisms by which unhealthful amounts of body fat accumulate on evolutionarily discordant diets are still very much a matter of open debate and research, and, given the complexity of the subject, will likely remain so for quite some time.

What follows is a review of two important studies pertaining to stroke, both published around the same time. Each had significant implications for stroke care, but only one has made an impression on the mainstream medical community. I’ll leave it to you to draw your conclusions about why this might be.

Study 1: The Kitava Stroke Study

In 1993, Stefan Lindeberg and colleagues published a study entitled “Apparent absence of stroke and ischaemic heart disease in a traditional Melanesian island: a clinical study in Kitava” in the Journal of Internal Medicine. Kitava is an island in Papua New Guinea, and its inhabitants, the Kitavans, are one of the last remaining populations of hunter-gatherers whose diet is largely untainted by the products of modern agriculture and technology. As such, they offer a rare opportunity to examine the health consequences of the modern lifestyle, including our post-agricultural diets.

Prior to the publication of the study, Lindeberg and his colleagues had performed a systematic examination of the Kitavans’ health. As part of this effort, they attempted to find and record any cases of stroke.

They didn’t find any.

Not only did they not find any cases amongst the living residents, they didn’t find any inhabitants who could recall a deceased resident having experienced anything resembling a stroke – this in spite of their detailed knowledge of prior generations. In fact, the notion that a person could be suddenly stricken with an inability to speak or move was altogether foreign to them. Amongst the physicians on the Trobriand Islands and in other parts of New Guinea, Lindeberg’s findings weren’t surprising. They’d considered the absence of stroke in the aboriginal populations an established fact (and one that couldn’t be explained by a lack of elderly residents).

As remarkable as this sounds, these findings were not unprecedented. On the contrary, an absence of stroke amongst native populations in various parts of the world had been reported many times before. A 1944 study reviewing the records of 269 consecutive patients treated for neurological disease at a clinic in Kampala, Uganda found not a single case of stroke. Relatedly, a review of 3,000 autopsies performed in Uganda during the 1930s and 1940s showed not one ischemic stroke (and only four cerebral hemorrhages). As the Ugandan natives transitioned to modern life and diet, however, things began to change for the worse. By 1968, stroke had gone from non-existent to the most common neurological diagnosis — a story familiar to anyone acquainted with the literature on diseases of civilization.

Lindeberg’s Kitava study, then, was not an isolated finding. It corroborated the experience of so many others in the medical field who had spent time caring for hunter-gatherer populations. But it was one of the most systematic and rigorous accountings of stroke in a native society, and had the benefit of being published in a widely circulated, peer-reviewed medical journal.

Surely, then, a finding this improbable — an entire civilization free of stroke — would ripple through the medical community. After all, the potential public health implications are enormous — it’s hard to even fathom a world without stroke.

But there were no ripples. I entered medical school in 1997, four years after the publication of Lindeberg’s study. And in the next eight years of my medical training, I heard nothing of the Kitavans. Moreover, there was no mention of the diseases of civilization literature, which showed that entire classes of disease – diabetes, heart disease, stroke, obesity, cancer – were non-existent in native populations. To this day, this omission is not one I fully understand.

What I did hear a LOT about, and what continues to impact my daily life as a neurologist, was another study that would be published two years later…

Study 2: The NINDS t-PA Trial

In 1995, the National Institute of Neurological Disorders and Stroke (NINDS) rt-PA study group published a study entitled “Tissue Plasminogen Activator for Ischemic Stroke”. It was a phase III clinical trial of the drug alteplase for use in acute ischemic stroke. Alteplase, more commonly known as t-PA, is a thrombolytic agent: it converts plasminogen into plasmin, an enzyme that breaks down the fibrin meshwork of a clot, thereby dissolving the thrombi that lead to strokes and heart attacks. A total of 624 patients with an acute stroke were enrolled in the trial; roughly half received a placebo, half received the active drug. The drug had to be given less than three hours from the onset of the stroke (prior trials had shown a significant increase in serious bleeding when thrombolytics were given after this time frame).

Those who received the drug did better than those who didn’t. This was a significant and important finding, as this was the first positive trial of a drug given to improve the outcome in those suffering from an acute stroke. Following publication of the study, there was a massive effort — spearheaded by the American Heart Association (AHA) — to raise public awareness about stroke in order to maximize t-PA utilization. Stroke was rebranded as a “brain attack” to help convey the same sense of urgency as “heart attack” (the name never stuck). Hospitals around the country developed “acute stroke protocols”, their primary purpose to ensure rapid delivery of t-PA to eligible patients. Committees were formed, faster CT scanners were purchased, ER procedures were streamlined. In other words, all available resources were marshaled to maximize the potential benefit of this new drug. Hospital competency in treating stroke is now judged largely on the single measure of how many eligible stroke patients receive this drug.

It should be noted that the NINDS t-PA trial itself and the AHA response were not without their critics. After all, this wasn’t the first trial of a thrombolytic agent. Fifteen trials of thrombolytic drugs had been conducted prior to this one, each failing to show benefit. Faithful adherence to the scientific method would normally mandate a result like this be replicated before its conclusions are accepted.

Once hospital ERs around the country began using t-PA, the chorus of dissension grew a bit louder. To the physicians administering the drug, it seemed like patients receiving it were bleeding into their brains more than anticipated. Real-world studies of actual t-PA usage in community hospitals confirmed these suspicions, revealing hemorrhage rates substantially higher than those reported in the NINDS trial – high enough to shift the benefit to risk ratio away from t-PA.

Review of the trial design also reveals a major flaw – though it was billed as a double-blind study, outcome raters were not blinded to the presence (or absence) of hemorrhage on the post-t-PA CT scans (the presence of bleeding on the scan indicating the patient most likely received the active drug), an oversight that could easily tip the scales in favor of t-PA. By 2003, nearly ten years after publication of the NINDS trial, the Cochrane Collaboration (an independent nonprofit group that reviews randomized controlled trials) was still unwilling to give t-PA an endorsement for widespread use.

Other critics cited concerns over potential conflicts of interest, alarmed by what appeared to be a disturbingly cozy relationship between Genentech, the manufacturer of t-PA, and the AHA – the former having donated $2.5 million towards the construction of the association’s Dallas headquarters. The relationship between Genentech and the AHA remains a close one to this day.

In spite of these protestations, t-PA has remained the “standard of care” treatment for acute stroke since the publication of the NINDS trial. And thanks to the massive PR campaign, many in the general public are now aware that there’s a “clot-busting” drug for stroke. The common perception of t-PA – a perception still perpetuated by many in the medical community – is that if you’re lucky enough to get the drug in time it’ll bust up the stroke-causing clot and reverse the effects of the stroke. The actual data from the NINDS trial, however, paints a more sobering picture.

The NINDS t-PA Data: Separating Fact from Hype

The NINDS t-PA trial consisted of two parts. Part one was designed to determine if patients who received t-PA had early improvement. 147 patients were randomized to the placebo arm, 144 to t-PA. The NIH Stroke Scale was administered at the time of drug administration and again at 24 hours. In the final analysis of part one, there was no statistically significant difference between the two groups. In other words, contrary to the prevailing perception, t-PA doesn’t immediately reverse the effects of stroke. A stroke victim given a placebo infusion was just as likely to rapidly improve as one given t-PA (in clinical practice these days such dramatic recoveries are invariably attributed to t-PA when it’s been given, divine intervention when it hasn’t).

Part two was designed to investigate differences between the two groups at three months. In this part, 168 received t-PA and 165 placebo. Subjects were administered a variety of different functional outcome scales at day 0 and at 3 months. At the 3 month mark, differences between the two groups did emerge. Overall, there were 12% more patients with a favorable outcome in the t-PA group than in the placebo group.

So no immediate improvement and only 12% at 90 days. Sounds a bit underwhelming, right? The truth is, given what we know about brain physiology, it’s somewhat surprising that t-PA has any effect at all.

Stroke Physiology: The Inconvenient Truth

An ischemic stroke occurs when the flow of blood through a blood vessel is suddenly impeded by the formation of a clot. Blood can’t get through the vessel, and the brain tissue that vessel supplies loses its oxygen supply. A brain cell completely deprived of oxygen dies within five minutes.

That’s five minutes. As in 300 seconds. As in nowhere near enough time to recognize you’re having a stroke, call an ambulance, get to the hospital, be seen by a physician, get necessary testing, retrieve the drug from the pharmacy, start an IV, etc…

So right off the bat we realize that with stroke, trying to intervene after the fact leaves us little opportunity to modify the outcome.

Now, all hope is not lost for the would-be-stroke-heroes, thanks to a phenomenon known as the ischemic penumbra. The penumbra is a region of brain tissue surrounding the central core of the stroke whose oxygen delivery has only been partially compromised, thanks to collateral blood supply from non-occluded arteries. The size of this region may differ considerably from one stroke case to another, depending on individual anatomical variation and the location of the clot. In some cases it may not exist at all. Nonetheless, the penumbra affords a potential opportunity for meaningful intervention beyond the 5-minute mark. Saving this “tissue at risk” is the mechanism by which t-PA exerts its beneficial effects when it does help.

The Impact of t-PA

Now let’s consider the practical impact of t-PA. Presently, roughly 4% of patients who have a stroke will receive t-PA – this after a massive, decade-plus effort (discussed above) to improve t-PA utilization. The reason is that there’s a fairly low ceiling – for a number of reasons, only a fraction of patients with stroke will ever be eligible for treatment with t-PA.

Let’s do the math, then – if 4% of stroke victims receive t-PA, and if an additional 12% of those treated achieve a favorable outcome because of the drug, then with the current state-of-the-art technology for acute stroke we can expect a pharmacologic intervention to help 0.48% of all people with stroke. In other words, we can expect to help slightly less than 1 out of every 200 stroke victims.
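For the skeptical, the back-of-envelope arithmetic can be checked in a few lines (the 4% and 12% figures are the ones quoted above; treating them as exact is, of course, a simplification):

```python
# Rough estimate of t-PA's population-level impact, using the figures
# quoted in the text: ~4% of stroke patients receive the drug, and
# ~12% more of those treated achieve a favorable outcome.
treated_fraction = 0.04
absolute_benefit = 0.12

helped = treated_fraction * absolute_benefit
print(f"Fraction of all stroke victims helped: {helped:.2%}")  # 0.48%
print(f"Roughly 1 in {1 / helped:.0f}")                        # ~1 in 208
```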

Now, this is a fantastic thing if you’re that one out of 200.

But what of the other 199?

The Path to a Stroke-Free Future

Two studies.

One a study of a drug with the potential to help 1 out of 200 stroke victims at least a little bit – a study whose publication has transformed the delivery of emergency medical care in the developed world.

The other points a way towards a future free of stroke, yet whose publication went virtually unnoticed.

Based on what we know of stroke pathophysiology, it is naïve to think we can make a large scale impact on stroke with after-the-fact interventions. Prevention isn’t glamorous, and nobody stands to make a wad of cash from it. But, if we ever want to do more than just fiddle at the margins of stroke recovery, it’s the only means by which to do so.

Yet, this will never happen if we continue to ignore findings, like the Kitava study, that can lead us there. Nor will it happen if we continue to advocate prevention strategies (e.g. “low fat, low cholesterol diets”) that have no foundation in basic science and that have been summarily refuted by the scientific evidence.

You’ve probably heard the news by now. There’s a new risk factor for heart attack. Move over cigarette smoking, high blood pressure, obesity, and diabetes, and make way for… car ownership.

That’s right. In a study published in the January 11 issue of the European Heart Journal, researchers found that people who owned a car [and a television set] were 27% more likely to have a heart attack than those who didn’t. Surprisingly, this hasn’t led to a sudden spike in rental car sales or a flood of title transfers.

In fact, despite the correlation found in the study, neither the authors of the study nor anyone in the popular media who’ve reported on it have even hinted that owning a car causes heart attacks. Nor have they suggested that the correlation has to do with prolonged steering wheel clutching, toxic car fabrics, city dwelling, asphalt inhalation, road rage, or any of an almost limitless number of potential explanations. Rather, they’ve suggested instead that the real culprit behind the risk increase is something only tangentially associated with car ownership: sedentary behavior. Sedentary behavior, which has nothing at all to do directly with the act of buying or driving a car, is postulated as the hidden variable causing heart attacks in car owners.

Now let’s contrast this with how a recently published study demonstrating a correlation between red meat consumption and stroke risk is reported (we could choose just about any dietary study to make this point). Here we have the exact same type of data – folks who ate more red meat had a slightly higher incidence of stroke, just as folks who had heart attacks were more likely to own a car. In the study on stroke and red meat, however, there’s no mention of any potentially confounding hidden variables. This time, it isn’t something tangentially associated with red meat consumption that’s postulated to account for the correlation. No, despite the fact that once again we could propose an almost limitless number of potential explanations for this correlation, this time we’re led to believe that there’s a direct link between the two, and that these findings “support current recommendations to limit how much red meat people eat.”

To reiterate, when the correlation is between car/TV ownership and heart attacks, the notion of a causal link between the two factors is too absurd to even consider. In the study on red meat and stroke, however, the possibility that there could be confounding variables isn’t even mentioned.

The truth is, neither of these studies – particularly the conclusions being drawn from them – are examples of good science in action. This type of data should never be used as a basis for health advice, nor does it ever warrant a headline. It represents the lowest quality of scientific evidence – hardly a source for basic truth. At best, correlations like these can only suggest a hypothesis that can be tested further in a more rigorous fashion. The inherent plausibility of any particular hypothesis is simply a reflection of the biases of those interpreting the data.

Yet, this is the type of low quality data, and this is the type of sloppy, careless reasoning, that has led to the current prevailing wisdom about proper nutrition. This same error of causal attribution – one immediately rejected by all sentient beings as absurd in the study on heart attacks and car ownership – lies at the heart of why we’re told that minimizing animal fat and cholesterol intake is one of the keys to optimum health.

So just what happens when you do make public health recommendations based on low quality data and sloppy reasoning? This:

And in case you’re wondering if this is just because folks aren’t heeding this brilliant advice:

Human metabolism is a dizzyingly complicated process. Every time we eat, our food must first be broken into smaller components by digestive enzymes, then absorbed across the gut into circulation, then shunted into various chemical pathways so that eventually it can be burned for energy, incorporated into our tissue structure, or excreted as waste. In a system with so many moving parts, there are an almost incomprehensible number of ways in which it can go wrong. Thousands of genes are involved in encoding the machinery of this process. One little mistake in any of them and the whole thing can unravel, typically with disastrous consequences. The fact that this happens so rarely is remarkable.

But there are exceptions, classified as disorders of metabolism. These are genetic disorders, usually the result of a mistake in the DNA sequence for an enzyme that catalyzes a particular step in the metabolic process. Defects in the synthesis and breakdown of glycogen are known as the “glycogenoses,” defects in lipid metabolism the “lipidoses,” and so forth. One of the primary consequences of a missing enzyme is that it creates a bottleneck – a specific reaction in the metabolic dance can’t proceed, leading to a pathological build-up of the reactants in various bodily tissues (which can be seen microscopically).

Many times this toxic sludge of metabolites builds up in the nervous system. When it does, it leads to progressive degeneration of nervous tissue and resultant loss of function – dementia, movement disorders, seizures, muscle weakness and so forth — until death occurs from the complications of this devastated state. In Globoid Cell Leukodystrophy, just to choose a random example, the absence of the lysosomal galactocerebrosidase enzyme (due to a recessively inherited mutation in the GALC gene) results in the inability to cleave galactose from ceramide derivatives inside the lysosomes. This leads to defects in myelin production, clumps of multinucleated “globoid cells” swollen with galactocerebroside, and brain cell death. Clinically, it causes progressive loss of cognitive and motor skills, muscle rigidity, seizures, deafness, blindness, and eventually death.

To recap, in all of these neurodegenerative disorders of metabolism, the fundamental problem is a flaw in the metabolic apparatus itself. A particular metabolite can’t be cleared, and its excess becomes toxic. The trash is being taken to the street, but the trucks aren’t coming to pick it up. Eventually, the trash piles up in brain tissue, and the brain stops working as it should.

But what if the problem is that there’s just too much trash to begin with? What if, week after week, we’re bringing more than the trucks can carry?

We don’t typically think of the neurodegenerative diseases of adults as disorders of metabolism, as they’re commonly viewed as fundamentally distinct from the metabolic disorders of childhood that affect the brain. As such, little research has focused on the role of metabolism in adult-onset neurodegeneration, even though the scarcity of dementia amongst the elders in modern day hunter gatherer populations should’ve long ago steered us in that direction. Yet, as discussed in my last post, we now know that at least one metabolic disorder — diabetes — contributes directly to the risk of Alzheimer’s disease, likely via the accelerated formation of advanced glycation end products (AGEs) in the brain. Defective insulin secretion and insulin resistance in the tissues means too much glucose in the blood and in the brain. That this abundance of glucose leads to Alzheimer’s disease is yet another observation beckoning us to reconceptualize Alzheimer’s as a disorder of metabolism.

But that’s not all. AGEs, along with other signs of oxidative damage, have been found lurking inside the pathology of many other adult-onset neurodegenerative disorders, including Pick’s disease, Parkinson’s, Progressive Supranuclear Palsy, Lewy Body Dementia, and ALS. Each of these involves a progressive decline in neurological function with deficits that may include dementia, movement disorders, seizures, muscle weakness and so forth. And each is associated with pathological buildup of gunk that can be seen microscopically in nervous tissue.

Sound familiar?

I don’t think these similarities are coincidental, nor should they be surprising. As stated, pathological accumulation of metabolites can arise either from the inability to break them down (i.e. – trash trucks stop coming) or from consumption of particular dietary components in excess of what their obligate metabolic pathway can handle (i.e. – too much trash). Our hunter gatherer metabolic machinery can’t cope with 150 pounds of sugar or 50 pounds of industrial vegetable oil a year (in fact, likely only a small fraction of this). And, just as in the metabolic disorders of childhood, that excess becomes toxic, with results that are just as devastating.
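For a sense of scale, those yearly figures can be converted to daily amounts (a rough conversion; the yearly totals are the approximate ones quoted above, and the standard 4 kcal/g and 9 kcal/g energy densities for carbohydrate and fat are assumed):

```python
# Convert the yearly intakes quoted above into daily amounts.
GRAMS_PER_POUND = 453.59

sugar_g_per_day = 150 * GRAMS_PER_POUND / 365  # ~186 g/day of sugar
oil_g_per_day = 50 * GRAMS_PER_POUND / 365     # ~62 g/day of vegetable oil

# Standard energy densities: ~4 kcal/g for carbohydrate, ~9 kcal/g for fat
print(f"Sugar: ~{sugar_g_per_day:.0f} g/day ({sugar_g_per_day * 4:.0f} kcal)")
print(f"Oil:   ~{oil_g_per_day:.0f} g/day ({oil_g_per_day * 9:.0f} kcal)")
```

Nearly half a pound of sugar a day, in other words – a load no hunter-gatherer metabolism ever had to contend with.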

According to epidemiological data, those with diabetes have anywhere from a two to five-fold increased risk of Alzheimer’s-type dementia (AD). I’ve known of this link for a while, but have had my doubts about its validity. To me, it was equally plausible that these supposed cases of Alzheimer’s were just misdiagnosed cases of vascular dementia. Diabetes unequivocally leads to widespread atherosclerotic disease, including in the vascular beds supplying the brain, and “vascular” dementia is a clear consequence of this process. In these cases, though, the pathology is distinct from dementia of the Alzheimer’s type, as are the typical clinical features. But few patients receive the type of evaluation that can tease out these differences, and misdiagnoses commonly occur. Any correlations based on epidemiological studies that use this diagnostic data, then, are suspect.

The other problem with this correlation is that our pathogenetic model for Alzheimer’s doesn’t account for it. Ever since Alois Alzheimer presented the case of Auguste D in 1906, senile plaques and neurofibrillary tangles have been recognized as the pathologic signature of Alzheimer’s disease in the brain — the former clumps of beta-amyloid protein, the latter “paired helical filaments” of hyperphosphorylated tau.

Based on this pathology, the prevailing idea was that some intrinsic abnormality in one of these proteins was the driving force behind AD. As such, the central debate within the Alzheimer’s research community in recent years revolved around whether we should be trying to figure out how to keep beta amyloid from clumping or tau from phosphorylating, with the amyloidists and tauists arguing over whose piece of the funding pie should be larger. Nowhere in these competing models was there a mechanism by which impaired glucose clearance (i.e. diabetes) led to the Alzheimer’s pathology. So either both the amyloidists and the tauists are wrong, or the connection between diabetes and AD risk is spurious.

Supposing for a moment that the diabetes-Alzheimer’s link is legit, just how might one lead to the other? The fundamental problem in diabetes is the inability to adequately clear glucose from the bloodstream, and complications arise through the formation of “Advanced Glycation End Products” (AGEs) — a glucose molecule, left hanging around in the circulation for too long, ends up sticking to proteins in bodily tissues, disrupting their structure and function. Any sugar molecule (i.e. not just glucose), in fact, may react with a protein and form an AGE. The more AGEs, the greater the tissue damage, until you end up with widespread tissue destruction — particularly in places where cell turnover is low (nerves, retina, kidney). Conceivably, the accumulation of AGEs in the brain could result in cognitive decline. But what does this have to do with those plaques and tangles that are the sine qua non of the Alzheimer’s brain? We still don’t have a mechanistic connection between hyperglycemia and the AD pathology.

AGEs and Alzheimer’s

More recent investigations have revealed that there’s a little more to the Alzheimer’s pathology than we initially thought. As it turns out, when you look a little closer at the contents of both beta-amyloid plaques and neurofibrillary tangles, guess what you find?

Advanced Glycation End Products.

You also find receptors for AGEs (known as RAGEs) on the surface of diseased brain cells — receptors that are now known to play a role in oxidative damage. Strengthening the case even further is the fact that AGEs have been shown to both stimulate beta amyloid production and induce tau phosphorylation.

So, here we have several lines of converging evidence suggesting that AGEs are a significant — if not essential — component of Alzheimer’s pathogenesis. Plaques and tangles may well be the final outcome of a process first set in motion with AGEs. Such a story would provide us with the mechanistic connection we need between hyperglycemia and AD pathology. But, then, what about those plaques and tangles in non-diabetics? Does this AGE-incorporated model have any relevance for those with intact glycoregulation mechanisms?

Glycation without the Glucose

The following study was reported in the October 5th issue of the journal Neurology. It was probably read by a small handful of people, and certainly nobody in the popular media appreciated its potential significance. But it is the first prospective study I know of to investigate the effects of AGEs on cognitive function in non-diabetics.

The study involved 920 people without dementia, roughly half of whom were diabetic. At the start of the study, subjects were divided into three groups according to the levels of pentosidine in their urine. Pentosidine is a well established marker of AGEs — the more pentosidine in the urine, the more protein glycation that’s occurring in the body. Several measures of cognitive function were recorded at the beginning of the study and over the ensuing nine years.

At the start of the study, all groups — low, moderate, and high pentosidine — were equal in their cognitive scores, each averaging around 90 on a 100 point scale. At the nine year mark, however, things were no longer the same. The low pentosidine (surrogate for low AGEs) group had declined 2.5 points over that interval, the moderate group 5.4, and the high group 7.0. The p-value for these between group differences was <0.001. In sum, elevated AGE levels at the onset of the study, irrespective of diabetic status, predicted cognitive decline.

This study suggests two important things. First, diabetics with only low-level protein glycation (presumably from adequate pharmacologic control of blood glucose) don’t have any heightened risk of cognitive decline compared to non-diabetics — the path to cognitive decline (and presumably Alzheimer’s) in diabetics thus appears to run through hyperglycemia and AGEs. Second, non-diabetics with high levels of protein glycation have the same risk of cognitive decline as diabetics. In other words, if we’re trying to avoid cognitive decline and Alzheimer’s, minimizing AGEs is just as important for those without diabetes as it is for those with it.

Well Isn’t That Special???

So, how the heck does a person without diabetes avoid AGE accumulation in the brain? After all, if it wasn’t glucose causing all that glycation in those normoglycemic non-diabetics, then just what exactly was it?

As it turns out, there’s more than one way to glycate your bodily proteins. And if you’re not diabetic and are thus adequately handling your dietary glucose load, then glucose isn’t likely a major culprit. Fructose, on the other hand, is about ten times more likely than glucose to form AGEs. And, unlike glucose, which can be metabolized in all cells of the body, fructose (like other toxins) can only be metabolized in the liver. The average American, consuming an almost unfathomable 150 to 180 pounds of sugar (which is half glucose, half fructose) a year, easily overwhelms his or her liver’s capacity to metabolize fructose, leaving it free to glycate like mad.

Not that we really needed another reason to avoid sugar, or, more specifically, fructose. Given that it’s already implicated as a major contributor to the pathogenesis of obesity, fatty liver disease, insulin resistance, diabetes, and cancer, we’d have to be delusional to still cling to the notion that it’s just an “empty calorie”. But Alzheimer’s, too? Could it really be a primary instigator of this wretched, self-robbing disease of the mind?

Back when Dr. Alzheimer first presented the case of Auguste D in 1906, “presenile dementia” was a medical curiosity, scarce enough to warrant an anecdotal case report. Since that time, the average American’s sugar consumption has more than tripled. And 5.4 million of those Americans now suffer from Alzheimer’s disease.

Three years ago, if you’d told me I’d be starting a blog about diet and nutrition, I probably would have thought you were nuts. I’m a Neurologist, and my primary interests lean towards disorders of higher cortical function (language, memory, learning, etc.). That said, I do tend to be intellectually restless, and the internet is filled with intriguing diversions. Such was the case with my own detour into the topic of ancestral health and nutrition, which began a couple of years ago with an innocent encounter with Kurt Harris’s Archevore (then “PaNu”) blog. When I first stumbled upon it, I had no idea that my entire conception of chronic disease was about to be turned upside down. But his ideas were too compelling to ignore, despite being completely at odds with what I thought I knew about diet, nutrition, and disease. For years — since early in medical school I suppose — I’d generally accepted the conventional nutritional dogma as truth, which was that a low-fat, low-cholesterol, high-carbohydrate diet was the cornerstone of healthy eating. This is generally treated in medical circles and popular culture as established, proven fact.

But it isn’t.

In fact, it’s completely, utterly, horribly wrong. And it has created a public health nightmare.

Like many who’ve followed along this path, I was profoundly influenced by Gary Taubes’s Good Calories, Bad Calories, a book that may one day be widely regarded as the spark that ignited a paradigm shift. It is an eye-opening and disheartening account of how science — in this case the science of nutrition — can be perverted by the corrupting influence of hubris, greed, carelessness, corporate interests and hasty, misinformed public policy making. It also so thoroughly dismantles the saturated fat/cholesterol/heart disease hypothesis that it is impossible for anyone with a half-open mind to come away believing any shred of it.

Yet, the fat/heart disease hypothesis is at the heart of mainstream nutritional dogma. It is the foundation upon which everything has been built for the past forty years, and almost all research published since the hypothesis gained widespread acceptance has been interpreted through the distortion of its lens. Without it, we wouldn’t have the public health disaster that is the USDA food pyramid, nor would we find ourselves in the midst of an ever-expanding epidemic of diabetes and obesity.

In spite of how far afield we’ve gotten ourselves, I remain confident that in the end good science will prevail. At some point, the mainstream nutritional edifice has to collapse under the weight of the evidence against it — a process that in some respects has already begun. It is my hope that the internet can accelerate this process. The more voices of reason and thoughtful dissension there are, the faster this will happen.

Which in the end is how a Neurologist starts a blog about nutrition.

Righting the Ship

Thankfully, we have a very reasonable guiding principle from which to re-build our foundation. It is a principle that has been conspicuously absent from the field of nutrition over the past several decades, despite the fact that it has been instrumental in fueling the advances that have been made in the biological sciences for over a century. It is the principle of evolution by natural selection.

For over two million years, the human species has roamed planet Earth. And for the vast majority of that time, the human diet consisted of the animals we killed and the plants we foraged. Those who thrived on these foods lived long enough to pass their genes on to subsequent generations. Those who didn’t perished. In this manner, the human genome became exquisitely adapted to the diet of the hunter-gatherer. Not surprisingly, humans on this diet are lean, fit and free of chronic illness. They are physical specimens worthy of our species’ position at the top of the food chain.

The diet of our hunter-gatherer ancestors didn’t include wheat.

It didn’t include refined sugar.

It didn’t include vegetable and seed oils.

These items were introduced only very recently in the course of human history — roughly 10,000 years ago — through the adoption of agriculture. Ten thousand years is a blip on an evolutionary scale, nowhere near enough time to adapt to such a radical change in our internal metabolic environment. Moreover, there is minimal selection pressure for such adaptation to occur, since the diseases wrought by the modern diet do not affect fecundity. And the health consequences of this transition are clear, as demonstrated by numerous observational accounts of modern day hunter-gatherer societies who transition to a post-agricultural diet. The very same diseases that strain our bloated healthcare system — diabetes, heart disease, obesity, hypertension, cancer — are all a consequence of this nutritional transition. They are “diseases of civilization,” and as such are all entirely preventable.

Alas, preventing them requires that we first correctly identify their root causes. The evolutionary perspective provides us with an ideal framing device for doing so. It is a perspective supported by sound a priori reasoning, compelling observational evidence, basic science, and an impressive and ever-expanding collection of anecdotal experiences.

Naturally, I’m particularly interested in the diseases of civilization that affect the nervous system, which include (but are likely not limited to) Stroke, Dementia (Alzheimer’s included), Multiple Sclerosis, Parkinson’s and Migraine. I find it both maddening and exhilarating to think that all the suffering wrought by these illnesses is largely optional. This will be the primary focus of this blog: employing an ancestral health perspective to understand the ways in which we can preserve and protect our trillions of neuronal connections from disease, and maintain optimal brain function. In other words, how to save our synapses.