Are ladies' hips compromised?
Women aren't called broads for nothing. On average, the dimensions of the pelvis that comprise the birth canal (linked to broader hips) are larger in women than in men, and not just relatively but absolutely. Nor is this just a U.S. phenomenon; it's species-wide (1).

There is no better explanation for this than it's due to selection for successful childbirth.

But somehow with the combination of classic biomechanical theory, plus obvious performance differences between the sexes, it has become ingrained in our thinking that wider hips make women athletically inferior to men. In this line of thinking, the male pelvis is the human ideal because it's part of a superior athlete's body. The female pelvis, therefore, is second-rate--compromised for necessary reasons to do with childbirth.

But recent research by Anna Warrener--which she was so generous to contribute to our paper--shows that hip breadth fails to predict the biomechanical values that are used to calculate walking and running economy. There's support, too, from prior studies that used different or less complex models. So the notion that wide hips are worse at walking and running is not supported by current evidence.

He must be the fastest swimmer alive because of his man hips.

Not only does Anna's research call into question whether women's hips are to blame for our failure to dominate sports, it also casts doubt on the idea that slightly wider hips, ones that could better accommodate a neonate or even birth a more developed neonate (i.e. one that is more precocial, like all the other primates), are being selected against in favor of proper walking and running ability.

Of course, there could be alternative explanations. Selection could be keeping wide hips from getting wider because of some yet-to-be-understood horrible side effect of too-wide hips. It could be that hips any wider than we have now would increase the stress on the hips, knees and ankles to the point of immobility. It could be that the soft tissues of the pelvic floor would be stressed beyond their mechanical properties and strained to failure. It's also possible that pregnancy itself requires a narrow-enough pelvis to carry the fetus above it, and that getting any wider down there in the swimsuit area would mean the fetus literally falls out before it's ready. But to my knowledge, we don't have a good understanding of any of this, at least not in evolutionary terms.

Given research like Anna's, it's much harder to support the idea that the ladies' pelvis now sits at a perfect balance, with childbirth keeping it exactly wide enough and bipedalism keeping it exactly narrow enough (despite all the variation).

But that's--as I'm finding more and more with this research--what many people support. There's this thinking that humans are presently at this perfect balance and that everything would fall to pieces if we weren't. From the childbirth side of the scale it's true: Gotta be big enough! (But that's true for all animals that give birth through a bony birth canal.) However, it's not clear why it's true from the bipedalism side of the balanced equation.

And if it turns out to be true that narrow hips do not contribute to male domination in sports (assuming male sport domination is on mother nature's radar), then tell me: what is ideal about the male pelvis? Maybe you never thought about it that way, but by assuming that the female is compromised, the "ideal" status for males is implied.

Neither the existence of sexual dimorphism in pelvic dimensions, nor anything else that I can think of supports a tradition of placing suboptimal value on the female form. The female births the babies so if (iff) there’s an "ideal" it’s female. Selection maintains its adequacy for locomotion and for childbirth. If it didn’t, humans would have gone extinct.

To some that may still mean the female pelvis is compromised. To me, it's a multi-tasker and a good one.

Always look on the bright side of life. But up to a limit, please.
A popular reaction to our paper is, But why the tight fit at birth? It's impossible to ignore! Why should childbirth be so difficult?
Answers of "because the baby is big and the birth canal is not" or "it's a coincidence that might mean nothing because clearly we overcome it just fine" or "I like to think it's just a coincidence that my finger fits perfectly into my nostril" ... these sorts of replies rarely appease a protester.

When I imagine what it was like to first hear about the obstetrical dilemma back in the 1950s and '60s when it was first suggested, here's what I think my reaction would have been:

Hooray! The pains of childbirth (2) and the curse of helpless babies (3) are no longer Eve's fault but Evolution's! This is great. Eve was framed! Point for evolution AND point for feminism. Woot!
And what I've been yapping away about for four posts is not refuting that evolution's the process behind all this. Of course it is. Everything in biology is either evolution or it's magic.

However, evolution isn't always unicorns and double rainbows. Sometimes life just sucks.

For example, I'm going to die. I could look on the bright side and say that my decomposing carcass will nourish myriad life forms in the complex web of life that lives beyond my death in amazing and beautiful ways. And I often like to think about my molecules living on in a narwhal or a mango tree. But c'mon. Death sucks.

That's an extreme example, but I sense a similar need to make lemonade out of painful and dangerous labor and needy babies. There must be some good reason...

As if the simple observation ...that childbirth works warts and all... is insufficient reason.

With everyone who was born to a mom who was born to a mom who was born to a mom, etc..., with all the billions of us here today, why are we refusing to accept human reproduction as adequate?

We are nothing less than a raging evolutionary success, just like nearly everything else that is alive right now.

Instead, the downsides to reproduction mean to some that it's deficient, leading them to seek reasons or evolutionary upsides: It's okay ladies, childbirth sucks because humans have such big wonderful brains! It's okay ladies, childbirth sucks so we can walk and run properly! It's okay ladies, babies are so needy and helpless so they get out into the environment where there's proper stimulation for learning and development!

But why do we need any other reason or evolutionary upside than the cute little bundle of joy?

2. The Book of Genesis (See where it all goes wrong after Eve first ate fruit with the serpent then ate it with Adam.)

3. Augustine, bishop of Hippo, the influential fourth-century Christian, called upon the fact that infants are born helpless to support his description of the sinful, suffering, terrifyingly vulnerable natural state of the human species. In Pagels, Elaine (1988) Adam, Eve and the Serpent. New York: Vintage.

***

And for your Friday happiness...this never fails to crack me up into all kinds of stitches... Jeff Tweedy reads "My Humps" (the song that my title of this post riffs on) by the Black-Eyed Peas.

Thursday, August 30, 2012

To get up to speed, click on Part 1 here and Part 2 here to learn about the paper I'm writing about below...Or read it for yourself in early view here at PNAS.

There have been some very personal reactions to the press that came with our recent paper on the evolution of human gestation length.

And I don't mean this kind:

I mean the what about my short/ long/ weird pregnancy? kind.

Result of googling "weird pregnancy"

This research has always been wrapped up in questions about human variation and even draws upon observations of human variation in gestation length. So I'm not surprised it's causing people to reflect on their own experiences. And I'm also not totally surprised because I've been on the planet long enough to know that if you claim to know anything about pregnancy, you get all the stories.

But I didn't fully anticipate how strongly our work about humans as a species would be seen as work about "me." I guess we're only human.

So today's post is for all the people who read media about our paper and are dying to know what it's got to do with their own pregnancy.

Some things first.
1. I see the world through evolution goggles. Take that as close to literal as you can.

2. I have more scholarly experience with skeletonized (dead) and fossilized (extremely dead) humans than living ones.

3. I am not trained in medicine or health sciences.
4. I will not give medical advice.
5. I do not know what doctors are, or should be, telling pregnant women about eating and exercise.
6. It took me five years to write this paper from first notes to publication and I needed the help of brilliant experts to make it as strong as it is. I do not expect to fully appreciate its implications on the week it is published--not for human evolution, not for pregnant human mothers, not yet! If you have ideas... go on with your bad self and test them! I'll try to do the same.

Here we go, then...

How to apply an evolutionary hypothesis about gestation to your pregnancy

#1 thing to think about.

Evolution is everything about you, but it is not all about you.

When reports of our research say "moms" we're not talking about you in particular. We're talking about "moms" in a general comparative evolutionary context, species-wide, primate-wide, mammal-wide.

#2 thing to think about.

The EGG hypothesis explains species-level phenomena

Many evolutionary papers like ours are about understanding species level phenomena and comparing differences and similarities between species to better understand those phenomena, to explain whether patterns exist and, if they do, how or why.

So using the EGG hypothesis to explain why you gestated 9 days past your due date is a little bit like this: Try using the broad ecological and biological rules and patterns that explain variation in body size across mammals to explain why Fred the elephant is 9 cm taller than Frank the elephant. That's a challenge. That's what you're attempting to do if you read our paper (or reports on it) and think of yourself first rather than your species.

Here's another way to think about it. You might have seen our paper described as finding, "Metabolism, not the hips, limits gestation." Metabolism might get you thinking of yourself, but the hips hypothesis (obstetrical dilemma; OD) never did, right? I could be so, so wrong, but nobody thinks that the fetus can sense when its head or shoulders are about to be too big to fit through the birth canal and then initiates labor so it can escape. Nobody thinks that the mom's body can detect when the baby is about to get too big to pass through her birth canal and initiates labor in response. Nobody really thinks these sorts of mechanisms exist in mothers, do they? (It's possible, but I don't know of any literature suggesting it.) So the hip constraint hypothesis (OD) was never about individuals; it's about our species over evolutionary history, with hips shaping our gestation length to be just right for babies to escape in time. Generations over deep time... that's where your brain needs to be with this EGG idea too.

Sure, we need to consider individual human variation, like yours and mine. To formulate the EGG hypothesis we drew heavily upon Ellison's (2001; and in our paper) metabolic crossover (MC) hypothesis for the timing of human birth: Babies are born when they begin to starve in utero. This happens when the needs of the fetus surpass the mother's ability to meet them or, in other words, cross over to become larger than what the mother can provide. Labor is then triggered and carried out by a complex biochemical process. Ellison marshals several lines of evidence for this in his book.

The MC is very much about individual within-species variation, and it's possible that the MC explains all individual variation in gestation length among humans; however, that's uncertain right now. Since it's specific to the biochemical pathways of humans, the details of the MC don't apply to other species with different physiologies. However, the idea that human gestation is limited by the mother's metabolism (the cornerstone of the MC) is what EGG applies to human/hominin evolutionary history and to gestation in other primates and other mammals as well, since a mother's body size (a nice proxy for metabolism) predicts fetal size and gestation length across mammals. This is really not news to a lot of researchers, considering the body of research supporting it.

It's a useful method in evolutionary biology to look at variation within a species and use it to hypothesize why variation exists between species. That is what we have done with EGG. Mothers' body sizes differ between species, like, say, humans and orangutans, and so do their metabolic traits! EGG suggests that variation in metabolism between species explains variation in gestation length. It predicts that species do not exceed their species-specific metabolic ceiling during pregnancy. It will be exciting to find out whether some species give birth well before they reach their metabolic capacity!
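The comparative scaling logic here can be sketched numerically. This is a toy illustration only: the allometric function, its constants, and the example masses are placeholders of mine, not fitted values from our paper or any dataset; the only borrowed idea is that maternal body size, as a proxy for metabolic capacity, predicts gestation length via a rough power law.

```python
# Toy sketch of comparative allometry: gestation length as a rough
# power-law function of maternal body mass. The constants a and b are
# illustrative placeholders, NOT values from the paper or literature.

def predicted_gestation_days(mass_kg, a=65.0, b=0.25):
    # Simple allometric form: gestation ~ a * mass^b.
    return a * mass_kg ** b

# Bigger mothers (a proxy for greater metabolic capacity) support
# longer gestations under this kind of scaling rule.
for species, mass in [("mouse", 0.02), ("human", 60.0), ("elephant", 4000.0)]:
    print(species, round(predicted_gestation_days(mass)), "days (toy estimate)")
```

A real test of EGG would fit such a curve to measured metabolic and gestation data across species; the point here is only the shape of the reasoning, taking within-species logic (metabolic ceilings) out to between-species comparison.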

#3 thing to think about.

Evolution is about common ancestry and change over time. "Ideals," "optimization," "standards," "greater value in this form, lesser value in that one"... these do not exist in nature except in our minds.

You worrying that you gestated too long or too little compared to the species average is a bit like you worrying that you're shorter or taller than average, have a larger or smaller head than average, have more saliva than average, or that you can't intentionally fart. Stop worrying about your normal variation. Variation exists because it works. There's safe wiggle room around most traits and sometimes there's even full-on spasmodic dancing room. We'd be extinct if there wasn't any room for variation in how to survive and reproduce. Celebrate your weirdness, your slightly long healthy gestation, your slightly short healthy gestation, your big healthy baby, your small healthy baby, your freckles, your asymmetrical face, your hairy knuckles, your lack of wisdom teeth, your pterodactyl toes. Who cares! If life's getting on with your weird ass, then you can certainly get on with life.

Further, it helps if you don't require EGG to be all about adaptation. It could be. But it's easier to think of it as just the way it is. Mothers can only gestate so long. Period. The mechanism that initiates labor based on those metabolic cues (MC)... totally adaptive! The process the EGG explains? Not really ... a limit's a limit! How could it surpass it? It would be physiologically impossible. Adaptive ideas aren't necessary for EGG unless it's somehow adaptive to keep the fetus inside mother right up until that threshold. Which is possible. But it could also just be the only way to trigger labor. And so we're back to the EGG being just how it is.

***

So how should you apply this evolutionary hypothesis to your pregnancy?

It sheds light on why it's difficult to give birth. It sheds light on why babies seem so helpless compared to other primates.

But regarding your specific individual details that differ compared to other human mothers and their babies? Please talk to your doctor who's your main brain on this. And read read read read read, if you're interested.

There are some pretty cool cultural and philosophical implications of our paper. I'll save those for tomorrow's post.

References
Ellison, P. 2001. On Fertile Ground: A Natural History of Human Reproduction. Cambridge: Harvard University Press.

Wednesday, August 29, 2012

Conventional wisdom has it that pharmaceutical companies won't invest in drugs for rare, or 'orphan' diseases because it's just not profitable. Many diseases are rare and the cause unknown, but there are some rare diseases for which enough is known that, in principle at least, one can envision developing a targeted medicinal approach. Many of these are also known to be, or seem to be genetic, and that at least leads to the idea of a targetable problem for Pharma. The costs of the research can be so high that pharmaceutical companies just don't want to wade in, or this has been the usual view. But, in an unusual twist on the story, it turns out there's some reason for optimism for those with one of the 8000 or so such diseases, with 250 more discovered every year.

Rare diseases are defined by the National Institutes of Health Office of Rare Disease Research as those that affect fewer than 200,000 people in the US. Some, far fewer than 200,000. A new study, reported in Medical Marketing and Media, and published in the July Drug Discovery Today, suggests that in fact there is money to be made in rare disease pharmaceuticals. And that's potentially good news for a lot of people.

The government offers an "Orphan Drug Designation" program, which allows tax credits for what can be prohibitively expensive R&D for drugs for rare diseases. In addition it offers grants to cover costs, waived FDA fees, which can be high, and the promise of seven years of exclusivity. Granted, there's a lot that's unsavory about the workings of the pharmaceutical industry, but patents, of course, are the prime protection that pharmaceutical companies have for recouping R&D costs and eventually profiting from a drug. And, given that most orphan drugs are novel, "biosimilars" are not as much of a threat when the patent expires as they are for more popular compounds such as Viagra or Lipitor, say.

And, the MMM piece points out that marketing to a small target population is much cheaper than marketing to the population at large, that clinical trials can be less expensive, and that the orphan drug market has in fact grown at a higher rate than the drug market in general.

There's a lot to be said against the pharmaceutical industry, but that's not our point today. Of course the bottom line is the primary consideration when it comes to developing new drugs, but if it really does become financially attractive to develop new drugs for rare diseases, that's good news for a lot of people. The scientific challenges are still real, but at least there's motivation to take them on.

Now to hope that it becomes profitable to invest in research into drugs for another and much larger group of forgotten people, those with "neglected tropical diseases."

Monday, August 27, 2012

Update (Aug 30, 6:41 am): Paper's up. Here.

Update (Aug 28, 3:32 pm): Jeepers, if I'd known readership (err, clickership) was going to jump way above normal with this post, or that writers would lift quotes from here, I'm sure I would have crafted it better.

Note: We're past our embargo, so I'm posting this now even though it doesn't appear that the article is posted on-line yet. I'll update this post and link to it once it is.

Some colleagues and I have a paper this week in early view at PNAS (1). I already told a big part of the story here.

In our paper, we show how weak--given current evidence--the popular obstetrical dilemma hypothesis (OD) is for explaining human gestation length. And, we offer up an alternative hypothesis as well.

The EGG hypothesis
What limits fetal growth during pregnancy? The OD says it's the pelvis, implying a constraint unique to bipedalism. But the EGG hypothesis suggests that the primary constraint on fetal growth and gestation length is maternal metabolism (energetics, growth, gestation). Mothers give birth when they do because they cannot possibly put any more energy into gestation and fetal growth. And the data available on pregnancy and lactation metabolism in humans show that right around 9 months of gestation, mothers reach the energetic throughput ceiling for most humans.

Here's Herman's Figure 3 showing the EGG for humans, plotted with real metabolic data. Circles are the offspring, squares are the mother. Notice how fetal energy demands increase exponentially as the end of a normal human gestation period approaches. To keep it in any longer, mother would have to burst through her normal metabolic ceiling. Instead, she gives birth and remains in a safe and possible (!) metabolic zone.

The starred dot is a human infant at the developmental equivalency of a newborn chimpanzee. This is the thought experiment that Stephen Jay Gould famously wrote about: that's the age you'd have to birth a human baby for it to be like a newborn chimp, since we're born more helpless than chimps are. Keeping a fetus in this long--that is, adding 7 or more months to our gestation--would be physiologically impossible because it would require a mother to exceed 2.1x the basal metabolic rate, bursting through the ceiling for most humans. We actually gestate as long as or maybe a little longer than you'd expect for a primate or a mammal, not shorter! So our relative helplessness at birth indicates how much more neurological growth we have to achieve after birth than chimps and our other relatives.
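For readers who like to see the logic mechanically, here's a minimal sketch of the ceiling argument. Everything in it is a toy: the demand curve and its constants are made up and tuned only so that demand crosses the ceiling near month 9; the one ingredient taken from the figure is an exponentially rising fetal demand meeting a fixed maternal ceiling of about 2.1x BMR.

```python
import math

# Toy model of the EGG ceiling argument. All constants are hypothetical,
# chosen so that total demand crosses the ceiling near month 9, echoing
# the human data described above. Units are multiples of maternal BMR.

BMR = 1.0
CEILING = 2.1 * BMR  # approximate maternal metabolic ceiling

def total_demand(month):
    # Mother's baseline load plus an exponentially growing fetal demand.
    return 1.2 + 0.02 * math.exp(0.43 * month)

def crossover_month():
    # First month at which demand exceeds the sustainable ceiling,
    # i.e. when (in this toy) birth must happen.
    month = 1
    while total_demand(month) <= CEILING:
        month += 1
    return month

print(crossover_month())           # 9 in this toy model
# A chimp-equivalent gestation (~16 months) would demand far more than
# the ceiling allows, which is the problem with Gould's thought experiment:
print(total_demand(16) > CEILING)  # True
```

The real figure uses measured metabolic data, not this curve; the sketch only shows why "a limit's a limit."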

The EGG is a more general incarnation and a broader application of Peter Ellison's "metabolic crossover hypothesis" for the timing of human birth. The EGG branches out beyond our species, considering humans to operate within the physiological confines of other primates and mammals. But comparable data for other species, for testing the EGG, are not yet available to our knowledge.

Why do we grow babies that seem too big to fit through our birth canals? A strong hypothesis is that our diets have changed radically compared to most of our evolutionary history. Many humans have constant and easy access to high-calorie foods while pregnant, and they can grow bigger babies over longer pregnancies. Very much related to this idea, check out Herman's recent NYT article about how our energy intake affects our health: "Debunking the Hunter-Gatherer Workout."

***

We named the hypothesis for ease of communication, not because we're eggomaniacs. We were tempted to call it HAM (humans are mammals) but felt that EGG better described the idea and was also adorable considering how babies are made.

HAM and EGG, or EGG and HAM, to me, is the ideal name but try saying that without going all Dr. Seuss on a wumbus full of thneed-suited who-scientists.

And that goes for here too. Your pop culture references best remain R-rated if you're to retain an ounce of R-word. And because it's just so enlightening, we'll continue to employ the very mature and refined Lebowski theme from Part 1 in our discussion here.

Call me Maude.

“The [species] abides.”
Part of the trouble I and others have with the obstetrical dilemma is this: We do just fine in the face of the tight fit at birth. Just because there's a tight fit, just because childbirth is terrifying, just because it's not an easy or enjoyable experience, that's not necessarily a "bad" thing evolutionarily. Clearly it's the opposite. It's a good thing. We're here to think about it! It can't possibly be "bad" if we keep having babies despite the hellishness of childbirth. This perspective was one of the contributions of "The obstetrical dilemma revisited" (2): Our behaviors, our aiding of women during childbirth, have probably reduced selection pressures against the tightness of fit, or other contributors to childbirth difficulty and danger. The species abides.

When you look at childbirth not as a biological failure, or as God's plague on lascivious women invited by Eve, but when you see it instead as a raging success, the obstetrical dilemma hypothesis is much easier to doubt.

"Say what you like about the tenets of [Natural Selection], Dude, at least it's an ethos."

The widespread popularity of the OD may well be rooted in its adaptationist appeal, where nonoptimality (e.g. human altriciality, or helplessness and relative underdeveloped-ness at birth) is explained as a contribution to the best possible design of the whole (e.g. big brain and efficient bipedalism). Gould and Lewontin (3) famously criticized the “adaptationist programme” by cautioning that “organisms must be analyzed as integrated wholes” that are “constrained by phyletic heritage, pathways of development, and general architecture” and that “the constraints themselves become more interesting and more important in delimiting pathways of change than the selective force that may mediate change when it occurs.” They faulted this approach for failing to “consider alternatives to adaptive stories” and for its “reliance on plausibility alone as a criterion for accepting speculative tales.”

From this perspective, it's inappropriate to root the evolution of human altriciality in a compromise between adaptations for big brains and adaptations for bipedalism when there are most likely more basic, conserved, phyletic constraints on pathways of development and general architecture (e.g. gestation, pregnancy and fetal growth) at play.

“Yeah, well, that’s just like, uh, your opinion man.”

Over the last five years as I've been thinking about this, specifically, I've had a teeny tiny bit of resentment creep up now and again towards the field that coaxed me into buying the OD as dogma. But I have nobody to blame but myself! A hypothesis is just that, and I swallowed it whole without doubt partly because it's a cool idea and partly because an alternative idea just wasn't as well-known yet! Our paper is not attacking anyone, despite the guilt we've induced in folks who have been treating the OD as fact and teaching it to hordes of students for the last 50 years. To those researchers and teachers who came before us, we're grateful! And to anyone who thinks of our paper as a gotcha or an attempt at one: Please remember how science works and how knowledge accumulates. That's all this is. It's just a little more hyped because it's about humans, not sea squirts.

The OD is not dead. It's just been put in a less omnipotent place. The heaviest burdens should always be on supporting hypotheses of human exceptionalism; we should never default to them. Humans are animals/mammals/primates/hominoids, and only when that default view fails can we claim human exceptionalism.

3. Gould, SJ and RC Lewontin (1979) The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society of London, Series B 205(1161): 581-598.

Note
Please do look to the PNAS paper to read about the EGG or to see how we've exposed the challenges to testing the traditional OD. I did not write this post to serve as anyone's sole source. If you cannot access the paper once it's posted on-line, then please email me and I'll happily send it to you.

Well, here's a new low in plagiarism. Someone lifted three years of posts from "Mermaid's Tale," through half of March, 2012, and put them all up on their own site. Not only do we get no credit there but the name of the blog has been changed and any instance of our names "updated" to Cathy Smith.

It turns out there's a lot of money to be made by lifting other people's blog content, "updating" it, adding ads, the kind that when you click on them you get a share of the profits, and then clicking on them, ad infinitum. It's a little odd to call this plagiarism because content is completely irrelevant. It's the bloggy shell that counts. The blogginess, to coin a phrase.

We won't grace the thief by giving you his name, but since we outed him on Friday he's taken the site down. But then put it back up again. So, in addition to clicking on ad links, he clearly spends a lot of time taking down blogs -- there were a ton of them on his "My Blogs" list. And then putting them up again. This guy made the sloppy mistake of using what's apparently his own name on his copy of MT, which is how we found him. We've contacted Blogger about this, hoping the site will be taken down, but who knows how this will play out. It's not costing us anything, but it is theft of intellectual property, so unacceptable.

Yes, it's risky putting your stuff online, and requires trust that this kind of thing won't happen. There are legal protections, such as the Creative Commons License, which won't prevent theft but does give bloggers recourse when it happens.

But there are also great benefits to blogging. It is an intellectual treat to have the freedom to say whatever we want to say about whatever topic we choose, outside of the usual confines of academic publishing, and to refine a point of view over time. It's especially nice to have regular readers who weigh in with comments or emails and let us know they appreciate what we're doing here. And of course to Ken and me, it is a great pleasure to work with Holly and to have her fine contributions.

So this certainly isn't enough to convince us to stop blogging. But it's pretty creepy.

Friday, August 24, 2012

The latest of the adaptive arguments for why menopause evolved appears in the 22 Aug Ecology Letters ("Severe intergenerational reproductive conflict and the evolution of menopause," Lahdenperä et al.). A discussion of the work in this week's Nature points out that humans, killer whales and pilot whales are the only animals known to stop reproducing before they die. So of course the question is why.

Previous adaptive explanations include the "grandmother hypothesis" which suggests that reproductive fitness is higher when women stop having their own children but can then take care of their children's children. And the "mother hypothesis" which suggests that mothers gain more in terms of fitness by investing in their older children than by having new ones. Alternatively, the increased risk of dying in childbirth for older women might be the explanation.

The Shattuck Family;
Aaron Draper Shattuck,
Brooklyn Museum

The first two are basically inclusive fitness explanations: what I contribute to the group is good for me. (Yep, karma.) That's because relatives carry copies of many of your own gene variants, so helping them is like helping proliferate your own genes.

Inclusive fitness is a hot topic these days, because the basic argument is shaky in terms of how and when it actually applies, and we won't go into that here. But we will say that the mother and grandmother hypotheses have run into mathematical issues: fitness differences at late age, in populations with little survivorship to that age, aren't enough to explain the trait, given that grandmothers share 1/2 of their genes with their own children but only 1/4 with their grandchildren. The extra fitness, if it were even measurable and were due to some major gene conferring old-age survivorship, would be so low that drift would over-ride it. There just would not be much, if any, selection pressure to live to an old age for that evolutionary reason. These are well-known issues, though the behavior-evolution community has largely answered them by ignoring them.
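The relatedness arithmetic behind that objection can be made concrete with a back-of-the-envelope Hamilton's-rule check. The 1/2 and 1/4 coefficients come from the paragraph above; the function and the scenario numbers are mine, purely for illustration.

```python
# Hamilton's-rule sketch of the grandmother hypothesis' math problem.
# Relatedness to an own child is 1/2; to a grandchild, only 1/4. So each
# forgone birth must be repaid by MORE than two extra surviving
# grandchildren just to break even. Scenario numbers are hypothetical.

R_CHILD = 0.5
R_GRANDCHILD = 0.25

def grandmothering_pays(extra_grandchildren, forgone_children):
    # Hamilton's rule: indirect benefit x relatedness must exceed
    # direct cost x relatedness.
    benefit = extra_grandchildren * R_GRANDCHILD
    cost = forgone_children * R_CHILD
    return benefit > cost

print(grandmothering_pays(2, 1))  # False: two extra grandchildren only break even
print(grandmothering_pays(3, 1))  # True: it takes more than two per forgone child
```

And this is before factoring in how few women survived to grandmothering age in ancestral populations, which shrinks the already-thin benefit toward the noise floor of drift.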

In this new study, Lahdenperä et al. made use of a 200-year data set from the Lutheran Church in Finland. Their sample included 653 women born during 1702-1823, who gave birth to 4703 children, of whom 1736 then had 9164 children of their own between 1757 and 1908. But the authors drew a smaller sample to look at intergenerational reproductive overlap, defined as a grandmother giving birth within 2 years of the younger mother. They specified 2 years before or after a birth to a mother/mother-in-law or daughter/daughter-in-law because they assumed that was when the mothers/mothers-to-be would have the most conflict over resources. This left them with 209 mothers who gave birth to 613 offspring; of those offspring, 342 were mothers who gave birth to 824 offspring.

They performed a number of analyses to determine the effects of numerous variables on fitness (details in the paper, which is open access), controlling for age, sex of offspring, maternal age, birth intervals (number of years between births), as well as for "potential effects of maternal presence, living area, social class and birth cohort." Of course, sample sizes would have been trivial for detecting whether this potential opportunity for selection was realized in any genetic terms.

They then calculated inclusive fitness, a woman's total fitness counting her own surviving children and the fitness benefits she accrues from helping her relatives. Mind you, reproductive overlap was rare in pre-industrial Finland; only 6.6% of the 556 mothers who had at least 2 births gave birth within 2 years of a grandchild. Of these 30 or so women, offspring survivorship was not affected where mother/daughter pairings were concerned, but survivorship to age 15 of the overlap offspring in mother-in-law/daughter-in-law comparisons was found to be statistically significantly lower. "These results suggest that intergenerational reproductive conflict is low among related mothers and daughters, but is substantial between unrelated in-laws." Evolutionary non-sense, and hence nonsense!
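As a quick arithmetic check on the figures quoted above:

```python
# 6.6% of the 556 mothers with at least 2 births gave birth within
# 2 years of a grandchild; that works out to the "30 or so" women.
eligible_mothers = 556
overlap_fraction = 0.066
overlapping = eligible_mothers * overlap_fraction
print(round(overlapping, 1))  # about 37 women
```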
This study is from modern, even if pre-industrial, pre-contraceptive times, and is absolutely irrelevant to the argument it purports to support about human longevity, the necessary argument for the hypothesis (because only if you live longer can you take care of your grandchildren). It is absolutely irrelevant to the evolutionary conditions that must have prevailed in Africa more than 100,000 years ago (because all modern humans share the trait). And it is not just menopause that must be explained: Jim Wood and colleagues showed more than 10 years ago that there is no specific issue about human menopausal age; the process is essentially the same in humans as in mice. Beyond that, of course, not many people survived to experience menopause, or, put another way, to provide selective pressure not to stop ovulating.

The picture is even worse. Not too many years ago, a story appeared in The Lancet showing that it is grandparents who are economic burdens on their grandchildren, quite the other way round from this study and the evolutionary hypothesis. Should it not have been at least as true in hunter-gatherer times that those too feeble to care for themselves burdened the resources of their grandchildren--even if the former occasionally did some baby-sitting? Since very-elders were rare, this would seem comparably, if not much more, the story. But even here, the Finnish story is from modern times, and wholly irrelevant to any evolutionary speculations.

This is a fanciful evolutionary hypothesis, and cute and consistent with relentless Darwinian selectionism as it is, it is non-sense when it comes to any actual evolutionary evidence, for the reasons given above. The urge to explain the evolution of menopause is beloved of Darwinian determinists, but it has never had firm legs to stand on. Genetic causal arguments for complex traits like survival and menopause are so difficult to support, specific adaptive evidence of this sort so difficult to find in genomes, and the relevant energy and resource expenditure arguments so vague, that really, this simply makes no scientific sense.

Thursday, August 23, 2012

We've posted many times about the problems we face today in dealing with multifactorial causation. In metaphoric terms, we want to find causes that satisfy a statistical criterion of 'significance', by using some test, often some probability, p, of the unusualness of the result, that points to causation and that we can symbolically refer to as a p-value.

This applies to human genetics and the fashionable 'omics' approach, and to much else in biology. One thing we talked about before and recently is the hypothesis that rare variants cause human trait variation in the sense of the difference between cases and controls. Some investigators have been arguing that rare variants with strong effect, rather than common variants, account for a substantial fraction of disease (combinations of variants, some of them rare, each with small effects, is another version of the rare-variant arguments).

But rare variants present a problem, which is that you don't see them often enough for statistical significance to be achieved. Yet they may be causal. We recently noted that finding the same rare variant in affected family members is one possible way to identify them where significance is less of an overwhelming requirement. Our last couple of posts deal with this subject.

Two back-to-back papers in the August 10 American Journal of Human Genetics are of interest here, because of what they confirm about this problem. These are two reports from David Goldstein's lab, both large-scale searches for genetic causation, one of idiopathic generalized epilepsy and the other of schizophrenia (both open access). Goldstein has argued for some time that genomewide association studies (GWAS) aren't finding genes with large effects because most complex diseases are caused by rare variants, with small effects. They don't reach significance, though they're real causes (one thinks): we're caught in the p-patch!

Idiopathic generalized epilepsy
Idiopathic generalized epilepsy (IGE)--'idiopathic' meaning that the cause is not known--is a complex disease that, like many such diseases, is highly heritable, but its genetic architecture has been difficult to parse. According to the paper, rare copy number variants have been found to explain the disorder in only 3% of affected individuals. So Goldstein's purpose was to test whether rare variants with moderate effect could be found to explain IGE.

The group compared the exomes -- all the exons, DNA coding regions -- of 118 people with IGE with those of 242 controls, and found no variants significantly associated with the disorder. They then looked at almost 4000 variants that they considered to be candidates for epilepsy susceptibility and genotyped 878 cases and 1830 controls for these variants, with no statistically significant finding.

They report that close to half of these variants were found only in cases, which suggested to them that at least some of these must be genuine risk factors. However, the high heterogeneity of epilepsy disorders means that any single variant will be difficult to find, and/or that single-nucleotide variants have small effects. E.g., they estimate that the variant they observed most frequently here accounts for 0.6% of the cases of IGE in this study, if it indeed turns out to be causal, and this is the ballpark figure for causal variants they've identified for other complex diseases. And a recent study of epilepsy published in Cell by a group at Baylor compared cases to controls across all exons, and found potentially pathologic variants statistically as often in controls as in cases.
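To see why case-only variants like these can't reach significance, consider a hypothetical variant carried by a handful of cases and no controls, at the study's step-two sample sizes. A sketch using a one-sided Fisher's exact test; the carrier counts are invented for illustration and the paper's own analysis methods may differ:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact p for the 2x2 table [[a, b], [c, d]]:
    probability of a count 'a' at least this large under independence."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    return sum(comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)
               for k in range(a, min(row1, col1) + 1))

# Hypothetical rare variant: 3 carriers among 878 cases, 0 among 1830 controls.
cases, controls = 878, 1830
p = fisher_one_sided(3, cases - 3, 0, controls)
print(f"p = {p:.4f}")  # nominally ~0.03

# With ~4000 candidate variants tested, the Bonferroni-corrected threshold
# is far stricter, so even an all-case variant fails to reach significance.
print("significant after correction:", p < 0.05 / 4000)
```

The p-value looks nominally interesting but evaporates under multiple-testing correction: the p-patch in miniature.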

The current paper concludes that "moderately rare variants with intermediate effects ("goldilocks alleles") do not play a major role in the risk of IGE." Current methods are not adequate for detecting variants with very small effects, even when they exist. The epilepsies are considered to be channelopathies, disorders in which an ion channel disruption plays a major part. Thus, it has been assumed that mutations in ion channel genes would be found to be causal, but the list of candidate genes identified by these authors is not enriched for such genes, suggesting that "the pathophysiology governing epilepsy might be far more complex than simply a disorder of disrupted ion channels..."

Finally, the authors conclude that results from small studies must be treated with caution as they can't provide comprehensive lists of candidate variants. But studies large enough to detect variants at a frequency of, say, 0.06%, as are some of the variants in this study, are essentially impossible. Such variants, they say, "will probably only be securely implicated through gene-based association analyses in large sample sizes and, where available, cosegregation analyses within multiplex families."

Schizophrenia
Schizophrenia is another complex trait with high heritability, high phenotypic heterogeneity, and a low success rate with respect to identifying genetic risk factors. As with most traits, GWAS have identified some genes with very low effect, but not always replicably. Again, the question is whether the causal variants are moderately rare but identifiable in large studies, or so heterogeneous and rare as to remain hidden with current large-population based methods.

In the study reported in the AJHG, Goldstein's group followed the same 2-step analysis as described above for IGE, ultimately assessing selected variants in 2,617 cases and 1,800 controls. No single variant was statistically significant, though, again, they identified case-specific variants, some of which may actually be causal. They conclude that risk of schizophrenia is unlikely to be due to moderately rare variants with moderate effect, and that "multiple rarer genetic variants must contribute substantially to the predisposition to schizophrenia, suggesting that both very large sample sizes and gene-based association tests will be required for securely identifying genetic risk factors."

In essence, either this is polygenic control, in which each case is due to some combination of large numbers of individually weak, mainly rare, contributing variants, or individually strong variants exist but are so rare that we may struggle to collect enough samples. Follow-up or family studies that find many different variants in the same gene, where the gene's function seems plausible for the trait, could help. But it could be that there aren't enough humans on earth to achieve significance in the statistical sense...and that in important ways means the variant or gene isn't 'significant' in the public health or clinical setting either: approaches to aggregate causation may be needed. A way to escape from the p-patch. We think so, at least, as we've said many times here before.

Wednesday, August 22, 2012

…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

We'd like to suggest that Borges' short story can be aptly applied to the current state of disease prediction. Fifteen years ago or so we were being told that once we had the human genome (HG) sequenced we'd be able to predict the diseases people were going to get, prevent them, and everyone would live to older ages than we'd ever attained before. Aside from the questionable ethics of enabling such a demographic catastrophe, not to mention the idea that "everyone" would surely be an exclusive club, this promise is not much closer to realization now than in pre-HG days.

The first HG sequence, such as it was, was published in 2001. Since then the promises have been honed a bit--ok, so the sequence itself wasn't going to bring us as close to immortality as we'd hoped, but the Common Disease Common Variant project would. That was the theory that was used to justify the HapMap project, to provide resources to use case-control comparisons to find causal variants; then we'd have the data in hand for disease prediction and prevention. That project was itself fine-tuned and scaled-up over the years, eventually bringing us genomewide association studies (GWAS) which, depending on who you ask, are either justifiably dead because they're mainly finding genes with very small effects, or alive and well because there have been some successful studies (macular degeneration studies are always cited) and if we just fine-tune the method some more it will really work. And think what we'll be able to do with more whole genomes.

The -omics boom was being born. This is the era of 'hypothesis free' approaches. When we don't know the cause or can't develop useful actual hypotheses, our 'hypothesis' is just that some element in the realm we're searching has causal effects. The genome was the first such realm, and the idea was that the trait had to have some genetic cause and if we blindly search the entire genome it must be there, and so we'll find it (or instead of 'it', some tractable few numbers of such causal sites).

Genomics was driven by increasing technology and was addictive, because, it is not too cynical to say, it was thought-free, meat-grinder, factory science. It was lucrative, did indeed teach us a lot about what genes and genomes do, and found a modest number of important causal genes. Its success, at least in the fashion and funding senses, understandably spawned other hypothesis-free blind technological approaches, cashing in on the cachet of the 'omics' word and its rejection of the need for actual prior hypotheses to design studies: nutriomics, connectomics, metabolomics, microbiomics, immunomics, epigenomics, and more. How much of this was because the same people who were promising us that successful disease prediction with genetics was right around the corner realized that this just wasn't true, and needed to figure out ways to keep their labs running we can't say, but we certainly are a fad-following, money-following research culture and we know this is part of the story. To be fair, when other approaches hadn't solved any of the problems, there was natural appeal to a thought-free, safely factory-like turn. In any case, many of the same people who were gung-ho about genetics are now equally gung-ho about the promise of the -omics boom to bring us disease prediction and prevention that will really work this time.

The current interest in the -omics of supercentenarians, in order to figure out how they lived to their ripe old ages and thus how we can live to 120, is, we think, an example of this misguided fad. One basic assumption of this work is that every cause is individually identifiable, predictable and replicable. This is in fact true for causes with large effects--Mendelian diseases, e.g., or point-source infections like cholera or malaria and so on--but there are many paths to heart disease or stroke. When everyone's genome is unique and causes are many and variable, each combination of environmental and genetic factors will too often be extremely rare if not singular, and thus next to impossible to identify with current statistical-sampling based methods. The idea that every cause can be identified is a reductionist approach to disease akin to the reductionist approach to evolution, which requires every trait to have an adaptive reason for having evolved when in fact sometimes it's just chance.

But, once we venture into the quest to find environmental factors that influence longevity, we're necessarily identifying these factors retroactively, if they are even identifiable, and yet none of us is going to live in the past. Future environments are unpredictable. So, again, unless a factor has large effects--heavy radiation exposure, infectious agents, toxins, e.g.--it's unlikely to be useful in predicting individual cases of disease.

We can see the issues by the proliferation of ever-more 'omics' approaches. Each omics-community advocates its realm as if it is the, or at least the critical, one. Essentially, we always add, but rarely reduce, the number of potential causes of the traits in the lives of individuals. This adds to the combinatorial realm--number of possible combinations of factors (and their intensity)--through which we must search. More causes, inevitably individually rare, means that to show that a combination is causal it has to be seen enough times. That means ever larger samples, because 'seen enough times' means to enable us to rule out chance as the explanation for the association between the combination of risks and the outcome. But when there are more reasonably plausible combinations than grains of sand on the earth's beaches (this is no exaggeration--it's if anything an understatement), there aren't enough people to get such results. And subsequent generations will have different people with different combinations of risk factors.
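The grains-of-sand comparison is easy to check. A tiny sketch, assuming for illustration a round number of binary (present/absent) risk factors:

```python
# Even a modest number of binary risk factors yields more possible exposure
# profiles than the oft-quoted ~7.5e18 grains of sand on Earth's beaches.
n_factors = 100                  # assumed candidate factors, each present/absent
profiles = 2 ** n_factors        # every possible combination
print(f"{profiles:.3e} possible risk-factor combinations")
print("exceeds grains of sand:", profiles > 7.5e18)
```

And 100 factors is conservative once several -omics realms, each with many measured variables, are stacked together.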

We certainly wouldn't argue with the idea that what we eventually succumb to is likely to be the result of multiple -omics, that is, a combination of factors. But we do question the idea that they will be identifiable, or useful in prediction, which is presumably the point of all this work. The current interest in documenting every possible factor that might have an effect on health and longevity is bringing us closer and closer to Borges' map of the Empire.

Tuesday, August 21, 2012

We are desperate to find a genetic cause, or at least a tractably small number of genetic causes of every trait we want to study. It's understandable that we want this kind of answer. Unfortunately, GWAS has not accounted for much of the heritability (estimated genetic fraction of causation) of most diseases and other similarly complex traits that have been studied, as we've often pointed out.

Rather than abandon the game, and especially since the dream of common variants having major effects on common disease (so they would be a usefully large market for Pharma to invest in targeting) is diminishing, we have had to become contortionists to try to find how or why genomics is still the way to approach such traits.

Our approach to this question is statistical, and hence is based on repeated observation of enough instances of a variant for it to achieve statistical 'significance' in our data, whether or not its effect is of enough importance for countermeasures to be developed against it. But this means that, in any practical sense, we can't get large enough samples to detect the effects of very rare variants. We need other approaches.

One is to track variants in key gene regions among family members, looking for correlations between the presence of the variant and that of disease. How effective this will be depends on whether we know enough about the trait, or about genes, to find the variants that track in this way. If, doing this, we find enough different variants in the same gene in different families, that's strong evidence.

There are two ways, however, for rare variants to work. One, as just described, is for the variant to have a major effect all on its own. That could be detectable. But if combinations of many different rare variants are required, the variants coming from a number of genes and many different combinations having similar effects, this method may not work well. Unfortunately, there are theoretical reasons to think this will likely be the case.

Recently there has been a story in Science News about the number of rare variants that we each carry around. The story summarizes various recent papers, citing the authors. The following graph shows the results of various studies of genome sequencing of different individuals:

This shows that each of us carries quite a few variants. The estimate is that there is about one variant per 1000 sites, if you compare both instances of the genome in a given person (or any two randomly chosen copies), which means about 3.1 million variants per pair compared. But the more people you look at the larger the number of sites that you'll see varying (even if the frequency of the rarer variant at the site is increasingly lower).
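The arithmetic behind that figure is simple enough to verify:

```python
# ~1 heterozygous difference per 1000 sites, over a ~3.1-billion-base genome,
# gives roughly 3.1 million variant sites between any two genome copies.
genome_size = 3.1e9      # base pairs
het_rate = 1 / 1000      # differences per site between two copies
expected_variants = genome_size * het_rate
print(f"~{expected_variants:.1e} variants per pair of genome copies")
```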

Obviously, in a huge population, almost any site will vary, and if lots of sites can potentially contribute to a disease, there will be lots of instances of 'causal' variants per gene, but these won't be detectable by group studies where one needs statistical association between the variant and the outcome (again, because statistical significance can't be achieved with very rare observations in this kind of study design).

The story was about disease hunting, which seems to be the international obsession (or, more accurately perhaps, rationale for funding the work). However, sites contributing to a trait's variation today will also potentially contribute to its evolution. Thus, in a subtle way too complex to go into here, hunting for the genes or variants that are responsible for the evolution of a trait is going to be very challenging, to say the least. It's hard enough to explain the selective (or chance) reasons for the trait's presence, much less what genes were responsible for its evolution.

The story also cites work by Andy Clark and Alon Keinan, who have pointed out that the very rapid expansion of the human species in the 10,000 years since agriculture has generated a massive number of rare to very rare to exceedingly rare variants. In a statistical sense, each gene lineage present at the beginning of that time has a million descendants today. This is not new or speculative theory but simply the consequence of the sequence nature of genes: long strings of nucleotides mean many places where a nucleotide can change. Even if any given change is very rare, the genome and the number of people born each generation are large. The variation is being found now that we can sequence at high scale, as reports such as those mentioned here clearly show.
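An order-of-magnitude sketch of the mutational input side of this argument; the per-newborn mutation count and annual births are round ballpark figures we're assuming for illustration:

```python
# New variation enters the population every generation simply because the
# genome is long and many people are born. (Round, assumed numbers.)
de_novo_per_birth = 70     # a commonly used ballpark for new point mutations per newborn
births_per_year = 1.3e8    # rough current global births
new_variants_per_year = de_novo_per_birth * births_per_year
print(f"~{new_variants_per_year:.1e} brand-new mutations per year, species-wide")
```

Billions of new mutations per year, nearly all of them vanishingly rare: the sea of nearly unique variation described below.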

One sobering implication is that if we are concerned with the ways one can get a common disease or trait (behavior, morphology, or whatever, normal or not), then we face trying to work out this sea of nearly unique variation. It could be a hopeless task!

However, comparing close species with and without the trait in a sense aggregates the results of countless variants, genes and individuals over countless generations. In a subtle and statistically detectable way this could point to responsible genes, because the sample size that generated the result over time might leave enough evidence. Some methods to find such evidence are available (one test is called the McDonald-Kreitman test, after its developers), though so far they are statistically rather weak for close species. Perhaps creative thinking will lead to new and better ideas.
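For the curious, the McDonald-Kreitman logic can be sketched in a few lines. It compares nonsynonymous (amino-acid-changing) and synonymous changes that are polymorphic within a species against those fixed between close species; the counts below are illustrative, of the kind used in the classic Drosophila Adh analysis, not from anything discussed here:

```python
# Sketch of a McDonald-Kreitman-style 2x2 comparison (illustrative counts).
poly_N, poly_S = 2, 42     # nonsynonymous / synonymous polymorphisms within species
fixed_N, fixed_S = 7, 17   # nonsynonymous / synonymous fixed differences between species

# Under strict neutrality the N:S ratio should be the same within and
# between species. The neutrality index summarizes the departure:
NI = (poly_N / poly_S) / (fixed_N / fixed_S)
print(f"neutrality index = {NI:.2f}")  # < 1 suggests an excess of adaptive fixations
```

Aggregating over a whole gene and long divergence times is exactly what gives the test what power it has, though, as noted, that power is limited for very close species.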

Whether approaching disease in this way is the best thing to do is a separate question from how it can be done in practice, if that is what people decide, as they currently are deciding, they need to do.

Monday, August 20, 2012

An op/ed piece in Sunday's New York Times by Peter Hotez, dean of the National School of Tropical Medicine at Baylor College of Medicine, entitled "Tropical Diseases: The New Plague of Poverty" is a sobering reminder that for a lot of people the idea that genetics is going to improve their health would be laughable if genetics weren't taking so much money from things that actually would improve their health.

Genetics under Francis Collins as director of the US National Institutes of Health has fared as well as many of us predicted it would when he was chosen to lead the Institutes. Genetics gets more NIH funding than any other category except for clinical research, having received more than $7 billion in 2011. Infectious diseases, on the other hand, got half that. Yet they are the diseases with real, strong, classically simple and targetable cause. And they may affect more people in reality than do genetic diseases, as many diseases assumed to have genetic causes only do so in theory.

Hotez writes that 2.6 million children in the US "are living in households with incomes of less than $2 per person per day, a benchmark more often applied to developing countries. An additional 20 million Americans live in extreme poverty. In the Gulf Coast states of Louisiana, Mississippi and Alabama, poverty rates are near 20 percent." The rate is nearly 30 percent in parts of Texas. "In these places, the Gini coefficient, a measure of inequality, ranks as high as in some sub-Saharan African countries."
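The Gini coefficient cited there has a compact definition: half the mean absolute difference between all pairs of incomes, divided by the mean income. A minimal sketch with toy data:

```python
def gini(incomes):
    """Gini coefficient: half the mean absolute pairwise difference,
    normalized by the mean. 0 = perfect equality, toward 1 = maximal inequality."""
    n = len(incomes)
    mean = sum(incomes) / n
    pairwise = sum(abs(x - y) for x in incomes for y in incomes)
    return pairwise / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))              # 0.0: everyone equal
print(round(gini([0, 0, 0, 10]), 2))   # 0.75: one person has everything
```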

Most of us healthy enough to think about how our genes might contribute to our eventual ill-health at advanced age rarely think about infectious diseases, unless it's the flu, HIV/AIDS or the celebrity disease of the moment, West Nile. But even in the US neglected tropical diseases continue to take their toll. As Hotez writes:

Outbreaks of dengue fever, a mosquito-transmitted viral infection that is endemic to Mexico and Central America, have been reported in South Texas. Then there is cysticercosis, a parasitic infection caused by a larval pork tapeworm that leads to seizures and epilepsy; toxocariasis, another parasitic infection that causes asthma and neurological problems; cutaneous leishmaniasis, a disfiguring skin infection transmitted by sand flies; and murine typhus, a bacterial infection transmitted by fleas and often linked to rodent infestations.

Among the more frightening is Chagas disease. Transmitted by a “kissing bug” that resembles a cockroach but with the ability to feed on human blood, it is a leading cause of heart failure and sudden death throughout Latin America. It is an especially virulent scourge among pregnant women, who can pass the disease on to their babies. Just last month, the first case of congenital Chagas disease in the United States was reported.

These are, most likely, the most important diseases you’ve never heard of.

Hotez says that one of the most important reasons that these diseases are still afflicting people in the US is that these are the people who can't afford medical care, so their disease goes unrecognized and unreported. He proposes a series of steps that might help eliminate these diseases, however. First, surveillance programs are required to enable a more accurate estimate of incidence and prevalence. Then, better diagnostic tests, and safer and better drugs, and new vaccines, although there's little incentive for pharmaceutical companies to invest in these. What we need fundamentally, Hotez says, is to turn our attention once again to fighting poverty in America.

A story in the NYT Sunday Magazine does just that, pointing out that President Obama learned from his community organizing days in Chicago that eliminating poverty was going to take political power. This is what impelled him to law school, and then into politics. He drew attention to poverty frequently during his first campaign for president, detailing his plans for cutting it at least in half during his presidency "because we can't afford not to." The piece argues that, despite the economic downturn that Obama inherited, he has done more than any president since Lyndon Johnson to at least prevent poverty from getting worse.

But there is no discussion of poverty by either candidate this time around. Why that is can be debated, and that isn't our point. Our point is something we've said many times, and that is that billions of dollars are being poured into the assumption that chronic diseases have identifiable genetic causes, and are preventable, but the payoff for this investment has, at best, reached a plateau. The money could be more fruitfully spent, in terms of healthy person-years gained, on vaccines, treatments for infectious diseases, health, nutrition and sports education in elementary schools, and so on, rather than on the rather elitist idea that genetics research is going to bring us all personalized medicine and longer, healthier lives.

When so many people don't even have access to impersonalized medicine, and so many conditions that we currently do know how to prevent and treat aren't being addressed, it's arguable that the $7+ billion going to genetics is another very real indicator of the economic inequality that affects quality of life for so many in the US. It's middle class welfare, largely for the science and university social class itself, relatively remotely related to public health.

There are, of course, many truly genetic diseases whose direct causation is comparably tractable to that of infectious diseases, and we have often argued that these diseases should receive much more funding. If the politics were different, we wouldn't be investing in the genetics of chronic lifestyle diseases at the expense of many healthy person-years for a lot of people, for whom the possibility that some disease they might get someday could have a genetic component is so remote as to be irrelevant given the health issues they face today. In most cases this is so even if all of the estimated genetic component of risk were targeted by means based on genetic risk. When it's clear to so many of us, even among those benefiting from the investment in genetics, that a lot of the research is pie-in-the-sky anyway, with little foreseeable likelihood of payoff, it does make you realize that economic inequality isn't just something to blame on George W. Bush.

Friday, August 17, 2012

Sometimes it seems that we're posting the same story over and over again. Here are some new study results, here's what the authors say they mean, and here's what we think they really mean. Usually a lot less than the authors report. Just this week, does aspirin prevent cancer? Should we eat eggs? And a post asking simply how we can tell if results are credible. If you read us regularly you know we don't just pick on epidemiology. We give genetics the same treatment -- why should we believe any GWAS results, e.g.? And should we expect to find genes 'for' most diseases? Or behaviors? The same for all those adaptive stories that 'explain' the reason some trait evolved. And Holly is equally circumspect about claims in paleoanthropology, which of course is why we love her posts!

Is it just being curmudgeonly to ask these questions? Or is it that where some see irreducible complexity others see a simple explanation that actually works?

An isomorphic problem
The important thing about these various issues in modern science is that, from the point of view of gaining knowledge about the causal world, they are isomorphic problems. They have similar characteristics and are (currently) addressed by approaches with similar logic in terms of study design, and by similar assumptions on which study design, data collection, and methods of analysis are all based. The similarities in underlying causal structure include the following:

1. Many different factors contribute causally to the outcome

2. Most of the individual factors contribute only a small amount

3. The effect of a given factor depends in various ways on the other factors in the individual

4. The frequency of exposure to the factors varies greatly among individuals

5. Sampling conditions (how we get the data we use to identify causal elements) vary or can't really be standardized

6. The conditions change all the time

7. The evidence for causation is often indirect (esp. in reconstructing evolution)

8. We have no underlying theory that is adequate to the task, and so we use 'internal' criteria

These days, we use the word 'complexity' to describe such situations. That word is often used in a way that seems to imply wisdom or even understanding on the part of those who use it, so it has become a professionalized flash-word often with little content.

Often, people use the word, but persist in applying enumerative, reductionist approaches that we inherited over the past 400 years largely from the physical sciences (we've posted on this subject before). This is based essentially on the repeatability of experiments or situations. We try to identify individual causal elements and study them on their own. But if the nature of causation is the integrated effects of uniquely varying individuals, then only the individual strong (often rare) factors will be easily identified and characterized in this way.

Item #8 above is important. In physics we have strongly formal theory which yields precise predictions under given conditions. There is measurement error, and the predictions are sometimes probabilistic, but the probabilities involved and the statistics of analyzing error, were designed for such situations. We compare actual data to predictions from that externally derived theory. That is, we have a theory not derived from the data itself. It is critical to science that the theory is largely derived not just in our heads but from prior data. But it's external to new data that we use to test the theory's accuracy.

In the situations we are facing in genetics, evolution, biomedicine, and health, we have little similar theory; the predictions of what theory we do have are not precise, or its assumptions are too general. Even the statistical aspects of measurement error or probabilistic causation are not based on rigorously specified expectations from theory. Our theory is simply too vague at this stage. So what do we do?

We use internal test criteria. That is, we test the data against itself. We compare cases and controls, or different species of apes' skeletons, or different diets. We don't use serious theory to predict that so many eggs per day, or some specific genotype at many sites in the genome, will have a specific effect; we hypothesize only that there is some per-egg outcome. We don't know why, so we can't really test the idea that eggs really are causal, because we know there are many variables we just aren't adequately measuring or understanding. When we do find strong causal effects, which does happen and is the goal of this kind of research, then we can perhaps subsequently develop a real theoretical base for our ideas. But the track record of this approach is mixed.

This is also often called a hypothesis-free approach. For most of the glory period of science, the scientific method was specifically designed to force you to declare your idea in a controlled way, and then test it. But when this didn't work very well, as in the areas above, we adopted a hypothesis-free approach that allowed internal controls and tests: our 'hypothesis' is just that eggs do something; we don't have to specify how or why. In that sense, we are simply ignoring the rules of historically real science, and even boasting that we are doing science anyway, by collecting as much data as we can, as comprehensively as we can, in the hope that some truth will fall out.

The central tenet of science for the last 400 years has been the idea that a given cause will always produce the same effect. Even if the world is not deterministic, and the result will not be the same exact one, it will at least have some probability distribution specifying the relative frequency with which we'll observe a given outcome (like Heads vs Tails in coin-flipping). But we really don't even have such criteria in the problems we're writing about. Even when we try to replicate, we often don't get the same answer, and we do not have good explanations for that.
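One reason replications can disagree, even when the underlying causation is perfectly regular, is that samples differ in factors nobody measured. A toy sketch, with entirely invented numbers: suppose an exposure's effect depends on a hidden modifier, and two studies happen to draw samples in which that modifier is common and rare, respectively.

```python
import random

random.seed(1)

# Toy sketch (invented numbers): the same exposure, studied in two samples
# that differ in the frequency of an unmeasured modifier, yields two
# different apparent effects -- no stable coin-flip-like distribution.
def study(p_modifier, n=5000):
    """Average apparent effect of an exposure whose true effect depends on
    a hidden modifier carried by a fraction p_modifier of the sample."""
    effect_with_modifier, effect_without = 0.5, 0.0
    total = 0.0
    for _ in range(n):
        has_mod = random.random() < p_modifier
        total += effect_with_modifier if has_mod else effect_without
    return total / n

print(f"replication A (modifier in 70% of sample): {study(0.7):.2f}")
print(f"replication B (modifier in 20% of sample): {study(0.2):.2f}")
```

Each study is internally well behaved, yet the two estimates differ substantially, and neither study can explain why, because the modifier was never observed.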

When we're in this situation, of course we can expect to get the morass of internally inconsistent results that we see in these areas, and it's for the same basic epistemological reason! That is, the same reason relative to the logic of our study designs and testing in these very different circumstances (genetics, epidemiology, etc.). Yet that doesn't seem to slow down the machine that cranks out cranky results: our system is not designed to let us slow down to do that. We have to keep the funds coming in and the papers coming out.

And then of course there's cause #9. Most of us have some underlying ideology that shapes our interpretation of results.

This is a fault of both us and the system; we can't be faulted for Nature's complexity. The issues are much more--yes--complex than we've described here, but we think this captures the gist of the problem. Scientific methods are very good when we have a good theory, or when we are dealing with collections of identical objects (like oxygen or water molecules), but not when the objects and their behavior are not identical and we can't specify how they aren't. We all clearly see the problem. But we haven't yet developed an adequate way to deal with it.

Thursday, August 16, 2012

An aspirin a day...

Last week we blogged about the possibility that many chronic diseases have an infectious origin. Cancers, heart disease, diabetes, asthma and so on, rather than having a genetic cause (as the unlimited funding stream supporting the hunt for genes 'for' these diseases suggests), might be infectious instead. It wouldn't be the first time that an unexpected infectious origin was found for a common disease -- stomach ulcers were thought for decades to be due to stress, and the suggestion that they might instead have a bacterial etiology was laughed out of court. Until it was conclusively demonstrated to be the case.

Several studies now have suggested that aspirin taken daily in low doses might protect against some of the most common cancers, or prevent their recurrence, including a study just published in JNCI, the Journal of the National Cancer Institute, described in The Guardian. Researchers pooled the results of a variety of clinical trials of aspirin as a preventive for vascular events -- stroke and heart attack -- and found a significant reduction in cancer mortality. The relative risk in this sample of 100,000 people for those taking a daily low dose of aspirin for up to 11 years was 16% lower than in those not taking aspirin. This was a smaller effect than that of previous studies -- e.g., a paper in The Lancet in March, one in a series of reports by the same author, reported that cancer mortality among people taking aspirin for at least 3 years was reduced by a quarter.
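To make a figure like "16% lower" concrete, here is how a relative risk is computed from group counts. The counts below are invented purely for illustration, chosen to produce a ratio of about 0.84; they are not taken from the JNCI analysis.

```python
# Illustrative only: these counts are made up, not from the pooled trials.
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Ratio of the event rate in the exposed group to the unexposed group."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# Hypothetical counts: 420 cancer deaths among 50,000 aspirin takers,
# 500 among 50,000 non-takers.
rr = relative_risk(420, 50_000, 500, 50_000)
print(f"RR = {rr:.2f}, i.e. {1 - rr:.0%} lower cancer mortality with aspirin")
```

A relative risk of 1.0 would mean no difference between the groups; values below 1.0 indicate lower risk in the exposed group.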

The latest results are still significant, but the benefit isn't nearly as clear-cut as in earlier reports. An editorial accompanying the paper in JNCI discusses the possible reasons for the differing results and concludes that the latest estimates are probably conservative. How this will translate into clinical practice is yet to be determined. There is some risk to a daily dose of aspirin, even a low dose, because it is associated with internal bleeding. So, as long as the size of the effect on cancer mortality is still uncertain, doctors probably won't be recommending we all go on aspirin indefinitely.

But how?
If it is true, though, what's the mechanism? Aspirin is an NSAID, a non-steroidal anti-inflammatory drug, and its two major effects are to reduce inflammation and to inhibit blood clotting. Some have suggested that platelets (the cell fragments responsible for blood clotting) are associated with cancer, but we explore the inflammation angle here instead, because that lead seems more solid. If tumors are infectious in origin, reducing inflammation could conceivably reduce the tumor. A report in the May Current Biology, by Yi Feng et al., does suggest a link between cancer and inflammation, but the inflammation appears to be intrinsic to the tumor itself rather than due to an external infectious cause.

An association between the COX-2-PGE2 pathway and cancer progression is well established (COX-2 is a gene encoding an enzyme that synthesizes prostaglandins, hormone-like molecules involved in a diverse set of physiological processes such as uterine contraction and regulation of body temperature). COX-2 is also associated with inflammation and pain, which is why so many pain relievers, including aspirin, are COX-2 inhibitors. It is known that COX-2 is expressed in the initial stages of tumorigenesis by malignant epithelial cells as well as other associated cells, including macrophages, a component of the immune system.

Previous work by Feng et al. showed that innate immune cells were involved in the earliest stages of tumor proliferation. They now have shown that PGE2 is the signal that is responsible for this. They further show that blocking PGE2 synthesis by inhibiting COX2 expression slows tumor expansion, and suggest that this may be how aspirin, as a COX2 inhibitor, functions to slow tumor growth.

We have provided evidence here that a trophic inflammatory response is important for a transformed cell to grow at its inception and that PGE2 produced by innate immune cells via the COX-2 pathway is a key trophic factor for optimal growth of transformed cells at the earliest stages of tumor progression. Moreover, this trophic inflammatory response can be suppressed by the inhibition of PGE2 production via COX-2 inhibitors, which might explain why use of non-steroidal anti-inflammatory drugs (NSAIDs) can reduce cancer incidence.

If further work confirms that aspirin inhibits tumor growth, and that the mechanism is as proposed by Feng and colleagues, this upends the usual idea about cancer and inflammatory response. Rather than being provoked by infectious agents such as viruses or bacteria, the inflammation comes from the cells that initiate the cancer in the first place.

If that is the case, then cancer cells should show some mutations in inflammation-related genes that are not found in surrounding normal cells in the same person. That would indicate that the evolution of a particular lineage of the person's cells included such mutations, which allowed the cells to proliferate, eventually to the detriment of the person.

A kind of selection would prevent these variants from being in the germ line, because the embryo would develop misbehaving cells, and would die before ever being born. This is not 'natural' selection of the usual Darwinian kind, which is about competition. Instead, in the 'Mermaid's Tale' flavor, it is a failure of cooperation among developing cells within an individual. But the effect is similar: it removes harmful variation.

Of course, this is a serendipitous finding if it holds up, because we did not evolve eating aspirin (that is, the same compound, in plants). In the past, cancer got who cancer got. Now, we may be able to counteract this harmful somatic (body cell) evolution.

Our findings suggest that regular consumption of egg yolk should be avoided by persons at risk of cardiovascular disease.

What's the story? Other than that journalists get it wrong again. Or actually, some of it right, some of it wrong. The headline is literally correct, but the story avoids the nuances. Not surprisingly. So we'll try to fill in.

Knowing that the effect of dietary cholesterol, particularly eggs, is increasingly considered insignificant -- because the relationship between dietary cholesterol and serum cholesterol levels is not at all a linear one -- a group of Canadian researchers undertook to determine once and for all whether in fact eating eggs does increase serum cholesterol levels.

That was then

Eggs were first shown to increase serum cholesterol way back when we first were being told to watch our cholesterol intake, when the Framingham Heart Study showed us this something like 40 years ago. (Indeed, here's your handy heart attack risk calculator, based on data from that study.) So eggs became one of those guilty pleasures we consumed knowing we were knocking hours off our lives with every bite of that runny cholesterol-laden yolk. Same with that hunk of marbled steak we couldn't resist.

But then we were told six years ago or so to forget all that -- at least the eggs bit -- when a study by Christine Greene et al. of the effects of eggs on cholesterol showed that "most people's bodies handle the cholesterol from eggs in a way that is least likely to harm the heart" as described in a 2012 piece in ScienceNews. And in fact the more eggs you eat, the bigger the HDL and LDL lipoproteins you make, which is good because large LDLs (the 'bad' cholesterol) are less likely to enter artery walls and contribute to plaque, and large HDLs (the 'good' stuff) are better at transporting plaque-producing cholesterol out of the body. Eat more eggs! That is, if you aren't already at high risk of heart disease, which you've determined with the heart attack risk calculator in the paragraph above.

This is now

But now we're being told we should go back to an egg-free existence -- or the guilty egg indulgence of earlier years. At least according to The Atlantic. The Canadian study of 1200 people, based on lifestyle and dietary questionnaires and assessment of arterial plaque build-up, and including recall data on egg consumption and smoking, found that arterial plaque increased linearly after age 40, most steeply in smokers and next in people who consumed more than 3 eggs per week. But they, naturally, recommend further research, including more detailed dietary information (i.e., more reliable dietary information?), and in particular they want to account for the "possible confounders," waist circumference and exercise.

"Possible confounders," they call these? These are all factors that have been shown over and over again to be associated with heart disease risk, and they didn't include them? Not even exercise?! "Possible confounders" is the scientific way of saying "Even we don't believe this study!" Of course, it also diverts attention from what they really mean, which is that their study is of the additional risk, after all the known major causes are accounted for. Anyway, it's all based on dietary recall, which, as we've said numerous times before, is not a very accurate way to collect reliable data. So, really, should we believe any of the conclusions of this study?

It's complicated

But let's go back to that "most people" part of the sentence about bodies and how they handle cholesterol from eggs. That's the crucial bit here, as Greene's work and others have pointed out -- but The Atlantic did not. It seems that most bodies can handle the cholesterol from eggs just fine (unless you eat 42 a week, as reported (paywall) in Atherosclerosis this month -- cutting down to 6 a week brought the patient's cholesterol levels down to levels that no longer worried her doctors). People with risk factors like diabetes or existing heart disease tend to have smaller lipoproteins than most other people, which may indicate that they process dietary cholesterol in a way that can lead to arterial plaque, the risky consequence of excess cholesterol levels. Or, it may indicate that these diseases lead the body to process cholesterol differently. And Greene has found that some people are "hyper responders," which means that the pool of study subjects is heterogeneous and should be stratified by how they process cholesterol in any study of the effects of dietary cholesterol. But then, at least some of Greene's work has been funded by the American Egg Board, and it's not uncommon for industry-supported work to come out in favor of the industry. So yet more caveats (not proof, but issues to be aware of).

As the Canadian study itself points out, many studies have shown that dietary cholesterol, including from eggs, has no effect on blood levels; some that it raises some lipoproteins and not others; others that the effect depends on genetic background. In short, it hasn't been possible to issue blanket dietary advice that is true for everyone. Do we just ignore all of those results now? And indeed, the Canadian researchers themselves implicitly acknowledge that the effects of dietary cholesterol differ from person to person, in that they stop short of recommending that we all limit our egg consumption, suggesting this only for people who are already at high risk of heart disease.

So, we venture to say, as with all complex diseases, there's no one-size-fits-all answer here. It wouldn't be at all surprising if the cholesterol in eggs were actually protective against heart disease for some people, but risky for others. Population-level statistics, which are the basis of all recommendations about diet, are hard to interpret clinically.
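A small arithmetic sketch of why that matters clinically. The fractions and effect sizes below are invented for illustration only; the point is just that a modest population-average effect can mask opposite effects in subgroups, such as the "hyper responders" mentioned above.

```python
# Toy numbers (assumed, not from any study): eggs raise risk in a small
# "hyper-responder" subgroup but are slightly protective for everyone else.
frac_hyper = 0.25        # assumed fraction of hyper-responders
effect_hyper = 0.30      # assumed per-egg change in risk for them
effect_others = -0.05    # assumed slight protective effect for the rest

# The population average blends the two opposing subgroup effects.
population_avg = frac_hyper * effect_hyper + (1 - frac_hyper) * effect_others
print(f"population-average per-egg effect: {population_avg:+.3f}")
```

The small positive average hides both a subgroup that should perhaps avoid eggs and a majority for whom they may be harmless or beneficial, which is one reason unstratified studies of heterogeneous samples can keep contradicting one another.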

By the way, 50 years on, the Framingham study is still going, at a nicely funded level....
