Harriet Hall on Darwinian medicine: is it a useless endeavor?

In one of my very first posts on this site I questioned the value of “Darwinian medicine”—that area of inquiry that tries to explain disease and the behavior of pathogens as results of natural selection. Cold viruses, for example, are supposed to make you sneeze to facilitate their spreading, while malaria parasites make you prostrate so that mosquitoes can transmit them to the next host without being swatted. We shouldn’t dismiss these as idle speculations, for we know from the case of the zombie ants that very simple pathogens can do complicated things to their hosts to facilitate the parasites’ transmission.

This is a popular area of evolutionary study, but I’ve sometimes questioned its value in helping us treat disease. After all, how does knowing that we crave fats and sweets because they were valuable to our ancestors on the savanna help us deal with heart disease? I find the value of the discipline more in raising interesting questions about the evolution of pathogens than in really dealing with the problems of human health. Perhaps the malaria protist has indeed evolved to debilitate us as a way of spreading its genes. But those speculations are difficult to test, and I’m not sure how, if confirmed, they’d help us deal with the disease. Still, I can’t help thinking that these questions are fascinating, and could one day lead to medical payoffs. As we all know, pure science often has unexpected practical consequences.

In response to my questions about evolutionary medicine, David Hillis, an evolutionary biologist at the University of Texas at Austin, gave a number of practical applications of evolutionary medicine (see his list at the link above). Most of David’s examples involved tracking disease epidemics by using molecular markers or using similar markers to identify, say, the strain of influenza most likely to cause future outbreaks. And yes, these are useful contributions of evolutionary biology to medicine, though the flu method hasn’t worked all that well.

Darwinian medicine includes other fascinating speculations, like Margie Profet’s famous theory that morning sickness is a way to protect fetuses from ingestion of damaging toxins by the mother, and therefore doctors shouldn’t try to prevent that nausea lest the alleviation cause birth defects. I’m not sure where that’s led, though: do doctors now allow morning sickness to proceed untreated? (Profet’s personal story, by the way, is a sad one; you can read a précis here.)

Other studies have shown that fever might be adaptive: the body’s way of killing pathogens by raising its temperature to levels that kill infections. Work on lizards, which can cure themselves of infectious disease by basking in the sun and hence raising their body temperature, shows that preventing that basking allows the disease to persist. The adaptive strategy, then, would be for doctors not to try reducing non-dangerous fevers in their patients, but I’m not sure whether they do that.

So there are two ways to regard Darwinian medicine. First, as a way to frame evolutionary hypotheses about disease that might be testable. Second, as a way to cure disease. But even understanding the first won’t necessarily lead to the second. We know with certainty, for example, the molecular cause of sickle-cell anemia, and are nearly certain how the gene for that disease (a form of beta-hemoglobin with a unique mutation) came to be in such high frequencies (see below), but that knowledge hasn’t led to a cure or new advances in treatment. But in other cases doctors simply might be unaware of the potential value of Darwinian analysis, in which case they should be educated in those aspects of evolutionary medicine that promise real benefit.

In a recent post on Science-Based Medicine, “Do we need ‘evolutionary medicine’?”, Dr. Harriet Hall discusses the value of these endeavors. The impetus for her piece was reading the 1994 book Why We Get Sick: The New Science of Darwinian Medicine, by Randolph M. Nesse and George C. Williams. My own take on the book was that it was a fascinating read, and did help open up a new area of evolutionary thinking, though it’s early days to expect practical results. (Let me add here that Hall has been a tremendously important voice in the battle for scientific medicine versus quackery.)

Hall’s is a strange article in one respect: she argues that many evolutionary speculations are untestable or untested, and thus constitute useless “just-so stories,” yet at the same time claims that doctors have already incorporated evolutionary principles into their practice.

As an example of Hall’s dismissal of evolutionary explanations, here’s what she says about Profet’s Darwinian theory of morning sickness:

This is a testable prediction and there is some evidence to support it; but there is no way to prove that this is the true explanation or the only one. They suggest that suppressing morning sickness might increase the risk of congenital defects. But there is no evidence for that. They recommend that women “respect their nausea” and remember that it may be beneficial. (It would likely decrease your survival prospects if you said that to your wife while she was throwing up for the umpteenth time!) They admit that relieving suffering is important too, but they recommend that any anti-nausea medicine should be carefully evaluated to make sure it doesn’t cause any harm. Of course, we already do that for all medications used during pregnancy. I fail to see how evolutionary thinking adds anything to the care of pregnant women. In fact, I can see how it might result in unnecessary worry and suffering.

But of course one could in principle test that explanation—if not in humans then in animals. (Has that been done? If not, somebody should do it.) That’s the only way to truly show that anti-nausea medication “doesn’t cause any harm.” And Hall says this about fever:

Should we treat fevers? Fever probably evolved as a defense mechanism: it may do something towards helping fight off the infection. Evolutionary thinking makes us ask why we developed this adaptation and whether it is wise to interfere. But do we need evolutionary thinking for this? Doctors have already questioned the need to lower a fever, recognizing that it is not the fever but the infection that needs to be treated, that fever itself doesn’t do much harm, and that lowering a fever might have adverse effects in some cases. I’ve read many discussions of those points, and nowhere did they mention wondering about why we evolved to have fevers. I don’t see that evolutionary thinking adds anything useful to the discussion. Fever is what it is, and we can study it and deal with it without speculating about how it came to be that way.

Yes, we do need evolutionary thinking for this, because it makes more doctors question the value of lowering fevers. When you have a cold, treating the infection is useless, so maybe we should contemplate not taking fever-reducing medicine. Such studies could be done (indeed, perhaps they have been—this is not my area of study!), and one would predict that colds would last longer in those individuals who didn’t try to reduce their fevers. Low fever may not be harmful, but it’s still debilitating, and we need to know whether or not to treat it beyond wiping out the underlying infection.

But then Hall reverses her argument and says that doctors already appreciate the value of evolutionary thinking:

Evolutionary thinking is already an integral part of medicine and an essential element of all biology. E. O. Wilson’s description of medicine as “one of the last unconquered provinces” simply is not true. Doctors regularly think about evolution and study its effects. The evolution of drug resistance in bacteria is the best-known example but there are many others. For instance, we think that sickle-cell anemia has persisted because it only affects those who inherit the gene from both parents, while those with only one copy of the gene (heterozygotes) have an increased resistance to malaria. G6PD deficiency causes hemolytic anemia but also offers protection against malaria.

But consider this: malaria is a credible explanation, but we can’t prove that it is the real one. Some other factor that we have not considered might be the true explanation, and malaria resistance might be a coincidence. And the malaria explanation is intellectually satisfying to those who ask “why” but it has had no practical impact on diagnosis or treatment.

I’m not sure that I agree with Hall’s characterization of doctors as deeply educated in evolution and its usefulness in medicine. Certainly drug resistance in bacteria is something that most doctors know about (but many still give in to importuning patients and prescribe unneeded antibiotics), but that’s a rare example. Beyond drug resistance, I doubt that most doctors know a lot about the application of evolution to medicine.

And Hall’s dismissal of the “malaria” hypothesis for sickle-cell anemia is unfortunate: there is lots of evidence that being a heterozygote for the sickle-cell gene helps stave off malaria. This includes studies of the direct fitness of carriers. “Normal” individuals carrying two copies of the nonmutant gene have about 85% of the reproductive output of heterozygotes carrying only one copy, because those homozygotes are at higher risk of lethal malaria. Individuals with two copies of the sickle-cell gene usually die before reproducing, because they have the illness, and their evolutionary fitness is zero. When carriers of a single copy have higher evolutionary fitness than either normal or mutant homozygotes, this heterozygote advantage (we geneticists call it “overdominance” or “heterosis”) keeps the disease gene at fairly high frequency in the population. This is one example of how evolution can’t produce absolute perfection, and indeed can lead to considerable suffering. (A beneficent God would never have allowed that mutation to occur.) Too, in U.S. blacks, who are not subject to the selective pressures of malaria, the frequency of the sickle-cell gene has decreased, just as evolutionary theory predicts (there’s also some reduction via intermarriage with whites). Finally, there is the remarkable concordance between the distribution of malaria in Africa and the distribution of the sickle-cell gene.
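The fitness figures above can be plugged into the standard overdominance model to predict how common the sickle allele should be. Here is a minimal sketch in Python, using the text’s 85% figure for normal homozygotes and zero fitness for sickle homozygotes; the iteration is textbook selection dynamics, not data:

```python
# Equilibrium frequency of the sickle-cell allele under heterozygote
# advantage (overdominance). Relative fitnesses, normalized to the
# heterozygote: w_AA = 0.85 (normal homozygote, vulnerable to malaria),
# w_AS = 1.0 (carrier), w_SS = 0.0 (sickle-cell anemia, lethal).

def next_allele_freq(q, w_AA=0.85, w_AS=1.0, w_SS=0.0):
    """One generation of selection; q is the sickle-allele frequency."""
    p = 1.0 - q
    w_bar = p * p * w_AA + 2 * p * q * w_AS + q * q * w_SS  # mean fitness
    return (p * q * w_AS + q * q * w_SS) / w_bar            # q after selection

# Iterate from a low starting frequency until the change is negligible.
q = 0.01
for _ in range(10_000):
    q_new = next_allele_freq(q)
    if abs(q_new - q) < 1e-12:
        break
    q = q_new

# Closed form: with selection coefficients s = 1 - w_AA and t = 1 - w_SS,
# the stable equilibrium is q* = s / (s + t).
s, t = 1 - 0.85, 1 - 0.0
print(round(q, 4), round(s / (s + t), 4))  # both print as 0.1304
```

The predicted equilibrium, roughly 13%, is in the range of sickle-allele frequencies actually observed in malarial regions of Africa, which is part of why the heterozygote-advantage explanation is so well supported.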

When Hall says that we can’t “prove” that malaria is the evolutionary cause of high frequencies of sickle-cell anemia, she’s asking for too much. Science can’t “prove” anything. But the evidence is strong that the evolutionary theory is correct. Of course, as I noted above, in this case it hasn’t helped us treat the disease.

Hall concludes again that evolutionary thinking is deeply ingrained in doctors, but also that this thinking hasn’t helped us much:

I’m sorry, but I just don’t “get it.” Am I missing something? Am I just a contrary curmudgeon? Evolution is already an essential part of all science. Medical scientists already understand evolution and apply its principles appropriately. I didn’t see a single example in their book of any significant practical development in medical care that would not have occurred in the general course of medical science as it is commonly practiced, without any need for a separate discipline of “Darwinian medicine.” Evolutionary explanations, whether true or speculative, may satisfy our wish to understand “why,” but I can’t see that they have much objective usefulness. Instead, they have produced at least one major annoyance: a movement that preaches to us how we ought to revert to the supposed diet of our ancestors (the Cave Man Diet, etc.).

The answer to Hall’s question is “yes,” she is being somewhat of a curmudgeon. She makes some good points in her piece—the most important being that understanding the evolutionary basis of disease or pathogen behavior may not help us find cures—but I think she’s wrong in believing that most doctors are deeply ingrained with principles of evolution. Many doctors haven’t taken a course in evolution in college, and certainly don’t learn about it in medical school. (Readers who are physicians may want to weigh in here.) And I’m pretty sure that some evolutionary hypotheses will lead to testable treatments that might not have arisen without an evolutionary viewpoint. Just because an idea remains a speculation rather than graduating to a full-blown theory with some empirical support does not mean that we’ll never find a way to test it. I haven’t given up on Darwinian medicine.

75 Comments

Alas, not true that you can assume a medical doctor has learnt about, or understands, evolution. Don McLeroy, leading Creationist (or should one here say Destructionist?) on the Texas Board of Education, is a Doctor of Dental Surgery, and the students who walked out on Steve Jones in London when he talked about evolution were studying medicine. And Dr David Galloway, Vice President of the Royal College of Physicians and Surgeons, Glasgow, Scotland, is a flat-out Creationist, and VP of Glasgow’s own Centre for Intelligent Design.

One area where an evolutionary viewpoint is likely to lead to greater understanding (and, one hopes, to better treatments) is cancer research and treatment. I think many non-scientists ask the question “Why did cancer evolve to exist?” But the more relevant question is “How do cancer cells continue to evolve in a person once normal cells have evolved into cancer cells?” Both Carl Zimmer and Ed Yong have written excellent posts about recent findings in this area. Popular articles about cancer treatment have usually assumed that if we can find some kind of marker in, say, a woman’s breast cancer, a drug that targets that marker can be developed and, voila, cure. Recent research on kidney cancer has found that taking multiple samples of the cancer cells in a single patient reveals cells having different markers, which makes sense to anyone who thinks about evolution. In the short term, unfortunately, this will probably explain more about why treatment fails than about what will effect a cure. But longer term, understanding this fact should help in the development of more effective treatments, even if those treatments are more about management than about absolute cure.

The current model for cancer is somatic cell evolution. It’s progressed to the point where it was tested by sequencing the basal genome and end stage cancer genomes. Not surprisingly, there are a lot of differences.

This understanding informs the treatment strategies for cancer and research towards future treatment.

Cancer is a big killer; one in three people in the USA will die of cancer. IIRC, roughly 2 billion people in the world today will die of cancer. How many billions of people have to die before a scientific field is considered “important”?

Hall is so wildly wrong, that it would take pages to explain it. And I’m too busy to bother.

One more example of evolutionary thinking driving a treatment strategy.

HIV mutates and evolves rapidly, a property of many small viruses.

Each patient becomes a little world where the viruses adapt and mutate and eventually kill the host. This is one of the ways the virus overcomes the immune system, antigen escape by evolution.

The current treatment is 3 or 4 anti-HIV drugs. This keeps the replicating virus at low levels so there are fewer chances to evolve drug resistance. In addition, we have ca. 30 drugs from many different classes and can always switch when a patient’s HIV does evolve resistance.
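The logic of combination therapy can be put in rough numbers: if resistance to each drug requires its own independent mutation, the chance that a single virion carries all of them at once is the product of the per-mutation probabilities. A toy calculation follows; the mutation rate and virion count are illustrative order-of-magnitude assumptions, not clinical figures:

```python
# Toy model of why triple-drug HIV therapy suppresses resistance:
# the joint probability of several independent resistance mutations
# is the product of the individual probabilities.
# The numbers below are illustrative assumptions, not clinical data.

per_mutation_prob = 3e-5  # rough per-site, per-replication mutation probability
virions_per_day = 1e10    # rough daily virion production, untreated patient

for n_drugs in (1, 2, 3):
    p_all_resistant = per_mutation_prob ** n_drugs
    expected_per_day = virions_per_day * p_all_resistant
    print(f"{n_drugs} drug(s): ~{expected_per_day:.1e} fully resistant virions/day")
```

On these assumptions, monotherapy produces hundreds of thousands of resistant virions every day, so single-drug treatment fails quickly, while with three drugs the expected number drops to a small fraction per day—consistent with the observation that resistance rarely overwhelms combination therapy.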

This has turned HIV/AIDS from a death sentence to a chronic manageable disease. HIV/AIDS is one of the top 3 single agent infectious disease killers in the world.

Come to think of it. Hall isn’t even wrong. She has reached that rare place beyond wrong, Not Even Wrong.

My understanding is that the HIV model you describe also describes the newest thinking on cancer. Once one understands that in evolution “fitness” means nothing more than leaving offspring, and that variation is the name of the game, so that as long as any cancer cells are circulating in a body they are likely to evolve in multiple directions, this all makes depressing sense. Well, depressing short-term. Longer-term, understanding is always better.

It does not seem that knowing how evolution got us where we are is particularly useful if you actually know where we are. But one area it is useful is knowing where we are going. That is, knowing how evolution is going to affect things might change the preferred course of treatment in a particular case.

Using evolutionary thinking to help understand where we are is also useful. If we can really show that fevers evolved to fight some infections and that is their purpose, then we can further classify which infections they are good for and which they are not. If a particular type of infection is known to involve bacteria or viruses that are not affected by the higher temperature, then that is a good candidate for lowering the patient’s fever to alleviate suffering. And if a type of infection is known to be retarded by high temperatures, then that one should be left alone.

– Understanding disease seems to demand a first-cause intellectual perspective, i.e., how did the disease evolve? What adaptive function might it play for disease agents? This, though, is going to be theoretical and, hopefully, a series of heuristic ideas. The main benefit would be asking useful questions.
– Practical treatments are a whole other domain and probably will not be helped by these same questions. Treatment must always be opportunistic, concrete and immediate term oriented.

People misunderstand evolutionary significance. Perhaps morning sickness was a rule that statistically increased survival in a world of questionable food supply. That does not suggest that the modern world should preserve it. Some years ago it became fashionable to decry even judicious caesarean births as unnatural and anti-feminist, with some tragic results. A while back I attended a lecture by an evolutionary biologist who emphasized how deeply humans had evolved into a corner with childbirth, and the value of appropriate intervention.

Agreed. This seems to be a no-brainer to me. If we’re also treating the infection with antibiotics, why on dog’s green earth shouldn’t we reduce the fever?! And fevers are dangerous if they’re high enough.

When I was in medical school, in Canada no less, 1958 – 1962, I heard not one word relating to evolution in any of its aspects. To give most physicians the label of ‘scientist’ is to be facetious. Of the sciences (in broad terms), medicine contains the smallest proportion of true scientists. Of my ‘colleagues’ locally (now in the US), a depressing number are evangelical christians, evolution and GW deniers, science illiterates. Technology does not inoculate against ignorance.

I can’t speak for other medical schools, but my profs at St. Mary’s in London, UK, in the 1950s certainly covered many evolutionary aspects. Unfortunately, this type of prof has since been edged out by the peer-review system. Is he fundable by the agencies, is the first question an appointment committee asks. At many medical schools, poor teaching of basic science (e.g. biochemistry) led to its banishment from the medical curriculum. Now basic science is attempting to re-enter “by the back door” in the guise of “evolutionary medicine.”

If raising the body temperature a few degrees is more detrimental to a pathogen than to the host and mammals have a range of body temperatures, why couldn’t a pathogen adapt to the higher temperature and exploit the body reaction? A cat virus that jumped to humans might like the human body’s fever better than the normal temperature.

Rabies kills by destroying brain cells faster than the host’s immune system can respond. Doctors saved the life of a girl with rabies by lowering her body temperature to slow the spread of the rabies virus while her immune system developed antibodies against it.

I tried talking to a young medical student who happens to be a Seventh-Day Adventist faithful about the Baby Fae case, and the need for medical practitioners to understand the implications of evolution. Blank stare, embarrassed silence, weasel words. Just one example, out of the many I could multiply ad nauseam.

So yes, I think there is a strong case for making every effort to ensure that students of medicine, and medical practitioners in general, have a solid understanding of evolutionary biology and its implications. It’s not even about finding cures; it’s about the cardinal principle of primum non nocere: first of all, avoid harming the patient.

No, you should not let a fever go untreated because it’s an evolutionary adaptation for the body to fend off the illness.

Nor should you just allow yourself to throw up if you have a condition that causes emesis.

Nor should you just let nature take its course if you have diarrhea on the theory that it’s the body’s mechanism for getting rid of GI pathogens.

And on and on.

This is dangerous advice. Potentially deadly advice for the very young and the very old.

The reason ob-gyns don’t prescribe anti-emetics for pregnant women has nothing to do with the benefits of emesis. It has to do with lawsuits. Emesis is common — birth defects are common. Prescribing a medication for emesis where later that child is born with a birth defect equals an increase in the doctor’s liability insurance premiums. Even if the drug had nothing to do with the defect (correlation =/= causation).

Nor should you just let nature take its course if you have diarrhea on the theory that it’s the body’s mechanism for getting rid of GI pathogens. … This is dangerous advice. Potentially deadly advice for the very young and the very old.

Exactly. And to take the extreme example of cholera, where the etiology is known, the diarrhea isn’t the body’s de novo response, it’s the result of the infection, as a result of ADP ribosylation of a host G-protein BY A PROTEIN FROM THE BACTERIUM, Vibrio cholerae, yielding increased cAMP production, which opens a chloride channel, leading to massive water and electrolyte loss as explained in more detail here. The diarrhea is a direct result of the infection, not some evolved response of the host. And before the need for electrolyte replenishment treatment was understood, you usually died.

Yes, the problem is that there are many thousands of different diarrhoea-causing organisms, and they don’t all have the same evolutionary strategy. In by far the majority of cases, diarrhoea facilitates the spread of the organism concerned, and so it makes sense for diarrhoea to be at least partially caused by the “bug” itself.

But it doesn’t follow that it’s not ALSO in the body’s interest to get rid of as many of the (say) bacteria as possible by the method of diarrhoea. In the case of cholera, as you mentioned, the body doesn’t seem to have much to do with the mechanism for diarrhoea. But cholera is an exception – it causes by FAR the most profuse diarrhoea (sorry!) and is quite atypical in elaborating a specific toxin that constitutively activates cAMP. There are other organisms that do similar things, but most forms of diarrhoea are more clearly a REACTION to the pathogen (i.e. the pathogen doesn’t elaborate any factors that directly cause diarrhoea).

In some cases at least, diarrhoea may well be “beneficial” to the body in smaller doses. There is at least circumstantial evidence that this is the case, for (as mentioned below), anti-diarrhoeal agents do worsen outcomes for some types of diarrhoea.

(This is of course NOT a criticism of taking anti-diarrhoeal agents in general! In most cases, they are really helpful and quite safe. And by preventing excessive fluid and electrolyte loss – almost always the cause of death in diarrhoea – they may save lives in some cases.)

Sorry, but I have to correct this. In fact, taking antimotility (anti-diarrhoeal) agents in SOME types of infectious diarrhoea can make the condition worse. As a general guide, taking antimotility agents for diarrhoeas with a prominent fever, or for bloody diarrhoeas, is contraindicated. They have been shown to increase the rate of infectious complications (e.g. “toxic megacolon”) and death.

As far as I know, there are no comparable risks to stopping emesis (so go right ahead and take symptomatic treatment!)

With fever, there is little clinical evidence either way. In vitro evidence has been contradictory (e.g. fever seems to enhance bacterial killing, but treating it raises anti-influenza antibodies, etc.). There haven’t been any well-designed studies showing much clinical benefit to fever, so most doctors in my experience treat it if it’s bothersome to the patient, which seems fair.

Evolution was never a part of my medical school curriculum. No doctor I know has been trained in “Darwinian Medicine,” or really ever thinks much about evolution in the course of a day spent in treating patients. Certainly, with respect to antibiotic resistance, we see natural selection (or I suppose I should say unnatural selection) at work, but even that knowledge has little impact on how doctors react to such resistance (“That drug didn’t work? Let’s try another that will.”)

I do agree, however, with Jerry’s point that, “as we all know, pure science often has unexpected practical consequences.” However, I’ve yet to see a practical consequence (other than perhaps the example of antibiotic resistance) where “Darwinian medicine” made a contribution to the care of patients.

And as for allowing emesis to continue in pregnancy or fevers in infections, just because evolution may have given us these things doesn’t mean they’re necessarily good or even necessary (perhaps they’re epiphenomena)–and they certainly cause a lot of suffering. If we can safely ameliorate them (a big question!), there’s little doubt in my mind that we should.

Although these days, being responsible about using antibiotics is becoming more important. We know that drug resistance will occur, and the number of antibiotics that remain effective against some pathogens keeps shrinking. Everyone should be concerned about bugs such as MRSA, common and getting commoner.

But that sort of misses the point. You can use a computer without having the slightest idea of how it works. Or drive a car.

But someone has to know how a computer works or a car. Otherwise, there wouldn’t be any computers or cars.

It’s the same with medicine. Medical researchers in many fields including cancer and infectious disease have and are acutely aware of evolution simply because they need it to do their jobs.

The best way to understand antibiotic resistance is to understand the natural role of the organisms from which antibiotics are derived, principally the actinomycetes, which live in symbiotic relationship with trees in the soil. Among other things they control the carbon:nitrogen ratio in the soil.

Antibiotics control the balance of bacterial populations and thus the detritivores which break down material on the forest floor to get this balance right. Fascinating stuff in a little Russian MIR book, a product of the Soviet Union. NO LYSENKO, FREE OF LYSENKO, ZERO LYSENKO. Great.

One example where evolutionary thinking may have medical benefits: the apicoplast turns out to be an algal chloroplast that lost the ability to photosynthesize as it became an endosymbiont essential for metabolism in Plasmodium, the parasite that causes malaria. This has opened up the possibility of using herbicidal chemicals to kill the malarial parasites while doing little harm to the hosts. This may or may not lead to an effective treatment for malaria, but without evolutionary thinking this possibility would not have occurred to researchers.

To say that evolutionary thinking isn’t useful to medicine is like saying your medical history isn’t useful to medicine, or worse, that “understanding” isn’t useful. It’s useful to understand that the human body is an ecosystem, and as ecologists know, or are now learning, evolution is integral to keeping an ecosystem healthy. As Nesse and Williams show in their book Why We Get Sick, it’s useful to understand:

-that organisms are ‘designed’ to maximize their success in reproduction.
-that adaptations work best in the circumstances in which they evolved.
-how pathogens facilitate their own spread.
-that within-host selection favors increased virulence, while between-host selection acts to decrease it.
-that diseases spread by personal contact should generally be less virulent than those conveyed by insects or other vectors.
-how pathogens evolve molecular mimicry.
-that anxiety is probably a defense that can sometimes be maladaptive in modern society.
-how senescence is not caused by mistakes but by compromises carefully wrought by natural selection.
-why many people have a deficiency in Vitamin D.
-that autoimmune diseases are the price of our remarkable ability to attack invaders.
-how emotions evolved, and why they’re often a misfire in modern society.

and many more!

I think Randolph Nesse comments in Jerry’s link above, naming a few of the promising practical applications in this new field of study such as:

-Identifying pathogenic mechanisms by using positive selection.
-Using phylogenies and positive selection to predict which currently circulating strains of influenza are most likely to be closely related to future flu epidemics.
-How we might influence the evolution of lower virulence
etc.

from “Why We Get Sick”:

“We should be careful to distinguish defences from other manifestations of infection, slow to conclude that a bodily response is maladaptive, and cautious about overriding defensive responses. In short, we should respect the evolved wisdom of the body.”

“By trying to find the flaws that cause disease without understanding normal functions of the mechanisms, psychiatry puts the cart before the horse.”

Understanding evolution is useful to medicine, because ‘understanding’ is useful in medicine. If evolutionary thinking had been applied to medicine far earlier, i have no doubt that more lives could have been saved and people would be living healthier lives now.

You’re absolutely right.
And I should say that my rough scan of personal doctors implies that they have very little knowledge of Darwinian Medicine.
It is also unfortunate that most research done in this field is by biologists, and there seems to be little crossover to medical researchers of the knowledge they get from experiments.
For example, doctors insist on raising children in completely hygienic conditions while more and more research in favor of the “hygiene hypothesis” completely contradicts this approach in the prevention of allergies.

It seems to me that conclusions of the type that advocate letting fevers run their course, allowing morning sickness to continue, etc. perpetuate the fallacy that just because something has become prevalent in our species, having evolved in us “naturally,” it must be good. Of course there is a reason these things happen, but just because the behavior has replicated in our species does not mean it’s advantageous per se; it might simply be an unfortunate side effect of a truly advantageous mutation. Remember: RANDOM mutation and nonrandom selection! Evolution does not have a purpose or direction per se, and not all species traits are advantageous; some are quite limiting.

It just seems really clear to me, but maybe I am simple minded. Figuring out the evolutionary history of all the organisms involved in a disease situation can only be of benefit when it comes to figuring out how to treat patients. It may not be necessary in all cases, but it will never be detrimental in any case.

The only conditions in which it would be detrimental is when people misunderstand and or misuse such knowledge. That there are always some people that will fall prey to the naturalistic fallacy is just something that has to be dealt with, but that is not relevant at all regarding the question of whether or not such knowledge is useful for medicine.

Sounds to me like some medical professionals just don’t want the hassle of having to learn more, and therefore rationalize that evolution is not important for the practice of medicine. That may be fine on an individual basis; mechanics are absolutely useful and necessary. But the institution as a whole cannot afford not to utilize such knowledge.

Isn’t it hoped that evolutionary lines of questioning can help isolate genes related to genetic disease? Maybe something like a blood disorder in humans that is not seen in chimps–by assuming that there is a common ancestor we raise the question of whether we developed the potential for the disorder or the chimps overcame it. Either way we might hope that by comparing genomes in clever ways we could see where ours broke down or where theirs was overwritten. They might even have a disabled copy of our broken gene in their junk DNA.

Most doctors, at least the GPs, are little more than pill pushers these days. For them, Darwinian medicine offers nothing directly, but indirectly an evolutionary perspective on disease may lead to a more subtle understanding by researchers and thus to better treatments.

Indeed, a Darwinian perspective on disease illuminates some obscure corners of human history; understanding the impact of epidemic disease on the course of history requires some evolutionary thinking (primarily the co-evolution of host and parasite that turns dangerous diseases into mild fevers).

Evolution is a funny thing; it has left us with appendices that rupture, hernias that strangulate, autoimmune diseases that rub us out prematurely, and childhood cancers that are selectively perplexing. Not to mention testosterone-induced risky behavior, suicide (which seems to have a genetic component), and hundreds of genetic diseases where there is no heterozygote advantage. It would seem that some adaptations arise to bump us off. Is there a benefit to a gene if its carrier doesn’t live forever? Just as genes arise to limit brood sizes for birds or mammals, perhaps “deadly” genes arise to ensure that our broods don’t get too big. Or at least things like appendices or Tay-Sachs disease aren’t selected out because families, and families of genes, actually do better in times of dearth if there are fewer mouths to feed.
Rather than trying to explain away these Darwinian contradictions, I would suggest we look for reasons why the death of a gene’s carrier is beneficial for the gene. It follows from this that we have evolved to survive in the margins and when the environment is at its worst. We are evolutionarily forever fighting famine. We have evolved genetic mechanisms that leave us with a strong desire to reproduce but don’t punish us with the dividends of that desire.

They have produced one major annoyance; a movement that preaches to us how we ought to revert to the supposed diet of our ancestors (The Caveman Diet, etc.).
For one thing, it is not preached to you; the facts are stated to you if you choose to read them. The caveman diet (Paleo, primal, ancestral diet, etc.) is how primitive man, from walking upright up until the agricultural revolution, ate and evolved. Human bodies have not evolved much since then, so it only makes sense that we should eat the same way (or as close as we can get today in modern society). The diet consists of eating basically only what we could hunt or gather (no grains, lentils, dairy, etc.). Primitive man had abundant health, being very lean, muscular, and free of all of the diseases we (as humans) suffer from today. They would have lived very long lives if it wasn’t for infections and injuries. I have been following it for years now and reap the benefits. My wife had no nausea during her pregnancy eating primally, and my 9-month-old little caveman has never been close to having any health problems (not even an earache or diaper rash). He is very strong and muscular already.
As far as the fever killing off the infection, I have found this to be true. In my 20+ years as an EMT I have had thousands of patients with fevers; the ones that let the fever ride (103 and lower; 105+ tends to cook brain cells) for a day or so will beat the infection very quickly. The ones that keep trying to bring the fever down tend to be sicker much longer. That is just from my very unscientific study of patients my departments have had outside of Chicago.

HIBERNATION DIABETES 2
With weight gain, the body interprets this as the person heading for a deep-freeze winter where food will become scarce or unavailable (as with hibernation). Fat stores have to increase to carry him or her through the deep freeze, and what is happening in the plant world exactly matches that of the animal world. The plants provide precisely the right nutrients for the animals for that time of year, and vice versa: the bowel flora of the animal’s faeces and the composition of its urine are precisely right for the plants. More nitrogen is locked up in the animals and more carbon is locked up in the plants, and I’d reckon that the carbon:nitrogen ratio shifts between the seasons. More muscle mass in the animal relative to fat in summer means more nitrogen relative to carbon in summer. Have to think about the plants, e.g. lignin.

Coming into a winter freeze, he or she has to conserve energy by being less active, as is taking place throughout the entire forest; hence the shift in the number of insulin receptors from muscle cells to fat cells (peripheral resistance to insulin in diabetes 2). The simple sugar sucrose in the berries and other fruits, necessary for fast ‘fight or flight’ reactions in summer, has been replaced by an increase in the more slowly digested starch. The glucose from the breakdown of starch is increasingly converted to fat in fat cells, and of course the liver (fatty liver), rather than to glycogen in muscle cells as they atrophy. Soon glucose, even from starch, won’t be available or needed as the animal ceases eating and goes into ketosis, and ketones will replace glucose as a fuel in much of the animal’s tissue.

What glucose the animal must have will come from gluconeogenesis, a part of going into ketosis in this context. Gluconeogenesis is the process whereby the liver MAKES glucose from the body’s own tissues, from three sources: lactic acid from muscle glycogen (although that disappears once starch stops being eaten), amino acids from the breakdown of muscle protein, and glycerol (glycerine) and fatty acids. Fatty acids from fat cells are broken down to ketones, some of which drive what is called the TCA cycle to provide the energy to drive gluconeogenesis, and the rest are exported from the liver as fuel for most of the body as cells adapt to ketones. There are three different ketones: acetone, beta-hydroxybutyric acid, and acetoacetic acid. When muscle glycogen runs out, and to conserve muscle protein for essentials, fatty acids and glycerol from fat cells become the sole source of raw material for gluconeogenesis. Cortisol is the hormone responsible for mobilising the muscle protein as well as the fatty acids and glycerol from fat cells.

Thus the big muscles developed by the spring and summer activity are now atrophying, as big muscles are no longer needed. As the large fat deposits are needed for fuel during the long fast, so the muscles act as a reservoir of amino acids for cell repair, peptide hormone synthesis, etc., although this will be minimal as the animal lies asleep, only rousing every now and then. Bone breaks down to provide not just calcium and phosphate but all of the minerals the body needs: copper, zinc, selenium, etc. The hibernating animal stops urinating to conserve water. I think hibernating animals can store urea, the nitrogen of protein breakdown, in a crystal form, so a range of other things, e.g. uric acid, can probably be stored in crystalline form for recycling as the animal COMES OUT of hibernation.

NOW TO THE POINT.
There is NO FUEL STORAGE going on at all in the hibernating animal. Thus there is no point in hanging onto a lot of cells in organs involved in stimulating any sort of storage, e.g. the beta cells of the pancreas and the cells of the thyroid. So why not get rid of them, as they are just wasting fuel AND CAN BE RECYCLED? Apoptosis, or cell suicide, is the shot, as everything in the cell is broken down to bite-sized chunks for other cells to use. There is no energy-wasting inflammatory reaction with apoptosis. Enter auto-immune ‘disease’, only a disease in the concrete jungle, which bears absolutely no resemblance to pristine wilderness. (DIABETES IN THE CONCRETE JUNGLE)

I first focussed on this idea when I read about the gene clusters DR3 and DR4 (?) associated with auto-immune disease. I had already built a model of an animal in the wild of a temperate zone, or a human in a temperate zone, and these gene clusters were found in people from Central Europe who had diabetes 1. They were active in the pancreas and thyroid, both endocrine organs. Temperate zone equals big swings in temperature: snow in the winter and hot in the summer.

SUMMER DROUGHT DIABETES 1
Then I began to focus on the advantages of trimming these organs with the sudden onset of a life-threatening drought in summer, when the animal did NOT have big deposits of fat on its body but did have big muscles. THE REVERSE OF WINTER. If the mechanisms of storage didn’t fight with those of release from storage, then the transition to ketosis would be much quicker, easier and smoother. Very little or no adrenalin needed to initiate gluconeogenesis, just cortisol; so no water-wasting sweat and no energy-wasting anxiety from hunger. The animal would quickly become soporific, go lie in the shade under a tree and sleep a lot of the time. A sort of summer hibernation.

So diabetes 1 could be this summer model gone awry in the concrete jungle. I believe that the beta cells of the pancreas are vulnerable to big swings in osmotic pressure caused by big swings in blood sugar, for the same reason that the neurons of the brain are vulnerable, i.e. they don’t have insulin receptors on them and thus don’t store glycogen. I have never been able to confirm this idea, but I can’t see how they could regulate blood sugar if they stored it as glycogen. They wouldn’t produce insulin yet have insulin receptors on themselves. Doesn’t add up. Loss of weight in pre-diabetes 1 equals drought adaptation, but with lots of sugar thrown in to initiate reactive hypoglycaemia plus expression of the gene clusters DR3 and DR4. Thus the auto-immune ‘attack’ is perhaps apoptosis PLUS necrotic cell death and an inflammatory immune reaction. Dunno. All impure speculation of course.

THE CURE
OK, if the hibernating animal provides a model for diabetes 2, then it MUST BE REVERSIBLE, i.e. CURABLE, following the path of the animal coming OUT of hibernation. The right ketone diet, modelled on what this animal eats once it resumes eating. That’s next, as soon as I clean it up. Paul Hill

On the morning sickness thing, surely there are enough populations out there with and without morning-sickness drugs to come up with a valid conclusion? We just need a concerted effort to actually record the necessary information. Profet’s claim sounds to me like naturalistic bullshit. The body is disposing of toxins all the time. Puking is not an effective method of removing toxins from the body; it has some effectiveness in cases where the toxins have not yet been absorbed and are still within the stomach, which is why ipecac syrup persists. For Profet’s claim to be even remotely plausible, we would need a mechanism for the transfer of toxins from the body into the stomach cavity. I call bullshit on Profet.

As for treating fevers, even 40 years ago medical practitioners would only give antipyretics if the patient was extremely uncomfortable; otherwise the patient was left to suffer. The advice was not to cover up, because that can cause heat fatigue, especially in children (and the elderly), and then you’re really screwed. In the case of fevers it should be possible to test the idea that heat = shorter fever; it’s one of the instances where an ethical human experiment can be designed. Someone needs to cough up the money, design the protocol, etc. Since when did colds bring on a fever?

Every decent pre-med training would involve general biology (so some evolution) and pathology (which should have a lot more on evolution). The virologists and bacteriologists can’t get by without knowing quite a bit about evolution. I wouldn’t dismiss Hall’s claim so quickly.

I’ll have to go off and read Hall’s article. I’d be a little surprised if she doesn’t get that malaria was a force which, in some areas, weeded out the people who didn’t have the sickle-cell trait.

OK, I read Hall’s article and I agree with her 100%. The mention of malaria confused me a bit because no background was given. I think what Hall is saying is that it’s not clear whether humans developed the sickle-cell trait long after exposure to malaria, in which case humans can be said to have evolved the trait as a defense, or whether the trait was already there and simply selected by the exposure to malaria; in either case it doesn’t matter at all to medical science. At any rate I don’t believe Hall gets her evolution wrong.

What Hall is saying is that “Darwinian Medicine” currently contributes nothing at all of significance to medicine and she doubts it ever will, and the “Darwinian Medicine” people should stop whining about how the evil medical conspiracy is depriving them of much-deserved research funds. Well, there’s that, and Hall says something along the lines of “so what if you’ve established an evolutionary basis for X — unless it actually helps us solve medical problems it’s of no value whatsoever to scientific medicine”. I agree with her 100%. This “Darwinian Medicine” thing, at least in its current form, is just like “Evolutionary Psychology”.

VIRUSES
‘Nature red in tooth and claw’ (Tennyson), something often attributed to Darwin. Competition is everything and altruism doesn’t exist in the wild. Yet apparently it was Darwin’s bulldog Huxley who put all the emphasis on competition, whilst Darwin not only stressed altruism a lot, he also went back to Lamarck before he died. If you are obsessed with competition to the point of blind dogma, like Dawkins, you will see it everywhere, including where it is absolutely impossible.

If a virus is a living thing, as both neo-Darwinists AND Dawkinists claim it is, then its competing with its host might be a valid claim. What is a living thing? It must at least have metabolism, i.e. the ability to produce energy from some sort of fuel. This enables it to have motility, i.e. to get around. A virus has neither: no organelles, nothing, just nucleic acid, and enzymes in the case of retroviruses. It is simply a message in an envelope with an address printed on it.

Yet the language of the neo-Darwinist/Dawkinist is that the virus sneaks up on the cell disguised as something else to fool the cell it wishes to invade, and thus tricks the cell into letting it in. The cell unwittingly takes the virus in; then, realising to its horror that it is a virus, it turns on its defences, such as interferon, to block replication or to try to kill it. Once in the cell, the virus takes over the machinery of the cell to reproduce itself, to the detriment of the cell if the viral offspring have to lyse the cell to escape.

There is a more rational way of putting it. The virus bumps into the cell and matching receptors on both link up; the cell takes the virus in and uses its machinery to make replicas of the virus from its template, its nucleic acid. Things like interferon are used by the cell to MODULATE the expression of the virus.

What if the basic paradigm is upside down, and viruses in virgin wilderness (which unfortunately no longer exists) are not pathogens at all but the means of transferring genes horizontally, as well as being a toolbox of viruses (HIV and some cancer viruses such as RAS included) whose role is inducing modifications in the way cells behave?

Viruses in the concrete jungle, which is the very antithesis of wilderness, are analogous to weeds, and like weeds they have undergone a range of mutations that make them more infectious and virulent in the transition from wilderness to the concrete jungle, as well as able to infect a larger range of species.

In this model, a variety of viruses in wilderness form collaborative, synergistic relationships just as plants do in wilderness. There would be no such thing as single viral infections, as with plagues. Same with plants: no one plant, as a weed, taking over and destroying everything else.

AN AIDS CURE?
Many years ago I read in a health and fitness magazine about a woman who had been completely cured of HIV/AIDS without any medical intervention. Actually she cured herself but didn’t realise it until she was tested. Now at first I thought ‘Oh yeah’. But as I read on I became intrigued. She was living with a bloke she was pretty besotted with, and the next thing she tests HIV positive. It turns out that her bloke was bisexual and had secretly been having it away with other blokes, had become infected, then infected her. She was devastated: betrayed and infected. Then he died, and she was even more devastated. There was not much treatment back then, so, believing she didn’t have long to live, she moved to a place in the country to die. I think she started on some sort of health regime, a change of diet and exercise, but when in the entire process I can’t remember.

Somewhere along the line she goes through a rebirth, not a religious rebirth; if anything, one OUT of religion and thus the fear of death. This is a period of intense anxiety and depression, and finally a total acceptance of impending death. With this very painful, tearful acceptance comes an extraordinary calm and inner peace. Death is inevitable; you can’t get around it, can’t make it go away. But paradoxically, it’s at the moment of total acceptance that it DOES go away; well, STARTS to go away.

For many years I have studied virology, and I was gobsmacked with fascination when HIV was discovered in 1984, especially with its extraordinary complexity and its ability to avoid all forms of destruction. Now, I’m not a Darwinist but a Lamarckian, and one glaring omission in Darwinist evolutionary theory is that the evolution of new bodily structures must coincide precisely with a suppression of immunity against said new structures; otherwise there would be a so-called ‘auto-immune’ response against them and they’d be destroyed. There is no way these two events could coincide purely by chance.

The HIV virus seemed to fill the bill perfectly, but NOT in the way it behaves in the concrete jungle. I believe that viruses are NOT pathogens in pristine wilderness, but in the conjung (new word) behave like weeds, i.e. plants in the wrong place at the wrong time. I can’t go right into this as space doesn’t permit, but what if this virus, in collaboration perhaps with other viruses, could selectively suppress immunity against newly emerging structures which develop as a response to environmental change by Lamarckian evolution?

Then one day, reading a book on the epidemiology of AIDS, came a revelation. T4 cells containing the HIV genome in vitro cannot make viral particles without CORTISOL added to the petri dish soup. Now cortisol is released by the adrenal glands in response to stress and starvation. So I started checking, and it seemed that ALL viruses and their cousins, the so-called jumping genes, need cortisol to activate them, but I’m not sure.

The Lamarckian link here is that as an animal gets out of kilter with a changing environment it becomes increasingly stressed, with cortisol released as a result, thereby activating so-called genetic instability: jumping genes activated, which bring about TARGETED mutations, and viral genes activated to make viral particles, which spread genes horizontally by cross-infection to the same end. HIV is a relative of the retrotransposon, or reverse-transcription jumping gene. It IS a retrotransposon which can be transported out of the cell by being coated.

Short-term stress, as with ‘fight or flight’ in wilderness, boosts immunity, whereas long-term stress does the opposite and depresses it. That’s the effect of cortisol, because it depresses immunity; hence its use in ‘auto-immune’ diseases such as MS. That fits the Lamarckian hypothesis PERFECTLY: viral particles produced and spread simultaneously with the depression of immunity AGAINST said particles. So what seems to have happened with this woman was that when she calmed right down, her cortisol level dropped, her immune system bounced back, and her infected T4 cells stopped making virus even though the cells still contained the fully integrated viral genome.

Would those cells eventually be replaced so there would be no ‘infected’ T4 cells remaining? Dunno. T4 cells are screened in the thymus gland to weed out those that would attack ‘self’, and the thymus atrophies early in life? Next: refined sugar triggering cortisol. That is, after I see what sort of response this post gets.

A CORRECTION.
When I posted the above on ScienceBlogs I got this reply, and I admit I hadn’t thought about germ-line mutations, or new genes resulting from Lamarckian transfer being accepted as ‘self’, before the immune system and the thymus screening of T8 and T4 cells arose during embryogenesis. However, it doesn’t matter with Lamarckian evolution, as it can take place any time after formation of the blastula: there is a lot of retrotransposon activity between blastulation and gastrulation. Primordial germ cells within the blastula/gastrula do not enter the seminiferous tubules of the male testes until well into development, by a process identical to metastasis in cancer.

So presumably environmental conditions in the womb can influence the above retrotransposon activity, which in turn has an effect on primordial germ cells and thus sperm. After the immune system is fully developed, it is cancer that has all of the mechanisms of Lamarckian evolution: ‘wild-type’ cancer, which has no resemblance to clinical cancers.

The Reply.
As for your immune system question – i.e. why doesn’t the immune system destroy newly emerging structures – it stems from a lack of understanding of how the immune system works.

In the fetus, the immune system (and in particular self vs. non-self recognition) doesn’t develop until quite late in gestation. Once it does develop, how does it decide what is self and what is non-self? Easy – everything that’s already there is “self”, and everything that enters from that point on is “non-self”.

That’s how chimeras form. Add cells from a different individual before the cut-off date, and they’ll happily coexist forever after. This happens naturally fairly regularly, and with species where multiple embryos share one placenta things can get really quite haphazard – fertile female monkeys with genetically male ovaries have been documented. It even works with quite distantly related species.

Anyway, the point is that there is no “huge oversight in Darwinian evolutionary theory” here. The problem you’re trying to address simply doesn’t exist – by the time the immune system develops, any novel structures are already part of the “self”.

BATTLE WITH ORAC.
I have had a monstrous battle over my ideas on evolution and medicine on Orac’s blog on ScienceBlogs. Posts are by Paul Hill and start down the page on links 2 and 3. To Orac the conventional view is everything, and anything else is woo (bullshit in my language). I have been subjected to an unremitting tide of the most vacuous, banal, psychotic abuse, but have managed to take the piss out of most of it. The sheer hypocrisy of the attacks is almost unbelievable; almost nothing constructive at all. Good for a giggle, as well as for witnessing the extreme difficulties of trying to promote new ideas, which is what science is SUPPOSED to be all about.

Thanks for reminding me that you are the sort of psychopath responsible for gun massacres, by bullying some poor bastard until they are completely psychotic. You don’t use a gun; you just use a computer, and it’s perfectly legal, and you hide behind a pseudonym. It won’t work with me, because I am sane and enjoy, up to a point, taking the piss out of morons.

It’s easy to say that in hindsight, but many of our protocols arise from the investigation of such oversights. As another example, despite engineers’ careful planning, quite a few hospital patients had been electrocuted, some fatally, by medical test equipment. By the 1980s the medical electronics industry had compiled a huge amount of information to help engineers design safer devices, and engineers have continued to improve on things. These days there’s quite a list of tests to be done before a chemical is approved for use as a food additive, drug, or even as a poison, and although these requirements help prevent unintended effects, there are still no guarantees that there will be no harmful unintended effects; that’s just the way things are.

In my opinion, evolution in medicine is more useful as a tool to understand rather than to cure. It is a way to add a story to collections of otherwise unorganised facts.

That’s at least how evolution was used at the Belgian University of Ghent when I was there, now over thirty years ago.

If nothing else, it is a nice and useful method to help doctors remember facts in a meaningful context, making it less likely that they’ll forget them.

Because we shouldn’t ignore this important fact about medicine: even though we still have far, far, far more to learn than has already been discovered, the field of medicine has become so incredibly large and complex that we can no longer reasonably expect any doctor to still know everything he/she should know.

This, in my view, is the main tragedy and difficulty medicine has to face and isn’t coping with very well.

There is simply far too much for anyone to remember. I frequently find myself working on a problem and later discover from my old notebooks or publications that I’d already solved the problem.

In the case of evolutionary theory in medicine, there is far more to it than you imply in your post. For example, for over 20 years now virologists have been studying the surface structure of some viruses to gain some insight into how the viruses develop a resistance to the human immune response – the hope is that such work may lead to the development of a chemical agent which can interfere with the virus’ reproductive cycle and which the virus is unlikely to develop a resistance to.

“far more to it?” possibly, but how exactly does your example demonstrate that?

To me, evolution can only be used here as an explanation once a solution has been found, not as an indicator of where to look or even a way to narrow the search. The trouble with evolution is that it is based on random mutations that just happen to give an advantage to the creatures they are a part of, in the situation those creatures happen to be in.

We know extremely little of the present, far less of the past and how it came about. That, in my view, drastically reduces the usefulness of evolution as a medical tool.

Evolutionary theory is, I think, a unifying theory for biology, a system that allows us to understand how things have happened: it explains (almost) everything, but only *after* it has happened.

It’s a lot like meteorology: the weatherman is quite good and convincing at explaining what has happened and how it has happened, but far less good at predicting what will happen.

I’d like to clarify. I was questioning the need for “Darwinian medicine” as a separate medical entity. I fully accept that an understanding of evolution is essential to all medical science, but I think it is better to integrate it than to separate it. Many of the examples of benefits from “evolutionary medicine” refer to advances in genetics and to simple study of the way things are rather than how they got that way. We can study what the genes do and understand that they probably evolved to do what they do because of some survival advantage and we can even look for that advantage without needing to understand exactly how that evolution occurred. The “just so” stories might be the true explanation or it might be a spandrel or an accident. We can’t rewind history to find out for sure.

From a health professional: the vast majority of doctors, pharmacists, optometrists and dentists do not understand evolution. As a matter of fact, most practitioners are scientifically illiterate. Many practitioners even offer their patients a “holistic” experience, peddling questionable supplements and alternative medicine like acupuncture. This is not done just to make an extra buck; close to 50% of practitioners have tried an alternative-medicine treatment for themselves or a family member.
This happens despite the fact that the very existence of their professions is due to science. But your run-of-the-mill health professional might as well live on a different planet from the academic who gets published in NEJM, JAMA or the Annals of Internal Medicine (and I think Dr Hall had the latter in mind in her article). On evolution specifically, all health professionals have at some point answered an exam question on what an allele is or what carbon dating is. This does not mean they truly understand the concepts, or how scientific questions are posed, or how robust the findings are. In most minds the phrase “survival of the fittest” is indistinguishable from evolution itself. That alone should tell you how much science and evolution your average doctor has actually mastered. They may very well excel in the daily motions of their work. But scientifically literate they certainly are not.

Evolution is essential to medical science. I acknowledge that many doctors fail to understand evolution or even the scientific method. I would argue that what we need is better basic science education, not a new separate discipline of “Darwinian medicine.”

I think there can be no such thing as “Darwinian medicine”. There is simply medicine, and the reason that it works to the degree it does is that we all have a lot in common, thanks to genetics. Evolution makes medicine possible, but, in my view, there are far too many variables for it to be useful as the basis for a new branch of medicine. It is too unpredictable. Maybe it won’t be in the future, but not now.

Many doctors do indeed fail to understand evolution and the scientific method. In my view, current doctors are essentially mechanics. I don’t know of that many mechanics who have a thorough grasp of physics, either.

I’m a doctor and I can promise you that the majority (probably vast majority) of doctors do not understand evolution much at all. Most never studied it beyond maybe sleeping through a class in college, and what little they know is misunderstood. I even met some creationist doctors during school and residency. It’s depressing. Also, please make Paul Hill go away.

When I first started going to the Lorne Genome Conference, I asked questions at question time after lectures that had biochemistry and maybe endocrinology in them, not realizing that all of the delegates were molecular biologists. The lecturer stood with egg on his face, stating that he didn’t understand the question, as if it was me who hadn’t asked the question properly. Then the symposium meister would quickly say something like, “We’re running behind time, so we’d better move on,” to save further embarrassment.

I asked a question in all innocence at a very prestigious Julian Wells lecture; it involved developmental biology. Egg on face again. That enraged the conveners so much that I was told I had to take a lower profile or I might not be invited to come the following year. No mention of why, however, indeedy no.

I told this to a couple of students a coupla years later. You know what one said to me? “Noddy, I would have paid good money to have seen that.” What I had missed, because I always sat in the front row (as I’m deaf), was all of the sniggers from the students behind me at seeing one of their dogmatic, overbearing professors squirming like hell, and at the hands of an amateur multidisciplinarian.

I became the student’s David Carradine in Kung Fu. Grovel enough and you also can bathe in my blinding light.

Another meathead lining up for a verbal serve. It was a completely different reaction at my first Lorne Cancer conference. I was eulogized, well, not OVERTLY.

HYPOXIA.
The first cancer conference I attended, about 16 years ago, was a joint Lorne (Vic, Aust) and AACR (American Association for Cancer Research) conference. A lecture on the second day was about a bowel cancer in which the number-one tumour suppressor, p53, was mutated, presumably the reason for the tumour, and which secreted the prostaglandin PGF2. Now, I knew that PGF2 constricted or narrowed the blood vessels supplying the gastrointestinal tract, and suddenly I had a different spin on the sequence of events.

At question time I suggested that hypoxia, or oxygen starvation, was the cause of the bowel cancer, not the mutations in the p53 tumour suppressor. I suggested that the mutations in p53 were a RESULT of the hypoxia. As I said, this tumour secreted the prostaglandin PGF2, which is a vasoconstrictor. Adrenalin activates PGF2 in the gastrointestinal tract in response to ‘fight or flight’, stress in other words.

At the end of the symposium a chap approached me and said this was a very interesting idea, as he was working on the same cells. It turned out that he was the AACR convenor. I saw him later and explained to him what is called the RAS pathway, by which adrenalin works. He said, “You must be a very good biochemist.” I was nonplussed by this, as I didn’t think my suggestion was anything extraordinary, and I replied, “I’m not a biochemist, I’m an electronic technician; this is just a hobby for me.” He looked dumbfounded, as if I was taking the Mickey out of him, and didn’t say anything, then took off. I didn’t run into him again for the rest of the conference.

The following year, when I went to pay the conference secretary for everything, registration, all meals and accommodation, she whispered as she handed me my bag, “Don’t worry Noddy, it’s all been taken care of” ($750). I was gobsmacked, as there was no explanation as to why. The same thing happened for the following two years, then gradually tapered off. I know now. You see, my answer that day involved endocrinology, biochemistry, haematology, oncology, molecular biology etc., all of which I had studied BECAUSE I DIDN’T KNOW THAT I WASN’T SUPPOSED TO. All of the delegates were molecular biologists looking for GENES to blame, then designing the appropriate drug. It is pure commerce, and that blinds totally.

These conferences would not be possible without large-scale funding from lab suppliers, pharmaceutical companies etc. This, plus Dawkinist dogma, makes it IMPOSSIBLE to understand cancer, let alone find a cure. How could the delegates POSSIBLY admit that an amateur scientist had worked something out that the entire global community of cancer researchers couldn’t?

At the last cancer conference that I attended, about 5 years ago, there were two lectures on the discovery of the HIF (hypoxia-inducible factor) genes that switch on the proliferation of cells and angiogenesis (the growth of blood vessels to allow cell growth), CANCER in other words, and in line with my suggestion all those years before, yet I got no recognition whatsoever. Hypoxia is probably the most important tumour inducer of all.

Go check with the AACR, not that I’d expect them to admit it, as it must be embarrassing for them. The convener’s name was something like Horwitz. For that most profound bit of wisdom, gbjames, snivel AND grovel enough and you may touch the hem of my garment; just don’t dirty it with bullshit, please.

SEASONAL CONTROL of BASAL METABOLIC RATE in the WILD vs the CONCRETE JUNGLE. A Model for Understanding a range of Diseases, including Schizophrenia.

After a rebirth I went through in 1972, OUT of religion, I began a fitness program to get rid of about four stone of excess weight. I walked several miles each day to a Turkish bath and heated seawater swimming pool. Then I started to do long-distance swims in Port Phillip Bay. To my amazement I lost the weight in about 3 months. I’d tried many weight-loss programs in the past and had failed. Now I was just eating good food and doing a lot of exercise. I figured that it had something to do with my completely changed attitude, some relationship between my mind and my metabolism. I’d been a guilt-ridden, introverted, alcoholic, manic-depressive (my diagnosis), backslidden Christian fundamentalist. Now I was full of self-confidence, happy and completely cured of the grog. I could have one or two glasses of beer, then go home.

Okay, standard medical dogma had told me that if you were an alcoholic, and I was a shocking alcoholic, you could never have another drink; you should join AA and ask God to help you. Once an alcoholic, always an alcoholic. It was in your genes. But I did the exact OPPOSITE, not intentionally; it just happened. Freed of the guilt and despair of believing that I was going to Hell one day, I just didn’t feel like the booze any more. AA does the opposite: it adds more mumbo jumbo, self-flagellation, self-loathing etc. Believing that I was genetically predisposed to the grog had only ADDED to this despair. NOW I WOULD QUESTION ALL DOGMA. I knew that Freud was insane for a start: Eros, Logos, Thanatos, Id, Ego, Superego etc. etc. Bloody absurd. I was now ONE, not a pantheon of lunatics all at each other’s throats. So I had already demolished one ‘science’.

So, to find out how I’d been able to lose all the lard, I trotted off to Ramsey’s surgical and bought a swag of medical books. I got into neurology for a while, then lost interest when I began to read an incredible textbook, ‘Metabolic and Endocrine Physiology’ by Tepperman. This book was different in that it speculated about all sorts of things, discussed different experiments etc.

ENTER YIN AND YANG.
There were two prefixes that struck me over and over again, hypo and hyper, and the terms that struck me the most were hypothyroidism and hyperthyroidism. They were opposites in so many ways. The hypothyroid was overweight, slow, lethargic, constipated, unresponsive, and had high blood pressure, to which I added introverted, self-critical etc. There was something manic-depressive about this person. I’d been one myself.

Then there was the hyperthyroid: underweight, very reactive, suffering diarrhoea, anxious, with high blood pressure but for a different reason, to which I added extroverted, perhaps narcissistic (depending on the degree), socially critical. There was something schizophrenic about this person, as I’d done a mild version myself as a sort of reflex reaction to the manic depression on coming out the other side. It was beginning to dawn on me that these two opposite individuals were an exaggeration of the extremes of the physiology of an animal in the wild of a temperate forest, an animal ENTRAINED into the extremes of climate. There was no option other than to be entrained.

What stuck out from my dabble in neurology was the difference in the amount of THOUGHT between summer and winter. In summer there was a lot of ‘fight or flight’, and of course with the much longer days more time to do that fast, reactive thinking of chasing lunch or avoiding becoming lunch. Without going into unnecessary detail, winter was the opposite.

Now, looking at the regulatory path for the THYROID gland, it was STRESS? → thyrotropin-releasing factor → thyrotropin → THYROXINE. The stress was speculative. I’ve been around all sorts of mechanical devices all my life, and I started to think, ‘What if there is some sort of flow meter that measures the amount of blood going to the brain, and it’s that rate which regulates the amount of thyrotropin-releasing factor?’ More thought, more neural activity, more metabolic activity, more glucose needed for that activity, more oxygen, more blood flow.

So the animal in the wild was entrained: it had no control over its degree of thought and so no control over thyroid activity. In the winter it became (sub-clinically) hypothyroid and in the summer it became (sub-clinically) hyperthyroid. Later I would learn about light shining through the eyes and its regulation of serotonin and melatonin, but I was on the right track. It seemed that the thyroid grew more cells (hyperplasia) in the summer and lost cells (hypoplasia, not goitre) in the winter.

Later I began to study calcium metabolism. This was a terrific model, as sunlight regulated the amount of vitamin D in both plant and animal. Two opposite words stuck out: osteoblasts and osteoclasts. Osteoblasts laid down bone and osteoclasts removed it. So there was more osteoblast activity relative to osteoclast activity coming into summer, and the bones got thicker, and more osteoclast activity relative to osteoblast activity coming into winter, and the bones got thinner. No problem: thicker bones for more activity in summer, thinner bones for less in winter, perfect. This seasonal remodelling also speeds up fracture repair.

Muscles got bigger in the summer and adipose fat deposits got smaller, the reverse in the winter. Later it struck me that if the thyroid enlarged in the summer and shrank in the winter, then the same must apply to the other organs of the endocrine system: pancreas, adrenals etc. Following that reasoning, it meant that the cell-surface receptors that bind these endocrine hormones increased in density in summer, or the other way round, depending on whether they were associated with sympathetic or parasympathetic nervous activity. For example, one would expect the receptors associated with the catecholamine hormones, adrenalin, noradrenalin and dopamine, to increase in summer and decrease in winter. Insulin is a bit more complicated, as it is associated with all forms of storage: fat, glucose, protein and bone.

But what about the HUMAN animal? They are not entrained into any seasonal rhythm at all: artificial lighting and heating, wearing glasses, driving cars, eating foods out of season. Worrying like hell in the winter because she has lost her job and relaxing in the summer because she found one. What about the pathological extremes? On one side, slow anxiety, guilt, depression (no religion in the wild), self-loathing fluctuating between unreal optimism and unreal pessimism. Goes out and buys everyone a present on the manic high, believing doors will open to all sorts of things, including lots of sexual partners, then plunges into a depressive low when all the presents have been rejected and the huge bill comes in. Prospects worse than before. On the flip side, gets aggressive and threatening, then, realising the possible backlash, becomes conciliatory. While conciliatory, is deemed to be open to criticism for the prior aggression, suddenly snaps at this self-righteousness and returns to the insults. Schizophrenia. Later I would add blood-sugar instability and lots of endocrine deregulation, insomnia etc. But it was with the THYROID that this all began.

Schizophrenia. Following the dopamine-receptor over-abundance theory, this model tells me that it is not a genetic predisposition; the actual reason is more likely to be PERSISTENT ANXIETY, probably from early childhood. This fits in with R. D. Laing’s research finding that it is extreme conflict in the family, with totally opposing beliefs and philosophies imposed on the child, i.e. double binds, by each parent, along with violence and sexual abuse. Daddy says to his little daughter after sexually abusing her for the first (and subsequent) times, ‘This is a really, really good thing between us, BUT DON’T TELL ANYONE!!!’ This deranged double bind becomes the foundation stone of the child’s ‘development’.

Uncle Sam comes into the village, puts a gun to the villager’s head and says menacingly, ‘You even THINK about collaborating with Charlie (VC) and BOOM!!!’ A couple of hours later, after Uncle Sam has gone, Charlie comes into the village, puts a gun to the villager’s head and says equally menacingly, ‘You even think about collaborating with Uncle Sam and BOOM!!!’ A VERY, VERY BIG double bind.
Noddy the hyper hypo sheep.

It is great to see the interest in evolutionary approaches to medicine, and all the great discussion. But it is disheartening to see that so few have found their way to the many papers and books from the past decade that address these questions. For links, see my website or The Evolution & Medicine Review http://evmedreview.com

I agree with Harriet that Darwinian or evolutionary medicine is not a guide to treatment decisions, and have said so repeatedly in print, because I share her concern about naive enthusiasts causing clinical harm, and harming attempts to use evolution in medicine. In the clinic, the utility of understanding evolution is the same as for other basic medical sciences such as embryology: a deeper understanding of the body. In the research lab and in public health, it frames new questions and provides useful methods, such as tracing phylogenies and calculating expected gene frequencies.

I do sometimes wish that I had succeeded in convincing George Williams to change the title of our article to something more modest than “The Dawn of Darwinian Medicine,” but if I had, evolutionary applications in medicine might be far behind where they are now. Eventually all medicine will be grounded in evolutionary biology, but it will take a while. In the meantime, calling it evolutionary medicine helps to point out the gap in knowledge and the opportunity, in the same way as genetic medicine.

The comments about specific hypotheses are especially welcome, but some ideas have been disproven, others are now accepted, and many remain to be tested. The challenge of testing evolutionary hypotheses about disease is considerable. I wrote an article “Ten Questions for Evolutionary Studies of Disease” to try to help students avoid the most common mistakes, but this is a very difficult area of science for all of us.