Abstract: Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.

Huh? Temperatures going up by 12 degrees Celsius??? Well, this is a worst-case scenario — the sort of thing that’s only likely to kick in if we keep up ‘business as usual’ for a long, long time:

Recent studies have highlighted the possibility of large global warmings in the absence of strong mitigation measures, for example the possibility of over 7 °C of warming this century alone. Warming will not stop in 2100 if emissions continue. Each doubling of carbon dioxide is expected to produce 1.9–4.5 °C of warming at equilibrium, but this is poorly constrained on the high side and according to one new estimate has a 5% chance of exceeding 7.1 °C per doubling. Because combustion of all available fossil fuels could produce 2.75 doublings of CO2 by 2300, even a 4.5 °C sensitivity could eventually produce 12 °C of warming. Degassing of various natural stores of methane and/or CO2 in a warmer climate could increase warming further. Thus while central estimates of business-as-usual warming by 2100 are 3–4 °C, eventual warmings of 10 °C are quite feasible and even 20 °C is theoretically possible.
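The arithmetic in that last step is easy to check: warming scales with the number of CO2 doublings, and the number of doublings is just the base-2 logarithm of the concentration ratio. Here is a quick sketch using the figures quoted from the paper:

```python
import math

def doublings(c_final, c_initial):
    """Number of CO2 doublings implied by a concentration ratio."""
    return math.log2(c_final / c_initial)

def equilibrium_warming(sensitivity_per_doubling, n_doublings):
    """Warming (deg C) if each doubling adds a fixed amount of warming."""
    return sensitivity_per_doubling * n_doublings

# The paper's figure: burning all available fossil fuels ~ 2.75 doublings by 2300.
print(equilibrium_warming(4.5, 2.75))  # ~12.4 degC: the "12 degC" in the text
print(equilibrium_warming(1.9, 2.75))  # ~5.2 degC at the low-end sensitivity
# 2.75 doublings corresponds to a concentration ratio of:
print(2 ** 2.75)                       # ~6.7 times the starting CO2 level
```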

A key notion in Sherwood and Huber’s paper is the concept of wet-bulb temperature. Apparently this term has several meanings, but Sherwood and Huber use it to mean “the temperature as measured by covering a standard thermometer bulb with a wetted cloth and fully ventilating it”.

This can be lower than the ‘dry-bulb temperature’, thanks to evaporative cooling. And that’s important, because we sweat to stay cool.

Indeed, this is the big difference between Riverside California (my permanent home) and Singapore (where I’m living now). It’s dry there, and humid here, so my sweat doesn’t evaporate so nicely here — so the wet-bulb temperature tends to be higher. In Riverside air conditioning seems like a bit of an indulgence much of the time, though it’s quite common for shops to let it run blasting until the air is downright frigid. In Singapore I’m afraid I really like it, though when I’m in control, I keep it set at 28 °C — perhaps more for dehumidification than cooling?

Sherwood and Huber write:

A resting human body generates ∼100 W of metabolic heat that (in addition to any absorbed solar heating) must be carried away via a combination of heat conduction, evaporative cooling, and net infrared radiative cooling. Net conductive and evaporative cooling can occur only if an object is warmer than the environmental wet-bulb temperature TW, measured by covering a standard thermometer bulb with a wetted cloth and fully ventilating it. The second law of thermodynamics does not allow an object to lose heat to an environment whose TW exceeds the object’s temperature, no matter how wet or well-ventilated. Infrared radiation under conditions of interest here will usually produce a small additional heating.

[…]

Humans maintain a core body temperature near 37 °C that varies slightly among individuals but does not adapt to local climate. Human skin temperature is strongly regulated at 35 °C or below under normal conditions, because the skin must be cooler than body core in order for metabolic heat to be conducted to the skin. Sustained skin temperatures above 35 °C imply elevated core body temperatures (hyperthermia), which reach lethal values (42–43 °C) for skin temperatures of 37–38 °C even for acclimated and fit individuals. We would thus expect sufficiently long periods of TW > 35 °C to be intolerable.
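To get a feel for that 100 W figure, here is a back-of-the-envelope sketch of how much sweat must actually evaporate to carry it away. The latent-heat value is a standard approximation I'm supplying, not a number from the paper:

```python
# Rough sketch: how much sweat must evaporate to carry away 100 W
# of metabolic heat?  Both constants are approximate.
METABOLIC_HEAT_W = 100.0        # resting metabolic rate, from the paper
LATENT_HEAT_J_PER_KG = 2.43e6   # latent heat of vaporization of water near skin temperature

kg_per_second = METABOLIC_HEAT_W / LATENT_HEAT_J_PER_KG
liters_per_hour = kg_per_second * 3600  # 1 kg of water is about 1 liter

print(round(liters_per_hour, 2))  # ~0.15 L of evaporated sweat per hour
```

So a resting body needs roughly a sixth of a liter of sweat per hour to evaporate just to break even, and that only works if the air can accept the vapor.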

Now, temperatures of 35 °C (we say 95 degrees Fahrenheit) are entirely routine during the day in Riverside. Of course, it’s much cooler in my un-air-conditioned home because we leave open the windows when it gets cool at night, and the concrete slab under the floor stays cool, and the house has great insulation. Still, after a few years of getting acclimated, walking around in 35 °C weather seems like no big deal. We only think it’s seriously hot when it reaches 40 °C.

But these are not wet-bulb temperatures: the humidity is usually really low! So what’s the wet-bulb temperature when it’s 35 °C and the relative humidity is, say, 20%? I should look it up… but maybe you know where to look?
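One way to answer this without hunting down a psychrometric chart is Stull's empirical fit, which approximates the wet-bulb temperature from dry-bulb temperature and relative humidity at sea-level pressure, good to roughly ±1 °C under ordinary conditions. A sketch:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Stull's empirical fit for wet-bulb temperature (deg C) from
    dry-bulb temperature (deg C) and relative humidity (%), valid
    near sea-level pressure, accurate to roughly +/- 1 degC."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# A hot, dry Riverside afternoon: 35 degC at 20% relative humidity.
print(round(wet_bulb_stull(35.0, 20.0), 1))  # ~19.3 degC
```

So a dry 35 °C afternoon has a wet-bulb temperature of only about 19 °C, nowhere near the 35 °C danger zone.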

If you look on page 2 of Sherwood and Huber’s paper you’ll see three graphs. The top graph is the world today. You’ll see histograms of the average temperature (in black), the average annual maximum temperature (in blue), and the average annual maximum wet-bulb temperature (in red). The interesting thing is how the red curve is sharply peaked between 15 °C and 30 °C, dropping off steeply above 31 °C.

The highest instantaneous TW anywhere on Earth today is about 30 °C (with a tiny fraction of values reaching 31 °C). The most-common TW,max is 26–27 °C, only a few degrees lower. Thus, peak potential heat stress is surprisingly similar across many regions on Earth. Even though the hottest temperatures occur in subtropical deserts, relative humidity there is so low that TW,max is no higher than in the deep tropics. Likewise, humid mid-latitude regions such as the Eastern United States, China, southern Brazil, and Argentina experience TW,max during summer heat waves comparable to tropical ones, even though annual mean temperatures are significantly lower. The highest values of T in any given region also tend to coincide with low relative humidity.

But what if it gets a lot hotter?

Could humans survive TW > 35 °C? Periods of net heat storage can be endured, though only for a few hours, and with ample time needed for recovery. Unfortunately, observed extreme-TW events (TW > 26 °C) are long-lived: Adjacent nighttime minima of TW are typically within 2–3 °C of the daytime peak, and adjacent daily maxima are typically within 1 °C. Conditions would thus prove intolerable if the peak TW exceeded, by more than 1–2 °C, the highest value that could be sustained for at least a full day. Furthermore, heat dissipation would be very inefficient unless TW were at least 1–2 °C below skin temperature, so to sustain heat loss without dangerously elevated body temperature would require TW of 34 °C or lower. Taking both of these factors into account, we estimate that the survivability limit for peak six-hourly TW is probably close to 35 °C for humans, though this could be a degree or two off. Similar limits would apply to other mammals but at various thresholds depending on their core body temperature and mass.

I find the statement “Adjacent nighttime minima of TW are typically within 2–3 °C of the daytime peak” quite puzzling. Maybe it’s true in extremely humid climates, but in dry climates it tends to cool down significantly at night. Even here in Singapore there seems to be typically a 5 °C difference between day and night. But maybe it’s less during a heat wave.

The paper does not discuss behavioral adaptations, and that makes it a bit misleading. Even without fossil fuels people can do things like living underground during the day and using windcatchers to bring cool underground air into the house. Here’s a windcatcher that my friend Greg Egan photographed in Yazd during his trip to Iran:

But, of course, this sort of world would support far fewer people than live here now!

Another obvious doubt concerns the distant past, when it was a lot warmer than now. I’m talking about the Paleogene, which ended 23 million years ago. If you haven’t heard of the Paleogene — which is a term that came into use after I learned my geological time periods back in grade school — maybe you’ll be interested to hear that it’s the beginning of the Cenozoic, consisting of the Paleocene, Eocene, and Oligocene. Since then the Earth has been in a cooling phase:

How did mammals manage back then?

Mammals have survived past warm climates; does this contradict our conclusions? The last time temperatures approached values considered here is the Paleogene, when global-mean temperature was perhaps 10 °C and tropical temperature perhaps 5–6 °C warmer than modern, implying TW of up to 36 °C with a most-common TW,max of 32–33 °C. This would still leave room for the survival of mammals in most locations, especially if their core body temperatures were near the high end of those of today’s mammals (near 39 °C). Transient temperature spikes, such as during the Paleocene-Eocene Thermal Maximum (PETM), might imply intolerable conditions over much broader areas, but tropical terrestrial mammalian records are too sparse to directly test this. We thus find no inconsistency with our conclusions, but this should be revisited when more evidence is available.


This entry was posted on Friday, July 30th, 2010 at 1:20 am and is filed under climate. You can follow any responses to this entry through the RSS 2.0 feed.


45 Responses to How Hot Is Too Hot?

Of course if we’re all dead we won’t be burning a lot of carbon. We’re also talking about an increased surface area of much warmer ocean: hence presumably much more cloud. Is it really possible to get that hot? How do we get expert input on what is possible? I have this comment from an atmospheric scientist that might be relevant: “In physical terms, the ultimate control on any run-away climate is the Stefan-Boltzmann law giving emitted radiation (i.e. the rate of heat loss from the planet) a fourth power dependence on temperature. The positive second derivative of this relation ensures a high degree of stability. Biophysical processes can perturb the equilibrium by amounts that are significant in human terms but in absolute terms these perturbations can’t go very far.”
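For what it’s worth, the Stefan-Boltzmann point is easy to quantify. Here is a sketch using Earth’s effective emission temperature of about 255 K, a standard textbook value that I’m supplying, not one from the comment:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(t_kelvin):
    """Blackbody flux: F = sigma * T^4."""
    return SIGMA * t_kelvin ** 4

def restoring_feedback(t_kelvin):
    """dF/dT = 4 sigma T^3: the extra W/m^2 radiated to space per
    degree of warming -- the stabilizing term the comment describes."""
    return 4 * SIGMA * t_kelvin ** 3

# At Earth's effective emission temperature (~255 K):
print(round(restoring_feedback(255.0), 2))  # ~3.76 W/m^2 per K
```

Every degree of warming thus buys a few extra watts per square meter of outgoing radiation, which is why a true runaway requires something (like water vapor or CO2) to keep blocking that escape route.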

Good question. Physicists could tackle the question of how hot it could get in principle, while paleontologists are the ones to ask about how hot it has actually gotten. A very brief perusal of the paleoclimate literature suggests that in the last 200 million years the highest temperatures occurred during the Cretaceous Thermal Maximum. Temperatures may have reached an average of 36 °C in the tropics during a nasty phase 93.5 million years ago, called Oceanic Anoxic Event 2.

Abstract: Oceanic Anoxic Event 2 (OAE-2) occurring during the Cenomanian/Turonian transition, is evident from a global positive stable carbon isotopic excursion and presumably represents the most extreme carbon cycle perturbation of the last 100 Myr. However, the impact of this major perturbation on and interaction with global climate remains unclear. OAE-2 occurred in the mid-Cretaceous, a time in Earth history characterized by extreme global warmth culminating in the so-called Cretaceous thermal maximum. Thus, records of paleo-sea surface temperatures (SSTs) from the mid-Cretaceous oceans are particularly important for understanding greenhouse climate conditions. We will present new high-resolution SST-records based on an organic proxy, the TetraEther indeX of 86 carbon atoms (TEX86), and δ18O of excellently preserved, “glassy” planktic foraminifera, combined with stable organic carbon isotopes generated from marine black shales located offshore Suriname/French Guiana (ODP Site 1260) and Senegal (DSDP Site 367). At Site 1260 a good match between conservative SST estimates from TEX86 and δ18O is observed. Late Cenomanian SSTs in the equatorial Atlantic Ocean (~33°C) were substantially warmer than today (~27-29°C) and the onset of OAE-2 coincided with a rapid shift to an even warmer (~35-36°C) regime. Within the early stages of OAE-2 a marked (~4°C) cooling is observed. However, well before the termination of OAE-2, the warm regime was re-established and persisted into the Turonian. Our findings corroborate the view that the C/T-transition represents the onset of peak Cretaceous warmth, that mid-Cretaceous warmth can be attributed to high levels of atmospheric CO2 and that major OAEs were capable of triggering global cooling through the negative feedback effect of organic carbon burial-led CO2-sequestration. 
However, the factors that gave rise to the observed shift to a warmer climate regime at the onset of OAE-2 were sufficiently powerful that they were only briefly counterbalanced by high rates of carbon-burial attained during OAE-2. The latter becomes even more evident when our detailed dual-proxy SST-records from ODP Site 1260 are compared to the long-term evolution of mid-Cretaceous tropical SSTs at Demerara Rise. Here, we monitored the Albian to Santonian SST-history of the western equatorial Atlantic by generating a solely TEX86 based combined record that spans the entire Cretaceous black shale sequence at ODP Sites 1258 and 1259 in meter-scale resolution, which was recently completed and will be presented here for the first time. Once established the extreme warm climate regime (characterized by averaged tropical SSTs exceeding 35°C) lasted well into the Coniacian at Demerara Rise. However, during the Turonian, several pronounced but relatively short-lived cooler intervals punctuate this otherwise remarkably stable interval of extreme tropical warmth. This observation shows that rapid tropical SST-changes occurred also during the Cretaceous thermal maximum, and implies that even the mid-Cretaceous “super-greenhouse” climate may have been less stable than previously thought.

Of course the Sun has been gradually putting out more energy, and in 1.1 billion years we can expect it to be 10% brighter — enough to drive most of the water from the Earth’s atmosphere out into space!

So, we are bound to cook eventually if we don’t either 1) leave or 2) develop super-duper technology to deal with the problem while staying put. I’m an optimist in the long term: if we deal wisely with the current crunch, we might have god-like powers in a billion years. But I’m a pessimist in the short term.

Despite my pessimism, I should emphasize that I’m not mainly interested in the worst-case scenario described by Sherwood and Huber. It’s much more practical to think about the effects of 1°C — 6°C warming. But the physicist in me relishes extremes, so I couldn’t resist blogging about the possible effects of a 12°C rise.

The wet-bulb temperature (or dew point) changes very little from day to night. I am looking at a plot of dew point vs. time for Riverside for the month of July, and while the temperature changes a lot from night to day, the dew point changes very little.

I can understand why the dew point doesn’t go down much at night in Riverside, but not why the wet-bulb temperature doesn’t change much. Sherwood and Huber seem to be using ‘wet-bulb temperature’ to mean something different from the ‘dew point’:

Puzzle: The zeroth law of thermodynamics says that two systems in thermal equilibrium must have the same temperature. So how could making something wet make it cooler? How could the wet-bulb temperature differ from the dry-bulb temperature?

I think I know the answer to this now, but other people might enjoy taking a crack at it.

In case you want any feedback, even from a non-physics guy, my thoughts about a resolution stem from questioning the zeroth law’s antecedent (keeping it a bit cryptic). But I suspect that’s irrelevant to this particular puzzle.

Since no-one else has responded to the puzzle, I’ll point out what I was thinking and why it’s irrelevant: if any thermometer is continuously in “exact” thermal equilibrium with its environment then, barring instantaneous actions, that must mean the environment is a boring one which is at a constant temperature. Since the general environment does change temperature over the course of the day, there must be times at which any thermometer is not in exact thermal equilibrium with the environment.

But presumably the “distance from equilibrium” as the environment undergoes its daily cycle is minuscule and hence not relevant to the puzzle.

… presumably the “distance from equilibrium” as the environment undergoes its daily cycle is minuscule and hence not relevant to the puzzle.

True, the daily change in temperature and humidity is not relevant to the puzzle of how a wet-bulb thermometer manages to stay cooler than a dry-bulb one without violating the zeroth law. Let’s assume the temperature and humidity in the room are constant as a function of time!

Since nobody has publicly tackled the puzzle, I’ll give away the answer, or at least part of it.

Puzzle: The zeroth law of thermodynamics says that two systems in thermal equilibrium must have the same temperature. So how could making something wet make it cooler? How could the wet-bulb temperature differ from the dry-bulb temperature?

Answer: a wet object is not in thermal equilibrium with the drier air surrounding it. That’s why it tends to dry out. If we keep constantly wetting it to keep it from drying out, it may be in a ‘stationary state’, but that’s different from thermal equilibrium.

That’s what I understand. What I don’t understand is how people compute the wet-bulb temperature as a function of the dry-bulb temperature and the relative humidity, perhaps with the help of a psychrometric chart. There’s some physics here that I’ve never studied.
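The physics, as far as I can tell, is the psychrometer equation: the wet bulb cools until the heat conducted to it from the air balances the heat removed by evaporation, which gives e = e_s(Tw) − A·P·(T − Tw), where e is the actual vapor pressure, e_s the saturation vapor pressure, P the air pressure, and A the psychrometer constant. Here is a sketch that solves this numerically; the Magnus formula and the value of A are standard approximations I’m supplying, not anything from the paper:

```python
import math

def sat_vapor_pressure(t_c):
    """Saturation vapor pressure over water in hPa (Magnus formula)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def wet_bulb(t_c, rh_pct, pressure_hpa=1013.25):
    """Solve the psychrometer equation e = e_s(Tw) - A*P*(T - Tw)
    by bisection, with A ~ 6.62e-4 per K for a well-ventilated
    wet bulb.  Returns Tw in deg C."""
    e = sat_vapor_pressure(t_c) * rh_pct / 100.0  # actual vapor pressure
    A = 6.62e-4
    lo, hi = -50.0, t_c  # Tw lies between a very cold bound and T
    for _ in range(100):
        tw = 0.5 * (lo + hi)
        # Positive residual means our guess for Tw is too high.
        residual = sat_vapor_pressure(tw) - A * pressure_hpa * (t_c - tw) - e
        if residual > 0:
            hi = tw
        else:
            lo = tw
    return tw

print(round(wet_bulb(35.0, 20.0), 1))   # ~19.0 degC: a dry Riverside day
print(round(wet_bulb(35.0, 100.0), 1))  # ~35.0 degC: in saturated air the
                                        # wet bulb can't cool at all
```

This is exactly what a psychrometric chart encodes graphically: lines of constant wet-bulb temperature in the temperature-humidity plane.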

Actually, I think it misses one of the most significant points. Measurements of the non-immediate past inevitably involve really nasty use of statistics applied to proxy variables (you infer temperature in the past from measurable artifacts “dependent on temperature in a way we think we understand”, likewise with greenhouse gas levels, and you do all of this on an incredibly sparse set of points on the earth and try to extrapolate to the globe). Likewise, my biggest doubts about current measurements come not from big conceptual mistakes, but from the need to use statistical techniques to do things like “correct” readings from the past when temperature-measuring sensors were calibrated differently. As such, there is (in my non-expert opinion) a non-negligible potential for interpretations to be wrong. That is something that “there’s no AGW” people are right about, and I think that point should be explicitly acknowledged. The disagreement comes at the point he discusses later, which is that the sensible course is to go with the combination of best current understanding of the evidence weighted by outcome severity, even in the presence of this uncertainty.

Abstract: Oceanic anoxic events (OAEs) were episodes of widespread marine anoxia during which large amounts of organic carbon were buried on the ocean floor under oxygen-deficient bottom waters. OAE2, occurring at the Cenomanian/Turonian boundary (about 93.5 Myr ago), is the most widespread and best defined OAE of the mid-Cretaceous. Although the enhanced burial of organic matter can be explained either through increased primary productivity or enhanced preservation scenarios, the actual trigger mechanism, corresponding closely to the onset of these episodes of increased carbon sequestration, has not been clearly identified. It has been postulated that large-scale magmatic activity initially triggered OAE2, but a direct proxy of magmatism preserved in the sedimentary record coinciding closely with the onset of OAE2 has not yet been found. Here we report seawater osmium isotope ratios in organic-rich sediments from two distant sites. We find that at both study sites the marine osmium isotope record changes abruptly just at or before the onset of OAE2. Using a simple two-component mixing equation, we calculate that over 97 per cent of the total osmium content in contemporaneous seawater at both sites is magmatic in origin, a ~30–50-fold increase relative to pre-OAE conditions. Furthermore, the magmatic osmium isotope signal appears slightly before the OAE2—as indicated by carbon isotope ratios—suggesting a time-lag of up to ~23 kyr between magmatism and the onset of significant organic carbon burial, which may reflect the reaction time of the global ocean system. Our marine osmium isotope data are indicative of a widespread magmatic pulse at the onset of OAE2, which may have triggered the subsequent deposition of large amounts of organic matter.

So, what’s the best theory of how magma led to anoxia? I saw someone argue that increased nutrients led to algal blooms and then anoxia. 23,000 years sounds like a long time for that to happen, but what do I know?

I don’t recall the details for this particular episode, but generally with these things the magma raises CO2 levels directly and/or by burning carbon deposits, the CO2 warms the oceans (which can juice the process by releasing clathrates), the warmed oceans plus the CO2 lead to the productivity spike, and that in turn leads to the anoxia.

Hmm, according to the slightly confused Wikipedia article nutrients are also in the mix as a possible factor, although there’s no source listed. The description I gave is good for the PETM, but that wasn’t an anoxic event as such. In any case the magma=>CO2 in the initial phase seems correct.

To me the point of no return is when methane starts to be emitted from the continental shelves, perhaps even earlier than that. I don’t have a good sense of when the tipping point will arrive…

The wet-bulb temperature is interesting, but trying to do anything after it has become a survival factor is certainly too late. Just enough time to go ‘kaput’, I guess.

Sherwood and Huber say there’s no place where the wet-bulb temperature consistently exceeds 31 °C. If you look here, the upper right map shows maximum wet-bulb temperatures around the world.

I wildly guess that the reason people can survive wet-bulb temperatures up to 37 °C has a lot to do with mammalian evolution. Mammals need a core temperature that exceeds the wet-bulb temperature. Maybe we could have evolved to burn hotter than we do, but there was no need to, so we didn’t.

Interesting question. If I recall some high-school chemistry correctly, several proteins coagulate at around 43 °C, so mammals would need a different metabolism to survive higher temperatures.

Sherwood and Huber say people can only survive for long when the wet-bulb temperature is below our core body temperature. If the same is true for other warm-blooded animals, maybe these other ones can take higher temperatures. There certainly were mammals that survived quite high temperatures during the Paleocene-Eocene Thermal Maximum, about 55 million years ago, 10 million years after the big die-off of dinosaurs at the end of the Cretaceous. I’m not finding any estimate of the maximum temperatures during this event…

Unfortunately I’m outside academia at the moment so I can’t read the actual paper, but the abstract says

Data from farmer-managed fields have not been used previously to disentangle the impacts of daily minimum and maximum temperatures and solar radiation on rice yields in tropical/subtropical Asia. We used a multiple regression model to analyze data from 227 intensively managed irrigated rice farms in six important rice-producing countries. The farm-level detail, observed over multiple growing seasons, enabled us to construct farm-specific weather variables, control for unobserved factors that either were unique to each farm but did not vary over time or were common to all farms at a given site but varied by season and year, and obtain more precise estimates by including farm- and site-specific economic variables. Temperature and radiation had statistically significant impacts during both the vegetative and ripening phases of the rice plant. Higher minimum temperature reduced yield, whereas higher maximum temperature raised it; radiation impact varied by growth phase. Combined, these effects imply that yield at most sites would have grown more rapidly during the high-yielding season but less rapidly during the low-yielding season if observed temperature and radiation trends at the end of the 20th century had not occurred, with temperature trends being more influential. Looking ahead, they imply a net negative impact on yield from moderate warming in coming decades. Beyond that, the impact would likely become more negative, because prior research indicates that the impact of maximum temperature becomes negative at higher levels. Diurnal temperature variation must be considered when investigating the impacts of climate change on irrigated rice in Asia.

I just want to point out one possible way to take relief from the heat. When it’s hot, one usually takes a shower or jumps into a pool. This makes sense because water conducts heat much better than air, so even a much smaller temperature difference gives a sufficient cooling effect. And this seems to fit nicely with the hypothesis that our evolution included a substantially long period of a nearly amphibian “lifestyle”. The following arguments are usually provided:
- Of all primates, only humans can swim.
- Human babies can be born in water.
- Loss of body hair was advantageous to a species that bathed a lot (compare the hairy mammoth with the nearly hairless elephant).
- The directions of human body hair are perfectly aligned with the flow of water when swimming, in contrast to other primates.
- Human skin contains much more fat, and unlike other primates we have that subcutaneous fat layer that troubles so many of us, which is more similar to that of other aquatic mammals.
- Our nose shape, with nostrils pointing down, is well suited for diving, in contrast to the noses of other primates.
- The human organism (especially in children) needs nutrients that are predominantly found in fish and other seafood.
This theory also nicely explains the “missing link” between Homo sapiens and earlier Homo species: all the evidence was simply washed away by rising sea levels.
This paper lets us presume that our ancestors chose an amphibian “lifestyle” not only in search of protein-rich food, but also as a way to escape unbearable heat. With temperatures rising, it seems we’ll be forced to recall this option of bathing more, which our Homo ancestors discovered many thousands of years ago.

Sounds like you’re talking about Elaine Morgan’s aquatic ape hypothesis. It’s extremely controversial (check the link) but not completely nuts, as far as I can tell. I haven’t seen anyone suggest before that people would have adopted an aquatic lifestyle because of heat. Interesting…

One counter argument according to Wikipedia is that the “aquatic” properties of humans developed at very different times, which reminds me of a question that has been troubling me since I learned about evolution in school: Does anyone have any idea how fast populations can adapt through evolution to changing environments? Are there any realistic models that are more reliable than, for example, certain genetic algorithms?

For example, there are species living today that have not changed at all during the last, say, 100 million years, like certain sharks (who once had natural foes, dinosaurs, but don’t have any since those died out, which is a nice irony, IMHO :-).

On the rate of evolution. You need to distinguish between changing gene frequencies (which can happen quickly) and the emergence of new mutations (which is slow, especially in small populations). This paper addresses the latter in detail.

You need to distinguish between changing gene frequencies (which can happen quickly) and the emergence of new mutations (which is slow, especially in small populations).

Thanks for the link! I was implicitly talking about the emergence of new mutations, only later did it occur to me that others might think I was talking about a change in the frequency instead, which is a result of the process of selection from an existing gene pool.

Abstract: Natural populations of guppies were subjected to an episode of directional selection that mimicked natural processes. The resulting rate of evolution of age and size at maturity was similar to rates typically obtained for traits subjected to artificial selection in laboratory settings and up to seven orders of magnitude greater than rates inferred from the paleontological record. Male traits evolved more rapidly than female traits largely because males had more genetic variation upon which natural selection could act. These results are considered in light of the ongoing debate about the importance of natural selection versus other processes in the paleontological record of evolution.

(My emphasis.) The experiment measured evolution rates of up to 60,000 darwins, while the typical rate from the fossil record is from 0.7 to 3.7 darwins. I don’t understand ‘darwins’ very well, but here’s what Douglas Theobald has to say:

In 1983, Phillip Gingerich published a famous study analyzing 512 different observed rates of evolution (Gingerich 1983). The study centered on rates observed from three classes of data: (1) lab experiments, (2) historical colonization events, and (3) the fossil record. A useful measure of evolutionary rate is the darwin, which is defined as a change in an organism’s character by a factor of e per million years (where e is the base of natural log). The average rate observed in the fossil record was 0.6 darwins; the fastest rate was 32 darwins. The latter is the most important number for comparison; rates of evolution observed in modern populations should be equal to or greater than this rate.

The average rate of evolution observed in historical colonization events in the wild was 370 darwins—over 10 times the required minimum rate. In fact, the fastest rate found in colonization events was 80,000 darwins, or 2500 times the required rate. Observed rates of evolution in lab experiments are even more impressive, averaging 60,000 darwins and as high as 200,000 darwins (or over 6000 times the required rate).

A more recent paper evaluating the evolutionary rate in guppies in the wild found rates ranging from 4000 to 45,000 darwins (Reznick 1997). Note that a sustained rate of “only” 400 darwins is sufficient to transform a mouse into an elephant in a mere 10,000 years (Gingerich 1983).

One of the most extreme examples of rapid evolution was when the hominid cerebellum doubled in size within ~100,000 years during the Pleistocene (Rightmire 1985). This “unique and staggering” acceleration in evolutionary rate was only 7 darwins (Williams 1992, p. 132). This rate converts to a minuscule 0.02% increase per generation, at most. For comparison, the fastest rate observed in the fossil record in the Gingerich study was 37 darwins over one thousand years, and this corresponds to, at most, a 0.06% change per generation.
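The darwin is concrete enough to check with a few lines of code. Using rough shoulder heights for a mouse and an elephant (my illustrative figures, not Gingerich’s), the “mouse to elephant in 10,000 years” claim does come out right around 400 darwins:

```python
import math

def rate_in_darwins(x_initial, x_final, elapsed_myr):
    """Evolutionary rate in darwins: change by a factor of e per
    million years, i.e. ln(x_final / x_initial) / (time in Myr)."""
    return math.log(x_final / x_initial) / elapsed_myr

# Mouse (~0.05 m) to elephant (~3 m) shoulder height in 10,000 years
# (0.01 Myr).  The body-size figures are rough illustrative guesses.
print(round(rate_in_darwins(0.05, 3.0, 0.01)))  # ~409 darwins
```

Note how insensitive the number is to the exact sizes: the logarithm means getting the mouse or elephant wrong by a factor of two only shifts the rate by about 70 darwins.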

Of course, the problem with global warming is that it’s radically changing the environment on timescales of hundreds of years, not tens of thousands. So, we can expect a mass extinction event, and extremely rapid evolution of the populations that survive.

As for sharks:

For example, there are species living today that have not changed at all during the last, say, 100 million years, like certain sharks (who once had natural foes, dinosaurs, but don’t have any since those died out, which is a nice irony, IMHO :-).

I don’t know which species of sharks you’re referring to, but many sharks are under extreme pressure from human hunting, particularly the disgusting practice of killing a shark just to cut off and eat its fin and throwing the rest away. Humans may not be a ‘natural’ foe, but they’re a devastatingly effective one.

Of course we all know that primitive people live in harmony with nature, so they couldn’t have killed off the megafauna. This gives rise to a seemingly endless series of press releases such as the one reported at http://www.bbc.co.uk/news/science-environment-11000635. These invariably report that the press releaser has vanquished their opponents. How can a self-respecting news organization fail to get a quote from the supposedly vanquished? How can anyone believe that the megafauna of North Asia, Australia, the Americas and New Zealand died out soon after humans arrived for some other, climate-related reason, when the megafauna had survived numerous glacials and interglacials before? How can we set up a system where we can correctly ascertain what the relevant experts believe, and whether they actually agree?

Since journalism is being commented on, it is worth noting that the word “vanquished” does not appear in the original piece; it was introduced in the comment. The article itself attributes the term “settled”, in indirect quotes, to the head of the study.

Clearly it is desirable for all elements in a chain of communication to have as high a fidelity as possible.

The demise of the megafauna is relevant to trying to save the world. It invalidates the “nothing can go wrong” optimists. It also makes us think carefully about what the saved world should be like. In particular: it shouldn’t be covered with fire-friendly forests that destroy all life when they inevitably burn. Megafauna clear out underbrush and knock down trees, as they still do in Africa (where the megafauna co-evolved with humans).

Also slightly relevant is the fact that killer whales also combine high intelligence with thoughtless destruction of other life. Humpback whales migrate up the East coast of Australia. Killer whales used to cooperate with humans in hunting them: with aboriginals then later with whites. The head of the killer whales would wake the humans for the hunt. Everyone agreed that the killer whales were the instigators and organizers. Afterwards the killer whales took the tongues, leaving the rest for the human helpers.

Of course we all know that primitive people live in harmony with nature, so they couldn’t have killed off the megafauna.

Heh. Yeah, right.

To me, the evidence for the Pleistocene overkill hypothesis seems so overwhelming that I’ve never doubted it. Indeed, there have been about a dozen ‘Ice Ages’ (more precisely glacial cycles) in the last million years:

(In this graph, “up” is “cold”.)

But in the New World, giant bison, ground sloths, lions, cheetahs, sabertooth tigers, camels, giant beavers, dire wolves, short-faced bears, 4 species of mammoths, and various other large mammals all died out within 1000 years of the arrival of humans! Butchered mammoth bones have been found in Wisconsin archaeological sites dating back to 12,700 BC and 12,100 BC. And bones from butchered mammoths have been found along with spear points in Wyoming and South Dakota. What more do we need? A signed confession?

Interesting, so I’ll go hunting for explanations of what a “Darwin” really is :-)

a sustained rate of “only” 400 darwins is sufficient to transform a mouse into an elephant in a mere 10,000 years

As an engineer it is hard to believe that such a transformation could be done by random mutations of the genetic code, so alternative explanations of the mechanisms at work here are of interest, too, of course. I’ve sporadically read about “evidence” that the environment directly influences the genetic code, an idea that has been around in scifi for some time now; the alien from the movie “Alien”, for example, inherits memory from its parents in this way (mentioned in the movie Alien 4, if I remember correctly).

I don’t know which species of sharks you’re referring to, but many sharks are under extreme pressure from human hunting…

Several sources mingle in my memory, so I don’t know which I was referring to, either, but I was leaving out humans deliberately, because that’s a different topic: Humans did not evolve into natural foes, they invented the necessary tools to become one, which is a whole new phenomenon, I’d say, with regard to the speed that a big number of deadly foes appear out of nowhere.

To me, the evidence for the Pleistocene overkill hypothesis seems so overwhelming that I’ve never doubted it.

The mass extinctions on Pacific islands after the arrival of humans, as mentioned by Jared Diamond in his book “Collapse”, seem to be less controversial. Of course, these happened in a small, controlled environment, and faster.

As an engineer it is hard to believe that such a transformation could be done by random mutations of the genetic code, so alternative explanations of the mechanisms at work here are of interest, too, of course.

I don’t find it so terribly hard to believe that plain old Darwinian evolution would suffice to explain evolutionary bursts of 400 darwins — that is, observable quantities changing by a factor of e in 10^6/400 = 2,500 years. What Graham said is very important: in bursts, a lot of natural selection occurs not by mutation but by selecting among pre-existing variations in the population.

In layman’s terms: you kill all the short people, and people get tall mighty quick.
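That intuition is easy to demonstrate with a toy simulation of selection acting purely on standing variation, with no new mutations at all. All the numbers here (loci, population size, generations) are arbitrary choices for illustration:

```python
import random

random.seed(1)

# Toy model: "height" is the sum of 100 pre-existing allelic effects.
# No mutation is introduced, yet truncation selection ("only the tall
# reproduce") shifts the population mean rapidly.
N_LOCI, POP = 100, 1000

def height(ind):
    return sum(ind)

pop = [[random.choice((0.0, 1.0)) for _ in range(N_LOCI)] for _ in range(POP)]
start = sum(height(i) for i in pop) / POP

for _ in range(20):                       # 20 generations
    pop.sort(key=height, reverse=True)
    parents = pop[: POP // 2]             # the short half leaves no offspring
    pop = []
    for _ in range(POP):
        a, b = random.sample(parents, 2)
        # free recombination of the two parents' existing alleles
        pop.append([random.choice(pair) for pair in zip(a, b)])

end = sum(height(i) for i in pop) / POP
print(round(start, 1), round(end, 1))  # the mean climbs toward the maximum of 100
```

The mean starts near 50 and rises by several standard deviations in a handful of generations, all from reshuffling variation that was already present in the population.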

However, you will be delighted to know that something like Lamarckian evolution does occur! For example:

It turns out that advanced life forms like us — by which I mean eukaryotes, to be precise — are able to pass on traits not just using their DNA and RNA, but also using a trick called histone methylation. In eukaryotic cells, DNA is wound around proteins called histones. Adding one, two or three methyl groups to these proteins controls whether and how a gene will be expressed in a given cell. This is one way cells in our body get to be very different even though they have the same DNA! It’s quite complicated and interesting — and in a surprising twist, a mother can apparently do histone methylation to genes in her child’s embryo! So, traits picked up during her life, encoded not in DNA or RNA but in histone methylation, can be passed on to her offspring.

In short, besides genetics we must also study epigenetics — the science of reversible but inheritable changes in gene expression that can occur without any changes in our DNA!

Evolution is like a game that life has been playing for billions of years. The strategies in play are surely far deeper than we’ve been able to fathom so far. We’re like kids watching grand masters play chess. We should continue to expect surprises….

As an engineer it is hard to believe that such a transformation could be done by random mutations of the genetic code, so alternative explanations of the mechanisms at work here are of interest, too, of course.

One thing that’s worth bearing in mind is the extent to which, even now, a mouse and an elephant are already different “parameters” applied to a lot of the same biochemistry/gene interaction networks. (That’s not a terribly precise statement, but the basic idea applies.) Thus the “actual new content that has to be created by evolution” is smaller than you might instinctively think.

From there, you can try to get some idea of what might be feasible by looking at what has been achieved by “genetic programming” (the simpler genetic algorithms are awkward to argue from because they have a custom designed genome). As such, there’s very plausible evidence of the combined power of huge populations, genetic crossover & mutation and selection forces to move towards local optima. (I do some work attempting to create computer code for imprecise tasks using genetic programming, and the biggest problems are not the precise mechanisms but simply not being able to run as many generations on as big a population as I’d like to.)

…the simpler genetic algorithms are awkward to argue from because they have a custom designed genome.

What is a “custom designed genome” in this context?

My knowledge of genetic algorithms pretty much coincides with what Wikipedia has to say about them:

1. There is a feasibility set, the possible genomes, consisting of elements of a (custom) data type, say, a byte array of length 300.

2. You start, for example, with some millions of individuals with randomly chosen genomes from the feasibility set,

etc.

The set of possible genomes is dictated by the problem you try to solve, the subset of genomes of individuals of the first generation is randomly chosen (maybe with a bias), and the genomes of each following generation are determined by mutation, crossover and selection, but the set of possible genomes is immutable.
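To make that concrete, here is a minimal GA sketch in exactly this sense: a fixed-size genome (a bit string, designed knowing the problem) evolved by selection, crossover and mutation. The OneMax objective (maximize the number of 1 bits) is just a standard toy problem, and all parameters are arbitrary:

```python
import random

random.seed(0)

# Fixed-size genome: the set of possible genomes (bit strings of length 60)
# never changes; only the population's sample from that set evolves.
GENOME_LEN, POP, GENS = 60, 200, 80

def fitness(g):
    return sum(g)                           # OneMax: count the 1 bits

def mutate(g, rate=1 / GENOME_LEN):
    return [b ^ 1 if random.random() < rate else b for b in g]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # one-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]                 # truncation selection
    pop = [mutate(crossover(*random.sample(elite, 2))) for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near GENOME_LEN after 80 generations
```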

My comment was about GA/GP in the context of “plausibility evidence” for the power of evolution. It basically comes down to your comment

The set of possible genomes is dictated by the problem you try to solve

The terminology is a bit slippery, and I meant custom genotype. The key distinction is that in GA the “genotype representation” (I think) is typically a data structure and algorithm that, when filled with parameters (an actual “genome”), is converted into an example of the thing over which you’re trying to optimize. As a typical example, simple, robust digital circuits for given functions have been constructed using a genotype that specifies a list of (logic gate type, interconnections) pairs. As an analogue with the evolution of animals, there’s the criticism that the genotype representation was custom designed to be appropriate for the kind of problem.

Genetic programming is concerned with approximating given “program black boxes”, but it starts with some set of general program operators (e.g., for LISP-style programs they’re things like add, sub, etc., nodes in an expression tree, whilst for machine-code GP they’re actual instructions in a given machine’s instruction set) and is then allowed to generate non-fixed-size programs in the course of “evolution” (unlike GA genotypes, which are fixed size). Relevant to the evolution of animals, the basic instructions are generally picked once when constructing the GP library without regard to what learning problems they’ll be used for (much closer to the role the DNA-base coding of amino acids has in natural evolution), unlike the GA genotype, which is picked with knowledge of the kind of problem about to be optimised.
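A minimal sketch of that contrast: variable-size expression trees built from a generic operator set (add, mul) that knows nothing about the target. The “black box” f(x) = x² + x and all size parameters are arbitrary choices for illustration:

```python
import random

random.seed(2)

OPS = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a variable-size expression tree over the generic operator set."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(('x', 1.0))          # terminals
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    # the "program black box" to approximate: f(x) = x^2 + x
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def paths(tree, path=()):
    """All node positions, so mutation can splice a new subtree anywhere."""
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], 1):
            yield from paths(child, path + (i,))

def splice(tree, path, sub):
    if not path:
        return sub
    parts = list(tree)
    parts[path[0]] = splice(parts[path[0]], path[1:], sub)
    return tuple(parts)

def mutate(tree):
    # subtree mutation: programs can grow or shrink, unlike a fixed genome
    return splice(tree, random.choice(list(paths(tree))), random_tree(2))

pop = sorted((random_tree() for _ in range(200)), key=error)
initial_best = error(pop[0])
for _ in range(40):
    pop = sorted(pop[:60] + [mutate(random.choice(pop[:60]))
                             for _ in range(140)], key=error)
final_best = error(pop[0])
print(initial_best, final_best)  # with elitism the best error can only fall
```

Note that nothing in `random_tree` or `OPS` was tuned to the target function; the representation is generic, which is the sense in which GP is closer to the role DNA-base coding plays in natural evolution.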

Genetic programming … is then allowed to generate non-fixed-size programs in the course of “evolution” (unlike GA genotypes which are fixed size).

Ah, that’s the point I did not get, so it is a little bit misleading to state, as Wikipedia does, that genetic programming is a special case of a genetic algorithm when the latter is understood to have a genome of fixed size.

It would have been straightforward to find a traditional publisher for such a book. However, we want our book to be as accessible as possible to everyone interested in learning about GP. Therefore, we have chosen to make it freely available on-line, while also allowing printed copies to be ordered inexpensively…

[…] • A global average temperature increase of 7° C, which is toward the extreme upper part of the range of current projections, would make large portions of the world uninhabitable to humans (Sherwood et al.). For more, see my article How Hot is Too Hot? […]

Another issue was brought up in the questions. In a paper in Science, Sherwood and Huber argued that any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals. However, the Paleocene-Eocene Thermal Maximum seems to have been even hotter!
