Before getting down to the article, a general reflection on statistics and methodology:

In many areas of science, you’ve got a lot of data and you want to sort out cause and effect. This happens in evolutionary biology, for example, when you want to determine what selective pressures have caused brains to evolve to different sizes. And it happens in medicine, when you want to find out what lifestyle choices generate what health problems. It also happens in social science and public policy, when you want to find out what programs generate what social outcomes. A common method in these cases is to use multivariate regression, looking for the strongest correlates of your dependent variable. This has its limits however. You often find that a lot of your variables are correlated with one another, and it’s hard to figure out what is cause and what’s effect.

So there’s been a lot of interest lately in a different approach, where you start out at the beginning with an explicit model of cause-and-effect pathways and use your data to estimate the strength of causal connections. An excellent popular introduction to this rapidly developing field comes from Judea Pearl, in The Book of Why: The New Science of Cause and Effect. Pearl makes the case here that statistics needs to move beyond pattern recognition, to testing causal models and counterfactual reasoning. Pearl sees counterfactual reasoning in particular as a human specialty, One Weird Trick that distinguishes humans from other creatures, and he is skeptical about current work in Artificial Intelligence, impressive as it is, that is mainly about pattern recognition.

“Automobile engineering can provide an analogy for studying this type of system. It would be difficult to understand racing-car design through regression analysis of how engine size varies depending on changes in other features, such as the mass and shape of the car. Instead, a model is needed that uses physical laws to predict optimal combinations of the variables under different criteria. Understanding brain evolution poses a similar challenge in that an organism’s features co-evolve under biological constraints.”

So turning to the article itself, what the authors do is to test an explicit model in which a developing organism has to allocate energy to growing a brain, growing a body, and reproducing. They ask what sorts of evolutionary challenges would lead to the particular combination of brain size, body size, and reproductive life history that we see in Homo sapiens. The challenges might be ecological (e.g. securing more food). They might be social (outwitting competitors). They might be solitary or cooperative (working with others to secure more food, or banding with others to defeat rival bands). Their conclusion: the best fit to their model comes when they assume that the evolution of big brains is 60% a result of individual ecological adaptation, 30% a result of cooperative ecological adaptation, and 10% a result of group-versus-group social adaptation. More specifically, what mostly drives the evolution of brain size in their model is that marginal returns to investing in ecological skills don’t decline as quickly for humans as for our close relatives. Spending extra years learning stuff continues to have a payoff for us, maybe because culture and language mean that there are a lot more useful tricks floating around to learn.
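A toy version of that last point (my sketch, not González-Forero and Gardner's actual model): suppose skill after t years of learning grows as t^b, and lifetime payoff is skill times the years left to use it. Then the optimal learning period grows with b, i.e. with how slowly marginal returns decline.

```python
# Toy sketch, not the authors' model: payoff = skill(t) * (T - t), with
# skill(t) = t**b. A larger b means returns to learning that decline more
# slowly -- the property the paper attributes to humans.
def best_learning_time(b, lifespan=60.0, steps=60_000):
    times = [lifespan * i / steps for i in range(1, steps)]
    return max(times, key=lambda t: (t ** b) * (lifespan - t))

slow_decline = best_learning_time(b=0.8)   # returns keep paying off
fast_decline = best_learning_time(b=0.3)   # returns flatten out quickly

print(f"optimal years of learning: {slow_decline:.1f} vs {fast_decline:.1f}")
```

The analytic optimum here is t* = bT/(1+b), which increases with b: the more slowly marginal returns decline, the longer it pays to keep learning before cashing in.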

These results have to be considered pretty tentative at this point. Note however that they count strongly against the view that human brain evolution is mostly about being Machiavellian and outsmarting the other guys, although they do allow a modest role for inter-group competition. And they count against the view, advocated by Geoffrey Miller, that the human mind evolved as a sexual display, like the peacock’s tail. So it may be true that “sexual love … lays claim to half the powers and thoughts of the youngest portion of mankind” (Schopenhauer). But (at least according to González-Forero and Gardner), whatever the claims of love on our hearts, we owe our big brains to our work.

Arms races have been a big engine of evolutionary progress, both in biological evolution and in the evolution of human societies. Another big driver has been improvements in the fidelity of inheritance. We see this in the evolution of genetic systems, including the evolution of life itself, and of the eukaryotic chromosome. And we’ll see it in human social evolution, including the evolution of language, of writing, of the alphabet, and printing.

Both arms races and improved information transmission may have been factors in the evolution of braininess.

The figure above is from the classic work of Harry Jerison, one of the pioneers in studying the evolution of brain size. It’s several steps away from the raw data, but what it shows is how mammalian Encephalization Quotients (EQs), a measure of brain size relative to body size, evolved over the Cenozoic. The figure might be read as the record of a brainy arms race between prey and predators, leading to increased variance in the EQ bell curve for both.

Primates of course are particularly brainy mammals. One popular explanation for this is a series of arms races within species, with bright monkeys and apes outwitting dimmer ones. This has been called the Machiavellian Intelligence hypothesis (or, in the case of macaques, macachiavellian intelligence).

This hypothesis may not hold up too well, however. One complication is that, contrary to what a lot of evolutionary psychology might suggest, social intelligence in primates is not separate from other sorts of intelligence. The same primate species that are good at solving social problems (e.g. tricking other group members) are also clever about things like tool use and other complex foraging skills. Variation in intelligence across primate species mostly boils down to a single general factor, rather than a bunch of domain-specific aptitudes.

An alternative to the Machiavellian Intelligence hypothesis is the cultural intelligence hypothesis, with brainier animals more likely to innovate and more likely to learn others’ innovations. The first part of this equation holds up: across various groups of organisms, including birds and primates, brainy animals are more flexible in their behavior, more likely to discover new adaptive behaviors, and more successful in colonizing novel environments. The second part is trickier. In recent years we’ve learned that learning useful information by observing others (go ahead, call it culture, if you want to annoy anthropologists) is extremely widespread, and found in organisms like guppies and honeybees that no one thinks are terribly bright. So learning from others doesn’t take special smarts.

Where bigger-brained animals may excel is not in how much social learning they do, but in how accurately they do it – in copying fidelity. Theoretical models of the evolution of copying suggest that accurate copying makes a big difference: small changes in copying fidelity can lead to large changes in the persistence of cultural traits. Of course this will be crucially important for human evolution: more on this in days to come.

The expected lifetime, measured in generations, of a cultural trait as a function of the efficiency of social learning (p). Each learning trial uses a new cultural parent drawn from the parent population (see text). Parameter value: n = 2.
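A rough illustration of why fidelity matters so much (a toy geometric model, not the one behind the figure): if a trait survives each generation of transmission with probability p, its expected lifetime is 1/(1−p) generations, which blows up as p approaches 1.

```python
# Toy model (not the figure's actual model): a trait's lineage survives
# each round of transmission with probability p, so its lifetime is
# geometrically distributed with expectation 1 / (1 - p). Small gains in
# fidelity near p = 1 yield huge gains in persistence.
def expected_lifetime(p):
    return 1.0 / (1.0 - p)

for p in (0.9, 0.99, 0.999):
    print(f"p = {p}: ~{expected_lifetime(p):.0f} generations")
```

Raising fidelity from 90% to 99% buys a tenfold increase in persistence; from 99% to 99.9%, another tenfold, which is the qualitative pattern the models referenced above turn on.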

Short version: It looks like most mammals, at least most large animals, have the brains they need, while primates, especially large primates, have the brains they can afford.

One reason for being interested in monkeys is that they’re brainy mammals. Here’s the conventional graph illustrating that:

Larger mammals tend to have larger brains, but the relationship is non-linear. Multiplying body mass by x doesn’t multiply brain mass by x; it multiplies brain mass by about x^0.75. In other words, Brain Mass is proportional to (Body Mass)^0.75. Equivalently (taking the logarithm of both sides), Log[Brain Mass] is equal to 0.75 times Log[Body Mass], plus a constant. So Log[Brain Mass] plotted against Log[Body Mass] gives a straight line with a slope of 0.75. That means that if one mammal has 16 times the body mass of another, it’s expected to have 8 times the brain mass; 10,000 times the body mass means 1,000 times the brain mass. The thing to note is that primates defy expectations: they have larger brains than would be expected based on their body sizes.
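The 3/4-power rule in the paragraph above is easy to check directly:

```python
# Brain mass scales as roughly (body mass)**0.75 across mammals, so a
# 16-fold body-mass difference predicts a 16**0.75 = 8-fold brain-mass
# difference, and a 10,000-fold difference predicts a 1,000-fold one.
for body_ratio in (16, 10_000):
    brain_ratio = body_ratio ** 0.75
    print(f"{body_ratio}x the body -> {brain_ratio:g}x the brain")
```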

But we’ve recently learned that primates – especially big ones – are even more special than this graph suggests. Suzana Herculano-Houzel has pioneered a technique that involves chopping up brains (or parts of brains), dissolving their cells to make a kind of brain soup, and counting cell nuclei. This allows her to estimate how many neurons there are in different brains.

Major findings: Among most mammals, the number of neurons increases more slowly than brain size. Increase brain size by x, and you increase the number of neurons by about x^0.67. (Herculano-Houzel shows this flipped around: increase the number of neurons by x and you increase brain mass by x^1.5.) But primates are exceptional; the relationship is nearly linear. An x-fold increase in primate brain size corresponds to about an x-fold increase in number of neurons. Humans follow the primate rule here: we have about the same density of neurons as other primates. When you combine the exceptionally large brain size of humans with a standard high primate neuron density, you get an animal with an enormous number of neurons. By contrast, a rodent with a human-sized brain, if it followed rodent rules for how neuron numbers increase with brain size, would have only 1/7 as many neurons.
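The two scaling rules in that paragraph can be put side by side (using the rounded exponents quoted above, not Herculano-Houzel's fitted equations):

```python
# Scale brain size by a factor x: the generic-mammal rule gives neurons
# growing as roughly x**0.67, while the primate rule is nearly linear
# (x**1). Exponents here are the rounded values quoted in the text.
def neuron_gain(brain_size_factor, exponent):
    return brain_size_factor ** exponent

x = 10.0
primate = neuron_gain(x, 1.0)     # 10x the neurons
generic = neuron_gain(x, 0.67)    # ~4.7x the neurons
print(f"{x:g}x the brain: primate {primate:g}x vs generic mammal {generic:.1f}x neurons")
```

The gap widens as brains get bigger, which is why the primate rule matters most for large-brained species like us.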

Neurons are expensive. Most large animals economize by cutting back on neuron density. A cubic centimeter of cow brain has fewer neurons, and consumes energy at a lower rate, than a cubic centimeter of mouse brain. By contrast, large primates are extravagant, devoting exceptionally large energy budgets to running their brains. And human brains are exceptionally costly. An important question for the study of human evolution is how we paid the bill for such costly brains. That’s a story for later. But another part of the story starts back in the early Cenozoic, when monkeys committed to a different set of rules for building brains.

And here is a chart giving absolute numbers of cortical neurons (cneurons) for a bunch of species. Scott Alexander has some thoughts about the moral implications. Short version: lobster for dinner, skip the pork. (And skip the elephant, chimp, and manflesh. But you knew that.)

A follow-up to yesterday’s post on the Fermi Paradox: some reasons the Universe could have been less suitable for the evolution of complex life until recently, making us one of the first intelligent species to evolve.

1) Metallicity. Chemical elements heavier than helium are formed inside stars, after the Big Bang. Elements heavier than iron are formed in exploding supernovas. These elements have been building up over time. Maybe they had to reach a threshold abundance to make complex life possible. Consider that in the “family tree” for the Sun, based on the concentrations of different elements, the Sun is the oldest member of its subfamily. Maybe it is only planetary systems associated with this subfamily that are well-suited for the evolution of intelligent life. And recent work suggests that phosphorus in particular may be a limiting and cosmically limited resource for the evolution of life.

2) Gamma Ray Bursts (GRBs). GRBs are bursts of gamma rays (high frequency radiation) lasting from milliseconds to minutes, like GRB 080319B. (Check out tweets for January 11.) GRBs are probably supernovas or even larger explosions where one pole of the exploding star is pointed at the Earth. A major GRB could irradiate one side of the planet, and also affect the other side by destroying the ozone layer, causing mass extinctions. GRBs may have swept the Milky Way frequently in the past. The good news is they’re probably getting less frequent. This could be the first time in the history of the Milky Way that enough time has passed without a major GRB for intelligent life to evolve. If true, we should think about how to protect ourselves from the next one – lots of sunblock recommended.

If GRBs are such a threat, we might expect to find evidence that they have caused mass extinctions in the past (not wiping out all life obviously). For more on this, check out upcoming blog posts and tweets for the end-Ordovician, March 3.

3) Panspermia (life from elsewhere). Pretty much as soon as Earth could support life, we see evidence of single-celled organisms. Then life evolves slowly for a long time. The usual story about this is that the origin of life is easy, and it happens as soon as possible. But there is another possibility (illustrated below). It may be that the transition from simple replicating chemical systems to bacteria with genomes of tens of thousands of DNA base pairs is a slow process that happened over many billions of years somewhere other than Earth. Then newly forming planets in the nebula that gave rise to Earth were “infected” by this source, by meteorites carrying early cells. (It would have been easier for meteorites to carry life from star system to star system when the Earth was first formed than it would be today.) Back when our hypothetical “Urth” was forming, a billion years before Earth, there might not have been any planets with cellular life on them as potential sources of life-bearing meteorites.

Today in Logarithmic History, January 17, covers a period beginning over a billion years before the origin of our solar system. Back then, stars were forming at a fast clip in the Milky Way and other spiral galaxies. So let’s suppose… Suppose one of those older stars resembled the Sun, and had a planet like Earth orbiting around it – call it Urth. And suppose life originated on Urth more or less as on Earth and followed more or less the same evolutionary path. With this head start, intelligent life could have evolved a billion years ago, and today there could be intelligent Urthians (or their robot descendants) a billion years ahead of us.

There’s an urban legend that says that Einstein called compound interest the strongest force in the universe. Einstein didn’t actually quite say this, but it’s not a crazy thing to say. For example, consider how compound interest works, backward, on our Logarithmic History calendar. December 30 covers a period 5.46% longer than December 31, December 29 is 11.2% longer (because 1.0546 * 1.0546 = 1.112), and so on. At this rate of compounding we wind up with January 1 covering 754 million years. The same math implies that if we invested 3 dollars at 5.46% interest, compounded annually, then after 364 years we’d have roughly 754 million dollars.
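The calendar arithmetic is easy to reproduce (the roughly three-year span of the final day is my back-calculated assumption, not a figure from the text):

```python
# Sketch of the calendar's compounding: each day going back in time covers
# 5.46% more years than the day after it. Assuming the final day
# (December 31) spans about 3 years -- a back-calculated figure -- 364
# steps of compounding carry January 1 to roughly 750 million years.
r = 1.0546
two_days_back = r ** 2 - 1            # ~11.2% longer, as stated
dec31_span_years = 2.97               # assumed span of the final day
jan1_span_years = dec31_span_years * r ** 364

print(f"two days back: {two_days_back:.1%} longer")
print(f"January 1 covers ~{jan1_span_years / 1e6:.0f} million years")
```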

With even the slightest compound rate of increase, a billion-year-old Elder Race would have plenty of time to fill up a galaxy, and undertake huge projects like dismantling planets to capture more of their suns’ energy. Which raises the question, posed by Enrico Fermi in 1950: “Where is everybody?” There are more than 100 billion stars in our galaxy, and more than 100 billion galaxies in the visible universe (actually, according to recent estimates, the number may be more than 1 trillion). If there are huge numbers of billion-year-old Elder Races around, why hasn’t at least one of them taken the exponential road and made themselves conspicuous?

There’s a large literature on the Fermi paradox. One possible explanation is that we’re one of the first intelligent species to evolve because the universe was somehow less suitable for the evolution of complex life before now. I’ll take that one up tomorrow.

And here’s an interview, just out, from the New Yorker, with Harvard astronomer Abraham Loeb, about ‘Oumuamua, the mysterious interstellar object which passed through our solar system in 2017. ‘Oumuamua might – just might – be an alien light sail; it must be a pretty strange object in any case.

The Miocene (23 – 5 million years ago) is a period of extraordinary success for our closest relatives, the apes. Overall there may have been as many as a hundred ape species during the epoch. Proconsul (actually several species) is one of the earliest. We will meet just a few of the others over the course of the Miocene, as some leave Africa for Asia, and some (we think) migrate back.

Sometimes evolution is a story of progress – not necessarily moral progress, but at least progress in the sense of more effective animals replacing less effective. For example, monkeys and apes largely replace other primates (prosimians, relatives of lemurs and lorises) over most of the world after the Eocene, with lemurs flourishing only on isolated Madagascar. This replacement is probably a story of more effective forms outcompeting less effective. And the expansion of brain size that we see among many mammalian lineages throughout the Cenozoic is probably another example of progress resulting from evolutionary arms races.

But measured by the yardstick of evolutionary success, (non-human) apes — some of the brainiest animals on the planet — will turn out not to be all that effective after the Miocene. In our day, we’re down to just about four species of great ape (chimpanzees, bonobos, gorillas, and orangutans), none of them very successful. Monkeys, with smaller body sizes and more rapid reproductive rates, are doing better. For that matter, the closest living relatives of primates (apart from colugos and tree shrews) are rodents, who are doing better still, mostly by reproducing faster than predators can eat them.

So big brains aren’t quite the ticket to evolutionary success that, say, flight has been for birds. One issue for apes may be that with primate rules for brain growth – double the brain size means double the neurons means double the energy cost – a large-bodied, large-brained primate (i.e. an ape) is going to face a serious challenge finding enough food to keep its brain running. It’s not until a later evolutionary period that one lineage of apes really overcomes this problem, with a combination of better physical technology (stone tools, fire) and better social technology (enlisting others to provision mothers and their dependent offspring).