John Shea, a professor of anthropology at Stony Brook University, gives us a double whammy of actual human evolution this month, rather than the typical victorious narrative. Using fossil and archaeological evidence, Shea takes down the idea that we became “modern” late in human evolution, with that sense of modernity (or progress) often tied to causes like “a cognitive revolution” or “an explosion of culture.”

In other words, he wants to contradict the popular image just below:

We have known for years that this image is entirely wrong. Chimpanzees, often cast at the base of the sequence, are – wow – just as evolved as we are. After all, they are an extant species, living today, and have also gone through millions of years of evolution since our common ancestor.

The sequence from a hunched-over, shuffling apeman in the middle to an erect man at the end is also wrong. Bipedality came early in hominin evolution and is linked with an upright skeletal frame, not with some missing link that mixes chimp and human into one. Finally, the linear sequence itself is wrong. There were numerous species of hominins in the past, a veritable branching tree much like the ones Darwin used to diagram.

Shea adds one more important element that helps to completely dismantle this popular image. We often assume that our immediate ancestors were primitive, and that something special must set us apart.

For decades, archaeologists have believed that modern behaviors emerged among Homo sapiens tens of thousands of years after our species first evolved. Archaeologists disagreed over whether this process was gradual or swift, but they assumed that Homo sapiens once lived who were very different from us. These people were not “behaviorally modern,” meaning they did not routinely use art, symbols and rituals; they did not systematically collect small animals, fish, shellfish and other difficult-to-procure foods; they did not use complex technologies: Traps, nets, projectile weapons and watercraft were unknown to them.

Premodern humans—often described as “archaic Homo sapiens”—were thought to have lived in small, vulnerable groups of closely related individuals. They were believed to have been equipped only with simple tools and were likely heavily dependent on hunting large game. Individuals in such groups would have been much less insulated from environmental stresses than are modern humans. In Thomas Hobbes’s words, their lives were “solitary, poor, nasty, brutish and short.”

This view is wrong. Much of the evidence for it has relied on stone tool technologies, which are one of the most accessible records we have of past human activity, and on the presumed florescence of art and sophisticated toolmaking among European Cro-Magnons, contrasted with those nasty, primitive Neanderthals.

Shea uses new evidence on human fossils and tool making to refute this old and worn-out narrative. First, modern humans have a greater time depth and greater geographical distribution than often assumed:

In Europe, the oldest Homo sapiens fossils date to only 35,000 years ago. But studies of genetic variation among living humans suggest that our species emerged in Africa as long as 200,000 years ago. Scientists have recovered Homo sapiens fossils in contexts dating to 165,000 to 195,000 years ago in Ethiopia’s Lower Omo Valley and Middle Awash Valley. Evidence is clear that early humans dispersed out of Africa to southern Asia before 40,000 years ago. Similar modern-looking human fossils found in the Skhul and Qafzeh caves in Israel date to 80,000 to 120,000 years ago. Homo sapiens fossils dating to 100,000 years ago have been recovered from Zhiren Cave in China. In Australia, evidence for a human presence dates to at least 42,000 years ago.

Second, the stone tool evidence shows great variability and overlap in tool types over this time period, rather than some march to modernity.

When [Clark’s model of five modes of technology] is applied to sites in eastern Africa dating 284,000 to 6,000 years ago, a more complex view of prehistoric life there emerges. One does not see a steady accumulation of novel core technologies since our species first appeared or anything like a “revolution.” Instead one sees a persistent pattern of wide technological variability.

But Shea’s real target is not simply the documentation of variation in fossils and tools in the past; it’s how we think about the past, and how we do research and test ideas about recent human evolution.

Paleolithic archaeologists conceptualize the uniqueness of Homo sapiens in terms of “behavioral modernity,” a quality often conflated with behavioral variability. The former is qualitative, essentialist, and a historical artifact of the European origins of Paleolithic research. The latter is a quantitative, statistically variable property of all human behavior, not just that of Ice Age Europeans. As an analytical construct, behavioral modernity is deeply flawed at all epistemological levels.

This paper outlines the shortcomings of behavioral modernity and instead proposes a research agenda focused on the strategic sources of human behavioral variability. Using data from later Middle Pleistocene archaeological sites in East Africa, this paper tests and falsifies the core assumption of the behavioral-modernity concept—the belief that there were significant differences in behavioral variability between the oldest H. sapiens and populations younger than 50 kya. It concludes that behavioral modernity and allied concepts have no further value to human origins research. Research focused on the strategic underpinnings of human behavioral variability will move Paleolithic archaeology closer to a more productive integration with other behavioral sciences.

As Shea writes, the “behavioral modernity” approach focuses too much on the search for “human uniqueness”, and is framed by a narrative of moving from a primordial past to our resounding success as a species. In a typical heroic narrative, obstacles must be overcome, failures fought through, until finally there is a transformation in the hero – ourselves. That transformation is when we became “behaviorally modern”, and language and art and all those good things are the trappings of our success.

This narrative comes with significant limitations.

The strongest reason for discarding “behavioral modernity” and “modern human behavior” is that they lack analytical precision. As matters stand today, there are wide and irresoluble theoretical disagreements about the nature of behavioral modernity, how to define it, and how to recognize it. Eurasian prehistorians use the term “modern human behavior” for evidence that occurs consistently over tens of thousands of years at a regional scale (various parts of Europe and/or Southwest Asia). Africanists use the term for behavior that occurs intermittently over hundreds of thousands of years at a continental scale. Neither term clarifies the description of archaeological evidence, nor does either of them refine our understanding of the evolution and variability of a particular behavior. They have become postmodern concepts, words that mean whatever one wants them to.

The idea that behavioral modernity is a derived evolutionary state, one not shared by all morphologically modern-looking H. sapiens and one that can be reliably diagnosed from behavioral characteristics, is rich with potential for abuse. It fits well with racist arguments that there are meaningful grade-level evolutionary differences among living humans. Such views are rarely expressed in scientific circles (or polite company), but they nevertheless can find traction among nonscientific audiences because they incorporate the same unilinear model of human evolution that underlies the behavioral modernity concept. If paleoanthropologists judge humans’ evolutionary state based on their behavior, why shouldn’t others do so as well? Discarding the term “behavioral modernity” will not stop individuals from cherry-picking selected findings of paleoanthropology to support racist agendas, but it will deny them the illusion that they are emulating an accepted scientific method.

Shea proposes a focus on behavioral variability as the solution. He approaches it as a quantifiable problem, and one linked to similar types of research in behavioral ecology.

Variability is a measurable quality of all human behavior expressed in terms of modality, variance, skew, and other quantitative/statistical properties. These qualities change through time and space, and they do not necessarily follow a preferred direction. Trends are recognizable only in hindsight, ex post facto. A more versatile individual (or species) can become more specialized, or vice versa, in response to variation in selective pressures…
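The quantities Shea names here are ordinary descriptive statistics, and it is worth seeing how concrete they are. A minimal sketch (ours, not Shea’s, and with purely hypothetical numbers) of how one might summarize, say, counts of distinct core technologies per assemblage:

```python
import statistics

def describe_variability(values):
    """Descriptive statistics of the kind Shea invokes:
    modality (most common value), variance, and skew."""
    n = len(values)
    mean = statistics.fmean(values)
    var = statistics.pvariance(values)
    sd = var ** 0.5
    # Fisher-Pearson coefficient of skewness (population form)
    skew = sum((x - mean) ** 3 for x in values) / (n * sd ** 3) if sd else 0.0
    return {"mode": statistics.mode(values), "variance": var, "skew": skew}

# Hypothetical counts of distinct core technologies in eight assemblages
assemblages = [2, 3, 3, 4, 2, 3, 5, 3]
print(describe_variability(assemblages))
```

The point of Shea’s framing is that these are properties of a distribution of behaviors, not of an essentialized “stage” of evolution: two samples can share a mode yet differ sharply in variance or skew.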

[This article] calls on Paleolithic archaeologists researching the evolution of H. sapiens to conceptualize human behavior in strategic terms, to seek the cost-benefit structure of the incentives underlying particular behaviors, and to document variation in the contexts in which particular behavioral strategies are deployed (or not). The result will be far better models of human behavioral variability than are currently available.

In doing this work, archaeologists will of necessity have to work more closely with behavioral ecologists. This field has developed a sophisticated and nuanced language for describing and analyzing strategic and behavioral variability of living humans and other species. Such collaboration could be a very good thing for archaeology and for anthropology in general. A common focus on strategic modeling of human behavior is a powerful antidote to the centrifugal forces pulling anthropologists away from one another into ever more rarified specializations.

Shea’s paper is a bold move away from the narrative of human uniqueness, as well as away from the “Stone Age mind” approach of evolutionary psychology, which also relies on this contrast between the Paleolithic and modern behavior. His emphasis on behavioral variability, rather than evolved modules to do specific things and hopeful neural mutations that somehow led to the jump to modern behavior, significantly challenges the dominant evolutionary thinking about the links between past and present.

This conclusion has three important implications for Paleolithic archaeological research on the origins and evolution of H. sapiens. First, the capacity for behavioral variability we think to be uniquely evolved among recent human populations may be evolutionarily primitive. Second, this capacity for behavioral variability may be one shared with now-extinct hominin species. Finally, differences in the capacity for behavioral variability may not explain why these other species are extinct and H. sapiens is not. The case for behavioral variability is a strong one, but few major issues in evolution boil down to single causes.

As a Current Anthropology article, Shea’s piece comes with commentaries. They are uniformly positive in praising the move towards a “behavioral variability” paradigm. Critiques generally focus on the details of how this research can be done, and indicate that Shea’s approach (relying on behavioral ecology and using a statistical model to examine behavioral strategies) is not the only one possible. Rick Potts’ summary gives you a taste:

The reason to embrace the paper’s main thrust is that Shea is on the mark in his tough critique of progress-oriented paradigms, limits of extrapolation, and the difficulties in pinpointing the actual origination time of any particular behavior. He gives ample reasons to look for a novel analytical approach and terminology to supplant the concept of behavioral modernity.

The paper thus leads one to expect that a quantitative statistical analysis will capture measurable dimensions of behavioral variability and will help us understand and possibly even better explain the evolution of human behavior. More than one reader will be surprised, then, that Shea’s principal analysis invokes Clark’s system of technological modes to show that East African lithic assemblages from roughly 300 to 100 kya possess the “type artifacts” of modes 1 through 4. Shea takes this as a measure of wide behavioral variability in H. sapiens from its onset. This is surprising, because Clark’s system typifies the linear evolutionary paradigm that Shea decries. It is also surprising because the presence of these modes is considered a demonstration of behavioral variability somehow related to the claim that it is a measurable, statistically testable concept…

The point of this critique is that Shea’s paper aims so impressively for a conceptual breakthrough that I really wished to see a quantifiable statistical treatment of behavioral variability, one that would match the expectation. It is yet to come.

To add my own spot of critique, Shea’s reliance on behavioral ecology, with its cost/benefit approach, and his reduction of variation to largely normative statistics (“modality, variance, skew”) represent only one spotlight on how to think about human evolution. Adaptationist thinking, specifically a focus on mechanisms of behavior, is largely absent in his paper. So is grappling with questions of how culture evolved, tightly linked to the best line of evidence we have – tool types. “Behavioral variability” in a cost/benefit and normative approach is not equipped to engage with these types of questions.

To take one example, Greg’s work on human malleability represents an entirely different way to think about behavioral variability and its role in human evolution. As he wrote in his recent piece on free diving:

Human skills and adaptation show us how our brains and nervous systems can be trained to do amazing things. Frequent readers will know that I think much of the discussion of ‘human nature,’ carried out by — to put it nicely — exceptionally sedentary theorists, severely underestimates what our bodies are capable of doing.

Too often, in discussions of human adaptation, we allow a flabby distinction between three basic types of adaptation: genetic, phenotypic (or physiological), and cultural (or technology). What I’ve been playing with, and will return to at the end of this piece, is the inseparability of these, especially the last two: physiology and culture. The Bajau fisherman Sulbin shows us how biology and culture are inseparable because what he does ends up shaping his body, but only because he grew up around people who knew how to manage becoming human in this distinctive amphibious way and because his adaptations play upon how his nervous system works.

Shea’s approach is a new derivative of the phenotypic approach, and a welcome break with the genetic/cultural combo of the past two decades pushed by both evolutionary psychologists and cultural anthropologists – genetically encoded adaptations from the Paleolithic give way to the power of language, symbolism, and art.

As Greg argues, human ways of doing things with our brains, bodies, and social environments represent a key adaptation, accounting for both variation and similarity around the globe. In other words, the neuroanthropological approach asks, how do you generate those patterns of variation? Shea assumes that they are given by nature, by an evolutionary process that is ongoing in how it sorts between the match of a behavioral variant and local success. But what if one of our key adaptations is that we can produce such variation?

But, as with Potts and many of the reviewers of the Current Anthropology article, these are specific questions about research paradigms rather than a wholesale critique. Greg and I are interested in neural/cultural linkages, and in behavior not just as an evolutionary strategy but as something actual people, with specific bodies and brains, do. Hence our different focus.

But Shea is right with his larger critique. His conclusion to the American Scientist piece is a wonderful summary of how we need to shift our thinking about human evolution and how that links to anthropology as a whole.

Dividing Homo sapiens into modern and archaic or premodern categories and invoking the evolution of behavioral modernity to explain the difference has never been a good idea. Like the now-discredited scientific concept of race, it reflects hierarchical and typological thinking about human variability that has no place in a truly scientific anthropology. Indeed, the concept of behavioral modernity can be said to be worse than wrong, because it is an obstacle to understanding. Time, energy and research funds that could have been spent investigating the sources of variability in particular behavioral strategies and testing hypotheses about them have been wasted arguing about behavioral modernity.

Anthropology has already faced this error. Writing in the early 20th century, the American ethnologist Franz Boas railed against evolutionary anthropologists who ranked living human societies along an evolutionary scale from primitive to advanced. His arguments found an enthusiastic reception among his colleagues, and they remain basic principles of anthropology to this day. A similar change is needed in the archaeology of human origins. We need to stop looking at artifacts as expressions of evolutionary states and start looking at them as byproducts of behavioral strategies.

The differences we discover among those strategies will lead us to new and very different kinds of questions than those we have asked thus far. For instance, do similar environmental circumstances elicit different ranges of behavioral variability? Are there differences in the stability of particular behavioral strategies? Are certain strategies uniquely associated with particular hominin species, and if so, why? By focusing on behavioral variability, archaeologists will move toward a more scientific approach to human-origins research. The concept of behavioral modernity, in contrast, gets us nowhere.

I think these are very important papers; everyone should read them. Yet the more I think about them, the more I think Shea’s “solution” is incoherent. It just pushes the problem of cognitive evolution backward in time, while denying that cognitive evolution has any observable archaeological correlates.

I find myself a bit surprised to have this reaction, because the viewpoint Shea opposes — the “sudden behavioral modernity” idea — is one that I’ve long opposed as well. But those troublesome Neandertals make just as many problems for Shea’s scenario as for the “behavioral modernity” idea.

Putting hands to flint or wood, clay or bone and trying to reproduce the products of primitive technology brings respect for the work of our ancient ancestors, and gives pause to any thoughts that they were less skilled or intelligent than people today. Jared Diamond’s ideas about the development of civilization as a product of the resources available in the environment where people found themselves have bearing here too. He asserted that, in his experience, the average New Guinea tribesman was somewhat smarter than the average civilized American or European. They were not as materially well off because their environment dictated that more time be spent gathering necessities than in societies which had the time to develop their culture. Culture was a product of food surplus, according to Diamond.

Thank you, thank you, thank you. And yet, there are scientific disciplines where a narrative of teleological evolution towards “developed” is still the mainstream. I need not even look at the large-N shenanigans a study uses to gain significance if I read in the abstract that we can look at “less developed cultures” to extrapolate how the behavior of our common human ancestors somehow became part of our genetic predisposition to behavior. As if indigenous people have been suspended in cryostasis while the Western world invented civilization. For §$%! sake!

In so many interdisciplinary fields anthropological hypotheses that are younger than the chauvinist paradigm of Herder are simply ignored. Let’s hope this piece gets some recognition in the respective circles after all.

Mike, that’s just wrong. Race as a biological concept is dead. There are three primary reasons why:

(1) There’s a better concept/term. It’s called “population”; we can refer to particular human populations. The continued use of “race” to refer to populations is rather like using “phlogiston” to talk about combustion. It dates you, and it doesn’t aid in the scientific understanding of population variation.

(2) Race as a social category, with some truly pernicious historical and present-day impacts, continues as a powerful force in contemporary social life. Using “race” as a biological term actually sacrifices good science, because it promotes the easy conflation of population thinking with the sort of racial/racist thinking that exists in popular US discourse. To my mind, anyone who ignores this history and insists that “race” can somehow be rehabilitated as a scientific concept does damage to the good science around understanding patterns of human variation, and has either explicit or implicit social views that they want to promote (often to the detriment of people placed in particular social categories of “race”). So, once again, given this history around the “race” concept, I have a hard time with anyone who says it can still be used as a good scientific category. It willfully obscures one thing with another.

(3) “Race” in US parlance has generally referred to bounded groups who have some distinguishing biological features that together make them different from other groups. Given the overwhelming evidence for (a) greater variation within populations than across them, (b) that human variation generally works along clines, without discrete breaks, and (c) the non-concordance of traits (such as hair type and skin color) which typically are grouped together in the US, then “race” as a distinguishable group, as a sort of definable sub-type or sub-species as Sesardic or Coyne posit, is again just bad science.

We have much better ways to think about human variation than using the concept of “race”, terms that not only are closer to the empirical realities of human variation but also avoid some really contentious social histories.

To put it differently, if “race” were such a good idea for understanding biological variation, why don’t we hear all the biologists out there talking about the different “races” of animals that they study? Once that was fairly common – back in Darwin’s time. We’ve moved far beyond that now. Biologists speak of populations, clines, sub-species, and the like. It makes better biological sense, and is better science terminology.
