2017 was a calamitous year for the North Atlantic right whale. The final count of the 2017 "Unusual Mortality Event" (UME), as defined under the Marine Mammal Protection Act, was eighteen animals. Fourteen North Atlantic right whales were found dead from the Gulf of St. Lawrence to Cape Cod between June and December, with an additional four strandings and entanglements through the year. The average annual mortality rate for the North Atlantic right whale is four animals. To make matters worse, these right whales began 2017 with an estimated population of just 450 animals, including only about one hundred breeding females, who have exhibited such stress in recent years that their breeding rate has slowed. Underscoring that stress, in addition to the UME, 2017 saw no recorded calf births.

The cause of the UME is no mystery; warming waters have pushed the whales' habitat further north. Bypassing their usual feeding grounds in the Gulf of Maine, most of the whales were found entangled or dead in the Gulf of St. Lawrence, where they have sought out their favorite food, the cold-water copepod Calanus finmarchicus. Measures in place in more southern waters to prevent entanglements and ship strikes haven't been implemented further north, but as the whales move, so too must these regulations – if there is time left. Marine ecologist Mark Baumgartner of Woods Hole Oceanographic Institution ominously noted in December 2017 that North Atlantic right whales, without immediate intervention and protective measures, would be extinct in twenty years. The story of the North Atlantic right whale may end in 2040, but the end of this species may have begun a thousand years earlier.

2017-2018 North Atlantic Right Whale Unusual Mortality Event, from NOAA Fisheries Marine Life in Distress.

North Atlantic right whales, Eubalaena glacialis, earned their common name because of the ease with which they were hunted – they were literally the 'right' whales to hunt. Right whales are bountiful in blubber, giving them exceptional buoyancy, even after death. Their shallow coastal habitat, slow swimming speed, docility, and sizeable pods – averaging twenty animals but recorded in superpods of one hundred – made them accessible and attractive to coastal predation. Averaging fifteen meters in length and 40-100 tons in weight, a single right whale could feed and supply a premodern community for months.

The deck is stacked, it would seem, against the Atlantic species, and it is clear that all of these natural factors led to significant premodern exploitation of the right whale across the Atlantic. While a population of right whales also exists in the Pacific, neither predation nor natural causes has led there to the dramatic population decline we see in the Atlantic. Biologists have estimated around 5500 right whales taken across the North Atlantic from the 17th through the early 20th centuries. After reaching an estimated population low of 100 in the 1930s, the species' brief recovery to approximately 500 animals in the modern era is cold comfort. Without a sense of a historical baseline or 'natural' population of North Atlantic right whales, both in population size and genetic diversity, it is difficult to estimate what a recovered population should look like and whether recovery is even possible.

Historians and marine biologists recognize that human exploitation and interference have had a massive impact on the North Atlantic right whale, particularly during and since the sixteenth century. Randall Reeves says that this species has suffered from “one of the most extensive, prolonged, and thorough campaigns of wildlife exploitation in all of human history.” It is a challenge, though, to determine the full extent of human interference in the case of premodern depletions or extinctions. North Atlantic right whales once existed in two presumed separate breeding populations. Still extant, for now, is the population local to the western North Atlantic and the North American coastline. An eastern North Atlantic population, thought to have bred off the coast of the Canaries and migrated along the Atlantic coast to the Subarctic, is presumed extinct. Of this population, though, we know almost nothing with respect to population size, species duration, or genetic diversity. Did changes in premodern climate – the Medieval Climatic Anomaly and the Little Ice Age – affect the whales' migrations and habitats alongside human interference? Were right whales so heavily predated in premodernity that non-industrial whaling could bring about the end of a population? This is the case typically made for North Atlantic gray whales, extinct by the eighteenth century, and for the eastern population of North Atlantic right whales. Does this also explain the precarious state of the western population of North Atlantic right whales?

Reeves, in 2007, wrote that “historical research has provided a general perspective on past right whale distribution, population structure, and numbers, but understanding of just how abundant these animals were when whaling began in the North Atlantic remains vague." In 2008, Brenna McLeod led a genetic analysis of historical North Atlantic right whale remains from whaling stations up and down the Labrador coast. McLeod and her team concluded that “the pre-exploitation population size of right whales was clearly much smaller than previously estimated [which] has effected our modern impressions of the recovery of right whale stocks.” In short, predation almost certainly played a role in the species' decline, but the degree remains unclear.

The deep history of North Atlantic right whales and their ill-fated engagements with human populations could offer a valuable lens at this critical moment for the species. Historical and archaeological proxy data on cetacean populations, especially for the North Atlantic right whale, may contribute to analysis of modern species populations, habitats, behaviors, and other statistics. Working back from peak points of exploitation through the earliest records of right whale use, historical and archaeological evidence may provide useful context for this imperiled species.

Basque Whale-Fishing. Facsimile of a Woodcut in the "Cosmographie Universelle" of Thevet, in folio: Paris, 1574. Project Gutenberg text 10940.

We often begin the story of the North Atlantic right whale extirpation with the medieval Basques, who historically have been blamed for the destruction of the eastern branch of the North Atlantic right whale. The Basques established some of the earliest whale fisheries along the European Atlantic coast, maintaining those fisheries from the 13th through the early 20th centuries. Forty-seven medieval and early modern French and Spanish Basque ports have been identified as possible whale fisheries. Alex Aguilar, using catch records that begin in the 16th century, estimates that each port may have taken one or at most two whales a year through the eighteenth century.

Though far from the depredations wrought by industrial whaling, even this small catch was enough to make an impact on the population, as the Basques were known to target whale calves, and the Bay of Biscay may have been the winter nursery for the eastern population of North Atlantic right whales. In some ports, Aguilar concluded, calves accounted for over 20% of recorded catches. Additionally, mothers will follow struck calves, making them more readily subject to capture as well and removing breeding females from the population. This hunting strategy, common among preindustrial whalers, would explain the apparent downturn in catch records by the 18th century and a thinning population that may have precipitated Basque movements to new hunting grounds in North America and the Northeastern Atlantic and Subarctic, where their quarry was the western population of North Atlantic right whales. Also to be considered are possible changing habitats and migrations over the course of the Little Ice Age, when bowhead whales may have moved south into the Subarctic, potentially competing with right whales for prey.

Basque whalers in Labrador reportedly caught well over 20,000 animals, presumed, again, to be right whales. Archaeological and historical investigations of around twenty whaling ports along the Labrador shore, focusing especially on the major Basque port of Red Bay, have forced a reassessment of the role of the Basques in right whale extirpation. In genetic analysis of nearly three hundred whale bones from ten of those ports, only one sample was identified as right whale. Over two hundred bones came from bowhead whales, of which 72 individual animals were identified. The unanticipated number of bowheads on these sites has altered our perceptions not only of the right whale's decline, but also of the expected habitats of the bowhead. While these findings do not clear the Basques of any role in the right whales' decline, their involvement may not have been central to it. The Basques had another crack, potentially, at the North Atlantic right whales during their ill-fated residence in the Icelandic Westfjords, but they weren't the only hunters targeting these animal populations.

Right whales and their utility to human societies have been documented for over a thousand years in Europe. This documentation largely comes from the Northern world, and specifically from Norse populations, from the homeland and across the diaspora. Norse whalers in Ireland, according to a Spanish geographer, spent the 11th century picking off right whale calves, perhaps from the same population travelling through the Bay of Biscay: "… on their coasts, [the Norsemen] hunt the young of the whale, which is an exceeding great fish. They hunt its calves, regarding them as a delicacy. They have mentioned that these calves are born in the month of September, and are hunted in the four months October to January. After this their flesh is hard and no longer good for eating…. Then they cut up the meat of the calf and salt it. Its meat is white like snow, and its skin black as ink." The North Atlantic right whale migration up the European coast peaked in January, but lasted from October through March, according to Aguilar's analysis of the Basque hunt in Spain and southern France. If right whale calves were being taken at multiple points up and down the European coast, the seemingly minimal catch of the Basque ports becomes magnified in its impact.

In addition to the Norse whalers of Ireland, Norsemen back in the homeland itself had already been targeting right whales some three centuries prior. The laconic merchant-hunter Ottar hunted right whales in the Arctic off the coast of northern Norway, or so he told King Alfred and his court in the ninth century. The whales of Ottar's homeland, he noted, were far bigger than those he fished from the sea off Tromsø, but his ship, along with five others, reportedly killed sixty large whales in the span of two days. Ottar describes the whales as being up to 20 meters long, and while the figure is perhaps exaggerated, many historians have surmised that his quarry were right whales, a species called "the first commercial whale." For ninth-century Norsemen, the ability to strike and kill whales mattered less, perhaps, than the capacity to control and acquire the carcass. A whale that sank was of no good to anyone, but whales that floated would certainly be keenly sought. Even without netting, floats, or lines secured by harpoons, a premodern hunter could kill and recover a right whale if conditions were right.

There is good reason to place faith in the ability of hunters like Ottar or the Hiberno-Norsemen to recognize whales they could catch. The thirteenth-century anonymous King's Mirror, Old Norse Konungs skuggsjá, described the behavior and appearance of over a dozen species of North Atlantic whales. Medieval manuscript illuminations from Scandinavian and Western European texts depict whales in various recognized activities that awe and delight us today – breaching, porpoising, spy-hopping, logging and especially predation. Among the most articulate and observant of all premodern authors was the late medieval Icelander Jón Guðmundsson, also known as Jón Laerði, or Jon the Learned (1574-1658). Jón was a sorcerer and a poet, a physician, outlaw, artist, fisherman, historian, and naturalist. He possessed a wealth of local, traditional environmental knowledge on seafaring, fishing, and especially on whales.

Jón Laerði, or Jon the Learned (1574-1658), illustration of whale species from his Natural History of Iceland, GKS 1639 kvart, Royal Library, Copenhagen; photo courtesy of Icelandic Museum of Natural History.

Jón was born and lived on the Snaefellsnes peninsula in Western Iceland, where he says he saw many whales. He also lived in the Westfjords, where he witnessed and recorded the infamous killing of Basque whalers in 1615. Sometime before his death, perhaps around 1640, Jón wrote a work called the Natural History of Iceland in which he illustrates and describes twenty-two whale species of Iceland. According to Viðar Hreinsson, recent biographer of Jón Laerði, Jón compiled illustrations with Danish captions of nineteen whale species, in addition to a rather skinny walrus, on a loose leaf of paper preserved in the Royal Archive in Copenhagen. The right whale, according to Jón, exists in two varieties: a smaller one, the sléttbakur, and a larger animal he calls höddunefur, measuring 35 ells (about 17 meters) at the longest. The smaller whales are the ones most hunted, particularly for their valuable blubber. Icelandic waters, he notes, had been home to a large number of those whales, but the "foreign whalers have reduced the number of this species the most." One wonders which population of right whales – western or eastern North Atlantic – was being preyed upon, and whether the decline Jón noted was the beginning of the end of the North Atlantic right whale.

In what ways can past histories tell us something new, critical, or important about modern animal populations? In the case of the North Atlantic right whale, new technologies like ancient DNA analysis offer a possible means of insight into the current state of this population and context for the references to the species throughout medieval and early modern literature and history. Through an ongoing National Science Foundation Arctic Social Science project (NSF #1503714, Assessing the Distribution and Variability of Marine Mammals through Archaeology, Ancient DNA, and History in the North Atlantic; henceforth the Norse North Atlantic Marine Mammal Project or NNAMMP), genetic materials from whale remains found on numerous archaeological sites in the North Atlantic and Subarctic may provide evidence related to modern right whale populations.

Archaeological sites across the North Atlantic often preserve fragments of marine mammal bones, both as artifacts and as butchery or bone-working residue. The Norse North Atlantic Marine Mammal Project has compiled over 200 worked and waste whale bone samples from a dozen archaeological sites in Iceland, Greenland, North America, the Faroes and Orkney, ranging from 800 through 1500 CE. Whale bone is a challenging resource for archaeological analysis, defying typical zooarchaeological standards for data recording and analysis. Unlike livestock, whales were not brought to settlement sites for butchery, so the bones found on a site do not follow regular butchery patterns. Depending on the size of the animal stranded, only 10 to 15% or less of an animal's body weight may be derived from hard tissues; in the case of North Atlantic right whales, about 13% of an animal's body weight is bone. This detail becomes important when you consider that the physical evidence of premodern whale use must come from this small percentage of hard tissues, of which only a fraction – if any at all – is transported from a coastal butchering site to an inland settlement.
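The hard-tissue arithmetic above can be sketched in a few lines. The ~13% bone fraction for right whales comes from the text; the example body masses (drawn from the 40-100 ton range cited earlier in the piece) are illustrative assumptions only.

```python
# Rough sketch of the hard-tissue arithmetic: how much bone could a single
# carcass ever contribute to the archaeological record?
BONE_FRACTION = 0.13  # share of a right whale's body weight that is bone (per the text)

def bone_mass_tonnes(body_mass_tonnes: float) -> float:
    """Upper bound on the bone mass available from one carcass."""
    return body_mass_tonnes * BONE_FRACTION

# Illustrative body masses from the 40-100 ton range cited earlier.
for body in (40, 70, 100):
    print(f"{body}-ton whale -> at most ~{bone_mass_tonnes(body):.1f} tons of bone")
```

Even at the top of the range, only about thirteen tons of bone could ever enter the record, and only a fraction of that was carried inland from the butchering site.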

Complicating matters, medieval laws and charters, in Iceland and across the Continent, scrupulously divide stranded whales based on location of stranding, species, class or status of the claimant, and other factors. With all of these metrics in play, recovery of whale bone is not assured on any site, and the bone present on a site may not attest to the quantity of soft tissues used from any animal. Further, isolating the bones of a single species from heavily modified and fragmented whale bone poses an additional challenge for species analysis.

Archaeological site at Siglunes, Eyjafjörður; note large fragment of whale cranium in the left foreground. Photo reproduced with permission of Dr. Ramona Harrison, University of Bergen, and Fornleifastofnun Íslands (FSI).

Despite these challenges, the Norse North Atlantic Marine Mammal Project and Brenna McLeod at the Frasier lab at St. Mary's University have identified at least thirteen distinct cetacean species within the 200+ bone samples. Among those samples, nine unique examples of Eubalaena glacialis have been genetically confirmed across the sampled site assemblages. Over the course of the next year, our project will continue to identify and analyze additional bone samples from across the North Atlantic. Microsatellite analysis of nuclear DNA from identified samples will help to determine which populations of North Atlantic right whales are represented across archaeological sites from North America, Iceland, Greenland, and the Orkney Islands. By 2020, our project will have analyzed over 400 whale bones, and we hope to tell a number of species stories, not postscripts, on the whales of the North Atlantic.

Aguilar, Alex. “A Review of Old Basque Whaling and its Effect on the Right Whales (Eubalaena glacialis) of the North Atlantic.” Report of the International Whaling Commission, Special Issue 10 (1986): 191-199.

Reeves, R. R., T. Smith and E. Josephson. “Near-annihilation of a species: Right whaling in the North Atlantic.” In S. D. Kraus and R. M. Rolland (eds), The Urban Whale: North Atlantic Right Whales at the Crossroads, 39-74. Cambridge, MA: Harvard University Press, 2007.

Roman, Joe and Stephen R. Palumbi. “Whales Before Whaling in the North Atlantic.” Science 301 (25 July 2003): 508-510.

Will climate change trigger widespread food shortages and result in huge excess mortality in our future? Many historians have argued that it has done so before. Anomalous weather, abrupt climate change, and extreme dearth often work together in articles and books on early medieval demography, economy and environment. Few historians of early medieval Europe would now doubt that severe winters, droughts and other weather extremes led to harvest failures and, through those failures, to food shortages and mortality events.

Most remaining doubters adhere to the idea that food shortages had causes internal to medieval societies. Instead of extreme weather or abrupt climate change, they blame accidents of (population) growth, deficient agrarian technology, unequal socioeconomic relations and weak institutions. Yet only rarely have such explanations stolen the show or dominated the scholarship. For example, Amartya Sen’s “entitlement approach” to subsistence crises, which assigns primary importance to internal processes, has made few inroads into the literature on early medieval dearth, although it has many adherents for later periods.

Of course, the idea that big events have a single cause – monocausality, in other words – rarely convinces historians for long. Famine theorists and historians of other eras and world regions now argue that neither external forces such as weather, nor internal forces such as entitlements, alone capture the complexity of food shortages. They propose that these two explanatory mechanisms, often labeled “exogenous” and “endogenous,” respectively, should not be considered independent of one another or mutually exclusive. To them, periods of dearth can be explained by environmental anomalies, like unusual and severe plant-damaging weather, that coincide with socioeconomic vulnerability and declining (for most people) entitlement to food.

These explanations are more convincing. It seems that diverse factors acted in concert to cause, prolong and worsen food shortages. But proof for complex explanations for dearth in the distant past is hard to come by. Though they can be misleading, simpler, linear explanations are much easier to pull out of the extant evidence. This is true even when the sources are plentiful, as they are, at least by early medieval standards, for some regions and decades of Carolingian Europe. Food shortages in the Carolingian period, especially those that occurred during the reign of Charlemagne, have attracted the attention of scholars since the 1960s.

Left: Bronze equestrian statuette of Charlemagne or possibly his grandson Charles the Bald (823-877). Discovered in Saint-Étienne de Metz and now in the Louvre. The figure is ninth century in date. The horse might be earlier and Byzantine. Charles the Bald ruled the western portion of the post-Verdun empire, although whether he was actually bald is still debated. Right: A Carolingian denarius (812-814) depicting Charlemagne. The Charlemagne of the Charlemagne reliquary mask (Center) is handsomer. The coin, though, is contemporary and the bust is from the mid fourteenth century. Housed in the Aachener Dom’s treasury, it contains a skullcap thought to be that of the emperor.

​For the Carolingian period, ordinances from the royal court, capitularies, reveal hoarding and speculation, and document official attempts to control the prices and movements of grain, while annalists and hagiographers recount severe winters and droughts. All of this evidence sheds light on dearth. Yet the legislative acts point to internal pressures on food supply, while the narrative sources highlight external ones. As we have seen, neither pressure adequately explains subsistence crises alone.

Unfortunately, however, we rarely have evidence for endogenous and exogenous factors at the same time. Around the year 800, when Leo III crowned Charlemagne imperator, most evidence for dearth comes from the capitularies. Before and after, narrative evidence dominates. So Charlemagne’s food shortages appear to have had internal drivers, and Charles the Bald’s external ones. Or so the written sources lead us to believe.

Carolingian Europe as of August 843 following the Treaty of Verdun. Under rex and imperator Charlemagne (742-814), Carolingian territory stretched to include the area of Europe outlined here.

​Fortunately, evidence from other disciplines allows historians to fill in some of the gaps. External pressures are easier to establish by turning to the palaeoclimatic sciences. Using them, we are beginning to rewrite the history of continental European dearth, weather and climate from 750 to 950 CE. We are working on a new study that combines a near-exhaustive assessment of Carolingian written evidence for subsistence crises and weather with scientific evidence for changes in average temperature, precipitation, and volcanic activity (which can influence climate).

We are trying to answer some big questions, such as: What role did droughts, hard winters and extended periods of heavy rainfall have in sparking, prolonging or worsening Carolingian food shortages? Were these external forces the classic triggers of dearth that many early medievalists think they were?

Indicators of past climate embedded in trees and ice can test and corroborate observations of anomalous temperature and precipitation. For instance, the droughts of 794 and 874 CE, documented respectively in the Annales Mosellani and Annales Bertiniani, show up in the tree ring-based Old World Drought Atlas (OWDA, see below). Additionally, as McCormick, Dutton and Mayewski demonstrated, multiple severe Carolingian winters also align fairly neatly with atmosphere-clouding Northern Hemisphere volcanism reconstructed using the GISP2 Greenlandic ice core.

By marrying written and natural archives, we can sharpen our picture of the scale and extent of the weather extremes that coincide with Carolingian periods of dearth. Yet instead of simply providing answers, our integrated data are raising questions, and pushing us towards a messier history of early medieval food shortage. This is because the independent lines of evidence often do not agree. For example, only two of the 15 driest years between 750 and 950 CE in the OWDA coincide with drought in Carolingian sources.
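The mismatch just described amounts to a simple set comparison. In the sketch below, only the two documented drought years (794 and 874) come from the sources discussed here; every other year is a made-up placeholder standing in for the OWDA's actual ranking, which is not reproduced.

```python
# Illustrative comparison of documented droughts with tree-ring "driest years".
# Only 794 and 874 are from the annals discussed above; the other thirteen
# years are hypothetical placeholders, NOT real OWDA values.
documented_drought_years = {794, 874}

owda_driest_years = {756, 762, 794, 801, 808,   # stand-in for the "15 driest
                     815, 822, 836, 841, 850,   # years between 750 and 950 CE"
                     861, 874, 882, 895, 921}

overlap = documented_drought_years & owda_driest_years
print(f"{len(overlap)} of {len(owda_driest_years)} driest years coincide "
      f"with a documented drought: {sorted(overlap)}")
```

However the placeholder years are chosen, the structure of the problem is the same: two small, partially overlapping sets of years, one from written archives and one from natural ones.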

Admittedly, some of this dissonance may be artificial. The written record for weather and dearth is incomplete: some places and times during the Carolingian era, broadly defined as it is here, are poorly documented. And reported drought years can appear relatively wet in the tree-based OWDA in some Carolingian regions (parts of northern Italy and Provence in 794 and 874, for instance).

Moreover, the detailed or “high-resolution” palaeoclimatology available now for early medieval Europe is much better for some regions than others. Tree-ring series extending back to 750 presently exist for few European regions. It is simply not possible to precisely pair some reported weather extremes or dearths to palaeoclimate reconstructions. Indeed, spatially the two lines of evidence can be mismatched. They can also be seasonally inconsistent, as the trees tell us far less about temperature and precipitation in the winter than they do for the summer.

Matches between historical and scientific evidence are therefore generally limited to the growing seasons, in places where written sources and palaeoclimate data overlap. That is enough to yield some surprising results. Even when the written record is densest, there is natural evidence for severe weather and rapid climate change unaccompanied by any report of food shortage.

Take the dramatic drop in average temperatures registered in European trees at the opening of the ninth century. According to the 2013 PAGES 2K Network European temperature reconstruction, temperatures were cooler around the time of Charlemagne’s coronation than they had been at any time between the mid sixth and early eleventh centuries. This dramatic cooling aligns well with a relatively small Northern Hemisphere volcanic eruption, detected in the recent ice-core reconstruction of volcanism led by Sigl. The eruption would have ejected sunlight-scattering sulfur aerosols into the atmosphere. Notably, larger events in the Carolingian era, like those of 750, 817 and 822, clearly had less of an influence on European temperature. The cold of 800 is equally pronounced but less unusual in a tree-based temperature reconstruction from the Alps. In this series, the late 820s are remarkably cooler.

Documentary sources register the falling temperatures. The Carolingian Annales regni francorum report severe growing-season frosts (aspera pruina) in 800. The Irish Annals of Ulster document a difficult and mortal winter in an entry quite possibly misdated in the Hennessy edition at 798 (799 or the 799/800 winter is more likely). Yet surprisingly, there is no contemporary record of food shortages in Europe.

Scholars tend to focus on instances when the written evidence for dearth and the natural evidence for anomalous weather align tidily. It seems that just as often, however, the two lines of evidence do not match so neatly. Severe weather may not always have triggered dearth in the early Middle Ages. Contemporary peoples could apparently cope with weather extremes in ways that allowed them to escape food shortages.

Early medieval vulnerability to external forces of dearth seems to have varied over space and time. We need to investigate the contrasting abilities of peoples from different early medieval regions and subperiods, participating in distinct agricultural economies with their own agrarian technologies, to withstand plant-damaging environmental extremes.

Several studies already suggest early medievals were capable of responding to gradual climate change. But to argue that they were not rigid or helpless when faced with marked seasonal temperature or precipitation anomalies, we must first identify, from sparse sources, potential moments of resilience. In this we run the risk of reading too much into absences of evidence. Yet the conclusion seems inescapable: when written sources are relatively abundant and there is no record of dearth during notable deviations in temperature and precipitation, early medievals must have adapted successfully.

Going forward, we must identify both moments and mechanisms of early medieval resilience in the face of climate change. Teasing these out from diverse sources might be tough going, but these elements are missing from the history of early medieval dearth and climate. Their omission has allowed for misleadingly neat histories of climate change and disaster in the period. Similar problems might well plague other histories that too clearly link climate changes to food shortages and mortality crises. Research that complicates these links could offer compelling new insights about our warmer future.

Authors' note: this is a short sampling of a much longer and more detailed multidisciplinary examination of Carolingian dearth, weather and climate, currently in preparation.

P.E. Dutton, “Charlemagne’s Mustache” and “Thunder and Hail over the Carolingian Countryside” in his Charlemagne’s Mustache and Other Cultural Clusters of a Dark Age (Palgrave, 2004), pp. 3-42, 169-188.

In the year 1001 CE, Leif Erikson made landfall in Greenland, and traded with people who “in their purchases preferred red cloth; in exchange they had furs to give.” The Vikings called these people Skraelings. Present-day archeologists and historians call them the Thule. At its height, Thule civilization spread from its origins along the Bering Strait across the Canadian Arctic and into Greenland. The ancestors of today’s Inuit and Inupiat, the Thule accomplished what Erikson and subsequent generations of Europeans never managed: living in the high Arctic without supplies of food, technology, and fuel from more temperate climates.

The Thule left archeological evidence of a technologically sophisticated, vigorous people. They invented the umiak, an open walrus-hide boat so large that it was sometimes equipped with a sail. These boats, when used alongside small, nimble kayaks, made the Thule formidable marine-mammal hunters. On land, they harnessed dogs to sleds and built homes half-underground, insulated by earth and beamed with whale bones.

People did inhabit the high North American Arctic before the Thule. Their immediate predecessors, called the Dorset by archeologists, were expert carvers, and there are signs of other cultures that date back at least five thousand years. But the Thule appear to have been a particularly robust society, one that inhabited thousands of challenging Arctic miles. Eventually, they even traded with Europeans for metal tools, sending walrus ivory as far abroad as Venice.

In the twentieth century, many archeologists linked the success of the Thule to the climate. In this view, rapid Thule expansion coincided with the Medieval Warm Period in the years between 1000 and 1300. The Thule were expert whalers, especially of bowhead whales, a slow species that makes for good prey: their 100-ton bodies can be fifty percent fat by volume, giving people ample calories to eat and burn through long winters. With the slight increase in temperature during the Medieval Warm Period, the theory went, the range of the bowhead whale expanded across newly ice-free waters. Atlantic and Pacific bowhead populations eventually met in the Arctic Ocean north of Canada, offering an uninterrupted banquet of blubber to hunters.

The Thule, in this view, were simply whale hunters who followed the migration of their prey in a warming climate. Environmental conditions, not a sophisticated culture, were the key explanation for their success. Emphasizing climate as the cause of migration and social success reduced the achievements of the Thule, essentially, to those of their prey.

However, twenty-first century evidence is changing this account of Thule migration. In 2000, Robert McGhee questioned the validity of the radiocarbon dates that helped establish Thule expansion as an eleventh-century phenomenon. He proposed the 1200s as the earliest date of migration. Then, genetic tests by marine biologists showed that Atlantic and Pacific bowhead whales did not mix their populations during the Medieval Warm Period, meaning that there was a substantial gap in whaling possibilities on the Arctic coast.

Something more complicated than just following the blubber drove the Thule eastward. McGhee speculated that communities moved for iron, which is in short supply in the Arctic. Thule hunters learned from the Dorset people of a deposit left by the Cape York meteorite. They colonized huge territories to secure their access to this precious resource from outer space. Other specialists theorized that population pressure, overhunting, or warfare led the Thule to migrate east.

The ongoing work of Canadian archeologists T. Max Friesen and Charles D. Arnold seems to confirm that we must look beyond simple climatic explanations for the Thule expansion. Working on Beaufort Sea and Amundsen Gulf sites, the pair established that there was no definitive Thule occupation in this part of the western Arctic prior to the thirteenth century. Because any Thule migrants would have had to pass through these points as they moved east, their research indicates that the Thule civilization was only beginning its continental spread around the year 1200, well into the period of warming. The climate may have helped the Thule quickly spread toward Greenland, but the onset of the Medieval Warm Period did not automatically draw people eastward.

Moreover, the work of other archeologists on the Melville Peninsula, along Baffin Bay, indicates that the Medieval Warm Period was not always so warm. Some areas of the Arctic saw slight temperature increases, but in general the millennium was cooler than those that preceded it. In places, the effects of the so-called Little Ice Age began a century or two before they were evident across the globe, meaning the Thule adapted not to a warmer Arctic, but a colder one. This cooling was more apparent in the west, where the team found fewer Thule sites but also more stability, both in the climate and in the record of human occupation. To the east of the Melville Peninsula, where temperatures did warm, the climate was also more variable – adding a new set of complexities to social and economic life. The move into the central Arctic, therefore, reflected forces other than climate.

Beginning in the fifteenth century, Thule culture fragmented, specialized, and eventually emerged as distinct contemporary Inuit and Inupiat groups. The Little Ice Age is often blamed for this disintegration. Yet the work of Finkelstein, Ross, and Adams indicates that, while the Thule abandoned some sites due to cooling trends, this did not hold in all cases. Other causes, including increased contact with Europeans and their infectious diseases, might have had more to do with the disintegration in some locations.

Overall, the new vision of Thule prominence in the Arctic makes their rise shorter, but even more impressive. And if the Thule began their migration only in 1200, it seems unlikely they spread east simply to find iron; that would have required only small-scale movements to precise locations. Instead, the Thule developed a thriving, intricate network of settlements across the Arctic. For Friesen and Arnold, this is evidence that the Thule expanded in order to recreate the ideological and economic lives that they had enjoyed in their origins along the Bering Strait. And in just a century they did, not only by inhabiting land from the Bering Strait to Greenland, but through explorations to the northern edges of the continent.

All of this also helps us reinterpret a well-known tale from the Viking exploration of the Arctic. When Leif Erikson’s sister Freydis frightened off a band of Skraelingar in the early eleventh century by striking “her breast with the naked sword” of a fallen Viking, she was likely not fighting the Thule, as scholars have assumed. Perhaps it was the Dorset people that “were frightened, and rushed off in their boats.” The Thule, at least, were likely still a century away from the eastern Canadian coastline. They were not easily daunted either by a shifting climate or by Viking weapons.

McGhee, Robert. “Radio Carbon Dating and the Timing of the Thule Migration,” in Appelit, M., Berglund, J., and Gullov, H.C., eds., Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 181-191.

Dyke, Arthur S., James Hooper, and James M. Savelle. “A History of Sea Ice in the Canadian Arctic Archipelago based on Postglacial Remains of the Bowhead Whale (Balaena mysticetus),” Arctic 49 (1996): 235-255.

Park, Robert W. “The Dorset-Thule Succession Revisited,” in Appelit, M., Berglund, J., and Gullov, H.C., eds., Identities and Cultural Contacts in the Arctic: Proceedings from a Conference at the Danish National Museum. Copenhagen (2000): 192-205.

It's Maunder Minimum Month at HistoricalClimatology.com. This is our first of two feature articles on the Maunder Minimum. The second, by Gabriel Henderson of Aarhus University, will examine how astronomer John Eddy developed and defended the concept.

​Although it may seem like the sun is one of the few constants in Earth’s climate system, it is not. Our star undergoes both an 11-year cycle of waning and waxing activity, and a much longer seesaw in which “grand solar minima” give way to “grand solar maxima.” During the minima, which set in approximately once per century, solar radiation declines, sunspots vanish, and solar flares are rare. During the maxima, by contrast, the sun crackles with energy, and sunspots riddle its surface.

The most famous grand solar minimum of all is undoubtedly the Maunder Minimum, which endured from approximately 1645 until 1720. It was named after Edward Maunder, a nineteenth-century astronomer who painstakingly reconstructed European sunspot observations. The Maunder Minimum has become synonymous with the Little Ice Age, a period of climatic cooling that, according to some definitions, endured from around 1300 to 1850, but reached its chilliest point in the seventeenth century.

​During the Maunder Minimum, temperatures across the Northern Hemisphere declined, relative to twentieth-century averages, by about one degree Celsius. That may not sound like much – especially in a year that is, globally, still more than one degree Celsius hotter than those same averages – but consider: seventeenth-century cooling was sufficient to contribute to a global crisis that destabilized one society after another. As growing seasons shortened, food shortages spread, economies unraveled, and rebellions and revolutions were quick to follow. Cooling was not always the primary cause for contemporary disasters, but it often played an important role in exacerbating them.

Life on the ice in Amsterdam, during a typically frigid winter in the Maunder Minimum. Source: Jan Abrahamsz. Beerstraten, "Het Paalhuis en de Nieuwe Brug te Amsterdam in de Winter," 1640-1666.

Many people – scholars and journalists included – have therefore assumed that any fall in solar activity must lead to chillier temperatures. When solar modelling recently predicted that a grand solar minimum would set in soon, some took it as evidence of an impending reversal of global warming. I even received an email from a heating appliance company that encouraged me to hawk their products on this website, so our readers could prepare for the cooler climate to come! Of course, the warming influence of anthropogenic greenhouse gases will overwhelm any cooling brought about by declining solar activity.

In fact, scientists still dispute the extent to which grand solar minima or maxima actually triggered past climate changes. What seems certain is that especially warm and cool periods in the past overlapped with more than just variations in solar activity. Granted, many of the coldest decades of the Little Ice Age coincided with periods of reduced solar activity: the Spörer Minimum, from around 1450 to 1530; the Maunder Minimum, from 1645 to 1720; and the Dalton Minimum, from 1790 to 1820. However, one of the chilliest periods of all – the Grindelwald Fluctuation, from 1560 to 1630 – actually unfolded during a modest rise in solar activity. Volcanic eruptions, it seems, also played an important role in bringing about cooler decades, as did the natural internal variability of the climate system. Both the absence of eruptions and a grand solar maximum likely set the stage for the Medieval Warm Period, which is now more commonly called the Medieval Climate Anomaly.

This gets to the heart of what we actually mean when we use a term like “Maunder Minimum” to refer to a period in Earth’s climate history. Are we talking about a period of low solar activity? Or are we referring to an especially cold climatic regime? Or are we talking about chilly temperatures and the changes in atmospheric circulation that cooling set in motion? In other words: what do we really mean when we say that the Maunder Minimum endured from 1645 to 1720? How does our choice of dates affect our understanding of relationships between climate change and human history in this period?

​To find an answer to these questions, we can start by considering the North Sea region. This area has yielded some of the best documentary sources for climate reconstructions. They allow environmental historians like me to dig into exactly the kinds of weather that grew more common with the onset of the Maunder Minimum. In Dutch documentary evidence, for example, we see a noticeable cooling trend in average seasonal temperatures that begins around 1645. On the surface of things, it seems like declining solar activity and climate change are very strongly correlated.

And yet, other weather patterns seem to change later, one or two decades after the onset of regional cooling. Weather variability from year to year, for example, becomes much more pronounced after around 1660, and that erraticism is often associated with the Maunder Minimum. Severe storms were more frequent only by the 1650s or perhaps the 1660s, and again, such storms are also linked to the Maunder Minimum climate. In the autumn, winter, and spring, easterly winds – a consequence, perhaps, of a switch in the setting of the North Atlantic Oscillation – increased at the expense of westerly winds in the 1660s, not twenty years earlier.

All of these weather conditions mattered profoundly for the inhabitants of England and the Dutch Republic: maritime societies that depended on waterborne transportation. Rising weather variability made it harder for farmers to adapt to changing climates, but often made it more profitable for Dutch merchants to trade grain. More frequent storms sank all manner of vessels but sometimes quickened journeys, too. Easterly winds gave advantages to Dutch fleets sailing into battle from the Dutch coast, but westerly winds benefitted English armadas. If we define the Maunder Minimum as a climatic regime, not (just) a period of reduced sunspots, and if we care about its human consequences, what should we conclude? Did the Maunder Minimum reach the North Sea region in 1645, or 1660?

​These problems grow deeper when we turn to the rest of the world. Across much of North America, temperature fluctuations in the seventeenth century did not closely mirror those in Europe. There was considerable diversity from one North American region to another. Tree ring data suggest that northern Canada experienced the cooling of the Maunder Minimum. Western North America also seems to have been relatively chilly in the seventeenth century, although chillier temperatures there probably did not set in during the 1640s.

By contrast, cooling was moderate or even non-existent across the northeastern United States. Chesapeake Bay, for instance, was warm for most of the seventeenth century, and only cooled in the eighteenth century. Glaciers advanced in the Canadian Rockies not in the seventeenth century, but rather during the early eighteenth century. Their expansion was likely caused by an increase in regional precipitation, not a decrease in average temperatures.

Still, the seventeenth century was overall chillier in North America than the preceding or subsequent centuries, and landmark cold seasons affected both shores of the Atlantic. The consequences of such frigid weather could be devastating. The first settlers to Jamestown, Virginia had the misfortune of arriving during some of the chilliest and driest weather of the Little Ice Age in that region. Crop failures contributed to the dreadful mortality rates endured by the colonists, and to the brief abandonment of their settlement in 1610.

An early seventeenth-century map of New France. Just look at all those trees! Source: Samuel de Champlain, "Map of New France," 1612.

Moreover, many parts of North America do seem to have warmed in the wake of the Maunder Minimum, in the eighteenth century. This too could have profound consequences. In the seventeenth century, settlers to New France had been surprised to discover that their new colony was far colder than Europe at similar latitudes. They concluded that its heavy forest cover was to blame, and with good reason: forests do create cooler, cloudier microclimates. Just as the deforestation of New France started transforming, on a huge scale, the landscape of present-day Quebec, the Maunder Minimum ended. Settlers in New France concluded that they had civilized the climate of their colony, and they used this as part of their attempts to justify their dispossession of indigenous communities.

Despite eighteenth-century warming in parts of North America, the dates we assign to the Maunder Minimum do look increasingly problematic when we look beyond Europe. If we turn to China, we encounter a similar story. Much of China was actually bitterly cold in the 1630s and early 1640s, before the onset of the Maunder Minimum elsewhere. This, too, had important consequences for Chinese history. Cold weather and precipitation extremes ruined crops on a vast scale, contributing to crushing famines that caused particular distress in overpopulated regions. The ruling Ming Dynasty seemed to have lost the “mandate of heaven,” the divine sanction that, according to Confucian doctrine, kept the weather in check. Deeply corrupt, riven by factional politics, undermined by an obsolete examination system for aspiring bureaucrats, and scornful of martial culture, the regime could adequately address neither widespread starvation, nor the banditry it encouraged.

Climatic cooling caused even more severe deprivations in neighboring, militaristic Manchuria. There, the solution was clear: to invade China and plunder its wealth. The first Manchurian raid broke through the Great Wall in 1629, a warm year in other parts of the Northern Hemisphere. Ultimately, the Manchus capitalized on the struggle between Ming and bandit armies by seizing China and founding the Qing (or "Pure") Dynasty in 1644.

China under the Ming Dynasty was arguably the most powerful empire of its time. Even as it unravelled in the early seventeenth century, its cultural achievements were impressive, as this painting of fog makes clear. Source: Anonymous, "Peach Festival of the Queen Mother of the West," early 17th century.

This entire history of cooling and crisis predates the accepted starting date of the Maunder Minimum. Yet, the fall of the Ming Dynasty unfolded in one relatively small part of present-day China. Average temperatures in that region reached their lowest point in the 1640s. By contrast, average temperatures in the Northeast warmed by the middle of the seventeenth century. Average temperatures in the Northwest also warmed slightly during the mid-seventeenth century, and then cooled during the late Maunder Minimum.

​Smoothed graphs that show fluctuations in average temperature across centuries or millennia give the impression that dating decade-scale warm or cold climatic regimes is an easy matter. Actually, attempts to precisely date the beginning and end of just about any recent climatic regime are sure to set off controversy. This is not only because global climate changes have different manifestations from region to region, but also because climate changes, as we have seen, involve much more than shifts in average annual temperature. Did the Maunder Minimum reach northern Europe, for instance, when average annual temperatures declined, when storminess increased, when annual precipitation rose or fell, or when weather became less predictable?

Historians such as Wolfgang Behringer have argued that, when dating climatic regimes, we should also consider the “subjective factor” of human reactions to weather. For historians, it makes little sense to date historical periods according to wholly natural developments that had little impact on human beings. Maybe historians of the Maunder Minimum should consider not when temperatures started declining, but rather when that decline was, for the first time, deep enough to trigger weather that profoundly altered human lives. When we consider climate changes in this way, we may be more inclined to subjectively date climatic regimes using extreme events, such as especially cold years, or particularly catastrophic storms. Dating climate changes with an eye to human consequences does take historians away from the statistical methods and conclusions pioneered by scientists, but it also draws them closer to the subjects of historical research.

In my work, I do my best to combine all of these definitions, and incorporate many of these complexities. I date climatic regimes by considering their cause – solar, volcanic, or perhaps human – and by working with statisticians who can tell me when a trend becomes significant. However, I also try to consider the many different kinds of weather associated with a climatic shift, and the consequences that extremes in such weather could have for human beings.

As you might expect, this is not always easy. I have long held that the Maunder Minimum, in the North Sea region, began around 1660. Increasingly, I find it easier to begin with the broadly accepted date of 1645, but distinguish between different phases of the Maunder Minimum. An earlier phase marked by cooling might have started in 1645, but a later phase marked by much more than cooling took hold around 1660.

These are messy issues that yield messy answers. Yet we must think deeply about these problems. Not only can such thinking affect how we make sense of the deep past, but it can also provide new perspectives on modern climate change. When did our current climate of anthropogenic warming really start? At what point did it start influencing human history, and where? What can that tell us about our future? These questions can yield insights on everything from the contribution of climate change to present-day conflicts, to the timing of our transition to a thoroughly unprecedented global climate, to the urgency of mitigating greenhouse gas emissions.

​Ask most people about climate change, and you will soon find that even the relatively informed make two big assumptions. First: the world’s climate was more or less stable until recently, and second: human actions started changing our climate with the advent of industrialization. If you have spent any time reading through this website, you will know that the first assumption is false. For millions of years, changes in Earth’s climate, driven by natural forces, have radically transformed the conditions for life on Earth. Admittedly, the most recent geological epoch – the Holocene – is defined, in part, by its relatively stable climate. Nevertheless, regional and even global climates have still changed quickly, and often dramatically, in ways that influenced societies long before the recent onset of global warming.

Take, for example, the sixteenth century. Relative to early twentieth-century averages, the decades between 1530 and 1560 were relatively mild in much of the northern hemisphere. Yet, after 1565, average annual temperatures in the northern hemisphere fell to at least one degree Celsius below their early twentieth-century norms. Despite substantial interannual variations, temperatures remained generally cool until the aftermath of a bitterly cold “year without summer,” in 1628. Since the expansion of the glacier near Grindelwald, a Swiss town, was among the clearest signs of a chillier climate, these decades are collectively called the “Grindelwald Fluctuation.” It was one of the coldest periods in a generally cool climatic regime that is today known as the “Little Ice Age.”

Length changes of four well-documented Alpine glaciers, based on historical sources.

​Volcanic eruptions undoubtedly caused some of the cooling. In 1595, the eruption of Nevado del Ruiz released sulphur aerosols into the atmosphere, scattering sunlight and thereby cooling the planet. Just five years later, Huaynaputina exploded in one of the most powerful volcanic explosions of the past 2,500 years. Major volcanic eruptions following in close proximity to one another can trigger long-term cooling by activating "positive feedbacks" in different parts of Earth’s climate system. In the Arctic, for example, volcanic dust veils lead to chillier temperatures, which can increase the extent of Arctic ice that, through its high albedo, reflects more sunlight into space than the water or land it replaces. That in turn leads to even cooler temperatures, more ice, and so on.

However, the onset of the Grindelwald Fluctuation preceded the eruption of Nevado del Ruiz by some thirty years. Clearly, volcanoes were not the only culprits for the colder climate. Some scientists believe that low solar activity also played a role. Yet, although the sun was less active during the Grindelwald Fluctuation than it is today, it was still more active than it was in most other decades of the Little Ice Age.

Solar minima and maxima over the past 2,000 years. The Grindelwald Fluctuation divides the Spörer Minimum from the Maunder Minimum.

That leads us to our second assumption: the idea that anthropogenic climate change began with industrialization. Most scholars of past climate change would still agree, but that might be changing. Recently, a growing body of evidence has started to suggest that humanity’s impact on Earth’s climate might be much older. Human depravity, it seems, might have been to blame for the cooling of the sixteenth century.

Back in 2003, palaeoclimatologist William Ruddiman proposed that humans were to blame for preindustrial climate change, in a groundbreaking article that shocked the scientific community. Two years later, he thoroughly explained and defended his conclusions in a book called Plows, Plagues, and Petroleum: How Humans Took Control of Climate. Ruddiman argued that humanity had slowly but progressively altered Earth’s atmosphere since the widespread adoption of agriculture. Some 8,000 years ago, communities in China, Europe, and India made room for agricultural monocultures by burning away forests. According to Ruddiman, the scale of deforestation was enough to steadily increase the concentration of atmospheric carbon dioxide. Then, from around 5,000 years ago, rice farming and, to a lesser extent, livestock cultivation slowly raised atmospheric methane concentrations. Ruddiman concluded that the cumulative effect of these anthropogenic greenhouse gas emissions was to gradually increase average global temperatures, and perhaps ward off another ice age.

Ruddiman also argued that, since the adoption of agriculture, temporary fluctuations in atmospheric carbon dioxide concentrations followed from dramatic changes in human populations. During major disease outbreaks, Ruddiman insisted, populations declined to such an extent that agricultural land went untilled on a vast scale. Woodlands expanded, pulling more carbon dioxide out of the atmosphere than the agricultural crops they replaced, and thereby cooling the planet. When populations recovered, farmers burned down forests and planted their monocultures, warming the Earth.

Mesoamericans infected with smallpox, from the Florentine Codex by Bernardino de Sahagún.

​During the sixteenth century, Spanish soldiers and settlers established a vast empire by waging environmentally destructive wars on the indigenous peoples of Central and South America. They forced many of the survivors into new settlement patterns and gruelling forced labour. They also disrupted indigenous ways of life by appropriating, and often transforming, regional environments. Their animals, plants, and pathogens encountered virgin populations and spread rapidly. Indigenous communities in hot, humid climates were especially vulnerable to outbreaks of Eurasian crowd diseases, which included smallpox, measles, influenza, mumps, diphtheria, typhus, and pulmonary plague. Recent population modelling suggests that the population of the Americas declined from approximately 61 million in 1492 to six million in 1650.

By the late sixteenth century, this holocaust was well underway. Land previously colonized by indigenous communities through controlled burning or the planting of agricultural monocultures gradually reverted to woodlands. While all plants inhale carbon dioxide and exhale oxygen, tropical rainforests are much more effective carbon sinks than human crops. In the Americas, reforestation on a vast scale probably lowered atmospheric concentrations of carbon dioxide by 7 to 10 parts per million between 1570 and 1620. Human cruelty may therefore have contributed to the climatic cooling also caused by volcanic eruptions and, maybe, a decline in solar radiation relative to modern or medieval norms.

A Spanish depiction of Tenochtitlan, the Aztec capital, 1524.

A growing body of scholarship now provides evidence for these relationships. However, there are many questions that must be answered before we can confidently conclude that depopulation helped trigger the Grindelwald Fluctuation, let alone other episodes of climatic cooling. For instance: did the cooling effect of sixteenth-century reforestation in the Americas overwhelm the warming influence of contemporaneous deforestation in China and India? Were invasive species introduced by Europeans into the Americas incapable of preventing reforestation? Was the pace of depopulation, and in turn reforestation, really so fast and so universal that it could substantially reduce atmospheric carbon dioxide concentrations over the course of a few decades?

It will take a while to answer these questions, but some scholars are already drawing big conclusions. Earlier this year, geographers Simon Lewis and Mark Maslin argued that the cooling set in motion by the depopulation of the Americas could be considered the beginning of the “Anthropocene,” the proposed geological epoch dominated by human transformations of the world’s environment. Dating big changes in geological time is tricky business. The changes must be visible in the global stratigraphic record – that is, in rock layers – and they must be traceable to a specific date. Lewis and Maslin lean on earlier environmental histories of the “Columbian Exchange,” the European transfer of plants, animals, and pathogens between the New and Old Worlds. The impact was a global biotic homogenization that, according to Lewis and Maslin, should be visible in the stratigraphic record. That still leaves them without a specific date, however. They settle on 1610, because that was when atmospheric carbon dioxide levels reached a minimum caused, they say, by European depopulation of the Americas. ​

Warships, stuck in the ice during the Dutch Revolt.

​There may be one more wrinkle to this sad story. In a forthcoming book, I argue that the Dutch revolt against the Spanish empire was provoked, in part, by high food prices that followed from harvest failures during the chilly onset of the Grindelwald Fluctuation. Then, until the early seventeenth century, the Dutch rebellion benefitted from a chilly climate. Dutch fortifications routinely forced Spanish armies to stay in the field during the frigid winters of the Grindelwald Fluctuation. The effect on Spanish soldiers could be disastrous. It is possible, therefore, that Spanish conquests in one part of the world contributed to climate changes that benefitted a rebellion against Spanish rule in another.

If so, the Eighty Years’ War may provide one of the first examples of such a self-defeating climate history of violence. It was certainly not the last. Recently, interdisciplinary researchers have found similar connections at work in the Syrian civil war. In a poorly governed society already destabilized by migrants fleeing the American invasion of Iraq, a severe drought caused, in part, by anthropogenic warming created fertile conditions for rebellion. The countries now at war in Syria and Iraq include those most responsible for the climate change that helped set the conflict in motion. Studying the Grindelwald Fluctuation may provide deep context for these relationships, by rooting them in a long history of violence and environmental transformation. It may also show that both assumptions commonly held about climate change are wrong.

In Europe, the “Bronze Age” lasted some 2,600 years, from approximately 3200 BCE to roughly 600 BCE. In this period, bronze tools were forged for the first time, revolutionizing how Europeans manipulated their world and competed for resources. The first trading networks connected the continent, as navigational knowledge reached heights that Europeans would not exceed until the fifteenth century. Centralized “palace economies” flourished throughout Europe and the Middle East, in ancient civilizations we remember today: on Minoan Crete, in Mycenaean Greece, in the Mesopotamian conquests of the Hittites and Akkadians, and of course in Egypt. Then, in the centuries around 1000 BCE, populations collapsed across Europe and the Middle East, sometimes in remarkably sudden events that must have been even more traumatic than the fall of the Roman Empire. In many regions, small, scattered villages were all that remained of the great Bronze Age civilizations. In Europe, it would be centuries before societies of similar complexity would rise again.

Those who study past climates are drawn to disaster, and not without reason. If we can establish that social crises coincided with periods of abrupt climate change, we can be pretty sure that further investigation will turn up connections between climate and human history. Historians, archaeologists, anthropologists, and scientists often find that connections between climate and human activity are particularly clear, and especially well-documented, in times of crisis. It is no surprise, then, that scholars have sought to link the Bronze Age collapse to climate change.
For example, while surveying 250,000 years of climate history, historian John Brooke of Ohio State University argues in an ambitious new book that the onset of a “cold, dry climate has to be a fundamental explanation of the demise of the Bronze Age of the greater Mediterranean.” (Brooke, 2014) Harvests failed in a changing climate, and subsequent food shortages undermined palace economies while provoking mass migration. Civilizations clashed, populations mingled and therefore spread disease, and piracy spread across the Mediterranean. Other scholars have tied roughly synchronous collapse in Northwestern Europe to changing climatic conditions. (Raftery, 1994; Tipping et al., 2008)

It is a compelling story, especially because it appears to offer a vivid warning for us today. However, like many straightforward narratives that tie climate change to historical collapse, that story is being revised by cutting-edge, interdisciplinary scholarship. In a paper recently published in Proceedings of the National Academy of Sciences, a team of scientists under lead author Ian Armit of the University of Bradford set out to reconstruct the late Bronze Age climate with unprecedented precision. Archaeological activity has surged across Ireland, offering abundant new sources for radiocarbon dating. Altogether, the researchers analyzed 2,023 radiocarbon dates drawn from peat bogs and archaeological sites to build their new climate record.

An Irish "crannog" - a defensive structure - from the late Bronze Age. Photo by Christine Westerback.

They found that, in Northwestern Europe, populations began to decline more than a century before the late Bronze Age climate started to cool. Collapse in this part of Europe therefore cannot be tied to climate change. In fact, the authors argue that, all along, social and economic shifts were more than sufficient to explain the fall of regional Bronze Age civilizations. Trading networks and, in turn, stratified civilizations based around bronze production could not survive the advent of the Iron Age, when metals stronger than bronze were suddenly widely accessible. Not surprisingly, this thesis is not quite as straightforward as the scientists suggest, because in many places people only gradually transitioned from bronze to iron. Nor does the climatic history of Northwestern Europe necessarily translate to southern Europe and the Middle East. Moreover, historians like Brooke have long acknowledged that climate change is but one possible explanation among many for the late Bronze Age collapse.

Ian Armit and his coauthors conclude that, in an age of global warming, “it is easy to view climate as the primary driver of past cultural change,” but “such assumptions need to be critically assessed using high-precision chronologies” that “guard against misleading correlations.” Sometimes historical work could use a little more methodological rigour, and certainly scientists, archaeologists, and historians should be prepared to work together in uncovering the climate history of the distant past.

However, at other times excellent historical work is grounded in cutting-edge scientific data that later studies revise. That can undermine some compelling narratives, but it does not mean those narratives were never worth telling. Scholarship is a conversation, and that conversation gains depth through daring, provocative stories.

According to the most recent summary for policymakers published by the Intergovernmental Panel on Climate Change (IPCC), “climate change can indirectly increase risks of violent conflicts” by exacerbating the socially destabilizing influence of poverty and economic shocks. While the IPCC attaches “medium confidence” to this claim, it is hardly controversial. The IPCC reached similar conclusions in its 2007 assessment reports. Since then, several studies have established that warfare is correlated with climatic stress, although their methods ignore social and cultural contexts. Many of the world’s most advanced militaries are now at the forefront of state adaptation to global warming. The American military, for example, is not only curbing its greenhouse gas emissions, but is also actively preparing for conflict stimulated by future climate change.

But how is the conduct of war – not just its origins – actually influenced by climate change? In the latest issue of the journal Environment and History, I published an article that explores this question. In the seventeenth century, three wars between England and the Dutch Republic – then the leading maritime powers of their day – were fought during the onset of an especially chilly stretch of the Little Ice Age in Europe. In my article, I argue that the weather that accompanied the coming of this “Maunder Minimum” affected military operations during the wars in complex and often counter-intuitive ways.

Dutch and English fleets clash in the final battle of the First Anglo-Dutch War. Jan Abrahamszoon Beerstraten, “Slag bij Ter Heijde,” 1653-1666.

The First Anglo-Dutch War, contested between 1652 and 1654, actually preceded the cooling of the Maunder Minimum. I used ship logbooks, correspondence, intelligence reports, and diary entries written during the war to demonstrate that frequent westerly winds were associated with warmer temperatures during the early 1650s. That usually allowed English fleets sailing from the west to claim the “weather gage,” the windward position relative to the enemy that, in naval combat, granted initiative in attack and, occasionally, retreat. The English navy had developed revolutionary tactics in which ships of great size would bombard enemy hulls while sailing past them in line formation. By contrast, Dutch tactics still mandated grappling, boarding, and firing at enemy rigging (ironic, since a Dutch admiral had first debuted “line of battle” tactics some fifteen years earlier). English tactics required favourable winds, and English fleets got them in the First Anglo-Dutch War. The Dutch Republic was rich enough to survive several naval reversals, and its shipyards productive enough to stave off defeat. However, on balance the First Anglo-Dutch War was far more costly for the Dutch than it was for the English.

Human and environmental structures had shifted by the onset of the Second Anglo-Dutch War in 1664. Seasonal temperatures were more variable but generally cooler, storms had become more frequent and more severe, and easterly winds had grown more common. Meanwhile, the Dutch had adopted many of the most effective elements of the English naval system. Dutch fleets sailing to battle from the east now did so with the weather gage, and they were often victorious. Moreover, because English vessels had three tiers of guns while Dutch ships had only two, many English guns were located near the water and had to be retracted in the high winds that were more common in a cooler climate. Easterly winds also allowed the Dutch fleet to raid up the Medway River in 1667, forcing the English crown into a peace that clearly benefitted the Dutch.

Dutch ships capture an English flagship during the Second Anglo-Dutch War. Willem van de Velde the Younger, “The Capture of the Royal Prince,” 1666.

In the Third Anglo-Dutch War, the climate of the Maunder Minimum manifested in weather that was defined less by easterly winds than by incessant storminess. This time, the Dutch Republic was invaded by French and German armies while besieged at sea by a united Anglo-French fleet. However, in the summer of 1672 relentless gales kept the allied fleet from supporting an invasion from the sea, just as the Dutch fleet was partially disbanded so its soldiers and artillery could defend against invasion on land. Thereafter, Dutch admiral Michiel de Ruyter conducted a remarkably successful guerrilla campaign, aided by frequent easterly winds. The Dutch Republic survived its greatest crisis of the seventeenth century, and England signed another concessionary peace in 1674.

So, what does this seventeenth-century story tell us about war and climate change today? First, it demonstrates again that climate change is mediated by human decisions, institutions, and cultures. The Dutch Admiralties might not have prevailed in the second and third wars had they not learned from the success of the English, who might have won the third war were it not for the leadership of De Ruyter. Second, the article reveals that military operations are influenced by short-term weather, which is often but certainly not inevitably affected by long-term climate change. The distinction is important, because the weather that most influences a battle can actually be an exception to the climatic trend.

Ultimately, far more studies are required that explore not only how climate change contributes to the cause of war, but also how it shapes wars once they begin.

Note: this article is a greatly condensed version of dissertation chapters that also examine how climate influenced weather that affected shipbuilding, marine intelligence networks, privateering, and warfare on land during the Anglo-Dutch Wars.

Dagomar Degroot, “‘Never such weather known in these seas:’ Climatic Fluctuations and the Anglo-Dutch Wars of the Seventeenth Century, 1652–1674.” Environment and History 20:2 (May 2014): 239-273.

United States Department of Defense, Quadrennial Defense Review Report. February 2010.

A week ago I returned from what was, surprisingly, my first trip to Germany. This year the European Society for Environmental History convened its biennial conference in Munich, a city I’ll remember for its beautiful architecture, sensible public transit and delicious beer. No fewer than fifteen climate history panels were part of the conference, and despite my best attempts I couldn't attend them all. Still, I decided to share some of what I learned (or remembered) while listening to papers that were good enough to keep me from exploring Munich. Note that for the purposes of this little article, the terms “climate history” and “historical climatology” are synonymous.

1. Climate history must be inclusive to be effective.

There is only limited value in mining one kind of documentary source for evidence of past weather and, in turn, past climate change. Of course, the value of such work fluctuates with the source under investigation: registers of past events called chronicles are notoriously prone to exaggeration, for example, while ship logbooks provide standardized, easily quantifiable and remarkably reliable weather observations. Still, a reconstruction of past climate that makes any claim to accuracy must, where possible, employ a wide variety of documentary evidence compiled by many authors. These reconstructions are strengthened when meteorological information contained in some documentary sources can be verified using the data contained in other documents. They gain even more credibility alongside scientific evidence like model simulations, or statistics developed using natural archives (ice cores or tree rings, for example). Good climatic reconstructions are necessarily interdisciplinary, and we should think carefully about what we are really saying when we discuss observations written by a single author, in a single source. Are we reconstructing the climate of a vast region across the decades, or are we engaging in literary criticism?

If interdisciplinary work is essential to historical climatology, interactions between sub-disciplines are just as important. Climate history is often considered a sub-discipline of environmental history, which, in turn, is one genre in the broader field of history. Agricultural history, forest history and energy history are all among the sub-disciplines that together constitute environmental history. Like climate history, they lose significance when divorced from one another. Reconstructions of past climates are fascinating, but historians can also incorporate the many different sub-disciplines and genres of their profession to do something scientists can’t: weave the history of climate into the history of humanity. The resulting narratives can be as valuable as the models developed by scientists as we struggle to understand our plight on a warming planet.

Beautiful Munich: the perfect city for a conference.

2. We're only scratching the surface of relevant documentary evidence.

Chronicles and weather diaries have long formed the backbone of the documentary evidence used to reconstruct past climates. Questions of interpretation hound both sources, however, and these days, correspondence and ship logbooks are increasingly in vogue. In Munich I heard scholars like Rudolf Brázdil describe how correspondence related to the collection of taxes can yield strikingly detailed climatic reconstructions. Tax records can be placed alongside court documents, toll accounts and maintenance registers as largely quantitative sources that can yield rich climatic data if interpreted using qualitative evidence. New sources – and new methods of source interpretation – can provide data about wind patterns, hailstorms, and other previously unexamined meteorological conditions, deepening our understanding of climate change.

3. We need new ways of conceptualizing the relationship between climate change and human history.

Fernand Braudel, perhaps the greatest historian of the twentieth century, introduced a revolutionary way of conceptualizing time. According to his notion of “total history,” different kinds of historical change transpire differently across time and space. Environmental or economic transformation at a vast scale spanned the centuries, yet the historical “event” was immediate. Understanding the past from this perspective allowed him to gather the entire Mediterranean world into a single narrative using the huge, lumbering structures of history.

The problem for historical climatologists – and indeed, all environmental historians – is that Braudel was wrong. Interdisciplinary research has revealed that many historical structures can be brittle; they can break quickly with immediate yet regionally specific ramifications. For example, climate change influenced by volcanic eruptions and the subsequent expansion of polar ice could alter prevailing weather patterns over the North Sea in one cruel winter. However, the same processes behind routinely cold winters in northern Europe could bring not frost but rather drought to the Mediterranean.

So different kinds of historical change can occur across many scales of time and space, and that complicates our attempt to conceptualize connections between past environments and historical events. Human history cannot be modeled – we simply lack all of the necessary variables – but in recent years environmental historians have made great progress moving beyond problematic concepts like the footprint metaphor or social metabolism in their efforts to conceptualize the connections between nature and society. In this effort historical climatologists lag behind their peers, and for that reason climate histories can devolve into lists in which weather events likely stimulated by a climatic shift cause environmental changes that contribute to one event after another in the history of a particular region. Historians need to work with colleagues across many disciplines to develop ways of understanding relationships between climate, weather and human history. These ways of understanding might not completely explain the past, but they might help us conceive of historical change more clearly, with insights applicable for our future on a warming planet.

Note: originally posted on The Otter, blog of the Network in Canadian History and Environment (NiCHE).

On February 10th I embarked on the first leg of a long voyage from Toronto to Goa, a former Portuguese enclave nestled among the beaches of western India. After enduring the concrete monolith that is Frankfurt’s international airport, I finally boarded my second flight and flew south through Turkey, past Syria, across Iran and down towards Mumbai. I left the plane at an hour past midnight. Mosquitos swarming through the airport quickly prompted me to take the malaria medication that would later give me incredibly vivid dreams. Hours later the shock of a violent landing in Goa was nothing compared to the culture shock that followed. As I left the airport and stepped onto the rust-coloured soil I saw signs promoting European luxury vehicles or American cologne towering over slums and endless trash amid lush tropical beauty. After three sunrises and two sunsets without sleep I finally arrived at my hotel, ignoring for the moment the hand-sized spider dangling near my door.

For those planning to attend next month’s ASEH conference, Toronto does not look like this.

With the help of funding generously provided by the Network in Canadian History and Environment (NiCHE), I had travelled nearly 13,000 kilometers to attend the fourth Open Science Meeting (OSM) organized by the Past Global Changes (PAGES) initiative. A core project of the International Geosphere-Biosphere Programme, PAGES has over 5,000 subscribing scientists across more than 100 countries. Because research supported by PAGES explores past environments to create a roadmap for the future, the initiative is especially concerned with climate change. Every four years its Open Science Meeting is held in a new location, and in case the Olympic parallels were not obvious enough, a “PAGES lamp” was lit at the opening ceremonies. It may not have resembled London’s burning torch, but it did avoid the mishap that embarrassed my fellow Canadians at the Vancouver Olympics.

It’s easy for historians to forget that we don’t have a monopoly over the interpretation of the past. There’s nothing like a scientific conference to remind us that we can only access a tiny sliver of the very recent past, and that other disciplines can find voices which speak to the present in sources beyond the documents we hold sacred. Many of the scientists at the OSM reconstructed past climates to measure the significance of modern warming, to unravel how climatic shifts influence different environments, and to provide a clearer picture of the world’s natural history. In papers and posters scientists presented results derived from the exhaustive analysis of, for example, changes in the growth of trees, the thickness of permanent ice cover and the scope of lakebed deposits. Conclusions were compared with other data that measured shifts in animal ranges, tree lines or glacial extent, all of which can be used to reconstruct changes in regional temperature or precipitation. Evidence from these so-called “proxies” was weighed against a range of sophisticated models, enabling projections of climates past that move seamlessly into the present and future.

Not surprisingly, correlating fluctuations in diverse proxy records and tying them to climatic trends is hardly straightforward. Physicist Ashok Kumar Singhvi gave an opening keynote address that exposed the frequently overlooked complexity of linking different kinds of data between different environments at different scales, revealing the limitations of our understanding of past and future climates. Later in the day that concept was echoed by André Berger, who explained how the intricate constellation of influences that shapes the global climate is never stable, complicating the attempt to find historical analogues for our present condition. Singhvi, Berger and others helped frame the rich data presented in the papers and posters that followed by demonstrating yet again that in science, as in history, the past is opaque, unstable, and forever subject to interpretation. Of course, that never stops us from seeking more information and, in turn, greater clarity.

Some particularly fascinating papers explored past Antarctic climates at a time when the Antarctic Peninsula is warming at a rate of 5.3 °C per century. Michael Weber presented findings that reveal how the Antarctic ice sheet is much more reactive to atmospheric carbon dioxide than previously believed. Robert Mulvaney then described how the rate of Antarctic melting, unprecedented in the past millennium, likely had analogues in the distant past when ice shelves were entirely absent. Medieval warmth and early modern cooling, familiar to historians of climates past, apparently were not felt in Antarctica. On the other hand, Guillaume Leduc presented exhaustive findings that, while skewed towards the Atlantic region, nevertheless suggested that the “Little Ice Age” between the fourteenth and nineteenth centuries strongly affected global sea surface temperatures. Those results may have critical implications for the nascent field of marine environmental history, which until now has not adequately considered climatic fluctuation.

High times in the Low Countries during the “Little Ice Age.”

To unravel histories that bridge culture and nature, environmental historians require some scientific literacy, yet I wasn’t sure what to expect as I prepared to give a talk at a conference where formulas were ubiquitous and historiography unheard of. I argued that documentary evidence can improve the accuracy of reconstructions of temperature or precipitation, giving us a way of testing meteorological patterns recorded by the kinds of sources unearthed by scientists. Accustomed to the critical analysis of diverse documents, historians are ideally situated to filter documents through the kind of methodology that lets us quantify past weather observations and, in turn, reconstruct the climatic past.

Moreover, while tree rings or ice cores rarely provide much more than seasonal resolution, surviving documents can record weather with far greater temporal precision, and some even chart hourly changes. Most importantly, documentary evidence grants us access to past wind intensity or direction, weather conditions that are less easily measurable through the analysis of scientific proxy data. For centuries European mariners had to estimate longitude by calculating a ship’s speed, direction and any leeway in its course, for which the most important influence was wind. Hence many logbooks kept aboard ships abound with reliable and quantifiable meteorological information taken several times on virtually every day of the vessel’s journey. The bulk of my talk presented results from English and Dutch ship logbooks, which suggest that easterly winds increased in the late seventeenth century as the climate cooled across the North Sea.

I was relieved and delighted by the reception I received from the scientists in the audience. More importantly, it was heartening to see the importance of interdisciplinary cooperation in the new “Future Earth” project spearheaded by the International Geosphere-Biosphere Programme. Still, many scholars in both the sciences and the humanities continue to take a passive approach to building connections between disciplines. Conferences like the PAGES OSM have existed for decades, yet many historians fail to realize that their insights are needed and desired. Similarly, most presenters at the upcoming ASEH conference are historians, and scientists or engineers remain underrepresented. Establishing connections between institutions like NiCHE, the ASEH, PAGES and the Climate History Network (CHN) can help move us forward, but what’s even more valuable is feedback from those who have benefitted from conferences in another discipline.

Slums bordering a wealthy part of Mumbai. Though poorly represented in this picture, the smog was overwhelming.

After the conference in Goa I spent a few days in the vast metropolis of Mumbai. My plane was delayed, and as it finally approached the city our pilot was forced to circle the airport for a few minutes before we could land. The slums in Mumbai are so vast that their full extent can only be grasped from the air. As I shifted in my leather seat I glimpsed the innumerable shanties, clustered around open sewage, barely visible through the purple smog. The impoverished people far below, and countless millions like them, will suffer most as our planet continues to warm, yet their voices are never heard in academic or political conferences. The quest to understand climate change must become more inclusive, not just of other academic disciplines, but of all voices, past and present, learned and “unlearned,” rich and poor.

Whether consciously or unconsciously, most scholars study something important to their societies. The walls of the ivory tower are, in fact, quite porous. It's no surprise that the genre of history that deals with environmental issues - environmental history - grew out of the debates surrounding the use of DDT. No surprise, either, that academics within disciplines from anthropology to economics are increasingly considering the influence of climate change just as the effects of global warming are becoming painfully obvious. Now more than ever, research into past climates is not just for scientists.

If environmental history grew steadily in the decades since its conception, so too did its semi-autonomous, interdisciplinary cousin: climate history, or historical climatology. This site regularly describes some of the more interesting work published by historical climatologists, before considering how it can reframe today's environmental issues. Equally striking, however, is what's not (yet) published, but spoken. Testament to the growing importance of climate history within environmental history, interviews about past climates have aired this year on two of the major audio resources in the discipline: Nature's Past and Environmental History Resources. Moreover, last year the growing diversity of climate history was well represented at the major conference for the North American branch of environmental history. In Madison, Wisconsin, papers explored how a shifting climate influenced issues ranging from nineteenth-century famines in the far north to the construction of the St. Petersburg ice palace during the frigid winter of 1740. Even more topics are on this year's agenda. In Toronto climate change will be connected to cold war national security, the history of Lake Superior, Alberta's fossil fuel economy, the hydrology of central Mexico, warfare along the Danube, early modern transportation, and much more.

As the study of past climate change claims an increasingly important place within environmental history, it has also entered the mainstream of the historical profession. At this year's meeting for the American Historical Association - the largest conference in the discipline - climate history was featured in three back-to-back sessions. As described by Sam White of the Climate History Network, historians unraveled how past climatic variability influenced hurricanes in New Orleans, agricultural sustainability, and human history across many thousands of years.

Rising interest in climate change within history and other non-scientific disciplines is obvious in published scholarly literature. It is equally apparent online and at conferences, where the insights described and discussed have equal relevance for our struggle to make sense of a warming planet.