On June 10th, 2017, the New York Times reported on Gwen Beatty, a high school junior in Wellston, Ohio, who disagreed with her teacher on climate change. When he presented evidence that human emissions of greenhouse gases were causing the Earth to warm, she repeated a refrain common to many who dispute the scientific evidence tying human actions to climatic changes.

It could be natural causes. Scientists often get it wrong. Predictions of future warming could be inaccurate.

The reporter notes that if Beatty had told her math teacher that one plus one does not equal two, she would simply have been wrong. Not all of the sciences work with the absolutes of mathematics, yet each has a way of establishing accuracy. In the public discourse on climate change—both in schools and during governmental processes—inaccuracies about climate science and global change are commonplace. A lack of public understanding of Earth and climate science results in a political debate that often ignores scientific evidence of possible severe long-term impacts.

The result has been increasing concentrations of atmospheric CO2 and serious impacts for human health, the environment and the United States economy.[1] To foster a citizenry capable of engaging productively with policies addressing climate change, we believe that the public requires better understanding of the issue, how it connects to people’s lives, and what policies might benefit them in relation to it.[2],[3]

Over the past five years in McGill University’s Department of Geography and School of Environment, we’ve investigated how lessons learned through decades of science education research can be applied to improving public understanding of climate science and related mitigation/adaptation policies. Our goal has been to educate the scientific community on how to communicate climate change and to help teachers accurately present the science in school classrooms.

We reviewed the literature to look for trends and recommendations. We found that educational reformers advocate teaching climate science using the actual methods of climate scientists. This includes asking students to pose research questions, evaluate evidence-based answers or explanations, and communicate their own findings.[4], [5], [6], [7], [8], [9]

Unlike physics or chemistry, climate change is not easy to teach this way. It consists of abstract processes occurring at global and local scales, and it’s difficult for students to experience climate change tangibly. To compound the problem, the causes and risks of climate change are often represented in conflicting ways in politics and public discourse. Sometimes even graduate students have trouble understanding them.[10]

In one of our classroom studies, the treatment group of 39 students worked with Columbia University-NASA GISS’s Educational Global Climate Model (EdGCM). This software is based on an actual GCM: Dr. James Hansen first described GISS’s GCM in 1983, when he used it to conduct long-range climate experiments.[11] EdGCM itself consists of a suite of user interfaces that allows students to design experiments by manipulating inputs, then run the model, post-process the output, and visualize more than 80 different variables. Other graduate students in Dr. Sieber’s lab have designed new interfaces for GCMs that work online and offer more intuitive user interactions.

All of the student groups posed realistic climate research questions. But only those in the treatment group interrogated the spatial components of climate impacts and the diverse relationships between human actions today and potential regional and global conditions in the future. Overall, these students demonstrated significantly greater learning gains from pre- to post-diagnostic exams than those in the control group.

The control students showed us the power of a well-organized and clear lecture. On the post exam, they out-scored treatment students on five multiple-choice questions that tested recall of facts about GCMs. Yet only those students who had worked with EdGCM appeared highly motivated in their work and demonstrated critical thinking about the work of climate scientists and the issue of climate change.

In the New York Times story, Gwen Beatty’s father had once been a coal miner. If we are to mitigate climate change or adapt to its worst impacts, we’ll need his daughter. Her generation will require well-trained scientists, entrepreneurs, engineers and visionaries who can reshape the world’s use of energy and adapt to any social, ecological and climatic changes.

Achieving this goal means we must treat our students as individuals who possess ideas shaped by their peers and parents. To engage them with science, most will require an introduction to the scientific habits of mind common to all scientists. Such skills will be needed to navigate a world increasingly altered by changes to the Earth’s climate.

If you’re interested in learning more about the emerging field of climate geoengineering, this all-day event, held entirely online, would be a good introduction. The event is organized and sponsored by the Institute on Science for Global Policy, the Forum for Climate Geoengineering Assessment, and Arizona State University.

A couple of days ago there was a feisty exchange on Fox News between host Tucker Carlson and Bill Nye (“the Science Guy”). Carlson challenged Nye to establish that climatic changes are linked to anthropogenic greenhouse gas emissions, and quite frankly, Nye struggled to articulate a response.

However, Mr. Nye, as well as instructors addressing this critical and ever-abiding issue, might benefit from the results of a new study published in the journal The Anthropocene Review. What I found particularly salutary about this piece was its explanation of why the current bout of climatic change could be linked to different factors than those that precipitated climate change in the past.

Among the conclusions of the study:

During the period of the existence of the Earth’s biosphere, the primary drivers of Earth system change have been two external forces: astronomical (“forces that affect insolation and relate to solar irradiance include orbital eccentricity, obliquity and precession driven by gravitation effects of the sun and other planets”) and geophysical (forces that include “volcanic activity, weathering and tectonic movement”);

In the past 2.588 million years, climate forcing has been largely controlled by cyclical variation in Earth’s orbit and other astronomical forces, along with irregular geophysical events, such as volcanic eruptions;

Under current astronomical forcing, and with greenhouse gas concentrations at pre-industrial levels, we would have anticipated Holocene-like conditions for approximately another 50,000 years;

Internal dynamics, including biospheric evolutionary processes, can also drive the Earth System, however. Such factors are particularly important given their impacts on carbon dioxide, which exerts a substantial influence on climate.

At this point, human activity (denominated as “H”), as a subset of internal processes, has come to exert a profound influence on Earth system processes. Indeed, it has become the dominant driver of the rate of change of the Earth system. This has included a doubling of the amount of reactive nitrogen in the system relative to pre-industrial levels, and a huge change in ocean carbonate chemistry.

In the context of climate change, atmospheric carbon dioxide concentrations in recent years have increased 100x faster than the most rapid increases during the last glacial termination. Methane levels have risen at a rate that is more than double the rates of the past 800,000 years. This has resulted in temperature increases of 70x the Holocene baseline rate over the course of the past 100 years, and 170x that baseline rate over the course of the past 45 years.

Anthropogenic impacts crossed a critical threshold around 1950 during the “Great Acceleration” of greenhouse gas emissions, with human factors “usurping” the impacts of other factors “entirely.”

Though it may come as cold comfort to humankind, while human factors may control the Earth system over the course of the next tens of thousands of years, astronomical, geophysical and other internal factors are likely to ultimately re-assert control over the system.

It might be interesting to couple this reading for students with another recently published study seeking to assess the statistical likelihood that recent warming is primarily attributable to rising levels of anthropogenic emissions. This would be an excellent opportunity to tease out methodological considerations associated with so-called “climate attribution” studies.

Numerous analyses indicate that the current Nationally Determined Contributions (NDCs) made by the Parties to the Paris Agreement do not put the world on track to meet the treaty’s objectives of “holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels.” However, very few efforts have been made to date to ascertain how to track progress in meeting the temperature targets of Paris. Such tracking can be a critical component of informing the treaty’s global stocktaking process (Art. 14) and provisions to increase the ambition of commitments (Art. 3; 4). In a new study published in the journal Nature, Peters et al. employ a nested structure of indicators intended to track the Parties’ progress in meeting their Paris obligations, including aggregated progress, country-level decompositions to track emerging trends, and technology diffusion.

Among the conclusions of the study:

While global carbon dioxide emissions have been flat for the past three years (projected to be 36.4 GtCO2 in 2016), cumulative emissions continue to rise, which will preclude meeting Paris’s objectives given the need for rapid declines in emissions, and ultimately zero emissions;

While Chinese emissions have been largely stable recently, and US and EU emissions have declined in the past few years, it’s unclear if these are long-term trends, or largely attributable to weak economic growth or other short-term factors.

There is empirical evidence of emerging declines in carbon intensity globally, including in China, the United States and the European Union;

While the growth of the use of renewable energy has been substantial in recent years, including more than 50% of total energy growth in 2015, it will prove difficult for renewable energy to supply annual energy growth in the short term absent further declines in global energy use. Renewable energy alone may also not be sufficient to keep global temperatures below 2C given physical constraints to large-scale deployment and limited prospects for use in certain sectors, such as agriculture;

While it is possible to keep temperature increases to 2C or below with relatively high fossil fuel energy use, this scenario is predicated on substantial reliance on bioenergy, coupled with large-scale deployment of carbon capture and sequestration (CCS). However, this assumes that bioenergy can be sustainably produced and made carbon-neutral. It’s also predicated on huge scale-up of CCS in conjunction with bioenergy (BECCS). This may require as many as 4000 facilities by 2030 compared with the tens currently proposed by 2020. Scale-up of this magnitude may prove to be a large challenge given social resistance and inadequate assessment of technological risks;

Studies indicate that current emissions pledges may “quickly deviate” from what is required to conform to the 2C objective. Should some technologies lag in deployment, others will have to ramp up more quickly. There is also a lack of scenarios assessing the prospects for transformational changes in lifestyles and behaviors, other forms of carbon dioxide removal, and solar radiation management geoengineering.

Among potential questions for classroom discussion are the following:

What are the challenges associated with large-scale deployment of bioenergy and carbon capture and sequestration/storage? What policy measures could help to ameliorate these challenges?

What are the potential benefits of solar radiation management geoengineering, and what are the potential risks?

What measures could be taken to accelerate market penetration of renewable energy?

Poster’s note: There is an interview with Glen Peters that further explicates some of the findings of this study.

Recent studies have projected temperature increases of 2.7-3.5C by 2100 associated with the current Intended Nationally Determined Contributions of the parties to the United Nations Framework Convention on Climate Change. However, the sobering reality is that the inertia of the climate system ensures that temperatures will continue to increase well beyond 2100, in the centuries and millennia to come. A new study by Stanford scholar Carolyn W. Snyder suggests that temperature increases beyond 2100 may be truly alarming. Snyder’s study reconstructed global average surface temperature over the past 2 million years using a spatially weighted proxy reconstruction.

Stabilization of atmospheric concentrations of greenhouse gases at today’s levels may already have committed the globe to 5C (3-7C, 95% credible interval) of additional warming over the next few thousand years.

The study’s findings could provide good grist for class discussion on several issues, including whether it’s possible to substantially bend the projected temperature curve through more aggressive de-carbonization of the world economy; this generation’s obligations, if any, to generations over the next few thousand years; and the potential role of climate geoengineering, including negative emissions technologies, in reversing the projected temperature trajectory.

In a new study published in the journal Nature Climate Change, Clark et al. suggest that our policy orientation in terms of mitigation and adaptation responses to climate change should be expanded to assess the past 20,000 years and the next 10,000 years, well beyond the IPCC’s focus on the 21st and 22nd centuries. The authors contend that this temporal horizon, “on a geological timescale,” provides a more realistic assessment of the ultimate impacts of anthropogenic emissions, as well as the compelling need to substantially accelerate our commitment to reducing emissions.

The study utilized several different scenarios for temperature and sea-level change over the course of 10,000 years, as well as referencing our best understanding of climate change over the past 20,000 years. The study’s projections are based on a suite of four future emissions scenarios with carbon releases between 1,280 and 5,120 PgC, with current cumulative human carbon emissions already approaching the low-end scenario.

The researchers argue that projections of climatic change over the next 10,000 years are justified by the inertia of the climatic system. As a consequence, “60-70% of the maximum surface temperature anomaly and nearly 100% of the sea-level rise from any given emission scenario remains after 10,000 years, and that the ultimate return to pre-industrial CO2 concentrations will not occur for hundreds of thousands of years.”

Among the conclusions of the study:

Projected temperature increases of 2.0-7.5°C over the course of this century will exceed those during even the warmest levels reached in the Holocene, “producing a climate state not previously experienced by human civilizations.” Moreover, temperatures will remain elevated above Holocene levels for more than 10,000 years;

Even the lowest emissions scenario in the study, 1,280 PgC, results in sea level rise associated with the Greenland ice sheet of 4 meters over 10,000 years. Higher scenarios result in an ice-free Greenland over the course of 2500-6000 years, which could result in approximately 7 meters of sea level rise;

The lowest emissions scenario yields as much as 24 meters of global mean sea level rise over 10,000 years associated with melting of Antarctic ice sheets;

An equilibrium climate sensitivity of 3.5°C, consistent with IPCC AR5 scenarios, could yield sea level rise of 25-52 meters within the next 10,000 years, reaching 2-4 meters per century, “values that are unprecedented in more than 8,000 years”;

The only method to avoid a further commitment to sea level rise above the current projections of 1.7 meters “is to achieve net-zero emissions;”

There are 122 countries with at least 10% of current population-weighted area that will be directly affected by coastal submergence, and 25 coastal megacities will have at least 50% of their population-weighted area impacted.

The authors draw several policy implications from this long-term assessment of climatic impacts:

On millennial timescales, the use of conventional discounting approaches ensures that “future climate impacts … would be valued at zero, irrespective of the levels of certainty and magnitude.” This poses profound questions in terms of intergenerational equity and our obligations to future generations;

There is a compelling need for global energy policies that result in net-zero or net-negative carbon dioxide emissions; “a marginal reduction in emissions is insufficient to prevent future damages.” Such technologies would need to be kept in place for tens of thousands of years “without fail.” This also suggests radical changes in terms of financial incentives, a need to accelerate research and development of technologies to transform energy systems and infrastructure, and a focus on global equity considerations.
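The discounting point above can be made concrete with a quick calculation. A minimal sketch of conventional exponential discounting (the 3% annual rate is an illustrative assumption, not a figure from the study):

```python
def discount_factor(rate: float, years: int) -> float:
    """Present value today of $1 of damages occurring `years` in the future,
    under conventional exponential discounting."""
    return 1.0 / (1.0 + rate) ** years

# At a 3% annual rate, damages a millennium out are valued at effectively zero.
for years in (50, 100, 500, 1000):
    print(f"{years:>5} years out: {discount_factor(0.03, years):.2e}")
```

By year 1000 the factor has fallen below one part in a trillion, which is why conventional discounting values millennial-scale climate damages at essentially zero regardless of their magnitude.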

Among the class discussion questions that might be posed are the following:

Is it pertinent for us to consider the potential impacts of climate change on a timescale of 10,000 years? Does it make more sense to formulate policies to protect the current and immediate successor generations?

If, as the article suggests, the current rate of discounting future losses is inappropriate for climatic impacts, what alternative system should we employ?

What are the most viable “negative emissions” technologies that might be available, and are there any risks in their utilization that should be weighed against their benefits of reducing atmospheric concentrations of carbon dioxide?

The U.S. National Oceanic & Atmospheric Administration publishes an Annual Greenhouse Gas Index (AGGI), “a measure of the warming influence of long-lived trace gases and how that influence is changing each year.” Thus, AGGI is an index that measures climate forcing associated with long-lived greenhouse gases. The Index would be an excellent resource for instructors who involve their students in climate negotiations exercises, as well as a potent reminder of how the promise of Paris is confronted by the practical reality of trends in radiative forcing and atmospheric concentrations of greenhouse gases.

Among the findings of the latest assessment:

Carbon dioxide concentrations increased by an average of 1.76 ppm per year from 1979-2015. The trend has accelerated in recent years: increases averaged about 1.5 ppm per year in the 1980s and 1990s, and 2.0 ppm per year during the last decade. Moreover, atmospheric carbon dioxide increased 3 ppm in the past year, for only the second time since 1979.

Increases in carbon dioxide concentrations in the atmosphere have resulted in a whopping 50% increase in its direct warming influence on climate since 1990;

While methane concentrations remained constant in the atmosphere from 1999-2006 (after a period of slowing growth from 1983-1999), they have been increasing since 2007, due to factors such as increasing temperatures in the Arctic in 2007 and increased precipitation in the tropics in 2007 and 2008; this trend accelerated between 2014-2015. Nitrous oxide concentrations have also accelerated in recent years.

Radiative forcing from chlorofluorocarbons is in decline, primarily due to the Montreal Protocol on Substances that Deplete the Ozone Layer. The importance of this regime, designed to address threats to the ozone layer, for climate change is clear: without the treaty, climate forcing would have been 0.3 W m-2 greater, or approximately half of the increase in radiative forcing attributable to carbon dioxide since 1990;
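The CO2 term that dominates the AGGI can be approximated with the widely used logarithmic forcing expression. A minimal sketch, with two assumptions worth flagging: it covers CO2 only (the actual index sums the forcing of all the long-lived gases), and the 1990 and 2015 concentrations below are rounded illustrative values:

```python
import math

def co2_forcing(ppm: float, pre_industrial: float = 278.0) -> float:
    """Approximate radiative forcing from CO2 in W m-2, using the
    common logarithmic expression dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(ppm / pre_industrial)

f_1990 = co2_forcing(354.0)  # approximate 1990 global mean CO2
f_2015 = co2_forcing(399.0)  # approximate 2015 global mean CO2

# Ratio of present to 1990 forcing: the CO2-only analogue of the AGGI.
print(f"CO2-only index relative to 1990: {f_2015 / f_1990:.2f}")
```

The ratio comes out near 1.5, consistent with the roughly 50% growth in CO2’s direct warming influence since 1990 reported in the assessment.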

Most of the scenarios in the IPCC’s Fifth Assessment Report that set out pathways to avoid temperature increases above 2C rely on large-scale use of so-called “negative emissions technologies,” with the lion’s share coming from the use of Bioenergy and Carbon Capture and Sequestration (BECCS) options. However, there is increasing concern that BECCS could have serious negative ramifications for food security for some of the world’s most vulnerable populations. A recent study by Tim Searchinger et al., analyzing results from a suite of models assessing changes in land use, crop production and food consumption associated with the production of biofuels, suggests that these concerns are valid, and may have serious ramifications for the use of BECCS in the future.

Among the findings of the study were the following:

Approximately 25-50% of the net calories associated with diversion of corn or wheat to ethanol are not replaced through planting of other crops, but rather are taken out of food and feed consumption. This failure to replace crops reduces greenhouse gas emissions from land-use change and direct emissions of carbon dioxide by people and livestock;

Most analyses of the greenhouse gas implications of biofuel production assume that the emissions associated with fermenting and burning ethanol are offset by carbon absorption by the growth of the crops diverted to ethanol production. However, this accounting methodology is faulty: carbon absorbed by crops that would have been grown anyway is not additional to biofuel production, and therefore does not constitute a valid offset.

The only valid form of crop growth offsets would be either the growth of additional crops to replace those diverted to biofuels production, or increases in crop yields on existing croplands;

Moreover, converting forests or grasslands to produce additional food crops would also release carbon, “reducing or negating the net offset from producing more crops;”

One study suggests that approximately 20% of calories diverted by biofuel production are not replaced. “Food reductions result not from a tailored tax on overconsumption or high-carbon foods but from broad global increases in crop prices.”

Among the class discussion questions that might be pertinent to this article are the following:

What would the food implications be of a large-scale commitment to BECCS?

What are the prospects for avoiding the food security issues outlined in this study through techniques such as increasing crop yields or using alternative bioenergy feedstocks, such as algae or cellulosic sources?

If trade-offs are indeed inevitable in terms of food production and greenhouse gas emissions, how should the world community make decisions under such conditions?

I have recently published an online commentary on Article 8 of the Paris Agreement on The Conversation site, which addresses the issue of “loss and damage.” I think that loss and damage is an excellent issue to discuss with students because it provides a device to discuss many “meta issues,” including the potential role of State responsibility and liability for climate damages, the role of climate justice, and the effectiveness of risk-pooling mechanisms, such as insurance.

Incidentally, The Conversation site has a lot of interesting energy and climate commentary by academics. Because it’s designed to be accessible to wider audiences, much of the content might be particularly appropriate for undergraduate students.