Author: Tom

Over breakfast this morning (day three), I heard from several participants that the war game was “too easy” – that is, negotiators were too free to agree to aggressive commitments which their real-world constituents or bosses wouldn’t support. That wasn’t my observation in the China team room, and in closing statements it seemed that teams were taking their roles very seriously indeed. It’s not quite the Zimbardo Prison Experiment, but it’s very real in some ways.

The following are some of the more poignant comments from players, with a little editorial license on my part. Hopefully I have the attributions and general drift correct; please comment if I don’t. (Actually, please comment either way!) I’ve colored a few points that I regard as particularly critical.

It was a little surprising that teams didn’t resist the underlying scenario of the game, which includes an aggressive 80% reduction from 2005 emissions by 2050. This may have been a recognition of the critical need, pointed out by Sharon Burke, to look past what is politically feasible and keep the real goal in sight (i.e., emissions that will achieve a stable climate).

Drew Jones of the Sustainability Institute stumbled on a great opportunity for model-based decision support. There are lots of climate models and integrated assessment models, but they’re almost always used offline. That is, modelers work between negotiations to develop analyses that (hopefully) address decision makers’ questions, but during any given meeting, negotiators rely on their mental models and static briefing materials. It’s a bit like training pilots in a flight simulator, then having them talk on the radio to guide a novice, who flies the real plane without instruments.

MIT researchers have developed a cool digital drawing board that simulates the physics of simple systems:

You can play with something like this with Crayon Physics or Magic Pen. Digital physics works because the laws involved are fairly simple, though the math behind one of these simulations might appear daunting. More importantly, they are well understood and universally agreed upon (except perhaps among perpetual motion advocates).
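Those “fairly simple” laws are easy to illustrate. Here’s a minimal sketch of my own (not MIT’s implementation): a point mass falling under gravity, advanced by explicit Euler integration. All parameters are illustrative.

```python
# Minimal sketch of the "simple physics" behind a drawing-board simulator:
# a point mass under gravity, advanced by explicit Euler integration.
# Gravity, time step, and initial height are illustrative assumptions.

def simulate_fall(y0=10.0, v0=0.0, g=9.81, dt=0.01, steps=100):
    """Return the height of a dropped ball after `steps` Euler updates."""
    y, v = y0, v0
    for _ in range(steps):
        v -= g * dt   # Newton: dv/dt = -g
        y += v * dt   # definition: dy/dt = v
    return y

# After one simulated second the ball has fallen roughly g/2 ≈ 4.9 m:
print(round(simulate_fall(), 2))
```

A real drawing-board simulator adds collision detection and rigid-body constraints, but the time-stepping core is the same kind of loop.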

I’d like to have the equivalent of the digital drawing board for the public policy and business strategy space: a fluid, intuitive tool that translates assumptions into realistic consequences. The challenge is that there is no general agreement on the rules by which organizations and societies work. Frequently there is not even a clear problem statement and common definition of important variables.

However, in most domains, it is possible to identify and simulate the “physics” of a social system in a useful way. The task is particularly straightforward in cases where the social system is managing an underlying physical system that obeys predictable laws (e.g., if there’s no soup on the shelf, you can’t sell any soup). Jim Hines and MIT Media Lab researchers translated that opportunity into a digital whiteboard for supply chains, using a TUI (tangible user interface). Here’s a demonstration:

There are actually two innovations here. First, the structure of a supply chain has been reduced to a set of abstractions (inventories, connections via shipment and order flows, etc.) that make it possible to assemble one tinker-toy style using simple objects on the board. These abstractions eliminate some of the grunt work of specifying the structure of a system, enabling what Jim calls “modeling at conversation speed”. Second, assumptions, like the target stock or inventory coverage at a node in the supply chain, are tied to controls (wheels) that allow the user to vary them and see the consequences in real time (as with Vensim’s Synthesim). Getting the simulation off a single computer screen and into a tangible work environment opens up great opportunities for collaborative exploration and design of systems. Cool.
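To make the abstraction concrete, here is a minimal sketch (mine, not the Media Lab’s code) of a single supply-chain node: an inventory stock and an order flow that closes the gap to a target stock – the kind of assumption a wheel on the board would let you vary. Shipment delays are omitted for brevity, and all parameter values are invented.

```python
# Hedged sketch of one supply-chain node: an inventory stock with orders
# adjusting toward a target. Parameter names and values are illustrative,
# not taken from the MIT demo. Delivery is assumed instantaneous.

def simulate_node(target=100.0, adjust_time=4.0, demand=10.0, steps=40):
    """Stock-adjustment node: orders = demand + inventory gap / adjustment time."""
    inventory = 50.0          # initial stock, below target
    history = []
    for _ in range(steps):
        orders = demand + (target - inventory) / adjust_time
        inventory += orders - demand   # net flow into the stock
        history.append(inventory)
    return history

traj = simulate_node()
# Inventory climbs smoothly from 50 toward the target of 100.
```

Turning `target` or `adjust_time` into a live control is exactly what the wheels (or Synthesim sliders) do: re-run this loop on every tweak and redraw the trajectory.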

I don’t usually have TV, but I’m in a hotel tonight. I just saw a McCain ad that would be funny if it weren’t serious. It starts with some blather about high gas prices, and a picture of an old pump (designed to trigger nostalgia for 25 cents a gallon?). It goes on to imply that domestic drilling is the oil security answer. Then it makes the really amazing assertion (“you know who’s to blame”) that the only thing standing in the way of domestic drilling is … Obama. Wow … I had no idea that one senator could single-handedly wield such power.

I’m at CNAS’ climate war game listening to Diana Farrell from the McKinsey Global Institute. So far my takeaway from the exercise has been rather gloomy; Farrell presented a more hopeful view, informed by McKinsey’s construction of supply curves for carbon emissions reductions. I found her opening point particularly critical: don’t wait for energy supply-side silver bullets to save us, when there are demand-side opportunities now.

McKinsey’s supply curves (and others from bottom-up modeling efforts) indicate large reductions available at low or negative cost. The idea of cheap negawatts has been around for a long time. Hard-line economists have also been declaring the idea dead for a long time (see my old bibliography of the top-down/bottom-up debate).
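The mechanics of a bottom-up abatement supply curve are simple to sketch: sort measures by cost per ton, then cumulate their abatement potential. The measures and numbers below are invented for illustration; they are not McKinsey’s figures.

```python
# Illustrative sketch of a bottom-up abatement "supply curve": sort
# measures by marginal cost, cumulate available reductions.
# All entries below are invented placeholders, not McKinsey data.

measures = [
    # (name, $/tCO2, GtCO2/yr available)
    ("building insulation", -30, 0.5),
    ("lighting efficiency", -50, 0.3),
    ("wind power",           20, 1.0),
    ("carbon capture",       45, 0.8),
]

def supply_curve(measures):
    """Return (cumulative reduction, marginal cost) pairs, cheapest first."""
    curve, cum = [], 0.0
    for name, cost, potential in sorted(measures, key=lambda m: m[1]):
        cum += potential
        curve.append((cum, cost))
    return curve

for cum, cost in supply_curve(measures):
    print(f"{cum:.1f} GtCO2/yr available at <= ${cost}/t")
```

The negative-cost segment at the left of such a curve is the “cheap negawatts” claim; the top-down/bottom-up debate is largely about whether those negative-cost tons are real or an artifact of ignored hidden costs.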

Sunday evening I attended the kickoff of a CNAS war game, Clout & Climate Change, examining the intersection of climate policy and national security. I’m on the science advisory team, along with Drew Jones and Lori Siegel from the Sustainability Institute. We hope to provide real-time, model-based decision support. The exercise brings together a rather extraordinary talent pool, so it’s a daunting task.

Peter Schwartz delivered an opening talk on scenario planning; he made a compelling case for robust thinking. He also set a tone that has haunted much of the subsequent proceedings: there are striking technological options for dealing with climate, but progress is difficult because people are caught up in their individual worlds (they want more stuff). As a result, he expects that lots of adaptation will be needed. Negotiations since have borne out his framing: aggressive targets (-80% by 2050) appear elusive in the face of the perceived need for continued economic growth to keep US and EU workers happy, avoid regime change in China, and provide for development in the poorest regions.

Paty Romero-Lankao asked the key question of the evening. To paraphrase crudely, she wondered whether a focus on technology and adaptation missed the point; that one must also address the lifestyle issues that are the underlying driver of growth. Amen. Lifestyle is a tougher nut to crack than technology, but to borrow Peter’s phrase guiding scenario planning, it’s important to avoid thinking, “that can’t happen.”

GreenCarCongress has some interesting tidbits of recent data: vehicle miles traveled are down, and car sales have surpassed truck sales, reversing long-run trends. An interesting side effect of diminished driving and fuel use is that less money flows into the federal Highway Trust Fund. Because a small portion of the fund supports transit, this pulls the rug out from under alternative modes just when they’re needed. I immediately wondered whether the recent falloff in VMT is similar to what happened around the 1973 and 1979 oil price spikes. DOT’s Traffic Volume Trends data provides the answer, if you add historic archive data to recent reports:
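The bookkeeping for that comparison amounts to overlaying episodes on a common index: rescale each monthly VMT series so the month the price spike begins equals 100, then plot the windows on top of each other. A rough sketch, with invented sample data rather than DOT’s actual figures:

```python
# Hedged sketch of overlaying VMT episodes on a common index.
# The sample series below is invented, not DOT Traffic Volume Trends data.

def index_to_episode(series, spike_month, window=6):
    """Return values around spike_month, rescaled so spike_month = 100."""
    base = series[spike_month]
    lo = max(0, spike_month - window)
    hi = min(len(series), spike_month + window + 1)
    return [100.0 * v / base for v in series[lo:hi]]

# Invented monthly VMT (billions of miles), dipping after month 6:
vmt = [250, 251, 252, 253, 254, 255, 256, 254, 251, 249, 248, 248, 249]
print(index_to_episode(vmt, spike_month=6))
```

Running the same transform on the 1973, 1979, and recent windows puts all three episodes on one chart, which is the quickest way to see whether the current falloff is deeper or shallower than the historical ones.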

The Pew Climate Center has a roster of international, US federal, and US state & regional climate initiatives. Wikipedia has a list of climate initiatives. The EPA maintains a database of state and regional initiatives, which they’ve summarized on cool maps. The Center for Climate Strategies also has a map of links. All of these give some idea as to what regions are doing, but not always why. I’m more interested in the why, so this post takes a look at the models used in the analyses that back up various proposals.

In a perfect world, the why would start with analysis targeted at identifying options and tradeoffs for society. That analysis would inevitably involve models, due to the complexity of the problem. Then it would fall to politics to determine the what, by choosing among conflicting stakeholder values and benefits, subject to constraints identified by analysis. In practice, the process seems to run backwards: some idea about what to do bubbles up in the political sphere, which then mandates that various agencies implement something, subject to constraints from enabling legislation and other legacies that do not necessarily facilitate the best outcome. As a result, analysis and modeling jump right to a detailed design phase, without pausing to consider the big picture from the top down. This tendency is somewhat reinforced by the fact that most models available to support analysis are fairly detailed and tactical; that makes them too narrow or too cumbersome to redirect at the broadest questions facing society. There isn’t necessarily anything wrong with the models; they just aren’t suited to the task at hand.

My fear is that the analysis of GHG initiatives will ultimately prove overconstrained and underpowered, and that as a result implementation will ultimately crumble when called upon to make real changes (like California’s ambitious executive order targeting 2050 emissions 80% below 1990 levels). California’s electric power market restructuring debacle jumps to mind. I think underpowered analysis is partly a function of history. Other programs, like emissions markets for SOx, energy efficiency programs, and local regulation of criteria air pollutants have all worked OK in the past. However, these activities have all been marginal, in the sense that they affect only a small fraction of energy costs and a tinier fraction of GDP. Thus they had limited potential to create noticeable unwanted side effects that might lead to damaging economic ripple effects or the undoing of the policy. Given that, it was feasible to proceed by cautious experimentation. Greenhouse gas regulation, if it is to meet ambitious goals, will not be marginal; it will be pervasive and obvious. Analysis budgets of a few million dollars (much less in most regions) seem out of proportion with the multibillion $/year scale of the problem.

One result of the omission of a true top-down design process is that there has been no serious comparison of proposed emissions trading schemes with carbon taxes, though there are many strong substantive arguments in favor of the latter. In California, for example, the CPUC Interim Opinion on Greenhouse Gas Regulatory Strategies states, “We did not seriously consider the carbon tax option in the course of this proceeding, due to the fact that, if such a policy were implemented, it would most likely be imposed on the economy as a whole by ARB.” It’s hard for CARB to consider a tax, because legislation does not authorize it. It’s hard for legislators to enable a tax, because a supermajority is required and it’s generally considered poor form to say the word “tax” out loud. Thus, for better or for worse, a major option is foreclosed at the outset.

With that little rant aside, here’s a survey of some of the modeling activity I’m familiar with:

I recently discovered a cool set of tools from MIT’s Simile project. My favorites are Timeline and Exhibit, which provide a fairly easy way to create web sites where visitors can interact with data. As a test, I built an Exhibit containing Richard Tol’s survey of assessments of the social cost of carbon (SCC):
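For anyone curious about the plumbing: Exhibit reads its data as JSON, typically an object with an “items” array that the page links to. A minimal sketch of such a feed, with invented placeholder entries rather than Tol’s actual estimates:

```python
# Hypothetical sketch of the JSON "items" feed that Simile Exhibit consumes.
# The SCC entries below are invented placeholders, not Tol's survey data.
import json

items = {
    "items": [
        {"label": "Study A", "type": "Estimate",
         "scc_per_tC": 25, "discount_rate": "3%", "year": 2005},
        {"label": "Study B", "type": "Estimate",
         "scc_per_tC": 93, "discount_rate": "1%", "year": 2002},
    ]
}

# Exhibit loads a file like this from the page via a data link,
# then builds facets and views from the item properties automatically.
print(json.dumps(items, indent=2))
```

Once the feed exists, getting sortable lists, facet filters, and timeline views is mostly a matter of HTML attributes – which is what makes Exhibit such a low-effort way to publish interactive data.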