Month: May 2011

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. – Mercier & Sperber, via Edge.org, which has a video conversation with coauthor Mercier.

This makes sense to me, but I think it can’t be the whole story. There must be at least a little evolutionary advantage to an ability to predict the consequences of one’s actions. The fact that it appears to be dominated by confirmation bias and other pathologies may be indicative of how much we are social animals, and how long we’ve been that way.

It’s easy to see why this might occur by looking at the modern evolutionary landscape for ideas. There’s immediate punishment for touching a hot stove, but for any complex system, attribution is difficult. It’s easy to see how the immediate rewards from telling your fellow tribesmen crazy things might exceed the delayed and distant rewards of actually being right. In addition, wherever there are stocks of resources lying about, there are strong incentives to succeed by appropriation rather than creation. If you’re really clever with your argumentation, you can even make appropriation resemble creation.

The solution is to use our big brains to raise the bar, by making better use of models and other tools for analysis of and communication about complex systems.

Nothing that you will learn in the course of your studies will be of the slightest possible use to you in after life, save only this, that if you work hard and intelligently you should be able to detect when a man is talking rot, and that, in my view, is the main, if not the sole, purpose of education. – John Alexander Smith, Oxford, 1914

So far, though, models seem to be serving argumentation as much as reasoning. Are we stuck with that?

Change management is one of the great challenges in modeling projects. I don’t mean this in the usual sense of getting people to change on the basis of model results. That’s always a challenge, but there’s another.

Over the course of a project, the numerical results and maybe even the policy conclusions given by a model are going to change. This is how we learn from models. If the results don’t change, either we knew the answer from the outset (a perception that should raise lots of red flags), or the model isn’t improving.

The problem is that model consumers are likely to get anchored to the preliminary results of the work, and resist change when it arrives later in the form of graphs that look different or insights that contradict early, tentative conclusions.

Fortunately, there are remedies:

Start with the assumption that the model and the data are wrong, and to some extent will always remain so.

Recognize that the modeler is not the font of all wisdom.

Emphasize extreme conditions tests and reality checks throughout the modeling process, not just at the end, so bugs don’t get baked in while insights remain hidden.

Do lots of sensitivity analysis to determine the circumstances under which insights are valid.

Keep the model simpler than you think it needs to be, so that you have some hope of understanding it, and time for reflecting on behavior and communicating results.

Involve a broad team of model consumers, and set appropriate expectations about what the model will be and do from the start.
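The sensitivity-analysis remedy above can be sketched concretely. Here's a minimal Monte Carlo pass over a hypothetical one-stock growth model (the model, parameter names, and ranges are all illustrative, not from any particular project); the point is to report a range of outcomes, not a single number that consumers can anchor on:

```python
import random

def project_stock(growth_rate, years=20, initial=100.0):
    """Project a simple exponentially growing stock (a stand-in model)."""
    stock = initial
    for _ in range(years):
        stock *= (1 + growth_rate)
    return stock

# Sample the uncertain growth rate and look at the spread of outcomes.
random.seed(1)
outcomes = sorted(project_stock(random.uniform(0.01, 0.05))
                  for _ in range(1000))
print(f"5th-95th percentile: {outcomes[50]:.0f} to {outcomes[950]:.0f}")
```

Reporting the percentile band from the start sets the expectation that the answer is a distribution that will shift as the model improves.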

All of us, even if we have no knack for science, look at the weather, at our children, at our markets, at the sky, and we see rhythms and patterns that seem to repeat, that give us the ability to predict. …

Do any of us live beyond pattern? …

I don’t think so. Artists may be, oddly, the most pattern-aware. Case in point: The totally unpredictable, one-of-a-kind novelist Kurt Vonnegut … once gave a lecture in which he presented — in graphic form — the basic plots of all the world’s great stories. Every story you’ve ever heard, he said, is a reflection of a few classic story shapes. They are so elementary, he said, he could draw them on an X/Y axis.

“With some help from wedges, the world decided that dealing with global warming wasn’t impossible, so it must be easy,” Socolow says. “There was a whole lot of simplification, that this is no big deal.”

I spoke to Socolow today at length, and he stands behind every word of that — including the carefully-worded title. Indeed, if Socolow were king, he told me, he’d start deploying some 8 wedges immediately. A wedge is a strategy and/or technology that over a period of a few decades ultimately reduces projected global carbon emissions by one billion metric tons per year (see Princeton website here). Socolow told me we “need a rising CO2 price” that gets to a serious level in 10 years. What is serious? “$50 to $100 a ton of CO2.”
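The wedge definition implies some simple arithmetic. In the classic framing from the original Pacala–Socolow paper (a 50-year horizon with a linear ramp from zero to 1 Gt/yr of avoided emissions — both assumptions from that framing, not stated in this post), each wedge is worth roughly 25 Gt cumulatively:

```python
# One wedge: avoided emissions ramp linearly from 0 to 1 Gt/yr over 50 years.
years = 50
ramp = [year / years for year in range(1, years + 1)]  # Gt/yr avoided each year
cumulative_per_wedge = sum(ramp)  # area of the triangle, in Gt

print(f"One wedge avoids ~{cumulative_per_wedge:.1f} Gt over {years} years")
print(f"Socolow's 8 wedges: ~{8 * cumulative_per_wedge:.0f} Gt")
```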

Revkin weighs in with a broader view, but the tone is a bit Pielkeish:

From the get-go, I worried about the gushy nature of the word “solving,” particularly given that there was then, and remains, no way to solve the climate problem by 2050.

1. Look closely at what is in quotes, which generally comes from my slides, and what is not in quotes. What is not in quotes is just enough “off” in several places to result in my messages being misconstrued. I have given a similar talk about ten times, starting in December 2010, and this is the first time that I am aware of that anyone in the audience so misunderstood me. I see three places where what is being attributed to me is “off.”

a. “It was a mistake, he now says.” Steve Pacala’s and my wedges paper was not a mistake. It made a useful contribution to the conversation of the day. Recall that we wrote it at a time when the dominant message from the Bush Administration was that there were no available tools to deal adequately with climate change. I have repeated maybe a thousand times what I heard Spencer Abraham, Secretary of Energy, say to a large audience in Alexandria, Virginia, early in 2004. Paraphrasing, “it will take a discovery akin to the discovery of electricity” to deal with climate change. Our paper said we had the tools to get started, indeed the tools to “solve the climate problem for the next 50 years,” which our paper defined as achieving emissions 50 years from now no greater than today. I felt then and feel now that this is the right target for a world effort. I don’t disown any aspect of the wedges paper.

b. “The wedges paper made people relax.” I do not recognize this thought. My point is that the wedges paper made some people conclude, not surprisingly, that if we could achieve X, we could surely achieve more than X. Specifically, in language developed after our paper, the path we laid out (constant emissions for 50 years, emissions at stabilization levels after a second 50 years) was associated with “3 degrees,” and there was broad commitment to “2 degrees,” which was identified with an emissions rate of only half the current one in 50 years. In language that may be excessively colorful, I called this being “outflanked.” But no one that I know of became relaxed when they absorbed the wedges message.

c. “Well-intentioned groups misused the wedges theory.” I don’t recognize this thought. I myself contributed the Figure that accompanied Bill McKibben’s article in National Geographic that showed 12 wedges (seven wedges had grown to eight to keep emissions level, because of emissions growth post-2006, and the final four wedges drove emissions to half their current levels), to enlist the wedges image on behalf of a discussion of a two-degree future. I am not aware of anyone misusing the theory.

2. I did say “The job went from impossible to easy.” I said (on the same slide) that “psychologists are not surprised,” invoking cognitive dissonance. All of us are more comfortable with believing that any given job is impossible or easy than hard. I then go on to say that the job is hard. I think almost everyone knows that. Every wedge was and is a monumental undertaking. The political discourse tends not to go there.

3. I did say that there was and still is a widely held belief that the entire job of dealing with climate change over the next 50 years can be accomplished with energy efficiency and renewables. I don’t share this belief. The fossil fuel industries are formidable competitors. One of the points of Steve’s and my wedges paper was that we would need contributions from many of the available options. Our paper was a call for dialog among antagonists. We specifically identified CO2 capture and storage as a central element in climate strategy, in large part because it represents a way of aligning the interests of the fossil fuel industries with the objective of climate change.

…

It is distressing to see so much animus among people who have common goals. The message of Steve’s and my wedges paper was, above all, ecumenical.

My take? It’s rather pointless to argue the merits of 7 or 14 or 25 wedges. We don’t really know the answer in any detail. Do a little, learn, do some more. Socolow’s $50 to $100 a ton would be a good start.

The result: 284 tons CO2eq per million dollars of output. That translates to 340 kg for a $1200 computer. This is almost the same as Apple’s number, except that the Apple figure includes lifecycle emissions from use, which account for about a third of the total, so Apple’s manufacturing emissions are about a third lower than those of the generic computer sector in the EIO-LCA tool.
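The arithmetic behind the 340 kg figure is just the sector intensity scaled by the purchase price:

```python
# Sector intensity from the EIO-LCA tool, and the laptop's price.
intensity_t_per_musd = 284.0   # tons CO2eq per million dollars of output
price_usd = 1200.0

# tons per $M -> tons for this purchase -> kg
footprint_kg = intensity_t_per_musd * price_usd / 1e6 * 1000
print(f"{footprint_kg:.0f} kg CO2eq")
```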

Directionally, it’s interesting that Apple’s estimate (presumably a process-based accounting) is lower, given that manufacturing happens in China, where electricity and GDP are both carbon-intensive on average. I wouldn’t read too much into the differences without digging much deeper though.

We are pleased to announce the launch of the 2011 Climate CoLab Contest. This year, the question that the CoLab poses is:

How should the 21st century economy evolve bearing in mind the reality of climate change?

This year’s contest will feature two competition pools:

Global, whose proposals outline how a feature of the world economy should evolve,

Regional/national, whose proposals outline how a feature of a regional or national economy should evolve.

The contest will run for six months from May 16 to November 15. Winners will be selected based on voting by community members and review by the judges.

The winning teams will present their proposals at briefings at the United Nations in New York City and U.S. Congress in Washington, D.C. The Climate CoLab will sponsor one representative from each of the winning teams.

We encourage you to form teams with other CoLab members who share your regional or global interests. Fill out your profile and start debating and brainstorming. If you would like to join a team, please send me a message.

Here’s a pretty array of pendulums of different lengths and therefore different natural frequencies:

This is a nice demonstration of how structure (length) causes behavior (period of oscillation). You can also see a variety of interesting behavior patterns, like beats, as the oscillations move in and out of phase with one another.
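The structure-to-behavior link here is just the small-angle pendulum formula, T = 2π√(L/g). A quick sketch (lengths chosen arbitrarily for illustration):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def period(length_m):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / G)

# Longer pendulum -> longer period -> lower natural frequency.
for length in [0.25, 0.5, 1.0]:
    print(f"L = {length} m -> T = {period(length):.2f} s")
```

Because period scales with the square root of length, a row of slightly different lengths gives slightly different frequencies, which is what produces the beat patterns in the video.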

Synchronized metronomes:

These metronomes move in and out of sync as they’re coupled and uncoupled. This is interesting because it’s a fundamentally nonlinear process. Sync provides a nice account of such things, and there’s a nifty interactive coupled pendulum demo here.
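A toy model of the coupling effect (a Kuramoto-style phase model, not the actual metronome physics; frequencies and coupling strength are illustrative): two oscillators with slightly different natural frequencies drift apart when uncoupled, but lock to a constant phase gap once the coupling exceeds the frequency mismatch.

```python
import math

def phase_gap(coupling, w1=1.00, w2=1.05, dt=0.01, steps=20000):
    """Euler-integrate two coupled phase oscillators; return final phase gap."""
    th1, th2 = 0.0, 1.0
    for _ in range(steps):
        d = th2 - th1
        th1 += (w1 + coupling * math.sin(d)) * dt  # pulled toward the other
        th2 += (w2 - coupling * math.sin(d)) * dt
    return (th2 - th1) % (2 * math.pi)

print(f"K=0:   final gap = {phase_gap(0.0):.2f} rad (drifting)")
print(f"K=0.5: final gap = {phase_gap(0.5):.2f} rad (locked)")
```

The nonlinearity matters: below a threshold coupling the gap never settles, and above it the oscillators snap into sync, which is the on/off behavior visible in the video.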

Mousetrap fission:

This is a physical analog of an infection model or the Bass diffusion model. It illustrates shifting loop dominance – initially, positive feedback dominates due to the chain reaction of balls tripping new traps, ejecting more balls. After a while, negative feedback takes over as the number of live traps is depleted, and the reaction slows.
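The shifting loop dominance can be simulated in a few lines, treating set traps like susceptibles in an infection model (trap count, contact rate, and the one-ball-per-trap assumption are all illustrative):

```python
def mousetrap_run(traps=500, contact=2.0, steps=30):
    """Each flying ball trips new traps in proportion to the fraction still set."""
    set_traps, balls_in_air = traps - 1, 1.0  # one trap tripped to start
    trips = []
    for _ in range(steps):
        tripped = min(set_traps, contact * balls_in_air * set_traps / traps)
        set_traps -= tripped
        balls_in_air = tripped  # assume each tripped trap ejects one ball
        trips.append(tripped)
    return trips

h = mousetrap_run()
print(f"Trips per step peak at step {h.index(max(h))}, then die out")
```

Early on, `set_traps / traps` is near 1 and each ball trips about two more traps (positive feedback dominates); once most traps have fired, the same term throttles the reaction (negative feedback takes over), so trips per step rise and then collapse.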

There were three surprises when I recently ordered an Apple MacBook Pro. The first was how good the industrial design is compared to any PC laptop I’ve had. The second was getting a FedEx tracking number – straight from Shanghai. The third was how big the carbon footprint of this svelte machine is.

Grist covers a detailed report on the rebound effect, which recently appeared at ElectricityPolicy.com (pdf from NRDC). The report discusses a wide range of rebound arguments, basically concluding that rebounds are not a big deal.

Some of the reasons derive from the microeconomic effects of efficiency improvements. For example, improving the efficiency of light bulbs makes light services cheaper. But users don’t immediately increase lighting in proportion to the cost reduction, because their demand for lighting is saturated: there are only so many fixtures in a house, hours in the day requiring light, etc. Similarly, the elasticity of dirty dish production with respect to the energy cost of running a dishwasher is pretty darn low. This is reminiscent of the dynamics of process improvement at Analog Devices, where TQM improved productivity, but the company had a hard time translating that to expansion of its market niche in the short term.

I think the report underweights the long term effects of efficiency though. Efficiency increases contribute to aggregate productivity growth in the economy (more than you’d expect, if you believe that agency problems and other market failures create a bias toward overuse of energy). With wealth comes an expansion of energy use, hence the boom in such energy hogs as undercounter freezers and wine chillers, countering Energy Star improvement in refrigeration. However, this is not really an efficiency problem; it’s a progress problem, and it brings welfare benefits along with the added energy (at least until you get to the absurd margin).

The report cites an Energy Policy survey of empirical estimates:

Improvements in energy efficiency make energy services cheaper, and therefore encourage increased consumption of those services. This so-called direct rebound effect offsets the energy savings that may otherwise be achieved. This paper provides an overview of the theoretical and methodological issues relevant to estimating the direct rebound effect and summarises the empirical estimates that are currently available. The paper focuses entirely on household energy services, since this is where most of the evidence lies and points to a number of potential sources of bias that may lead the effect to be overestimated. For household energy services in the OECD, the paper concludes that the direct rebound effect should generally be less than 30%. doi:10.1016/j.enpol.2008.11.026
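The arithmetic implied by that 30% bound is simple: the realized savings from an efficiency measure are the engineering savings scaled down by the rebound fraction (the 20% improvement below is just an example value):

```python
def realized_savings(engineering_savings_pct, rebound):
    """Net energy savings after the direct rebound effect takes back a share."""
    return engineering_savings_pct * (1 - rebound)

# A 20% engineering improvement with the paper's upper-bound 30% rebound
# still delivers most of the savings.
print(f"{realized_savings(20.0, 0.30):.0f}% net savings")
```

Even at the upper bound, roughly 70% of the engineering savings survive, which is why the report concludes rebounds are not a big deal.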

Sadly, a press release for related studies from the same research group spins this as a catastrophe:

‘Rebound Effects’ Threaten Success of UK Climate Policy

This is really only a catastrophe for a politician foolish enough to try to set and hit a hard emissions target, with efficiency mandates as the only measure for achieving it. As soon as you have any course correction (i.e. negative feedback) built into your policies, like an adaptive carbon tax or cap & trade system (the latter being the less stable option), the catastrophe goes away. The real catastrophe is failing to price GHG emissions and other externalities due to misperceptions about efficiency.
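The "course correction" point can be made with a minimal sketch: an adaptive tax that raises the price in proportion to the gap between emissions and the target, i.e. a simple proportional controller. All numbers (elasticity, gain, starting values) are illustrative:

```python
def adaptive_tax(target=80.0, steps=40):
    """Emissions fall with price; price rises while emissions exceed target."""
    emissions, price = 100.0, 10.0
    for _ in range(steps):
        price += 0.5 * (emissions - target)          # negative feedback on the gap
        emissions = 100.0 * (price / 10.0) ** -0.3   # constant-elasticity demand
    return emissions

print(f"Emissions settle near {adaptive_tax():.1f} against a target of 80.0")
```

However large the rebound (i.e. however elastic demand turns out to be), the feedback loop keeps adjusting the price until the target is approached, which is why the "catastrophe" evaporates once any such correction is built in.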

The real bottom line for rebound effects should be, “who cares?” If rebound effects are large, efficiency programs have small energy effects, but potentially large welfare improvements (if you accept that there are energy market failures tending towards overconsumption), and emissions pricing has large energy effects, because high rebound implies high price elasticity. If rebound effects are small, efficiency programs work and emissions pricing is a good way to collect taxes. Neither condition is a reason to avoid efficiency or emissions pricing, though emissions pricing is the preferable way to proceed.