I introduced some ideas from systems thinking last month, and especially the idea of second order cybernetics: the study of how people’s perceptions of systems affect their ability to understand and control them. I want to pick up on this idea, because I think it’s crucial to understanding the predicament we’re now in with respect to climate change. The system of systems that we have to understand in order to grasp the challenges of climate change is so complex that naturally, everyone sees it a little differently.

When describing relatively simple systems, most people’s descriptions coincide to some degree. Typically, one person will give more detail than another, such that the simpler description is completely subsumed in the more detailed one. However, for more complex systems, different people’s descriptions tend to diverge more. For any reasonably complex system, it will be impossible to completely derive any one person’s description from another person’s – each will offer unique details that the other missed. Weinberg dubs this the principle of complementarity in his book on General Systems Thinking: any two descriptions of a complex system are likely to be complementary.

Here’s a simple example – these two photos are of the same lake, but are complementary views:

The principle applies whenever we have partial descriptions of the world from our observers, and may disappear if we ask the observers to make increasingly detailed observations. Assuming they really are describing the same system, it should eventually be possible to reconcile their descriptions completely. For example, with a little effort, you can match up the peaks in the two photos above, and even some of the trees (it’s a little easier with the enlarged photos – click on them for bigger). Unfortunately, if the systems are complex enough, the descriptions can only ever be partial, and it may be infeasible to trace down every last detail in order to reconcile them.

When it comes to climate, the principle of complementarity works overtime. People end up talking past one another because they don’t even realise they’re describing the same systems – their descriptions appear to have no common ground. For example, one person might talk in terms of atmospheric carbon concentrations, and emphasize the need to stop using fossil fuels. Another person might talk in terms of the costs of climate policies, and the risk to the economy if we place a price on carbon. Because they don’t stop to explore how the systems they are describing inter-relate, they don’t understand that they are each focussing on just one part of a much larger system of systems.

And the problem is that most people are so embedded in a particular worldview, they are incapable of understanding the systems in the way that others see them. To illustrate the depth of this problem, consider this story from Bill Tomlinson’s book “Greening through IT“:

One day, when I was in graduate school, I was walking along a paved bicycle path near my Davis Square apartment in Somerville, Massachusetts, on the way to the T station (the Boston area subway). A father and son were walking a few yards in front of me. The boy was about four years old. He was running back and forth across the path, looking under rocks and investigating things. I saw him find something small, pick it up, and carry it over to his father. I heard the father say, “Oh, you found a snail!” I could feel a life lesson about to ensue. “Let’s see how far you can chuck that snail, Bobby!” (p109)

I feel a strong sense of revulsion towards this father, because my values are very different from his. I see the snail as a fascinating creature, to be studied and admired for its behaviours, and its interaction with the urban environment in which it lives – my kids and I have spent ages admiring how they wave their feelers and how they move. The father in the story sees the snail as part of a system of objects that can be hefted and thrown in sport. But this is just the principle of complementarity at work: we’re focussing on very different systems, which overlap. If we can’t step back and understand how our different values cause us to have complementary views of the ‘same’ system, then we’ll never manage to reach agreement on the broader goals of tackling a problem as complex as climate change.

In pulling together my thoughts for a workshop last week on systems thinking, I’ve realised how much systems thinking has affected my approach to climate change, and how systems thinking is an essential tool for understanding the different responses people have to climate change. For systems thinking offers not just a way to think about and understand the interactions that occur in very complex systems, but also a way of understanding how people relate to systems, and how our conceptions of systems affect our interactions with them.

A simple introduction to systems thinking usually starts by pointing out how familiar we are with the idea of “a system” – for example we use the word as a suffix in many different ways: an ecosystem, the transport system, the education system, a weather system, the political system, a computer system, and so on. [Note: The use of the definite article, “the … system”, is a little unfortunate here, as we shall see].

Most people are used to the idea of identifying different aspects of a system they wish to describe: inputs and outputs, a control (or management) mechanism, a boundary that separates the system from its environment, a possible purpose or function of the system, different elements or subsystems, different states that the system can be in, and so on.

This then leads to insights about the dynamic behaviour of a system, especially in terms of stocks and flows, and positive and negative feedback loops. For example, John Sterman has a simple demonstration of stocks and flows in an atmospheric system, with his bathtub model of greenhouse gas emissions and concentrations.
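The bathtub idea is easy to sketch in code. Here’s a minimal stock-and-flow simulation in that spirit – the numbers are purely illustrative, and this is my own toy sketch, not Sterman’s actual model:

```python
# A minimal sketch of the bathtub idea: atmospheric CO2 as a stock,
# emissions as a constant inflow, natural removal as an outflow
# proportional to the stock. All numbers are illustrative only.

def simulate_bathtub(initial_stock, inflow, removal_rate, years):
    """Track a stock over time: each year, stock gains the inflow
    and loses removal_rate * stock as outflow."""
    stock = initial_stock
    history = [stock]
    for _ in range(years):
        outflow = removal_rate * stock
        stock += inflow - outflow
        history.append(stock)
    return history

# Even with a constant inflow, the stock keeps rising until the
# outflow grows large enough to balance it -- the bathtub insight:
# stabilizing emissions does not stabilize concentrations.
levels = simulate_bathtub(initial_stock=800.0, inflow=10.0,
                          removal_rate=0.005, years=50)
print(levels[0], levels[-1])
```

Running this, the stock climbs steadily even though the inflow never increases – exactly the counter-intuitive dynamic Sterman finds that most people get wrong.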

But where systems thinking really gets interesting is when we include ourselves as part of the system we’re describing. For example, for the climate system, we should include ourselves as elements of the system, as many of our actions affect the release of greenhouse gases. But we’re also the agents that give some aspects of the system their meaning or purpose – the fossil fuel extraction and production system exists to provide us with energy, and one could even argue that the climate system exists to provide us with suitable conditions to live in, and that ecosystems exist to provide us with food, resources, and even a sense of wonder and belonging. The interesting part of this is that different people will ascribe different meanings and/or purposes to these systems, and some would argue that to ascribe such purposes is inappropriate.

Which leads us to the next level of insight, which is that these descriptions of systems are really just ways of looking at the world, and different people will see and describe different systems, even when observing the same parts of the world. As Reynolds points out, systems thinking starts when we begin to see the world through other people’s eyes, and the idea of multiple perspectives is a central concept. In this sense, systems don’t really exist in the world at all, they only exist as convenient descriptions of the world. Moreover, when we choose to describe some part of the world as a system, we make explicit choices about where to draw boundaries, and which things to ignore, and these choices themselves are important, because they reveal our biases and interests, and certain choices may help or hinder our attempts to analyze a system.

Taking this even further, we can then conceive of the system that consists of a group of people and their descriptions of the systems they are interested in, and we can study the dynamics of this system: how people affect one another’s perceptions of the systems, and how those perceptions shape their interactions with those systems. For example, we could describe climate change primarily in terms of the physical processes: carbon emissions, the radiative balance of the atmosphere, average temperatures, and impacts on human life and ecosystems. This leads to a view of the problem of climate change as primarily about reducing emissions (and many people who write about climate change take this view). Alternatively, we could describe climate change as one aspect of a system of human growth (in population, energy use, resource use, economic activity, etc) and the many ways in which that growth is constrained on a finite planet. Which then leads to a very different characterization of the problem in which carbon emissions are really just a by-product of a cheap energy consumerist society, and the problem isn’t to reduce emissions, it is to restructure our entire societies (and our conceptions of them) so that we no longer depend on growth in resource consumption as our definition of human progress.

A key term here is second-order cybernetics. Cybernetics (of the first order) studies the ways in which processes can be controlled, and the engineering of process control systems. Second order cybernetics studies how our perceptions of systems affect our ability to design ways of controlling them. In other words, there are interesting dynamics in the interplay between our understanding of systems, and our attempts to design controllers for them. Much of the problem in understanding and responding to climate change is due to a failure by most writers to appreciate the dynamics in second order cybernetic systems.
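To make the distinction concrete, here’s a toy first-order controller – a proportional thermostat with made-up numbers, invented purely for illustration. The second-order question lives in the line that simulates the room: the controller embodies the designer’s *model* of the system, and if that model is wrong, the controller confidently does the wrong thing:

```python
# First-order cybernetics in miniature: a proportional controller
# steering a heater toward a setpoint. All numbers are illustrative.

def control_step(temp, setpoint, gain=0.5):
    """Return heater output proportional to the error signal
    (clamped at zero -- the heater can't cool)."""
    error = setpoint - temp
    return max(0.0, gain * error)

temp = 15.0
for _ in range(30):
    heat = control_step(temp, setpoint=20.0)
    # The designer's model of the room: heating minus loss to a
    # 10-degree exterior. A wrong model here (an open window, thermal
    # lag) is invisible to the controller -- that mismatch between
    # model and world is what second-order cybernetics studies.
    temp += 0.2 * heat - 0.05 * (temp - 10.0)

print(round(temp, 1))  # settles near 16.7 -- short of the setpoint
```

Note that even this well-behaved controller settles below its setpoint (the classic steady-state error of pure proportional control) – a small example of how the control design, not the physics, shapes the outcome.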

I’ll write more about the application of systems thinking to climate change in the next few weeks. In the meantime, here’s some recommended reading – two excellent introductory books, which I think might appeal to different audiences:

Here’s an interesting article entitled “Decoding the Value of Computer Science” in the Chronicle of Higher Education. The article purports to be about the importance of computer science degrees, and the risks of not enough people enrolling for such degrees these days. But it seems to me it does a much better job of demonstrating the idea of computational thinking, i.e. that people who have been trained to program approach problems differently from those who have not.

It’s this approach to problem solving that I think we need more of in tackling the challenge of climate change.

I went to a workshop earlier this week on “the Future of Software Engineering Research” in Santa Fe. My main excuse to attend was to see how much interest I could raise in getting more software engineering researchers to engage in the problem of climate change – I presented my paper “Climate Change: A Software Grand Challenge“. But I came away from the workshop with very mixed feelings. I met some fascinating people, and had very interesting discussions about research challenges, but overall, the tone of the workshop (especially the closing plenary discussion) seemed to be far more about navel-gazing and doing “more of the same”, rather than rising to new challenges.

The break-out group I participated in focussed on the role of software in addressing societal grand challenges. We came up with a brief list of such challenges: Climate Change; Energy; Safety & Security; Transportation; Health and Healthcare; Livable Mega-Cities. In all cases, we’re dealing with complex systems-of-systems, with all the properties laid out in the SEI report on Ultra-Large Scale Systems – decentralized systems with no clear ownership; systems that undergo continuous evolution while they are being used (you can’t take the system down for maintenance and upgrades); systems built from heterogeneous elements that are constructed at different times by different communities for different purposes; systems where traditional distinctions between developers and users disappear, as the human activity and technical functionality intertwine. And systems where the “requirements” are fundamentally unknowable – these systems simultaneously serve multiple purposes for multiple communities.

I’ve argued in the past that really all software is like this, but that we pretend otherwise by drawing boundaries around small pieces of functionality so that we can ignore the uncertainties in the broader social system in which it will be used. Traditional approaches to software engineering work when we can get away with this game – on those occasions when it’s possible to get local agreement about a specific set of software functions that will help solve a local problem. The fact that software engineers tend to insist on writing a specification is a symptom that they are playing this game. But such agreements/specifications are always local and temporary, which means that software built in this way is frequently disappointing or frustrating to use.

So, for societal grand challenge problems, what is the role of software engineering research, and what kinds of software engineering might be effective? In our break-out group, we talked a lot about examples of emergent successful systems such as Facebook and Wikipedia (and even the web itself), which were built not by any recognizable software development process, but by small groups of people incrementally adding to an evolving infrastructure, each nudging it a little further down an interesting road. And by frequently getting it wrong, and seeking continual improvement when things do go wrong. Software innovation is then an emergent feature in these endeavours, but it is the people and the way they collaborate that matters, rather than any particular approach to software development.

Obviously, software alone cannot solve these societal grand challenges, but software does have a vital role to play: good software infrastructure can catalyze the engagement of multiple communities, who together can tackle the challenges. In our break-out group, we talked specifically about healthcare and climate change – in both cases there are lots of individuals and communities with ideas and enthusiasm, but who are hampered by socio-technical barriers: lack of data exchange standards, lack of appropriate organizational structures, lack of institutional support, lack of a suitable framework for exploratory software development, tools that ignore key domain concepts. It seems increasingly clear that typical governmental approaches to information systems will not solve these problems. You can’t just put out a call for tender and commission construction of an ultra-large scale system; you have to evolve it from multiple existing systems. Witness repeated failures of efforts around shared health records, carbon accounting systems, etc. But governments do need to create the technical infrastructure and nurture the coming together of inter-disciplinary communities to address these challenges, and strategic funding of trans-disciplinary research projects is a key element.

But what was the response at the workshop to these issues? The breakout groups presented their ideas back to the workshop plenary on the final afternoon, and the resulting discussion was seriously underwhelming. Several people (I could characterize them as the “old guard” in the software engineering research community) stood up to speak out against making the field more inter-disciplinary. They don’t want to see the “core” of the field diluted in any way. There were some (unconvincing) arguments that software engineering research has had a stronger impact than most people acknowledge. And a long discussion that the future of software engineering research lies in stronger ties between academic and industrial software engineering. Never mind that increasingly, software is developed outside the “software industry”: e.g. open source projects, scientific software, end-user programmers, community engagement, and of course college students building web tools that go on to take the internet world by storm. All this is irrelevant to the old guard – they want to keep on believing that the only software engineering that matters is that which can be built to a specification by a large software company.

I came away from the workshop with the feeling that this community is in the process of dooming itself to irrelevancy. But then, as was pointed out to me over lunch today, the people who have done the best under the existing system are unlikely to want to change it. Innovation in software research won’t come from the distinguished senior people in the field…

Many moons ago, I talked about the danger of being distracted by our carbon footprints. I argued that the climate crisis cannot be solved by voluntary action by the (few) people who understand what we’re facing. The problem is systemic, and so adequate responses must be systemic too.

In the years since 9/11, it’s gotten steadily more frustrating to fly, as the lines build up at the security checkpoints, and we have to put more and more of what we’re wearing through the scanners. This doesn’t dissuade people from flying, but it does make them much more grumpy about it. And it doesn’t make them any safer, either. Bruce Schneier calls it “Security Theatre“: countermeasures that make it look like something is being done at the airport, but which make no difference to actual security. Bruce runs a regular competition to think up a movie plot that will create a new type of fear and hence enable the marketing of a new type of security theatre countermeasure.

Now Jon Udell joins the dots and points out that we have an equivalent problem in environmentalism: Carbon Theatre. Except that he doesn’t quite push the concept far enough. In Jon’s version, carbon theatre is competitions and online quizzes and so on, in which we talk about how we’re going to reduce our carbon footprints more than the next guy, rather than actually getting on and doing things that make a difference.

I think carbon theatre is more insidious than that. It’s the very idea that an appropriate response to climate change is to make personal sacrifices. Like giving up flying. And driving. And running the air conditioner. And so on. The problem is, we approach these things like a dieter approaches the goal of losing weight. We make personal sacrifices that are simply not sustainable. For most people, dieting doesn’t work. It doesn’t work because, although the new diet might be healthier, it’s either less convenient or less enjoyable. Which means sooner or later, you fall off the wagon, because it’s simply not possible to maintain the effort and sacrifice indefinitely.

Carbon theatre means focussing on carbon footprint reduction without fixing the broader system that would make such changes sustainable. You can’t build a solution to climate change by asking people to give up the conveniences of modern life. Oh, sure, you can get people to set personal goals, and maybe even achieve them (temporarily). But if it requires a continual effort to sustain, you haven’t achieved anything. If it involves giving up things that you enjoy, and that others around you continue to enjoy, then it’s not a sustainable change.

I’ve struggled for many years to justify the fact that I fly a lot. A few long-haul flights in a year adds enough to my carbon footprint that just about anything else I do around the house is irrelevant. Apparently a lot of scientists worry about this too. When I blogged about the AGU meeting, the first comment worried about the collective carbon footprint of all those scientists flying to the meeting. George Marshall worries that this undermines the credibility of climate scientists (or maybe he’s even arguing that it means climate scientists still don’t really believe their own results). Somehow all these people seem to think it’s more important for climate scientists to give up flying than it is for, say, investment bankers or oil company executives. Surely that’s completely backwards??

This is, of course, the wrong way to think about the problem. If climate scientists unilaterally give up flying, it will make no discernible difference to the global emissions of the airline industry. And it will make the scientists a lot less effective, because it’s almost impossible to do good science without the networking and exchange of ideas that goes on at scientific conferences. And even if we advocate that everyone who really understands the magnitude of the climate crisis also gives up flying, it still doesn’t add up to a useful solution. We end up giving the impression that if you believe that climate change is a serious problem you have to make big personal sacrifices. Which makes it just that much harder for many people to accept that we do have a problem.

For example, I’ve tried giving up short haul flights in favour of taking the train. But often the train is more expensive and more hassle. If there is no direct train service to my destination, it’s difficult to plan a route, buy tickets, and the trains are never timed to connect in the right way. By making the switch, I’m inconveniencing myself, for no tangible outcome. I’d be far more effective getting together with others who understand the problem, and fixing the train system to make it cheaper and easier. Or helping existing political groups who are working towards this goal. If we make the train cheaper and easier than flying, it will be easy to persuade large numbers of people to switch as well.

So, am I arguing that working on our carbon footprints is a waste of time? Well, yes and no. It’s a waste of time if you’re doing it by giving up stuff that you’d rather not give up. However, it is worth it if you find a way to do it that could be copied by millions of other people with very little effort. In other words, if it’s not (massively) repeatable and sustainable, it’s probably a waste of time. We need changes that scale up, and we need to change the economic and policy frameworks to support such changes. That won’t happen if the people who understand what needs doing focus inwards on their own personal footprints. We have to think in terms of whole systems.

There is a caveat: sacrifices such as temporarily giving up flying are worthwhile if done as a way of understanding the role of flying in our lives, and the choices we make about travel; they might also be worthwhile if done as part of a coordinated political campaign to draw attention to a problem. But as a personal contribution to carbon reduction? That’s just carbon theatre.

The recording of my Software Engineering for the Planet talk is now available online. Having watched it, I’m not terribly happy with it – it’s too slow, too long, and I make a few technical mistakes. But hey, it’s there. For anyone already familiar with the climate science, I would recommend starting around 50:00 (slide 45) when I get to part 2 – what should we do?

The slides are also available as a pdf with my speaking notes (part 1 and part 2), along with the talk that Spencer gave in the original presentation at ICSE. I’d recommend these pdfs rather than the video of me droning on….

Having given the talk three times now, I have some reflections on how I’d do it differently. First, I’d dramatically cut down the first part on the climate science, and spend longer on the second half – what software researchers and software engineers can do to help. I also need to handle skeptics in the audience better. There’s always one or two, and they ask questions based on typical skeptic talking points. I’ve attempted each time to answer these questions patiently and honestly, but it slows me down and takes me off-track. I probably need to just hold such questions to the end.

Mistakes? There are a few obvious ones:

On slide 11, I present a synoptic view of the earth’s temperature record going back 500 million years (it’s this graph from wikipedia). I use it to put current climate change into perspective, but also to make the point that small changes in the earth’s temperature can be dramatic – in particular, the graph indicates that the difference between the last ice age and the current inter-glacial is about 2°C average global temperature. I’m now no longer sure this is correct. Most textbooks say it was around 8°C colder in the last ice age, but these appear to be based on an assumption that temperature readings taken from ice cores at the poles represent global averages. The temperature change at the poles is always much greater than the global average, but it’s hard to compute a precise estimate of global average temperature from polar records. Hansen’s reconstructions seem to suggest 3°C-4°C. So the 2°C rise shown on the wikipedia chart is almost certainly an underestimate. But I’m still trying to find a good peer-reviewed account of this question.

On slide 22, I talk about Arrhenius’s initial calculation of climate sensitivity (to a doubling of CO2) back in the 1890s. His figure was 4°C-5°C, whereas the IPCC’s current estimates are 2°C-4.5°C. And I need to pronounce his name correctly.
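For anyone wanting to play with the numbers: the modern descendant of Arrhenius’s result is the textbook approximation that equilibrium warming scales with the logarithm of the concentration ratio, so a single sensitivity figure gives the warming per doubling. This sketch is that standard approximation (with an illustrative sensitivity value), not Arrhenius’s original calculation:

```python
import math

# Equilibrium warming for a change in CO2 concentration, using the
# standard logarithmic approximation: a fixed sensitivity gives the
# warming per doubling of CO2. The sensitivity value below is
# illustrative, picked from within the IPCC range mentioned above.

def warming(c_new, c_old, sensitivity):
    """Warming in degrees C, given concentrations in ppm and
    climate sensitivity in degrees C per doubling."""
    return sensitivity * math.log2(c_new / c_old)

# A doubling (280 -> 560 ppm) yields exactly the sensitivity:
print(warming(560, 280, 3.0))               # 3.0
# A 50% increase yields proportionally less, because of the log:
print(round(warming(420, 280, 3.0), 2))     # 1.75
```

The logarithm is why each successive increment of CO2 adds a little less warming than the last – and why sensitivity is always quoted “per doubling” rather than per ppm.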

Having talked with some of our graduate students about how to get a more inter-disciplinary education while they are in grad school, I’ve been collecting links to collaborative grad programs at U of T:

The Dynamics of Global Change Doctoral Program, housed in the Munk Centre. The core course, DGC1000H is very interesting – it starts with Malcolm Gladwell’s Tipping Point book, and then tours through money, religion, pandemics, climate change, the internet and ICTs, and development. What a wonderful journey.

Had an interesting conversation this afternoon with Brad Bass. Brad is a prof in the Centre for Environment at U of T, and was one of the pioneers of the use of models to explore adaptations to climate change. His agent based simulations explore how systems react to environmental change, e.g. exploring population balance among animals, insects, the growth of vector-borne diseases, and even entire cities. One of his models is Cobweb, an open-source platform for agent-based simulations.
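To give a flavour of what agent-based simulation looks like, here’s a generic toy model – not Cobweb’s actual code; the grid size, feeding rules, and regrowth rate are all invented for illustration:

```python
import random

# A toy agent-based simulation: agents wander a grid consuming a
# resource that regrows each step; agents that fail to find enough
# food die off. Everything here is invented for illustration --
# this is not Cobweb's model.

random.seed(42)   # deterministic run for repeatability

GRID = 20
resource = [[1.0] * GRID for _ in range(GRID)]
agents = [(random.randrange(GRID), random.randrange(GRID))
          for _ in range(40)]

def step():
    global agents
    survivors = []
    for (x, y) in agents:
        # move to a random neighbouring cell (wrapping at the edges)
        x = (x + random.choice([-1, 0, 1])) % GRID
        y = (y + random.choice([-1, 0, 1])) % GRID
        if resource[x][y] >= 0.5:        # enough food here to survive
            resource[x][y] -= 0.5
            survivors.append((x, y))
    # the resource regrows a little everywhere, up to a ceiling
    for row in resource:
        for j in range(GRID):
            row[j] = min(1.0, row[j] + 0.1)
    agents = survivors

for _ in range(20):
    step()
print(len(agents))
```

The interesting behaviour is emergent: nothing in the code sets a carrying capacity, yet the population settles at whatever level the resource regrowth can support – the kind of system-level balance that models like Bass’s explore at much greater fidelity.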

He’s also involved in the Canadian Climate Change Scenarios Network, which takes outputs from the major climate simulation models around the world, and extracts information on the regional effects on Canada, particularly relevant for scientists who want to know about variability and extremes on a regional scale.

We also talked a lot about educating kids, and kicked around some ideas for how you could give kids simplified simulation models to play with (along the line that Jon was exploring as a possible project), to get them doing hands on experimentation with the effects of climate change. We might get one of our summer students to explore this idea, and Brad has promised to come talk to them in May once they start with us.

Computer Science, as an undergraduate degree, is in trouble. Enrollments have dropped steadily throughout this decade: for example at U of T, our enrollment is about half what it was at the peak. The same is true across the whole of North America. There is some encouraging news: enrollments picked up a little this year (after a serious recruitment drive, ours is up about 20% from its nadir, while across the US it’s up 6.2%). But it’s way too early to assume they will climb back up to where they were. Oh, and the percentage of women students in CS now averages 12% – the lowest ever.

What happened? One explanation is career expectations. In the 80’s, it was common wisdom that a career in computers was an excellent move, for anyone showing an aptitude for maths. In the 90’s, with the birth of the web, computer science even became cool for a while, and enrollments grew dramatically, with a steady improvement in gender balance too. Then came the dotcom boom and bust, and suddenly a computer science degree was no longer a sure bet. I’m told by our high school liaison team that parents of high school students haven’t got the message that the computer industry is short of graduates to recruit (although with the current recession that’s changing again anyway).

A more likely explanation is perceived relevance. In the 80’s, with the birth of the PC, and in the 90’s with the growth of the web, computer science seemed like the heart of an exciting revolution. But now computers are ubiquitous, they’re no longer particularly interesting. Kids take them for granted, and only a few über-geeks are truly interested in what’s inside the box. But computer science departments continue to draw boundaries around computer science and its subfields in a way that just encourages the fragmentation of knowledge that is so endemic in modern universities.

Which is why an experiment at Georgia Tech is particularly interesting. The College of Computing at Georgia Tech has managed to buck the enrollment trend, with enrollment numbers holding steady throughout this decade. The explanation appears to be a radical re-design of their undergraduate degree, into a set of eight threads. For a detailed explanation, there’s a white paper, but the basic aim is to get students to take more ownership of their degree programs (as opposed to waiting to be spoonfed), and to re-describe computer science in terms that make sense to the rest of the world (computer scientists often forget that the field is impenetrable to the outsider). The eight threads are: Modeling and simulation; Devices (embedded in the physical world); Theory; Information internetworks; Intelligence; Media (use of computers for more creative expression); People (human-centred design); and Platforms (computer architectures, etc). Students pick any two threads, and the program is designed so that any combination covers most of what you would expect to see in a traditional CS degree.

At first sight, it seems this is just a re-labeling effort, with the traditional subfields of CS (e.g. OS, networks, DB, HCI, AI, etc) mapping on to individual threads. But actually, it’s far more interesting than that. The threads are designed to re-contextualize knowledge. Instead of students picking from a buffet of CS courses, each thread is designed so that students see how the knowledge and skills they are developing can be applied in interesting ways. Most importantly, the threads cross many traditional disciplinary boundaries, weaving a diverse set of courses into a coherent theme, showing the students how their developing CS skills combine in intellectually stimulating ways, and preparing them for the connected thinking needed for inter-disciplinary problem solving.

For example the People thread brings in psychology and sociology, examining the role of computers in the human activity systems that give them purpose. It explores the perceptual and cognitive abilities of people as well as design practices for practical socio-technical systems. The Modeling and Simulation thread explores how computational tools are used in a wide variety of sciences to help understand the world. Following this thread will require consideration of the epistemology of scientific knowledge, as well as mastery of the technical machinery by which we create models and simulations, and the underlying mathematics. The thread includes a big dose of both continuous and discrete math, data mining, and high performance computing. Just imagine what graduates of these two threads would be able to do for our research on SE and the climate crisis! The other thing I hope it will do is to help students to know their own strengths and passions, and be able to communicate effectively with others.

The good news is that our department decided this week to explore our own version of threads. Our aim is to learn from the experience at Georgia Tech and avoid some of the problems they have experienced (for example, by allowing every possible combination of 8 threads, it appears they have created too many constraints on timetabling and provisioning individual courses). I’ll blog this initiative as it unfolds.

Many years ago, Dave Parnas wrote a fascinating essay on software aging, in which he compares old software with old people, pointing out that software gets frail, less able to do things than when it was young, and gets more prone to disease and obesity (actually, I can’t remember whether he mentioned obesity, but you get the idea – software bloat). At some point we’re better off retiring the old system rather than trying to keep updating it.
Well, this quote by Thomas Friedman that showed up on Gristmill over the weekend made me think more about how our entire economic system is in the same boat. We’ve got to the point where we can’t patch it any longer without just making it worse. Is it time for industrialization 2.0? Or maybe it should be globalization 2.0?
The question is, do any of our political leaders understand this? No sign of any enlightenment in Canada’s House of Commons, I’m afraid.

Greg reminded me the other day about Jeanette Wing’s writings about “computational thinking”. Is this what I have in mind when I talk about the contribution software engineers can make in tackling the climate crisis? Well, yes and no. I think that this way of thinking about problems is very important, and corresponds with my intuition that learning how to program changes how you think.

But ultimately, I found Jeanette’s description of computational thinking to be very disappointing, because she concentrates too much on algorithmics and machine metaphors. This reminds me of the model of the mind as a computer, used by cognitive scientists – it’s an interesting perspective that opens up new research directions, but is ultimately limiting because it leads to the problem of disembodied cognition: treating the mind as independent from its context. I think software engineering (or at least systems analysis) adds something else, more akin to systems thinking. It’s the ability to analyse the interconnectedness of multiple systems. The ability to reason about multiple stakeholders and their interdependencies (where most of the actors are not computational devices!). And the rich set of abstractions we use to think about structure, behaviour and function of very complex systems-of-systems. Somewhere in the union of computational thinking and systems thinking.