Blog by James Millington, PhD

Tag: Philosophical

This week I visited one of my former PhD advisors, Prof John Wainwright, at Durham University. We’ve been working on a manuscript together for a while now and as it’s stalled recently we thought it time we met up to re-inject some energy into it. The manuscript is a discussion piece about how agent-based modelling (ABM) can contribute to understanding and explanation in geography. We started talking about the idea in Pittsburgh in 2011 at a conference on the Epistemology of Modeling and Simulation. I searched through this blog to see where I’d mentioned the conference and manuscript before, but to my surprise, before this post I hadn’t.

In our discussion of what we can learn through using ABM, John highlighted the work of Kurt Gödel and his incompleteness theorems. Not knowing all that much about that stuff, I’ve been ploughing my way through Douglas Hofstadter’s tome ‘Gödel, Escher, Bach: An Eternal Golden Braid’ – heavy going in places but very interesting. In particular, his discussion of the concept of recursion has caught my attention, as it’s something I’ve been noticing elsewhere.

The general concept of recursion involves nesting, like Russian dolls, stories within stories (like in Don Quixote) and images within images.

Computer programmers take advantage of recursion in their code, calling a given procedure from within that same procedure (hence their love of recursive acronyms like PHP [PHP: Hypertext Preprocessor]). An example of how this works is Saura and Martínez-Millán’s modified random clusters method for generating land cover patterns with given properties. I used this method in the simulation model I developed during my PhD and have re-coded the original algorithm for use in NetLogo [available online here]. In that code the grow-cover_cluster procedure is called from within itself, allowing clusters of pixels to ‘grow themselves’.
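The NetLogo listing itself isn’t reproduced here, but the recursive idea can be sketched minimally in Python (a hypothetical illustration of the pattern, not Saura and Martínez-Millán’s actual algorithm; function and parameter names are mine):

```python
import random

def grow_cluster(grid, row, col, cover, p_spread):
    """Recursively grow a cluster of land cover `cover` from (row, col).

    The function calls itself from within itself -- the same pattern
    as the grow-cover_cluster procedure in the NetLogo version.
    """
    grid[row][col] = cover
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        # neighbours inside the grid and not yet assigned may join
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
            if grid[r][c] is None and random.random() < p_spread:
                grow_cluster(grid, r, c, cover, p_spread)

random.seed(1)
size = 20
grid = [[None] * size for _ in range(size)]
grow_cluster(grid, size // 2, size // 2, "forest", p_spread=0.4)
cluster_size = sum(row.count("forest") for row in grid)
```

Because each call may spawn further calls on unassigned neighbours, the cluster ‘grows itself’ until the spread probability or the grid edge stops it.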

However, rather than get into the details of the use of recursion in programming, I want to highlight two other ways in which recursion is important in social activity and its simulation.

The first is how society (and social phenomena) has a recursive relationship with the people (and their activities) composing it. For example, Anthony Giddens’s theory of structuration argues that the social structures (i.e., rules and resources) that constrain or prompt individuals’ actions are also ultimately the result of those actions. Hence, there is a duality of structure which is:

“the essential recursiveness of social life, as constituted in social practices: structure is both medium and outcome of reproduction of practices. Structure enters simultaneously into the constitution of the agent and social practices, and ‘exists’ in the generating moments of this constitution”. (p.5 Giddens 1979)

Another example comes from Andrew Sayer in his latest book ‘Why Things Matter to People’, which I’m also working through currently. One of Sayer’s arguments is that we humans are “evaluative beings: we don’t just think and interact but evaluate things”. For Sayer, these day-to-day evaluations have a recursive relationship with the broader values that individuals hold, values being ‘sedimented’ valuations, “based on repeated particular experiences and valuations of actions, but [which also tend], recursively, to shape subsequent particular valuations of people and their actions”. (p.26 Sayer 2011)

However, while recursion is often used in computer programming and has been suggested as playing a role in different social processes (like those above), its examination in social simulation and ABM has not been so prominent to date. This was a point made by Paul Thagard at the Pittsburgh epistemology conference. Here, it seems, is an opportunity for those seeking to use simulation methods to better understand social patterns and phenomena. For example, in an ABM how do the interactions between individual agents combine to produce structures which in turn influence future interactions between agents?
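As a toy illustration of that closing question (a hypothetical sketch of my own, not a model from the manuscript), consider agents whose individual choices aggregate into a ‘structure’ that then conditions each agent’s next choice:

```python
import random

random.seed(42)

# Hypothetical toy model of the structure-agency loop: agents choose
# action 1 or 0; the 'structure' is the prevailing frequency of action
# 1, which is both the outcome of past actions and the medium shaping
# the next round of actions.
n_agents, n_steps = 100, 50
actions = [random.randint(0, 1) for _ in range(n_agents)]

history = []
for _ in range(n_steps):
    structure = sum(actions) / n_agents  # emergent structure (a norm)
    history.append(structure)
    # each agent's next action is conditioned by the structure that
    # earlier actions produced -- structure as 'medium and outcome'
    actions = [1 if random.random() < structure else 0
               for _ in range(n_agents)]
```

Even in a model this minimal, the trajectory in `history` is produced neither by the agents alone nor by the structure alone, but by their recursive interplay.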

Second, it seems to me that there are potentially recursive processes surrounding any single simulation model. For if those we simulate should encounter the model in which they are represented (e.g., through participatory evaluation of the model), and if that encounter influences their future actions, do we not then need to account for such interactions between model and modelee (i.e., the person being modelled) in the model itself? This is a point I raised in the chapter I helped John Wainwright and Dr Mark Mulligan re-write for the second edition of their edited book “Environmental Modelling: Finding Simplicity in Complexity”:

“At the outset of this chapter we highlighted the inherent unpredictability of human behaviour and several of the examples we have presented may have done little to persuade you that current models of decision-making can make accurate forecasts about the future. A major reason for this unpredictability is because socio-economic systems are ‘open’ and have a propensity to structural changes in the very relationships that we hope to model. By open, we mean that the systems have flows of mass, energy, information and values into and out of them that may cause changes in political, economic, social and cultural meanings, processes and states. As a result, the behaviour and relationships of components are open to modification by events and phenomena from outside the system of study. This modification can even apply to us as modellers because of what economist George Soros has termed the ‘human uncertainty principle’ (Soros 2003). Soros draws parallels between his principle and the Heisenberg uncertainty principle in quantum mechanics. However, a more appropriate way to think about this problem might be by considering the distinction Ian Hacking makes between the classification of ‘indifferent’ and ‘interactive’ kinds (Hacking, 1999; also see Hoggart et al., 2002). Indifferent kinds – such as trees, rocks, or fish – are not aware that they are being classified by an observer. In contrast humans are ‘interactive kinds’ because they are aware and can respond to how they are being classified (including how modellers classify different kinds of agent behaviour in their models). Whereas indifferent kinds do not modify their behaviour because of their classification, an interactive kind might. This situation has the potential to invalidate a model of interactive kinds before it has even been used. For example, even if a modeller has correctly classified risk-takers vs. risk avoiders initially, a person in the system being modelled may modify their behaviour (e.g., their evaluation of certain risks) on seeing the results of that behaviour in the model. Although the initial structure of the model was appropriate, the model may potentially later lead to its own invalidity!” (p. 304, Millington et al. 2013)

The new edition was just published this week and will continue to be a great resource for teaching at upper levels (I used the first edition in the Systems Modeling and Simulation course I taught at MSU, for example).

More recently, I discussed these ideas about how models interact with their subjects with Peter McBurney, Professor in Informatics here at KCL. Peter has written a great article entitled ‘What are Models For?’, although it’s somewhat hidden away in the proceedings of a conference. In a similar manner to Epstein, Peter lists the various possible uses for simulation models (other than prediction, which is only one of many) and also discusses two uses in more detail – mensatic and epideictic. The former function relates to how models can bring people around a metaphorical table for discussion (e.g., for identifying and potentially deciding about policy trade-offs). The other, epideictic, relates to how ideas and arguments are presented and leads Peter to argue that representing real-world systems in a simulation model can force people to “engage in structured and rigorous thinking about [their problem] domain”.

John and I will be touching on these ideas about the mensatic and epideictic functions of models in our manuscript. However, beyond this discussion, and of relevance here, Peter discusses meta-models; that is, models of models. The purpose, continuing from the passage from my book chapter above, is to produce a meta-model (M) of another model (A) to better understand the relationships between model A and the real intelligent entities inside the domain that model A represents:

“As with any model, constructing the meta-model M will allow us to explore “What if?” questions, such as alternative policies regarding the release of information arising from model A to the intelligent entities inside domain X. Indeed, we could even explore the consequences of allowing the entities inside X to have access to our meta-model M.” (p.185, McBurney 2012)

Thus, the models are nested with a hope of better understanding the recursive relationship between models and their subjects. Constructing such meta-models will likely not be trivial, but we’re thinking about it. Hopefully the manuscript John and I are working on will help further these ideas, as does writing blog posts like this.
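To make the nesting concrete, here is a hypothetical Python sketch (all names and numbers invented, not McBurney’s formulation): a meta-model M wraps a model A together with agents who can see, and react to, A’s published output.

```python
# Hypothetical sketch: meta-model M wraps model A together with agents
# who can observe, and respond to, A's published results.

def model_a(risk_takers):
    """Model A: a naive forecast that risk-takers keep taking risks."""
    return {"predicted_risk_takers": risk_takers}

def meta_model_m(risk_takers, n_rounds, release_results):
    """Meta-model M: if A's results are released, some of those
    classified as risk-takers respond to the classification (Hacking's
    'interactive kinds') and change their behaviour, so A's initially
    correct classification drifts out of date."""
    for _ in range(n_rounds):
        forecast = model_a(risk_takers)
        if release_results:
            # assumption: one in ten agents who see themselves
            # classified as risk-takers becomes more risk-averse
            reformed = forecast["predicted_risk_takers"] // 10
            risk_takers -= reformed
    return risk_takers

unreleased = meta_model_m(100, 5, release_results=False)  # A stays valid
released = meta_model_m(100, 5, release_results=True)     # A self-invalidates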

So term is drawing to an end. There’s lots been going on since I last posted here and I’ll write a full update of that over the Christmas break. I’ll just highlight here quickly that the agent-based modelling book I contributed to has now been published.

Agent-Based Models of Geographical Systems is edited by Alison Heppenstall, Andrew Crooks, Linda See and Mike Batty and presents a comprehensive collection of papers on the background, theory, technical issues and applications of agent-based modelling (ABM) in geographical systems. David O’Sullivan, George Perry, John Wainwright and I put together a paper entitled ‘Agent-based models – because they’re worth it?’ that falls into the ‘Principles and Concepts of Agent-Based Modelling’ section of the book. To give an idea of what the paper is about, here’s the opening paragraph:

“In this chapter we critically examine the usefulness of agent-based models (ABMs) in geography. Such an examination is important because although ABMs offer some advantages when considered purely as faithful representations of their subject matter, agent-based approaches place much greater demands on computational resources, and on the model-builder in their requirements for explicit and well-grounded theories of the drivers of social, economic and cultural activity. Rather than assume that these features ensure that ABMs are self-evidently a good thing – an obviously superior representation in all cases – we take the contrary view, and attempt to identify the circumstances in which the additional effort that taking an agent-based approach requires can be justified. This justification is important as such models are also typically demanding of detailed data both for input parameters and evaluation and so raise other questions about their position within a broader research agenda.”

In the paper we ask:

Are modellers agent-based because they should be or because they can be?

What are agents? And what do they do?

So when do agents make a difference?

To summarise our response to this last question, we argue:

“Where agents’ preferences and (spatial) situations differ widely, and where agents’ decisions substantially alter the decision-making contexts for other agents, there is likely to be a good case for exploring the usefulness of an agent-based approach. This argument focuses attention on three model features: heterogeneity of the decision-making context of agents, the importance of interaction effects, and the overall size and organization of the system.”

Hopefully people will find this, and the rest of the book, useful! You can check out the full table of contents here.

I just updated the Philosophy of Modelling page on my website. It’s not anything too detailed but I was prompted to add something by my activities over the last few weeks. I’ve been working on both making progress with my ‘modelling narratives’ project and a paper I’ve started working on with John Wainwright exploring the epistemological roles agent-based simulation might play beyond mathematical and statistical modelling (expected to appear in the new-ish journal Dialogues in Human Geography).

Model Histories: The generative properties of agent-based modelling
Fri 2 Sept, Session 4, Skempton Building, Room 060b
James Millington (King’s College London), David O’Sullivan (University of Auckland, New Zealand), George Perry (University of Auckland, New Zealand)

Novels, Kundera has suggested, are a means to explore unrealised possibilities and potential futures, to ask questions and investigate scenarios, starting from the present state of the world as we observe it – the “trap the world has become”. In this paper, we argue that agent-based simulation models (ABMs) are much like Kundera’s view of novels, having generative properties that provide a means to explore alternative possible futures (or pasts) by allowing the user to investigate the likely results of causal mechanisms given pre-existing structures and in different conditions. Despite the great uptake in the application of ABMs, many have not taken full advantage of the representational and explanatory opportunities inherent in ABMs. Many applications have relied too much on ‘statistical portraits’ of aggregated system properties at the expense of more detailed stories about individual agent context and particular pathways from initial to final conditions (via heterogeneous agent interactions). We suggest that this generative modelling approach allows the production of narratives that can be used to i) demonstrate and illustrate the significance of the mechanisms underlying emergent patterns, ii) inspire users to reflect more deeply on modelled system properties and potential futures, and iii) provide a means to reveal the model building process and the routes to discovery that lie therein. We discuss these issues in the context of, and using examples from, the increasing number of studies using ABMs to investigate human-environment interactions in geography and the environmental sciences.

Trees, Birds and Timber: Coordinating Long-term Forest Management
Fri 2 Sept, Session 4, Skempton Building, Room 060b
James Millington (King’s College London), Megan Matonis (Colorado State University, United States), Michael Walters (Michigan State University, United States), Kimberly Hall (The Nature Conservancy, United States), Edward Laurent (American Bird Conservancy, United States), Jianguo Liu (Michigan State University, United States)

Forest structure is an important determinant of habitat use by songbirds, including species of conservation concern. In this paper, we investigate the combined long-term impacts of variable tree regeneration and timber management on stand structure, bird occupancy probabilities, and timber production in the northern hardwood forests of Michigan’s Upper Peninsula. We develop species-specific relationships between bird occupancy and forest stand structure from field data. We integrate these bird-forest structure relationships with a forest model that couples a forest-gap tree regeneration submodel developed from our field data with the US Forest Service Forest Vegetation Simulator (Ontario variant). When simulated over a century, we find that higher tree regeneration densities ensure conditions allowing larger harvests of merchantable timber while reducing the impacts of timber harvest on bird forest-stand occupancy probability. When regeneration is poor (e.g., 25% or less of trees succeed in regenerating), timber harvest prescriptions have a greater relative influence on bird species occupancy probabilities than on the volume of merchantable timber harvested. Our results imply that forest and wildlife managers need to work together to ensure tree regeneration and prevent detrimental impacts on timber output and habitat for avian species over the long-term. Where tree regeneration is currently poor (e.g., due to deer herbivory), forest and wildlife managers should pay particularly close attention to the long-term impacts of timber harvest prescriptions on bird species.

Next year’s Annual Meeting of the Association of American Geographers will be in Seattle. I was considering attending but I think it might be best to let the dust settle after moving back to the UK in January. Many others will be there, however, including James Porter, a colleague and friend from PhD times at King’s College London. On his behalf, here’s the call for papers for a session he’s organising at the meeting. Deadline is 1st October; more details at the bottom.

Call for Papers
The Politics of Expectations: Nature, Culture, and the Production of Space

Association of American Geographers, Annual Meeting, 12-16th April 2011, Seattle.

Expectations are incredibly powerful things. Whether materialized via climatic models, economic forecasts, or based on the promise of personalised medicines, expectations (and those who engineer them) play a deeply political yet often unsung role in bringing into being a particular kind of future as well as shaping a particular kind of present. Savvy actors seeking to engineer change may decide to write editorials, give press briefings, or try to normalise trust between the communities involved so as to enrol support and resources for an emerging marketplace (and consumer) they have envisioned. Such discursive as well as performative practices pre-emptively shape the social and economic context for developing technologies so that the actors involved not only develop their physical objects but also influence other people’s thinking. Rather than dismiss such efforts as exaggerated or self-serving claims, the “sociology of expectations” (cf. Brown, 2003; Hedgecoe, 2004; Law, 1994) points to the constructive, performative, and even destructive role such expectations have in today’s world where competition for funding, research impact and innovation are so intense. As many geographers researching the ‘commercialization of nature’ have noted (cf. Castree, 2003; Johnson, 2010; Lave et al., 2010; Prudham, 2005), expectations of future natures inhabit contemporary environmental management in a series of subtle and not so subtle ways for all actors.

But how are expectations created, configured, and stabilized? What, and whose, interests shape them, and in turn, whose interests do they shape? And why do some persist whilst others don’t? Such questions speak directly to the ways in which nature (and knowledge of it) is being increasingly commercialized and commodified through its interactions with science and technology. This session builds on controversies such as the climate change emails at UEA, medical trials, carbon forestry and much more to showcase how the “future” is mobilized to govern or proliferate uncertainty and justify particular mechanisms for managing environmental problems. Geographers are uniquely placed to comment on this providing theoretical depth and empirical evidence that sheds light on the commodification of nature whilst also contributing to the socio-technical analyses employed by science and technology studies scholars. We therefore invite papers addressing (though not limited to) the following questions:

Who constructs expectations and why? How / where do they get enacted (i.e. technological, sociocultural, artefacts, etc.)? And how do they get accepted, institutionalized, or perhaps resisted?

How are expectations of nature commercialized? To what extent are expectations central to processes of commercialization and does this vary depending on the specific environmental arena? Are there unnatural expectations?

Do expectations have agency? Can they be negotiated or adapted? If so, what role have geographers played in shaping past perceptions and might hope to play in the future?

What happens if a set of expectations is not successful? Why didn’t they succeed? And what lessons can we learn?

Abstracts should be sent to both James Porter (james.porter at kcl.ac.uk) and Samuel Randalls (s.randalls at ucl.ac.uk) by Friday 1st October 2010.

This week I went to a seminar presented by Dr Richard Bawden of the Systemic Development Institute, Australia. This was the first event in MSU’s “conversation about our food future”. It turned out to be much more interesting than I had hoped; Bawden is an engaging and charismatic speaker who presented a thoughtful perspective on what he termed ‘The Omnivores’ Trifecta’: Agriculture, Food and Health and the Systemic Relationships between them. He covered a hearty spread of ideas, so I’ll recap his most interesting points in bite-sized pieces:

i) Bawden suggested that Agriculture, Food and Health (A-F-H) when considered separately are not a system. But by understanding each as a discourse (i.e. as a subject for “formal discussion or debate”) they become viewed in a systemic perspective.

ii) At the intersection of these three subjects are four very important (sub-)discourses which Bawden termed the “engagement discourse subsystem”. These are: business, lay citizens, governance, and experts.

iii) Bawden proposed that it is the profound differences in episteme (worldview) between these discourse ‘subsystems’ that are at the heart of the majority of the conflicts across the A-F-H system and the environment in which it is situated.

iv) These epistemic differences are so profound as to be polemic. Bawden bemoaned this fact and highlighted that “Dialectic yields to Polemic”. He emphasised that dialectics are the only way forward to forge a world in common and that polemics prevent deliberation and debate, and kill democracy.

v) To illustrate these points Bawden used the case of Australian agriculture since the mid-20th century. He described this case as being characteristic of many messy, wicked problems and argued that reductionist science alone was insufficient to bring resolution (hence why he founded the Systemic Development Institute). During this argument he quoted Beck but questioned whether we have reached second modernity. Bawden argued that the “culture of technical control” which still prevails within current modernist society has an episteme that privileges fact over value, analysis over synthesis, individualism over communalism, teaching over learning and productionism over sustainablism.

vi) On these last two dichotomies, Bawden suggested that the question of what is to be sustained (and therefore what sustainability is) is a moral question not a technical one.

vii) He proposed that higher education is about learning differently, not learning more; the ability to look at the world and make sense of it for oneself (and then take action in response) is what characterises a good education. Awareness of the presence of different worldviews is key to this ability. Furthermore, Bawden argued that the complete learner will be prepared to enter a form of learning that the academy is currently unable to provide because it is too reductionist. This learning would require critical reflection of one’s own worldview, as Jack Mezirow has proposed.

viii) Bawden then presented the diagram that synthesises his message (see below). This diagram describes the “integrated process of the critical learning system” and shows how perceiving, understanding, planning and acting are connected within our rational experience of the world and how they are linked to the intuitive facets of learning.

Quite the feast of ideas, eh? I’m still digesting them and might be for a while. But the key message I take away from this is a post-normal one; in learning about human-environment interactions and to solve current wicked problems, inter-epistemic as well as inter-disciplinary work will be needed. Although different scientific disciplines such as ecology, biology, and chemistry have different terminology and conventions, they share a worldview – the one that favours facts over values and aims to subsume empirical observations into universal laws and theories. Other worldviews are available. Inter-epistemic human-environment study would seek to cross the boundaries between worldviews, recognize that reductionist science is only one way to understand the world and is unlikely to provide complete answers to wicked problems, and emphasise dialectics over polemics.

I have a new paper to add to my collection of favourites. Hidden in the somewhat obscure Journal of Critical Realism it touches on several issues that I often find myself thinking about and studying: Interdisciplinarity, Ecology and Scientific Theory.

Karl Høyer and Petter Naess also have plenty to say about sustainability, planning and decision-making and, although they use the case of sustainable urban development, much of what they discuss is relevant to broader issues in the study of coupled human and natural systems. Their perspective resonates with my own.

For example, they outline some of the differences between studying open and closed systems (interestingly with reference to some Nordic writers I have not previously encountered);

… The principle of repetitiveness is crucial in these kinds of [reductionist] science [e.g. atomic physics, chemistry] and their related technologies. But such repetitiveness only takes place in closed systems manipulated by humans, as in laboratories. We will never find it in nature, as strongly emphasised by both Kvaløy and Hägerstrand within the Nordic school. In nature there are always open, complex systems, continuously changing with time. This understanding is in line with key tenets of critical realism. Many of our most serious ecological problems can be explained this way: technologies, their products and substances, developed and tested in closed systems under artificial conditions that generate the illusion of generalised repetitiveness, are released in the real nature of open systems and non-existing repetitiveness. We are always taken by surprise when we experience new, unexpected ecological effects. But this ought not to be surprising at all; under these conditions such effects will necessarily turn up all the time.

…

At the same time, developing strategies for a sustainable future relies heavily on the possibility of predicting the consequences of alternative solutions with at least some degree of precision. Arguably, a number of socio-technical systems, such as the spatial structures of cities and their relationships with social life and human activities, make up ‘pseudo-closed’ systems where the scope for prediction of outcomes of a proposed intervention is clearly lower than in the closed systems of the experiments of the natural sciences, but nevertheless higher than in entirely open systems. Anticipation of consequences, which is indispensable in planning, is therefore possible and recommendable, although fallible.

The main point of their paper, however, is the important role critical realism [see also] might play as a platform for interdisciplinary research. Although Høyer and Naess do highlight some of the more political reasons for scientific and academic disciplinarity, their main points are philosophical;

…the barriers to interdisciplinary integration may also result from metatheoretical positions explicitly excluding certain types of knowledge and methods necessary for a multidimensional analysis of sustainability policies, or even rejecting the existence of some types of impacts and/or the entities causing these impacts.

According to a positivist view, social science research should emulate research within the natural sciences as much as possible. Knowledge based on research where the observations do not lend themselves to mathematical measurement and analysis will then typically be considered less valid and perhaps be dismissed as merely subjective opinions. Needless to say, such a view hardly encourages natural scientists to integrate knowledge based on qualitative social research or from the humanities. Researchers adhering to an empiricist/naive realist metatheory will also tend to dismiss claims of causality in cases where the causal powers do not manifest themselves in strong and regular patterns of events – although such strong regularities are rare in social life.

On the other hand, a strong social constructionist position implies a collapsing of the existence of social objects to the participating agents’ conception or understanding of these objects. …strong social constructionism would typically limit the scope to the cultural processes through which certain phenomena come to be perceived as environmental problems, and neglecting the underlying structural mechanisms creating these phenomena as well as their impacts on the physical environment. At best, strong social constructionism is ambivalent as to whether we can know anything at all about reality beyond the discourses. Such ‘empty realism’, typical of dominant strands of postmodern thought, implies that truth is being completely relativised to discourses on the surface of reality, with the result that one must a priori give up saying anything about what exists outside these discourses. At worst, strong social constructionism may pave the way for the purely idealist view that there is no such reality.

At opposite ends of the positivist-relativist spectrum, neither of these perspectives seems the most useful for interdisciplinary research. Something that sits between these two extremes – critical realism – might be more useful [I can’t do this next section justice in an abridged version – and this is the main point of the article – so here it is in its entirety];

The above-mentioned examples of shortcomings of reductionist metatheories do not imply that research based on these paradigms is necessarily without value. However, reductionist paradigms tend to function as straitjackets preventing researchers from taking into consideration phenomena and factors of influence not compatible with or ignored in their metatheory. In practice, researchers have often deviated from the limitations prescribed by their espoused metatheoretical positions. Usually, such deviations have tended to improve research rather than the opposite.

However, for interdisciplinary research, there is an obvious need for a more inclusive metatheoretical platform. According to Bhaskar and Danermark, critical realism provides such a platform, as it is ontologically characterised by a double inclusiveness greater than that of competing metatheories: it is maximally inclusive in terms of allowing causal powers at different levels of reality to be empirically investigated; and it is maximally inclusive in terms of accommodating insights of other meta-theoretical positions while avoiding their drawbacks.

Arguably, many of the ecologists and ecophilosophers referred to earlier in this paper have implicitly based their work on the same basic assumptions as critical realism. Some critical realist thinkers have also addressed ecological and environmental problems explicitly. Notably, Ted Benton and Peter Dickens have demonstrated the need for an epistemology that recognises social mediation of knowledge but also the social and material dimensions of environmental problems, and how the absence of an interdisciplinary perspective hinders essential understanding of nature/society relationships.

According to critical realism, concrete things or events in open systems must normally be explained ‘in terms of a multiplicity of mechanisms, potentially of radically different kinds (and potentially demarcating the site of distinct disciplines) corresponding to different levels or aspects of reality’. As can be seen from the above, the objects involved in explanations of the (un)sustainability of urban development belong partially to the natural sciences, partially to the social sciences, and are partially of a normative or ethical character. They also belong to different geographical or organisational scales. Thus, similar to (and arguably to an even higher extent than) what Bhaskar and Danermark state about disability research, events and processes influencing the sustainability of urban development must be understood in terms of physical, biological, socioeconomic, cultural and normative kinds of mechanisms, types of contexts and characteristic effects.

According to Bhaskar, social life must be seen in the depiction of human nature as ‘four-planar social being’, which implies that every social event must be understood in terms of four dialectically interdependent planes: (a) material transactions with nature, (b) social interaction between agents, (c) social structure proper, and (d) the stratification of embodied personalities of agents. All these categories of impacts should be addressed in research on sustainable urban development. Impacts along the first dimension, category (a), typically include consequences of urban development for the physical environment. Consequences in terms of changing location of activities and changing travelling patterns are examples of impacts within category (b). But this category also includes the social interaction between agents leading to changes in, among others, the spatial and social structures of cities. Relevant mechanisms at the level of social structure proper (category [c]) might include, for example, impacts of housing market conditions on residential development projects and consequences of residential development projects for the overall urban structure. The stratified personalities of agents (category [d]) include both influences of agents on society and the physical environment and influences of society and the physical environment on the agents. The latter sub-category includes physical impacts of urban development, such as unwholesome noise and air pollution, but also impacts of the way urban planning and decision-making processes are organised, for example, in terms of effects on people’s self-esteem, values, opportunities for personal growth and their motivation for participating in democratic processes. The influence of discourses on the population’s beliefs about the changes necessary to bring about sustainable development and the conditions for implementing such changes also belongs to this sub-category.
The sub-category of influences of agents on society and the physical environment includes the exercise of power by individual and corporate agents, their participation in political debates, their contribution to knowledge, and their practices in terms of, for example, type and location of residence, mobility, lifestyles more generally, and so on.

Regarding issues of urban sustainability, the categories (a)–(d) are highly interrelated. If this is the case, we are facing what Bhaskar and Danermark characterise as a ‘laminated’ system, in which case explanations involving mechanisms at several or all of these levels could be termed ‘laminated explanations’. In such situations, monodisciplinary empirical studies taking into consideration only those factors of influence ‘belonging’ to the researcher’s own discipline run a serious risk of misinterpreting these influences. Examples of such misinterpretations are analyses where increasing car travel in cities is explained purely in terms of prevailing attitudes and lifestyles, addressing neither political-economic structures contributing to consumerism and car-oriented attitudes, nor spatial-structural patterns creating increased needs for individual motorised travel.

Moreover, the different strata of reality and their related mechanisms (that is, physical, biological, socio-economic, cultural and normative kinds of mechanisms) involved in urban development cannot be understood only in terms of categories (a)–(d) above. They are also situated in macroscopic (or overlying) and less macroscopic (or underlying) kinds of structures or mechanisms. For research into sustainable urban development issues, such scale-awareness is crucial. Much of the disagreement between proponents of the ‘green’ and the ‘compact’ models of environmentally sustainable urban development can probably be attributed to their focus on problems and challenges at different geographical scales: whereas the ‘compact city’ model has focused in particular on the impacts of urban development on the surrounding environment (ranging from the nearest countryside to the global level), proponents of the ‘green city’ model have mainly been concerned about the environment within the city itself. A truly environmentally sustainable urban development would require an integration of elements both from the former ‘city within the ecology’ and the latter ‘ecology within the city’ approaches. Similarly, analyses of social aspects of sustainable development need to include both local and global effects, and combine an understanding of practices within particular groups with an analysis of how different measures and traits of development affect the distribution of benefits and burdens across groups.

Acknowledging that reality consists of different strata, that multiple causes are usually influencing events and situations in open systems, and that a pluralism of research methods is recommended as long as they take the ontological status of the research object into due consideration, critical realism appears to be particularly well suited as a metatheoretical platform for interdisciplinary research. This applies not least to research into urban sustainability issues where, as has been illustrated above, other metatheoretical positions tend to limit the scope of analysis in such a way that sub-optimal policies within a particular aspect of sustainability are encouraged at the cost of policies addressing the challenges of sustainable urban development in a comprehensive way.

In conclusion, critical realism can play a very important role as an underlabourer of interdisciplinarity, with its maximal inclusiveness both in terms of allowing causal powers at different levels of reality to be empirically investigated and in terms of accommodating insights of other meta-theoretical positions while avoiding their drawbacks.

I’m going to have to spend some time thinking about this, but there seems to be plenty to get one’s teeth into here with regard to the study of coupled human and natural systems and the use of agent-based modelling approaches. For example, agent-based modelling seems to offer a means to represent Bhaskar’s four planes, but there are plenty of questions about how to do this appropriately. I also need to think more carefully about how these four planes are manifested in the systems I study. Generally however, it seems that critical realism offers a useful foundation from which to build interdisciplinary studies of the interaction of humans and their environment for the exploration of potential pathways to ensure sustainable landscapes.
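As a first, purely illustrative sketch of what representing the four planes in an agent-based model might look like, the toy Python snippet below makes each plane an explicit component. All of the names, parameters and mechanisms here are my own assumptions for the sake of illustration, not a claim about how such a model should actually be built:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent carrying three of Bhaskar's four planes directly."""
    id: int
    resources: float = 10.0  # plane (a): material transactions with nature
    neighbours: list = field(default_factory=list)  # plane (b): social interaction
    norm: float = 1.0  # plane (d): embodied personality (here, a consumption norm)

    def step(self, structure):
        # (a) material transaction, mediated by (c) social structure proper
        consumption = self.norm * structure["tax_rate"]
        self.resources -= consumption
        # (b) social interaction: the agent's norm drifts towards its neighbours'
        if self.neighbours:
            mean_norm = sum(n.norm for n in self.neighbours) / len(self.neighbours)
            self.norm += 0.1 * (mean_norm - self.norm)
        return consumption

def run(n_agents=5, steps=10):
    structure = {"tax_rate": 0.5}  # plane (c): social structure proper
    agents = [Agent(i, norm=1.0 + 0.1 * i) for i in range(n_agents)]
    for a in agents:
        a.neighbours = [b for b in agents if b is not a]
    total_consumption = 0.0
    for _ in range(steps):
        total_consumption += sum(a.step(structure) for a in agents)
    return agents, total_consumption
```

Even in this toy form, the interdependence of the planes is concrete: the structural parameter mediates material transactions, while social interaction reshapes the embodied norms that drive those transactions.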

Previously, I mentioned a thread on SIMSOC initiated by Scott Moss. He asked ‘Does anyone know of a correct, real-time, [agent] model-based, policy-impact forecast?’ Following on from the responses to that question, earlier this week he started a new thread entitled ‘What’s the Point?’:

“We already know that economic recessions and recoveries have probably never been forecast correctly — at least no counter-examples have been offered. Similarly, no financial market crashes or recoveries or significant shifts in market shares have ever, as far as we know, been forecast correctly in real time.

I believe that social simulation modelling is useful for reasons I have been exploring in publications for a number of years. But I also recognise that my beliefs are not widely held.

So I would be interested to know why other modellers think that modelling is useful or, if not useful, why they do it.”

After reading others’ responses I decided to reply with my own view:

“For me prediction of the future is only one facet of modelling (whether agent-based or any other kind) and not necessarily the primary use, especially with regard to policy modelling. This view stems partly from the philosophical difficulties outlined by Oreskes et al. (1994), amongst others. I agree with Mike that the field is still in the early stages of development, but I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world’. As Pablo suggested, if we are to predict the future the inherent uncertainties will be best highlighted and accounted for by ensuring predictions are tied to a probability.”

There was a brief response to mine, and then another, more assertive, response that (I think) highlights a common confusion between the different uses of prediction in modelling:

“If models of economic policy are fundamentally unable to at some point predict the effects of policy — that is, to in some measure predict the future — then, to be blunt, what good are they? If they are unable to be predictive then they have no empirical, practical, or theoretical value. What’s left? I ask that in all seriousness.

Referring to Epstein’s article, if a model is not sufficiently grounded to show predictive power (a necessary condition of scientific results), then how can it be said to have any explanatory power? Without prediction as a stringent filter, any amount of explanation from a model becomes equivalent to a “just so” story, at worst giving old suppositions the unearned weight of observation, and at best hitting unknowably close to the mark by accident. To put that differently, if I have a model that provides a neat and tidy explanation of some social phenomena, and yet that model does not successfully replicate (and thus predict) real-world results to any degree, then we have no way of knowing if it is more accurate as an explanation than “the stars made it happen” or any other pseudo-scientific explanation. Explanations abound; we have never been short of them. Those that can be cross-checked in a predictive fashion against hard reality are those that have enduring value.

…

But the difficulty of creating even probabilistically predictive models, and the relative infancy of our knowledge of models and how they correspond to real-world phenomena, should not lead us into denying the need for prediction, nor into self-justification in the face of these difficulties. Rather than a scholarly “the dog ate my homework,” let’s acknowledge where we are, and maintain our standards of what modeling needs to do to be effective and valuable in any practical or theoretical way. Lowering the bar (we can “train practitioners” and “discipline policy dialogue” even if we have no way of showing that any one model is better than another) does not help the cause of agent-based modeling in the long run.”

I felt this required a response – it seemed to me that difference between logical prediction and temporal prediction was being missed:

“In my earlier post I wrote: “I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world'”. I was careful about how I worded this [more careful than ensuring correct formatting of the post it seems – my original post is below in a more human-readable format] and maybe some clarification in the light of Mike’s comments would be useful. Here goes…

Precisely predicting the future state of an ‘open’ system at a particular instance in time does not imply we have explained or understood it (due to the philosophical issues of affirming the consequent, equifinality, underdetermination, etc.). To be really useful for explanation and to have enduring value, model predictions of any system need to be cross-checked against hard reality *many times*, and in the case of societies probably also in many places (and should ideally be produced by models that are consistent with other theories). Producing multiple accurate predictions will be particularly tricky for things like the global economy, for which we only have one example (but will of course be easier where experimental replication is more logistically feasible).

My point is two-fold: 1) a single, precise prediction of a future does not really mean much with regard to our understanding of an open system; 2) multiple precise predictions are more useful but will be more difficult to come by.

This doesn’t necessarily mean that we will never be able to consistently predict the future of open systems (in Scott’s sense of correctly forecasting the timing and direction of change of specified indicators). I just think it’s a ways off yet, that there will always be uncertainty, and that we need to deal with this uncertainty explicitly via probabilistic output from model ensembles and other methods.

Rather than lowering standards, a heuristic use of models demands we think more closely about *how* we model and what information we provide to policy makers (isn’t that the point of modelling policy outcomes in the end?).
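To make the idea of probabilistic output from model ensembles concrete, here is a minimal sketch. The ‘model’ is a deliberately trivial stochastic growth process of my own invention; the point is only the shape of the workflow (many runs, then a probability statement rather than a single point forecast):

```python
import random

def toy_model(initial, growth_mean, growth_sd, steps, rng):
    """A deliberately simple stochastic 'model': noisy multiplicative growth."""
    state = initial
    for _ in range(steps):
        state *= 1.0 + rng.gauss(growth_mean, growth_sd)
    return state

def ensemble_forecast(n_runs=1000, threshold=120.0, seed=42):
    """Run the model many times and report P(final state > threshold)
    instead of a single predicted trajectory."""
    rng = random.Random(seed)
    finals = [toy_model(100.0, 0.02, 0.05, steps=10, rng=rng)
              for _ in range(n_runs)]
    prob = sum(f > threshold for f in finals) / n_runs
    return prob, finals
```

The advice offered to a policy-maker then takes the form ‘under these assumptions, there is a probability p that the indicator exceeds the threshold’, with the full distribution of `finals` available to show how uncertain that statement is.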

Let’s be clear, the heuristic use of models does not allow us to ignore the real world – it still requires us to compare our model output with empirical data. And as Mike rightly pointed out, many of Epstein’s reasons to model – other than to predict – require such comparisons. However, the scientific modelling process of iteratively comparing model output with empirical data and then updating our models is a heuristic one – it does not require that precise prediction at a specific point in the future is the goal before all others.

Lowering any level of standards will not help modelling – but I would argue that understanding and acknowledging the limits of using modelling in different situations in the short-term will actually help to improve standards in the long run. To develop this understanding we need to push models and modelling to their limits to find out what works, what we can do and what we can’t – that includes iteratively testing the temporal predictions of models. Iteratively testing models, understanding the philosophical issues of attempting to model social systems, exploring the use of models and modelling qualitatively (as a discussant, and a communication tool, etc.) should help modellers improve the information, the recommendations, and the working relationships they have with policy-makers.

In the long run I’d argue that both modellers and policy-makers will benefit from a pragmatic and pluralistic approach to modelling – one that acknowledges there are multiple approaches and uses of models and modelling to address societal (and environmental) questions and problems, and that [possibly self evidently] in different situations different approaches will be warranted. Predicting the future should not be the only goal of modelling social (or environmental) systems and hopefully this thread will continue to throw up alternative ideas for how we can use models and the process of modelling.”

Note that I didn’t explicitly point out the difference between the two different uses of prediction (that Oreskes and others have previously highlighted). It took a couple more posts before Dan Olner explicitly described the difference:

“We need some better words to describe model purpose. I would distinguish two –

a. Forecasting (not prediction) – As Mike Sellers notes, future prediction is usually “inherently probabilistic” – we need to know whether our models can do any better than chance, and how that success tails off as time passes. Often when we talk about “prediction” this is what we mean – prediction of a more-or-less uncertain future. I can’t think of a better word than forecasting.

b. Ontological prediction (OK, that’s two words!) – a term from Gregor Betz, Prediction Or Prophecy (2006). He gives the example of the prediction of Neptune’s existence from Newton’s laws – Uranus’ orbit implied that another body must exist. Betz’s point is that an ontological prediction is “timeless” – the phenomenon was always there. Einstein’s prediction about light bending near the sun is another: something that always happened, we just didn’t think to look for it. (And doubtless Eddington wouldn’t have considered *how* to look, without the theory.)”

In this sense forecasting (my temporal prediction) is distinctly temporal (or spatial) and demands some statement about when (or where) an event or phenomenon will occur. In contrast, ontological prediction (my logical prediction) is independent of time and/or space and is often used in closed-system experiments searching for ‘universal’ laws. I wrote more about this in a series of blog posts a while back on the validation of models of open systems.

This discussion is ongoing on SIMSOC, and Scott Moss has recently posted again, suggesting a summary of the early responses:

“I think a perhaps extreme summary of the common element in the responses to my initial question (what is the point?, 9/6/09) is this:

**The point of modelling is to achieve precision as distinct from accuracy.**

That is, a model is a more or less complicated formal function relating a set of inputs clearly to a set of outputs. The formal inputs and outputs should relate unambiguously to the semantics of policy discussions or descriptions of observed social states and/or processes.

This precision has a number of virtues including the reasons for modelling listed by Josh Epstein. The reasons offered by Epstein and expressed separately by Lynne Hamill in her response to my question include the bounding and informing of policy discussions.

I find it interesting that most of my respondents do not consider accuracy to be an issue (though several believe that some empirically justified frequency or even probability distributions can be produced by models). And Epstein explicitly avoids using the term validation in the sense of confirmation that a model in some sense accurately describes its target phenomena.

So the upshot of all this is that models provide a kind of socially relevant precision. I think it is implicit in all of the responses (and the Epstein note) that, because of the precision, other people should care about the implications of our respective models. This leads to my follow-on questions:

Is precision a good enough reason for anyone to take seriously anyone else’s model? If it is not a good enough reason, then what is?”

And so arises the debate about the importance of accuracy over precision (though the original ‘What’s the Point?’ thread continues also). In hindsight, I think it may have been more appropriate for me to use the word ‘accurate’ rather than ‘precise’ in my postings. All this debate may seem like mere semantics and navel-gazing to many people, but as I argued in my second post, understanding the underlying philosophical basis of modelling and representing reality (however we might measure or perceive it) gives us a better chance of improving models and modelling in the long run…
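In the measurement sense of the two words (related to, though not identical with, Scott’s sense of formal precision), the accuracy/precision distinction can be shown in a few lines of code. The two ‘ensembles’ below are made-up numbers purely for illustration: one model is precise but inaccurate, the other accurate but imprecise:

```python
import statistics

observed = 50.0  # the 'real-world' value both hypothetical models try to reproduce

# Made-up ensemble outputs for two hypothetical models
precise_but_inaccurate = [70.1, 70.2, 69.9, 70.0, 70.3]  # tight spread, far from 50
accurate_but_imprecise = [30.0, 65.0, 50.0, 40.0, 62.0]  # wide spread, centred near 50

def accuracy(outputs, target):
    """Distance of the ensemble mean from the observed value (lower is better)."""
    return abs(statistics.mean(outputs) - target)

def precision(outputs):
    """Spread of the ensemble (lower means more precise)."""
    return statistics.stdev(outputs)
```

The first model would look impressively consistent run after run while being systematically wrong; the second is noisy but unbiased. Which failure matters more depends on whether the model is being used for forecasting or for explanation, which is exactly why the semantics are worth arguing over.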