
Supervenience is the view that the properties of some composite entity B are wholly fixed by the properties and relations of the items A of which it is composed (link, link). The transparency of glass supervenes upon the properties of the atoms of silicon and oxygen of which it is composed and their arrangement.

Can the same be said of a business firm like Xerox when we consider its constituents to be its employees, stakeholders, and other influential actors and their relations and actions? (Call that total field of factors S.) Or is it possible that exactly these actors at exactly the same time could have manifested a corporation with different characteristics?

Let’s say the organizational properties we are interested in include internal organizational structure, innovativeness, market adaptability, and level of internal trust among employees. And S consists of the specific individuals and their properties and relations that make up the corporation at a given time. Could this same S have manifested with different properties for Xerox?

One thing is clear. If a highly similar group of individuals had been involved in the creation and development of Xerox, it is entirely possible that the organization would have been substantially different today. We could expect that contingent events and a high level of path dependency would have led to substantial differences in organization, functioning, and internal structure. So the company does not supervene upon a generic group of actors defined in terms of a certain set of beliefs, goals, and modes of decision making over the history of its founding and development. I have sometimes thought this path dependency is itself enough to refute supervenience.

But the claim of supervenience is not a temporal or diachronic claim, but instead a synchronic claim: the current features of structure, causal powers, functioning, etc., of the higher-level entity today are thought to be entirely fixed by the supervenience base (in this case, the particular individuals and their relations and actions). Putting the idea in terms of possible-world theory, there is no possible world in which exactly similar individuals in exactly similar states of relationship and action would underlie a business firm Xerox* which had properties different from the current Xerox firm.
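The synchronic claim has a standard modal formulation (the rendering below is mine, following the usual definition of supervenience in the literature):

```latex
% B-properties supervene on A-properties just in case no two possible
% worlds agree in all A-respects while differing in some B-respect:
\[
\forall w \,\forall w' \;\bigl[\, A(w) = A(w') \;\rightarrow\; B(w) = B(w') \,\bigr]
\]
% Applied to the example: if the supervenience base S (the individuals,
% their relations, and their actions) is exactly alike in w and w',
% then Xerox and Xerox* must have exactly the same organizational
% properties in w and w'.
```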

One way in which this counterfactual might be true is if a property P of the corporation depended on the states of the agents plus something else — say, the conductivity of copper in its pure state. In the real world W copper is highly conductive, while in W* copper is non-conductive. And in W*, let’s suppose, Xerox has property P* rather than P. On this scenario Xerox does not supervene upon the states of the actors, since these states are identical in W and W*. This is because dependence on the conductivity of copper makes a difference that is not reflected in a difference in the states of the actors.

But this is a pretty hypothetical case. We would only be justified in thinking that Xerox does not supervene on S if we had a credible candidate for another property that would make a difference, and I’m hard pressed to identify one.

There is another possible line of response for the hardcore supervenience advocate in this case. I’ve assumed the conductivity of copper makes a difference to the corporation without making a difference for the actors. But I suppose it might be maintained that this is impossible: only the states of the actors affect the corporation, since they constitute the corporation; so the scenario I describe is impossible.

The upshot seems to be this: there is no way of resolving the question at the level of pure philosophy. The best we can do is to do concrete empirical work on the actual causal and organizational processes through which the properties of the whole are constituted through the actions and thoughts of the individuals who make it up.

But here is a deeper concern. What makes supervenience minimally plausible in the case of social entities is the insistence on synchronic dependence. But generally speaking, we are always interested in the diachronic behavior and evolution of a social entity. And here the idea of path dependence is more credible than the idea of moment-to-moment dependency on the “supervenience base”. We might say that the property of “innovativeness” displayed by the Xerox Corporation at some periods in its history supervenes moment-to-moment on the actions and thoughts of its constituent individuals; but we might also say that this fact does not explain the higher-level property of innovativeness. Instead, some set of events in the past set the corporation on a path that favored innovation; this corporate culture or climate influenced the selection and behavior of the individuals who make it up; and the day-to-day behavior reflects both the path-dependent history of its higher-level properties and the current configuration of its parts.

(Thanks, Raphael van Riel, for your warm welcome to the Institute of Philosophy at the University of Duisburg-Essen, and for the many stimulating conversations we had on the topics of supervenience, generativity, and functionalism.)


I’ve argued for the idea that social phenomena are generated by the actions, thoughts, and mental frameworks of myriad actors (link). This expresses the idea of ontological individualism. But I also believe that social arrangements — structures, ideologies, institutions — have genuine effects on the actions of individual actors and populations of actors and on intermediate-level social structures. There is real downward and lateral causation in the social world. Are these two views compatible?

I believe they are compatible.

The negative view holds that what appears to be downward causation is really just the workings of the lower-level components through their aggregation dynamics — the lower struts of Coleman’s boat (link). So when we say “the ideology of nationalism causes the rise of ultraconservative political leaders”, this is just a shorthand for “many voters share the values of nationalism and elect candidates who propose radical solutions to issues like immigration.” This seems to be the view of analytical-sociology purists.

But consider the alternative view — that higher level entities sometimes come to possess stable causal powers that influence the behavior and even the constitution of the entities of which they are composed. This seems like an implausible idea in the natural sciences — it is hard to imagine a world in which electrons have different physical properties as an effect of the lattice arrangement of atoms in a metal. But human actors are different from electrons and atoms, in that their behavior and constitution are in fact plastic to an important degree. In one social environment actors are disposed to be highly attentive to costs and benefits; in another social environment they are more amenable to conformance to locally expressed norms. And we can say quite a bit about the mechanisms of social psychology through which the cognitive and normative frameworks of actors are influenced by features of their social environments. This has an important implication: features of the higher-level social reality can change the dispositions and workings of the lower-level actors. And these changes may in turn lead to the emergence of new higher-level factors (new institutions, new normative systems, new social practices of solidarity, …). So enduring social arrangements can cause changes in the dynamic properties of the actors who live within them.
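The feedback loop described here can be sketched in a toy simulation (entirely my own construction, with all parameter values invented for illustration, not a serious model): each agent's disposition to conform drifts toward the prevailing conformance rate, so a macro-level fact continuously remakes the micro-level agents.

```python
def run(n=100, rounds=50, adjust=0.1, drift=0.01):
    # Heterogeneous initial dispositions to conform to local norms, in [0, 1].
    dispositions = [i / n for i in range(n)]
    for _ in range(rounds):
        # The macro-level social fact: the prevailing rate of conformance.
        conformance_rate = sum(dispositions) / n
        # Downward causation, crudely: each agent's disposition is pulled
        # toward the observed social environment, plus a small
        # norm-enforcement premium.
        dispositions = [
            min(1.0, d + adjust * (conformance_rate - d) + drift)
            for d in dispositions
        ]
    return dispositions

final = run()
spread = max(final) - min(final)   # initial spread was ~1.0; it collapses
```

On this toy dynamic the initially diverse agents converge toward a shared, elevated disposition: the aggregate pattern has reshaped the parts that compose it.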

Could we even say, more radically and counter-intuitively, that a normative structure like extremist populism “generates” behavior at the individual level? So rather than holding that individual actions generate higher-level structures, might we hold that higher-level normative structures generate patterns of behavior? For example, we might say that the normative strictures of patriarchy generate patterns of domination and deference among men and women at the individual level; or the normative strictures of Jim-Crow race relations generate individual-level patterns of subordination and domination among white and black individuals. There is a sense in which this statement about the direction of generation is obviously true; broadly shared knowledge frameworks or normative commitments “generate” typical forms of behavior in stylized circumstances of choice.

Does this way of thinking about the process of “generation” suggest that we need to rethink the directionality implied by the micro-macro distinction? Might we say that normative systems and social structures are as fundamental as patterns of individual behavior?

Consider the social reality depicted in the photograph above. Here we see the coordinated action of a number of soldiers climbing out of a trench in World War I to cross the killing field of no man’s land. The dozen or so soldiers depicted here are part of a vast army at war (3.8 million by 1918), deployed over a front extending hundreds of miles. The majority of the soldiers depicted here are about to receive grievous or mortal wounds. And yet they go over the trench. What can we say about the cause of this collective action at a specific moment in time? First, an order was conveyed through a communications system extending from commander to sergeant to enlisted man: “attack at 7:00 am”. Second, the industrial wealth of Great Britain gave the state the ability to equip and field a vast infantry army. Third, a system of international competition broke down into violent confrontation and war, leading numerous participant nations to organize and fund armies at war to defeat their enemies. Fourth, the morale of the troops was maintained at a sufficiently high level to avoid mass desertion and refusal to fight. Fifth, an infantry training regime existed which gave ordinary farmhands, workers, accountants, and lords the habits and skills of infantry soldiers. All of these factors are part of the causal background of this simple episode in World War I; and most of these factors exist at a meso- or macro-level of social organization. Clearly this particular group of social actors was influenced by higher-level social factors. But equally clearly, the mechanisms through which these higher-level social factors work are straightforward to identify through reference to systems of individual actors.

Think for a minute about materials science. The hardness of titanium causes the nail to scratch the glass. It is true that material properties like hardness depend upon their microstructures. Nonetheless we are perfectly comfortable in attributing real causal powers to titanium at the level of a macro-material. And this attribution is not merely a way of summarizing a long story about the micro-structure of metallic titanium.

I’ve generally tried to think about these kinds of causal stories in terms of the idea of microfoundations. The hardness of titanium derives from its microfoundations at the level of atomic and subatomic causation. And the causal powers of patriarchy derive from the fact that the normative principles of patriarchy are embedded in the minds and behavior of many individuals, who become exemplars, enforcers, and encouragers of compliant behavior. The processes through which individuals acquire normative principles and the processes through which they behaviorally reflect these principles constitute the microfoundations of the meso- and macro-level power of patriarchy.

So the question of whether there is downward causation seems almost too easy. Of course there is downward causation in the social world. Individuals are influenced in their choices and behavior by structural and normative factors beyond their control. And more fundamentally, individuals are changed in their fundamental dispositions to behavior through their immersion in social arrangements.


The worrisome likelihood that Russians and other malevolent actors are tinkering with public opinion in Western Europe and the United States through social media creates various kinds of anxiety. Are our democratic values so fragile that a few thousand Facebook or Twitter memes could put us on a different plane about important questions like anti-Muslim bigotry, racism, intolerance, or fanaticism about guns? Can a butterfly in Minsk create a thunderstorm of racism in Cincinnati? Have white supremacy and British ultra-nationalism gone viral?

There is an interesting analogy here with the weather. The weather next Wednesday is the net consequence of a number of processes and variables, none of which are enormously difficult to analyze. But in their complex interactions they create outcomes that are all but impossible to forecast over a period of more than three days. And this suggests the interesting idea that perhaps public opinion is itself the result of complex and chaotic processes that give rise to striking forms of non-linear change over time.
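The weather analogy can be made concrete with the most familiar textbook example of deterministic chaos, the logistic map (my illustration, not drawn from the post's sources): two trajectories that start a millionth apart become completely uncorrelated within a few dozen steps.

```python
def trajectory(x0, r=4.0, steps=50):
    # Logistic map x_{t+1} = r * x_t * (1 - x_t); fully chaotic at r = 4.0.
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000)
b = trajectory(0.400001)   # initial condition perturbed by one part in a million

early_gap = abs(a[5] - b[5])                                # still tiny
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # of order 1
```

A deterministic rule, three arithmetic operations per step, and yet forecasting beyond a short horizon is hopeless — which is the sense in which opinion dynamics might be "chaotic" without being mysterious.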

Can we do a better job of understanding the dynamics of public opinion by making use of the tools of complexity theory? Here is a summary description of complex systems provided by John Holland in Complexity: A Very Short Introduction:

Complexity, once an ordinary noun describing objects with many interconnected parts, now designates a scientific field with many branches. A tropical rainforest provides a prime example of a complex system. The rainforest contains an almost endless variety of species—one can walk a hundred paces without seeing the same species of tree twice, and a single tree may host over a thousand distinct species of insects. The interactions between these species range from extreme generalists (‘army’ ants will consume most anything living in their path) to extreme specialists (Darwin’s ‘comet orchid’, with a foot-long nectar tube, can only be pollinated by a particular moth with a foot-long proboscis—neither would survive without the other). Adaptation in rainforests is an ongoing, relatively rapid process, continually yielding new interactions and new species (orchids, closely studied by Darwin, are the world’s most rapidly evolving plant form). This lush, persistent variety is almost paradoxical because tropical rainforests develop on the poorest of soils—the rains quickly leach all nutrients into the nearest creek. What makes such variety possible? (1)

Let’s consider briefly how public opinion might fit into the framework of complexity theory. On the positive side, public opinion has some of the dynamic characteristics of systems that are often treated as being complex: non-linearity, inflection points, critical mass. Like a disease, a feature of public opinion can suddenly “go viral” — reproduce many times more rapidly than in previous periods. And the collective phenomenon of public opinion has a feature of “self-causation” that finds parallels in other kinds of systems — a sudden increase in the currency of a certain attitude or belief can itself accelerate the proliferation of the belief more broadly.

On the negative side, the causal inputs to public opinion dynamics do not appear to be particularly “complex” — word-of-mouth, traditional media, local influencers, and the new factor of social media networks like Twitter, Weibo, or Facebook. We might conceptualize a given individual’s opinion formation as the net result of information and influence received through these different kinds of inputs, along with some kind of internal cognitive processing. And the population’s “opinions” are no more than the sum of the opinions of the various individuals.

Most fundamentally — what are the “system” characteristics that are relevant to the dynamics of public opinion in a modern society? How does public opinion derive from a system of individuals and communication pathways?

This isn’t a particularly esoteric question. We can define public opinion as the statistical aggregate of the distribution of beliefs and attitudes throughout a population — recognizing that there is a distribution of opinion around every topic. For example, at present public opinion in the United States on the topic of President Trump is fairly negative, with a record-low 35% approval rating. And the Pew Research Center finds that US public opinion sees racism as an increasingly important problem (link).

Complexity theorists like Scott Page and John Holland focus much attention on a particular subset of complex systems, complex adaptive systems (CAS). These are systems in which the agents are themselves subject to change. And significantly, public opinion in a population of human agents is precisely such a system. The agents change their opinions and attitudes as a result of interaction with other agents through the kinds of mechanisms mentioned here. If we were to model public opinion as a “pandemonium” process, then the possibility of abrupt non-linearities in a population becomes apparent. Assume a belief-transmission process in which individuals transmit beliefs to others with a volume proportional to their own adherence to the belief and the volume and number of other agents from whom they have heard the belief, and individuals adopt a belief in proportion to the number and volume of voices they hear that are espousing the belief. Contagion is no longer a linear relationship (exposure to an infected individual results in X probability of infection), but rather a non-linear process in which the previous cycle’s increase leads to amplified infection rate in the next round.
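A minimal version of this belief-transmission process can be written down directly (my own toy implementation of the idea sketched above, with all parameter values invented for illustration): the adoption pressure on each agent is proportional to the aggregate "volume" of voices espousing the belief, so each round's increase amplifies the infection rate in the next.

```python
def simulate(n_agents=500, n_seeds=5, rounds=40, gain=3.0, step=0.25):
    # A few "seed" believers at full intensity; everyone else at zero.
    intensity = [1.0 if i < n_seeds else 0.0 for i in range(n_agents)]
    history = []
    for _ in range(rounds):
        volume = sum(intensity) / n_agents      # aggregate loudness of the belief
        pressure = min(1.0, gain * volume)      # adoption pressure grows with adoption
        intensity = [min(1.0, x + step * pressure) for x in intensity]
        history.append(sum(intensity) / n_agents)
    return history

h = simulate()
# Early rounds crawl; once volume passes a threshold, the belief takes off.
```

The round-over-round increase itself grows for a while — the signature S-curve of a self-amplifying process, quite unlike the straight line produced by a fixed per-contact infection probability.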

Here is a good review article of the idea of a complex system and complexity science by Ladyman, Lambert and Wiesner (link, link). Here is a careful study of the diffusion of “fake news” by bots on Twitter (link, link). (The graphic at the top is taken from this article.) And here is a Ph.D. dissertation on modeling public opinion by Emily Cody (link).


Critical realism proposes an approach to the social world that pays particular attention to objective and material features of the social realm — property relations, impersonal institutional arrangements, supra-individual social structures. Between structure and agent, CR seems most often to lean towards structures rather than consciously feeling and thinking agents. And so one might doubt whether CR has anything useful to offer when it comes to studying the subjective side of social life.

Take for example the idea of a social identity. A social identity seems inherently subjective. It is the bundle of ideas and frameworks through which one places himself or herself in the social world, the framework through which a person conceptualizes his/her relations with others, and an ensemble of the motivations and commitments that lead to important forms of social and political action. All of this sounds subjective in the technical sense — a part of the subjective and personal experience of a single individual. It is part of consciousness, not the material world.

So it is reasonable to ask whether there is anything in a social identity that is available for investigation through the lens of critical realism.

The answer, however, seems to be fairly clear. Ideas and mental frameworks have social antecedents and causal influences. Individuals take shape through concrete social development that is conducted through stable social arrangements and institutions. Consciousness has material foundations. And therefore, it is perfectly appropriate to pursue a realist materialist investigation of social consciousness. This was in fact one important focus of the Annales school of historiography.

This is particularly evident in the example of a social identity. No one is born with a Presbyterian or a Sufi religious identity. Instead, children, adolescents, and young adults acquire their religious and moral ideas through interaction with other individuals, and many of those interactions are determined by enduring social structures and institutional arrangements. So it is a valid subject of research to attempt to uncover the pathways of interaction and influence through which individuals come to have the ideas and values they currently have. This is a perfectly objective topic for social research.

But equally, the particular configuration of beliefs and values possessed by a given individual and a community of individuals is an objective fact as well, and it is amenable to empirical investigation. The research currently being done on the subcultures of right wing extremism illustrates this point precisely. It is an interesting and important fact to uncover (if it is a fact) that the ideologies and symbols of hate that seem to motivate right wing youth are commonly associated with patriarchal views of gender as well.

So ideas and identities are objective in at least two senses, and are therefore amenable to treatment from a realist perspective. They have objective social determinants that can be rigorously investigated; and they have a particular grammar and semiotics that need to be rigorously investigated as well. Both kinds of inquiry are amenable to realist interpretation: we can be realist about the mechanisms through which a given body of social beliefs and values are promulgated through a population, and we can be realist about the particular content of those belief systems themselves.

Ironically, this position seems to converge in an unexpected way with two streams of classical social theory. This approach to social consciousness resonates with some of the holistic ideas that Durkheim brought to his interpretation of religion and morality. But likewise it brings to mind Marx’s views of the determinants of social consciousness through objective material circumstances. We don’t generally think of Marx and Durkheim as having much in common. But on the topic of the material reality of ideas and their origins in material features of social life, they seem to agree.

These considerations seem to lead to a strong conclusion: critical realism can be as insightful in its treatment of “subjective” features of social consciousness and identities as it is in the study of objective social structures.


Brian Epstein is adamant that the social sciences need to think very differently about the nature of the social world. In The Ant Trap: Rebuilding the Foundations of the Social Sciences he sets out to blow up our conventional thinking about the relation between individuals and social facts. In particular, he is fundamentally skeptical about any conception of the social world that depends on the idea of ontological individualism, directly or indirectly. Here is the plainest statement of his view:

When we look more closely at the social world, however, this analogy [of composition of wholes out of independent parts] falls apart. We often think of social facts as depending on people, as being created by people, as the actions of people. We think of them as products of the mental processes, intentions, beliefs, habits, and practices of individual people. But none of this is quite right. Research programs in the social sciences are built on a shaky understanding of the most fundamental question of all: What are the social sciences about? Or, more specifically: What are social facts, social objects, and social phenomena—these things that the social sciences aim to model and explain?

My aim in this book is to take a first step in challenging what has come to be the settled view on these questions. That is, to demonstrate that philosophers and social scientists have an overly anthropocentric picture of the social world. How the social world is built is not a mystery, not magical or inscrutable or beyond us. But it turns out to be not nearly as people-centered as is widely assumed. (p. 7)

Here is one key example Epstein provides to give intuitive grasp of the anti-reductionist metaphysics he has in mind — the relationship between “the Supreme Court” and the nine individuals who make it up.

One of the examples I will be discussing in some detail is the United States Supreme Court. It is small— nine members— and very familiar, so there are lots of facts about it we can easily consider. Even a moment’s reflection is enough to see that a great many facts about the Supreme Court depend on much more than those nine people. The powers of the Supreme Court are not determined by the nine justices, nor do the nine justices even determine who the members of the Supreme Court are. Even more basic, the very existence of the Supreme Court is not determined by those nine people. In all, knowing all kinds of things about the people that constitute the Supreme Court gives us very little information about what that group is, or about even the most basic facts about that group. (p. 10)

Epstein makes an important observation when he notes that there are two “consensus” views of the individual-level substrate of the social world, not just one. The first is garden-variety individualism: it is individuals and their properties (psychological, bodily) involved in external relations with each other that constitute the individual-level substrate of the social. In this case it is reasonable to apply the supervenience relation to the relation between individuals and higher-level social facts (link).

The second view is more of a social-constructivist orientation towards individuals: individuals are constituted by their representations of themselves and others; the individual-level is inherently semiotic and relational. Epstein associates this view with Searle (50 ff.); but it seems to characterize a range of other theorists, from Geertz to Goffman and Garfinkel. Epstein refers to this approach as the “Standard Model” of social ontology. Fundamental to the Standard Model is the idea of institutional facts — the rules of a game, the boundaries of a village, the persistence of a paper currency. Institutional facts are held in place by the attitudes and performances of the individuals who inhabit them; but they are not reducible to an ensemble of individual-level psychological facts. And the constructionist part of the approach is the idea that actors jointly constitute various social realities — a demonstration against the government, a celebration, or a game of bridge. And Epstein believes that supervenience fails in the constructivist ontology of the Standard Model (57).

Both views are anti-dualistic (no inherent social “stuff”); but on Epstein’s approach they are ultimately incompatible with each other.

But here is the critical point: Epstein doesn’t believe that either of these views is adequate as a basis for social metaphysics. We need a new beginning in the metaphysics of the social world. Where to start this radical work? Epstein offers several new concepts to help reshape our metaphysical language about social facts — what he refers to as “grounding” and “anchoring” of social facts. “Grounding” facts for a social fact M are lower-level facts that help to constitute the truth of M. “Bob and Jane ran down Howe Street” partially grounds the fact “the mob ran down Howe Street” (M). The fact about Bob and Jane is one of the features of the world that contributes to the truth and meaning of M. “Full grounding” is a specification of all the facts needed in order to account for M. “Anchoring” facts are facts that characterize the constructivist aspect of the social world — conformance to meanings, rules, or institutional structures. An anchoring fact is one that sets the “frame” for a social fact. (An earlier post offered reflections on anchor individualism; link.)

Epstein suggests that “grounding” corresponds to classic ontological individualism, while “anchoring” corresponds to the Standard Model (the constructivist view).

What I will call “anchor individualism” is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (100)

And he believes that a more adequate social ontology is one that incorporates both grounding and anchoring relations. “Anchoring and grounding fit together into a single model of social ontology” (82).

Here is an illustrative diagram of how the two kinds of relations work in a particular social fact (Epstein 94):

So Epstein has done what he set out to do: he has taken the metaphysics of the social world as seriously as contemporary metaphysicians do other important topics, and he has teased out a large body of difficult questions about constitution, causation, formation, grounding, and anchoring. This is a valuable and innovative contribution to the philosophy of social science.

But does this exercise add significantly to our ability to conduct social science research and theory? Do James Coleman, Sam Popkin, Jim Scott, George Steinmetz, or Chuck Tilly need to fundamentally rethink their approach to the social problems they attempted to understand in their work? Do the metaphysics of “frame”, “ground”, and “anchor” make for better social research?

My inclination is to think that this is not an advantage we can attribute to The Ant Trap. Clarity, precision, surprising conceptual formulations, yes; these are all virtues of the book. But I am not convinced that these conceptual innovations will actually make the work of explaining industrial actions, rebellious behavior, organizational failures, educational systems that fail, or the rise of hate-based extremism more effective or insightful.

In order to do good social research we do of course need to have a background ontology. But after working through The Ant Trap several times, I’m still not persuaded that we need to move beyond a fairly commonsensical set of ideas about the social world:

individuals have mental representations of the world they inhabit

institutional arrangements exist through which individuals develop, form, and act

institutions and norms are embodied in the thoughts, actions, artifacts, and traces of individuals (grounded and anchored, in Epstein’s terms)

social causation proceeds through the substrate of individuals thinking, acting, re-acting, and engaging with other individuals

These are the assumptions that I have in mind when I refer to “actor-centered sociology” (link). This is not a sophisticated philosophical theory of social metaphysics; but it is fully adequate to grounding a realist and empirically informed effort to understand the social world around us. And nothing in The Ant Trap leads me to believe that there are fundamental conceptual impossibilities embedded in these simple, mundane individualistic ideas about the social world.

And this leads me to one other conclusion: Epstein argues the social sciences need to think fundamentally differently. But actually, I think he has shown at best that philosophers can usefully think differently — but in ways that may in the end not have a lot of impact on the way that inventive social theorists need to conceive of their work.

(The photo at the top is chosen deliberately to embody the view of the social world that I advocate: contingent, institutionally constrained, multi-layered, ordinary, subject to historical influences, constituted by indefinite numbers of independent actors, demonstrating patterns of coordination and competition. All these features are illustrated in this snapshot of life in Copenhagen — the independent individuals depicted, the traffic laws that constrain their behavior, the polite norms leading to conformance to the crossing signal, the sustained effort by municipal actors and community based organizations to encourage bicycle travel, and perhaps the lack of diversity in the crowd.)

John Holland describes some of the features of behavior of complex systems in these terms in Complexity:

self-organization into patterns, as occurs with flocks of birds or schools of fish

chaotic behaviour where small changes in initial conditions (‘the flapping of a butterfly’s wings in Argentina’) produce large later changes (‘a hurricane in the Caribbean’)

‘fat-tailed’ behaviour, where rare events (e.g. mass extinctions and market crashes) occur much more often than would be predicted by a normal (bell-curve) distribution

adaptive interaction, where interacting agents (as in markets or the Prisoner’s Dilemma) modify their strategies in diverse ways as experience accumulates. (p. 5)

In CAS the elements are adaptive agents, so the elements themselves change as the agents adapt. The analysis of such systems becomes much more difficult. In particular, the changing interactions between adaptive agents are not simply additive. This non-linearity rules out the direct use of PDEs in most cases (most of the well-developed parts of mathematics, including the theory of PDEs, are based on assumptions of additivity). (p. 11)
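Holland’s point that interactions among adaptive agents are not simply additive can be made concrete with a toy iterated Prisoner’s Dilemma in which each agent adjusts its propensity to cooperate in light of its own accumulated payoffs. This is only an illustrative sketch; the agent design, payoff values, and update rule are my own choices, not Holland’s:

```python
import random

# Payoffs for the row player in a one-shot Prisoner's Dilemma:
# (my_move, other_move) -> my payoff; C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class AdaptiveAgent:
    """An agent that nudges its cooperation probability toward
    whichever move has earned it more on average so far."""
    def __init__(self, p_cooperate=0.5, step=0.05):
        self.p = p_cooperate
        self.step = step
        self.totals = {"C": [0.0, 0], "D": [0.0, 0]}  # payoff sum, count

    def choose(self, rng):
        return "C" if rng.random() < self.p else "D"

    def learn(self, move, payoff):
        self.totals[move][0] += payoff
        self.totals[move][1] += 1
        avg = {m: s / n if n else 0.0 for m, (s, n) in self.totals.items()}
        # Shift probability toward the more rewarding move so far.
        if avg["C"] > avg["D"]:
            self.p = min(1.0, self.p + self.step)
        elif avg["D"] > avg["C"]:
            self.p = max(0.0, self.p - self.step)

def play(rounds=200, seed=0):
    """Let two adaptive agents play repeatedly; return their final
    cooperation propensities."""
    rng = random.Random(seed)
    a, b = AdaptiveAgent(), AdaptiveAgent()
    for _ in range(rounds):
        ma, mb = a.choose(rng), b.choose(rng)
        a.learn(ma, PAYOFF[(ma, mb)])
        b.learn(mb, PAYOFF[(mb, ma)])
    return a.p, b.p
```

Because defection dominates the stage game, both propensities tend to drift downward; but each agent’s trajectory depends on the evolving behavior of the other, so the joint outcome is the product of a coupled history rather than the sum of two independent learning processes. That coupling is exactly why such systems resist additive analysis.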

Miller and Page put the point this way:

One of the most powerful tools arising from complex systems research is a set of computational techniques that allow a much wider range of models to be explored. With these tools, any number of heterogeneous agents can interact in a dynamic environment subject to the limits of time and space. Having the ability to investigate new theoretical worlds obviously does not imply any kind of scientific necessity or validity — these must be earned by carefully considering the ability of the new models to help us understand and predict the questions that we hold most dear. (Complex Adaptive Systems, kl 199)

Much of the focus of complex systems is on how systems of interacting agents can lead to emergent phenomena. Unfortunately, emergence is one of those complex systems ideas that exists in a well-trodden, but relatively untracked, bog of discussion. The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. Clearly such notions are important when considering the decentralized systems that are key to the study of complex systems. Here we discuss emergence from both an intuitive and a theoretical perspective.

(Complex Adaptive Systems, kl 832)

As discussed previously, we have access to some useful “emergence” theorems for systems that display disorganized complexity. However, to fully understand emergence, we need to go beyond these disorganized systems with their interrelated, helter-skelter agents and begin to develop theories for those systems that entail organized complexity. Under organized complexity, the relationships among the agents are such that through various feedbacks and structural contingencies, agent variations no longer cancel one another out but, rather, become reinforcing. In such a world, we leave the realm of the Law of Large Numbers and instead embark down paths unknown. While we have ample evidence, both empirical and experimental, that under organized complexity, systems can exhibit aggregate properties that are not directly tied to agent details, a sound theoretical foothold from which to leverage this observation is only now being constructed.

(Complex Adaptive Systems, kl 987)

And here is Joshua Epstein’s description of what he calls “generative social science”:

The agent-based computational model — or artificial society — is a new scientific instrument. It can powerfully advance a distinctive approach to social science, one for which the term “generative” seems appropriate. I will discuss this term more fully below, but in a strong form, the central idea is this: To the generativist, explaining the emergence of macroscopic societal regularities, such as norms or price equilibria, requires that one answer the following question:

The Generativist’s Question

* How could the decentralized local interactions of heterogeneous autonomous agents generate the given regularity?

The agent-based computational model is well-suited to the study of this question since the following features are characteristics. (5)

Here Epstein refers to the characteristics of heterogeneity of actors, autonomy, explicit space, local interactions, and bounded rationality. And he believes that it is both possible and mandatory to show how higher-level social characteristics emerge from the rule-governed interactions of the agents at a lower level.
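The canonical answer to the generativist’s question for one macro regularity, residential segregation, is Schelling’s model, which exhibits the features Epstein lists: heterogeneous agents, autonomy, explicit space, local interaction, and bounded rationality. The sketch below is my own minimal rendering, not Epstein’s code, and all parameter values are illustrative:

```python
import random

def like_fraction(grid, i, j, n):
    """Fraction of occupied Moore neighbors sharing agent (i, j)'s type
    (the grid wraps around at the edges)."""
    me = grid[i][j]
    same = occupied = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            v = grid[(i + di) % n][(j + dj) % n]
            if v is not None:
                occupied += 1
                same += (v == me)
    return same / occupied if occupied else 1.0

def schelling(n=20, fill=0.8, threshold=0.4, steps=5000, seed=1):
    """Run a minimal Schelling model; return the mean like-neighbor
    fraction across agents at the end."""
    rng = random.Random(seed)
    cells = [rng.choice((0, 1)) if rng.random() < fill else None
             for _ in range(n * n)]
    grid = [cells[k * n:(k + 1) * n] for k in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if grid[i][j] is None or like_fraction(grid, i, j, n) >= threshold:
            continue
        # Unhappy agent moves to a random empty cell: bounded rationality,
        # a purely local rule with no global optimization.
        empties = [(a, b) for a in range(n) for b in range(n)
                   if grid[a][b] is None]
        if not empties:
            continue
        a, b = rng.choice(empties)
        grid[a][b], grid[i][j] = grid[i][j], None
    occ = [(i, j) for i in range(n) for j in range(n)
           if grid[i][j] is not None]
    return sum(like_fraction(grid, i, j, n) for i, j in occ) / len(occ)
```

Even with a mild preference (agents tolerate being a local minority down to 40 percent like neighbors), repeated local moves typically drive the average like-neighbor fraction well above what random mixing would produce: a macro regularity generated by decentralized local interaction rather than stipulated at the macro level.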

There are differences across these approaches. But generally these authors bring together two rather different ideas — the curious unpredictability of even fairly small interconnected systems familiar from chaos theory, and the idea that there are simple higher level patterns that can be discovered and explained based on the turbulent behavior of the constituents. And they believe that it is possible to construct simulation models that allow us to trace out the interactions and complexities that constitute social systems.

So does complexity science create a basis for a general theory of society? And does it provide a basis for understanding the features of contingency, heterogeneity, and plasticity that I have emphasized throughout? I think the answer to both questions is ultimately no.

Start with the fact of social contingency. Complexity models often give rise to remarkable and unexpected outcomes and patterns. Does this mean that complexity science demonstrates the origin of contingency in social outcomes? By no means; in fact, the opposite is true. The outcomes demonstrated by complexity models are in fact no more than computational derivations of the consequences of the premises of these models. So the surprises created by complex systems models only appear contingent; in fact they are generated by the properties of the constituents. So the surprises produced by complexity science are simulacra of contingency, not the real thing.

Second, what about heterogeneity? Does complexity science illustrate or explain the heterogeneity of social things? Not particularly. The heterogeneity of social things — organizations, value systems, technical practices — does not derive from complex system effects; it derives from the fact of individual actor interventions and contingent exogenous influences.

Finally, consider the feature of plasticity — the fact that social entities can “morph” over time into substantially different structures and functions. Does complexity theory explain the feature of social plasticity? It does not. This is simply another consequence of the substrate of the social world itself: the fact that social structures and forces are constituted by the actors that make them up. This is not a systems characteristic, but rather a reflection of the looseness of social interaction. The linkages within a social system are weak and fragile, and the resulting structures can take many forms, and are subject to change over time.

The tools of simulation and modeling that complexity theorists are in the process of developing are valuable contributions, and they need to be included in the toolbox. However, they do not constitute the basis of a complete and comprehensive methodology for understanding society. Moreover, there are important examples of social phenomena that are not at all amenable to treatment with these tools.

This leads to a fairly obvious conclusion, and one that I believe complexity theorists would accept: that complexity theories and the models they have given rise to are a valuable contribution; but they are only a partial answer to the question, how does the social world work?


In an earlier post I considered the topic of phase transitions as a possible source of emergent phenomena (link). I argued there that phase transitions are indeed interesting, but don’t raise a serious problem of strong emergence. Tarun Menon considers this issue in substantial detail in the chapter he co-authored with Craig Callender in The Oxford Handbook of Philosophy of Physics, “Turn and face the strange … ch-ch-changes: Philosophical questions raised by phase transitions” (link). Menon and Callender provide a very careful and logical account of three ways of approaching phase transitions within physics and three versions of emergence (conceptual, explanatory, ontological). The piece is technical but very interesting, with a somewhat deflating conclusion (if you are a fan of emergence):

We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible.

Menon and Callender review three approaches to the phenomenon of phase transition offered by physics: classical thermodynamics, statistical mechanics, and renormalization group theory. Thermodynamics describes the behavior of materials (gases, liquids, and solids) at the macro level; and statistical mechanics and renormalization group theory are theories of the micro states of materials intended to allow derivation of the macro behavior of the materials from statistical properties of the micro states. They describe this relationship in these terms:

Statistical mechanics is the theory that applies probability theory to the microscopic degrees of freedom of a system in order to explain its macroscopic behavior. The tools of statistical mechanics have been extremely successful in explaining a number of thermodynamic phenomena, but it turned out to be particularly difficult to apply the theory to the study of phase transitions. (193)

Here is the mathematical definition of phase transition that they provide:

Mathematically, phase transitions are represented by nonanalyticities or singularities in a thermodynamic potential. A singularity is a point at which the potential is not infinitely differentiable, so at a phase transition some derivative of the thermodynamic potential changes discontinuously. (191)

And they offer this definition:

(Def 1) An equilibrium phase transition is a nonanalyticity in the free energy. (194)

Here is their description of how the renormalization group theory works:

To explain the method, we return to our stalwart Ising model. Suppose we coarse grain a 2 D Ising model by replacing 3 × 3 blocks of spins with a single spin pointing in the same direction as the majority in the original block. This gives us a new Ising system with a longer distance between lattice sites, and possibly a different coupling strength. You could look at this coarse graining procedure as a transformation in the Hamiltonian describing the system. Since the Hamiltonian is characterized by the coupling strength, we can also describe the coarse graining as a transformation in the coupling parameter. Let K be the coupling strength of the original system and R be the relevant transformation. The new coupling strength is K′ = RK. This coarse graining procedure could be iterated, producing a sequence of coupling parameters, each related to the previous one by the transformation R. The transformation defines a flow on parameter space. (195)

Renormalization group theory, then, is essentially the mathematical basis of coarse-graining analysis (link).
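The block-spin transformation described in the quoted passage is easy to state in code. The sketch below applies the majority rule to 3×3 blocks of a lattice of +1/−1 spins and iterates it, which is the “flow” the renormalization group studies; computing how the coupling strength K transforms under each step is the harder calculation and is not attempted here:

```python
import numpy as np

def block_spin(spins, b=3):
    """One majority-rule block-spin step on a square 2D lattice of +1/-1
    spins. Each b x b block (b odd, so there are no ties) is replaced by
    the sign of its spin sum."""
    n = spins.shape[0]
    assert n % b == 0 and b % 2 == 1
    m = n // b
    # Reshape so axes 1 and 3 index positions within each block, then sum.
    blocks = spins.reshape(m, b, m, b).sum(axis=(1, 3))
    return np.sign(blocks).astype(int)

def rg_flow(spins, b=3):
    """Iterate the coarse-graining transformation down to a single site,
    returning the sequence of successively coarser lattices."""
    out = [spins]
    while out[-1].shape[0] >= b:
        out.append(block_spin(out[-1], b))
    return out
```

Applied to a 27×27 lattice, `rg_flow` yields 9×9, 3×3, and finally 1×1 descriptions, each a coarser summary of the one before; this is the sense in which renormalization is iterated coarse-graining.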

The key difficulty that has been used to ground arguments about strong emergence of phase transitions is now apparent: there seems to be a logical disjunction between the resources of statistical mechanics and the findings of thermodynamics. In theory physicists would like to hold that statistical mechanics provides the micro-level representation of the phenomena described by thermodynamics; or in other words, that thermodynamic facts can be reduced to derivations from statistical mechanics. However, the definition of a phase transition above specifies that the phenomena display “nonanalyticities” — instantaneous and discontinuous changes of state. It is easily demonstrated that the equations used in statistical mechanics do not display nonanalyticities; change may be abrupt, but it is not discontinuous, and the equations are infinitely differentiable. So if phase transitions are points of nonanalyticity, and statistical mechanics does not admit of nonanalytic equations, then it would appear that thermodynamics is not derivable from statistical mechanics. Similar reasoning applies to renormalization group theory.
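The reasoning behind the claim that statistical mechanics does not admit of nonanalyticities for finite systems can be made explicit. The following is standard textbook reasoning, with notation of my own:

```latex
% Canonical partition function for a finite system with energies E_i:
Z_N(\beta) = \sum_{i} e^{-\beta E_i}, \qquad
F_N(\beta) = -\frac{1}{\beta} \ln Z_N(\beta).
% Each term e^{-\beta E_i} is analytic in \beta; a finite sum of analytic
% functions is analytic, and Z_N(\beta) > 0 for real \beta > 0, so the
% free energy F_N(\beta) is infinitely differentiable. A genuine
% singularity can therefore arise only in the thermodynamic limit:
f(\beta) = \lim_{N \to \infty} \frac{F_N(\beta)}{N}.
```

This is why, on the mathematical definition above, no strictly finite system ever undergoes a phase transition, which sets up the difficulty discussed next.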

This problem was solved within statistical mechanics by admitting infinitely many bodies within the represented system (or, alternatively, infinitely compressed volumes of bodies); but neither of these infinity assumptions is realistic of any actual material.

So are phase transitions “emergent” phenomena in either a weak sense or a strong sense, relative to the micro-states of the material in question? The strongest sense of emergence is what Menon and Callender call ontological irreducibility.

Ontological irreducibility involves a very strong failure of reduction, and if any phenomenon deserves to be called emergent, it is one whose description is ontologically irreducible to any theory of its parts. Batterman argues that phase transitions are emergent in this sense (Batterman 2005). It is not just that we do not know of an adequate statistical mechanical account of them, we cannot construct such an account. Phase transitions, according to this view, are cases of genuine physical discontinuities. (215)

The possibility that phase transitions are ontologically emergent at the level of thermodynamics is raised by the point about the mathematical characteristics of the equations that constitute the statistical mechanics description of the micro level — the infinite differentiability of those equations. But Menon and Callender give a compelling reason for thinking this is misleading. They believe that phase transitions constitute a conceptual novelty with respect to the resources of statistical mechanics — phase transitions do not correspond to natural kinds at the level of the micro-constitution of the material. But they argue that this does not establish that the phenomena cannot be explained or derived from a micro-level description. So phase transitions are not emergent according to the explanatory or ontological understandings of that idea.

The nub of the issue comes down to how we construe the idealization of statistical mechanics that assumes that a material consists of an infinite number of elements. This is plainly untrue of any real system (gas, liquid, or solid). The fact that real systems have boundaries implies that important thermodynamic properties are not strictly “extensive” with volume: doubling the volume does not exactly double the entropy. But the way in which the finitude of a volume of material affects its behavior is through novel behaviors at the edges of the volume. And in many instances these edge effects are small relative to the behavior of the whole, if the volume is large enough.

Does this fact imply that there is a great mystery about extensivity, that extensivity is truly emergent, that thermodynamics does not reduce to finite N statistical mechanics? We suggest that on any reasonably uncontentious way of defining these terms, the answer is no. We know exactly what is happening here. Just as the second law of thermodynamics is no longer strict when we go to the microlevel, neither is the concept of extensivity. (201-202)

There is an important idealization on the thermodynamic description as well — the notion that several specific kinds of changes are instantaneous or discontinuous. But this assumption can also be seen as an idealization, corresponding to a physical system that is undergoing changes at different rates under different environmental conditions. What thermodynamics describes as an instantaneous change from liquid to gas may be better understood as a rapid process of change at the molar level which can be traced through in a continuous way.

(The fact that some systems are coarse-grained has an interesting implication for this set of issues (link). The interesting implication is that while it is generally true that the micro states in such a system entail the macro states, the reverse is not true: we cannot infer from a given macro state to the exact underlying micro state. Rather, many possible micro states correspond to a given macro state.)

The conclusion they reach is worth quoting:

Phase transitions are an important instance of putatively emergent behavior. Unlike many things claimed emergent by philosophers (e.g., tables and chairs), the alleged emergence of phase transitions stems from both philosophical and scientific arguments. Here we have focused on the case for emergence built from physics. We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible. And if one goes past textbook statistical mechanics, then an argument can be made that they are not even conceptually novel. In the case of renormalization group theory, consideration of infinite systems and their singular behavior provides a central theoretical tool, but this is compatible with an explanatory reduction. Phase transitions may be “emergent” in some sense of this protean term, but not in a sense that is incompatible with the reductionist project broadly construed. (222)

Or in other words, Menon and Callender refute one of the most technically compelling interpretations of ontological emergence in physical systems. They show that the phenomena of phase transitions as described by classical thermodynamics are compatible with being reduced to the dynamics of individual elements at the micro-level, so phase transitions are not ontologically emergent.

Are these arguments relevant in any way to debates about emergence in social system dynamics? The direct relevance is limited, since these arguments depend entirely on the mathematical properties of the ways in which the micro-level of physical systems are characterized (statistical mechanics). But the more general lesson does in fact seem relevant: rather than simply postulating that certain social characteristics are ontologically emergent relative to the actors that make them up, we would be better advised to look for the local-level processes that act to bring about surprising transitions at critical points (for example, the shift in a flock of birds from random flight to a swarm in a few seconds).


A primary reason for thinking that assemblage theory is important is the fact that it offers new ways of thinking about social ontology. Instead of thinking of the social world as consisting of fixed entities and properties, we are invited to think of it as consisting of fluid agglomerations of diverse and heterogeneous processes. Manuel DeLanda’s recent book Assemblage Theory sheds new light on some of the complexities of this theory.

Particularly important is the question of how to think about the reality of large historical structures and conditions. What is “capitalism” or “the modern state” or “the corporation”? Are these temporally extended but unified things? Or should they be understood in different terms altogether? Assemblage theory suggests a very different approach. Here is an astute description by DeLanda of historical ontology with respect to the historical imagination of Fernand Braudel:

Braudel’s is a multi-scaled social reality in which each level of scale has its own relative autonomy and, hence, its own history. Historical narratives cease to be constituted by a single temporal flow — the short timescale at which personal agency operates or the longer timescales at which social structure changes — and becomes a multiplicity of flows, each with its own variable rates of change, its own accelerations and decelerations. (14)

DeLanda extends this idea by suggesting that the theory of assemblage is an antidote to essentialism and reification of social concepts:

Thus, both ‘the Market’ and ‘the State’ can be eliminated from a realist ontology by a nested set of individual emergent wholes operating at different scales. (16)

I understand this to mean that “Market” is a high-level reification; it does not exist in and of itself. Rather, the things we want to encompass within the rubric of market activity and institutions are an agglomeration of lower-level concrete practices and structures which are contingent in their operation and variable across social space. And this is true of other high-level concepts — capitalism, IBM, or the modern state.

DeLanda’s reconsideration of Foucault’s ideas about prisons is illustrative of this approach. After noting that institutions of discipline can be represented as assemblages, he asks the further question: what are the components that make up these assemblages?

The components of these assemblages … must be specified more clearly. In particular, in addition to the people that are confined — the prisoners processed by prisons, the students processed by schools, the patients processed by hospitals, the workers processed by factories — the people that staff those organizations must also be considered part of the assemblage: not just guards, teachers, doctors, nurses, but the entire administrative staff. These other persons are also subject to discipline and surveillance, even if to a lesser degree. (39)

So how do assemblages come into being? And what mechanisms and forces serve to stabilize them over time? This is a topic where DeLanda’s approach shares a fair amount with historical institutionalists like Kathleen Thelen (link, link): the insight that institutions and social entities are created and maintained by the individuals who interface with them, and that both parts of this observation need explanation. It is not necessarily the case that the same incentives or circumstances that led to the establishment of an institution also serve to gain the forms of coherent behavior that sustain the institution. So creation and maintenance need to be treated independently. Here is how DeLanda puts this point:

So we need to include in a realist ontology not only the processes that produce the identity of a given social whole when it is born, but also the processes that maintain its identity through time. And we must also include the downward causal influence that wholes, once constituted, can exert on their parts. (18)

Here DeLanda links the compositional causal point (what we might call the microfoundational point) with the additional idea that higher-level social entities exert downward causal influence on lower-level structures and individuals. This is part of his advocacy of emergence; but it is controversial, because it might be maintained that the causal powers of the higher-level structure are simultaneously real and derivative upon the actions and powers of the components of the structure (link). (This is the reason I prefer to use the concept of relative explanatory autonomy rather than emergence; link.)

DeLanda summarizes several fundamental ideas about assemblages in these terms:

“Assemblages have a fully contingent historical identity, and each of them is therefore an individual entity: an individual person, an individual community, an individual organization, an individual city.”

“Assemblages are always composed of heterogeneous components.”

“Assemblages can become component parts of larger assemblages. Communities can form alliances or coalitions to become a larger assemblage.”

“Assemblages emerge from the interactions between their parts, but once an assemblage is in place it immediately starts acting as a source of limitations and opportunities for its components (downward causality).” (19-21)

There is also the suggestion that persons themselves should be construed as assemblages:

Personal identity … has not only a private aspect but also a public one, the public persona that we present to others when interacting with them in a variety of social encounters. Some of these social encounters, like ordinary conversations, are sufficiently ritualized that they themselves may be treated as assemblages. (27)

Here DeLanda cites the writings of Erving Goffman, who focuses on the public scripts that serve to constitute many kinds of social interaction (link); equally one might refer to Andrew Abbott’s processual and relational view of the social world and individual actors (link).

The most compelling example that DeLanda offers here and elsewhere of complex social entities construed as assemblages is perhaps the most complex and heterogeneous product of the modern world — cities.

Cities possess a variety of material and expressive components. On the material side, we must list for each neighbourhood the different buildings in which the daily activities and rituals of the residents are performed and staged (the pub and the church, the shops, the houses, and the local square) as well as the streets connecting these places. In the nineteenth century new material components were added, water and sewage pipes, conduits for the gas that powered early street lighting, and later on electricity and telephone wires. Some of these components simply add up to a larger whole, but citywide systems of mechanical transportation and communication can form very complex networks with properties of their own, some of which affect the material form of an urban centre and its surroundings. (33)

(William Cronon’s social and material history of Chicago in Nature’s Metropolis: Chicago and the Great West is a very compelling illustration of this additive, compositional character of the modern city; link. Contingency and conjunctural causation play a very large role in Cronon’s analysis. Here is a post that draws out some of the consequences of the lack of systematicity associated with this approach, titled “What parts of the social world admit of explanation?”; link.)


The question of the relationship between micro-level and macro-level is just as important in physics as it is in sociology. Is it possible to derive the macro-states of a system from information about the micro-states of the system? It turns out that there are some surprising aspects of the relationship between micro and macro that physical systems display. The mathematical technique of “coarse-graining” represents an interesting wrinkle on this question. So what is coarse-graining? Fundamentally it is the idea that we can replace micro-level specifics with local-level averages, without reducing our ability to calculate macro-level dynamics of behavior of a system.

A 2004 article by Israeli and Goldenfeld, “Coarse-graining of cellular automata, emergence, and the predictability of complex systems” (link) provides a brief description of the method of coarse-graining. (Here is a Wolfram demonstration of the way that coarse graining works in the field of cellular automata; link.) Israeli and Goldenfeld also provide physical examples of phenomena with what they refer to as emergent characteristics. Let’s see what this approach adds to the topic of emergence and reduction. Here is the abstract of their paper:

We study the predictability of emergent phenomena in complex systems. Using nearest neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram’s classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing devices and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally we argue that the large scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large scale simplicity as a pattern formation mechanism in which large scale patterns are forced upon the system by the simplicity of the rules that govern the large scale dynamics.

This paragraph involves several interesting ideas. One is that the micro-level details do not matter to the macro outcome (“without accounting for small-scale details,” in the abstract’s words). Another related idea is that macro-level patterns are (sometimes) forced by the “rules that govern the large scale dynamics” — rather than by the micro-level states.

Coarse-graining methodology is a family of computational techniques that permits “averaging” of values (intensities) from the micro-level to a higher level of organization. The computational models developed in this literature were primarily applied to the properties of heterogeneous materials, large molecules, and other physical systems. For example, consider a two-dimensional array of iron atoms as a grid with randomly distributed magnetic orientations (up, down). A coarse-grained description of this system would be constructed by taking each 3×3 square of the grid and assigning it the up-down value corresponding to the majority of atoms in that square. Now the information about nine atoms has been reduced to a single piece of information for the 3×3 grid. Analogously, we might consider a city of Democrats and Republicans. Suppose we know the affiliation of each household on every street. We might “coarse-grain” this information by replacing the household-level data with the majority representation of 3×3 grids of households. We might take another step of aggregation by considering 3×3 grids of grids, and representing the larger composite by the majority value of the component grids.
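The household example just described can be sketched directly. The grid size and the party assignment below are invented purely for illustration:

```python
def majority(block):
    """Majority value of a flat list of 0/1 values (odd length: no ties)."""
    return 1 if sum(block) * 2 > len(block) else 0

def coarse_grain(grid, b=3):
    """Replace each b x b block of a square 2D 0/1 grid with its
    majority value, yielding a grid b times smaller on each side."""
    n = len(grid)
    m = n // b
    return [[majority([grid[i * b + di][j * b + dj]
                       for di in range(b) for dj in range(b)])
             for j in range(m)]
            for i in range(m)]

# A 9 x 9 'city' of households (1 = one party, 0 = the other); the
# assignment rule is arbitrary, assumed data for the example.
city = [[1 if (r + c) % 4 else 0 for c in range(9)] for r in range(9)]
blocks = coarse_grain(city)      # 3 x 3 grid of block majorities
district = coarse_grain(blocks)  # 1 x 1: the majority of block majorities
```

Each application of `coarse_grain` discards micro-level detail (nine households collapse to one value) while preserving the aggregate fact we care about, which is exactly the trade the methodology makes.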

How does the methodology of coarse-graining interact with other inter-level questions we have considered elsewhere in Understanding Society (emergence, generativity, supervenience)? Israeli and Goldenfeld connect their work to the idea of emergence in complex systems. Here is how they describe emergence:

Emergent properties are those which arise spontaneously from the collective dynamics of a large assemblage of interacting parts. A basic question one asks in this context is how to derive and predict the emergent properties from the behavior of the individual parts. In other words, the central issue is how to extract large-scale, global properties from the underlying or microscopic degrees of freedom. (1)

Note that this is the weak form of emergence (link); Israeli and Goldenfeld explicitly postulate that the higher-level properties can be derived (“extracted”) from the micro level properties of the system. So the calculations associated with coarse-graining do not imply that there are system-level properties that are non-derivable from the micro-level of the system; or in other words, the success of coarse-graining methods does not support the idea that physical systems possess strongly emergent properties.

Does the success of coarse-graining for some systems have implications for supervenience? If the states of S can be derived from a coarse-grained description C of M (the underlying micro-level), does this imply that S does not supervene upon M? It does not. A coarse-grained description corresponds to multiple distinct micro-states, so there is a many-one relationship between M and C. But this is consistent with the fundamental requirement of supervenience: no difference at the higher level without some difference at the micro level. So supervenience is consistent with the facts of successful coarse-graining of complex systems.
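The many-one relationship between micro-states and a coarse-grained description is easy to exhibit by brute force for a single 3×3 block of binary cells:

```python
from itertools import product

def macro(micro):
    """Majority ('coarse-grained') value of a 3 x 3 block of 0/1 cells,
    given as a flat 9-tuple."""
    return 1 if sum(micro) >= 5 else 0

# Enumerate all 2**9 = 512 micro-states of the block and sort them by
# the macro value they realize.
micro_states = list(product((0, 1), repeat=9))
preimages = {0: [], 1: []}
for m in micro_states:
    preimages[macro(m)].append(m)

# Many-one: each macro value has 256 distinct micro realizations, so the
# macro state does not determine the micro state. But the micro state
# does determine the macro state (macro is a function of the micro
# configuration), which is all that supervenience requires.
```

So 512 micro-states collapse onto just two macro-states, 256 apiece; inference runs from micro to macro but not back, which is precisely the asymmetry described in the paragraph above.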

What coarse-graining is inconsistent with is the idea that we need exact information about M in order to explain or predict S. Instead, we can eliminate a lot of information about M by replacing M with C, and still do a perfectly satisfactory job of explaining and predicting S.

There is an intellectual wrinkle in the Israeli and Goldenfeld article that I haven’t yet addressed here. This is their connection between complex physical systems and cellular automata. A cellular automaton is a simulation in which a simple algorithm governs the behavior of each cell. The Game of Life is an example of a cellular automaton (link). Here is what they say about the connection between physical systems and their simulations as systems of algorithms:

The problem of predicting emergent properties is most severe in systems which are modelled or described by undecidable mathematical algorithms[1, 2]. For such systems there exists no computationally efficient way of predicting their long time evolution. In order to know the system’s state after (e.g.) one million time steps one must evolve the system a million time steps or perform a computation of equivalent complexity. Wolfram has termed such systems computationally irreducible and suggested that their existence in nature is at the root of our apparent inability to model and understand complex systems [1, 3, 4, 5]. (1)

Suppose we are interested in simulating the physical process through which a pot of water undergoes sudden turbulence shortly before 100 degrees C (the transition point between water and steam at atmospheric pressure). There seem to be two large alternatives raised by Israeli and Goldenfeld: there may be a set of thermodynamic processes that permit derivation of the turbulence directly from the physical parameters present during the short interval of time; or it may be that the only way of deriving the turbulence phenomenon is to provide a molecule-level simulation based on the fundamental laws (algorithms) that govern the molecules. If the latter is the case, then simulating the process will prove computationally intractable.
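To make the notion of a cellular automaton concrete, here is a minimal one-step update for the Game of Life mentioned above, written in Python. This is a standard textbook implementation on a wrap-around grid, offered only as an illustration; it is not taken from the articles under discussion.

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid.
    grid is a 2-D array of 0s (dead) and 1s (alive)."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A live cell survives with 2 or 3 neighbors; a dead cell is born with 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "blinker" oscillates with period 2: two updates restore the original.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
after_two_steps = life_step(life_step(blinker))
```

The point of the example is that the global behavior of the system (still lifes, oscillators, gliders) is fully generated by this cell-level rule, even though predicting the long-run evolution of an arbitrary configuration may require running the simulation itself.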

Here is an extension of this approach in an article by Krzysztof Magiera and Witold Dzwinel, “Novel Algorithm for Coarse-Graining of Cellular Automata” (link). They describe “coarse-graining” in their abstract in these terms:

The coarse-graining is an approximation procedure widely used for simplification of mathematical and numerical models of multiscale systems. It reduces superfluous – microscopic – degrees of freedom. Israeli and Goldenfeld demonstrated in [1,2] that the coarse-graining can be employed for elementary cellular automata (CA), producing interesting interdependences between them. However, extending their investigation on more complex CA rules appeared to be impossible due to the high computational complexity of the coarse-graining algorithm. We demonstrate here that this complexity can be substantially decreased. It allows for scrutinizing much broader class of cellular automata in terms of their coarse graining. By using our algorithm we found out that the ratio of the numbers of elementary CAs having coarse grained representation to “degenerate” – irreducible – cellular automata, strongly increases with increasing the “grain” size of the approximation procedure. This rises principal questions about the formal limits in modeling of realistic multiscale systems.

Here Magiera and Dzwinel seem to be expressing the view that the approach to coarse-graining as a technique for simplifying the expected behavior of a complex system offered by Israeli and Goldenfeld will fail in the case of more extensive and complex systems (perhaps including the pre-boil turbulence example mentioned above).

I am not sure whether these debates have relevance for the modeling of social phenomena. Recall my earlier discussion of the modeling of rebellion using agent-based modeling simulations (link, link, link). These models work from the unit level — the level of the individuals who interact with each other. A coarse-graining approach would perhaps replace the individual-level description with a set of groups with homogeneous properties, and then attempt to model the likelihood of an outbreak of rebellion based on the coarse-grained level of description. Would this be feasible?
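By way of illustration only, here is one way such a coarse-grained model might look in Python. Everything here (the group-level decision rule, the threshold, the parameter names) is a hypothetical sketch of the idea of replacing individual agents with homogeneous groups; it is not a model drawn from Epstein or the agent-based modeling literature.

```python
import numpy as np

def coarse_grained_rebellion(grievance, legitimacy, group_sizes, threshold=0.5):
    """Toy group-level model: a group rebels when its average grievance,
    discounted by regime legitimacy, exceeds a threshold. Returns the
    fraction of the population belonging to rebelling groups.
    (Hypothetical illustration, not a model from the literature.)"""
    net = grievance * (1 - legitimacy)   # per-group net grievance
    rebelling = net > threshold          # group-level decision rule
    return group_sizes[rebelling].sum() / group_sizes.sum()

grievance = np.array([0.9, 0.6, 0.3])   # average grievance of each group
sizes = np.array([100, 500, 400])       # group populations
share = coarse_grained_rebellion(grievance, legitimacy=0.2, group_sizes=sizes)
print(share)  # 0.1
```

The open question raised in the text is whether a model at this level of aggregation could reproduce the outbreak dynamics that the individual-level simulations display, or whether the relevant dynamics depend on heterogeneity that the coarse-graining discards.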


Part of Manuel DeLanda’s work in Assemblage Theory is his hope to clarify and extend the way that we understand the ontological ideas associated with assemblage. He introduces a puzzling wrinkle into his discussion in this book — the idea that a concept is “equipped with a variable parameter, the setting of which determines whether the ensemble is coded or decoded” (3). He thinks this is useful because it helps to resolve the impulse towards essentialism in social theory while preserving the validity of the idea of assemblage:

A different problem is that distinguishing between different kinds of wholes involves ontological commitments that go beyond individual entities. In particular, with the exception of conventionally defined types (like the types of pieces in a chess game), natural kinds are equivalent to essences. As we have already suggested, avoiding this danger involves using a single term, ‘assemblage’, but building into it parameters that can have different settings at different times: for some settings the social whole will be a stratum, for other settings an assemblage (in the original sense). (18)

So “assemblage” does not refer to a natural kind or a social essence, but rather characterizes a wide range of social things, from the sub-individual to the level of global trading relationships. The social entities found at all scales are “assemblages” — ensembles of components, some of which are themselves ensembles of other components. But assemblages do not have an essential nature; rather there are important degrees of differentiation and variation across assemblages.

By contrast, we might think of the physical concepts of “metal” and “crystal” as functioning as something like a natural kind. A metal is an unchanging material configuration. Everything that we classify as a metal has a core set of physical-material properties that determine that it will be an electrical conductor, ductile, and solid over a wide range of terrestrial temperatures.

A particular conception of an assemblage (the idea of a city, for example) does not have this fixed essential character. DeLanda introduces the idea that the concept of a particular assemblage involves a parameter or knob that can be adjusted to yield different materializations of the given assemblage. An assemblage may take different forms depending on one or more important parameters.

What are those important degrees of variation that DeLanda seeks to represent with “knobs” and parameters? There are two that come in for extensive treatment: the idea of territorialization and the idea of coding. Territorialization is a measure of homogeneity, and coding is a measure of the degree to which a social outcome is generated by a grammar or algorithm. And DeLanda suggests that these ideas function as something like a set of dimensions along which particular assemblages may be plotted.

Here is how DeLanda attempts to frame this idea in terms of “a concept with knobs” (3).

The coding parameter is one of the knobs we must build into the concept, the other being territorialisation, a parameter measuring the degree to which the components of the assemblage have been subjected to a process of homogenisation, and the extent to which its defining boundaries have been delineated and made impermeable. (3)

Later DeLanda returns to this point, repeating the passage quoted above (18).

This is confusing. We normally think of a concept as identifying a range of phenomena; the phenomena are assumed to have characteristics that can be observed, hypothesized, and measured. So it seems peculiar to suppose that the forms of variation that may be found among the phenomena need to somehow be represented within the concept itself.

Consider an example — a nucleated human settlement (hamlet, village, market town, city, global city). These urban agglomerations are assemblages in DeLanda’s sense: they are composed out of the juxtaposition of human and artifactual practices that constitute and support the forms of activity that occur within the defined space. But DeLanda would say that settlements can have higher or lower levels of territorialization, and they can have higher or lower levels of coding; and the various combinations of these “parameters” lead to substantially different properties in the ensemble.

If we take this idea seriously, it implies that compositions (assemblages) sometimes undergo abrupt and important changes in their material properties at critical points for the value of a given variable or parameter.

DeLanda thinks that these ideas can be understood in terms of an analogy with the idea of a phase transition in physics:

Parameters are normally kept constant in a laboratory to study an object under repeatable circumstances, but they can also be allowed to vary, causing drastic changes in the phenomenon under study: while for many values of a parameter like temperature only a quantitative change will be produced, at critical points a body of water will spontaneously change qualitatively, abruptly transforming from a liquid to a solid, or from a liquid to a gas. By analogy, we can add parameters to concepts. Adding these control knobs to the concept of assemblage would allow us to eliminate their opposition to strata, with the result that strata and assemblages (in the original sense) would become phases, like the solid and fluid phases of matter. (19)

These ideas about “knobs”, parameters, and codes might be sorted out along these lines. Deleuze introduces two high-level variables along which social arrangements differ — the degree to which the social ensemble is “territorialized” and the degree to which it is “coded”. Ensembles with high territorialization have some characteristics in common; likewise ensembles with low coding; and so forth. Both factors admit of variable states; so we could represent a territorialization measurement as a value between 0 and 1, and likewise a coding measurement.

When we combine this view with DeLanda’s suggestion that social ensembles undergo “phase transitions,” we get the idea that there are critical points for both variables at which the characteristics of the ensemble change in some important and abrupt way.

Imagine a diagram with coding and territorialization as its two axes, and let W, X, Y, and Z represent the four extreme possibilities: “low coding, low territorialization”, “high coding, low territorialization”, “high coding, high territorialization”, and “low coding, high territorialization”. The suggestion from DeLanda’s treatment is that assemblages in these four extreme locations will have importantly different characteristics, much as the solid, liquid, gas, and plasma states of matter have different characteristics. (He asserts that assemblages in the “high-high” quadrant are “strata”, while ensembles at lower values of the two parameters are “assemblages”; 39.)
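The quadrant scheme can be stated precisely. In the sketch below (my own illustration; the critical value of 0.5 is an arbitrary choice, since DeLanda does not quantify the parameters), each ensemble is assigned to one of the four extreme positions according to its coding and territorialization values:

```python
def classify_assemblage(coding, territorialization, critical=0.5):
    """Place an ensemble in one of four quadrants of the
    (coding, territorialization) plane. Labels follow the text;
    the critical value 0.5 is an arbitrary illustrative choice."""
    high_c = coding >= critical
    high_t = territorialization >= critical
    if not high_c and not high_t:
        return "W"  # low coding, low territorialization
    if high_c and not high_t:
        return "X"  # high coding, low territorialization
    if high_c and high_t:
        return "Y"  # high coding, high territorialization ("stratum")
    return "Z"      # low coding, high territorialization

print(classify_assemblage(0.9, 0.8))  # "Y"
```

A sharp classification like this is of course exactly what the phase-transition analogy predicts: smooth variation in the parameters, abrupt change in kind at the critical values.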

A phase diagram for water makes the analogy vivid: it represents five material states (solid, liquid, compressible liquid, gas, and supercritical fluid), along with the critical values of pressure and temperature at which H2O shifts through a phase transition. (There is a nice discussion of critical points and phase transitions in Wikipedia (link).)

What is most confusing in the theory offered in Assemblage Theory is that DeLanda appears to want to incorporate the ideas of coding (C) and territorialization (T) into the notation itself, as a “knob” or a variable parameter. But this seems like the wrong way of proceeding. Better would be to conceive of the social entity as an ensemble; and the ensemble is postulated to have different properties as C and T increase. This extends the analogy with phase spaces that DeLanda seems to want to develop. Now we might hypothesize that as a market town decreases in territorialization and coding it moves from the upper right quadrant towards the lower left quadrant of the diagram; and (DeLanda seems to believe) there will be a critical point at which the properties of the ensemble are significantly different. (Again, he seems to say that the phase transition is from “assemblage” to “strata” for high values of C and T.)

I think this explication works as a way of interpreting DeLanda’s intentions in his complex assertions about the language of assemblage theory and the idea of a concept with knobs. Whether it is a view that finds empirical or historical confirmation is another matter. Is there any evidence that social ensembles undergo phase transitions as these two important variables increase? Or is the picture entirely metaphorical?

(Gottlob Frege changed logic by introducing a purely formal script intended to suffice to express any scientific or mathematical proposition. The concept of proof was intended to reduce to “derivability according to a specified set of formal operations from a set of axioms.” Here is a link to an interesting notebook in Rudolf Carnap’s hand of his participation in a seminar by Frege; link.)



DANIEL LITTLE'S PROFILE

I am a philosopher of social science with a strong interest in Asia. I have written books on social explanation, Marx, late imperial China, the philosophy of history, and the ethics of economic development. Topics having to do with racial justice in the United States have become increasingly important to me in recent years. All these topics involve the complexities of social life and social change. I have come to see that understanding social processes is in many ways more difficult than understanding the natural world. Take the traditional dichotomy between structure and agency as an example. It turns out that social actions and social structures are reciprocal and inseparable. As Marx believed, “people make their own histories, but not in circumstances of their own choosing.” So we cannot draw a sharp separation between social structure and social agency. I think philosophers need to interact seriously and extensively with working social researchers and theorists if they are to be able to help achieve a better understanding of the social world.