Organisations & communities in the knowledge age

Intervening in a complex system

Stephen Bounds — Fri, 21/01/2011 - 12:29

One of the most critical tasks of a knowledge manager is intervening effectively in the functioning of complex systems. Because the effects of an intervention in a complex environment cannot be predicted deterministically, the basic model established by Dave Snowden and others is termed safe-fail experimentation.

To perform such interventions effectively, there are a number of questions we need to ask when dealing with complex systems:

where is our observational boundary?

is the system open or closed? If open, how rapidly are system conditions changing?

is the system deterministic or adaptive? (Adaptive implies that intelligent agents within the system can actively provide feedback to modify system behavior.)

is the system atomic (in the sense of "unable to be split") or self-similar?

The answers to these questions have an impact on our approach to experimentation:

Observational boundary? The observational boundary is the limit of the system behavior that we are interested in. For example, the Australian government performs many interventions each year, but in most cases will confine its interest in the results to what happens within the borders of Australia. As we will see, this is potentially different from the "interventional boundary", which defines the limits of what is to be targeted by the intervention.

Open or closed? Most real-world complex systems are "open", that is, external factors can have an impact on the operations of the system. In many cases, open systems even have "fuzzy" boundaries with components that cannot be definitely included or excluded from the system.

When boundaries are fuzzy, we prefer to choose artificial boundaries that produce low rates of change in system conditions ("low turbulence") -- both in terms of the system components included and the external factors causing an impact. If turbulence in the system is too high, we will not be able to reliably correlate observed changes in system behavior with the results of experimentation.
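As a rough illustration of why high turbulence defeats correlation, here is a hypothetical simulation (the function name, numbers, and noise model are my assumptions, not from the article): the same fixed intervention effect is easy to recover against a calm background and much harder against a turbulent one.

```python
# Hypothetical sketch: estimating a fixed intervention effect under
# increasing "turbulence", modelled crudely as background noise that
# swamps the signal we are trying to observe.
import random

random.seed(42)

TRUE_EFFECT = 1.0  # the real (but unknown to the observer) effect size

def estimate_effect(turbulence, n=10_000):
    """Compare treated vs untreated observations; turbulence scales the
    unexplained background variation in system conditions."""
    control = [random.gauss(0.0, 5 * turbulence) for _ in range(n)]
    treated = [TRUE_EFFECT + random.gauss(0.0, 5 * turbulence) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

print(estimate_effect(0.1))   # low turbulence: estimate lands close to 1.0
print(estimate_effect(1.0))   # high turbulence: much noisier estimate
```

The point is not the particular numbers but the trade-off they illustrate: the noisier the background, the more observations (or the tighter the boundary) you need before an observed change can be attributed to the intervention.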

Deterministic or adaptive? A deterministic complex system is also known as a chaotic system. Chaotic systems follow principles of cause-and-effect, but are extremely sensitive to starting conditions, hampering our ability to make long-term predictions. Depending on the nature of our chaotic system, we have two main avenues of attack. Firstly, we can try to modify parameters of the system to resolve the chaotic behavior into more linear behavior. Secondly, we can expand the scope of our system to a macro view where chaotic effects do not overpower predictability. One way of doing this is to increase our time scale -- events that are chaotic over the short term can still show trends and linear patterns over the long term.
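To make the short-term/long-term distinction concrete, here is a small sketch using the logistic map -- my illustration, not the article's, but a standard example of a deterministic system that behaves chaotically:

```python
# The logistic map with r=4.0 is deterministic but chaotic: trajectories
# from nearly identical starting points diverge completely, yet the
# long-run average settles to a stable, predictable value.

def logistic(x, r=4.0):
    """One step of the logistic map; r=4.0 is in the chaotic regime."""
    return r * x * (1 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Two starting points differing by one part in a million...
a = trajectory(0.200000, 50)
b = trajectory(0.200001, 50)
print(abs(a[-1] - b[-1]))   # ...end up nowhere near each other after 50 steps

# ...yet widening the time scale recovers a stable macro-level statistic:
# the long-run average sits near 0.5 regardless of the starting point.
long_run = trajectory(0.2, 100_000)
print(sum(long_run) / len(long_run))
```

This is the second avenue of attack in miniature: individual steps are unpredictable, but a sufficiently long time scale exposes a reliable trend we can plan against.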

On the other hand, a complex adaptive system (CAS) can never be completely reduced to cause-and-effect behavior, because it contains intelligent agents. (One of the defining characteristics of intelligence is that it cannot always be predicted.) That said, any organisation with a definable boundary has a tendency towards self-preservation and proliferation, and many interesting consequences flow from that.

Atomic or self-similar? An atomic system is one that cannot be sub-divided while preserving the emergent behaviors of the system. With atomic systems, the observational boundary and interventional boundaries must be identical. On the other hand, for self-similar systems, the same characteristic behaviors can be observed at multiple levels (eg a branch or division of an organisation has much the same emergent team dynamics as the whole). Here, we have the ability to intervene in multiple areas at a sub-system level and generalise from the results.

Note that the line here is not black and white. Many complex systems can be split into sub-systems, with some common emergent behaviors still observable in the individual sub-systems. However, results from experimenting on these common sub-system behaviors may lead to unexpected emergent results when applied across all sub-systems and reconsidered at the macro level.

What does it all mean? One of the principles espoused by Dave Snowden is the idea of running multiple safe-fail experiments in parallel. I only partially agree, since the goal of controlled experimentation must always be to minimize confounding factors.

All of the above leads us to several important strategies that should be used:

(a) Always decide where to set the observational boundary and interventional boundary (or boundaries) before experimenting.
(b) Only intervene in systems or sub-systems with low turbulence.
(c) Only conduct one safe-fail experiment per interventional boundary (and the boundaries should not overlap).
(d) If you wish to generalise about results, for self-similar systems only intervene on behaviors that are common to all levels of the system (up to the observational boundary).
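Strategies (a) through (c) are mechanical enough to express as a pre-flight check on an experiment plan. The sketch below is hypothetical -- the names (Experiment, plan_is_valid) and the turbulence threshold are my own illustrative assumptions, not part of any established method:

```python
# A minimal sketch of the checklist above as code: boundaries declared
# up front, low turbulence only, one experiment per boundary, no overlap.
from dataclasses import dataclass

@dataclass
class Experiment:
    interventional_boundary: frozenset  # sub-systems targeted by this experiment
    turbulence: float                   # 0.0 (stable) .. 1.0 (highly turbulent)

def plan_is_valid(observational_boundary, experiments, max_turbulence=0.3):
    """Check strategies (a)-(c) for a set of parallel safe-fail experiments."""
    claimed = set()
    for exp in experiments:
        if not exp.interventional_boundary <= observational_boundary:
            return False   # (a) only intervene on what we have decided to observe
        if exp.turbulence > max_turbulence:
            return False   # (b) low-turbulence systems only
        if claimed & exp.interventional_boundary:
            return False   # (c) interventional boundaries must not overlap
        claimed |= exp.interventional_boundary
    return True

# Usage: two branch-level experiments in a self-similar organisation
obs = frozenset({"branch_a", "branch_b", "branch_c"})
ok = plan_is_valid(obs, [
    Experiment(frozenset({"branch_a"}), turbulence=0.1),
    Experiment(frozenset({"branch_b"}), turbulence=0.2),
])
print(ok)  # True
```

Strategy (d) resists this kind of mechanical check, since judging which behaviors are genuinely common across levels of a self-similar system remains a matter of observation and judgement.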