Project management has been one of the most productive and successful areas of system dynamics. And yet, when I recently looked at project management tools and advice, I couldn’t find a hint of SD’s dynamic insights into project management. Lists of reasons for project failure almost entirely neglect endogenous explanations.

I think there’s an insight and a puzzle here. The insight is that mismanaged dynamics and misperceptions of feedback aren’t the only way to screw up. There are exogenous and single-cause failure modes, like hiring people with the wrong skill set for a job, building something no one wants, or just failing to keep in touch with your team.

However, I’m pretty sure the dominant cause of execution failure is dynamic. Large projects are like sleeping monsters. They are full of positive feedback loops that, when triggered, cause increasing delays and overruns, perhaps explaining the heavy-tailed distribution of massive project failures. So the puzzle is: how could there be so little mention of, and so few tools for managing, the internal causes of project success?

Not coincidentally, this problem is one of the major reasons we built Ventity. We’re currently working on project models that are entirely data driven, so you can switch from building a house to building a power plant just by changing some tables of input. We think this will be the missing link between data-oriented tools that manage projects statically in exquisite detail and dynamic models that realistically describe projects, but have traditionally been hard to build, calibrate and reuse.

DARPA put out a request for a BS detector for science. I responded with a strategy for combining the results of multiple models (using Mohammad Jalali’s multivariate meta-analysis with some supporting infrastructure like data archiving) to establish whether new findings are consistent with an existing body of knowledge.

DARPA didn’t bite. I have no idea why, but could speculate from the RFC that they had in mind something more like a big data approach that would use text analysis to evaluate claims. Hopefully not, because a text-only approach will have limited power. Here’s why.

That was a conceptual model; this is a mathematical model. This is a Vensim replication of:

Marisa Eisenberg, Mary Samuels, and Joseph J. DiStefano III

Extensions, Validation, and Clinical Applications of a Feedback Control System Simulator of the Hypothalamo-Pituitary-Thyroid Axis

Background: We upgraded our recent feedback control system (FBCS) simulation model of human thyroid hormone (TH) regulation to include explicit representation of hypothalamic and pituitary dynamics, and updated TH distribution and elimination (D&E) parameters. This new model greatly expands the range of clinical and basic science scenarios explorable by computer simulation.

Methods: We quantified the model from pharmacokinetic (PK) and physiological human data and validated it comparatively against several independent clinical data sets. We then explored three contemporary clinical issues with the new model: …

Discrete time modeling is often convenient, occasionally right and frequently treacherous.

You often see models expressed in discrete time, like Samuelson’s multiplier-accelerator model:

Y(t) = C(t) + I(t) + G(t)
C(t) = a*Y(t-1)
I(t) = b*( C(t) - C(t-1) )

which reduces to Y(t) = a*(1+b)*Y(t-1) - a*b*Y(t-2) + G(t). The same notation is ubiquitous in statistics, economics, ABM and many other areas.

So, what’s the problem?

1. Most of the real world does not happen in discrete time. A few decisions, like electric power auctions, happen at regular intervals, but those are the exception. Most of the time we’re modeling on long time scales relative to underlying phenomena, and we have lots of heterogeneous agents or particles or whatever, with diverse delays and decision intervals.

2. Discrete time can be artificially unstable. A stable continuous system can be made unstable by simulating at too large a discrete interval. A discrete system may oscillate where its continuous equivalent would not. (There’s a minimal demonstration after this list.)

3. You can’t easily test for the effect of the time step on stability. Q: If your discrete time model is running with one Excel row per interval, how will you test an interval that’s 1/2 or 1/12 as big for comparison? A: You won’t. Even if it occurs to you to try, it would be too much of a pain.

4. The measurement interval isn’t necessarily the relevant dynamic time scale. Often the time step of a discrete model derives from the measurement interval in the data. There’s nothing magic about that interval, with respect to how the system actually works.

5. The notions of stocks and flows and system state are obscured. (See the Samuelson model above.) Lack of stock-flow consistency can lead to other problems, like failure to conserve physical quantities.

6. Units are ambiguous. This is a consequence of #5. When states and their rates of change appear on an equal footing in an equation, it’s hard to work out what’s what. Discrete models tend to be littered with implicit time constants and other hidden parameters.

7. Most logic isn’t discrete. When time is marching along merrily in discrete lockstep, it’s easy to get suckered into discrete thinking: “if the price of corn is lower than last year’s price of corn, buy hogs.” That might be a good model of one farmer, but it lacks nuance, and surely doesn’t represent the aggregate of diverse farmers. This is not a fault of discrete time per se, but the two often go hand in hand. (This is one of many flaws in the famous Levinthal & March model.)
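To make point 2 concrete, here’s a minimal Python sketch (my illustration, not from any published model): a first-order system that always decays smoothly in continuous time becomes oscillatory, then unstable, purely because of the discrete interval.

tau = 1.0  # time constant; the continuous solution always decays smoothly

def simulate(dt, t_end=10.0):
    x, path = 1.0, []
    for _ in range(int(round(t_end / dt))):
        x += dt * (-x / tau)  # Euler step: discrete update of dx/dt = -x/tau
        path.append(x)
    return path

for dt in [0.1, 1.5, 2.5]:
    path = simulate(dt)
    flips = sum(a * b < 0 for a, b in zip(path, path[1:]))
    print(f"dt={dt}: final x = {path[-1]:+.3f}, sign changes = {flips}")

# dt=0.1 decays smoothly (no sign changes); dt=1.5 oscillates while
# decaying; dt=2.5 oscillates with growing amplitude -- instability
# created entirely by the choice of time step.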

So, what if you find a skanky discrete time model in your analytic sock drawer? Fear not, you can convert it.

Consider the adstock model, representing the cumulative effects of advertising:

Ad Effect = f(Adstock)
Adstock(t) = Advertising(t) + k*Adstock(t-1)

Notice that k is related to the lifetime of advertising, but because it’s relative to the discrete interval, it’s misleadingly dimensionless. Also, the interval is fixed at 1 time unit, and can’t be changed without scaling k.

Also notice that the ad effect has an instantaneous component. Usually there’s some delay between ad exposure and action. That delay might be negligible in some cases, like in-app purchases, but it’s typically not negligible for in-store behavior.
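Before translating, it helps to make the hidden time constant in k explicit (my arithmetic, not part of the original formulation): if adstock decays exponentially with lifetime tau, then over a data interval dt the equivalent discrete coefficient is k = exp(-dt/tau), so tau = -dt/ln(k). For example, k = 0.8 on weekly data implies an ad lifetime of about 4.5 weeks.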

You can translate this into Vensim lingo literally by using a discrete delay:

Now the ad life has a dimensioned real-world interpretation and you can simulate with whatever time step you need, independent of the parameters (as long as it’s small enough).
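As a rough sketch of the equivalence (mine, in Python rather than Vensim, with hypothetical parameters): the discrete recursion and a continuous first-order stock track each other when k = exp(-data interval/ad life), and the continuous version can be run at any sufficiently small step.

import math

data_interval = 1.0                      # weeks between data points
ad_life = 4.0                            # mean lifetime of ad effect, weeks
k = math.exp(-data_interval / ad_life)   # implied discrete coefficient

advertising = [10, 0, 0, 5, 0, 0, 0, 0]  # hypothetical weekly ad spend

# 1) Discrete recursion: Adstock(t) = Advertising(t) + k*Adstock(t-1)
adstock, discrete = 0.0, []
for a in advertising:
    adstock = a + k * adstock
    discrete.append(adstock)

# 2) Continuous stock, d(Adstock)/dt = -Adstock/ad_life, Euler-integrated
#    at a finer step, with each week's advertising added as a pulse.
dt, stock, continuous = 0.125, 0.0, []
for a in advertising:
    for _ in range(round(data_interval / dt)):
        stock += dt * (-stock / ad_life)  # first-order decay
    stock += a                            # ad pulse at the data point
    continuous.append(stock)

for week, (d, c) in enumerate(zip(discrete, continuous)):
    print(f"week {week}: discrete {d:6.3f}  continuous {c:6.3f}")

# The two trajectories agree closely; shrinking dt makes them converge.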

There’s one fly in the ointment: the instantaneous ad effect I mentioned above. That happens when, for example, the data interval is weekly, and ads released have some effect within their week of release – the Monday sales flyer drives weekend sales, for example.

There are two solutions for this:

The “cheat” is to include a bit of the current flow of advertising in the effective adstock, via a “current week effect” parameter. This is a little tricky, because it locks you into the weekly time step. You can generalize that away at the cost of more complexity in the equations.

A more fundamental solution is to run the model at a finer time step than the data interval. This gives you a cleaner model, and you lose nothing with respect to calibration (in Vensim/Ventity at least).
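In equation form, the first option (the cheat) might look like this (my notation; the current week effect parameter c is hypothetical): Effective Adstock(t) = Adstock(t) + c*Advertising(t), with Ad Effect = f(Effective Adstock). Because c implicitly carries the week in its units, changing the time step means rescaling it.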

Occasionally, you’ll run into more than one delayed state on the right side of the equation, as with the inclusion of Y(t-1) and Y(t-2) in the Samuelson model (top). That generally signals either a delay with more complex structure (e.g., 2nd or higher order) or some other higher-order effect. You should usually be able to give a name and interpretation to these states (as with the construction of Y and C in the Samuelson model). If you can’t, don’t pull your hair out; it could be that the original is ill-formulated. Instead, think things through from scratch with stocks and flows in mind.

A Simulation-Based Approach to Understanding the Dynamics of Innovation Implementation

The history of management practice is filled with innovations that failed to live up to the promise suggested by their early success. A paradox currently facing organizational theory is that the failure of these innovations often cannot be attributed to an intrinsic lack of efficacy. To resolve this paradox, in this paper I study the process of innovation implementation. Working from existing theoretical frameworks, I synthesize a model that describes the process through which participants in an organization develop commitment to using a newly adopted innovation. I then translate that framework into a formal model and analyze it using computer simulation. The analysis suggests three new constructs—reversion, regeneration, and the motivation threshold—characterizing the dynamics of implementation. Taken together, the constructs provide an internally consistent theory of how seemingly rational decision rules can create the apparent paradox of innovations that generate early results but fail to produce sustained benefit.

This is another nice example of tipping points. In this case, an initiative must demonstrate enough early success to grow its support base. If it succeeds, word of mouth takes its commitment level to 100%. If not, the positive feedbacks run as vicious cycles, and the initiative fails.
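A toy version of this tipping point fits in a few lines (my cartoon, with invented parameters — not the paper’s actual formulation):

# Commitment grows when demonstrated results exceed a motivation
# threshold, and collapses otherwise.
def run(initial_commitment, efficacy=1.0, threshold=0.3,
        gain=2.0, dt=0.1, t_end=100.0):
    c = initial_commitment
    for _ in range(int(t_end / dt)):
        results = efficacy * c            # results require participation
        motivation = results - threshold  # net push on commitment
        c += dt * gain * c * (1 - c) * motivation
    return c

for c0 in [0.2, 0.29, 0.31, 0.5]:
    print(f"initial commitment {c0:.2f} -> final {run(c0):.2f}")

# Just below the threshold the vicious cycle runs commitment to ~0;
# just above it, word of mouth carries commitment to ~100%.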

When initiatives compete for scarce resources, this creates a success to the successful dynamic, in which an initiative that demonstrates early success attracts more support, grows commitment faster, and thereby demonstrates more success.

This version is in Ventity, in order to make it easier to handle multiple competing initiatives, with each as a discrete entity. One initialization dataset for the model creates initiatives at random intervals, with success contingent on the environment (other initiatives) prevailing at the time of launch:
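For example, a generator for that kind of initialization table might look like this (my sketch; the actual Ventity input format differs):

import random

# Launch initiatives at random (exponentially distributed) intervals.
random.seed(1)
t, rows = 0.0, []
while True:
    t += random.expovariate(1 / 10)  # mean of 10 time units between launches
    if t >= 100:
        break
    rows.append({"Initiative": f"I{len(rows) + 1}", "LaunchTime": round(t, 1)})

for row in rows:
    print(row)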

This archive contains two versions of the model: “Intervention2” is the first in the paper, with no resource competition. “Intervention5” is the second, with multiple competing initiatives.

The 2016 record in CO2 concentration and increment is exactly what you’d expect for a system driven by growing emissions.

Here’s the data. The CO2 concentration at Mauna Loa has increased steadily since records began in 1958. Superimposed on the trend is a seasonal oscillation, which you can remove with a moving average over a 12-month window (red):
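If you want to reproduce this from the published monthly series, a minimal pandas sketch looks like this (mine; the file name and column names are assumptions — NOAA publishes the monthly Mauna Loa data):

import pandas as pd

# Monthly Mauna Loa CO2; assumed columns: year, month, average.
df = pd.read_csv("co2_mm_mlo.csv", comment="#")

# Remove the seasonal cycle with a 12-month moving average (the red line).
df["smoothed"] = df["average"].rolling(12, center=True).mean()

# Year-on-year difference in monthly concentrations (discussed below).
df["yoy_increase"] = df["average"].diff(12)

print(df[["year", "month", "smoothed", "yoy_increase"]].tail())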

In a noiseless system driven by increasing emissions, you’d expect every year to be a concentration record, and that’s nearly true here. Almost 99% of 12-month intervals exceed all previous records.

If you look at the year-on-year difference in monthly concentrations, you can see that not only is the concentration rising, but the rate of increase is increasing as well:

This first difference is noisier, but consistently positive. As a natural consequence, you’d expect a typical point to be higher than the average of any preceding interval.

In other words, a record concentration coinciding with a record increase is not unusual, dynamically or statistically. Until emissions decline significantly, news outlets might as well post a standing item to this effect.

The CO2 concentration trajectory is, incidentally, closer to parabolic than to exponential. That’s because emissions have risen more or less linearly in recent decades:

CO2 emissions, GtC/yr

CO2 concentration (roughly) integrates emissions, so if emissions = c1*time, you expect concentration ≈ c0 + c2*time^2, with c2 = c1/2 scaled by the airborne fraction (integrating c1*time gives c1*time^2/2). The cause for concern here is that a peak in the rate of increase has occurred at a time when emissions have been flat for a few years, signaling that saturation of natural sinks may be to blame. I think it’s premature to draw that conclusion, given the level of noise in the system. But sooner or later our luck will run out, so reducing emissions is as important as ever.

No one buys a Tesla Model S because it’s cheaper than a regular car. But there’s currently a flurry of breathless tweets, rejoicing that a Tesla roof is cheaper than a regular roof. That’s dubious.

When I see $21.85 per square foot for anything associated with a house, “cheap” is not what comes to mind. That’s in the territory for luxury interior surfaces, not bulk materials like roofing. I’m reminded of the old saw in energy economics (I think from the EMF meetings in Aspen) that above 7000 feet, the concept of discount rates evaporates.

The hospital compiles a big dataset on patient demographics, health status, exposure to procedures, and infection outcomes. A vendor slurps this up and turns some algorithm loose on the data, seeking the risk factors associated with the infection. It might look like this:

… except that there might be 200 predictors, not six – more than you can handle by eyeballing scatter plots or control charts. Once you have a risk model, you know which patients to target for mitigation, and maybe also which associated factors to pursue further.
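For illustration, the vendor’s step might amount to something like this (a hypothetical sketch with synthetic data, not any vendor’s actual algorithm):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the hospital dataset: rows are patients,
# columns are predictors (age, device days, comorbidities, ...),
# y indicates whether an infection occurred.
rng = np.random.default_rng(0)
n_patients, n_predictors = 1000, 6
X = rng.normal(size=(n_patients, n_predictors))
true_effects = np.array([0.8, 0.0, 1.2, 0.0, 0.5, 0.0])  # invented
p = 1 / (1 + np.exp(-(X @ true_effects - 2.0)))
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]  # per-patient infection risk

print("fitted coefficients:", model.coef_.round(2))
print("five highest-risk patients:", np.argsort(risk)[-5:])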

However, this is only half the battle. Systems thinkers will recognize this model as a dead buffalo: a laundry list with unidirectional causality. The real situation is rich in feedback, including a lot of things that probably don’t get measured, and therefore don’t end up in the data for consideration by the algorithm. For example:

Infections aren’t just a random event for the patient; they happen for reasons that are larger than the patient. Even worse, there are positive feedbacks that can make prevention of infections, and errors more generally, hard to manage. For example, as the number of patients with infections rises, workload goes up, which creates time pressure and fatigue. That induces shortcuts and errors that create risk for patients, leading to more infections. Infections spread to other patients. Fatigued staff burn out and turn over faster, which dilutes the staff experience that might otherwise mitigate risk. (Experience, like many other dynamics, is not shown above.)
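The loop-gain point can be made with a few lines of arithmetic (my toy rendering of the cycle above, with invented parameters):

# Infections raise workload; workload raises error rates; errors raise
# infection risk. Below a critical loop gain the system settles; above
# it, the vicious cycle runs away.
def simulate(loop_gain, dt=0.1, t_end=50.0):
    infections = 1.0
    for _ in range(int(t_end / dt)):
        workload = 1.0 + infections           # census-driven workload
        errors = loop_gain * workload         # fatigue and shortcut errors
        new_infections = 0.5 + errors         # base risk plus error-driven
        recoveries = infections / 5.0         # mean stay ~5 days
        infections += dt * (new_infections - recoveries)
    return infections

for g in [0.05, 0.15, 0.25]:
    print(f"loop gain {g:.2f}: infections after 50 days = {simulate(g):6.1f}")

# With these numbers the loop settles for gains below 0.2 and explodes
# above it -- the algorithm sees patient risk factors, but the gain
# lives in the system.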

An algorithm that predicts risk in this context is certainly useful, because anything that reduces risk helps to diminish the gain of the vicious cycles. But it’s no longer so clear what to do with the patient assessments. Time spent on staff education and action for risk mitigation has to come from somewhere, and therefore might have unintended consequences that aren’t assessed by the algorithm. The algorithm is actually blind in two ways: it can’t respond to any input (like staff fatigue or skill) that isn’t in the data, and it probably isn’t statistically smart enough to deal with the separation of cause and effect in time and space that arises in a feedback system.

Deep learning systems like AlphaGo Zero might learn to deal with dynamics. But so far, high performance requires very large numbers of exemplars for reinforcement learning, and that’s never going to happen in a community hospital dataset. Then again, we humans aren’t too good at managing dynamic complexity either. But until the machines take over, we can build dynamic models to sort these problems out. By taking an endogenous point of view, we can put machine learning in context, refine our understanding of leverage points, and redesign systems for greater performance.