Moving us from the absence of harm to the presence of safety

I wrote last time about the importance of developing ‘capability’ and how the Health Foundation’s Framework for Measurement and Monitoring of Safety can help us with that. Making care safer involves making decisions. Improvement results from making more good decisions and fewer bad ones. That can happen either deliberately or by chance. Sustaining improvement is harder and requires us to understand which decisions were good and which were bad. To do that we need to understand some of the basic statistical concepts that allow us to correctly interpret variation.

A lack of understanding of variation is harmful to individuals, teams and organisations on a number of levels. In essence, if we don’t understand variation we don’t know whether we are getting better or worse. If we don’t know that, then we don’t know what to start or stop doing, what to do more of and what to do less of. We don’t know how and where to deploy finite resources. We risk making variation worse by changing the wrong things at the wrong time. We cause waste and harm by intervening when it would have been better to do nothing, or by not intervening when it would have been timely to do so. We create perverse incentives by rewarding people for the wrong things. We demoralise staff by blaming them for things beyond their control. We waste time trying to explain perceived trends when nothing has changed.

Understanding variation, then, can help us to work out when to step in and when to leave well alone (intelligent intervention), and to avoid the losses that come from misinterpreting it. By developing a mature approach to data we can ensure that we are better placed to respond correctly.

Statistical process control allows us to look at our safety critical processes over time and study the variation. Used correctly, it helps us to differentiate between ‘special cause’ variation - statistically significant signals - and ‘common cause’ variation, the normal ups and downs within the inherent capability of a given process. It can help us to work out if our processes are ‘stable’ and therefore predictable, or ‘unstable’ and an unreliable basis for prediction. Stable processes can be more or less capable, but provide a solid basis for improvement techniques and for predicting the future - an essential part of achieving ‘reliability’ and improved safety. Unstable processes are a poor basis for prediction, are inconsistent, and require a focus on eliminating external ‘special causes’ through root cause analysis before process ‘capability’ or ‘predictability’ can meaningfully be improved.
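To make this concrete, here is a minimal sketch of one common control chart, the XmR (individuals) chart, which derives three-sigma control limits from the average moving range between consecutive points. The data are purely illustrative (an invented series of weekly fall counts), not drawn from any real service, and the 2.66 constant is the standard XmR scaling factor.

```python
def xmr_limits(values):
    """Centre line and upper/lower control limits for an XmR chart."""
    mean = sum(values) / len(values)
    # Average moving range between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 converts the mean moving range into three-sigma limits
    # for individual values on an XmR chart.
    return mean, mean + 2.66 * mr_bar, mean - 2.66 * mr_bar

def special_cause_points(values):
    """Indices of points outside the control limits: special cause signals."""
    mean, ucl, lcl = xmr_limits(values)
    return [i for i, v in enumerate(values) if v > ucl or v < lcl]

# Illustrative weekly counts of a safety-critical event
weekly_falls = [7, 5, 6, 8, 5, 7, 6, 21, 6, 5]
print(xmr_limits(weekly_falls))        # centre line and limits
print(special_cause_points(weekly_falls))
```

In this invented series the spike of 21 breaches the upper limit and is flagged as a special cause signal worth investigating, while the other weeks sit within common cause limits and call for no individual explanation.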

Our goal in improvement is to eliminate special causes so that our processes behave predictably (consistency). Stable processes showing common cause variation can then be used not only as a basis for incremental improvement work, but also as early warning systems for the emergence of ‘special cause’. This in turn allows us to target early, appropriate intervention and analysis (intelligent intervention). Perhaps even more importantly, it helps to give us a clear idea of when to leave well alone.
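One way a stable process acts as an early warning system is through ‘run rules’: for example, a run of consecutive points all on the same side of the centre line suggests a shift even when no single point breaches the limits. A minimal sketch, with illustrative numbers (the eight-point threshold is a widely used convention, not a universal standard, and real charts would freeze limits from an agreed baseline period):

```python
def shift_signal(baseline, new_values, run_length=8):
    """Return the index in new_values where run_length consecutive points
    fall on the same side of the baseline centre line, else None."""
    centre = sum(baseline) / len(baseline)
    run, side = 0, 0
    for i, v in enumerate(new_values):
        s = (v > centre) - (v < centre)  # +1 above, -1 below, 0 on the line
        run = run + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            return i
    return None

# Illustrative data: a baseline period, then monitoring new points
baseline = [10, 9, 11, 10, 9, 10, 11, 9]
monitored = [10, 8, 9, 8, 9, 8, 9, 8, 9]
print(shift_signal(baseline, monitored))  # index where the run completes
```

Here the eight points below the baseline centre line complete a run, signalling a sustained shift - in safety terms, prompting a look at what changed rather than waiting for a dramatic outlier.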

The Framework for Measurement and Monitoring of Safety helps us to consider the key safety critical processes that we need to monitor as part of our real time early warning systems or dashboards. Key processes can be brought together to form a set of key information needed to maintain wisdom ‘before’ the event. More information about this can be sourced through our programme page on the Healthcare Improvement Scotland website.

Don Berwick, in his plenary at the recent IHI / BMJ International Forum on Quality and Safety in Healthcare, described variation and 'tampering' through the 'red bead game' and went on to discuss Deming's system of profound knowledge and Juran's trilogy. You can catch up on his talk here.

Of course none of this is worth a jot unless we ensure that the information we base our decisions on is accurate. In reality the mechanics of this information ‘cycle’ - from gathering, through processing, interpreting, identifying risks, actioning, sharing and following up - often present a significant logistical challenge. Gaps in that cycle (operational wherewithal) can significantly undermine our efforts to improve quality and safety. ‘Capability’ therefore involves far more than statistical analysis, and we must also consider, in the real world, how we put those building blocks for intelligent decision making in place. Without them we cannot learn. We will talk more about that ‘integration and learning’ and the information cycle next time.