Post 1. Context is everything.
This is a brief description of what we did in the session and some observations on how people think they would respond to failure in the context of different risk management approaches.

Post 2. Is common sense more useful than the rule book?
This reviews the data we collected around how people use different approaches when they are making decisions about risk.

Post 3. Does service user involvement in decision making lead to better decisions?
This tested the technology we used and pushed our understanding to the limits.

There are, of course, huge caveats around the information presented here. It is very much work in progress, based upon an experiment we carried out in a shared learning seminar. We are grateful to everyone who took part for doing so willingly and allowing us to share the data (everything here is from people who ticked the box saying it was ok). This is very much ‘working out loud, and in the open’ for us. If we’ve got anything wrong, please let us know. If you could do it in a supportive way, that would be far more helpful to us than a public flogging. Thank you.

Context is important

This session was developed to try to share knowledge around well-managed risk taking. The Auditor General has been saying for some time that he wants to see public services taking well-managed risks. You can look at this video where he talks about the importance of trying new things and learning from failure if they don’t work.

There is, however, a gap between what the Auditor General has been saying and practice across public services. Nobody is suggesting taking unnecessary risks with services provided to vulnerable people or being reckless with public money, but there is probably some scope to move from the status quo.

Changing our approach

In the spirit of well-managed risk taking we decided to do something different with this event. Usually we would have arranged something where people share practice and knowledge that others could learn from, as presentations or workshops. Around this structure we would facilitate conversations and introductions where people can develop relationships to continue their peer-to-peer knowledge exchange.

Whilst this approach is effective, we decided to test an approach which was far more immersive and allowed people to think about situations and how they would respond to them. This scenario-type work has been used in other situations, but we wanted to extend it by using a process that allowed people to record their thoughts and opinions in a way that could be analysed and fed back to them rapidly. The idea was that they could see how their attitudes to risk and decision-making fit with those around them, and the context they are sitting in. This level of understanding might then support different behaviours and attitudes to well-managed risk taking.

How the approach works

Very briefly, we did the following:

Explained an approach to risk taking (Framework 1) to the group. The frameworks were adapted from existing approaches and chosen to sit at opposite ends of what you might be likely to see in public services.

Presented a scenario of a significant challenge facing public services.

Asked people to discuss the scenario, then individually record their responses to a series of questions about decision making, benefits/impact and attitudes to failure.

This process was repeated for three scenarios and then in the context of the second approach to risk management (Framework 2).

The approach is summarised in the graphic below.

We used SenseMaker as the tool for people to record their thoughts, analyse their responses, and provide some live feedback. We’ve been working with The Cynefin Centre at Bangor University to get a better understanding of how this approach might be useful for our work.

An example of what we asked people to do

In response to Frameworks 1 and 2 and each of the 3 scenarios, we posed people the following question: ‘Transparent reporting of any failure will…’

The options in responding were two extremes: ‘people get fired’ or ‘people get promoted’.

They were asked to move a marker along a sliding scale to the point which they thought reflected the position of the organisation, in response to the risk management framework they had been presented with.

What the data told us

We collected a total of 218 separate responses to the question.

The graphic below presents a roughly normal distribution between the two options, which is what you might expect.

When you analyse this information to look at how people responded in the context of the two different frameworks, things look different, with two distinct patterns forming. Graphic 2 shows responses in the context of Framework 1 closer towards the left hand side (people get fired), and the pale blue responses (Framework 2) closer to the right hand side (people get promoted).

Framework 1 was ‘Failure is Not an Option’: an approach that assumes all risks can be fully understood, assessed, categorised, documented and managed.

Framework 2 was ‘Safe to Fail’: an approach rooted in the Complex Domain of the Cynefin Framework, which proposes a number of small, time-limited, low-resource tests/pilots/experiments. Their objective is to probe what is happening and gain a better understanding before any decisions are made about what to do next.

Further analysis emphasised this split in the data. Graphic 3 illustrates the distribution in the context of Framework 1 (Failure is not an option), with the mean closer to the left hand side. Graphic 4 illustrates a distinct shift towards the right hand side and the ‘people get promoted’ label.
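For readers who like to see the shape of this kind of analysis, the comparison above can be sketched in a few lines. This is purely illustrative: it uses synthetic slider scores (0 = ‘people get fired’, 100 = ‘people get promoted’) rather than the real SenseMaker dataset, and the split of the 218 responses between frameworks is an assumption.

```python
import random
import statistics

random.seed(1)  # reproducible synthetic data

# Synthetic stand-in for the 218 slider responses; the real dataset
# is not reproduced here. Scores run 0 ("people get fired") to
# 100 ("people get promoted").
framework_1 = [random.gauss(35, 15) for _ in range(109)]  # 'Failure is Not an Option'
framework_2 = [random.gauss(65, 15) for _ in range(109)]  # 'Safe to Fail'

# Pooled, the responses look like one roughly normal distribution...
pooled = framework_1 + framework_2
print(f"Pooled mean:      {statistics.mean(pooled):.1f}")

# ...but splitting by framework reveals two distinct patterns.
print(f"Framework 1 mean: {statistics.mean(framework_1):.1f}")  # nearer 'fired'
print(f"Framework 2 mean: {statistics.mean(framework_2):.1f}")  # nearer 'promoted'
```

The point of the sketch is simply that a pooled distribution can hide a clear split that only appears once responses are grouped by the framing they were collected under.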

So does this tell us anything?

The data suggests that how something is described or framed will influence how people respond to reporting of failure.

In the context of Framework 1 (Failure is not an option) people are more likely to think that reporting of failure will get people fired.

In the context of Framework 2 (Safe to fail) people are more likely to think that reporting of failure will get people promoted.

This might not be surprising when you sit back and rationally read about it in a blog post. However, as one of the delegates commented: “This has big implications for how we make decisions on our committees and Public Service Boards. If we talk about decisions in the context of ‘failure is not an option’, people will be worried about the consequences, so will be less likely to be innovative and take risks. The language we choose to use and how we frame things is important.”

What’s next?

This post will be followed by two more that look at:

Post 2. Is common sense more useful than the rule book? This reviews the data we collected around how people use different approaches when they are making decisions about risk.

Post 3. Does service user involvement in decision making lead to better decisions? This tested the technology we used and pushed our understanding to the limits.

As mentioned earlier, this is an experiment for us and an example of us ‘working out loud, doing things in the open’. There is still a lot more we would like to do with this data. We are certain that we haven’t got things right and would appreciate any comments and feedback on what we have tried here. If anyone would like to have a look at the dataset and help expand our understanding, please get in touch, we would very much like to talk.

So, What’s the PONT?

Presenting people with rapid graphical/numeric feedback had a big impact. There was huge interest in the SenseMaker data and we could have spent much longer exploring what had been generated.

Recognition of the ‘points of interest’ seemed to be greater because people were looking at (and interpreting) their own data, rather than if we’d just told them / done it on their behalf.

People responded to the spirit of ‘this is a pilot’. They were happy to comment on the approach and suggest ways we could make it better, which is great and less of an emotional strain than having a ‘perfect offering’ criticised by strangers.