70% of change projects fail: Bollocks!

Once upon a time in a galaxy far, far away… In 1993, Professor Michael Hammer and consulting firm chairman James Champy published the book "Reengineering the Corporation", based on their research into Business Process Re-engineering (BPR) initiatives. BPR initiatives in the '80s and '90s meant very large organisational changes. The book contained success case studies of IBM, Ford Motor Company, Hallmark and Taco Bell. But what resonated with the business community was the following statement:

‘Sadly, we must report that despite the success stories described in previous chapters, many companies that begin reengineering don’t succeed at it…Our unscientific estimate is that as many as 50 per cent to 70 per cent of the organizations that undertake a reengineering effort do not achieve the dramatic results they intended.’ (Hammer and Champy, 1993, p. 200)

An unscientific estimate. No definitions of success. No investigation of the validity of expectations. 70% of BPR projects fail. Sexy stuff, people.

In 1995, Professor John Kotter published the article “Leading Change” in the Harvard Business Review. Rather than quote studies, he notes he has “observed” over 100 companies in the previous ten years, with varying degrees of success. He is circumspect about success and failure rates, noting the varying stages of change and the reasons for difficulty. Kotter’s 1995 article is often referenced as a source for the 70% statistic. The statistic is not in it. The eight-step framework is.

In 2000, researchers Michael Beer and Nitin Nohria published “Cracking the Code of Change” in the Harvard Business Review. The article is actually about their work on Theory E and Theory O of change. But the sentence that grabbed the attention of the consulting world was almost a throwaway line at the beginning:

‘The brutal fact is that about 70% of all change initiatives fail.’ (Beer and Nohria, 2000, p133).

There is nothing to support it: no mention of where this “fact” has come from, or how the figure came to be “brutal”. But it does set up the need for an alternative theory of change (eg Theory E and Theory O).

From an academic perspective, Mark Hughes published a fascinating challenge to the statistic in the Journal of Organizational Change Management in 2011. From his analysis, many of the subsequent published papers form a set of academic matryoshka dolls: examination of their proof of the 70% citation inevitably leads back to Hammer and Champy and to Beer and Nohria. The mind boggles at how many times this statistic has been used to justify the academic endeavour that followed. Indeed, he notes that Michael Hammer distanced himself from the original statement:

“Unfortunately, this simple descriptive observation has been widely misrepresented and transmogrified and distorted into a normative statement . . . There is no inherent success or failure rate for reengineering.” (Hammer and Stanton, 1995, p. 14, cited in Hughes, 2011).

These two sources (Hammer and Champy, and Beer and Nohria) made the curriculum reading lists of pretty much every undergrad and postgrad in the western world, and thus influenced a very large cohort of managers, consultants, project managers and change management practitioners.

The figure takes on a life of its own. In 2008, in “A Sense of Urgency”, Professor John Kotter “estimates” that more than 70% of needed change fails. His website states: “Thirty years of research by leadership guru Dr John Kotter have proven that 70% of all major change efforts in organizations fail”. Yet I struggle to find any peer-reviewed publications by Kotter on the research behind this. I fully understand, though, that someone who researches in the area may be reluctant to challenge this and ask to see the research in order to evaluate its design. Some sacred cows you don’t touch…

From an academic perspective you have a choice at this point. Do you position yourself against famous professors with best-selling books and challenge the “unscientific” statement and “estimates”? To challenge Beer and Nohria on the “brutal fact” is to distract from what is a pretty useful theory and contribution to change (Theory E and Theory O). Maybe you need to wait twenty years to do so. It may be more prudent for career progression to stand on the shoulders of giants and build incremental “knowledge” on 70% failure rates.

So then large consulting firms and IT vendors get in on the act. Somewhere along the line, some pretty good studies on project implementation and benefits get further twisted into a persistent myth that 70% of all change projects fail. Statistics like that can be very useful in selling services and products. They create fear. If you don’t use our services you may be in the 70%… and that would be bad.

Industry heavyweights and thought leaders continue to popularise the statistic, with Daryl Conner using it as a big stick to beat up change practitioners and admonish them to do better (why, after 30 years, are 70% of our change projects still failing? We must be culpable). Ron Ashkenas recently used it in the HBR again. This means it must be true.

But it’s not. And here are six reasons why:

1. The definition of a change project is questionable.

A lot of the research studies that reference the 70% failure rate talk about the success of project implementation. Project implementation success is often very different to change management success. Yes, any project by virtue of its purpose relates to change – it is created to change, deploy or improve something. But not all projects are “change projects”. To assume so is conflation.

A change project needs to have a change management methodology employed and change management resourcing. The studies referenced as proof of the 70% statistic do not control for the presence of a change manager or a change methodology. If neither of these was present, I would argue you couldn’t make any statement about the success or failure of change projects.

IBM’s 2008 study Making Change Work identified that the success of the 20% of companies who represent “change masters” could be attributed to four factors:

Realistic awareness and understanding from leadership of the complexity of change

A systematic approach to change (eg a methodology)

Dedicated change managers and change resourcing

Allocating the right investment for change.

In my view, if these four factors aren’t present, I’m not sure the case can be included in a study about change management success.

The notion of “control” in a research design is critical. Finally, earlier this year (and 20 years on from the original Hammer and Champy statement), researchers Barends, Janssen, ten Have and ten Have published a marvellous meta-analysis of 563 studies of change in the Journal of Applied Behavioral Science. Only 2% used a case-control design, and only 13% used control groups.

2. The definition of “success” is questionable.

Looking at some of the research quoted, success is defined as: did it meet expectations, were benefits realised, was the project delivered in full, on time and on budget?

In my experience, change success is defined as:

People are using the new technology, following the new policies and adopting new behaviours

The business outcomes have changed for the better

You can go further (and should go further) and track metrics at various stages of the change.

Change success is rarely measured in absolutes. Things change during the course of an initiative – sometimes dramatically. Often business sponsors have an unrealistic expectation of what success looks like and when it will happen. It is based on personal KPI reporting, not what change really looks like in organisations. If you have change resourcing at a senior level, you can reset those expectations. If you don’t have someone who knows change at a senior level influencing these expectations of success, you have a senior executive filling out a survey saying that the [change] project failed (an absolute).

3. Success is measured at the wrong time.

There is recognition that successful change takes time – moving up the adoption curve can be a lengthy process, and one that depends on the type of change and the type of organisation. “Was the project delivered in full and on time?” is simply not a “change success” metric. We know from practice that culture change can take many years to embed. As change practitioners, we need to interrogate expectations of the timeliness of benefits realisation. Benefits realisation is more than “in full, on time and on budget”. For more on this, have a look at Conner Partners’ paper on Installation or Realization; it’s a great read.

4. The units of analysis are not the same.

The multiple studies reference different types of companies, industries and types of change. Without a proper meta-analysis you can’t claim that this is a consistent finding. You are comparing apples with oranges, tossing in a grape or two and saying the fruit salad is a worrying story. It’s handy that they look similar, but the units of analysis are not comparable. Changing a culture has very different success factors, time frames and methodology to a large-scale system implementation. I take my hat off to Martin Smith for his early efforts at a meta-analysis with “Success Rates of Different Types of Change” in Performance Improvement – this is more like what we need. It is telling, though, that his concluding comments steer away from a definitive statement about what success looks like during organisational change, and instead make suggestions to readers on how to use these studies to understand their own change efforts. The reasoning of this article, combined with the meta-analytic rigour of Barends et al.’s paper, starts to tell us a lot more about organisational change success.

5. I don’t think I am [that] special, nor my peers.

If this statistic were to be true, I would have 70% of my change initiatives shelved as failures. So would my peers. We don’t. We’re pretty good. I’ll grant you that. But I don’t think we are the outliers here.

Change is difficult, don’t get me wrong. It is even more difficult in organisations where sponsors and leaders don’t understand the need for change management. No doubt about that. But is the field of change management fraught with persistent failure? Absolutely not. There is such a wide variety in the types, scale and scope of change that to create a mean is, well, mean-ingless.

The next time you meet someone with the title of change manager, strike up a conversation. Ask them how many of their initiatives have failed. It is highly unlikely they will say anywhere near 70%. Then ask them what would have made many of their projects more successful, more quickly. Then you’ll have some useful insight.

6. A Career Limiting Admission for a CEO

Seriously. You want me to believe that 70% of the world’s CEOs have led failed change efforts? Really? Is the talent pool for CEOs that large? I’m not sure they would still be CEOs if that were the case. Even if the surveys are anonymous, somewhere 70% of company boards are looking at poor performance from their CEOs. I struggle with that.

A call to action

Practitioners:

When someone uses this statistic, call them on why they think it is true. Have they read an influencer, or delved into the empirical research? How was success defined? Was the presence of change management support accounted for? Be informed and responsible in your use of the statistic. Please don’t use it to suggest that change management is too difficult or risky to do. That’s just plain wrong.

While I don’t agree with Daryl Conner’s view that change practitioners have culpability for the 70% failure statistic, I do think his 23 questions in Physician Heal Thyself are excellent. Create a community event where you focus on these questions – collectively lift the quality of change management practice. We cannot and should not shy away from improving change success rates.

Researchers:

Mark Hughes has made an excellent start with his paper “Do 70% of all Organisational Change Efforts Really Fail?”, which unpacks why it is a myth. But let’s get to the real answer – there is much, much more to do. There is ontological opportunity in understanding the social construction of management myths. Eric Abrahamson’s “Managerial Fads and Fashions: The Diffusion and Rejection of Innovations” (1991) will be a useful starting point. There will be more in the critical management literature.

With regard to epistemology, Barends et al.’s 2013 paper is impressive. One of their implications for further research is to conduct more replication studies, so there is an argument for an epistemological contribution by doing more of these. Replication studies are high risk from a publishing perspective, though; this may be better suited to an honours student (the Australian academic pathway). It’s a tough one. Given the lack of quality in OCM research when it comes to success rates, I would argue for a series of research studies using case-control designs, each focusing on a specific type of change. So find 30 cases of culture change – control for methodology and resourcing, and include time-series data collection. Then do it with restructures, and then with systems implementations. Then we build a body of knowledge.

But above all, regardless of the design, be clear on face validity. Start with qualitative research on practising change managers. Talk to them and their sponsors about how they define change success. Build your surveys using those definitions and constructs. Then look at the reliability. Use that research on different industries and different types of change. Control for what differs. And then please make sure it gets into the HBR! (Yes, I know…) Or share the working papers with the MBA students. Get it out there.

Vendors:

Do your studies on the relative difference that change management makes. When you use fear as a motivator, you run the risk of freaking customers out so that they run away from the whole concept or become paralysed (fight, flight or freeze). And nothing gets changed at all. Better to maintain the status quo, because 70% of change projects fail anyway…

Post-script. Timing, hey? Just before hitting publish, I came across Jason Little’s post on the same topic, from a week ago. It’s a great read. Jason shares more about what the studies tell you, but there are very similar themes to this post – with less snark and frustration ;-) To my delight, Heather Stagl has also taken it on earlier. And Gail Severini has initiated a terrific discussion in the OCP group, with some great insights coming out; she also pointed me to Barends et al.’s and Smith’s papers. This post is improved by her comments and review of the original draft.

So it looks like I’m in good company – would it be too optimistic to say we are at a tipping point?

[1] For the non-UK/Australian readers, “bollocks” means “nonsense”. Most times. Think codswallop ;-) There is also precedent on this blog: Can’t manage change: Bollocks!

32 Comments

Great piece of work Jennifer: I must admit I am guilty of being one of the people who have used this 70% statistic without digging deeper than one reference for validity. The funny thing is that I recollect ignoring a “nagging doubt” at the time. Sometimes one’s first instinct can be correct 🙂

Thanks Scott! Yeah — for my part, it is only with experience and time in the profession that I could get to the point where I needed to go digging, because it just did not reflect my personal experience. When I was an academic, I didn’t question it. Let’s face it — we need reasons to justify our research, and this one was pretty handy. The research does need to be done — it’s just not for this reason! As Oprah says, “Follow your instincts. That’s where true wisdom manifests itself” ;-)

No end state (the original formulation for any change “ending”) is EVER 100% successful. There are degrees of accomplishment and it is those smaller measures we must strive for as practitioners and leaders.
Using fear (which is always why that bogus stat is bandied about) to somehow address fear (of change) is, to me, one nasty double negative. (Trying to build your name and revenue from that borders on unethical in my mind.)
Maybe sacred cow goodbye?

Thank you for doing this. There are frequently studies produced showing that roughly 2/3 of all IT projects fail, and from what I’ve seen I easily believe it. But I’ve never worked on a process improvement project, large or small, that failed, and I always wondered where this figure came from. From now on, I’ll cite your work instead.

Going after Kotter, IBM, HBR, Towers Watson and the like is bold. And your points are very valid. More research, done via sound longitudinal measures, is much needed.

But it’s the best we have. And it’s interesting that across survey targets (CEOs, Heads of IT, project managers, change managers, brand managers, COOs, etc) and across survey companies (including academics), numbers seem to converge on or near the 70% failure rate. Does questioning the number, as you urge people to do, actually help us understand what does and doesn’t work?

Seems like it’s pointing fingers at imperfect data. Data should never be used as absolutes – the 70% doesn’t matter a bit. But as directionally indicative of a problem – that we don’t manage change successfully in large organizations – I think it stands.

Hi Kelly – sincere apologies for the delay in publishing your comment; it got stuck in the wrong folder. I really appreciate you stopping by, and your thoughts. I suspect we both end up at the same point — I do believe there is opportunity to improve change practice in large (and small) organisations. I don’t need a 70% stat to indicate that. However, I think the convergence is a result of designing studies to reinforce the previous ones, rather than designing research with integrity, which might yield better understanding. Your comment has me wondering – what percentage would we be comfortable with? 49%? 30%? I think questioning the number does open up a conversation: what does success actually look like? When have we introduced change that went well? For what reason?

Hi, thank you for great article and posts. Clearly the ‘70%’ is an urban/management myth and should be left as such and not badged as science by people who should know better. We seem to have a growing fixation with seeking to find a scientific, measurable base for all behaviours, plans and decisions instead of trusting to intuition based on past experience. An example of this is, for me at least, the maddening habit of stuffing the prefix ‘neuro’ before a description of things we have known to be common and ‘true’ before we were ever able to stick electrodes on people’s heads. In the context of change it’s surely a truism that for successful, embedded change it’s probably preferable to ‘take people with you’. I don’t need bogus science to convince me of this – but then intuition doesn’t sell books.

The 70% is not an urban myth but it does depend on what is MEANT by failure, so context is key. Use of the bare statement without context is irresponsible.

What is clear from goodness only knows how much proper research over 30-odd years is that project (programme… portfolio…) “failure rates” are way higher than for other business processes. If a production line achieved wastage above 5%, heads would roll at all levels, and there would be explanations in annual reports to shareholders etc.

So even if 20% of project investment was wasted, you might expect the same. And yet, CEOs’ heads do not roll. Neither do those of executives or even… anybody!

Scientifically derived evidence aside, having worked in multiple industries in both commercial and public sectors, just anecdotal evidence from people like me should shake the halls of the C suite.

But, no, they don’t. Except when a project is publicly disastrous. Then the C suite sits up and takes notice. For a while.

This does not mean that CEOs, execs or managers are incompetent. It is more a cultural thing. Most people running organisations have operational experience. Projects, especially those of change, are a different experience. Even the mindset is different. For success you have to create a landscape in which projects can thrive. If such a landscape is not there… they won’t, and they don’t.

I would suggest that reliance on the purity of specific statistics is foolish. Instead: [a] look at the overall picture the numerous results are painting – it is VERY bad compared to operational performance – and [b] open your eyes, look at what is actually happening and believe the truth of what you see.

Jennifer, I seem to have arrived a bit late to read your article, somehow through another article from Paul Thoresen on LinkedIn. I find it excellent! I really appreciate all the references you made. I am a fact-based person and I always wondered where this 70% could have been drawn from. Of course, I do not support conclusions based on “unscientific” facts (maybe today we could also say “alternative facts”). From my experience, implementation fails due to the lack of a change manager, the lack of a methodology, the lack of real facts to measure, and big resistance to change, with strong hierarchies demotivating the frontline. But these symptoms can already be present at the beginning of the initiative, so the implementation of change already has a big breach. I also agree that reaching any change in some companies, in a really stagnant and chaotic situation, means a long-term process, where good support and follow-up are needed from all roles in the organisation. In many cases, the perseverance of the parties is not strong enough and all the work finally breaks into pieces.

What a great article! It kills me to find this so many years after original publication. Great information, with great links to additional resources. I am currently in a DBA program, looking for a research question. This might be a good start, and is certainly a catalyst.

If you want proof of broad cross-industry success for one class of “change projects,” you might look at the article CMI published in 2012 on the ROI of quality programmes (the implementation of which meet most of the criteria mentioned here). Patrick Woodman of CMI is working to get a link re-initiated, but he will provide you a copy if you email him at Patrick.Woodman@managers.org.uk, or if you contact me, I have copies.

Hi Rip – thanks for the link to Patrick’s work, I’ll follow up. I would thoroughly encourage you to address a component of this in your DBA. It’s probably too big overall – but hopefully there are enough threads in this to pursue!

Jenn, excellent article. I would like to add some updated information for the people who, like me, are reading this article today. I found that Towers Watson’s Change and Communication ROI study for 2013-2014 (https://www.towerswatson.com/en/Insights/IC-Types/Survey-Research-Results/2013/12/2013-2014-change-and-communication-roi-study) increased the outperforming KPI to 3.5 from 2.5. However, in the pdf document at the same web address, page 8, it is stated that: “Most change projects fail to meet their objectives. Only 55% of change projects are initially successful, and only one in four are successful in the long run.” That would make 75% of change projects unsuccessful in the long term. I cannot find the facts behind this figure here either.

Thanks Nuria – it’s so entrenched, isn’t it? The T&W Change and Communication ROI studies are great – they ask the right questions. It’s a shame they don’t make the research more transparent. But what they do call out is temporality – eg installation is more successful than benefits realisation. I’m not sure if I have updated this post to include a later post on how to measure success that addresses this. Thanks for commenting!

I have seen many a change effort that was far less successful than it could or should have been. Most CEOs and those in power will juggle the data to claim some sort of victory, but few are as successful in the long term as vendors, consultants and CEOs would like us to think. I agree it would be nice to quote something other than a SWAG, but I also can’t say they were wrong. I think you are correct in saying that it depends how we define a change effort. I’ve seen projects, like new IT systems, that never finish on time or on budget. There are a lot of those.

Hmmm … 70% sounds awfully close to 80-20 Pareto … Maybe it is 70 because 80 would be too obvious? It is conceptually easy to stick, therefore. And fear is a well-recognized motivator … hence the broad appeal? It may also coincide with …. ‘blame your predecessor’, delay between implementation and results, ‘not-invented-here’ syndrome – all of which may contribute to the perceived high failure rate? Just a thought …

So it seems this article has been around for a few years. I am glad to have found it now. I have always been suspicious of numbers that don’t seem to attach to a sense of reality. Principled research demands more of us who seek to influence others. However, titles with numbers, e.g. “Seven Habits … (now Eight, I believe); “21 Irrefutable Laws of Leadership;” “8 Steps of Organizational Change” have all been around for many years and are used indiscriminately. I always tell my students to be judicious consumers of the popular business press, and to question the sources that are offered up (whenever they are), and to do their own research to make up their mind.

Great article… My experience with managing change in organisations is that it is a discovery process. Quite often the goals shift organically. What emerges is sometimes different from the original intent. That could also be one of the reasons.