Not going to the “gemba” to talk to the people involved and to observe first hand

Solving the wrong problem (the real problem isn’t what you think it is)

Stating the problem as the lack of something (pre-supposing a countermeasure), such as “problem = we don’t have enough MRI machines” so the obvious countermeasure is “buy another MRI”

Thinking of “solutions” instead of “countermeasures”

Trying to fix others before fixing problems in your own circle of influence

Not confirming that countermeasures really led to improvements

Not confirming that the improvements are sustained over time

Not reflecting on the learning that occurred during problem solving (focusing only on the results)

Which of these do you see as the most common problem-solving problems in your organization? Are these problems more prevalent in certain industries? If you comment, please share what industry you are in…

Yes, it is called “cause and effect.” The “five why” questioning tool is integrated with cause and effect in the concept of a causal chain. Other similar concepts are “IF-THEN”, etc. Basic logic, engineering, systems thinking kind of stuff.

I think Rob’s question was about the idea that a “root cause” must include the patient perspective somehow. I haven’t heard that used as a rule or guideline for RCA and I’m not sure if that would work. But, we need to not lose sight of the patient needs, quality, and value.

As a 9a you could also have, “Not removing the possibility of going back to the old ways.”

Mark Eaton talks about “throwing away the old shoes”. He says that when you buy new shoes they are stiff and uncomfortable and if you keep the old shoes the temptation is to still wear them. But if you throw away the old shoes you will break in the new shoes all the quicker.

Of course with making improvements you should only throw away the old shoes once you have verified that the new shoes fit, which is part of your number 8.

If we classify your ten points as to which part of the PDSA cycle they are in you get:

Plan – 1) Jumping to solutions (before understanding the problem or its root cause)
Plan – 2) Jumping to the root cause (without thorough enough analysis and observation)
Plan – 3) Not going to the “gemba” to talk to the people involved and to observe first hand
Plan – 4) Solving the wrong problem (the real problem isn’t what you think it is)
Plan – 5) Stating the problem as the lack of something (pre-supposing a countermeasure), such as “problem = we don’t have enough MRI machines” so the obvious countermeasure is “buy another MRI”
Plan – 6) Thinking of “solutions” instead of “countermeasures”
Plan – 7) Trying to fix others before fixing problems in your own circle of influence
Study – 8) Not confirming that countermeasures really led to improvements
Act – 9) Not confirming that the improvements are sustained over time
Meta Study (the PDSA of your PDSA) – 10) Not reflecting on the learning that occurred during problem solving (focusing only on the results)

This points up that there are no faults with the Do (run the experiment) part of the PDSA cycle. So may I suggest the following:

Do – 11) Not running the experiment as described in the plan (typically stopping too early).
Do – 12) Not running an experiment at all – this could also be a variant of (1).
Do – 13) Not running the experiment as an experiment (e.g. a subset of work / limited time / one area out of many), but changing everything at once. (Though perhaps this is a defect of Plan, since that is where you design the experiment…)

Rob – Yes, I sort of grouped them intentionally in that sequence. Thanks for adding the “Do” problems. When I get this down to 9 or 10 or 11 for an article, I’ll include something in the Do category, because people do make mistakes there…

Good list, Mark. In addition to your list, one of my favorites is “a solution in search of a problem”; in other words, having a pet project or pet experiment that someone has always wanted to do, and looking for an excuse to do it.

Nice list. #1 & #2 are definitely ones that apply to the HealthIT space. When we hear of a user having an issue with an aspect of our software we start throwing out ideas on how to solve it…often by writing even more code. I’m guilty of this.

The better approach is to step back and truly understand the problem before crafting a solution or preventative measure. Maybe the user doesn’t really need that button…maybe something needs to be removed instead.

I think #1 is really important, and #5 is something I see a lot and have to coach others on. With all that said, #8 is a huge issue. People put in countermeasures and move on without verifying whether the countermeasure worked and, if so, whether it worked to the level they predicted. In fact, did they even have a hypothesis they were testing?

What about not recognizing problems to begin with? Legendary is the habit of managers to shut down people who bring problems to them, to ignore problems, or to insist that obvious problems do not exist.

Great point. Identifying a problem is the first step in problem solving. Unfortunately, many people are stuck in the denial stage as you suggest – and wonder out loud why there are recurring problems in their organization.

I use the first 3S of 5S to do genba observation exercises – this gives me a great opportunity to practice coaching the basics of problem solving and gives people the opportunity to see that identifying problems is an ongoing exercise.

I’ll give you an additional list of 10 now ;-)
1. not knowing why the problem is even important
2. assuming the way work works (related to #3 and resulting in several others on your list)
3. no clear future state other than in terms of results (but not in terms of principles and conditions to be respected)
4. lack of focus on the underlying system of work and its problem mechanisms (like it is done in P-M analysis)
5. seeing conditional and accelerating factors as the root cause
6. no verification on the genba of the logic used in the “5-why”
7. seeing a corrective action as a countermeasure
8. forgetting to standardize an effective countermeasure and integrate it into the system of work
9. only looking at the problem of occurrence, and forgetting about the problem of non-detection
10. not differentiating between problems stemming from the absence of a standard, possible non-adherence to a standard, or actually adhering to the standard (and still having the problem).

I like Rob Worth’s additions on the “Do” phase: many don’t understand it is first and foremost about an experiment (and trying), instead of assuming upfront that you know for sure your countermeasure will be effective.

It relates to the classical discussion about the victim of a burglary shooting the burglar: is the burglary in itself the root cause? Some will say “otherwise the shooting wouldn’t have happened.” But on the other hand, the burglary in itself is not enough reason to get shot either, is it? So what is the root cause now?

Another analogy could be the catalyst in a chemical reaction. Is the catalyst a root cause of the reaction, or just an accelerator or condition?

It is also well articulated in an approach used in investigating safety incidents, called Tripod Beta (used at Shell, among others). In safety you often look at the “last prevention barrier” and at events instead of conditions.

Furthermore, I think P-M analysis as used in TPM is also a rigorous approach to finding causes through analysis of the physical phenomenon and the mechanism that created it.

Lastly, I was “raised” in a Lean culture in which we often used reverse logic: asking whether, if you eliminated the assumed root cause, the observed problem really would not occur anymore. If not, we continued searching.

Interesting list, but I think the lack of problem recognition belongs on the list – there’s even a Japanese saying about celebrating mistakes (because then the root cause can be found and corrected). It’s hard to problem solve when the problem is undetected, overlooked, or ignored.

I love all the comments, but I still find one common mistake missing: assuming there is only one root cause. Many problems are complex and are the result of multiple contributing factors. In aerospace, this is quite often the case, as I would assume would also hold true in the medical field.

There is also the situation where sometimes it is not possible to prevent a problem with one countermeasure. Thinking “one bullet will kill it” is also a common mistake. This is part of some of the previous comments, but it normally falls under the mentality that “we did something, now we’re done”.

The #1 problem, I think, is believing that you are already able to solve the problem. “I know what to do!” is itself the problem. If people were less certain – or if heroics were not rewarded – they would approach problems more cautiously, measure results, and learn from each step. The person who runs into a burning building and snuffs the flames is a hero, but a lucky one. The person who steps back, determines what caused the fire, what materials are burning, the best material to extinguish the flames, and how to deploy the firefighters will save more lives.

The Shingo model begins with humility. People must believe in their own ignorance, understand that any solution they invoke is only a best guess at a moment in time, and remain keenly aware that each situation is, in some way, its own unique instance.

#5 on Mark’s list is a big one. To expand further: what happens a lot in companies that are just starting with lean is that problems get identified as the “lack of” a specific lean tool, such as standard work or 5S. People get enamored with the TOOLS and think they will come in and save the day, never really fully understanding what problem they are trying to solve or what waste they’re trying to remove (#2 and #3).

Countermeasure implies an action that may have some positive impact but might be temporary… or might have some side effect or it creates/unearths a different problem.

So I think it’s more a matter of the mindset than the specific word. But hearing and using the word countermeasure has been helpful for me and my clients. They might still say “solution” but they are talking more about the countermeasure mindset.

In an organization, each person is bound by status, position, and the ego that goes with them. So I understand that people’s failures in problem resolution come down to the factors below:

-The time bound for resolution of the problem… urgency matters in most cases.
-The level of management handling it (top / middle / bottom).
-The intuition or understanding of the person responsible for handling the problem (his/her past experience with the same).
-The gravity of the problem as it is linked to corporate policy and objectives.
-The organizational policy that binds the particular area of the problem.
-The management systems that control the flow of information or communication within the organization.
-Last but not least, the attitude of the person handling the problem.

One problem I don’t see discussed so far is the lack of understanding of the depth of the problem. Problems are easy to find and the solutions are generally simple to find using the right tool-set. However, I have seen many instances of solving one problem only to create three others because the entire value stream isn’t considered.

Another problem is finding a solution that is scalable. Oftentimes one problem exists across multiple lines (in manufacturing) and/or services, and crafting a solution that can be applied effectively in all of the affected areas is where you make the real money.

I think the 1st one is the most prevalent one. A lot of people – myself included – make haste to put a band-aid solution over the symptoms of a problem, without dwelling on what the root cause of the problem is, and then addressing that.