Case Study: Using Logical Thinking Process Tools

In 2010, the finance department of a Fortune 500 company established a quality audit (QA) system for the company’s operations. Toward the end of 2011, the general feeling on the floor was that the QA system was effective and the quality of work in the department was improving. The metrics, however, were not improving. A simple run chart displaying the department’s quality score throughout 2011 is shown in Figure 1.

Figure 1: Quality Scores in 2011

This year-end analysis showed that there was a need to modify the QA system; the established system was no longer achieving effective results. Although the difference in percentage points may not appear significant, each 1 percent change in quality score per month was the equivalent of approximately $24,000 in lost revenue (roughly $288,000 per year).
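The revenue impact cited above can be sketched as a simple calculation. The $24,000-per-point-per-month figure comes from the article; the function name and structure are an illustration, not the company's actual model.

```python
# Hypothetical sketch of the revenue impact described in the text.
# The per-point monthly figure is from the article; everything else is ours.

MONTHLY_LOSS_PER_POINT = 24_000  # USD lost per 1% quality-score shortfall per month

def annual_lost_revenue(points_below_target: float) -> int:
    """Annualized lost revenue for a sustained quality-score shortfall."""
    return int(points_below_target * MONTHLY_LOSS_PER_POINT * 12)

print(annual_lost_revenue(1))  # 288000, matching the ~$288,000/year in the text
```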

This case study covers the transformation that was undertaken to realign the QA system to the changing business needs – enacting more proactive initiatives in preparation for post-recession growth.

Applying Logical Thinking Process Tools

Although in some ways this was a typical Lean Six Sigma project, team members used logical thinking process (LTP) tools, which are based on the theory of constraints (TOC) philosophy, to address the QA system. Three critical questions were asked during the project:

Why is a QA team needed? What needs to be achieved?

What are the current problems? What is not working right?

What needs to happen to resolve the current problems?

Let’s look at these questions in depth.

1. Why is a QA team needed? What needs to be achieved?

It is important to define the end when starting the project. Asking these questions enabled the team to define the business objective, the critical success factors (CSFs) and the necessary conditions (NCs).

The project sponsor and other stakeholders participated in a brainstorming session to:

Understand the business objective

Identify the CSFs and NCs

Understand the relationships among the objectives, CSFs and NCs

The end result of these steps was an intermediate objectives (IO) map. A simplified view of the map is provided below for reference.

Figure 2: Intermediate Objectives Map

2. What are the current problems? What is not working right?

In this step, the stakeholders held a brainwriting session: each stakeholder wrote down problems with the established system, then the group discussed those problems and unearthed others. These were documented in a problem list.

Once the list was consolidated, a facilitated group discussion began in which participants grouped the problems (as with an affinity diagram) and looked for links between them. The group quickly began identifying how one problem led to another, and how a particular problem was caused by a different problem.

Two core problems stood out.

Quality rating scale is counterintuitive: For most people, a quality score of 100 stands for an error-free product and process. The scale used prior to this project set 80 as the benchmark for effectiveness and reserved an additional 20 points for creativity (as a bonus). This meant that even when the score improved, operators intuitively felt they were still far behind a benchmark of 100.

Quality score is calculated at the category level: The checklist had 40 checkpoints grouped into six high-level categories for ease of reporting and tracking. The quality score was calculated at each category level, not at the individual checkpoint level. Thus, even while operators were improving their performance (reducing defects at the checkpoint level), a single remaining error within a category meant that category's score stayed the same. This is why the overall metrics showed no improvement while performance was improving at granular levels.
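The category-level scoring problem can be made concrete with a small sketch. The category names, checkpoint counts, and scoring formulas below are illustrative assumptions, not the company's actual checklist or calculation; the point is only to show why granular improvement is invisible at the category level.

```python
# Illustrative comparison (names and weighting are assumptions):
# category-level scoring, where one failed checkpoint fails its whole
# category, versus checkpoint-level scoring, where every pass counts.

def category_score(categories: dict[str, list[bool]]) -> float:
    """Each category is pass/fail: any failed checkpoint fails the category."""
    passed = sum(all(checks) for checks in categories.values())
    return 100 * passed / len(categories)

def checkpoint_score(categories: dict[str, list[bool]]) -> float:
    """Every individual checkpoint contributes to the score."""
    checks = [c for checks in categories.values() for c in checks]
    return 100 * sum(checks) / len(checks)

# One category with a single lingering defect among many fixed ones:
results = {
    "accuracy":   [True, True, True, True, True, True, False],  # 1 defect left
    "timeliness": [True] * 7,
}
print(category_score(results))    # 50.0 - one defect fails the whole category
print(checkpoint_score(results))  # ~92.9 - granular improvement is visible
```

Fixing six of seven defects in a category moves the checkpoint-level score but leaves the category-level score flat, which is exactly the "metrics not improving" symptom the team observed.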

The current reality tree (CRT) built from these problems indicates how they led the business into a negative reinforcing loop, in which supervisors and operators were unmotivated and the entire program started to lose ground. The only way to effect change was to resolve the root causes.

3. What needs to happen to resolve the current problems?

After completing the CRT, the path to improve the current system was obvious to the team. The team defined and listed the action items, and created an implementation plan. (Team members looked for negative consequences of any proposed actions – they found none.) A portion of the resulting implementation plan is shown in the table below.

Excerpt of Implementation Plan

Necessary Condition: Agreed-upon quality checklist
Core Problem: QA and operations do not agree on checklist definitions

Step 1: Define each checkpoint specifically so that there is no subjectivity. The checklist must align with operators' guidelines and standards.
Step 2: Get consensus on the checklist.
Step 3: Make the checklist available to all stakeholders at all times through the intranet and display boards.

The project was well received by all stakeholders upon completion, and the LTP tools were specifically cited as a critical success factor behind it.

Cost-benefit Analysis Results

The project effort was spread across two months and cost approximately $57,000. The benefits (the reduction in lost revenue) were projected to be more than $500,000 in the first year. A half-yearly analysis calculated the actual benefit in the first six months as $300,000 – on track to beat projections.

Key Learnings

An important lesson from this project is that a system that is effective today may not be effective tomorrow. All critical business systems and sub-systems should be continuously evaluated for their effectiveness and continued relevance to the business environment.

Comments

Kicab

IMHO the article doesn’t portray logical thinking. Start with the second and third sentences: “Toward the end of 2011, the general feeling on the floor was that the QA system was effective and the quality of work in the department was improving. The metrics, however, were not improving.”

The first logical thinking question should have been: is the metric that is plotted (there is only one shown, so it's not "metrics") measuring the same thing as the "general feeling" (how do you quantify that?) or the "quality of work"?

This is revealed by the fact that the article does not end with a graph of the original metric showing either a) improvement because of verifiable improvements, or b) no improvement because the feelings were just that and the quality of work was constant, not improving.

Or, the article could have ended with a graph of a new metric, because the original measure did not validly reflect what it was intended to measure. That graph would show how the new metric reflected verifiable improvements.