Gail is co-author of The Institute Way. With a career spanning over 30 years of strategic planning and performance management consulting with corporate, nonprofit, and government organizations, she enjoys speaking, training, and writing, sharing her experience with others. She currently is the Chief Strategy Officer and VP Americas for Corporater.

The Four Things I Wish I’d Known - Part 3

By: Gail S. Perry

KPIs Are Essential (But Know Your Audience)

I wanted to sink into the conference room floor. I was so embarrassed and was convinced that I must have just asked the stupidest question in the world. To this day, I cringe at the memory of standing there, in front of the entire leadership team at a prestigious, world-renowned, non-profit organization, while the entire team stared blankly at me. I was well into the second decade of my consulting career and was accustomed to taking on major projects. This time, I’d been asked to design a dashboard of metrics for this organization. I’d gathered the heads of all functions and departments to explain the purpose of the project and their roles as stakeholders, and then, poised to write responses on the whiteboard, I asked the question: “Can each of you tell me what three to six key metrics you use (or would like to use) to manage your part of the organization?” My thinking was that this would give us a quick and rough outline of what metrics mattered most to the people who ran the organization – these same people who were now staring at me. Finally, one gentleman spoke up and said, “I believe this is what we hired you to do – don’t you know what metrics we should use?”

Fast forward another decade, and I have a lot more KPI experience under my belt: I’ve worked with dozens of major organizations on performance management and have implemented KPIs for my own use in managing an organization. In hindsight, I now realize that the managers who were staring at me should have known their key processes and value drivers and been able to articulate what they were trying to accomplish and how to measure it.

I have since learned that there are two kinds of managers and leaders: those who operate at a tactical level and those who see the full picture. The tactical managers keep very busy managing what Covey calls the whirlwind of daily operations. Some focus only on the day-to-day actions that are required of them. Some are great at people skills. Some are more entrepreneurial and implement innovations, initiatives, and projects they feel are needed as they sense and respond to risks and signals at the tactical level. But after all these years, I see how these sorts of managers consistently fail to achieve meaningful long-term results. They perform well on daily operations, but few can achieve sustainable improvements in those operations. And that is exactly what happened to every one of the managers in that conference room. They did their daily jobs well, but they couldn’t produce long-term results for the organization. Within five years, all were replaced.

The other type of managers/leaders see the full picture of key processes and value drivers and they leverage KPIs to monitor and manage performance. They know KPIs (metrics) will enable them to better manage overall performance as well as to assess the impact of any innovations, initiatives, and projects.

I’ve since learned that I didn’t ask a stupid question. I simply was asking it of the wrong sort of manager/leaders. I’ve asked that same question of the other type of managers and they rattle off metrics faster than I can write them down.

I have learned to assess the audience first and be sensitive to the fact that not everyone knows about KPIs or how they enable managers with insights and power for improving performance. Some individuals may need some basic education about the topic, they may have a long change management journey to buy into the value and use of KPIs, and they most likely will need coaching help to figure out their key processes and value drivers, as well as how to determine appropriate KPIs to use.

It’s not rocket science. To some of us, it is simply common sense. But not everyone is wired this way. We are all born with different natural tendencies so I’ve tried to learn to keep that in mind. And I no longer sink into the floor when someone stares blankly at me. I simply start asking more questions until we find common ground and then work forward from there.

Read Part 2 of The Four Things I Wish I’d Known here. Read Part 4 here.

A World Without Measures

By: Terry Sterling

Welcome to a miserably hot, sweltering day at Yankee Stadium with the temperature sitting at 95° F in the shade and 90% humidity. It’s the bottom of the ninth and the Yankees are down by three runs. A collective sense of expectation can be felt throughout the stadium as Didi Gregorius, who has 10 home runs already this season, steps back into the batter’s box with bases loaded and a full count. Boston’s ace leftie, Chris Sale, with a 4-1 record and a 2.39 ERA going into the game, goes into his windup. The pitch is a 99-mph split-finger fastball at the knees on the inside corner of the plate. Gregorius swings, connects and the ball travels 420 feet clearing the center field wall by 12 feet; a grand slam and the Yankees win their 28th game of the season!

Rewind
Welcome to another day at the empty lot where an unspecified number of people have shown up to play baseball. It feels hot, but since no one measures temperature, no one is sure if it is any hotter than usual. The game is loose – the bases and fence are randomly placed, and the game continues until the players decide they are done, as no one tracks innings or measures time. No one knows the score; no one counts strikes or balls; no one tracks how many runs or hits are made on a team or individual basis, much less the type or speed of the pitch being delivered. No talent is required to play because no one keeps track of how anyone performs; baseball statistics don’t exist. Alas, there is no excitement and no tension; no pressure to improve. No one loses or wins…pure UTOPIA!

Measurements play a relevant part in our daily world, in both our personal and professional lives. H. James Harrington summed it up like this: “Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.”

Everyone uses measurements every day and usually doesn’t even pause to give them consideration. We all check the weather to see how warm or cold it is. We use measurements to determine our budgets, how much money we need for vacations, and how well our children are performing in school. We calculate how many minutes we need to add to our workout routine to allow us to eat that extra piece of cake, how much we can afford to pay for things, and so much more. Face it, life in today’s world doesn’t exist without measures!

What confounds us and causes confusion, missed opportunities and misguided, unproductive efforts is our inability and/or lack of experience in determining what measurements are key. How do we determine how effective we are in achieving our desired goals and objectives, whether in our personal life or at work? How do we measure success and how do we know when we have achieved it?

The answer centers around the concept of developing Key Performance Indicators (KPIs). The challenge lies in determining which measurements are truly “key” to the success of our performance and that of our organizations.

If you want to learn more about KPIs and how to develop useful and meaningful measures for your organization, visit: http://kpi.org

Why "World Class" Performance Isn’t Measurable

By: David Wilsey

Let’s say our organization needs to buy a fleet of vehicles and we have two procurement teams. We tell team 1 that we want quiet, blue, four-door, fuel-efficient cars. We tell team 2 that we want world-class, high-quality, great-value, high-performing cars. Then we give both teams a few weeks to find their vehicles. Guess which team will be able to produce measurable results?

Team 1 will have the easier time, as it is clearer what is meant by the criteria provided. Team 2 will struggle because their criteria are too ambiguous. Without further clarification, “world-class” could be interpreted to mean a hot rod sports car, a luxury sedan, or even a nice SUV. And if the team cannot agree on the specifically desired result, how can it measure success?

This example demonstrates an important principle of good measure design. Before you can design a measure, you first must agree on what result you are trying to achieve. And not all results are created equal. Results written in abstract language are less measurable and harder to implement than those written in concrete language.

Abstract language refers to concepts or vague ideals. Examples of abstract words or phrases include sustainable, innovative, reliable, leadership, quality, effective, leverage, efficient, resilient, optimized, or responsive. Strategic plans are often littered with this type of language, as we aim to deliver best practices, thought leadership or world-class performance. These “weasel words”, as they are often called, are notoriously hard to measure without first translating into concrete terms.
Concrete language is sensory-specific, meaning it describes things you can see, hear, smell, taste, or feel. Because they are observable, concrete results are measurable. Team 1 will have no problem determining the percentage of cars procured that meet their specifications. Concrete results are also more memorable and easier to implement.

So if you are struggling to design measures for your organization, your first step should be to clarify what result you are trying to achieve, in concrete terms.

Types of KPIs: The Logic Model and Beyond

By: David Wilsey

As part of the KPI Basics series we are developing for the launch of the KPI.org website, I thought I would introduce the different types of key performance indicators (KPIs). As I describe in the accompanying video, I like to use a framework called the Logic Model to describe the first four types.

The Logic Model is a framework that helps differentiate what we produce from what we can only influence. It also helps separate elements that are more operational from those that are more strategic in nature. For every key process, we spend resources like time, money, raw materials, and other Inputs. Every process also has measurements that can be tied to the process itself. The Outputs of a process are what it produces. Ultimately, though, we want our work to create an impact, and Outcomes capture that impact.

Let’s look at some examples of these types of measurements in real life. If I am a coffee maker, my Input measurements might focus on the coffee, the water, or my time invested. My Process measures could have anything to do with the process of making coffee, from the efficiency to the procedural consistency. The outputs of my process would be the coffee itself. I could have a variety of measures around the quality of my coffee output. Finally, my outcome measures would be related to things I can only influence, such as whether my audience enjoys or buys the coffee. There is certainly more value in measuring impact than in measuring operations. If my customer enjoys the coffee, I am doing something right. But you really do need a mix of both to truly understand performance.
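The four Logic Model categories above can be sketched as a simple tagging scheme for a measure family. This is only an illustrative sketch; the measure names and the `Kpi`/`KpiType` types are hypothetical examples for the coffee scenario, not part of any actual KPI tooling:

```python
from dataclasses import dataclass
from enum import Enum


class KpiType(Enum):
    """The four Logic Model measure categories."""
    INPUT = "input"        # resources spent: time, money, raw materials
    PROCESS = "process"    # efficiency and consistency of the work itself
    OUTPUT = "output"      # what the process directly produces
    OUTCOME = "outcome"    # impact we can only influence, not produce


@dataclass
class Kpi:
    name: str
    kpi_type: KpiType


# Hypothetical measures for the coffee-maker example
measures = [
    Kpi("Cost of beans per batch", KpiType.INPUT),
    Kpi("Brewing time per batch", KpiType.PROCESS),
    Kpi("Cups of coffee produced", KpiType.OUTPUT),
    Kpi("Percentage of customers who return", KpiType.OUTCOME),
]


def by_type(family, kpi_type):
    """Filter a measure family down to one Logic Model category."""
    return [m.name for m in family if m.kpi_type is kpi_type]
```

Tagging each measure this way makes it easy to check that a scorecard contains the mix of operational (input/process) and strategic (output/outcome) measures the article recommends.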

To fully understand all of the elements of strategy execution, I can then add a few other broad categories of measures to my story. Project measures monitor the progress of our improvement initiatives and projects and can be designed to improve operations or strategic impact. These track things like scope, resources, deliverables or project risk. In my coffee example, I might have a new branding campaign to sell my coffee.

Employee measures tell us if employees are performing well or have the right skills and capabilities needed. I might measure my employees’ skills in making coffee, for instance.

Finally, risk measures tell us if there has been an important change in a risk factor that could have a significant impact on our organization. For example, I might have a risk indicator that tells me if global coffee bean availability becomes a problem.

The information that these different types of measures provide can be used to inform decision making. Using a family of measures like this can broadly inform your entire strategy.

What I Learned About KPIs from My Six-Year-Old

By David Wilsey

I arrived to pick up my daughter on the last day of art camp just in time for program evaluations. Since we at the Balanced Scorecard Institute (BSI) use evaluation data for course improvement, I was intrigued to watch a room full of six- to nine-year-olds randomly fill in bubbles and then quickly improve their scores when the teacher noted that if any of the scores were less than three they’d have to write an explanation.

In the car on the way home, I asked my daughter why she rated the beautiful facilities only a 3 out of 5. She said, “well, it didn’t look like a porta-potty. And it didn’t look like a palace.” She also said she scored the snack low because she didn’t like the fish crackers and wished they’d had more pretzels. As I giggled at the thought of some poor City program planner or instructional designer trying to make course redesign decisions based on the data, I reflected on the basic principles that we try to follow that would have helped the city avoid some of the mistakes they had made.

The first is to know your customer. Obviously, giving small children a subjective course evaluation standardized for adults was ill advised. Better would have been to ask the students about their experience using their language: did they have fun? Which activities were their favorite? Which did they not like as much?

Further, the children aren’t really the customer in this scenario. Since it is the parents that are selecting (and paying for) the after-school education for their children, their perspective should have been the focus of the survey. Were they satisfied with the course curriculum? The price? The scheduling? Would they recommend the course to others?

Another important principle is to make sure that your measures provide objective evidence of improvement of a desired performance result. My daughter’s teacher used descriptive scenarios (porta-potty versus palace) to help the young children understand the scoring scale, but those descriptions heavily influenced the results. Plus a child’s focus on pretzels versus crackers misses the mark in terms of the likely desired performance result.

Similarly, it is important not to get fooled by false precision. Between some participants superficially filling in bubbles and others changing their answers because they don’t want to do any extra work, the city is simply not collecting data that is verifiable enough to be meaningful.

These might seem like silly mistakes, but they are common problems. We have had education clients that wanted to measure the satisfaction of key stakeholders (politicians and unions) while ignoring their actual customers (parents and students). We see training departments that measure whether their participants enjoyed the class, but never ask if their companies are seeing any application of the learning. And we see companies making important decisions based on trends they are only imagining due to overly precise metrics and poor analysis practices.

Even the evaluations for BSI certification programs require an explanation for an answer of 3 or less. I wonder how many of our students ever gave us a 4 because they didn’t want to write an answer. I have also seen evaluations go south simply because of someone’s individual food tastes.

At least I can take solace in the fact that no one ever compared our facilities to a porta-potty.

How Did I Get an MBA Without Learning This?

By: David Wilsey

Most MBA programs pride themselves on being the “practical” degree that will best prepare their students for any number of management roles. And I have to admit that I can point to that degree as a true turning point in my career. But it wasn’t until I became a Balanced Scorecard Professional (BSP) that I learned several principles that I have found to be key to being a good manager and leader.

Help your team articulate a shared vision
Many managers and leaders think that the key to success is to have a clear vision. But vision that is poorly articulated (or not at all) is just a dream. And simply dictating the vision to employees usually doesn’t work either. Change doesn’t happen because “I said so” or by assigning tasks without any context. Employees engage when they understand what we are trying to accomplish and why. Shared vision and change management happen through dialog, facilitation, and the development of a logical business case.

Connect the dots between what employees are working on and desired outcomes

A good supervisor makes sure that employees are completing their tasks. A good leader makes sure that employees are working on and completing tasks that move the organization toward a shared vision of the future. BSPs have been taught to articulate the difference between mission, vision, and strategy. They know how to organize their energy, measurements, and initiatives around a set of coherent strategic objectives. They know that many people are visual learners and so they use a strategy map to communicate how the dots connect. They know how to align department objectives with high level strategy and communicate to employees where they fit.

Measure results (not just actions)
Most managers know to measure project milestones as indicators of success, and unfortunately many strategic planners use this basic principle for KPI development. They define a handful of goals (e.g. Improve Brand Awareness), list all of the projects needed to reach those goals (e.g. website redesign), and then measure the completion of those projects as a measure of success (e.g. percentage of website redesign completed). Good leaders measure results. A redesigned website is nice, but I should be much more interested in whether or not it led to improved brand awareness.

Develop strategy before KPIs
The best KPIs in the world won’t help if they are designed to measure a half-baked strategy. The good news is that you don’t have to be a Steve Jobs-type visionary to develop an intuitive strategy: formally assess your strategic situation and identify a path forward using common methods like SWOT, PESTEL, Customer Value Proposition, or Blue Ocean Strategy.

There are other such principles, such as how to identify drivers of future performance using Perspectives, how to use strategy to prioritize, how to set and reach reasonable performance targets, and many more. If you can think of any others, please add them in the comments section below.

The Ultimate KPI Cheat Sheet

By: David Wilsey

We’ve received a lot of interest in our new KPI Certification Program. In fact, one woman said she couldn’t wait until the first scheduled program offering. She also wanted to know if we had a handy list of the most important principles – she wanted a cheat sheet! So in the interest of tiding her (and others) over, below I have compiled a few of the most important KPI tips and tricks. There are many more of course, so if you think I’ve missed anything, please add them in the comments section below.

Strategy comes first!
A training student told me his organization is struggling to implement measures for brand equity, customer engagement, and a few others because they believed the measures didn’t really apply to their company. I asked him why they were implementing those measures if they didn’t seem to apply, and he said they had found them in a book. They had no strategy or goals of any sort, and yet somehow thought they had a measurement problem.

KPIs found in a book of measures don’t necessarily mean anything in relation to your strategy. If you don’t have a strategy and/or can’t articulate what you are trying to accomplish, it is too early for KPIs.

KPI Development is a Process
I am embarrassed to admit that the first time I facilitated the development of performance measures with a client, I stood in front of a blank flip chart and asked them to brainstorm potential measures. It was my first consulting engagement as a junior associate and the project lead had stepped out to take an emergency phone call. Even though I had a basic understanding of what good KPIs looked like, I couldn’t help the client come up with anything other than project milestones (“complete the web redesign by August”), improvement initiatives (“we need to redesign the CRM Process”), or vague ideals (“customer loyalty”). What I didn’t understand at the time is that you need to use a deliberate process for developing KPIs, based on the intended results within your strategy. And like any other process, KPI development requires continuous improvement discipline and focus to get better.

Articulate Intended Results Using Concrete, Sensory-Specific Language
Strategy teams have a habit of writing strategy in vague, abstract ideals. As you pivot from strategy to measurement, it is critical that you articulate what this strategy actually looks like using concrete language that you could see, hear, taste, touch or smell. A vaguely written strategic objective like Improve the Customer Experience might get translated into checkout is fast, or facilities are safe and clean. Improve Association Member Engagement might get translated into a result of members volunteer for extracurricular activities. I’ve seen strategy teams shift from 100% agreement on vague ideals to diametric opposition on potential intended results, indicating that their consensus around strategy was actually an illusion. Use simple language a fifth-grader could understand to describe the result you are seeking. If you spend your time honing this intended result, the most useful performance measures almost jump out at you.

It’s not about the Dashboard!
Dashboard software is great when it is used to support a well-designed strategic management system. Unfortunately, many people are more interested in buying a flashy new tool than they are in understanding how they are performing (a topic I’ve talked about before). KPIs are not about a dashboard. KPIs are about articulating what you are trying to accomplish and then monitoring your progress towards those goals. A dashboard is the supporting tool, and too much emphasis on technology often distracts us from the point entirely.

It’s not about the KPIs!
Speaking of people missing the point, we have many clients who think this process begins and ends with the KPIs themselves. Unfortunately, some of these folks are simply trying to meet a reporting requirement or prepare for a single important meeting. This type of approach completely misses the power of KPI development, which is that KPIs provide evidence to inform strategic decisions and enable continuous improvement.

Howard Rohm is the Co-Founder and President of the Balanced Scorecard Institute. Howard is an author, performance management trainer and consultant, technologist, and keynote speaker with over 40 years' experience.

Gaming the System at the VA

By: Howard Rohm

Imagine you take your car to the car dealer to get serviced. Before you give your car to the service manager you see the following performance statistics posted on the wall:

Average time to wait for an appointment after requesting one—27 days

Number of people who requested an appointment but didn’t get one—46,000

Not too reassuring, is it? Would you leave your car or look somewhere else?

Some Veterans Administration facilities have a performance history like this. According to a recent review of the VA requested by President Obama, the agency is in deep trouble—average wait time for an appointment is 27 days and 46,000 veterans never got an appointment after requesting one. Some veterans died while waiting for appointments, although it’s not clear if the delays in medical attention contributed to the deaths.

At some VA facilities, performance measurement data were misreported to make executives’ performance appear better than it was. Fraudulent performance reporting was used to help justify executive performance bonuses. (A department audit reported that three out of four facilities had at least one instance of false wait-time data and that in some facilities two sets of books were being maintained.)

This type of behavior is called “gaming the system”. It’s a consequence of a culture overly focused on the wrong things (wait times) and a measurement system that emphasizes process performance over outcome performance. We shouldn’t be too surprised by the VA experience. When the wrong things are measured and incentivized, the wrong behaviors almost always result.

Focusing on the wrong measures and missing or minimizing the right ones created a climate of misreporting and deceit at some VA facilities, leading some executives to receive credit and bonuses for reported good performance while the opposite was apparently true. Almost $300 million was paid out by the VA in 2013 for performance bonuses to employees, including nearly 300 senior leaders. (Maybe some of these executives should give their bonuses back to the VA for poor performance!) We’ll leave for another discussion the bigger question—what is systemically wrong at the VA that encourages keeping two sets of books on performance?

Some critical questions come to mind. Where does customer satisfaction (veterans and their families are the customers) fit into the performance reporting and incentive equation? Shouldn’t satisfaction with medical service be heavily weighted in determining executive bonuses? If performance and reward are based mostly on process measures—like wait time—and wait time is being misreported, shouldn’t one assume that outcomes like effective medical care would suffer and that cheating to gain bonuses could occur?

How can an organization choose the “right” measures? Start with the end in mind (desired results and accomplishments) and work backwards: first through the processes that lead to those outcomes, and then to the resources required to produce the outputs that yield them. Make sure the desired results are expressed in unambiguous language. Then test the developed measures to make sure you’re not measuring what doesn’t matter, or worse, measuring the wrong things and incentivizing the wrong behaviors. Whether you are a hospital, a car dealership, or any other business, government or nonprofit, the same principles apply for developing good performance measures.

The unintended consequences of doing measurement badly are, in the case of the VA, potentially life threatening. Can your organization afford to do performance measurement badly, or not at all?

You can learn more about developing measures that matter in our book, The Institute Way: Simplify Strategic Planning and Management with the Balanced Scorecard. You can order the book on our website or on Amazon.

4 Reasons Business Intelligence Systems are Like an (Unused) Gym Membership

By: David Wilsey

My business intelligence (BI) and analytics software salesman friend said something interesting to me the other day over lunch. He said, “I don't sell software, I sell gym memberships. When someone joins a gym they are not really buying the membership. They are buying the dream of improved health and a better physique. Their intention is to work out every day and fulfill that dream, despite the fact that few people ever actually follow up. Selling BI software is the same way. I'm not selling the software; I’m selling the dream of improved insight and competitive advantage.”

The unspoken implication was that few people ever get significant benefit from their software system, a conclusion I have also observed over my years in strategic performance management.

There are many common reasons that your strategic performance management software system might be getting less use than the gym membership you bought last January. Below are the top 4 that I’ve seen as well as some tips for avoiding them.

Reason 1: You bought into the hype but not the skills
I overheard a CEO recently saying that he needed to buy into the big data craze. It was clear that this person had no idea what big data or predictive analytics meant, but he definitely needed to buy some. Many people seem to think if they just buy some software, within weeks a “number cruncher” will magically come down from a mountain with answers to all of their problems. That is like thinking that if I buy a shovel, a garden will magically appear in my back yard. Performance management and statistical analysis skills are critical to creating value in this field.

Reason 2: You keep the results a secret
The first question some people ask when considering a performance system is, “how do I keep everyone out of my data?” Security around private customer, employee, and certain financial information is an absolute must, but a surprising amount of strategic organizational performance information can be shared with leaders and managers. Leaders need information to make decisions, and limiting access can communicate that strategy management is something to be left to only a select few. Analyzing data is only the first step. The dialog around why the results occurred and what should happen next is just as critical.

Reason 3: You only use out-of-the-box performance report design
The standard templates provided by the software companies are almost always designed to make the software sell well, as opposed to informing YOUR strategic decision making. Good performance reports communicate three things clearly: 1) How is OUR organization currently performing? 2) Why are WE getting the results we are getting? 3) What are WE doing to improve our results?

Reason 4: You count and report on everything that can be counted
Just because the vendor promises that the tool can handle the volume doesn’t mean that this is a good idea. Strategic performance management is about focusing on the most critical things first. I would recommend selecting a handful of critical performance gaps and focusing your data collection, analysis, and improvement efforts on those. Teach everyone in your organization how to do this effectively before you expand to other areas.

There are many more common mistakes, but these four are top of mind for me. Please share other mistakes you’ve seen in the comments section below.

"Fight" of the Bumblebee

By: David Wilsey

Have you heard the common legend that scientists have proven that bumblebees, in terms of aerodynamics, can’t fly? This myth arose roughly eighty years ago, when an aerodynamicist made the claim based on the assumption that a bee’s wings form a smooth plane. The media reported it before the aerodynamicist actually looked at the wing under a microscope and found that the assumption was incorrect. While the scientist and the media issued retractions, the legend lives on.

Unfortunately, in the management world, decisions are made every day based on “legends” rather than on real evidence. At a manufacturing company I once worked for, it was a well-known “fact” that it was more profitable to discount prices to increase volume in a particular market. Even after a team of business managers proved discounting was a money loser, certain sales managers continued to vigorously advocate for the discount strategy for years. I like to refer to any ongoing argument like this as the "Fight" of the Bumblebee. This fight is the most difficult when the bumblebee argument is emotionally compelling (they’re not supposed to be able to fly!) and the truth is difficult to convey (bumblebees’ wings encounter dynamic stall in every oscillation cycle, whatever that means). Everyone loves a discount and can see pallets of product going out the door. Not everyone understands some of the indirect nuances that contribute to profit.
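A quick worked example shows why the discount "legend" so often fails. The numbers below are illustrative (not from the company in the story), and the function name is my own; the arithmetic itself is standard contribution-margin break-even analysis:

```python
def breakeven_volume_multiplier(price, unit_cost, discount):
    """How many times the original volume must be sold, after a price
    discount, just to keep total profit flat.

    Solves (discounted margin) * (new volume) = (original margin) * (old volume)
    for the ratio new volume / old volume.
    """
    original_margin = price - unit_cost
    discounted_margin = price * (1 - discount) - unit_cost
    if discounted_margin <= 0:
        raise ValueError("discount wipes out the entire unit margin")
    return original_margin / discounted_margin


# Illustrative numbers: a $100 product with $70 unit cost (30% margin).
# A 10% discount cuts the unit margin from $30 to $20, so volume must
# grow 50% just to break even -- before any profit improvement at all.
multiplier = breakeven_volume_multiplier(100, 70, 0.10)
```

This is exactly the kind of "indirect nuance" the article mentions: the pallets going out the door are visible, while the shrinking margin per pallet is not.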

Winning the fight of the bumblebee is dependent on making sure that you are interpreting, visualizing, and reporting performance information in a meaningful way. People have to be trained to appreciate the difference between gut instinct and data-driven decision making. Once they see analysis done well a couple of times, they will start asking for it.

The key to interpreting a measurement is comparison. And the trick is to display the information in a way that effectively answers the question, “Compared to what?” Visualizing performance over time identifies trends that show the direction and development of the data and provides context for the underlying story relative to strategy. The simplest and most effective way I’ve seen for consistently visualizing data is with a Smart Chart (or XmR chart), a tool showing the natural variation in performance data.
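For readers who want to try this, the natural process limits of an XmR chart can be computed directly with the standard control-chart constants (limits at the mean plus or minus 2.66 times the average moving range). A minimal sketch; the function name is my own:

```python
def xmr_limits(values):
    """Compute XmR (individuals and moving range) chart limits.

    Uses the standard XmR constants: the natural process limits for the
    individual values are mean +/- 2.66 * (average moving range), and the
    upper range limit for the moving ranges is 3.268 * (average moving range).
    """
    if len(values) < 2:
        raise ValueError("need at least two data points")
    mean = sum(values) / len(values)
    # Moving ranges: absolute differences between successive values
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return {
        "center": mean,
        "upper": mean + 2.66 * avg_mr,   # upper natural process limit
        "lower": mean - 2.66 * avg_mr,   # lower natural process limit
        "mr_upper": 3.268 * avg_mr,      # upper limit for moving ranges
    }
```

Points outside these limits signal a real change worth investigating; points inside them are the routine variation the chart is designed to reveal, which is what makes it a useful answer to "compared to what?"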

Once you have a better idea of how to interpret your data, it is important to report the information in a meaningful way. Reports should always be structured around strategy, so that people have the right context to understand what the data is about. Reports should answer the basic questions you need to know: “What is our current level of performance?”, “Why are we getting that result?”, and “What are we going to do next?”