
Metrics – to fool or be fooled – that’s the question!

KPIs should be about understanding what you need to improve to better meet your Customers’ needs and desires. Designing a measurement framework, the metrics that go with it, and the cross-functional dashboards that create cross-silo understanding of how improvement in one area affects another, should not be taken lightly. At the same time, I consider it a must for every company to put together a Customer Experience Feedback Analytics team that keeps tracking metrics, searches for new correlations and continuously deepens the understanding of what truly matters to Customers, and of what you need to do about it.

Unfortunately, in the perception of many, KPIs seem to exist only to please “the boss” or to show “the boss” how well one is doing. KPIs or metrics are often poorly designed, and sometimes extremely well designed: extremely well suited to one important purpose, namely fooling your boss (e.g. for a bonus, for getting the money to launch that project you really want to do, or simply to avoid having to do anything at all).

Here are two examples of metrics that fit into that last category:

Pursuing your own desires, not your Customer’s:

A company understands that their customers desire a speedy turn-around time for account-change requests. They have asked their clients what they would consider speedy; on average, Customers indicated that 10 business days would be fine. The manager therefore put in place a metric: average turn-around time of account-change requests. After a big ICT project (one they had always wanted to do but never had a sound business case for), they succeeded in bringing the average down. Unfortunately, Customer Satisfaction did not increase, and the (complaint) volume in their Contact Center did not decrease either – it increased!

What happened: analysis of the turn-around data revealed that the company had succeeded in speeding up the requests that were already being handled within 10 days; those now turned around in 3 days. A great achievement, but clearly not in line with the desired outcome of their Customers. Worse still, the turn-around time of requests handled outside the 10-day limit increased from 15 days to 18 days. A lot of money had been spent on reducing the average turn-around time (through system automation), only to find it produced none of the outcomes the company desired. The manager is happy though, with a state-of-the-art system and a good bonus for meeting the KPI goals.
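The arithmetic behind this trap is easy to reproduce. Below is a minimal Python sketch with made-up numbers (the 70/30 split and the exact day counts are illustrative, not from the story) showing how the average KPI can improve while the share of requests breaching the Customer’s 10-day expectation stays unchanged:

```python
# Hypothetical turn-around times (in days) before and after the ICT project.
# The numbers are invented, chosen only to mirror the story above.
before = [8] * 70 + [15] * 30   # 70% handled in 8 days, 30% in 15 days
after  = [3] * 70 + [18] * 30   # fast cases sped up, slow tail got worse

def avg(xs):
    return sum(xs) / len(xs)

def breach_rate(xs, limit=10):
    """Share of requests exceeding the Customer's expectation."""
    return sum(x > limit for x in xs) / len(xs)

print(f"average before: {avg(before):.1f} days")          # 10.1
print(f"average after:  {avg(after):.1f} days")           # 7.5 -> KPI "improved"

# The thing customers actually feel did not move at all:
print(f"over 10 days before: {breach_rate(before):.0%}")  # 30%
print(f"over 10 days after:  {breach_rate(after):.0%}")   # 30%
```

The average drops from 10.1 to 7.5 days, yet exactly as many Customers as before wait longer than they said was acceptable – which is why satisfaction and complaint volume did not improve.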

Little effort, maximum results

A company has analyzed that their Call Center First Contact Resolution rate was too low, causing high levels of dissatisfaction among the Customers who contacted the Contact Center. They also found that most of the repeat traffic occurred within 2 weeks after the first call. Hence the responsible contact center manager put in place a KPI to track and reduce the repeat volume occurring within 2 weeks. After as little as one month they saw an increase in the new First Contact Resolution KPI, and after 3 months they hit their target (95% FCR). Unfortunately, and you feel it coming, dissatisfaction levels did not decrease, nor did call volume.

What happened: contact center management proved to be very effective. They implemented the new KPI all the way down to the level of the Customer Service Representatives, who of course know exactly how to influence it without any structural improvement. The CSRs made a great effort at managing the expectations of calling Customers: “it will take at least 2 weeks before your request is dealt with”. No improvements were made to the actual turn-around time of Customer requests, so all Customers simply kept calling back after the two weeks had passed. A good example of: little effort, maximum result!

What kind of (bad) examples do you have to share? Or: how did changing the way you measure genuinely improve your understanding of what mattered to your Customers? Please share your stories here.

One of the best KPIs that I’ve come across was at JCB (www.jcb.com), a UK-based manufacturer of excavators. Joe Bamford was the founder, and in the ’70s and ’80s the vast majority (maybe 90%) of their machines went to “owner/drivers”, or small firms. Every day Joe would go round the senior managers asking what the VOR, or vehicle-off-the-road figure, was for that day. The VOR figure mattered because if an owner/driver’s vehicle was off the road, he couldn’t earn a living – as simple as that. And with Joe walking round the business asking for that single number, he was, in his own way, a walking one-number dashboard.

Hi Wim, I do “get” that a product or service “does a job” for the “consumer”, and that understanding what that job is in reality is key to understanding the Why (i.e. the McDonald’s milkshake does one job for the long-distance driver, a different one for the mother with the kids. The job of the shake for the driver is to keep him entertained and sugared up for the driving. But I don’t think that’s the full “job”). He might feel safer now, because he was worried that he might get distracted, and the shake makes him feel that he has taken an action to ward off this possibility. The Shake Keeps Him Safe. He Buys Safety. And I know, I might be out on a limb here. :)

But I’ve been reading around a bit, and the state that someone is in before, during and after any interaction is the filter through which they interpret it.

While I totally agree with your comment about enabling the customer to make themselves happy – this might be customer experience management through the design and delivery of the service itself, something that we (at our company) think about a lot and would love to be much better at – when stuff breaks down (my bill is incorrect, your product didn’t arrive), the customer doesn’t Trust the company. They Trust a person to hold that concern, to look and sound like they will deliver the fix; and then the customer wants to feel better.

I could be preaching to the choir here, but I have spoken to many big companies, and when I say “have you thought about how your customer feels while they are waiting for your product to arrive?” I am usually left sitting in a pool of silence for about 7 seconds, and then someone says, “that’s very interesting…”

I was not suggesting you do not understand “jobs” & “service-dominant logic”. I also like the “shake” job. You might indeed be off; on the other hand, you might not.

I agree with you that it is important to understand what emotions your Customers go through when experiencing your products or services, and Customer Services too.

I also very much recognize the silence when you ask your question. Trust comes from consistent, great and above all effective experiences with a company that meets the Customer’s needs. One should aim at meeting those needs effectively throughout the experience (including when something breaks down). Do that, and the Customer will feel better.

And, to understand whether you met the need, one should ask about meeting the need, not about how safe he/she feels. Safety when driving can be influenced by a lot of other factors. I think the same applies to satisfaction.

Great thread, with a common theme. The metrics often stand in their own little walled garden, and many involved in the “programme” do not have the cause-and-effect relationships explicitly laid out for them. Metrics/KPIs going awry often happens because those cause-and-effect relationships, and their relative importance, have gone uncommunicated.

On another note: metrics that are imposed will always be gamed. It is human nature. Ours is a culture obsessed with measurement, yet the very things that executives need to move are emotions. Now tell me, are you 3 out of 5 happy, or 4 out of 5 happy?

Thx very much for the comment. Your last note triggered me: “yet the very things that executives need to move are emotions”

I’m not sure I agree. Emotions may well be overrated. If one thinks of consumption as a means to get a job done, one can take a different view on metrics: how well does your product or service (or better: experience) enable the Customer to do the job?

To be able to measure that, one needs to understand what the desired outcome of such a job is. Measuring how effective you are at meeting those desired outcomes is what comes next.

My last note: moving emotions could be considered inside-out thinking: “I can make you happy.” Outside-in thinking is: “How can I help you be happy?” A company that understands that does not need to move emotions; it can meet Customers’ needs. Emotions will follow.

Hi Wim – one of the most common “Manipulate the measurement” tactics that I encounter is what I call “stop the clock”.

The company puts in place a measurement – let’s say: 90% of all applications for health coverage will be processed within 10 days. Perhaps three departments are involved in processing an application – sales, underwriting, and enrollment. Each department starts the clock when it gets the application – and each “stops” the clock whenever it encounters something that needs outside assistance. So, sales admin gets the application in from the field two days after the agent met with the client. Those two days aren’t counted – the clock starts when the app is timestamped in sales admin. Sales admin records the app and sends it on to underwriting – it goes through interoffice mail, so the clock stops again until underwriting picks up the mail and clocks it in.

During the course of underwriting review, the need arises to ask for a medical record. The clock stops while waiting for a doctor to reply. If the doctor doesn’t reply soon, a second request may follow. Finally the records come in, the clock starts again, and the application is underwritten.

Back to interoffice mail, and enrollment sets up the account and schedules the membership cards for printing and mailing.

Time on the clock? Three days – well within standard! Elapsed time for the applicant? Four weeks. Now management wonders why customers complain about slow processing times, while all management reports show everything going great.

The same scenario illustrates another common problem – as long as the 90% goal is hit, no one tracks what happened to the outliers, so nobody knows that while most apps get processed in “10 days”, some take 3 months of clock time!
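To make the gap concrete, here is a small Python sketch of the “stop the clock” effect for a single application. The dates and department pauses are hypothetical, invented to follow the story above:

```python
from datetime import date

# One hypothetical application. Each segment is a period during which a
# department's clock was actually running; the gaps between segments are
# "clock stopped" time (interoffice mail, waiting for medical records, ...).
clock_segments = [
    (date(2024, 3, 4), date(2024, 3, 5)),    # sales admin: app timestamped, recorded
    (date(2024, 3, 11), date(2024, 3, 12)),  # underwriting review around the records wait
    (date(2024, 3, 29), date(2024, 4, 1)),   # enrollment: account set up, cards mailed
]

agent_met_client = date(2024, 3, 2)  # when the applicant actually started waiting
completed = date(2024, 4, 1)

clock_days = sum((stop - start).days for start, stop in clock_segments)
elapsed_days = (completed - agent_met_client).days

print(f"clock time:   {clock_days} days")    # 5 -> well within the 10-day standard
print(f"elapsed time: {elapsed_days} days")  # 30 -> what the applicant experienced
```

The management report sees only `clock_days`; the applicant lives through `elapsed_days`. Any measurement that lets each department pause its own stopwatch will show this divergence.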

There are so many examples of this you could write a book. One last one for now – the service level fallacy. Many organizations have requirements to meet certain service levels in the call center – say, 80% of calls answered within 30 seconds. Assuming this is even a meaningful or important measure, very few organizations measure it by interval. Instead, they average the service level over 24 hours or even a month… so they always make service level, even though every Monday morning the service level between 9 and 10 am is at 15%.
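The averaging trick is easy to demonstrate. A minimal Python sketch with invented per-interval figures (the 80%/30-seconds target and the 15% Monday-morning level come from the example above; the volumes and the other intervals are assumptions):

```python
# Hypothetical one-day service-level report:
# interval -> (fraction of calls answered within 30 seconds, call volume).
intervals = {
    "09-10": (0.15, 300),   # Monday-morning peak: badly understaffed
    "10-11": (0.90, 700),
    "11-12": (0.95, 800),
    "12-24": (0.98, 1200),  # quiet remainder of the day
}

answered_in_time = sum(sl * calls for sl, calls in intervals.values())
total_calls = sum(calls for _, calls in intervals.values())
daily_sl = answered_in_time / total_calls

print(f"daily average service level: {daily_sl:.0%}")       # 87% -> 80% target "met"
print(f"09-10 service level: {intervals['09-10'][0]:.0%}")  # 15% -> peak callers' reality
```

The daily figure comfortably clears the 80% target, yet everyone who called during the busiest hour had a 15% chance of a timely answer. Only per-interval reporting exposes that.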

Thanks for the opportunity to contribute – a good topic and a great example of what happens when metrics measure what management wants instead of what customers want.

I worked for many years in outsourced help desks as a call taker, team lead, supervisor and manager.

Unfortunately, in the industry it’s all too common for managers and directors to have no say in the metrics to be used.

The metrics are designed to make the outsourcer look good (“Look we improved the FCR of your Help Desk!”) and give the client’s management the feeling that they made the right choice to outsource.

Neither client management nor the outsourcer’s management cares about the end customer. Worse, the outsourcer firmly believes that the corporate client is their ONLY customer. This is why outsourced call centres are so despised by callers (end customers): they aren’t there for the end customers.

I think that, first and foremost, it is imperative to clearly define who your customers are. And, this needs to be done by everyone in an organisation and it needs to be done with a large, inclusive definition.

Realistically, if you can’t really define your customers, then even great, well-defined metrics won’t matter. Once this is done, then, and only then, can you go about defining metrics that will actually help you improve rather than fool.

In my limited personal experience, call centre staff fit into one of two categories: those who genuinely like helping people, and those who are looking for a stepping stone into a company or industry. The latter are rarely very helpful no matter what you measure, since they “just don’t get it”.

The former, if left alone, will do a great job. However, if you force them to perform against useless metrics, they become disillusioned. Morale goes down and sick days go up.

So, to sum it up: if you hire the right people, you’re better off measuring nothing than using “fool or be fooled” metrics.

I was delivering the results of a customer satisfaction survey recently in one of our full-day workshops (http://www.infoquestcrm.co.uk/pdfs/Workshop.PDF) when I noticed that one of the groups was getting really upset. Long story short: the customers were complaining about on-time deliveries, while the production director was being paid monthly bonuses for getting everything out the door on time. A little research showed that the client’s “on-time” was the despatch date, and the customers’ “on-time” was the received date – and both parties were using the SAME date (so no allowance was made for the time it took to deliver the parts). This was such an old-fashioned, basic issue that it was understandable for the managing director to go ballistic.

Whether this would be solved by having a dashboard (or whether it was even created in the first place by overly simplistic measures!) is debatable.

You know that we sell B2B surveys, Wim. Sometimes I am approached by B2C companies asking for help. My response is to tell them about my first job, over thirty years ago, working for a clothing manufacturer that supplied Marks & Spencer. The boss had a routine: every Wednesday he and a team of designers would go to M&S’s head office with upwards of 100 new designs. Every other weekend he would go to New York or Paris or Rome (mainly New York, and always on Concorde) and look for the new fashions that were coming through. So, the guy had a big ego. BUT, every week he would spend several hours in the stores, listening to customers. He’d watch, and if he saw a customer pick up a blouse and buy it, he’d go over and ask why. And if she looked at a blouse and put it back on the rail, he’d also want to know why. He didn’t hide away strategising or ask some market research firm to help out. He got off his arse, every week, and found out for himself. And that’s what paid for his gold bath taps and his Rolls-Royces.

Often the root cause of the “please the boss” effect is a corporate environment that values numbers over real results and has a “shoot the messenger” mentality. To avoid this, organizations should encourage the reporting of ALL results, and treat “do nothing” behavior as the real danger to the business. If someone comes to me with bad numbers and a plan for addressing the problem, and then executes to resolve it, that person is, to me, a top performer. To be clear: just surfacing bad news and not doing anything about it is just as bad as producing useless KPIs, since bad news without solutions can create a hugely demoralizing environment.

I realize that at many organizations this kind of open culture is easy to pay lip service to and rarely actually comes to fruition, but fundamentally, I see this cultural shift as the best way to fix the problem.

Recalling these stories makes me feel depressed again and reminds me why I left the consulting field.

One of my previous clients decided to reduce the cost per call in their Customer Support Call Center by moving call traffic to a self-serve forum. That move “optimized” their call cost dramatically; however, they refused to correlate these KPIs with the well-synchronized drop in sales of the formerly supported products.

It is even more interesting to watch the dynamics of poorly chosen KPIs in Sales organizations, particularly when they are not synchronized with compensation policies. Salespeople are exceptionally good at exploiting these gaps, and fast.