NPS®, or Net Promoter Score®, is frequently discussed and increasingly used by technology companies. Multiple blog posts and other resources discuss various aspects of its implementation and use, but one point is rarely mentioned: how to measure improvement or decline in the score.

Recently I had the opportunity to discuss NPS with someone who said “we’ve increased NPS by 50%”. When I asked him to explain, his response was that their score went up from +10 to +15. Given that NPS is supposed to represent customer loyalty, do we really think that customers’ loyalty to this company increased by 50%?

In order to develop a more insightful metric for NPS improvement we should remember that Net Promoter Score is a number that ranges from -100, where all customers are detractors, to +100, where all are promoters. Therefore, our ability to increase, or indeed decrease, this number is limited by the upper and lower boundaries. The top boundary is our goal, and the gap to the bottom boundary is our risk. How do we, then, calculate the journey between the two boundaries?
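As a quick refresher on the mechanics behind those boundaries, the score is the percentage of promoters (responses of 9-10) minus the percentage of detractors (responses of 0-6). A minimal sketch, with names of my own choosing:

```python
def nps(responses):
    """Net Promoter Score from 0-10 survey responses: percentage of
    promoters (9-10) minus percentage of detractors (0-6).
    Ranges from -100 (all detractors) to +100 (all promoters)."""
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

print(nps([10, 9, 8, 7, 3]))  # 2 promoters, 1 detractor out of 5 -> 20.0
```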

Several lives ago I used a system based on the concept of measuring changes along the possible range of NPS scores rather than raw improvement. In essence, we look at our current position and calculate how much improvement we have left. So, for a score of 80, we have 20 points of potential improvement, while at -50 we’d have 150 points. Since that rating system worked on 1-5 performance scores (1 being best), we needed to calculate 4 boundaries between those regions, while the top and bottom ones were the boundaries of the score itself. The table below contains the calculations I used at the time:

| Rating | Lower Boundary | Upper Boundary |
|---|---|---|
| 5: Worst Rating | -100 | (100+score)*0.96-100 |
| 4 | (100+score)*0.96-100 | (100+score)*0.99-100 |
| 3 | (100+score)*0.99-100 | (100-score)*0.05+score |
| 2 | (100-score)*0.05+score | (100-score)*0.125+score |
| 1: Best Rating | (100-score)*0.125+score | +100 |

This system has several notable features:

It assumes a rating of 3 represents fulfilling the basic requirements of the position, and allows for some improvement as well as a slight decline to account for volatility in the scores.

It also assumes that goals will be increasingly difficult to attain, and harder to sustain, as the score increases, and it adjusts the boundaries accordingly.

To illustrate, let’s see how this system works for several sample scores:

| Current Score | Lowest for “4” | Lowest for “3” | Lowest for “2” | Lowest for “1” |
|---|---|---|---|---|
| -50.00 | -52.00 | -50.50 | -42.50 | -31.25 |
| 0.00 | -4.00 | -1.00 | 5.00 | 12.50 |
| 80.00 | 72.80 | 78.20 | 81.00 | 82.50 |
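The boundary formulas translate directly into code. This short sketch (function and key names are my own) reproduces the sample rows above:

```python
def rating_boundaries(score):
    """For a given current NPS score, return the lowest next-period
    score that earns each rating, per the boundary formulas above."""
    return {
        4: (100 + score) * 0.96 - 100,     # below this: rating 5
        3: (100 + score) * 0.99 - 100,
        2: (100 - score) * 0.05 + score,
        1: (100 - score) * 0.125 + score,  # at or above this: rating 1
    }

# Reproduce the sample table for scores of -50, 0 and 80
for s in (-50, 0, 80):
    b = rating_boundaries(s)
    print(s, [round(b[r], 2) for r in (4, 3, 2, 1)])
```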

In accordance with the assumptions made above, the window between the lowest score for a “4” rating and the highest score for a “2” rating shifts with the score. At lower scores the formula is quick to penalize and slow to reward; as the score improves, the formula becomes increasingly forgiving.

Obviously this system is not carved in stone. I developed it to respond to certain conditions; you should experiment with these formulas as your environment requires.

Wish lists and their management are a frequent point of discussion in the customer support and success world. A wish, or enhancement request, is created when a customer would like the product to act differently than it currently does. Enhancement requests are added to the product team’s backlog, together with requirements from marketing, bug fixes and other development items, such as platform and technology upgrades. All items in the backlog are usually prioritized by product management; some will eventually be developed while others won’t.

Customer Success and Support teams, focusing on single customers or transactions, are usually concerned with enhancement requests that represent the needs of individual customers. Conversely, Product Management and Engineering focus on the needs of broader segments of the customer base, as well as the overall needs of the market. Consequently, discussions with product management or engineering tend to break down and leave everybody frustrated. The question, therefore, is: how can Customer Support and Success teams succeed in making the case to Product Management for product enhancements?

From my own experience, there are several key actions to take:

Screen – Ensure that any enhancement requests you promote are in line with the product’s strategic direction. If they are not, you’ll be misleading customers and losing credibility both externally and internally.

Quantify – the number of customers that need, or are expected to need, this enhancement. [How do you know how many will need it? Sometimes it is obvious – for example, customers using a certain application or technology platform, or customers subject to certain regulations.]

Assign value – Estimate the risk the company is facing as a result of not implementing this enhancement. Consider what the company stands to lose; this can range from having a single disappointed system administrator to losing revenues from multiple large customers across several products.

Bundle – Are there several enhancement requests that are similar enough to be combined, or close enough to be developed jointly?

Recruit Customers – create a forum for customers to help refine the definition of the enhancement requests, and solicit votes and comments. Doing so will help you in assigning value to each enhancement. Françoise Tourniaire has an excellent blog post on managing wish lists in your customer forums.

Following these steps will help you come to the product planning discussion equipped with high quality information and better, more strategic arguments than the fairly common “VIP Customer x wants the product to make espresso” argument, and hopefully you’ll have better results to show for it.
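To make the Screen, Quantify and Assign value steps above concrete, here is a hypothetical scoring sketch. The field names, sample requests and the simple multiplicative score are illustrative assumptions of mine, not a standard model:

```python
from dataclasses import dataclass

@dataclass
class Enhancement:
    name: str
    strategic_fit: bool      # Screen: in line with product direction?
    customers_affected: int  # Quantify: how many customers need it
    revenue_at_risk: float   # Assign value: currency units at risk

def priority(e: Enhancement) -> float:
    """Illustrative priority: screened-out requests score zero;
    otherwise weigh breadth of need by the value at risk."""
    if not e.strategic_fit:
        return 0.0
    return e.customers_affected * e.revenue_at_risk

requests = [
    Enhancement("SAML single sign-on", True, 40, 25_000.0),
    Enhancement("Make espresso", False, 1, 500.0),
]
for e in sorted(requests, key=priority, reverse=True):
    print(f"{e.name}: {priority(e):,.0f}")
```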

One of the problems in running any large, globally distributed organization is the potential disconnect between executives in HQ and the teams on the front lines. Frequently it is very difficult to know what’s on the minds of the individual engineers and managers, especially in remote locations where the executive in charge flies in every six months at best, holds a team meeting and flies away again after a day or two.

A partial remedy to this disconnect is periodic reviews with every first-line manager and team lead in the organization. These reviews go by multiple names, from Quarterly Business Review (QBR) to Deep Dive. Many executives hold them with their direct reports as a checkpoint to track execution, raise problems and float ideas. A much smaller subset runs such review meetings with front-line managers. In this post I’d like to share the format that worked for me in the past, and which I have helped others implement as well.

These review meetings can take a number of directions: they can be quantitative or qualitative, and they can explore or review. My preference was to focus the meetings mostly on the qualitative and exploratory side, for three reasons:

Support organizations, especially large and mature ones, have a very strong, often excessive, metrics and reporting discipline. Key metrics are reviewed regularly and are visible to all stakeholders as well as broader populations. Consequently, the likelihood of new discoveries is low and the opportunity to exchange ideas and unstructured information is lost.

Focusing on metrics, where the executive “reviews” the performance, will invariably give the meeting a critical atmosphere and create tension. It will not encourage the free discussion and the building of communication channels that executives should value and team managers appreciate.

Numbers and charts are the safe zone for many support managers, and are often used to deflect meaningful communication. An open, qualitative discussion instead forces a different layer of communication, without the usual crutches.

So, rather than focusing on metrics and dashboards, we can have the discussion cover several basic items:

Recent accomplishments – a chance for the manager to demonstrate the successes of the team. Review the workload and the ways it has changed over time, as well as several important metrics [I know I just contradicted myself, but please bear with me]. This should be the only part of the meeting where numbers and metrics play a primary role in the discussion, rather than illustrating a specific point.

What works? While the previous section focuses on the team being reviewed, this can be an opportunity to provide feedback and insight on other parts of the organization, the company and share success stories. Examples could be any experiments the team has conducted, an initiative they set in place, and so on.

What doesn’t work? Surprisingly, this proved to be the hardest part of the meeting for many managers. There were attempts to rename it [e.g., “Things We Can Improve”], non-responses [“I really can’t think of anything”] and more. But for me it was an extremely useful part of the meeting, for two reasons: first, it gave me a perspective on the difficulties the teams were facing, and second, it allowed me to understand how the managers functioned in a slightly stressful situation.

People and development plans – a key part of every executive’s responsibilities should be the development of every person within the organization. It is easy to focus on individuals who stand out in one way or another – high-flyers, troublemakers and extroverts. However, reviewing every individual on the team, even briefly, can provide additional insight into the team’s chemistry and all the individuals within it. Frequently this is the opportunity for managers to voice concerns that otherwise do not make their way up the organization.

Anything else – ensure there is sufficient time for the managers to discuss any point they feel they should, but had no opportunity to raise earlier in the meeting.

Some helpful guidelines:

Provide guidelines for preparation – ask the managers to take sufficient time to prepare, consult with their own team members, peers and direct manager

Allocate sufficient time: 60 to 90 minutes for every meeting

Limit the amount of paper or slides permitted. For example, require that each of the above sections be limited to a single slide with 4-6 bullet points; otherwise you are doomed to death by PowerPoint.

Be prepared with an understanding of the people and the environment the team operates in, and a good number of questions. A good place to start is your notes from the previous meeting with that manager, as well as a briefing with their direct manager.

Resist the temptation to solve problems during the meeting

Under no circumstances make this a punitive opportunity. Do that even once and the information will spread through the organization faster than you think

Take notes and action items, but do not assign them outside of the organization’s management hierarchy

A generic version of the slides for a support business review can be found on SlideShare.

In summary, use this as an opportunity to learn more about the organization and, above all, its people, and to find the opportunities you can use to help them develop individually as well as increase the capabilities of your entire organization.

The blog has been silent for way too long, I know. I will do my best to continue at a better pace until life intervenes again.

One of my guiding principles, at work and outside of it, has always been “no surprises”. I try to apply this principle to every interaction and relationship, customers and partners, executives and fellow employees.

Recently the validity of this principle was reaffirmed when an online service I use extensively eliminated a very valuable feature without notice. The change removed important data users had saved and, initially, did not offer any way to access it. The problem was fixed, in a very kludgey manner, only after several weeks, during which the data was not available for use. The official statement, released after the fact, claimed this was done in the name of improving the user experience.

The vendor’s customer support team provided a response along the lines of “We have your data, you just can’t see it. We plan to make it available again, but don’t have a time estimate.”

Having encountered similar situations in the past, there are several guidelines I’ve always relied on when making product changes with potential impact to customers:

Plan Well – understand the change you are introducing, the variety of ways it will affect customers, and the number of customers impacted. SaaS vendors, especially, have more detailed knowledge of their customers than on-premise vendors do, and should use that information.

Weigh the need – ask yourself whether the benefit offered justifies the impact to customers’ ability to derive value from your product or service

Seek customers’ input – invite customers to offer their perspective on complex or high impact changes. This can be done in a variety of ways, from 1:1 conversations through surveys all the way to open discussions on your community sites or during user group meetings

Communicate deliberately:

Provide sufficient advance notice – let your customers know when a change will take place, and do it well in advance so that they are able to prepare for it. If your customers’ businesses follow a business cycle, aim for the low-activity periods.

Explain the impact the change will have, list ways in which customers can minimize or eliminate this impact, and explain the benefits in case they are not immediately obvious.

If customer data or their ability to access it will change in any way, explain the data’s disposition – what will happen to it, how it can be accessed, when it will be available for use again

Provide a mechanism to circumvent the problem – in the example above, allow customers to download a copy of the data for offline use.

Offer a mechanism for feedback at each and every stage

Prepare for impact – arm your support team with answers and talking points. Some customers will invariably be unhappy about the change; ensure your support team is not facing them empty-handed.

Obviously, different demographics call for different methods. High end enterprise vendors could schedule individual meetings with customers to prepare for high impact changes, while those in the consumer markets could send a single generic email with instructions. In any case, however, do not leave your customers in the dark!

It’s very exciting to see the increasing coverage Lean Manufacturing (or just Lean) is getting. From my experience, Lean provides very useful and intuitive methodologies for process improvement and streamlining, as well as a more robust set of tools for advanced implementations. A good number of books have been written about Lean Manufacturing over the years, from its origins in the automotive industry with Toyota (hence the alias TPS, for Toyota Production System) through other implementations to the influential ‘The Lean Startup‘.

In its most basic form, Lean provides guidelines and ideas for reducing or eliminating waste in process-driven operations. Waste, for this purpose, is defined as anything that does not add value that the customer will appreciate and pay for. Seven different types of waste (sometimes known by their Japanese name ‘Muda‘) are recognized: Transportation, Inventory, Motion, Waiting, Over Processing, Over Production and Defects. Additionally, for some organizations, including Enterprise Support, I believe it is useful to add Unused Employee Talent as the eighth waste.

When discussing these wastes we should keep in mind that some apply to the work in progress (from cases and knowledge items through equipment being repaired to employment candidates) while others apply to the resources (people or machines) that perform the work. We should also know that Lean distinguishes between necessary waste and unnecessary waste. Necessary waste, known as ‘Type I’, could be, for example, any activity that’s required for compliance – from inventory audits to health and safety drills. Unnecessary waste is known as ‘Type II’, and should be eliminated. While the first distinction is built into each waste’s definition, determining which wasteful activity is necessary and which isn’t is subject to individual analysis.

With this understanding, let’s look at each waste, how it can manifest itself in the enterprise support world, and how reducing or eliminating it can help us improve our operation. We should also note that certain wastes can generate other types of waste:

Transportation – focused on the work items; it applies equally to physical movement and to transitions of ownership or responsibility. For example, are we moving cases unnecessarily between teams and individuals? How much extra work is created every time we change case ownership? For hardware support organizations, are we moving returned equipment between departments or locations unnecessarily? Excessive transportation can sometimes lead to increased inventory – another waste.

Inventory – another work item waste. Are we keeping cases open unnecessarily? Why do we have so many open cases? In what stages are they open? Do we respond to all recruitment candidates as soon as a decision is made? How much equipment do we have waiting to be repaired? What’s keeping us from getting that equipment to the lab? How much inventory do we keep to cover potential customer failures?

Motion – a resource waste. Are we asking support engineers to move physically in order to do parts of their job, for example going to the lab to recreate a problem? Are we requiring our employees to repeatedly switch contexts, for example by instructing support engineers to drop whatever they are doing to answer incoming phone calls?

Waiting – another resource waste, where people or machines are idle due to lack of work, the need for another person’s knowledge, or a constrained resource. Typical examples include waiting for access to systems needed to recreate a customer’s problem, for a specialist with knowledge that can help make progress with a case, and generally for any other person or resource required to complete a task.

Over Processing – a work item waste, where the organization invests work that does not add customer value. For example, complicated forms acting as a barrier to escalations rather than facilitating rapid problem resolution, repeated analysis of a customer’s problem by multiple engineers, repeated attempts at fixing a customer’s problem, or repeated repairs of returned equipment.

Over Production – also a work item waste. It usually refers to making items ahead of customer demand. Typical examples include preparing and publishing knowledge items for a one-time problem that will never recur, or repairing excessive quantities of units intended to replace customers’ failed equipment.

Defects – again a work item waste. It focuses on an end product that does not address a customer’s need, from providing the wrong fix, through creating a knowledge item that misguides customers, to hiring a person who is not qualified for the job.

Unused Employee Talent – I have written in the past about the impact of untapped employee talent from a knowledge management perspective. Additional examples include forcing escalations to happen within a certain time, even when the front-line engineer has all the knowledge required to resolve the customer’s case.

I am sure every reader can think of many other examples of waste in their own environment. This post does not purport to provide an exhaustive list; rather, it is an attempt to give readers a glimpse into the world of Lean as an additional tool for improving process flows. I hope you begin to make use of Lean thinking, and that it works for you as well as it did for me.


The people at TeamSupport invited me to write a guest post on their blog. The result is Bridging the Gap in B2B Service Expectations, which was published yesterday (12 January 2016). The theme should be familiar to regular readers, touching on some differences between B2B and B2C support and offering several actions we can take to avoid some potential traps.

A previous post on this blog discussed several reasons for not using case life as a goal for first-line support engineers. I was reminded of that post, and the reasons for writing it, while hearing someone state that organizations should “only measure outcomes, not activities”. It’s easy to agree with this statement; after all, outcomes are the only thing that matters in business, right?

Let’s take a moment and try to understand how these outcomes are realized. The obvious question to start with is “what’s the organization’s deliverable to the company?” Over the years, through my own and others’ experiences, I have seen a variety of answers to this question as it concerns enterprise support organizations. Goals ranged from operational objectives through customer satisfaction all the way to maintenance renewal rates and revenues. Surely the organizational deliverable is an important metric to track and report on, but is it the only thing that matters? And more importantly, does it apply to every member of the organization?

Similar to case life mentioned earlier, or to the goals listed above, outcomes are trailing indicators. They, almost by definition, represent an aggregation of a variety of activities and inputs. Making any improvement to the operation requires us to understand the individual components our deliverables are composed of and build a system of metrics that allows us to understand the way we perform on each, identify weaknesses and measure improvements. We should also measure inputs, from the number of cases to the growth in the number of customers, and track the way those inputs influence our performance, all the way to delivery on our goal. In essence, we should design a metrics structure that breaks down the goals of the organization into the departmental goals supporting them, and into the activities those goals are composed of.

We should also keep in mind that metrics can serve different roles depending on the audience. One group’s outcome is another’s input. Activities represent a major investment for the organization, and ensuring their efficiency and effectiveness is a primary concern for anybody running a process-driven operation. Any measurement system we design will have to take into account the organizational deliverables for the short and longer term, the investments the organization has to make to produce those deliverables, the inputs it requires in order to do that, and the way these measurements apply to every function within the organization. More on that in our next post.

Recently I came across two interesting posts discussing NPS®. Each of them seems to miss one or more important points about creating a sustainable customer survey program that actually produces tangible results.

First, and very interestingly, is Fred Reichheld, creator of NPS, on LinkedIn telling us to “Stop Thinking Like a CEO (and Think Like a Customer Instead)“. If we think about the title for a minute we’ll realize that, unlike the customer, a CEO has responsibility for identifying weaknesses in the customer experience and taking action to fix them. In his post, Mr. Reichheld tells the stories of two CEOs who transformed their companies’ customer experiences. But did those CEOs do what Mr. Reichheld is asking us to do? No, they did not. They never stopped thinking like a CEO, but they did bring the customer perspective into the organization and, most importantly, acted on it to deliver a better customer experience.

In his post Mr. Reichheld claims that:

“The best approach is to ask customers on a scale of zero to 10 how likely they would be to recommend your products or services. This is what […] companies do every day to be loyalty leaders. By closing the loop with their customers and taking action on the reasons why customers love doing business with them or not […]”

But while the scale is detailed, the need to “take action” remains nebulous – how do we know what actions to take, and how do we verify their impact?

Another post that caught my eye last week was on the Bluenose blog, titled “Driving Net Promoter Adoption: 3 Must Do’s“. In his post, Don MacLennan, Bluenose CEO, states that leadership commitment, organizational alignment and customer follow-through are essential to improving a company’s NPS results, and he recommends engaging with customers as a survey follow-up in order to establish the cause of their dissatisfaction and eliminate it.

Some readers may ask whether there’s anything wrong with engaging customers to find out what’s causing dissatisfaction. There is nothing wrong with that, but when we think about a process-driven business we can’t rely on anecdotal evidence as the sole input to our improvement efforts, for several reasons:

The plural of anecdote is not data – so no matter how many customers the company interviews they will be only a small portion of the customer base and their opinions remain anecdotal rather than a complete picture

Discrepancy between stated and actual behavior drivers is a well-known phenomenon, documented in numerous academic papers

Converting anecdotal evidence into a sustained organizational effort to improve products and services is a challenge in itself

With that said, what would be the best way to follow up on an NPS, or any other, survey and generate sustained improvement?

The answer should be no surprise to the blog’s followers. Survey responses, in correlation with your operational data, provide the organization with very clear insights into actual (as opposed to stated) customer behaviors and their drivers. With these insights it is possible for the organization to develop sustained improvement efforts based on the most critical elements of the customer experience, and gauge their impact on customer satisfaction as well as customer behavior. It’s doable, and usually easier than expected, but it does require commitment and a change in the way we think.
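As a sketch of what correlating survey responses with operational data can look like in practice, here is a stdlib-only Pearson correlation between survey scores and a per-respondent operational metric. The data and the choice of metric (average case resolution time) are invented for illustration:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scores = [9, 10, 7, 3, 2, 8]                # survey responses, 0-10
res_days = [1.0, 0.5, 3.0, 9.0, 12.0, 2.0]  # avg resolution time (days)
r = pearson(scores, res_days)
print(round(r, 2))  # strongly negative: slower resolution, lower scores
```

A strongly negative coefficient here would point at resolution time as a likely driver of dissatisfaction, which can then be attacked as a sustained improvement effort rather than case by case.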

Net Promoter, NPS, and the NPS-related emoticons are registered service marks, and Net Promoter Score and Net Promoter System are service marks, of Bain & Company, Inc., Satmetrix Systems, Inc. and Fred Reichheld

Recently I needed to contact a company for support concerning a feature they removed from one of their products, which I, and many other users, found extremely valuable. Despite it being a free service, the support team responded on a weekend, in a time frame that would put most enterprise vendors to shame. However, the content of their message left a lot to be desired:

Thank you for contacting us!

Personally, I’m glad you brought this out. At […], we are constantly working to improve our platform and products. Suggestions from […] like you are an incredibly important part of that process. Updates can be done without notice, we will keep you informed if we make an update on our platform. We want you to know how much we appreciate your input and support!

Parsing this message, we find a number of items for improvement:

They start by saying “Personally, I’m glad” – but this is not a personal matter; I am interested in the company’s position, not the agent’s personal opinion

Then “At […], we are constantly working to improve our platform and products” – in this case they made a change that has a negative impact on the customer experience, in the opinion of a large number of users

Then “Suggestions from […] like you are an incredibly important part of that process.” – First, this is a general statement that has no bearing on anything, and second, I did not make a “suggestion”, I pointed out a change that’s making it harder for me to use the product

Followed by “Updates can be done without notice, we will keep you informed if we make an update on our platform.” – which is a plain contradiction

And the concluding sentence is “We want you to know how much we appreciate your input and support!” after doing everything to show that they don’t

This vendor, with a single response, managed to transform an otherwise satisfied customer with a product problem into a skeptical customer, questioning their commitment and willingness to provide service. Analyzing how this happened leads us to several conclusions:

Being too familiar and personal, with phrases like “personally” and “I’m glad”

Using phrases that sound good but have very little to do with the situation

Using feel-good language as a substitute for meaningful information

Do your response templates suffer from similar problems, or are you making it every individual’s responsibility to produce their own customer responses? Have you ever read those responses with a critical eye? Or, even better, asked someone else to do so? If you serve customers in multiple countries and cultures, are you aware of the differences in perception between those cultures and your native one?

Update: while writing this post I received a message from the vendor:

I’ve passed your message to our engineering team and will be better able to help with your particular question. You will receive a more detailed reply shortly; we appreciate your patience!

Which sounded like good news, until the following message arrived a day or two later:

Hi there,

We’re experiencing an extremely high volume of support requests currently and may not be able to directly answer every question received. Please browse our […] Help Center […] for articles that may help you resolve your issue, as it provides a lot of solutions to common issues reported by our learners.

If you have been able to answer your question in the last few days, there’s no need to reply to this message. If you have not been able to find your answer and still need assistance from us, please reply to this email and we’ll get an answer to you as soon as we can.

Thank you for your understanding as we continue to improve our support resources to help […] like you!
[…] Community Operations

Many companies have challenges handling growth; this one seems to do a particularly bad job of acknowledging it and taking the actions required.

The blog has discussed in the past some aspects of partner ecosystems in support, mostly focusing on the motivations of the two sides and the business relations between them. Recently I had the opportunity to discuss support-oriented partner programs and the basic building blocks that make them successful. This post, therefore, will focus more on the operational side of creating a partner program.

Companies have different classes of 3rd parties acting as intermediaries between them and some, or all, of their customers. These 3rd parties are sometimes called distributors, OEMs, channel partners, VARs, and more. From a support perspective, however, we are mostly interested in two classes of partners: those who support their own customers and those who do not. In this post we’ll focus on the first group, and we’ll use the term partners for simplicity.

Building a successful partner support program requires consideration of four specific points:

Which partners should participate – entry criteria

What does the company require from the partners

What does the company provide to partners in the program

What does the company do to ensure partners continue to deliver, and what to do if they do not

To join the support program, partners should qualify on several levels:

Infrastructure – having a case tracking system (or using the company’s system), phone access to support staff, the ability to recreate customer problems, etc.

People – the partner must have dedicated, well-trained people to address customer cases. Routing customer calls to services or pre-sales staff in the field is not an acceptable substitute

Members of the partner support program should be expected to provide a certain level of technical expertise and deflect a considerable number of cases before escalating the balance to the vendor’s support organization. It is important to remember that some cases might slip through the net. But overall, the proportion of simple cases – those resolvable via the knowledge base, problem recreation and other relatively simple activities – should be much lower among partner escalations than among cases received from direct customers.

Partners are an extension of the company’s support organization, and as such their ability to successfully support their customers is key to those customers’ satisfaction with the products, as well as their propensity to renew support and maintenance contracts. The company must therefore ensure partners have access to as many information resources as possible, from internal and external knowledge bases, through customer cases, all the way to training and more. When partners are given access to customer cases, customer identity should not be shown to the partner staff.

To ensure that partners continue to deliver the expected service levels to their customers, companies should develop a relationship that borrows some elements from the customer success discipline – for example, a periodic business review where metrics are reviewed, followed by an open discussion of what works and what doesn’t and, most importantly, how to capitalize on the positives and fix the negatives.

In short, a successful partner program treats partners as an extension of the company and works to ensure their success, rather than ignoring them or, even worse, creating an adversarial relationship.