Three Pitfalls to Avoid with Artificial Intelligence in Customer Service

A few weeks ago, I did a webinar on using AI in customer service with my friends at ServiceNow. It was very well received, and since then I have received some questions about the content. One slide generated the most discussion, and I wanted to take this opportunity to go into deeper detail on it. As I was told during boot camp training at Gartner, when more than two people ask the same question, it should become a research note – an answer for the many who are not going to ask but have similar concerns.

I don’t write research notes anymore, but here is a blog post explaining the slide and the lessons learned behind it.

The slide in question, which I have been using for a while, is this one.

There is a lot going on in this picture (after all, I was trained at Gartner on how to build slides), but there are three things you should be aware of going into it that will make your journey through analytics and into artificial intelligence far more rewarding.

First, it is about automation

Every year I conduct a survey of customer service practitioners to help me understand what they are working on, planning to work on, and losing sleep over at night. During these conversations, trends and patterns emerge. One of the biggest ones over the last three years has been the rise of self-service (again) and chatbots (also, again). We are not going to have the in-depth discussion of whether it will work this time – that’s a different blog post – but we are going to discuss why it is happening.

Why are companies embracing self-service and chatbots for customer service?

To automate the simplest transactions.

I, and many others, have written over the last few years that customer service organizations looking to improve how they deliver service must automate the simple stuff. Customers, we say, just want an answer, not an interaction or an experience. And in anywhere from 40 to 80 percent of cases, those answers can be automated.

Customer service should be reserved for the exceptions, the cases that cannot be defined and resolved by rules and therefore cannot be automated. It’s been said many times that the optimal customer service is no customer service, no interactions between customers and the company. And the way to approach that ideal is to automate as much of the simple stuff as possible.

There are many ways to automate customer service (proactive service, multi-channel self-service, subject-matter experts via communities, and more), but they all focus on the same question: how can we reduce the number of interactions and deliver better service at the same time? Automation is the answer, and AI is what is going to get your organization there.

Artificial intelligence is not a magic stone that solves all problems automatically; it’s a journey towards automation that must be strategically planned and implemented. If you do that, instead of “buying AI” you will get to success.

Second, machines only do what they are told

Forget the sci-fi literature and movies: computers don’t become sentient. Even if they were told to become sentient, without an explanation of what sentience is, or the ability to use things like intuition and leaps of faith, they could not do it.

This is the biggest disservice done to AI over the last few years: the expectation that by “implementing it” the computer will somehow magically figure things out, begin to act independently, solve all problems, and – singularity – rule the world. It makes for a very interesting, if deeply flawed, narrative for the technology, but it takes away from the real value proposition that AI offers.

In the chart above there is a good description of how the progression to value works: computers are experts at processing massive amounts of data very fast, and spotting trends and patterns that we would not notice. They are then excellent at spotting other occurrences of the same pattern or trend, and in quickly applying rules to resolve these new instances in the same way that the older ones were successfully resolved.

That’s what they do, because that’s what we can tell them to do repeatedly and systematically. We can ask them to spot repeat occurrences, to resolve new situations like the old ones, and to point to what worked and what didn’t. Everything else they cannot do without a lot of training – and for some tasks, not even training will help. They are workhorses for massive-data situations, and they resolve those situations.
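The “spot repeat occurrences” part can be sketched in a few lines. This is a toy illustration, and the ticket subjects below are hypothetical examples, not real data:

```python
from collections import Counter

# Hypothetical stream of resolved ticket subjects.
tickets = [
    "password reset", "shipping delay", "password reset",
    "billing error", "password reset", "shipping delay",
]

# Spot repeat occurrences: the most common patterns are the best
# candidates for automation.
top = Counter(tickets).most_common(2)
print(top)  # → [('password reset', 3), ('shipping delay', 2)]
```

At real scale the counting happens over millions of interactions and the “subjects” have to be extracted first, but the principle is the same: the machine surfaces the repeats, and we decide what to do with them.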

If the organization does not know how to resolve a situation in a programmatic way, it cannot tell the computer how to do it either. In its simplest form, this is about documenting resolutions to common problems and then programming the computer to identify the key elements that define a situation and the best way to apply the response. As more evolved elements of automation and AI are applied, the machine can make “assumptions” where incomplete data is provided and hold back-and-forth conversations to fill in those holes and arrive at decision points faster – either way, it is doing what it’s told.
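The “document resolutions, then program the computer to apply them” idea can be sketched as a minimal rule-based resolver. Everything here – category names, keywords, responses – is a hypothetical example, not a real system:

```python
# A toy rule-based resolver: the machine only applies the resolutions
# we have documented; anything unmatched is escalated to a human.
RESOLUTIONS = {
    "password_reset": {
        "keywords": {"password", "reset", "login", "locked"},
        "response": "Send the self-service password reset link.",
    },
    "billing_question": {
        "keywords": {"invoice", "charge", "billing", "refund"},
        "response": "Route to the documented billing FAQ answer.",
    },
}

def resolve(message: str) -> str:
    """Match a customer message against documented resolutions."""
    words = set(message.lower().split())
    best, best_overlap = None, 0
    for name, rule in RESOLUTIONS.items():
        overlap = len(words & rule["keywords"])
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    if best is None:
        # Undocumented situation: the machine cannot invent an answer.
        return "Escalate to a human agent."
    return RESOLUTIONS[best]["response"]

print(resolve("I am locked out and need a password reset"))
print(resolve("My order arrived damaged"))
```

Note what happens with the second message: nothing matches, so it goes to a person. A machine learning layer can make the matching fuzzier and faster, but it still works from the resolutions we documented.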

Document well what the machine is told to do, along with the potential responses and decision points, and work with machine learning tools to improve the speed and accuracy of the predictions and solutions. It is about you telling the computer what to do, always, and the computer doing it faster and better.

Third, clean data BAE

As you can imagine, the basis for successful AI is clean data. Back in the day, when I was a little boy, we used to say that data was about 20-30% “dirty” or unusable. We have done a lot of work since then to improve the accuracy and cleanliness of data – until Big Data came to town and undid those efforts by collecting all sorts of noise, calling it data, and trying to use it. That raised the percentage of “dirty” data we have in storage to the 30-40% range – higher than before!

However, just as we did before, we are getting better at understanding what is good data and what is not. As data storage got cheaper over the last decade, we decided to store more data and “figure out what to do with it” later. The problem is that what we stored is not data; it’s just noise until it is found to be useful – and that is the part we are still trying to figure out.

And the reason we are doing all this is that as we began to implement AI, we noticed the results were erratic and not easily reproduced from one case to another. What we thought was a prediction turned out, in some cases, to be a guess based on little real data and lots of noise. Thus, we learned that without clean data there is little value to be derived from AI – an initiative that looks at the data for patterns it can recognize and then uses those patterns to predict outcomes.

There is no standard, one-size-fits-all method of cleaning data, but what has worked best in the post-Big Data world has been to – well, not store it all. If data has no value to the business and cannot help AI processes predict patterns better, there is no purpose in storing it. Even though storage is cheap and fast, storing garbage yields garbage as insights and as the basis for predictions. To ensure AI does what it is supposed to do, clean data is essential to find the right patterns, implement the right predictions, and analyze the results to learn for future iterations.
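A minimal data-hygiene pass looks something like the sketch below: validate records before they feed any pattern-finding step, and measure how much of what you stored is actually usable. The field names and thresholds are hypothetical:

```python
# Hypothetical customer-service case records; two are "dirty".
records = [
    {"case_id": 1, "channel": "chat",  "resolution_minutes": 12},
    {"case_id": 2, "channel": "",      "resolution_minutes": 8},   # missing channel
    {"case_id": 3, "channel": "phone", "resolution_minutes": -5},  # impossible value
    {"case_id": 4, "channel": "email", "resolution_minutes": 45},
]

def is_clean(rec: dict) -> bool:
    """Keep only records whose fields are present and plausible."""
    return bool(rec.get("channel")) and rec.get("resolution_minutes", -1) >= 0

clean = [r for r in records if is_clean(r)]
dirty_pct = 100 * (len(records) - len(clean)) / len(records)
print(f"kept {len(clean)} of {len(records)} records ({dirty_pct:.0f}% dirty)")
```

The point is not the specific checks but doing them before the model ever sees the data: patterns found in the two dirty records above would be noise dressed up as insight.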

Clean data Before Anything Else (in case you are not current on your teen lingo) is the number one lesson of adopting AI for customer service: clean data makes for proper decisions.

Embracing artificial intelligence in customer service is about understanding what you are trying to do, how you are going to do it, and ensuring that the tools used to make it happen are the right tools and work as expected.


4 Replies to “Three Pitfalls to Avoid with Artificial Intelligence in Customer Service”

Excellent post, Esteban. It’s good to be reminded of the holy grail of AI in customer care when so many shiny objects keep emerging to take our eyes off the ball. Looking forward to seeing you in a few weeks when you speak at the University of Wisconsin E-Business Consortium conference in Madison!

Clean data BAE shortcuts reality and will probably hamper many from gaining intelligence from their globs of data. Consider working thru Google’s machine learning crash course (apparently the course was open sourced after many Googlers had completed it). The process, simplified, for Google Home uses neural networks in TensorFlow to slice data into a parallel series of cascading search-like queries, testing results with feedback comparisons to sample training data and weighting the results. Works like an automated search engine ranking results (ta-dah). Using the resultant data set, manually add intelligence to Google Home. Such as: tell me when the customer is angry by listening for signs of frustration in vocabulary, or, on a camera application, when the customer’s face is angry in a picture. Another ML application may need to start over with the data from scratch. https://developers.google.com/machine-learning/crash-course/ml-intro