How to Design Less Annoying Bots

This article is a follow-up to my previous one: Why Bots Are the Next Big Bang in UX. I closed that article by introducing the onboarding process, and more specifically the importance of tutorials, which can be summarized as:

“So next time the user talks to the bot, it should be more like a conversation between friends who know each other.”

The bot in my example — Poncho, the funny weather bot — ensures this by creating “hooks” based on the user’s preferences. However, there’s no guarantee that a first-time user will give precise information.

That’s why a good designer needs to think one step ahead.

In this article we’re going to answer the following questions:

What is a conversational funnel?

How do you build trust, and why is it important?

What are the best practices that deliver a smooth user experience?

Let’s begin.

Conversation types and conversation flow

One of the hardest parts is designing the conversation flow. There are two main types of conversations. The first is task-led, where the goal is to accomplish certain tasks while moving through the conversational funnel. The second is topic-led, which is different because it is more about entertainment.

Task-led conversations:

A good example could be when someone is buying lunch. It starts with a single task definition. Once we have a task, the process needs to be initialized.

Figure 1.

The initialization process can be divided into multiple subtasks. In our example it could be something like this:

Figure 2.

The same goes for the actual buying process:

Figure 3.

If everything goes according to plan, the task is completed. However, as I mentioned before, we think and talk like humans, and this is where it gets tricky.
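The steps from Figures 1–3 can be sketched as a simple ordered funnel. Everything below (the `Conversation` class and the subtask names) is hypothetical, invented only to illustrate the idea of advancing through subtasks:

```python
# A minimal sketch of a task-led conversation flow, modeled as an
# ordered list of subtasks. All names here are hypothetical examples.

LUNCH_FUNNEL = [
    "greet_user",
    "show_menu",
    "take_order",
    "confirm_order",
    "arrange_payment",
    "complete_task",
]

class Conversation:
    def __init__(self, funnel):
        self.funnel = funnel
        self.step = 0

    def current_subtask(self):
        return self.funnel[self.step]

    def advance(self):
        # Move to the next subtask; the task is completed once we
        # reach the final step.
        if self.step < len(self.funnel) - 1:
            self.step += 1
        return self.current_subtask()

convo = Conversation(LUNCH_FUNNEL)
print(convo.current_subtask())  # greet_user
convo.advance()
print(convo.current_subtask())  # show_menu
```

In practice each subtask would itself branch (the user may decline the menu, ask questions, and so on), which is exactly where the tricky part begins.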

The conversation funnel

When it comes to task-led conversations, I like to mention the conversation funnel. You might have heard the term “funnel” used in marketing: for webshops, it describes the desired path that visitors should follow in order to become customers. Needless to say, companies need to focus on website optimization if they want to get the most out of the funnel.

When designing conversations, optimizing the different stages of the funnel is just as important.

Figure 4.

In the example shown in Figure 4, the goal of the task-led conversation is to guide users through the funnel and optimize the conversion rate.

Topic-led discussions:

The biggest difference between task- and topic-led conversations is the number of steps. Unlike before, the main goal is entertainment, or in other words: a way to kill time. Instead of a certain path, the designer needs to define a set of topics. The topics are large datasets, and the user is free to explore them during the conversation. However, this doesn’t mean that the user is thrown into the ocean. A good topic-led conversation has some sort of “anchor” points, so it also has a structure that can look something like this:

Figure 5.

User: Which is your favorite transport vehicle in Star Wars?

Bot: Bantha.. obviously.

User: Lol, I kinda like the Sith Speeder, it’s pretty fast and funny looking at the same time. Remember when Darth Maul got on it?

Bot: Darth Maul is a really cool character, though not as cool as BB-8. Would you like to see some cool dual lightsaber designs?

When it comes to topic-led conversations, keeping up the interest can be easier because there is more room to navigate. In the example above, the conversation started around a dataset of vehicles, then moved on to characters.
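One way to picture those “anchor” points is as links between topic datasets: the bot pivots to a neighboring topic when the user mentions something from it. The topic graph and helper below are invented for illustration; a real bot would use proper intent detection rather than substring matching:

```python
# Sketch: a topic-led structure as a graph of topic datasets with
# "anchor" links between them. Topics and items are invented examples.

TOPIC_GRAPH = {
    "vehicles":    {"anchors": ["characters"],
                    "items": ["Bantha", "Sith Speeder"]},
    "characters":  {"anchors": ["vehicles", "lightsabers"],
                    "items": ["Darth Maul", "BB-8"]},
    "lightsabers": {"anchors": ["characters"],
                    "items": ["dual lightsaber"]},
}

def next_anchor(current_topic, message):
    """Pivot to an anchored topic whose items the user just mentioned."""
    for topic in TOPIC_GRAPH[current_topic]["anchors"]:
        if any(item in message for item in TOPIC_GRAPH[topic]["items"]):
            return topic
    return current_topic  # nothing recognized: stay on topic

# The user mentions Darth Maul while talking about vehicles,
# so the bot can pivot to the "characters" topic.
print(next_anchor("vehicles", "Remember when Darth Maul got on it?"))
# characters
```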

How to build trust?

We are humans; we think and talk as humans, and for us building trust is essential. We can make decisions and change subjects without following a particular logical order. The main problem with human-bot interaction is that humans like to test bots in order to get an idea of how “smart” they are. When the user starts this “game”, there’s a high chance that the bot will fail at some point, even if it deals with huge datasets such as Amazon’s inventory, like Kip does.

Figure 6.

Going off topic is always a challenge, and a designer has to face it sooner or later. However, this “conflict” isn’t always a bad thing. Maintaining the “human-like” factor in a conversation is expected and not impossible. A good practice is decoration: the designer adds sentences that have no direct influence on the conversation but give the bot character, or in other words, more depth. This way the user can “forget” that he/she is talking to a pile of code.

Managing User exceptions:

For humans, changing course in a conversation is not a big deal. Let’s say the user goes through the initialization process without any problems, then suddenly wishes to revert his/her decision and pick something else.

The designer can handle these situations in multiple ways:

Routing the conversation to a human is probably the easiest and most transparent way, and it can be a good alternative to course correction. A person who is in charge can suggest solutions to more complex questions. However, there is a risk that users will eventually “exploit” this feature by asking for human assistance every time.

Course correction is about how the bot can climb out of awkward situations by taking the conversation back on track. Let’s say the user wants something other than the options available. In that case, an indirect course correction can save the outcome. These types of conversations are very common; for example, when you go to a restaurant and ask the waiter about something that is not on the menu.

User: Hi, I wanna eat something.

Bot: Hello! What can I get you? Would you like to see the menu?

User: Do you have Lasagne?

Bot: We can’t serve you Lasagne at the moment (our Chicken Enchilada is super delicious by the way), but I’ll let the chef know about your wish and will notify you as soon as we have it on our menu!

This is much more appealing than simply repeating the same offer all over again.

It would be confusing to ask what the user wants and then offer the menu anyway just for the sake of bot-logic. This is a great opportunity to extract entities. In this example, the order contains a single piece of simple, non-sensitive information. There could also be multiple entities present at the same time.

User: Hi, I would like to have some Lasagne for dinner and delivered home.

If we look at the statement it holds the following entities:

the exact type of food

a non-specified time

a non-specified location

The first two are simple and not sensitive, but the last entity is the exact opposite: it defines an address, which is backed by a large database containing personal information about the user. For the user, this is nothing but ordering some food.

However, for the bot this is a great opportunity to gather more data in order to provide a more personal user experience in the future.

On mobile and web interfaces there are forms where the user has to fill out the required fields in a given structure, with a validation process involved. In conversations, validation is still a requirement, but the input doesn’t always arrive in a fixed order. Artificial intelligence frameworks usually have sophisticated entity extraction solutions that can handle validation.
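As a rough sketch of this order-independent slot filling: real AI frameworks use trained models for extraction, but the invented keyword patterns below are enough to show how the Lasagne order fills three slots at once, while validation only asks for whatever is still missing:

```python
# Sketch of order-independent entity extraction and validation.
# The keyword patterns are toy examples, not a real NLP model.

import re

REQUIRED_ENTITIES = ["food", "time", "location"]

PATTERNS = {
    "food":     r"lasagne|enchilada|pizza",
    "time":     r"breakfast|lunch|dinner",
    "location": r"home|office",
}

def extract_entities(message):
    entities = {}
    for slot, pattern in PATTERNS.items():
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            entities[slot] = match.group(0)
    return entities

def missing(entities):
    # Validation: whatever order the user supplied things in,
    # only ask for the slots that are still empty.
    return [slot for slot in REQUIRED_ENTITIES if slot not in entities]

order = extract_entities(
    "Hi, I would like to have some Lasagne for dinner and delivered home.")
print(order)           # {'food': 'Lasagne', 'time': 'dinner', 'location': 'home'}
print(missing(order))  # []
```

If the user had only said “I want some Lasagne”, `missing()` would return the time and location slots, telling the bot exactly what to ask next.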

Implementing Artificial Intelligence:

It’s a common misconception that Artificial Intelligence (AI) is the very core of bots. AI in its current state can be considered a set of tools that designers and developers can use while building all kinds of applications. In certain types of conversations, it is advisable to use AI: it helps deal with complex conversations. However, it also adds some complexity.

Image recognition — probably the most common one out there, and it has proven itself in the past couple of years. It allows bots to recognize and analyze objects and faces, look for certain patterns in people’s emotions, read text, and so on.

Natural language processing (NLP) — this is especially effective when building a conversational environment. It helps with understanding intents and extracting information from the conversation.

Conversation management — a higher-level AI capability that not only extracts entities but also sees through the whole context.

Predicting the outcome — this ability is essential to humans. We can tell what’s right and wrong, or what the next move in a situation should be, just by predicting the future outcome. Prediction APIs can analyze and learn certain patterns.

Drawbacks of AI:

Unless the situation really requires it — like managing pure text conversations — AI can be overkill. It can be complex and, of course, expensive. Due to its complexity, it’s hard to implement in a bot. It also needs a lot of time to learn and build up its personality before it can be productive.

Most bots today don’t use AI; instead, they guide the user with the help of rich interactions.

Conversation best practices:

Shorthanding: this comes in handy when designing UI flows. If the user and the bot already “know” each other, there is no reason not to use previously extracted entities to build up a preference list that is unique to the user. Imagine ordering your favorite pizza without having to go through the whole process over and over again. Instead, all you have to say is: “Hi, I want the usual today!”
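A minimal sketch of shorthanding, assuming entities extracted in earlier conversations are stored per user (the storage layout and function names below are hypothetical, not a specific framework’s API):

```python
# Sketch of "shorthanding": previously extracted entities become a
# per-user preference list, so returning users can skip the full funnel.

preferences = {}  # user_id -> remembered order details (hypothetical store)

def remember(user_id, order):
    preferences[user_id] = order

def handle_message(user_id, message):
    if "the usual" in message.lower() and user_id in preferences:
        usual = preferences[user_id]
        return f"One {usual['food']} delivered to {usual['location']}, coming up!"
    return "What would you like to order?"

# After a completed conversation, the bot remembers the order...
remember("anna", {"food": "pepperoni pizza", "location": "home"})
# ...so next time a single message is enough.
print(handle_message("anna", "Hi, I want the usual today!"))
# One pepperoni pizza delivered to home, coming up!
```

A first-time user who says the same thing simply falls through to the normal funnel, so the shortcut never blocks the standard flow.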

Learning from mistakes: making mistakes is part of the game called life. Instead of making excuses, a designer should analyze mistakes and prevent them from occurring in the future. Going through all the weak spots where the bot failed to satisfy the user’s expectations is a great opportunity to learn more about the user and extract additional information for a more future-proof engagement.

Rich elements and expecting feedback: pictures, GIFs, emojis, and navigation buttons are additional elements that can make a conversation more interesting. In some cases, a single picture can express more than multiple lines of text. A decent bot can provide this hybrid communication experience.

But a bot really shines when it expects these rich interactions and feedback from the user and can react in the same way!

Conclusion:

A conversation between human beings can be task- or topic-led. This also applies to human-bot interactions. However, humans often go off topic, change the logical order, and so on. Sometimes they do this just to test their “partner”. If the bot fails to correct course and pull the conversation back on track, the user might just leave the conversation funnel. This means that building trust is one of the main priorities and should happen at an early stage. Managing user exceptions and building a personalized experience from extracted information can be very effective for building trust. Many think that Artificial Intelligence is simply the essence of bots, but that’s not true. AI is better seen as a set of tools for designers and developers, one that is mostly required only in specific situations.

What’s your opinion on the topic? What would you like to read next? Please leave your thoughts and ideas in the comment section below. Also, sharing is caring and keeps us motivated!