
Layers Of Conversation – 3 Contextual Layers

Virtual chatbots today should use a three-layer context architecture in order to give humans the answers they expect.

Conversation with a robot vs. a human

If you ask an ordinary person a simple question such as “how are you doing?”, you will probably be able to handle whatever answer you get. You will also be able to reply easily in a suitable context and continue the conversation. A computer, on the other hand, cannot form a legitimate answer in every context, which can terminate the conversation altogether. The range of possible answers to that question is vast, and the layers of context can be even more diverse.

To understand why bots are so limited, I will use my favorite example – a hotel bot. Let’s say the bot is helping someone select a room. It might ask qualifying questions like “would you be interested in half board accommodation?” and later “would you consider a lake view room?” The answer in both cases could be identical: “Yes”. So how can a bot distinguish between one “Yes” and the other? By associating each answer with the context of the question it follows.
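As a minimal sketch of that idea – using a hypothetical `HotelBot` class and slot names of my own invention – the bot can remember the question it just asked and read a bare “Yes” against it:

```python
class HotelBot:
    def __init__(self):
        self.last_question = None  # the local context: the pending question
        self.booking = {}

    def ask(self, slot, text):
        # Remember which question is pending before the answer arrives.
        self.last_question = slot
        return text

    def hear(self, answer):
        # "Yes" alone is ambiguous; the stored question disambiguates it.
        if answer.lower() in ("yes", "i think so") and self.last_question:
            self.booking[self.last_question] = True
        return self.booking

bot = HotelBot()
bot.ask("half_board", "Would you be interested in half board accommodation?")
bot.hear("Yes")   # applies to half board
bot.ask("lake_view", "Would you consider a lake view room?")
bot.hear("Yes")   # the same word, now meaning something else
print(bot.booking)  # {'half_board': True, 'lake_view': True}
```

The two identical answers end up attached to two different booking details, because each was interpreted inside the context of its own question.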

Bots these days use superficial context techniques, so this is pretty much all they are capable of. Does today’s technology allow us to use more advanced context techniques in a conversational bot? Of course it does. Before I show one way of doing so, I want to stress how complex it is to apply context to a bot, even one that only has to handle a simple task like booking a hotel.

The problem of programming context

Let’s assume the virtual conversational bot asks “are you interested in a regular room or a deluxe suite?” Immediately after asking, the bot prepares itself for the answer, which it is programmed to understand only as “yes” or “no”. It can also manage answers like “I think so” or “I’m not sure”, as long as the context is associated with the specific question that was asked.

But what happens if the user suddenly goes off topic and derails the whole flow, as people tend to do? For instance, if the bot asks “are you interested in a regular room or a deluxe suite?”, the user might respond with “what’s the difference between the two room types?” or “how many rooms are there in total?”, or even jump to a different subject and ask for offers at other hotels.

With this common superficial context technique, the only way to chart such a flow would be on a piece of paper larger than Earth; in short, it is impossible. Let’s look at a better way past this barrier, with a more appropriate use of the term ‘context’.
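To illustrate why a single shared routing step beats an exploding flow chart, here is a sketch – the intent names and keyword cues are illustrative assumptions, not a real NLU engine – where every node reuses one check for off-topic turns instead of enumerating them everywhere:

```python
# Hypothetical off-topic intents the bot can recognize at any point in the flow.
INTENTS = {
    "compare_rooms": ["difference"],
    "hotel_size": ["how many rooms"],
    "other_hotels": ["other hotels"],
}

def route(pending_question, user_text):
    text = user_text.lower()
    for intent, cues in INTENTS.items():
        if any(cue in text for cue in cues):
            return intent           # off-topic: answer it, then resume the flow
    return pending_question        # on-topic: treat it as an answer to the question

print(route("room_type", "What's the difference between both room types?"))  # compare_rooms
print(route("room_type", "A deluxe suite, please"))                          # room_type
```

One routing function serves every node of the flow, so adding a new off-topic question means adding one intent, not redrawing the entire chart.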

Layers of conversation – an optimal solution

To do the above, I will create three levels of context for the virtual chatbot, all of which remain on standby. The bot should then apply the relevant level at the right stage of the conversation:

Local context

Regional context

Cumulative context

Local context – this is the most common context applied in today’s bots: a specific reference to a given action in the virtual chat (a sentence or comment).

Regional context – here we create a context for a certain region of the conversation that can be assigned a topic. A regional context contains at least two local contexts. For example, if at a certain point in the conversation the user hesitates between two hotels, everything the user says in this region is interpreted by the bot as if the conversation is about those two hotels. The context is precise – not just one hotel, nor three or four; only the two hotels the region is about.

Cumulative context – this is context affected by the growing amount of information the user provides as the conversation progresses. It is quite hard to determine which information to keep and how to use it later when it is needed, but essentially, when the user asks a question, the bot’s answer is affected by the question’s position in the conversation. Even if the user repeats the same question, the virtual chatbot will reply differently depending on where in the conversation it is asked. The difference depends on the data the bot has gathered since the beginning of the conversation, or from a previous chat with the same user.
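The three layers held on standby can be sketched as one small data structure – the class name, fields, and the pick-a-layer heuristic here are my own assumptions, simplified to the bare bones:

```python
class Conversation:
    def __init__(self):
        self.local = None      # the question the bot just asked
        self.regional = set()  # e.g. the hotels currently under discussion
        self.cumulative = []   # everything learned so far, in order

    def note(self, fact):
        self.cumulative.append(fact)

    def context_for(self, utterance):
        # Simplified heuristic: the most specific layer that fits the utterance wins.
        if self.local and utterance.lower() in ("yes", "no"):
            return ("local", self.local)
        if self.regional:
            return ("regional", self.regional)
        return ("cumulative", self.cumulative)

c = Conversation()
c.regional = {"Hotel A", "Hotel B"}
layer, ctx = c.context_for("Which one has a spa?")
print(layer)  # regional: the question is read as being about these two hotels
```

The point of the structure is that all three layers exist at once; only the selection logic decides which one interprets the current utterance.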

Before proceeding to a simple example, I would add that every hotel bot must be equipped with all available knowledge about the hotels in its area. There may be a large amount of data for each hotel, from room sizes and menus for each meal to hotel policies – even the price of a bottle of soda served at dinner.

An example of cumulative context

Let’s suppose the user asks “is there a pool in the hotel?” Since there are 2 pools in the hotel (hotel A), the bot simply answers “there are 2 pools in the hotel, one for adults and one for children”.

But what if the user asks the same question about the second hotel that was offered (hotel B), which in our case has no pool? In this case the bot should reply “no, hotel B has no pools at all, but hotel A (which was offered previously) has 2: one for children and one for adults”.

In other words, I have created a basic rule: when the user inquires about a certain hotel attraction, the virtual chatbot should reply with information about the hotels mentioned previously, as well as hotels that will be presented later on. In this case the cumulative context overrides the regional context, and as a result the bot answers in a more extensive manner. These are the kinds of answers a human salesman would give – and a good one would mention other hotels with pools if he thought it necessary.

This rule can obviously be filtered: if the number of hotels that could be suggested is too large, the bot will only cover 2–3 selected hotels so the user is not overwhelmed by a flood of information.
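The pool rule above, including the cap on how many earlier hotels to mention, can be sketched like this – the hotel data, function name, and answer wording are illustrative assumptions:

```python
# Hypothetical hotel knowledge base, mirroring the example in the text.
HOTELS = {
    "Hotel A": {"pools": 2},
    "Hotel B": {"pools": 0},
}

def answer_pools(current_hotel, mentioned_so_far, limit=3):
    n = HOTELS[current_hotel]["pools"]
    if n:
        return f"There are {n} pools in {current_hotel}."
    # Cumulative context overrides the regional one: look back at hotels
    # mentioned earlier, capped so the user is not flooded with information.
    earlier = [h for h in mentioned_so_far
               if h != current_hotel and HOTELS[h]["pools"] > 0][:limit]
    if earlier:
        extra = ", but " + " and ".join(
            f"{h} (offered previously) has {HOTELS[h]['pools']}" for h in earlier)
    else:
        extra = ""
    return f"No, {current_hotel} has no pools{extra}."

print(answer_pools("Hotel B", ["Hotel A", "Hotel B"]))
# No, Hotel B has no pools, but Hotel A (offered previously) has 2.
```

A regional-only bot would have stopped at “no pools”; the cumulative list of previously mentioned hotels is what makes the longer, salesman-like answer possible.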

Cumulative context – example 2

Let’s suppose the user asks “how many rooms are there in the hotel?” Seemingly, the answer should always be the same – “230 rooms”. But by the cumulative method, that is not enough. If, for example, the client was previously inquiring about a pool, the bot should answer “230 rooms, 105 of them with a pool view”. And if the client was previously inquiring about a suite, the answer would be “230 rooms, 24 of them suites”. Why did I set this rule? Because that is what a reasonable human being would do throughout a conversation – connecting his answer to the wider context.
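This second rule can be sketched as well – the room figures come from the example above, while the topic-detection keywords and function name are illustrative assumptions:

```python
ROOMS_TOTAL = 230
# Detail to append, keyed by a topic the user raised earlier in the chat.
BREAKDOWN = {
    "pool": "105 of them have a pool view",
    "suite": "24 of them are suites",
}

def answer_room_count(history):
    base = f"{ROOMS_TOTAL} rooms"
    # Walk the history backwards: the most recently raised topic wins.
    for utterance in reversed(history):
        for topic, detail in BREAKDOWN.items():
            if topic in utterance.lower():
                return f"{base}, {detail}"
    return base

print(answer_room_count(["Is there a pool?"]))
# 230 rooms, 105 of them have a pool view
print(answer_room_count(["Do you have a suite?"]))
# 230 rooms, 24 of them are suites
```

The identical question yields different answers purely because of what accumulated before it – which is exactly what distinguishes the cumulative layer from the other two.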

The goal is to shape many rules with many contexts across these three layers. Once that goal is reached, the objective is to program the virtual chatbot to identify the right timing for using each layer. This is not an easy task and requires a lot of creativity. The scriptwriting is vital: the bot’s scriptwriter must break down many layers of conversation while taking communicative, psychological and social strata into consideration. And we still haven’t covered how to characterize the virtual conversational bot and build dialog patterns, which in my opinion is the most fun part of the process.

Let’s end by circling back to the question “how are you doing?” Well, I’m fine, thank you all for asking.