Creator of Expert Chatbots


By now, you must have heard the term “virtual assistants.” Their natural evolution is the virtual expert. Going from the former to the latter is a substantial technical challenge that few companies are willing to take on yet, because the simpler “assistant” version still has untapped commercial potential and a quick ROI. Nevertheless, virtual experts are the real game changer – a paradigm shift – that will have social and economic impact beyond our wildest imagination.

The rise of virtual experts is waiting only on the puzzle of the most effective machine learning approach.

One example of a virtual expert coming off our conveyor belt is DrCHAT, a virtual doctor for women’s health (in beta). DrCHAT encapsulates physicians’ expertise following the ACOG guidelines for evidence-based care, and is described further in the article “Artificial Intelligence (AI) in Medicine …”

Virtual Spokesperson

Companies that need to interact with clients beyond a Website presentation can launch a virtual company spokesperson. Vera is one example: she has absorbed several layers of company information via machine learning. Although her conversational skills do not match a real human’s, she is highly effective with genuine visitors who want information by chatting instead of surfing Web pages.

Virtual Tax Helper

As an example of converting documents into chatbots, the virtual expert Terry Kohen chats about the IRS Small Business Tax Guide (Publication 347). The conversation with Terry is limited to the scope of the IRS document, so it does not replicate the expertise of a human tax expert.

Virtual Guide – Smart Cities and Travel Safety

Geographic expertise is always in demand among travellers. The two most prominent areas for virtual guides are smart-city and travel-safety applications. While these specialties mature into full virtual experts, we have been testing a destination finder, Davis Hunter, using limited-scope Wikivoyage data.

New Opportunity to Monetize Expertise
The commercial impact of virtual experts will be driven by the scalability offered by chatbots.

While human experts can monetize only through face-to-face consultations, their virtual counterparts will be able to monetize through one-to-thousands consultations, simultaneously.

Even though such electronic consultations may command only small payments, high volume will push revenues to levels determined only by server capacity and market demand. That is the critical value point.

______________________________________

This article is brought to you by exClone, a chatbot technology provider.

Impersonating chatbots are one of those concepts that are just around the corner. They will add one more option to our online digital presence, alongside social networks, personal blogs, and the like. An immediate question is: why would anyone build their own chatbot? Here are five reasons why impersonating chatbots may take off sooner rather than later.

1. Share Your Ideas

A chatbot impersonating you is like a personal messenger that can tell others about your ideas, expertise, interpretations, and status. You can pack as much information as you want into your chatbot and update it as frequently as you like. When you review the conversation logs, you can see how people are reacting to your ideas.

Anonymous conversations with your chatbot let you test your ideas against real feedback, free of the social pressure to please.

2. Managerial Communication

If you manage a group in your business, you can build a chatbot to remind your workers of rules, regulations, milestones, visions, expectations, and much more. One-on-one conversations between a manager and a worker are usually awkward when the subject matter is rules and regulations.

Chatbots can be a polite way to fully inform your workers about rules, regulations, and what is expected of them.

3. Chatbot as Your Talking Resume

If you are looking for a job, your conventional resume may fall short of explaining who you really are. Your impersonating chatbot, on the other hand, can contain more of the social side of your life: pictures, videos, and appropriately selected “personal touch” bits of information. Whilst it can be considered annoying to toot your own horn during an actual interview, your chatbot can do that for you.

A chatbot as your talking resume can fill an important gap: the personal touch that may not be appropriate to share with a future employer during an interview.

4. Dating Game

Impersonating chatbots can easily become a vehicle to increase our social engagement by presenting ourselves in a unique manner. While many dating sites use personal information to make matches, a chatbot offers something new to both the chatbot owner and the people talking to it. On one end, the anonymous talker can ask tough and private questions freely. On the other end, the chatbot owner can make selections from the conversation logs.

Social selection based on chatbot presentation and chatbot conversation can be a new avenue for dating.

5. Digital Life After Death

Either for personal reasons or for educational purposes, life after death may be possible in digital form. Impersonating chatbots are the first step in this direction.

Chatting with the deceased via chatbots may keep us better acquainted with, and more aware of, our heritage and history.

CHATBOT CREATION by EDITORIAL EFFORT

All these avenues will become possible only if chatbot creation is reduced to a mere editorial effort. It should not require any coding, corpus training, or AI experience. Everyone should be able to build one just by writing and curating content. Here is an example of my impersonating chatbot, which I built using our editorial platform. The whole process is straightforward and fast as long as you have your content ready.

Another example is a chatbot impersonating Abraham Lincoln, built in the same manner for educational purposes.

The deployment is automatic: a public URL is created for your chatbot which you can share. Let us know what other creative reasons you can come up with for impersonating chatbots.

Let’s assume that you have a very simple business and you want to deploy a chatbot for customer support. Suppose 100 questions and answers (Q/A)s would cover all your issues. It looks very simple, and you may be tempted to deploy one of the deep learning methods to build your chatbot. Here are the problems you are going to face:

COMBINATORY EXPLOSION IN NATURAL LANGUAGES

Unless you are a trained linguist, you might easily underestimate how flexible natural language can be, and how explosively combinations emerge out of a single question. Say your first Q/A starts with a basic complaint such as “I have a problem with my cable.” This simple statement can be expressed in more than a dozen ways, and the combinations do not end there!

Take any one of those expressions, and morphological and synonym variations alone can produce another dozen combinations.

As you can see, this is only the first Q/A from your set of 100. Just imagine if some of the (Q/A)s you have are more complicated than this simple starting expression.

Your set of 100 (Q/A)s can easily amount to 10,000 different equivalent expressions that users may type, all of which must be detected and understood by your chatbot software.
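To see how quickly the combinations multiply, here is a small illustrative sketch. All of the phrasings and synonym slots are made up for the example; the point is only the arithmetic of combining them.

```python
# Illustrative sketch (hypothetical phrasings): synonym and template slots
# multiply a single complaint into many equivalent surface forms.
from itertools import product

openers = ["I have", "I've got", "There is", "I am having"]
problems = ["a problem", "an issue", "trouble"]
links = ["with my", "with the"]
subjects = ["cable", "cable service", "cable connection"]

variants = [" ".join(parts) for parts in product(openers, problems, links, subjects)]
print(len(variants))   # 4 * 3 * 2 * 3 = 72 surface forms from one intent
print(variants[0])     # "I have a problem with my cable"
```

Four small word lists already yield 72 variants of a single intent; real users add misspellings, word-order changes, and slang on top of that.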

SO, WHAT IS THE PROBLEM?

The problem is not the deep learning method itself, but what it needs to function properly. You need a data set of 10,000 questions, if not more, that are linguistically equivalent expressions of this kind. These 10,000 questions must also map to the 100 answers in this hypothetical case. Unless someone sits down and types them one by one, such a data set is a nightmare to acquire.

If you already have a customer support system and have collected, say, 1 million (Q/A)s, there is still no guarantee that they will cover the 10,000 linguistic variations needed to detect the 100 main (Q/A)s. Given the roughly Gaussian distribution of typical user responses, 1 million (Q/A)s would cover less than 30% of your required data set. After all that trouble, your chatbot would remain vulnerable to undetected responses.

Consequently, someone deploying a deep learning method will quickly find himself or herself in a data crisis. No matter what type of deep learning method you deploy, the data requirement described here holds. Neural networks cannot discover equivalent variations of natural language on their own without being provided ample examples. And I want to underline the word “ample” here.

OTHER TYPES OF DATA CRISES WITH DEEP LEARNING

Let’s go back to the hypothetical case where you have a service operation and can pull 1 million (Q/A)s. To make sure this data set causes no harm, someone must manually go through it to clean it up. You cannot just dump data into a deep learning system without verifying it. Remember the Microsoft case, where Tay, the chatbot trained on Twitter feeds, started to produce racist statements.

The learn-as-you-go approach also poses problems. Deep learning methods require a training process and convergence before deployment, which can take a long time. Once trained, the system cannot simply absorb new data incrementally; the entire data set must be trained again. As a result, if you plan to add new data to your chatbot every week, you need a team of AI specialists retraining and redeploying the system every week. As one can imagine, this is not a scalable business solution.
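The contrast between full retraining and incremental learning can be sketched with a toy instance-based matcher. The class and the Q/A pairs below are made up for illustration; the point is that "learning" a new pair here is a constant-time append, whereas a parametric deep model would need its whole corpus retrained.

```python
# Sketch (simplified): an instance-based Q/A matcher where adding a new
# pair is an O(1) append -- no retraining pass over the whole data set.

def tokens(text):
    """Crude normalization: lowercase, split on whitespace."""
    return set(text.lower().split())

class InstanceMatcher:
    def __init__(self):
        self.pairs = []               # list of (question-token-set, answer)

    def add(self, question, answer):  # incremental update: just append
        self.pairs.append((tokens(question), answer))

    def answer(self, query):
        q = tokens(query)
        # return the answer whose stored question has the largest
        # Jaccard overlap (shared tokens / total tokens) with the query
        best = max(self.pairs, key=lambda p: len(q & p[0]) / len(q | p[0]))
        return best[1]

bot = InstanceMatcher()
bot.add("I have a problem with my cable", "Let's troubleshoot your cable.")
bot.add("How do I pay my bill", "You can pay online or by phone.")
print(bot.answer("my cable has a problem"))  # matches the cable Q/A
```

Of course, a toy like this dodges the hard NLP problems; it only shows what an architecture that supports weekly additions without full retraining would have to look like.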

WILL USING BUTTONS SOLVE THE PROBLEM?

When Facebook launched its chatbot platform, it assumed that buttonizing conversations could solve part of the combinatory explosion problem described here. First of all, let’s make one thing clear:

If the user is not allowed to enter free expressions any time during conversation, it is no longer a chatbot, or conversational AI. It is a toy.

Most Facebook chatbot developers jumped on the idea of buttonizing entire conversations, thus yielding nothing more than toys. Most of the 30,000-plus chatbots developed in this fashion flopped; only a few succeeded, as reported in several articles prior to Facebook’s recent summit meeting. Entirely buttonized conversations can provide successful solutions only for very particular business types. If buttons are used alongside free expressions that are successfully detected, however, the combination can be powerful.

WHERE IS THE SOLUTION?

I intend to write more about the solutions later. In a nutshell, however, solutions to the chatbot problem require independent NLP solutions before deep learning methods can be used. One thing is for sure: deep learning alone is not a good fit, and the “silver bullet” engineering mentality around it has no future.

There is a very easy distinction between a chatbot and a search engine that explains almost everything: SHORT-TERM MEMORY.

A search engine, like Google, has no short-term memory. Google takes your query and brings back results. The job is done. Your next query is completely new to it: a new session with no ties to the previous one.

A chatbot, on the other hand, can remember 2, 3, 4, or N steps back, which gives it a huge advantage in responding with better focus and higher accuracy. Applications like “advisor” chatbots especially can take advantage of this fact. However, remembering N steps back poses a challenging technical problem that can grow in a combinatory fashion. Without getting into such technical details, let’s see an example.
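A minimal sketch can make the distinction concrete. The `MemoryBot` class and its canned replies below are invented for illustration: the bot keeps the last N turns in a bounded queue and uses them to interpret a bare “no” that a stateless search engine could not.

```python
# Minimal sketch of short-term memory: keep the last N turns and use them
# to resolve a follow-up utterance that is meaningless on its own.
from collections import deque

class MemoryBot:
    def __init__(self, n=4):
        self.history = deque(maxlen=n)   # short-term memory: last n turns

    def reply(self, utterance):
        text = utterance.lower().strip()
        if "cable" in text:
            answer = "I see you have a cable issue. Is the modem light on?"
        elif text in ("yes", "no") and self.history:
            # a bare yes/no only makes sense relative to the previous turn
            prev = self.history[-1]
            answer = f"Noted '{text}' as an answer to: {prev!r}"
        else:
            answer = "Could you tell me more?"
        self.history.append(answer)      # remember this turn for later
        return answer

bot = MemoryBot()
print(bot.reply("I have a problem with my cable"))
print(bot.reply("no"))   # resolvable only because of the remembered turn
```

A stateless engine would treat “no” as a fresh, unanswerable query; the deque is what lets the bot tie it back to the pending question.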

Multiple Questioning Before Presenting Answers
A showcase example is the chatbot Davis Hunter, which is designed to find you new travel destinations based on your choices. Its multiple-questioning operation runs on short-term memory.

At the end of the questioning steps, the chatbot presents travel destinations with precision. It has used its short-term memory to remember all your inputs before deciding on its list of destinations. Once the user selects one of the options, Davis starts to present more information about the destination using free content from Wikivoyage.
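The multiple-questioning pattern itself is simple to sketch. Everything below is hypothetical: the destination table, the question keys, and the filtering logic stand in for whatever Davis Hunter actually does. The essential idea is that every answer is remembered and applied as a filter before any recommendation is shown.

```python
# Hypothetical sketch of the multiple-questioning pattern: collect answers
# step by step (short-term memory), then filter a made-up destination table.
DESTINATIONS = [
    {"name": "Lisbon",    "climate": "warm", "budget": "low",  "style": "city"},
    {"name": "Reykjavik", "climate": "cold", "budget": "high", "style": "nature"},
    {"name": "Bangkok",   "climate": "warm", "budget": "low",  "style": "city"},
    {"name": "Zermatt",   "climate": "cold", "budget": "high", "style": "nature"},
]

QUESTIONS = ["climate", "budget", "style"]   # asked one turn at a time

def advise(answers):
    """answers: dict accumulated over the questioning turns."""
    matches = DESTINATIONS
    for key in QUESTIONS:                    # apply each remembered answer
        matches = [d for d in matches if d[key] == answers[key]]
    return [d["name"] for d in matches]

print(advise({"climate": "warm", "budget": "low", "style": "city"}))
# -> ['Lisbon', 'Bangkok']
```

Swap the destination table for products, diagnoses, or tax situations and the same loop becomes the skeleton of any advisory chatbot.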

The operation described above is a blueprint for any kind of advisory chatbot in any subject.

Search will Shift to Specialized Chatbots in the Near Future
It is fair to assume that conventional search will die out as the “Google generation” is steadily replaced by the “Siri generation,” who are more inclined to use messaging and chat platforms. This transformation is already at work and is expected to accelerate as chatbots improve and spread into every vertical.

The expectation that a search engine user will sift through dozens of inaccurate results is becoming obsolete and intolerable for a new generation that grew up with persistent messaging habits highly suited to chatbot interaction.

The key point in this transformation is the ability to create quality chatbots with an easy and familiar effort (like writing a blog entry), which would accelerate the proliferation of viable chatbots in every subject.