Traditionally, unexpected results in computer programs are called bugs. A computer should produce the results we expect; otherwise it’s faulty and needs to be fixed. We’re so used to the bipolar world of true vs. false, right vs. wrong, that we judge our computer systems on this basis. It’s how we examine students, how we pass sentence in court, how we test machines.

In short: computer systems are objective and humans are subjective. That’s why we use computers. Computing has been on a life-long quest to improve the results it gives its users. Whether it concerns forecasting the weather, applying nanotechnology, searching the internet or doing scientific research, we want more accurate and precise results than before.

But nowadays we’re building systems that don’t give the right answer all the time. When we build systems that interact closely with humans, like chatbots, we need computers to act and speak like humans, with all the quirks and illogicalities inherent to humans. And when we want to exploit human knowledge, we’re likely to encounter contradictions, arguments, opinions and plain errors, not just so-called objective facts.

Chatbots should be like humans

Conversational chatbots are made to interact with humans through question-and-answer dialogue. They act as an interface between man and machine. We can use chatbots to create customer service or self-service agents, available 24/7 and extremely scalable.

Bots should respond adequately to any user question. Dialogues based on prepared question-and-answer scripts create rather blunt bots, unable to interpret the meaning and intent behind the vast array of possible questions, and therefore unable to find the most appropriate answers either. Not so good for adequate interaction.
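A minimal sketch can show why scripted bots are so blunt. The questions and answers below are hypothetical, and real chatbots use far richer intent models, but even a simple string-similarity matcher (Python's `difflib`) copes with rephrasings that an exact script lookup does not:

```python
import difflib

# A toy scripted bot: exact question-to-answer lookup.
# (Hypothetical questions and answers, for illustration only.)
script = {
    "what are your opening hours?": "We are open 9:00-17:00, Monday to Friday.",
    "how do i reset my password?": "Use the 'Forgot password' link on the login page.",
}

FALLBACK = "Sorry, I don't understand."

def scripted_bot(question: str) -> str:
    # Only an exact match on the prepared script succeeds.
    return script.get(question.lower().strip(), FALLBACK)

def fuzzy_bot(question: str, cutoff: float = 0.6) -> str:
    # Match against the script by string similarity instead,
    # so small rephrasings still find the intended answer.
    matches = difflib.get_close_matches(
        question.lower().strip(), script.keys(), n=1, cutoff=cutoff)
    return script[matches[0]] if matches else FALLBACK

# The scripted bot stumbles over a missing question mark;
# the similarity-based matcher does not.
question = "what are your opening hours"
print(scripted_bot(question))  # falls through to the fallback
print(fuzzy_bot(question))     # finds the prepared answer
```

String similarity is of course a crude stand-in for real intent recognition, but the failure mode is the same: any bot that matches the surface form of a question, rather than its meaning, breaks on the vast array of ways humans phrase the same request.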

Chatbots must apply deep learning to improve their conversations with people. In this way a bot can adapt its responses to its audience: their knowledge levels, cultural backgrounds, emotions and so forth. Bots will learn from their past behavior, analyze the effectiveness of their conversations and apply what they have learned in upcoming interactions.

They not only use artificial intelligence to determine the personality of the humans they communicate with, but also apply so-called cognitive technologies to find the best answer to the wide range of possible questions those humans can ask.

Deep learning techniques change the behavior of a system over time: it adapts to its audience. Microsoft discovered that their Twitter bot adapted himself, or herself, too much. It turned plainly racist and had to be switched off. Rightfully so, but it also showed how deep learning can lead to unexpected results. Well, unexpected… if Microsoft had known their audience beforehand, what else could they have expected?

Cognitive promises you the unexpected

In cognitive computing, a fast-rising branch of artificial intelligence, we don’t even want pre-defined results. Cognitive computing promises to present new insights that you couldn’t have found yourself. It bases its results on a body of human knowledge, somewhat like a specialist’s library. These insights can be used to explore new marketing possibilities, detect rare diseases and find hidden information. We use artificial intelligence and deep learning to find these new insights, insights that we didn’t expect beforehand.

When you query a cognitive system, it gives you results based on probability. It will not show right or wrong answers, but the most probable answers. If you’re lucky, the system will tell you the probability as a percentage. That helps.

“With deep learning, unexpected results are no longer bugs; they are computer behavior that surprises us. From now on, we should reserve the word ‘bug’ for undesired or unintended results.” (the author)

For example, when you ask IBM Watson Health which disease fits a certain set of symptoms, it gives a set of the most probable diseases, disorders or illnesses that apply to those symptoms. It’s up to the expert, the doctor, to probe further and find the right answer. Well, the most applicable answer.
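The shape of such a probabilistic answer can be sketched in a few lines. This is not how Watson Health works internally; it is a toy illustration with made-up conditions and symptoms, scoring each condition by symptom overlap and normalizing the scores so they read as rough percentages:

```python
# Hypothetical toy knowledge base: each condition is described by a set
# of typical symptoms. Real systems learn such associations from a large
# body of medical literature; these entries are made up for illustration.
knowledge = {
    "common cold": {"cough", "sneezing", "sore throat", "runny nose"},
    "influenza":   {"cough", "fever", "fatigue", "sore throat"},
    "allergy":     {"sneezing", "runny nose", "itchy eyes"},
}

def rank_conditions(symptoms: set) -> list:
    # Score each condition by how many of the given symptoms it shares,
    # then normalize so the scores sum to 1 and can be read as
    # rough probabilities rather than right-or-wrong verdicts.
    scores = {name: len(symptoms & s) for name, s in knowledge.items()}
    total = sum(scores.values()) or 1
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, score / total) for name, score in ranked]

for name, p in rank_conditions({"cough", "sore throat", "fever"}):
    print(f"{name}: {p:.0%}")
```

The point is the output format, not the medicine: the system returns a ranked list with a confidence figure per answer, and the human expert decides what to do with it.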

Expect the unexpected

Artificial intelligence, deep learning for example, will bring us computer systems that give us unintended results, unexpected behavior or unforeseen advice. We should get used to the fact that computers are fallible and will not give us the right answer all the time.

How can we judge the results those systems give us? In some cases, we can’t. But as we want computers to support humans more closely, they will have to act more like humans. When we apply deep learning techniques, we don’t know beforehand what the systems will learn, how they will behave in the long run or what knowledge they will gain. We do not know how creative they will become in finding new answers to our problems.

Only when we expect the unexpected can we fully exploit the possibilities of cognitive computing, deep learning, machine learning and all the other advances in artificial intelligence. And the next time such a computer gives you unpredicted results, it might be true (and not a bug).