Ambiguous words and phrases are the pillars of many presentations and marketing campaigns. It is no small irony that ambiguity can capture the interest of the masses while at the same time causing great discomfort inside organizations when strategy, vision, or responsibilities are ambiguous.

Nevertheless, here we are with buzzwords — ideas expressed without precision that become the way people think and talk about things.

The example of cloud computing comes to mind (hereafter known ambiguously as “cloud”). Initially, there may have been some shared understanding of what the term meant, but critical mass leads to an acceptance of misuse akin to the groupthink anti-pattern. But I want to focus on Artificial Intelligence. Google defines it as:

The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Let’s break this down. The list of tasks normally requiring human intelligence we could call the backlog of the Artificial Intelligence product. Many of these tasks were completed long before computers were invented. Portable clocks and maps replaced the human intelligence required to navigate the sea by the sky and the sun. Written language allows humans to transfer information that had previously been transferred orally. Speaking requires human intelligence (one may argue there is variance in the intelligence employed, but it is nonetheless required). That knowledge is transferred faster and more accurately with a book. Is a book a kind of Artificial Intelligence?

That sounds absurd. We all know Artificial Intelligence means the creation of consciousness in a machine. Right?

That includes visual perception, speech recognition, decision-making…add the ability to write code and we have the singularity. This is part of what makes AI “sexy” — and it scares us, too.

I propose that we deconstruct this to better understand what we are really talking about.

This article does an outstanding job of distinguishing between the concepts of Artificial Intelligence and machine learning, where machine learning is basically a means to an artificially intelligent end. I would like to go further and say this: Artificial Intelligence is a flawed concept. It represents two concerns that can be decoupled to remove ambiguity. One concern is the advancement of the human condition, the other is our fear of being replaced.

Intelligence in the physical world comes in two kinds of containers: biological and mechanical. To better understand the impact of the word “artificial,” we need to look at how the word “artifice” is defined. Again Google:

Clever or cunning devices or expedients, especially as used to trick or deceive others.

This brings to mind Artificial Intelligence tests where a machine is substituted for a human in a game. We had no idea it was a machine! But the purpose of these exercises is not to surprise people. The purpose is to improve machine decision-making to the point that it surpasses human intuition.

This often leads to another misconception: that machine intelligence ultimately seeks to replace human intelligence altogether. This adds to the “scariness” that makes for enticing marketing but does not reflect actual cases of machine intelligence applied to real-world problems. Nor does it help the cause if our goal is to further intelligence in general, biological or otherwise.

Biological and machine intelligence began their coexistence with the first evidence of human technology directly influencing human evolution. They also are not mutually exclusive in any way.

I know this is not new or original, but stating these facts over and over will help establish a pattern to combat the Artificial Intelligence anti-pattern:

Biological intelligence is extended by machine intelligence, and advances in machine intelligence have historically improved our living condition more than diminished it.

Focusing on the “sexy” and “scary” AI fantasies does not improve our living condition in any substantial way.

Anyone who has read Daniel Kahneman’s work is familiar with his two kinds of thinking: fast/intuitive and slow/analytic. Machines vastly exceed human intelligence at the latter. Machines are gaining ground in the former (speech, visual recognition, some complex decisions), but are still far from human level in synthesizing the two. His framework aligns closely with technological progress as we have experienced it.

AI fantasies assume that this synthesis will render humans completely superfluous (and that, in the worst case, machines will therefore want to replace or destroy us).

This will take time to debunk, but I believe it is worth trying. Let’s stop using the term “Artificial Intelligence” altogether. Let’s use “machine intelligence” instead. Right now, Google’s first choice for a definition of machine intelligence is this:

Another term for Artificial Intelligence. Advances in machine intelligence may someday threaten our sense of selfhood.

This is not going to be easy.

We should do everything possible to dispel the belief in a mythical singularity. This is apocalyptic thinking, a known anti-pattern that makes for great theater and sometimes costs lives.

We should do everything possible to educate. Artificial Intelligence is often cast as a specter that steals jobs in the night. The only way to design an autonomous version of anything is to understand that thing, then code it. If the future is automation, the people who perform those tasks now should be the first in line for training.

We should also get used to making more decisions based on data, not perception or intuition alone. If there is any area where machines are teaching us right now, that is definitely one.

A LinkedIn connection shared this article, which prompted me to share these thoughts. The article itself is concerned with ethics, but it raises the AI specter more than it treats the ethical implications with any sincerity.