Featured image: Content to leave the work to others/press master, Adobe Stock

If you don’t know, you’re in good company. Kalev Leetaru, a technology entrepreneur and recurring contributor at Forbes, points out in a recent piece that most engineers who use AI don’t know themselves. While it may not matter if we don’t know, it matters a good deal that they don’t know.

Leetaru suggests that engineers fail to dig down to the mechanism, at least in part, because they get enamored with the seeming “magic” of Deep Learning:

Deploying one’s first machine learning algorithm can in many ways be like experiencing magic for the first time. Somehow, without any coding, this piece of software has ‘learned’ the underlying patterns of its training data and applied its new knowledge to achieve quite reasonable results on novel input data.

Much like watching a baby take its first steps, the wonderment and magic of this experience lies not in the quality of the results, but rather in the amazement of the moment.

That’s part of the cause. There is another factor: The Internet’s culture of sharing code.

When businesses first adopted computers, they had to license the software they relied on. Much business software from leading technology companies such as Microsoft, IBM, and Oracle is still sold through software licenses. But the software powering much of the Internet is different: It is shared more than it is licensed.

Modern AI applications also have shared roots. Sometimes developers obtain code that another team developed, such as Google’s TensorFlow. Or they use it through a service, such as the services Microsoft, Amazon, and Google offer.

Why does sharing matter? Because software developers can be as lazy as the rest of us, Leetaru notes:

… they practice their trade merely by plucking pre-made canned algorithms off the shelf, pointing them to a directory of training and testing data and following the instructions to twist a few knobs until the accuracy level is high enough, before deploying to production and moving on to the next project.
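The workflow Leetaru describes is short enough to sketch. The following is a hypothetical, minimal example of that "canned algorithm" pattern, using scikit-learn and its bundled iris dataset as stand-ins for whatever library and data directory a team might actually pick; none of this comes from Leetaru's piece, it merely illustrates how little understanding the workflow demands:

```python
# A sketch of the "off-the-shelf" workflow: pick a canned algorithm,
# point it at data, twist a knob or two, check the accuracy number.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# "Point them to a directory of training and testing data"
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Twist a few knobs" -- here, just the number of trees
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # the part that looks like "magic"

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {accuracy:.2f}")  # "high enough"? deploy and move on
```

Nothing in those fifteen lines requires knowing what a random forest is, why it works, or where it fails, which is precisely the point.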

If it were their own code, they would dig in when an odd behavior (that is, a bug) arises to determine its cause. Instead, they consider it “an unavoidable retraining requirement.”

Because they’ve chosen to not deeply learn their deep learning systems—continuing to believe in the “magic”—the limitations of the systems elude them. Failures “are seen as merely the result of too little training data rather than existential limitations of their correlative approach” (Leetaru). This widespread lack of understanding leads to misuse and abuse of what can be, in the right venue, a useful technology.

Anyone can borrow a drawer full of tools. Knowing when and why to use a tool, and knowing its limitations, separates the craftsperson from the novice. Sadly, too many AI engineers work as novices rather than using their full humanity to make good, informed decisions. That puts us all at risk.

Also by Brendan Dixon: If You Think Common Sense Is Easy to Acquire… Try teaching it to a state-of-the-art self-driving car. Start with snowmen.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence

Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and numerous start-ups. While he spent most of that time on other types of software, he’s remained engaged and interested in Artificial Intelligence.

Mind Matters features original news and analysis at the intersection of artificial and natural intelligence. Through articles and podcasts, it explores issues, challenges, and controversies relating to human and artificial intelligence from a perspective that values the unique capabilities of human beings. Mind Matters is published by the Walter Bradley Center for Natural and Artificial Intelligence.