Why AI Matters – No magic wand but also no simple tool

Bylined article by Philipp Gerbert, senior partner in the Munich office of the Boston Consulting Group and a BCG Fellow analyzing the impact of Artificial Intelligence on businesses.

Artificial intelligence is very important. Trust me, I am the CEO of a tech giant: 'It is hard to overstate the impact AI will have' (Amazon), 'We are moving … to an AI first world' (Google), 'In five years AI will impact every decision we make' (IBM). And AI is good for you.

Citizens, businesses and governments alike cannot escape the onslaught of AI; our lives seem to be marked by ever new milestones: Jeopardy? Done! Board games? Done! Translation? Almost done! Driving? Working on it!

Given that the field of AI has been around for 60 years, one might wonder what just happened. One part of the answer is: nothing 'just happened'. We are simply throwing exponentially increasing Big Data and processing power at algorithms that are hungry for both. The other part of the answer is: we have seen dramatic improvements in two specific areas in recent years.

Vision, allowing machines to move and operate in the real world

Language, allowing machines to interact intuitively with humans and access a huge body of human knowledge

Both are now 'good enough' for many tasks, and the path to perfection seems clear.

In many ways this is just wonderful: we are helped in choosing books and films, we can talk to our smartphones and home assistants, we can navigate foreign cities, we get better disease detection and treatment, and so on. But it is also necessary: we simply would not be able to cope with all the data, from machine sensors to photographs, if we had to rely on our unaided capabilities or traditional computer programs. We would drown in complexity. And increasingly, as in financial markets, we would be too slow or too rigid.

Let us be more concrete. Even the seemingly simple task of classifying pictures turns out to be far too complex to solve with traditional, hand-coded computer programs. Instead we write a short algorithm, train it on millions of labeled images and have it 'learn' similarities and differences at various levels of abstraction. Eventually the algorithm is able to recognize: this is a cat, while this is a truck. More generally, AI is about agents that perceive, pursue goals, explore, adapt to change and operate autonomously to achieve optimal results. An advanced example is a self-driving car.
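The 'learning from labeled examples' idea can be made tangible with a deliberately tiny sketch. This is not how a real vision system works (those use deep neural networks trained on millions of actual images); it is a toy nearest-centroid classifier on invented three-number feature vectors that stand in for images, with all data and feature names made up for illustration:

```python
# Toy illustration of learning from labeled examples (NOT a real vision
# system): a nearest-centroid classifier over hypothetical feature
# vectors standing in for images. All data below is synthetic.

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # Average the feature vectors seen for each label.
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid is closest (squared distance)."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Hypothetical "training images": invented features such as
# (furriness, wheel-likeness, size) -- purely illustrative.
training_data = [
    ([0.9, 0.1, 0.2], "cat"),
    ([0.8, 0.0, 0.3], "cat"),
    ([0.1, 0.9, 0.8], "truck"),
    ([0.2, 0.8, 0.9], "truck"),
]
model = train(training_data)
print(classify(model, [0.85, 0.05, 0.25]))  # → cat
print(classify(model, [0.15, 0.95, 0.85]))  # → truck
```

The point of the sketch is the shape of the process, not the algorithm itself: nothing in the code says what a cat looks like; the notion emerges entirely from the training examples, which is exactly why the quality and quantity of those examples matter so much.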

So, having accepted that AI is a smart and arguably essential aid in our complex, data-rich world, the question remains: 'Why should I care? It is just another tool. But thank you, I do appreciate that I can use my voice for commands.'

Well, AI has some unusual properties that everyone might want to understand. An untrained algorithm is sort of like an untrained dog:

It cannot do much at first. You have to invest in training it with lots of examples, for instance in recognizing the smell of drugs.

Eventually the capabilities become useful.

It is never fully predictable how it will act in a certain situation.

It can even be hard in hindsight to dissect why precisely it did what it did.

It keeps evolving, sometimes quite fast, and the details of its actions change.

What is familiar in dogs is unusual in tools. Companies are used to learning about tools, but not to 'training' their programs in order to reap benefits. And we might not be prepared for the imperfections of AI, which is ultimately a 'best try based on current understanding': even a simple misclassification of a family picture might upset you, such as Google labeling your girlfriend a 'gorilla'.

So we would argue every person and company should develop an 'intuitive understanding' of AI in order to be able to address important questions, including:

How to define tasks that AI can address today and in the future?

What do people need to know about the training, its pitfalls and its consequences?

How do organizations learn to use AI?

What level of transparency and control do you want?

What rules should apply? Is strong statistical performance enough?

How will AI change what humans do, augmenting skills or automating tasks?

The good news is that the very simplicity of AI's basic ideas makes it realistic for everyone to develop such an 'intuitive understanding'. This would help not only to identify the areas where AI can be leveraged for large benefits, but also to guide us in implementing it. Such an understanding would also quell science fiction–triggered notions and fears, replacing them with an educated debate about useful rules, expected changes and ways to deal with them.

And last but not least, it might provide a fresh perspective on our own thoughts and actions as humans.