Why AI can't solve everything

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

AI solutionism

In only a few years, AI solutionism has made its way from the technology evangelists' mouths in Silicon Valley to the minds of government officials and policymakers around the world. The pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

We are now seeing governments pledge support to national AI initiatives and compete in a technological and rhetorical arms race to dominate the burgeoning machine learning sector. For example, the UK government has vowed to invest £300m in AI research to position itself as a leader in the field. Enamoured with the transformative potential of AI, the French president Emmanuel Macron committed to turn France into a global AI hub. Meanwhile, the Chinese government is increasing its AI prowess with a national plan to create a Chinese AI industry worth US$150 billion by 2030. AI solutionism is on the rise and it is here to stay.

Neural networks – easier said than done

While many political manifestos tout the transformative effects of the looming "AI revolution", they tend to understate the complexity around deploying advanced machine learning systems in the real world.

One of the most promising varieties of AI technology is the neural network. This form of machine learning is loosely modelled on the neuronal structure of the human brain, but on a much smaller scale. Many AI-based products use neural networks to infer patterns and rules from large volumes of data. But what many politicians do not understand is that simply adding a neural network to a problem will not automatically mean that you'll find a solution. Similarly, adding a neural network to a democracy does not mean it will be instantaneously more inclusive, fair or personalised.
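To make the idea of "inferring rules from data" concrete, here is a deliberately tiny sketch, invented purely for illustration and not drawn from any system mentioned here: a single artificial neuron trained with the classic perceptron rule to learn the logical AND function from labelled examples.

```python
# A toy perceptron (illustrative only): one "neuron" learns the logical
# AND function purely from labelled examples, never from an explicit rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two weights and a bias from labelled ((x1, x2), target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Perceptron update: nudge weights towards reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND function given purely as data points:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Even this toy shows the pattern: the learned "rule" lives in a few numbers, and nothing about the procedure guarantees that it generalises beyond the examples it was shown.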

Challenging the data bureaucracy

AI systems need a lot of data to function, but the public sector typically does not have the appropriate data infrastructure to support advanced machine learning. Most of the data remains stored in offline archives. The few digitised sources of data that exist tend to be buried in bureaucracy. More often than not, data is spread across different government departments that each require special permissions to be accessed. Above all, the public sector typically lacks the human talent with the right technological capabilities to fully reap the benefits of machine intelligence.

For these reasons, the sensationalism over AI has attracted many critics. Stuart Russell, a professor of computer science at Berkeley, has long advocated a more realistic approach that focuses on simple everyday applications of AI instead of the hypothetical takeover by super-intelligent robots. Similarly, MIT's professor of robotics, Rodney Brooks, writes that "almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine".

One of the many difficulties in deploying machine learning systems is that AI is extremely susceptible to adversarial attacks. This means that a malicious actor – or another AI – can feed a system carefully crafted inputs that force it to make wrong predictions or behave in a certain way. Many researchers have warned against rolling out AI without appropriate security standards and defence mechanisms. Still, AI security remains an often overlooked topic.
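To see why such attacks are possible at all, consider a hedged toy example (all numbers invented): for a simple linear scorer, nudging every input feature slightly in the direction of the model's weights – the idea behind so-called fast-gradient-sign perturbations – can flip the decision even though each individual change is tiny.

```python
# Toy adversarial example against a linear classifier (illustrative only).

def sign(x):
    return 1.0 if x > 0 else -1.0

def classify(weights, features, threshold=0.0):
    score = sum(w * f for w, f in zip(weights, features))
    return "positive" if score > threshold else "negative"

weights = [0.5, -0.25, 0.75, -0.5]
clean = [0.2, 0.4, -0.1, 0.3]  # scores -0.225: classified "negative"

# Adversary: add epsilon * sign(w_i) to each feature, pushing the score
# upward while keeping every individual change small.
epsilon = 0.2
adversarial = [f + epsilon * sign(w) for f, w in zip(clean, weights)]

print(classify(weights, clean))        # negative
print(classify(weights, adversarial))  # positive
```

Real attacks target far larger models, but the mechanism is the same: many small, coordinated input changes add up to a flipped output.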

Machine learning is not magic

If we are to reap the benefits and minimise the potential harms of AI, we must start thinking about how machine learning can be meaningfully applied to specific areas of government, business and society. This means we need to have a discussion about AI ethics and the distrust that many people have towards machine learning.

Most importantly, we need to be aware of the limitations of AI and where humans still need to take the lead. Instead of painting an unrealistic picture of the power of AI, it is important to take a step back and separate the actual technological capabilities of AI from magic.

For a long time, Facebook believed that problems like the spread of misinformation and hate speech could be algorithmically identified and stopped. But under recent pressure from legislators, the company quickly pledged to replace its algorithms with an army of over 10,000 human reviewers.

The medical profession has also recognised that AI cannot be considered a solution for all problems. The IBM Watson for Oncology programme was a piece of AI that was meant to help doctors treat cancer. Even though it was developed to deliver the best recommendations, human experts found it difficult to trust the machine. As a result, the AI programme was abandoned in most hospitals where it was trialled.

Similar problems arose in the legal domain when algorithms were used in courts in the US to sentence criminals. An algorithm calculated risk assessment scores and advised judges on the sentencing. The system was found to amplify structural racial discrimination and was later abandoned.

These examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

16 comments

The IBM Watson for Oncology programme was a piece of AI that was meant to help doctors treat cancer. Even though it was developed to deliver the best recommendations, human experts found it difficult to trust the machine.

That's because it was found that the whole system was a mechanical turk that simply replayed the opinions and recommendations of a handful of doctors from a single hospital, which very often didn't match the local conditions and treatment options available.

The fundamental limitation of "AI" is that it lacks the "I" part. It's just regurgitating a set of rules because it's not allowed to adjust the set on the go, or it might make the wrong choice. That reduces it to a regular dumb computer algorithm no matter how deep your neural network is.

AI systems (like neural networks, but also expert systems, Boltzmann networks, decision trees/forests, etc.) are classifiers. They are good at finding out which class a particular set of inputs belongs to (sometimes they even work on data without predefined classes by defining their own groupings). As soon as people grasp this, it'll be easy to understand what they can and cannot do.
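That point can be made concrete with a minimal sketch (made-up 2-D data, not any real system): a nearest-centroid classifier, one of the simplest possible classifiers, which can only ever answer with one of its trained classes.

```python
# Minimal nearest-centroid classifier on invented 2-D points.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def nearest_class(centroids, point):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # Answer is always one of the trained labels, never anything else.
    return min(centroids, key=lambda label: dist2(centroids[label], point))

training = {
    "cat": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dog": [(4.0, 4.1), (3.8, 4.3), (4.2, 3.9)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(nearest_class(centroids, (1.0, 1.1)))  # cat
print(nearest_class(centroids, (4.1, 4.0)))  # dog
```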

AI needs to be trained. If it then gets an input that is well outside its training data it will fail (just like any human would in a similar circumstance). If the training data is biased the resulting AI will be biased.

...and 'bias' in a training set can be the oddest thing. Here's my personal favorite AI-fail story:

In the 1990s the German army had an AI program that was supposed to identify tanks in front of scenery. So they sent out two photographers to take pictures of scenery: one took pictures with tanks, the other without. Then the AI was trained. As usual, the data was divided into two sets: training data and test data. After training on the training data, the AI performed extremely well on the test data.

They took this to the field and it didn't work. At all. Turns out the two photographers had used slightly different f-stops. The AI had learned to identify what f-stop was used in an image – not the content.

That's the thing with AI...you can train it and it might work (even extremely well and be extremely useful), but you can't actually show *what* it has learned.
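The tank anecdote can be caricatured in a few lines (all data invented): if a confounding feature – here, average brightness standing in for f-stop – perfectly separates the training labels, a learner that picks the single most predictive feature will latch onto it instead of the content.

```python
# Toy reconstruction of the "tank" story: a one-feature threshold learner
# trained on data where brightness happens to correlate with the label.

def train_threshold(samples):
    """Find the brightness threshold that best separates the labels."""
    best = None
    values = sorted(v for v, _ in samples)
    for i in range(len(values) - 1):
        t = (values[i] + values[i + 1]) / 2
        correct = sum((v < t) == label for v, label in samples)
        if best is None or correct > best[1]:
            best = (t, correct)
    return best[0]

# (mean_brightness, is_tank) pairs; all "tank" shots are darker.
samples = [(0.30, True), (0.35, True), (0.32, True),
           (0.70, False), (0.65, False), (0.72, False)]
t = train_threshold(samples)

print(0.33 < t)  # True: a dark photo is classified as "tank"...
print(0.68 < t)  # False: ...and a tank in bright sunlight is missed.
```

Perfect accuracy on held-out data drawn the same way, yet the "tank detector" is really a light meter – exactly the failure the anecdote describes.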

"An algorithm calculated risk assessment scores and advised judges on the sentencing. The system was found to amplify structural racial discrimination and was later abandoned."

After looking around the Internet, I found not a single shred of evidence that any of these algorithms have been abandoned. One thing I did find, however, is that the US Supreme Court refused to even consider a case brought to it regarding these black-box algos – which no one can look at because they are, well, "private". Just trust us (the manufacturers), 'cause we wrote them!

antialias_physorg> the two photographers used slightly different f-stops

If this is true, take two photographs of the same field, each with a different f-stop, and give them to an AI that has learned to identify what f-stop was used in an image. The AI will be able to identify each photograph based solely on its f-stop!

Seems a bit flaky to me, but if AI can do this, so be it. But two separate photographs at f8 are taken under constantly changing daylight, so no two photographs are under the same equivalent f-stop; there are subtle differences. There is great potential in this process the AI is undertaking: AI can be taught to recognise subtle differences between two identical stellar plates. At the moment AI is still an automated device.

A different f-stop will change the sharpness distribution of the image. An algorithm doesn't look at an image "as a whole" – like humans do – but at pixels and pixel neighbourhoods. Effectively, it's looking at the Fourier transform of the image more than the image itself. Even slight variations in f-stop will show up very noticeably (i.e. can be classified into distinct groups) in the Fourier-transformed data.
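A crude numeric sketch of that point (using a moving average as a stand-in for defocus blur, and squared neighbour differences as a rough proxy for high-frequency energy rather than an actual Fourier transform):

```python
# Blur suppresses fine pixel-to-pixel variation, so two otherwise
# identical signals become statistically distinguishable.

def blur(signal):
    """3-tap moving average: an idealised stand-in for defocus blur."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

def high_freq_energy(signal):
    """Sum of squared neighbour differences: a crude sharpness proxy."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))

sharp = [0, 1, 0, 1, 0, 1, 0, 1]  # a row of fine detail
soft = blur(sharp)

print(high_freq_energy(sharp) > high_freq_energy(soft))  # True
```

A classifier handed such statistics will happily separate "sharp batch" from "soft batch" – which is all the tank detector ever needed.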

At the moment AI is still an automated device.

Well, no. No more than your brain is an automated device "because neurons are chemical switches". AI is a trained classifier. You are confusing the actual AI implementation with the box it runs on.

If it then gets an input that is well outside its training data it will fail (just like any human would in a similar circumstance)

If you show a human an object or configuration they have never seen before, they don't fail like the AI fails. We recognise the situation as outside of our learned categories and assign it a new one (dynamic learning / self-doubt).

We can also infer the object's properties and purpose based on other information, such as in seeing scissors we don't mistake it for two knives crossed, or vice versa. We have sentience, self-awareness and theory of other minds, therefore understanding of purpose so we can infer meaning to the object - we know there's no 1:1 mapping between objects and their meaning.

Meanwhile, the AI is still trying to fit the object into its trained and fixed A->B "transfer function", and either reports seeing no object or misidentifies the object as something else.
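The contrast being drawn here can be sketched with invented numbers: a forced-choice classifier always answers with a trained class, while even a simple distance-based rejection rule at least lets it say "unknown".

```python
# Forced-choice vs. rejecting classifier on invented 2-D data.

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

centroids = {"cat": (1.0, 1.0), "dog": (4.0, 4.0)}

def forced_choice(point):
    # Must answer with a trained class, however poor the fit.
    return min(centroids, key=lambda c: dist2(centroids[c], point))

def with_rejection(point, max_dist2=2.0):
    # Reject inputs too far from anything seen in training.
    label = forced_choice(point)
    return label if dist2(centroids[label], point) <= max_dist2 else "unknown"

novel = (10.0, -3.0)          # nothing like the training data
print(forced_choice(novel))   # "dog": confidently wrong
print(with_rejection(novel))  # "unknown"
```

A rejection threshold is a crude mechanism, not self-doubt, but it is the standard engineering answer to exactly the failure mode described above.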

Well, no. No more than your brain is an automated device "because neurons are chemical switches".

The brain is dynamically modifying itself to meet the demands of the situation, according to no fixed rules on how to do it.

The AI is a fixed computer program. The extent to which it can adapt while it's actually running is limited to how it is programmed to learn, which is also fixed.

AI is a trained classifier.

AI, or deep neural networks in specific, is a method of applying artificial selection to evolve algorithms and programs without knowing how to program them yourself. The advantage of NNs over traditional code is just that the program can't crash - it will just produce the wrong output, which makes it possible to evolve the program in the first place.

That doesn't make these programs intelligent. Being a classifier would require intelligence - the AI is just mapping inputs to outputs.

The pitiful Human excuse for intelligence is based on a foundation of instincts. Programmed into our rodent and monkey ancestors. By millions of years of scrambling for desperate survival and competing to reproduce.

We see patterns because the ape that couldn't see a shade-dappled predator? Or tell, on the fly, the difference between reaching for a branch and reaching towards a basking viper? They didn't get to be ancestors.

No, the base motivation for waxing poetic about the wonderful future of AI is to provide the humans controlling the machines with a facade of progress. That they may escape just consequences for all the poor decisions of human fallibility, and divert public attention to blaming the machines.

Who said that AI can solve everything? But no doubt AI can solve MANY things (and I speak here as an AI expert), so it would be completely moronic to dismiss AI just because it cannot solve literally 'everything'. Why raise the bar so ridiculously high?

Why? Publicity and desperation to pull in eyeballs to paying advertisers, with clickbait headlines.

And to repeat... "Carthago delenda est!" Oops, sorry. Channeling a previous incarnation... That those who command the use of the machines will be able to escape responsibility for the orders they give: "I swear that AI missile launcher just went berserk and decided to blow up your country. I am an important man and do not have the time to concern myself with your petty problems!"

A tedious flood of AI stories since "Colossus" has shaped public opinion to pretend that no human is in charge, and that no official will ever have to be held accountable for evil deeds. "Cause it's all the fault of the Artificial Stupids. Nothing to do with us innocent angels just lounging around the boardroom."

Consider the question: if an AI is given an order to kill, would it have any free will, the depth of judgement, to be able to refuse such an order? https://en.wikipe...riest%3F

And if that is too far in the past for any effort at pondering the possibilities... In a 2017 appearance before the Senate Intelligence Committee, former FBI director James Comey testified that President Donald Trump had told him he "hoped" Comey could "let go" of any investigation into Michael Flynn; when asked if he would take "I hope", coming from the president, as a directive, Comey answered, "Yes. It rings in my ears as kind of 'Will no one rid me of this meddlesome priest?'"

Nah, we pretend that we are at the pinnacle of evolution but basically we are still the same nasty primates as our monkey ancestors.

Using technology is not civilization. Using technology incompetently? Say an illiterate tweeting? Holds zero promise for intelligent design.

If an AI is given an order to kill, would it have any free will, the depth of judgement to be able to refuse such an order?

I think it's not sensible to talk about AI that way. AI are not ethical machines. They are pattern classifiers (in effect: complex filters). They are trained to react to certain patterns in certain ways. The implementation allows for before unseen (but similar) patterns to be classified. That's it.

There is no consciousness, emotion or ethical framework involved, which would be required to start judging the classification results. There is also no value system involved (other than the one trained for classification tasks).

AI is a very interesting subject, and I dabble in neural networks myself, but we really must let go of these fanciful notions (either benign or horrible). They are a tool - and as such they can be employed to help or cause untold misery depending on how someone employs them.

"It is the poor craftsman, who blames his tools for poor workmanship."

A_P, that is the point I am stressing. All this hoohaw about "smart" technology and artificial intelligence is to delude the public so that those in power never have to take the blame. No matter how sordid the crimes.

It used to be "The devil made me do it!". From now on "The AI's made me do it." "Alexa/Siri/whatever on my phone told me to kill all those people!"

"So I can't be held responsible cause I am hearing voices. They are not just in my head but physically exist in my hand."

How can you argue against what is a factual claim? In an age when "patriotism" is defined as obeying orders without question or moral judgement.

I see the danger more in the prefiltering of information through what is essentially a black box. IBM tried this with the application of their 'Watson' AI for cancer diagnostics.

While this is still a benign application, we're fast heading towards the point where the AI prefilters the actions of a nation on a broader scale and just leaves the "nuke/don't nuke" option open to the final (human) decision maker, who cannot make an *informed* decision because the way the information is filtered is opaque. (That's why the Watson diagnostic solution is not accepted by surgeons – not because it's necessarily worse than human-based decisions, but because you have no chance to distinguish an erroneous diagnosis from a good one.)

The decision point is condensed to a single final decision with momentous consequences - and that exacerbates any wrong decision taken greatly.