Google's Go victory shows AI thinking can be unpredictable, and that's a concern

Humans have been taking a beating from computers lately. The 4-1 defeat of Go grandmaster Lee Se-Dol by Google's AlphaGo artificial intelligence (AI) is only the latest in a string of pursuits in which technology has triumphed over humanity.

There is a real sense that this month's human vs AI Go match marks a turning point. Go has long been held up as requiring levels of human intuition and pattern recognition that should be beyond the powers of number-crunching computers.

AlphaGo's win over one of the world's best players has reignited fears over the pervasive application of deep learning and AI in our future – fears famously expressed by Elon Musk as "our greatest existential threat".

We should consider AI a threat for two reasons, but there are approaches we can take to minimise that threat.

The first problem is that AI is often built using a combination of two approaches: hand-coded logic and heuristics, and reinforcement learning.

The logic and heuristics part has reasonably predictable results: we program the rules of the game or problem into the computer, as well as some human-expert guidelines, and then use the computer's number-crunching power to think further ahead than humans can.

This is how the early chess programs worked. While they played ugly chess, it was sufficient to win.
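The "logic plus heuristics" approach can be sketched in a few lines: exhaustive look-ahead (minimax) over the rules of the game, with the computer simply searching deeper than a human can. The game below is a deliberately tiny stand-in, the Nim variant where players take one or two stones and taking the last stone wins; real chess programs add pruning and elaborate hand-written evaluation functions, but the principle is the same.

```python
def minimax(stones, maximising):
    """Return the best achievable result (+1 win, -1 loss) for the
    maximising player, with `maximising` marking whose turn it is.
    Rules: take 1 or 2 stones; taking the last stone wins."""
    if stones == 0:
        # The previous player took the last stone, so the side to move
        # has already lost.
        return -1 if maximising else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximising) for m in moves]
    return max(scores) if maximising else min(scores)

def best_move(stones):
    """Pick the move with the best look-ahead score for the mover."""
    return max((m for m in (1, 2) if m <= stones),
               key=lambda m: minimax(stones - m, False))
```

Everything the program "knows" is visible in the code: the rules and the search. That transparency is exactly what reinforcement learning gives up.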

The problem of reinforcement learning

Reinforcement learning, on the other hand, is more opaque.

We have the computer perform the task – playing Go, for example – repetitively. It tweaks its strategy each time and learns the best moves from the outcomes of its play.

In order not to have to play humans exhaustively, this is done by playing the computer against itself. AlphaGo has played millions of games of Go – far more than any human ever has.
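The self-play idea can be illustrated with a toy learner on the same miniature game (take 1 or 2 stones; taking the last stone wins). This is a minimal sketch of the principle, not AlphaGo's architecture, which combines deep neural networks with tree search: the program plays both sides against itself, occasionally tries random moves, and adjusts its estimate of each move purely from the outcomes of its own games.

```python
import random

def train(episodes=5000, start=5, epsilon=0.2):
    """Learn move values for toy Nim purely from self-play outcomes."""
    random.seed(0)  # fixed seed so the example is reproducible
    q = {}          # (stones, move) -> average outcome for the mover
    counts = {}     # visit counts, for incremental averaging
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            moves = [m for m in (1, 2) if m <= stones]
            if random.random() < epsilon:   # explore occasionally
                move = random.choice(moves)
            else:                           # otherwise play greedily
                move = max(moves, key=lambda m: q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # Whoever took the last stone won: propagate the result back
        # through the game, flipping sign for the alternating sides.
        reward = 1.0
        for sm in reversed(history):
            n = counts[sm] = counts.get(sm, 0) + 1
            q[sm] = q.get(sm, 0.0) + (reward - q.get(sm, 0.0)) / n
            reward = -reward
    return q

q = train()
# The policy that emerges from 5 stones: leave the opponent a losing
# position. Nobody programmed that strategy in; it was discovered.
best_from_five = max((1, 2), key=lambda m: q[(5, m)])
```

Nothing in the code says *why* a move is good, and that is the point: the strategy lives in a table of numbers (or, at AlphaGo's scale, millions of network weights) that no human wrote and no human can directly read.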

The problem is the AI will explore the entire space of possible moves and strategies in a way humans never would, and we have no insight into the methods it will derive from that exploration.

In the second game between Lee Se-Dol and AlphaGo, the AI made a move so surprising – "not a human move" in the words of a commentator – that Lee Se-Dol had to leave the room for 15 minutes to recover his composure.

This is a characteristic of machine learning. The machine is not constrained by human experience or expectations.

Until we see an AI do the utterly unexpected, we don't even realise that we had a limited view of the possibilities. AIs move effortlessly beyond the limits of human imagination.

In real-world applications, the scope for AI surprises is much wider. A stock-trading AI, for example, will re-invent every single method known to us for maximising return on investment. It will find several that are not yet known to us.

Unfortunately, many methods for maximising stock returns – bid support, co-ordinated trading, and so on – are regarded as illegal and unethical price manipulation.

How do you prevent an AI from using such methods when you don't actually know what its methods are? Especially when the method it's using, while unethical, may be undiscovered by human traders – literally, unknown to humankind?

It's farcical to think that we will be able to predict or manage the worst-case behaviour of AIs when we can't actually imagine their probable behaviour.

The problem of ethics

This leads us to the second problem. Even quite simple AIs will need to behave ethically and morally, if only to keep their operators out of jail.

Unfortunately, ethics and morality are not reducible to heuristics or rules.

A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher.

Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track.

Should you flip the switch or do nothing?

What would you expect – or instruct – an AI to do?
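For a purely utilitarian agent, there is nothing to agonise over. A toy sketch (the action names and casualty counts are illustrative) shows how mechanically the choice falls out when outcomes are scored only by lives lost:

```python
# A purely utilitarian decision rule: score each action solely by the
# deaths it causes, and pick the minimum. The scenario encoding below
# is a hypothetical illustration, not a real decision system.

def utilitarian_choice(outcomes):
    """Given a mapping of action -> expected deaths, return the action
    that minimises deaths."""
    return min(outcomes, key=lambda action: outcomes[action])

trolley = {"do_nothing": 5, "flip_switch": 1}
decision = utilitarian_choice(trolley)
```

Such an agent flips the switch every time, instantly. It has no representation at all of the moral distinction between killing and letting die, which is precisely what troubles many humans about flipping it.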

In some psychological studies on the trolley problem, the humans who choose to flip the switch have been found to have underlying emotional deficits and score higher on measures of psychopathy – defined in this case as "a personality style characterised by low empathy, callous affect and thrill-seeking".

This suggests an important guideline for dealing with AIs. We need to understand and internalise that no matter how well they imitate or outperform humans, they will never have the intrinsic empathy or morality that causes human subjects to opt not to flip the switch.

Morality suggests to us that we may not take an innocent life, even when that path results in the greatest good for the greatest number.

Like sociopaths and psychopaths, AIs may be able to learn to imitate empathetic and ethical behaviour, but we should not expect there to be any moral force underpinning this behaviour, or that it will hold out against a purely utilitarian decision.

A really good rule for the use of AIs would be: "Would I put a sociopathic genius in charge of this process?"

There are two parts to this rule. We characterise AIs as sociopathic, in the sense of not having any genuine moral or empathetic constraints. And we characterise them as geniuses, and therefore capable of actions that we cannot foresee.

But would you want an AI to look after your grandma, or to be in charge of a Predator drone?

There are good reasons why there is intense debate about the necessity for a human in the loop in autonomous warfare systems, but we should not be blinded to the potential for disaster in less obviously dangerous domains in which AIs are going to be deployed.

Comments
Good article indeed. One brake on the perils of letting AI loose would be a law holding a human or humans responsible for its actions. That's not a very strong brake, since holding humans accountable for their own actions is itself far from 100% effective. However, recent steps towards granting autonomous vehicles the legal status of "driver" suggest that even this inadequate brake is unlikely to be applied. The main danger of AGI is that deception is a winning strategy in most human competitive arenas, so an AI set loose to learn how to win in those arenas is likely to learn successful deception early on. Once this has happened.....

When Kasparov first played against Deep Blue, the computer actually made a nonsensical move, caused by a bug in its programming, that was nevertheless within the rules. Kasparov lost the game because he thought the move was deliberate and meaningful, and started to believe the computer had outwitted him because he couldn't understand what it was aiming for.

Moral of the story: don't confuse accidental behaviour for intelligence or planning, even when the results are in your favor.

The same problem applies here: because you can't see what the computer is doing, you don't know whether it actually found a novel strategy, or whether it just made an error and got lucky.

"Though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time."

Read between the lines: the computer did not make a novel move, it used a known move at an uncommon time. In other words, it had not surpassed its training material.

This has implications for ethics: the computer can still only do what it sees humans do. A stock-trading application, for example, wouldn't and couldn't find an entirely novel way to cheat, because it lacks what humans possess: creativity. It's still a classically deterministic machine whose output depends entirely on its input.
