This type of artificial intelligence is referred to as Artificial Narrow Intelligence (ANI) – non-human systems that can perform a specific task. We encounter this type on a daily basis, and its use is growing rapidly.

The next generation of AI

With the next generation of AI, the stakes will almost certainly be much higher.

Artificial General Intelligence (AGI) will have advanced computational power and human-level intelligence. AGI systems will be able to learn, solve problems, adapt and self-improve. They will even perform tasks beyond those they were designed for.

Importantly, their rate of improvement could be exponential as they become far more advanced than their human creators. The introduction of AGI could quickly bring about Artificial Super Intelligence (ASI).

What appears almost certain is that such systems will arrive eventually. When they do, there is a great and natural concern that we won’t be able to control them.

The risks associated with AGI

There is no doubt that AGI systems could transform humanity. Some of the more powerful applications include curing disease, solving complex global challenges such as climate change and food security, and initiating a worldwide technology boom.

But a failure to implement appropriate controls could lead to catastrophic consequences.

It is here that the science of human-machine systems – known as Human Factors and Ergonomics – will come to the fore. Risks will emerge from the fact that super-intelligent systems will identify more efficient ways of doing things, concoct their own strategies for achieving goals, and even develop goals of their own.

Imagine these examples:

an AGI system tasked with preventing HIV decides to eradicate the problem by killing everybody who carries the virus, or one tasked with curing cancer decides to kill everybody who has any genetic predisposition to it

an autonomous AGI military drone decides the only way to guarantee an enemy target is destroyed is to wipe out an entire community

an environmentally protective AGI decides the only way to slow or reverse climate change is to remove technologies and humans that induce it.

These scenarios raise the spectre of disparate AGI systems battling each other, none of which take human concerns as their central mandate.

Various dystopian futures have been envisaged, including some in which humans eventually become obsolete, leading to the extinction of the human race.

The next decade or so represents a critical period. There is an opportunity to create safe and efficient AGI systems that can have far-reaching benefits for society and humanity.

At the same time, a business-as-usual approach in which we play catch-up with rapid technological advances could contribute to the extinction of the human race. The ball is in our court, but it won’t be for much longer.