Get humans out of the AI loop, argues professor

Humans should get ‘out of the loop’ of artificial intelligence systems, UTS roboticist Professor Mary-Anne Williams argued last week at an Australian Human Rights Commission technology conference in Sydney.

AI needn’t consult a flesh-and-blood individual even when making life-or-death decisions, said Williams, director of The Magic Lab at the university’s Centre for Artificial Intelligence.

Campaigns are under way around the world against autonomous weapons systems that operate without human control.

“States must draw the line now against unchecked autonomy in weapon systems by ensuring that the decision to take human life is never delegated to a machine,” the Campaign to Stop Killer Robots states.

In Australia, 122 AI experts last year signed a letter to Prime Minister Malcolm Turnbull urging him to “take a firm global stand” against weapons systems that remove “meaningful human control” when selecting targets and deploying lethal force.

But such ‘golden rules’ are dangerous, Williams argued.

“This golden rule – very, very dangerous. Golden rules like, really? Didn’t golden rules go out with the Greeks? I think it’s very, very worrisome that people hold on to this, like a security blanket, like a teddy bear. And it’s not going to work,” she said.

“Who is going to challenge an AI? AI is already outperforming people. People have plateaued; you’re not going to challenge an AI. If an AI said shoot to kill – you’re not going to say don’t shoot. And vice versa. The sooner we throw it out the more opportunity we will have to build a future worth living in.”

Effort would be better spent on making sure such AI systems operated as they were supposed to, Williams added.

“Let’s monitor and be sure that the AI is actually competent, that’s what we should be doing. Not putting us in the loop. Because putting humans with our own frail intellect and cognitive bias in the loop – what are you doing? – we need to be out of the loop,” Williams said.

“The idea that somehow we bring accountability is I think just nonsense,” she added.

What do you think? Do deadly AI systems need a human in the loop? Let us know in the comments below.

The IDG News Service is the world's leading daily source of global IT news, commentary and editorial resources. The News Service distributes content to IDG's more than 300 IT publications in more than 60 countries.