Theresa May, AI, Ethics and the World Economic Forum at Davos

Theresa May is on her way to Davos to speak at the World Economic Forum - and the papers are already reporting that she will call for ‘safe and ethical’ artificial intelligence.

In my opinion, before we can even start to talk about ‘safe and ethical’ artificial intelligence, we have to have ethical roboticists.

Without a clear understanding of the ethical challenges that roboticists face, how can we establish standards for ethical artificial intelligence development – standards which are absolutely vital?

She will apparently say:

‘For right across the long sweep of history from the invention of electricity to the advent of factory production, time and again initially disquieting innovations have delivered previously unthinkable advances and we have found the way to make those changes work for all our people.’

But ‘safe and ethical’ artificial intelligence is a narrow framing.

Who decides the ethics?

Whose moral compass will guide us through this revolution of technology?

When opposing morals collide, whose will take precedence?

We are surely tired of hearing about the moral dilemma of the autonomous car forced to choose between hitting a pedestrian and letting its passenger die. But we need to think more broadly.

When robots become medical care assistants, will they be given the autonomy to decide who receives a donor organ?

When artificial intelligence is given control over the dietary consumption of the overweight, will it withhold particular foods?

And perhaps most frightening of all, who will have ‘the codes’ to override artificial intelligence’s programmes?

We must also recognise the power of the label ‘ethical’.

This could confuse consumers, giving them the impression that a product or artificial intelligence is ‘good’.

Ethical and good are not synonymous, and being imprecise with our language is a dangerous slippery slope - a challenge you can explore in more detail in my podcast on Machine Ethics from last week.

But underpinning this entire debate is an assumption that many have made and few have questioned: that the question of digital ethics should be driven by technologists, not philosophers.

Engineers and inventors are simply not trained to consider these types of issues - and I think I can say with some certainty that, if they were, Facebook would have been designed on a bedrock of rewarding authority, not popularity.

Perhaps the mire of problems that Facebook, along with other social networks, now finds itself in could have been avoided altogether, rather than being patched twelve years too late.

So what does all this tell us?

Firstly, that we need to engage a wider cross-section of society in the conversation about ethical artificial intelligence.

Liberal-arts-trained professionals may not seem like the obvious choice, but without fresh perspectives we will simply receive the same answers, time and time again.

Professor Alan Winfield, of the University of the West of England, is starting to bridge this gap, but he too approaches it from a roboticist’s perspective. We need philosophers working alongside technologists if new answers are to be discovered.

Secondly, we need to address the most common fear raised when robots are discussed: the loss of employment.

When Theresa May speaks about artificial intelligence ‘for all our people’, she could begin a nuanced discussion on how we can ensure that the incredible power that artificial intelligence brings is leveraged to bring societal benefits, not just short-term gains for small groups.

We need to ask ourselves just what widespread artificial intelligence looks like: how we can utilise it to help society, reduce inequality, and ensure that access is not limited to the few who exploit their power.

A widening ‘robotics poverty’ gap is opening, as a privileged few gain access to high technology, coding education, and philosophical awareness of the growing capabilities of artificial intelligence.

Will this new ‘technocrat’ class share their wealth of understanding?

When driverless cars are owned by only the rich, and robotic servants only by certain countries, how can we claim equality?

We need to give serious consideration to a robot tax, and to global mechanisms for taxing technology, if we are to avoid these challenges. Silicon Valley cannot be allowed to concentrate wealth at the expense of the rest of the world - as London currently demonstrates through its role as a major centre for jobs, but not for capital wealth creation.

Thirdly and lastly, we need to engage with religious leaders to start dialogues between and within communities on the deep questions of purpose and the meaning of life.

In the multi-cultural and multi-ethnic society that we enjoy here in the UK, we need to build conversations – and hopefully, consensus – about how we as humans are going to approach artificial intelligence and robotics.

Building that dialogue from all sides is important.

Perhaps we need to take the bold decision to slow progress until greater consensus is reached, setting ceilings on artificial intelligence research so that humanity has time to agree on, and prepare for, currently unknown consequences.

I passionately believe that we cannot assess artificial intelligence from a purely technological perspective.

Without a classical philosophical exploration of this new world, we enter it blind - dazzled by the potential it offers humanity, and unable to comprehend that we face the greatest philosophical debates in human history.

Theresa May is putting a spotlight on the importance of understanding the ethical consequences of what we do. The challenge is: what happens if we, as a world, cannot agree on what those consequences are?