How much power do you want to give to a smart machine? The RSA asked some humans...

We are consistently interested in those bits of the future that seem to be the least accessible to communities and localities - that is, the big technological changes coming out of California or China. And particularly artificial intelligence, which promises to supersede a lot of routine mental labour (faster than the robots can swallow up physical labour). Many jobs, and many assumptions about the "decent working life", are about to come under question.

Why shouldn't everyday citizens have a say over these vast forces? In terms of basic state investment in scientific advances, as Mariana Mazzucato often puts it, we've probably already paid for them. Shouldn't they then benefit us - for example, by reducing our workload (while still keeping us in some kind of paid employment)? Why should we stand so fearfully in their wake?

But you have to know about the tide of changes that are happening first - and this report from the RSA is a great example of what you find out when you ask people about the future.

There are two panels from their survey which are particularly illuminating on popular attitudes to the increasing decision-making power of Artificial Intelligence. Here's the first one:

In only one area did awareness of AI's current or imminent capacities come close to a majority - choosing which adverts to see on the internet (48%). The largest minority (41%) knew that it helps make decisions about access to financial services (the "Computer says no" experience of many with their credit cards or loans). But only 9% knew that AI was helping to make judgements in the criminal justice system, 14% in immigration and 18% in healthcare.

Yet the next question asked how much respondents would support these AIs making decisions on their behalf:

In matters of criminal justice, workplace labour, immigration, and social support - areas where, one might imagine, nuanced judgement of human need matters most - thuddingly large majorities reject the idea that an AI should have a decisive role.

This gets to the crux of people’s fears about AI – there is a perception that we may be ceding too much power to AI, regardless of the reality. The public’s concerns seem to echo that of the academic Virginia Eubanks, who argues that the fundamental problem with these systems is that they enable the ethical distance needed “to make inhuman choices about who gets food and who starves, who has housing and who remains homeless, whose family stays together and whose is broken up by the state.”

Yet these systems also have the potential to increase the fairness of outcomes, if they can improve accuracy and minimise bias. They may also increase efficiency and savings, both for the organisation that deploys the systems and for the people subject to its decisions.

The RSA is operating in partnership with DeepMind's Ethics and Society programme - which of course raises an interesting sceptical question. Is this one of AI's greatest commercial proponents trying to get ahead of popular fears raised by the recent Facebook shenanigans around Cambridge Analytica? Or if we need, as Tim Berners-Lee once said, a new digital Magna Carta, is this the part where the merchants make their appeal to the sovereign user?

We hope to bring some of these questions to our "collaboratories" up and down the country - so we are watching with interest.