Artificial intelligence (AI), not the kind that will see superintelligent robots destroy humanity but the kind that might deny you a credit card or build a scarily accurate picture of your life, is already starting to shape how we live. This will only continue as governments invest huge sums in the field.

Where are the voices of ordinary people in the conversation about how AI should develop? Here are four reasons why the government should engage the public in this conversation, and three suggestions of how to do it:

Public engagement could improve AI research: AI researchers are overwhelmingly male and predominantly from wealthy backgrounds. Theories of collective intelligence and cognitive diversity show that more diverse groups are better at solving problems. This lack of diversity also means that AI researchers often focus on solving the problems of people like them. Artificial Intelligence holds many promises, but its exclusivity may be holding it back.

It could focus AI research on the most pressing challenges: Self-driving cars are designed to be autonomous, perhaps because they were born out of the military’s need for vehicles that can operate in hostile environments. But a civilian transport system might work better if self-driving cars were seen as one part of an intelligent, connected system of roads, private vehicles and public transport. What if research were focused on addressing what the public thinks are the most pressing challenges, rather than the needs of investors, industry or the military? Could this lead to AI research that wasn't focused on either the trivial, such as serving up better Netflix recommendations, or the lethal, such as autonomous weapons systems?

It could make AI more ethical: When the public get a chance to tell experts what they think about science and innovation funding, in RCUK’s Public Dialogues or the Eurobarometer survey for example, they express strong views about the need for research to focus on pressing societal needs and for regulation to prevent the negative effects of innovation, such as job losses and loss of privacy. Public involvement in the development of AI could play an important part in guarding against these outcomes.

It’s the right thing to do: All of the preceding arguments for public involvement are instrumental - involving the public will improve the quality of AI research and target it at the most pressing problems. But Artificial Intelligence is going to have wide-ranging effects on the lives of everyone and yet only a tiny group of people are making important decisions about its development. Beyond any instrumental value that public engagement may have, there is a social justice argument for involving a much wider group of people in debating and designing Artificial Intelligence.

How can Government involve the public in AI research?

“It’s not rocket science: listen, and those who feel ignored will re-engage passionately.”*

The subtle art and science of citizen engagement is highly developed. But ultimately, it's about listening to what the public has to say, giving policymakers a chance to hear viewpoints outside of the AI echo chamber of researchers, policymakers and businesses. Here are three simple ideas of how to do it:

Send the UK’s digital minister on a nationwide tour

The UK’s digital minister should follow the example of Andy Haldane, chief economist at the Bank of England, and go on a nationwide tour to feed the everyday experiences of ordinary households into policy making on artificial intelligence. The minister could take others with him on his tour: civil servants, researchers and business advocates, to learn about the concerns, fears and also positive visions of citizens around the country.

Organise a nationwide series of debates on the development of AI

The Royal Society’s research into public attitudes to machine learning is an important piece of work and the RSA's forthcoming citizens juries on the use of AI in criminal justice could also be useful, but how can the Government engage the public beyond focus groups?

Firstly, get creative. Terms such as ‘neural nets’ and ‘backpropagation’ mean nothing to most of the public. Debates should be built around things that people can interact with: videos, games and physical installations that explore different scenarios. For example: “How would you feel if all your search data from the last five years was hacked?”

Secondly, why would the public want to take part in a series of debates on AI? As we have argued before, citizen engagement exercises need a clearly defined goal. These debates could serve two. The first is to develop a set of public principles for AI research, to guide its development along socially acceptable lines; this could look something like the eight principles that RCUK developed from a review of its public engagement exercises. The second is to develop a set of public challenges for AI research. What pressing issues does the public think AI research should be directed at? The answers could inform the Government’s AI industrial strategy challenge fund.

Appoint ‘public champions’ to each commission, expert group and inquiry

Experts, policymakers and business interests are well represented on commissions designed to explore the ethics of AI. Citizens might be consulted, but they aren’t represented. The Government and others who are setting up AI commissions should appoint a ‘public champion’ to ensure that the views of the public are represented. This person may be an expert in science communication, someone skilled in closing the distance between the public and the science and innovation establishment.

Public involvement in debating the principles behind the development of AI, and in setting the challenges that AI research is directed at: this is an agenda for an AI policy that could truly tackle the grand challenges of our era.

Author

Tom Saunders

I really like this piece and these great, practical ideas.
I would like to add one key thing to the mix: the demand to report back to us all, and to those consulted, on how policy has been influenced by these public debates, or not. I have been involved in SO many 'listening' exercises over the years, and I feel sure that actual listening has occurred from policy makers, but then it all goes back into the black box of government and we are none the wiser as to what difference, if any, it has made. There will be ways that what the public wants can't be delivered, or conflicting priorities, or practical issues, or just disagreements about what is required. But that's the bit I would like to see more of: a more open approach from government on how decisions are made, what influences their direction of travel, who has been involved in deciding priorities, what those priorities are, and what trade-offs have been made and why.
