
Description

More machine learning algorithm–powered decision-support systems are piloted and deployed in the public sector each day to help detect individual and corporate wrongdoing in areas such as taxation, child protection and policing. While some welcome this trend as the dawn of more evidence-based administrative decision-making, others worry that the opacity and perceived objectivity of such systems usher in unwanted biases through the back door just as they kick due process out.
Studies of these systems have primarily attempted to look inside them or reverse-engineer them from the outside, missing the people who obtain, deploy and manage these technologies within diverse institutional contexts. To help fill this gap, 25 public servants and technologists from different sectors and countries involved in public sector machine learning projects were identified and interviewed. They were asked about their experiences with these technologies, focussing on how they understood and approached the operational barriers and ethical issues they encountered. Analysis of these interviews shows promising roles in this field for recent technological approaches to responsibility, such as 'fairness-aware' or interpretable machine learning systems. Yet these interviews also raise questions and issues that are both currently underemphasised and unlikely to be resolved by technical solutions alone. This research suggests that governance mechanisms for applied machine learning must be more sensitive to on-the-ground pressures and contexts if they are to succeed in ensuring that new data-driven decision-support systems are societally beneficial.