Description

More machine learning algorithm–powered systems are deployed each day in areas that now include
employment, policing, marketing, price discrimination, health intervention, online news curation, tax fraud
prediction and child protection. Some welcome this trend of data-driven decision-making and decision support,
while others worry that the opacity and perceived objectivity of such systems usher in unwanted
biases through the back door at the same time as they kick due process out. The GDPR offers a range of rights
— some new, some simply rehashed — that many hope will help them navigate this new algorithmic
governance society in the courts. Yet as this paper will discuss, when the GDPR is considered in the context of
machine learning through both a legal and a computer science lens, these rights do not appear straightforward to
understand or implement.
An alleged new "right to an explanation" (art 13) — which has in fact existed in similar form in the DPD since
1995 — comes with both legal and technical caveats. Legally, there has always been a carve-out from the right for the
protection of trade secrets and intellectual property, which probably explains its lack of historical use in the
EU. Recital 63 of the GDPR does, however, now counsel that this should not justify "a refusal to provide all
information to the data subject" [emphasis added].
Providing "meaningful information about the logic" of advanced machine learning models is rarely technically
possible. The main techniques proposed today by computer scientists to 'explain' neural networks,
whose innards are black-boxed even to their designers, wrap simpler models optimised for explanation around
more complex ones to approximate their core logic: so-called 'pedagogical interpretation'. Yet such simple models are
just that — simple — and therefore often fail to 'explain' the fringe cases that are the most likely to lead
individuals to call upon their GDPR rights.
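The failure mode described above can be sketched in miniature. The code below is an illustrative toy, not any real credit-scoring system: a hypothetical "black box" applies a mostly linear approval rule plus a rare carve-out for very high debt. A pedagogical surrogate — here, the single income threshold that best reproduces the black box's answers on sampled queries — achieves decent fidelity on typical inputs yet disagrees exactly on the fringe case, the kind of case a data subject invoking GDPR rights would care about.

```python
import random

# Hypothetical black-box model: a linear rule plus a rare carve-out.
def black_box(income, savings, debt):
    if debt > 90:                       # fringe rule, rarely triggered
        return 0                        # deny
    return 1 if 2 * income + savings > 100 else 0

# Pedagogical interpretation: query the black box on sampled inputs,
# then fit a simpler surrogate (one income threshold) to its outputs.
random.seed(0)
samples = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(0, 80))
           for _ in range(1000)]        # debt rarely exceeds 90 in typical data
labels = [black_box(*s) for s in samples]

def fit_threshold(samples, labels):
    # Pick the income cut-off that best mimics the black box's answers.
    best_t, best_acc = 0.0, 0.0
    for t in range(0, 101):
        acc = sum((s[0] > t) == bool(y)
                  for s, y in zip(samples, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = float(t), acc
    return best_t, best_acc

threshold, fidelity = fit_threshold(samples, labels)
print(f"surrogate: approve if income > {threshold}; fidelity ~ {fidelity:.0%}")

# The surrogate tracks the black box on typical inputs, but a fringe case
# (debt above 90) exposes the gap between the 'explanation' and the model.
fringe = (80, 50, 95)
print("black box:", black_box(*fringe),
      "| surrogate:", int(fringe[0] > threshold))
```

The surrogate is faithful in aggregate but silently wrong at the carve-out, which is the core of the legal objection: an explanation accurate "on average" may mislead precisely the individual who challenges a decision.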
The right not to be subject to algorithmic decision-making (art 22) — again not new but extended from an
earlier right in the DPD, art 15 — seems promising, but is replete with exemptions and valid only (a) where
automated processing produces legal or similarly significant effects and (b) where the decision was based solely
on automated processing. Machine learning in high-stakes contexts is almost always deployed as decision support
rather than purely automated decision-making, and the GDPR lacks the nuances necessary to establish
whether a human was seriously 'in the loop'.
Similar issues arise around the "right to data portability" (art 20) and the rights to deletion and "to be forgotten"
(erasure, art 17), with problems also foreseeable given the ongoing debate on when personal data ceases to be
personal by virtue of anonymisation/pseudonymisation. What rights does a data subject have in respect of
inferred data?
We conclude by asking whether we are all condemned to be slaves to the algo-rhythm.