Will the EU be an Exporter of Ethical Artificial Intelligence?

In April 2019, the European Union (“EU”) published a set of guidelines on how those deploying artificial intelligence (“AI”)-powered solutions should do so in an ethical manner. This follows the publication of the guidelines’ first draft in December 2018 and a consultation process during which the expert group working on the document received over 500 comments. The guidelines propose seven key requirements that AI systems should meet in order to be deemed trustworthy. The document is likely to heavily influence discussions surrounding the EU’s future regulatory landscape for AI.

In its work program for 2019, the European Commission stated that it wants to be the “effective standard-setter and global reference point on issues such as data protection, big data, artificial intelligence and automation.”1 The values enshrined in the EU’s General Data Protection Regulation 2016/679 are already shaping the global economy and the laws of other countries.2 It is clear that the European Commission has the same ambitions for its AI-related initiatives. In fact, it has been reported that Brussels’ “ethics-first approach has already attracted attention from outside Europe, including Australia, Japan, Canada and Singapore.”3