This could be overly restrictive to the successful development of AI – see http://digitalbusiness.law/2017/02/do-we-need-robot-law/. Whilst appropriate legislative developments may be needed where there are real existing barriers to AI adoption (the Text and Data Mining Exemption for copyright materials, for example, may need review) or where there is a real need to manage and control the implementation or use of AI, there is little need for a new regulatory regime specifically for AI. Some existing laws may need tweaking and modification, and there may be a need for some new laws in specific circumstances (e.g. driverless cars), but not a whole new legal framework.

In the area of algorithmic bias, the Equality Act 2010 already provides considerable protection for minority interests. The Act applies perfectly well to service providers using AI. Is there any real need for a further regulatory regime in this area that is specific to AI?

Also, in the digital world, laws tend to be overlooked and regarded as relatively unimportant. Greater transparency of the principles, parameters and logic underpinning AI, and algorithms in particular, may lead to public review and scrutiny. This is likely to be a lot more effective in putting pressure on digital players to conform with good principles. Experience shows that a bad review on a review website is likely to lead to almost immediate action by digital companies, compared to a sluggish and legalistic response to claims of breach of the law. Perhaps we need greater legal compulsion on these transparency principles. In fact, data protection law already gives some rights in this area – these rights may need further development.

Here are my speaking notes for the event:

Law Society – Ethics and Potential Bias in the Law of Algorithms

As the practising Technology lawyer on the panel I confess that I feel a little uneasy discussing ethical and social issues. My day-to-day work relates to the legal implications of the application of technology. A few years ago my work primarily involved the implementation of office automation systems – often “back office” systems. But with the increasing involvement of Tech in our everyday business and personal lives, we cannot ignore the ethical and social dimensions, particularly as AI and machine learning bring Tech into closer interaction with activities that we consider to be intellectual endeavours, rather than simple automation.

The issue of unintended or intentional bias in algorithms takes us into the territory of the principles that should underpin the operation of AI solutions. This is also quite “hot” news, with the European Parliament’s resolution for a voluntary ethical code of conduct on robotics for researchers and designers, to ensure that they operate in accordance with legal and ethical standards and that robot design and use respect human dignity. The Parliament has also asked the EU Commission to consider creating a European agency for robotics and artificial intelligence, to supply public authorities with technical, ethical and regulatory expertise.

As with many technological developments, there is some momentum developing for a new regulatory framework for AI. I agree that we need to assess carefully the extent to which the regulatory environment needs to be modified to allow for the introduction of AI – and to assess the extent to which the regulatory environment may need modification to control undesirable social and economic aspects of AI. I remain to be convinced that we need a specific regulatory framework simply for AI. AI is simply a sophisticated IT tool and should be regulated on that basis.

What I am starting to think may be increasingly important – and this is specifically brought into focus in the discussions over the use of algorithms – is greater legal compulsion to make public on a transparent basis the parameters, logic and principles which underpin AI solutions (including algorithms) so that they can be given greater public review and scrutiny. I’ll say more about this later transparency agenda when I look at the existing legal framework for AI.

Firstly – a few words about the idea of a specific legal framework for AI. At its most basic level, the idea of a specific regulatory framework for robotics has been debated in science fiction for over 50 years – Asimov’s Three Laws of Robotics (later joined by a “Zeroth Law”) emerged in the 1940s and 1950s. The idea was that these rules would be programmed into robots so that they would govern the robots’ activities.

To recap: Asimov’s Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And the later “Zeroth Law”: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

BUT Asimov’s rules are – of course – fictional devices. They simply don’t work in practice. How can a robot be programmed to identify all the possible ways a human might come to harm? How can a robot understand and obey all human orders, when even people get confused about what instructions mean? Most importantly, Asimov’s laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people. In real life, it is the humans who design and use the robots who must be the actual subjects of any law. Robots are simply tools of various kinds, albeit very special tools, and the responsibility for making sure they behave well must always lie with human beings.

To overcome these limitations, the Engineering and Physical Sciences Research Council (EPSRC) identified five Principles of Robotics in 2010. They provide a useful framework for the implementation and operation of AI and machine learning:

Robots should be designed and operated to comply with existing law, including privacy.

Robots are products: as with other products, they should be designed to be safe and secure.

Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.

It should be possible to find out who is responsible for any robot.

Robots should not be designed as weapons, except for national security reasons.

In my view the EPSRC principles set out a good basis for the development of a legal framework around AI solutions. In particular, their focus on AI solutions as products working within an existing legal framework takes the debate away from concepts such as legal personality for robots – which, in my view, are likely to obscure the real issues.

Earlier this year (February), in response to concerns around the potential negative implications of the development of AI, the Future of Life Institute (funded by the co-founder of Skype and a DeepMind researcher) convened a conference to identify principles designed to ensure that AI remains a force for good. These principles were developed at the Asilomar conference venue in California through an extensive process of discussion and consensus among the delegates.

They are already being referred to as the Asilomar Principles. They consist of three categories:

Research issues

Ethics and values

Longer-term issues

The Ethics and Values Principles identified by the Asilomar Conference are:

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

Whilst the Asilomar principles do not specifically refer to compliance of AI with the existing legal framework, they do refer to compatibility with ideals of human dignity, rights, freedoms and cultural diversity.

So how does the current legal framework apply to algorithms? It’s worth looking at two areas. I’ll spend a few minutes looking at equality legislation and the emerging transparency obligations in the GDPR.

Firstly, I will quickly review how the Equality Act 2010 applies in these circumstances. Section 29 of the Act applies to service providers – persons concerned with the provision of a service, goods or facilities to the public or a section of the public, whether or not for payment. Service providers can be individuals, businesses or public bodies. Payment is not a pre-requisite – so it is clear that the Act applies to free IT services, such as search engines, online marketplaces and online recruitment agencies.

Where these services utilise algorithms in their provision to the public, the algorithm-driven elements are also subject to the Act.

Section 29 prohibits direct and indirect discrimination:

Direct discrimination – where a person is treated less favourably than another person and the reason for the less favourable treatment is one of a specific range of “protected characteristics”.

Indirect discrimination – where a policy, criterion or practice which is applicable to everyone is shown to put those with a relevant protected characteristic at a disadvantage (this can be either a group of people or a particular individual).

A policy, criterion or practice will not be considered indirect discrimination if it can be shown that it was a proportionate means of achieving a legitimate aim.

The relevant protected characteristics covered by this section are:

Age.

Disability.

Gender reassignment.

Marriage and civil partnership.

Race.

Religion or belief.

Sex.

Sexual orientation.

So, algorithm-based services which have either a deliberate or an unintentional bias will be in breach of s. 29. The fact that an AI solution or algorithm was the cause of the discrimination is irrelevant. The service provider will be liable.

I think this is the correct approach. The service provider in these circumstances is responsible for the consequences of the use of the algorithm. The service provider may have bought in the AI solution from a third party and may not be the cause of the bias. It may well be that the bias simply emerges from the way that an AI solution interacts with its database – e.g. asking the question “what does a successful CEO look like?” will inevitably return images of white middle-class men.
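To make the idea of unintended algorithmic bias concrete, here is a minimal, hypothetical sketch (not part of the original notes) of the kind of statistical screen a service provider might run over an algorithm’s outcomes – loosely based on the “four-fifths rule” used in US employment practice as a rough indicator of disparate impact. The function names, data and 80% threshold are all illustrative assumptions, not a legal test under the Equality Act.

```python
# Illustrative only: a rough disparate-impact screen over an algorithm's outcomes.
# A selection rate for one group below 80% of the most-favoured group's rate
# is flagged as potentially indicating indirect discrimination.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, whether its selection rate passes the screen."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

# Hypothetical recruitment-algorithm outcomes: (shortlisted, applicants).
outcomes = {"group_a": (50, 100), "group_b": (20, 100)}
print(four_fifths_check(outcomes))  # group_b's rate is 0.2/0.5 = 40% of group_a's, so it fails
```

A check like this would not of itself establish liability under s. 29, but it illustrates how a provider could detect bias emerging from an algorithm before it causes discriminatory outcomes.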

The automated decision-taking rules in the GDPR are similar to the equivalent rules contained in the Directive (proposals to introduce restrictions on any ‘profiling’ were, in the end, not included in the final GDPR).

The rules relate to decisions:

– taken solely on the basis of automated processing; and

– which produce legal effects or have similarly significant effects.

Basically, automated decision-taking of this kind can be used where the processing is:

– necessary for the entry into or performance of a contract; or

– authorised by Union or Member State law applicable to the controller; or

– based on the individual’s explicit consent.

However, suitable measures to protect the individual’s interests must still be in place.

There are additional restrictions on profiling based on sensitive data – this requires explicit consent, or authorisation by Union or Member State law that is necessary on substantial public interest grounds.
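As a reader’s aid (not part of the original notes), the decision logic described above can be sketched in a few lines of Python. The function name, parameters and structure are my own illustrative assumptions for exposition – this is a simplification of the GDPR rules, not legal advice.

```python
# Illustrative sketch of the GDPR automated decision-taking rules described above.
# Names and structure are assumptions for exposition only.

def automated_decision_permitted(solely_automated, significant_effects,
                                 contract_necessity, authorised_by_law,
                                 explicit_consent):
    """Return whether a decision may be taken on an automated basis,
    per the three permitted grounds sketched above."""
    # The restriction only bites on decisions taken solely by automated
    # processing that produce legal or similarly significant effects.
    if not (solely_automated and significant_effects):
        return True
    # Otherwise one of the three grounds must apply; suitable measures to
    # protect the individual's interests are still required in any event.
    return bool(contract_necessity or authorised_by_law or explicit_consent)
```

For example, a solely automated credit decision with significant effects and none of the three grounds would not be permitted, whereas the same decision based on explicit consent would be.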

Transparency of Algorithms

There are already provisions in the DP Directive (which will continue in the GDPR) which impose transparency obligations on the use of algorithms when personal data is involved: Article 12(a) of the Directive / Article 15(1)(h) of the GDPR – Subject Access Rights.

As well as specific data access rights (confirmation whether his/her personal data are being processed and access to the data), the data controller must provide “supplemental information” about the processing.

Already in the Directive “Supplemental Information” includes any regulated automated decision taking (i.e. decisions taken solely on an automated basis and having legal or similar effects; also, automated decision taking involving sensitive data) – including information about the logic involved and the significance and envisaged consequences of the processing for the data subject.

This is a starting point for regulatory compulsion over the principles used in algorithmic processing. However, the obligation does not provide the level of clarity that would be desirable for greater public review and scrutiny of algorithmic decision making. At the moment the right is limited to situations where the algorithmic processing has some form of legal effect or similarly significant effect. It does not apply to information-only services.

In my view, greater transparency over the principles that underpin the use of algorithms will lead to greater public review and scrutiny of algorithms. This will result in pressure on service providers to change these algorithms where they cause problems. In the digital world this public review and comment is likely to be far more effective in controlling the use of algorithms than purely legal remedies. Service providers are responsive to public comment, whereas they can be quite resistant to legal compulsion.

So – in conclusion – let’s not get too carried away by the possibly exciting prospect of some new form of legal status being granted to robots. Let’s analyse carefully how the existing legal framework applies to these developments in a hard-headed and pragmatic way. Yes, there will be a need for developments to the law to accommodate new technology – there always is – but let’s aim to do this in an incremental and sensible way which is consistent with, and effective in, the digital age.

Roger Bickerstaff is co-lead of Bird & Bird's digital business campaign. He has over 20 years of Tech law experience and was Chairman of the Society for Computers & Law (2013-2016) and was President of the International Federation of Computer Law Associations (2014 -2016).