Ensuring meaningful human control, in the face of the weaponisation of robotics and AI

May 24, 2016

The UK parliament is currently holding an inquiry into implications for the UK of advances in robotics and artificial intelligence, including the “social, legal and ethical issues raised by developments in robotics and artificial intelligence technologies, and how they should be addressed.” Article 36 made a submission to this inquiry in April, building on our analysis of UK policy in relation to autonomous weapons systems.

The UK’s engagement to date in multilateral discussions on the implications of increased autonomy in weapons systems, facilitated by robotics and artificial intelligence (AI), has not been adequate to the broad societal implications of the subject matter.

How the relationship between human and machine decision-making is managed on issues of life and death is of fundamental importance to how society’s relationship with computers and AI will develop in the future. In that context, the UK’s approach to policy-making on autonomous weapons so far lacks foundations in a vision of the future role of AI in society. It fails to engage with the key questions of immediate relevance, and it seeks to avoid movement towards multilateral agreement on the nature and form of human control that should be considered necessary in decisions over how force is applied.

UK policy-making in this area should be subject to a broad review to ensure that a policy driven by defence interests also reflects the position the UK wishes to take on the wider future roles of AI and computer autonomy in society.