The co-evolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such guidance. None of the currently available methods satisfies these principles, but some contain useful methodological elements that should be integrated into a more satisfactory methodology. The outlines of such a methodology, multiple expertise interaction, are proposed. It combines elements from several previous methodologies, including (1) interdisciplinary groups of experts that assess the potential internal development of a particular technology; (2) external scenarios describing how the surrounding world can develop in ways relevant to the technology in question; and (3) a participatory process of convergence seminars, tailored to ensure that several alternative future developments are seriously taken into account. In particular, we suggest further development of a bottom-up scenario methodology to capture the co-evolutionary character of socio-technical development paths.

Co-evolutionary scenarios are used for creative prototyping with the purpose of assessing the potential implications of future autonomous robot systems for civil protection. The methodology is based on a co-evolutionary scenario approach and the development of different evolutionary paths. Opportunities, threats and ethical aspects connected with the introduction of robotics in the domestic security and safety sector are identified using an iterative participatory workshop methodology. Three creative prototypes of robotic systems are described: "RoboMall", "RoboButler" and "SnakeSquad". The societal debate that might follow the introduction of these three robot systems, and society's response to the experienced ethical problems and opportunities, are discussed in the context of two scenarios of different future societies.

The overall aim of this thesis is to examine some philosophical issues surrounding autonomous systems in society and war. These issues fall into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – where the focus in this thesis is on military robots. The second, discussed in paper III, concerns how to ensure that advanced robots behave in an ethically adequate manner. The third, discussed in papers IV and V, has to do with agency and responsibility. A further issue, somewhat apart from the purely philosophical ones, concerns coping with future technologies and developing methods for dealing with potentially disruptive technologies. This is discussed in papers VI and VII.

Paper I systematizes some ethical issues surrounding the use of UAVs in war, with the laws of war as a backdrop. It is suggested that the laws of war are too broad and might be interpreted differently depending on which normative moral theory is applied.

Paper II concerns future, more advanced autonomous robots, and whether the use of such robots can undermine the justification for killing in war. The suggestion is that this justification is substantially undermined if robots replace humans to a great extent. Papers I and II both suggest revisions of, or additions to, the laws of war.

Paper III discusses one normative moral theory – ethics of care – in connection with care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in ethics of care, and second, to discuss whether ethics of care may be a suitable theory to implement in care robots.

Paper IV discusses robots in connection with agency and responsibility, with a focus on consciousness. The paper takes a functionalist approach, and it is suggested that robots should be considered agents if they can behave as if they were agents, as shown by passing a moral Turing test.

Paper V is also about robots and agency, but with a focus on free will. The main question is whether robots can have free will in the same sense as we consider humans to have free will when holding them responsible for their actions in a court of law. It is argued that autonomy with respect to norms is crucial for the agency of robots.

Paper VI investigates the assessment of socially disruptive technological change. The co-evolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such decision guidance, involving interdisciplinary and participatory elements.

Paper VII applies the results from paper VI – and a workshop – to autonomous systems, a potentially disruptive technology. A method for dealing with potentially disruptive technologies is developed in the paper.

Several robotic automation systems, such as UAVs, are being used in combat today. This raises ethical questions. In this paper it is argued that UAVs, more than other weapons, may determine which normative theory the interpretation of the laws of war (LOW) will be based on. UAVs are unique as a weapon in the sense that the advantages they provide in terms of fewer casualties, together with the fact that they make war seem more like a computer game, might lower the threshold for entering war. This indicates the importance of revising the LOW, or of adding rules that focus specifically on UAVs.

Machine ethics is a field of applied ethics that has grown rapidly in the last decade. Increasingly advanced autonomous robots have expanded the focus of machine ethics from issues regarding the ethical development and use of technology by humans to a focus on ethical dimensions of the machines themselves. This thesis contains two essays, both about robots in some sense, representing these different perspectives of machine ethics. The first essay, “Is it Morally Right to use UAVs in War?” concerns an example of robots today, namely the unmanned aerial vehicles (UAVs) used in war, and the ethics surrounding the use of such robots. In this essay it is argued that UAVs might affect how the laws of war (LOW) are interpreted, and that there might be need for additional rules surrounding the use of UAVs. This represents the more traditional approach of machine ethics, focusing on the decisions of humans regarding the use of such robots. The second essay, “The Functional Morality of Robots”, concerns the robots of the future – the potential moral agency of robots. The suggestion in this essay is that robots should be considered moral agents if they can pass a moral version of the Turing Test. This represents the new focus of machine ethics: machine morality, or more precisely, machine agency.

In this paper, the moral theory ethics of care (EoC) is investigated and connected to care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in EoC (which, it is argued, differs slightly from the everyday use of the term), indicating that we should distinguish between "natural care" and "ethical care"; second, to discuss whether EoC may be a suitable theory to implement in care robots. The conclusion is that EoC may be a suitable theory for robots in health care settings.

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that one should use the same criteria for robots as for humans regarding the ascription of moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they understood the situation properly. The author suggests that this should be done for robots as well. In this regard, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic rather than merely syntactic) understanding of a moral situation, and by two examples: the transferring of a human mind into a computer, and aliens who are in fact robots.


Johansson, Linda

KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy.

Can artifacts be agents in the same sense as humans? This paper endorses a pragmatic stance on that issue. The crucial question is whether artifacts can have free will in the same pragmatic sense as we consider humans to have free will when holding them responsible for their actions. The origin of actions is important: can an action originate inside an artifact, considering that it is, at least today, programmed by a human? In this paper it is argued that autonomy with respect to norms is crucial for artificial agency.