Wednesday, April 29, 2009

The editors of Choice featured a review of Moral Machines by G. Trajkovski in their April 2009 issue.

[I]n this holistic volume, they raise many questions on what scientists need to consider when building robots. Written with an abundance of examples and lessons learned, scenarios of incidents that may happen, and elaborate discussions on existing artificial agents on the cutting edge of research/practice, Moral Machines goes beyond what is known as computer ethics into what will soon be called the discipline of machine morality. Summing Up: Highly recommended. Academic and public libraries, all levels.

The authors do an admirable job at using language accessible to an interdisciplinary audience, which also makes the book open to a more general public readership. It will be of interest to anyone concerned with the ethical, social, and engineering issues that accompany the quest to develop machines that can act autonomously out in the world.

P.W. Singer's article "Robots at War: The New Battlefield" (WQ Winter 2009) contributes significantly to a discussion that is long overdue. Should the U.S. and other countries slide into the roboticization of warfare, or is this a bad idea? Singer illustrates clearly how the trend towards autonomous fighting machines is driven inexorably by the logic of war. He correctly notes that these developments carry grave ethical risks, and that the idea of "keeping humans in the loop" is already an illusion, given the pressures leading to greater autonomy for robots carrying lethal weapons.

In our recent book Moral Machines: Teaching Robots Right From Wrong (OUP 2009), we focus on the prospect of building moral decision making faculties into autonomous systems, an area that is already being explored by researchers with military funding. Surveying the limitations of existing technology, readers of the book may reasonably conclude that it will be impossible to meet the challenges.

Progress in artificial intelligence is a primary determiner in Singer's scenario of mechanized war. Machines are, he suggests, easier to program for intelligent warfare than human soldiers are to train. But here, too, he and the military may underestimate the challenges involved. If they have not, however, then the very developments in robotics and artificial intelligence that Singer mentions also open up new avenues for making fighting machines sensitive to the ethical situations that arise during warfare.

Singer does not mention the possibility of using A.I. to mitigate ethical problems. Nevertheless, his explicit concern for the ethical issues is a significant step in the right direction. By contrast, a recent, comprehensive report on military robotics, the Unmanned Systems Roadmap 2007-2032, does not mention the word 'ethics' once, nor does it discuss the risks raised by robotics, with the exception of one sentence that merely acknowledges that "privacy issues [have been] raised in some quarters".

Can robots be made to respect the differences between right and wrong? Without this ability, autonomous robots are a bad idea not just for military contexts, but also in other situations, such as care of the elderly. Overly optimistic assessments of technological capacities could lead to a dangerous reliance on autonomous systems that are not sufficiently sensitive to ethical considerations. Overly pessimistic assessments could stymie the development of some truly useful technologies or induce a kind of fatalistic attitude towards such systems.

National and international mechanisms for discriminating real dangers from speculative dangers are needed. This is easier said than done. It is not clear whether legislatures and international bodies such as the UN have the will to create effective mechanisms for the oversight of military robots.

The helmet is the first "brain-machine interface" to combine two different techniques for picking up activity in the brain. Sensors in the helmet detect electrical signals through the scalp in the same way as a standard EEG (electroencephalogram). The scientists combined this with another technique called near-infrared spectroscopy, which can be used to monitor changes in blood flow in the brain.

Brain activity picked up by the helmet is sent to a computer, which uses software to work out which movement the person is thinking about. It then sends a signal to the robot commanding it to perform the move. Typically, it takes a few seconds for the thought to be turned into a robotic action.

While this technology might eventually be adapted to facilitate many activities, such as opening the trunk of a car when one's hands are full, it is not yet ready for general use. One difficulty is filtering out distractions in the wearer's thinking. Because thought patterns differ from user to user, the system must be trained individually for each wearer of the helmet.
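The per-user training and decoding loop described above can be sketched as a simple classifier: learn a characteristic feature pattern for each imagined movement during calibration, then map new brain activity to the nearest learned pattern. The feature layout, command names, and nearest-centroid method below are illustrative assumptions, not details of the actual helmet.

```python
import numpy as np

rng = np.random.default_rng(42)
COMMANDS = ["raise_arm", "wave", "open_trunk"]

def make_trials(command_idx, n=20):
    """Simulated per-user calibration data: each imagined movement yields a
    noisy feature vector combining EEG and blood-flow (NIRS) channels."""
    center = np.zeros(6)
    center[command_idx * 2 : command_idx * 2 + 2] = 1.0
    return center + 0.2 * rng.standard_normal((n, 6))

# Training phase: the helmet must be calibrated for each wearer, so we
# learn one centroid per imagined movement from that user's own trials.
centroids = np.array([make_trials(i).mean(axis=0) for i in range(len(COMMANDS))])

def decode(features):
    """Map a new brain-activity feature vector to the nearest trained command,
    which would then be sent to the robot as a movement instruction."""
    distances = np.linalg.norm(centroids - features, axis=1)
    return COMMANDS[int(distances.argmin())]

# A fresh trial of imagined movement #2 should decode to "open_trunk"
print(decode(make_trials(2, n=1)[0]))
```

The few-second lag mentioned in the article would come from accumulating enough signal to form a reliable feature vector before decoding.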

Monday, April 13, 2009

The new IEEE Technology and Society Magazine (Vol. 28, No. 1, Spring 2009) is a feature issue on "Lethal Robots." It includes articles by John Canning ("You've Just Been Disarmed. Have a Nice Day!"), Noel Sharkey ("Death Strikes from the Sky: The Calculus of Proportionality"), Peter Asaro ("Modeling the Moral User"), Robert Sparrow ("Predators or Plowshares? Arms Control of Robotic Weapons"), and Ronald Arkin ("Ethical Robots in Warfare").

Sunday, April 5, 2009

Machine Morality is the topic featured in the March/April issue of Philosophy Now magazine. Wendell Wallach was invited to be the editor of this special issue, which includes contributions from James Moor, Susan and Michael Anderson, Steve Torrance, Tom Powers, and Joel Marks.

Would you trust a machine to make life and death decisions about you? It doesn’t have to be a robot. It might be a piece of software that decides when to turn off your life support. A new book worth a read this year is Moral Machines: Teaching robots rights and wrongs (sic) by Wendell Wallach and Colin Allen. Noel talks to one of the authors, Colin Allen, a Professor of Cognitive Science and the History and Philosophy of Science in the College of Arts and Sciences at Indiana University, Bloomington, USA, to find out what all the fuss is about. The book discusses many of the issues about why and how machines in the near future will make ethical decisions about our lives. They are not talking about some far-fetched super intelligent machine that is conscious. They are talking about machines based on today's technology. The book is written in a very straightforward, interesting and non-technical style. Although there are parts where I don’t agree with them, it is in everyone's interest to read this. It will make you better informed about a possible future that you may want to have opinions about - it is in your own interest.

For the next week the team fed the developing brain a liquid containing nutrients and minerals. And once the neurons established a network sufficiently capable of responding to electrical inputs from the electrode array, they connected the newly formed brain to a simple robot body consisting of two wheels and a sonar sensor.

According to the report, the robot learned to avoid obstacles in its course through Hebbian learning, the strengthening and weakening of synaptic connections between the neurons.
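The Hebbian rule the report describes (connections between co-active neurons strengthen, while unused connections weaken) can be sketched in a few lines. The network size, learning rates, and firing statistics below are illustrative assumptions, not the parameters of the actual cultured-neuron experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 8
weights = np.zeros((n_neurons, n_neurons))
eta = 0.1      # Hebbian learning rate
decay = 0.01   # passive weakening of connections that are not co-active

# Simulate correlated firing: neuron 0 and neuron 1 tend to fire together,
# as a sonar "obstacle detected" neuron and a "turn" motor neuron might.
for _ in range(200):
    activity = (rng.random(n_neurons) < 0.2).astype(float)
    if activity[0]:
        activity[1] = 1.0  # neuron 1 fires whenever neuron 0 does
    # Hebbian update: strengthen co-active pairs, let all others decay
    weights += eta * np.outer(activity, activity) - decay
    np.clip(weights, 0.0, 1.0, out=weights)
    np.fill_diagonal(weights, 0.0)

# The correlated pair ends up far stronger than uncorrelated pairs
print(weights[0, 1], weights.mean())
```

The effect is that repeated "obstacle seen, then turned" episodes wire the sensor-to-motor pathway more strongly, which is the mechanism by which the report says the robot's avoidance behavior emerged.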

Thursday, April 2, 2009

The literature of the newly emerging area of robot ethics frequently mentions that one benefit of trying to build moral robots is what we stand to learn about ethics in the case of human beings. Yet there is little mention of any specifics. The purpose of this special issue, "Robot Ethics and Human Ethics," is to address this matter explicitly and to ask what we can learn about human ethics from our attempt to build moral machines. To this end, papers are requested that explore various facets of this question, including, among others, what insight robot ethics might offer into:

1. how to deal with conflicting moral claims on our behavior,

2. whether ethical practice is essentially algorithmic, perhaps following from the application of moral theories,

3. the role that internal, psychological factors like desire and emotion play in the way we make decisions,

4. the possibility of special moral neurological circuitry used in determining human behavior,

5. the extent to which ethics in humans might emerge from evolutionary pressures keyed to certain environmental props or social circumstances,

6. or the role that enculturation might play in determining behavior by establishing norms and other expectations for conduct.

We are also interested in papers that address whether the attempt to build moral machines is demonstrating (or will demonstrate) that human ethics requires something well beyond the capacity of machines, or whether, perhaps, this very attempt is exposing weaknesses in our existing conceptions of (human) ethics that might be repairable following insights that arise in the process of experimenting with machines. To be clear, this issue is not dedicated to papers primarily about how to build moral machines, save insofar as mention of such is necessary for illuminating human morality.

The editors at Ethics and Information Technology are seeking articles for a special issue in this area. Submissions will be double-blind refereed for relevance to the theme as well as academic rigor and originality. High quality articles not deemed to be sufficiently relevant to the special issue may be considered for publication in a subsequent non-themed issue.

Closing date for submissions: September 1st, 2009.

To submit your paper, please use the online submission system, to be found at www.editorialmanager.com/etin.

There will be a workshop on this topic hosted at Delft University of Technology when the special issue comes out.

Please contact the special guest editor for any information regarding this special issue,

Anthony Beavers: afbeavers@gmail.com

Or the managing editor,

Noëmi Manders-Huits: N.L.J.L.Manders-Huits@tudelft.nl

Ethics and Information Technology (ETIN) is the major journal in the field of moral and political reflection on Information Technology. Its aim is to advance the dialogue between moral philosophy and the field of information technology in a broad sense, and to foster and promote reflection and analysis concerning the ethical, social and political questions associated with the adoption, use, and development of IT.