Neuroscience, Cognitive Science, Brain Science: whatever you call it, over the last ten years amazing advances in brain imaging and neural recording techniques have led to a revolution in how we think about thinking. Every issue of popular science magazines such as New Scientist or Scientific American features new discoveries in brain function: What is déjà vu? How might you see sound? Why would someone disown part of their own body?

Increasingly, cognitive neuroscientists are venturing into domains once thought to lie beyond the limits of experimental enquiry. Perhaps data now exist to answer questions previously considered only by philosophers. Are moral beliefs absolute? Is the concept of God a natural consequence of our neural circuitry? Can the mind exist distinct from any physical reality? How could we ever decide?

Announcing a 2-day workshop on “Ethical Guidance for Research and Application of Pervasive and Autonomous Information Technology (PAIT)” March 3-4, 2010. The workshop will be a culminating event of a year-long process of planning, case development and analysis, and networking among information technology engineers and researchers, ethicists, and other interested persons. The workshop is funded by the National Science Foundation (grant number SES-0848097) and sponsored by Indiana University’s Poynter Center for the Study of Ethics and American Institutions and the Association for Practical and Professional Ethics.

Confirmed speakers include Helen Nissenbaum, Associate Professor in the Department of Culture and Communication and Senior Fellow of the Information Law Institute, New York University; and Fred H. Cate, Distinguished Professor and C. Ben Dutton Professor of Law, IU School of Law, and Director of the Center for Applied Cybersecurity Research, Indiana University Bloomington.

Technologies being developed today, using very small, relatively inexpensive, wireless-enabled computers and autonomous robots, will most likely result in the near-omnipresence of information-gathering and -processing devices embedded in clothing, appliances, carpets, food packaging, doors and windows, paperback books, and other everyday items, gathering data about when and how (and possibly by whom) an item is used. The data can be analyzed, stored, and shared via the Internet. Some of these pervasive technologies will also be autonomous, making decisions on their own about what data to gather and share, which actions to take (sound an alarm, lock a door), and the like.
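As a rough illustration of that autonomous behavior (every name and threshold below is invented for the sketch, not taken from the workshop materials), a per-reading decision policy for such a device might look like:

```python
# Hypothetical sketch: how an embedded pervasive device might decide,
# on its own, what to do with each sensor reading.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str      # e.g. "door", "carpet", "package"
    value: float     # normalized sensor value, 0.0-1.0
    shared: bool = False

def decide(reading: Reading, alarm_threshold: float = 0.8) -> str:
    """Return the action an autonomous node takes for one reading."""
    if reading.value >= alarm_threshold:
        return "sound_alarm"        # act locally, no human in the loop
    if reading.value >= 0.5:
        reading.shared = True       # forward the data for remote analysis
        return "share_via_internet"
    return "store_locally"          # keep low-interest data on the device

actions = [decide(Reading("door", v)) for v in (0.2, 0.6, 0.9)]
```

The point of the sketch is only that the ethically relevant choices (what to share, when to act) are made inside the device itself, which is exactly why the workshop wants ethicists involved at the design stage.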

The potential benefits of pervasive and autonomous information technology (PAIT) are many and varied, sometimes obvious, sometimes obscure – as are the ethical implications of its development and deployment. The history of information technology suggests that long-standing issues such as usability, privacy, and security, as well as relatively new phenomena such as ethically blind autonomous systems, are best addressed early enough to become part of the culture of the researchers and engineers responsible for identifying needs and designing solutions.

This project will create a firm ethical foundation for this nascent field by convening an international meeting of experts in PAIT, ethicists well versed in practical ethics, and other stakeholders. The meeting will feature discussions of previously prepared case studies describing actual and anticipated uses of PAIT, invited presentations on key issues, working groups to identify and categorize ethical concerns, and other activities aimed at community-building and formulating ethical guidance to help researchers and designers of such systems recognize and address ethical issues at every stage, from design to deployment to obsolescence. The participants will form the core of a new interdisciplinary subfield of value-centered PAIT, which will develop guidelines and conceptual tools to support communication and collaboration among researchers, engineers, and ethicists.

The Planning Committee (see http://poynter.indiana.edu/pait) is actively seeking experts interested in joining one or more informal working groups to help prepare for the workshop; if you are interested in being involved, please get in touch with the project director (see contact information below).

The PAIT workshop will precede the annual meeting of the Association for Practical and Professional Ethics, which will begin on Thursday, March 4, 2010 at the historic Hilton Cincinnati Netherland Plaza in Cincinnati, Ohio.

Registration will be required for attendance at the PAIT workshop, but there will be no registration fee. PAIT participants are also encouraged to register to attend and participate in the Association’s annual meeting (see http://www.indiana.edu/~appe/).

Tuesday, June 23, 2009

Investigators looking into the deadly crash of two Metro transit trains focused Tuesday on why a computerized system failed to halt an oncoming train, and why the train failed to stop even though the emergency brake was pressed.

This isn't the first time that Metro's automated system has been called into question. In June 2005, Metro experienced a close call because of signal troubles in a tunnel under the Potomac River. A train operator noticed he was getting too close to the train ahead of him even though the system indicated the track was clear. He hit the emergency brake in time, as did the operator of another train behind him.

Details of the attack, which occurred in Makeen, remained unclear, but the reported death toll was exceptionally high. If the reports are indeed accurate and if the attack was carried out by a drone, the strike could be the deadliest since the United States began using the aircraft to fire remotely guided missiles at members of the Taliban and Al Qaeda in the tribal areas of Pakistan.

[T]he Air Force is planning to build a more selective breed of military drones, with swarms of bird-size bots shadowing targets and new unmanned aerial vehicles (UAVs) capable of launching mini-missiles at multiple targets at once. The mechanized assassin, it seems, is about to become a lot more professional.

Like most UAVs, these robots would most likely be used for surveillance and reconnaissance. But in an animated clip released by the Air Force late last year, a MAV lands on an enemy sniper, and, without so much as a prayer to its machine god, detonates itself. The new Air Force briefing doesn't elaborate on this miniature suicide-bomber concept, but it does include plans to have flocks of sparrow-size MAVs airborne by 2015, and even smaller, dragonfly-size robots by 2030. And with the recent news that Israel is developing an explosives-laden snakebot, the writing is on the wall: You can run from tomorrow's robotic hitmen, and you can hide, and they'll flap or squirm or glide into position and kill you anyway.

Bioengineers at Duke University have developed a laboratory robot that can successfully locate tiny pieces of metal within flesh and guide a needle to their exact location -- all without the need for human assistance.

This robot may be the harbinger of systems capable of placing and removing radioactive "seeds" for the treatment of prostate cancer.

Will A Machine Replace You? – Courtney Boyd Myers
AI in The C-Suite – Dale Addison
AI And What To Do About It – Ben Goertzel
The Coming Artilect War – Hugo de Garis
The Ethical War Machine – Patrick Lin
Intelligence Evolution – Barry Ptolemy

Wednesday, June 17, 2009

Ever since 9/11, securing cargo containers has seemed a nightmarish task. Now robotic ferrets have been enlisted to inspect cargo containers. The ferrets will help detect radioactive materials, drugs, and explosives, as well as illegal immigrants smuggled within the containers.

Dubbed the "cargo-screening ferret" and designed for use at seaports and airports, the device is being worked on at the University of Sheffield in the United Kingdom with funding from the Engineering and Physical Sciences Research Council (EPSRC)... The ferret will be the world's first cargo-screening device able to pinpoint all kinds of illicit substances and the first designed to operate inside standard freight containers. It will be equipped with a suite of sensors more comprehensive and sensitive than any currently employed in conventional cargo scanners.

Noel Sharkey published a piece in the Daily Telegraph on the need to consider the moral consequences of developing mechanical soldiers. He writes, in an article titled "March of the killer robots," that:

Despite planned cutbacks in spending on conventional weapons, the Obama administration is increasing its budget for robotics: in 2010, the US Air Force will be given $2.13 billion for unmanned technology, including $489.24 million to procure 24 heavily armed Reapers. The US Army plans to spend $2.13 billion on unmanned vehicle technology, including 36 more Predators, while the Navy and Marine Corps will spend $1.05 billion, part of which will go on armed MQ-8B helicopters.

[I]n Waziristan, where there have been repeated Predator strikes since 2006, many of them controlled from Creech Air Force Base, thousands of miles away. According to reports coming out of Pakistan, these have killed 14 al-Qaeda leaders and more than 600 civilians.

Such widespread collateral damage suggests that the human remote-controllers are not doing a very good job of restraining their robotic servants. In fact, the role of the "man in the loop" is becoming vanishingly small, and will disappear. "Our decision power [as controllers] is really only to give a veto," argues Peter Singer, a senior fellow at the Brookings Institution in Washington DC. "And, if we are honest with ourselves, it is a veto power we are often unable or unwilling to exercise because we only have a half-second to react."
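Singer's point about a half-second veto can be made concrete with a toy sketch (the function names, timings, and polling interval are assumptions for illustration, not a description of any real weapons system): the default is that the system acts, and the human's only role is to interrupt within a narrow window.

```python
# Illustrative only: a "man in the loop" reduced to a veto. The system
# proceeds unless a veto arrives before the window closes.
import time
from typing import Callable

def fire_unless_vetoed(check_veto: Callable[[], bool],
                       window_s: float = 0.5,
                       poll_s: float = 0.05) -> str:
    """Poll the human controller for up to window_s seconds, then act."""
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if check_veto():            # the only decision power left to the human
            return "aborted"
        time.sleep(poll_s)
    return "engaged"                # no veto in time: the system proceeds

# A controller who never reacts within the window:
result = fire_unless_vetoed(lambda: False, window_s=0.2)
```

Notice that inaction and approval are indistinguishable here, which is precisely why Singer doubts the veto is exercised in practice.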

Tuesday, June 16, 2009

Asia-Pacific Computing and Philosophy 2009 will be held on October 1st-2nd, 2009 in Tokyo, Japan. The conference will be hosted at the University of Tokyo's Sanjo Conference Hall. Keynote speeches will be given by Professor Hiroshi Ishiguro (Osaka University) and Professor Shinsuke Shimojo (Caltech). This year AP-CAP 2009 will be held in conjunction with the Devices that Alter Perception workshop, which will form a special track. The conference will also feature a special track on roboethics.

The call for papers, information for attendees, Word and LaTeX templates, online paper submission form, and registration are all hosted at:

http://ia-cap.org/ap-cap09/

Following acceptance, papers will be made available online for commentary and also public voting in order to award the AP-CAP 2009 best paper prize.

SUBMISSIONS

Authors are invited to submit an extended abstract limited to 1,000 words. The deadline for abstract submission is July 15th, 2009 at 23:59 GMT. At submission time, authors should indicate a track for abstract consideration. Camera-ready papers are due on September 15th and should be A4 paper size, less than 10 pages, and under 2 megabytes in size.

SPECIFICS FOR THE ROBOETHICS TRACK
Track Chair: Jorge SOLIS

Nowadays, with recent technological breakthroughs in developing human-like robots, medical robots, etc., it is possible to conceive of intelligent machines that can autonomously perform specific tasks. More recently, the introduction of personal robots designed to coexist with humans is coming closer to reality. New challenges therefore arise in introducing robots to application fields outside industry. The goals of the track are to: (1) understand the ethical, social, and legal aspects of the design, development, and deployment of robots; (2) engage in a critical analysis of the social implications of robots; and (3) increase the convergence of roboticists, computer scientists, philosophers, etc.

ORGANIZERS

AP-CAP 2009 is sponsored by the International Association for Computing and Philosophy. The conference is organized by the University of Tokyo Meta-Perception Research Group, Oxford University Information Ethics Research Group, and University of Hertfordshire Group in Philosophy of Information.

Attendees who are members of IACAP will enjoy a discounted conference fee. We encourage interested parties to join IACAP prior to the September 1st early registration deadline. You can find more information about membership at the IACAP website:

http://ia-cap.org/membership.php

REGISTRATION

On-line registration will be available at the AP-CAP 2009 website:

http://ia-cap.org/ap-cap09/attending.php

The conference registration fees provide a discount for early registration (before September 1st) as well as a discount for IA-CAP members. Registration fees will be payable in US dollars. In the case of on-site registration we will accept credit card payment or cash.

Monday, June 8, 2009

One advantage of unmanned drones is their ability to stay aloft for long periods of time. This has turned out to be quite useful in the extended surveillance of drug traffickers. A Heron UAV has been deployed in drug interdiction off the coast of El Salvador, reports a TIME article by Tim Padgett titled "Using Drones in the Drug War." The Heron is capable of staying airborne for more than 20 hours at 15,000 feet while it streams back high-resolution real-time video.

Cost savings from the use of drones, as well as placing fewer drug agents' lives in jeopardy, may make funding an expansion of the drone fleet in the drug war irresistible to Congress. From a civil liberties perspective, the use of drones raises concerns as to whether they might also be deployed in ways that violate privacy laws or transgress other civil rights.

Furthermore,

the Heron isn't without problems. The Turkish military complained last month about mishaps with the drones it had bought from IAI for counterterrorism surveillance, such as the drones too often failing to respond to commands from their human operators on the ground. (IAI rejected the claims but has promised to "rectify" any problems.) U.S. Customs & Border Protection has used Predator drones in recent years to detect illegal immigration, but a series of crashes has clouded the program.

It is too early to know what caused the crash of Air France Flight 447; however, there is already speculation about a computer glitch. This was presumably a system failure rather than an action initiated by the computer. An article titled "Could a Computer Glitch Have Brought Down Air France 447", written by TIME correspondent Jeffrey T. Iverson, is available online at Yahoo! News.

Friday, June 5, 2009

Moral Machines: Teaching Robots Right from Wrong is the first book-length discussion of issues arising in the nascent field of Machine Ethics, offered by two of its more veteran thinkers. The authors do an admirable job at using language accessible to an interdisciplinary audience, which also makes the book open to a more general public readership. It will be of interest to anyone concerned with the ethical, social, and engineering issues that accompany the quest to develop machines that can act autonomously out in the world.

As a response to the expanding (and seemingly limitless) scope of artificial intelligence and robotics research, a surge of recent work has focused on issues related to the development of artificial moral agents (AMAs) (or moral machines or ethical robots). These robots will (or in certain cases already do) have the capacity to perform ethically relevant actions out in the world, in varying ways and with varying degrees of autonomy. As the capacities of such robots increase, so too should our demand that such machines act ethically. The cutting-edge discipline of Machine Ethics--made up of engineers, artificial intelligence researchers, and philosophers--is important because it investigates whether or not the development of AMAs is possible (and desirable), and helps us to prepare just in case it is.

The main themes of Moral Machines are twofold: the motivations we have for creating AMAs, and how we should go about developing machines that behave ethically. Each chapter of the book focuses on certain specific issues that need to be attended to if the project of Machine Ethics is to be successful. Some of the more noteworthy questions posed by Wallach and Allen include: 'Is machine morality necessary?', 'Can robots be moral?', 'Does humanity want machines making moral decisions?', 'What are the roles of engineers and philosophers in the design of AMAs?', 'What methods and moral frameworks are best suited for the design of AMAs?', and 'How can machine morality inform human morality?'. Through their attempt to answer these questions, the authors offer a detailed and thorough survey of the relevant research being done on machine morality, and offer preliminary (and often quite insightful) answers to these and other questions (although they humbly admit that much more work needs to be done in the future).

The authors also make some more substantial claims about how ethics could be implemented into machines. For example, after discussing the benefits and shortcomings of both top-down (rule-based) and bottom-up (evolution- or learning-based) approaches to the design of moral robots, the authors spend some time arguing for a hybrid approach (Ch. 8 and Ch. 11). One example suggested by the authors is an approach that appeals to a virtue ethical framework, since virtue ethics focuses on virtuous character traits which are acquired through training and habit formation (and hence may accommodate both top-down and bottom-up computational approaches). The authors argue that a hybrid approach holds much promise for overcoming the problems associated with pure top-down and bottom-up approaches to implementing ethics into machines. This proposal has some initial appeal and plausibility, and warrants the attention of further research.
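The hybrid idea can be made concrete with a toy sketch. Everything below is invented for illustration (the rules, features, weights, and threshold are not from Wallach and Allen): a top-down component encodes hard prohibitions, while a bottom-up component scores actions with weights a training process might have produced.

```python
# Toy hybrid moral evaluator: top-down rule filter + bottom-up learned score.
RULES = {"deceive", "harm_human"}  # hard top-down prohibitions

# "Bottom-up" component: feature weights a learning process might yield.
LEARNED_WEIGHTS = {"helps_user": 0.7, "respects_privacy": 0.3}

def evaluate(action: dict) -> bool:
    """Permit an action only if no rule forbids it AND its learned score is high."""
    # Top-down: any triggered prohibition blocks the action outright.
    if any(action.get(rule) for rule in RULES):
        return False
    # Bottom-up: score the action on the trained feature weights.
    score = sum(w * action.get(f, 0.0) for f, w in LEARNED_WEIGHTS.items())
    return score >= 0.5

ok = evaluate({"helps_user": 1.0, "respects_privacy": 0.5})
blocked = evaluate({"helps_user": 1.0, "deceive": True})
```

The sketch also shows why the authors find the hybrid attractive: the rule filter supplies the reliability of a top-down approach, while the scored features leave room for the flexibility a bottom-up, training-based component provides.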

Despite the value of the book as a whole, a few critical notes are worth mentioning. For one thing, although the authors touch upon issues surrounding the nature of moral agency, they do so only somewhat superficially, leaving many of the more complex and important issues unattended (and unresolved). For example, there is a rich debate over whether or not consciousness is a necessary condition for being a moral agent, and, if so, whether robots could be sufficiently conscious so as to possess moral agency (akin to humans, perhaps). Although the authors do mention the issue of machine consciousness (and moral agency in general), they do so only in passing (Ch. 4).

Furthermore, although the authors discuss the relationship between ethics and engineering, and the different (and often conflicting) roles of ethicists and engineers, the authors seem to champion the task of the engineer. In other words, although the book is devoted to the topic of machine morality, the authors focus primarily on the design, implementation, and engineering aspects of creating AMAs, with the consequence of leaving other (ethical) issues by the wayside. For example, in their discussion of which sort of ethic we should implement into machines, the authors focus on which frameworks work best in terms of their computability or implementability. There is no doubt that this issue is important. Yet certain ethical questions may demand attention, prior to the implementation stage. For example, whether the moral codes we are trying to implement into our machines allow for the development of those types of machines is never asked. Moreover, from an engineering perspective, the moral frameworks appealed to for designing AMAs are assessed based solely on whether they are conducive to implementability. Yet, ethicists may be reluctant to accept that all (or most) moral frameworks start on an even playing field, the problem simply being a matter of which frameworks are most conducive to implementation. Some discussion of the longstanding debates in Ethics between competing moral frameworks may be necessary here. Although the authors argue for a hybrid approach to designing AMAs, perhaps one that adopts a virtue-based moral framework, they do not ask whether we would want our machines to be virtuous, in the sense that virtue ethics is the best moral framework on offer (as compared to duty-based or consequentialist ethics, for example).

Despite these unattended issues, Moral Machines represents a valuable addition to, and extension of, the current literature on machine morality. As the development of autonomous artificial moral agents becomes closer to being realized, I suspect that this book will only gain in importance.

Monday, June 1, 2009

P.W. Singer, Wendell Wallach, Pablo Garcia, and Robert Anderson were all interviewed for a public radio show produced by the SETI Institute. The show, titled Robots Call the Shots, is available online now. The interviewers for SETI are Seth Shostak and Molly Bentley.