Monday, June 28, 2010

In September, Springer will publish a special issue of Ethics and Information Technology dedicated to "Robot Ethics and Human Ethics." The first two paragraphs of the editorial are offered here, along with the Table of Contents.

It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this claim appears as an afterthought, as if authors say it merely to justify the field, but this is not the case. At bottom is the question of what we must know about ethics in general in order to build machines that operate within normative parameters. Fuzzy intuitions will not do where the specifics of engineering and computational clarity are required. So machine ethicists are forced to engage head-on in moral philosophy. Their effort, of course, hangs on a careful analysis of ethical theories, the role of affect in making moral decisions, relationships between agents and patients, and so forth, including the specifics of any concrete case. But there is more here to the human story.

Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans could go about things. But in a very real and salient sense, research in machine morality provides a test bed for theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.

"Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making" – Wendell Wallach

Saturday, June 26, 2010

Wendell Wallach will be giving the keynote talk at the plenary session of the World Future Society Conference in Boston on July 8th. The title of the talk will be "Navigating the Future: Moral Machines, Techno Humans, and the Singularity." Other speakers at WorldFuture 2010: Sustainable Futures, Strategies, and Technologies will be Ray Kurzweil, Dennis Bushnell, and Harvey Cox.

A Chicago-based company called Tanagram Partners is currently developing military-grade augmented reality technology that - if developed to the full potential of its prototypes - would completely change the face of military combat as we know it. . . . First of all, the company is developing a system of lightweight sensors and displays that collect and provide data from and to each individual soldier in the field. This includes a computer, a 360-degree camera, UV and infrared sensors, stereoscopic cameras and OLED translucent display goggles.

With this technology - all housed within the helmet - soldiers will be able to communicate with a massive "home base" server that collects and renders 3D information onto the wearer's goggles in real time. With the company's "painting" technology, various objects and people will be outlined in a specific color to alert soldiers to things like friendly forces, potential danger spots, impending air-raid locations, rendezvous points and much more.

Ray Kurzweil's movie, The Singularity is Near: A True Story, has been released. The film received a Best Special Effects award and a Second Place Audience Award at the Breckenridge Film Festival. I will be attending the film's NY debut on June 24th, and will post brief comments soon after.

We are posting this submitted blog and invite others to offer differing opinions. Guest bloggers are not required to have an in-depth understanding of the field of inquiry covered by the Moral Machines blog.

The first thing that struck me when I visited this blog for the first time was the title – if ever there was a delicious yet appropriate oxymoron, this is it. Since when did machines and morality go hand in hand? Whenever we talk of technology and its intrusiveness in all aspects of our lives, we hold back from going gaga over the machines because they lack ethical values and intuitive sense. In a nutshell, they lack innate qualities of humanness like the ability to discern right from wrong based on ethics, kindness, morals, and a host of other factors that must be taken into consideration.

Take for example a court of law – if a person is on trial for murder, they are not automatically sentenced to the death penalty or life imprisonment. The circumstances under which the murder was committed are taken into consideration – some people do it in cold blood after planning it out carefully; others are psychopaths who take pleasure in the acts of torture and killing; and yet others are victims of circumstances and are provoked into killing either to defend themselves or because they are so incensed that they don’t realize what they are doing until it’s too late.

A machine could probably pronounce the right verdict if you feed in the circumstances and the associated punishments, but what if there are extenuating circumstances? What if the murder was planned, but only because the culprit was so badly affected by the victim that he saw no other way but to eliminate him from this world? What if he was avenging the brutal rape and murder of his wife and young daughters? How would a machine judge him in such a case? Being a machine, it would not be able to accord enough importance to the anguish and mental agony of the murderer, who is himself actually the main victim here. Even human beings find it hard to make the right decision in such cases. So how on earth can a machine that is made of fiber and circuits have enough moral fiber to do right in such a tough call?

There’s no doubt that the march of the technological brigade is on at full speed; but as regards the suggestion that machines could replace humanity at some time in the future, there is much doubt and confusion as to what we can create and what those creations can do. Yes, they will be more efficient and dependable in a workhorse kind of way, but unless they are provided with human guidance and oversight when situations call for ethical and moral decisions, machines will be more detrimental than advantageous to society.

By-line: This article is contributed by Susan White, who regularly writes on the subject of Rad Tech Schools. She invites your questions and comments at her email address: susan.white33@gmail.com.

Pre-crime
In the film, "pre-cogs" can look into the future and inform the police (they have got no choice – they are stuck in baths in the basement). In 2008, Portsmouth city council installed CCTV linked to software that would note whether people were walking suspiciously slowly. University researchers had already realised in 2001 that, if you recorded the walking paths of people in car parks, you could spot the would-be thieves simply: they didn't walk directly to a car, but instead ambled around with no apparent target. That is because, unlike everyone else in a car park, they weren't going to their own car.
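The car-park observation boils down to comparing how far someone walked with how far they actually got. A minimal sketch of that heuristic (the function name, sample paths, and threshold are all illustrative assumptions, not the researchers' actual method):

```python
from math import hypot

def directness(path):
    """Ratio of straight-line displacement to total distance walked.

    path: list of (x, y) positions sampled over time.
    A purposeful walker heading straight to their car scores near 1.0;
    someone ambling around with no apparent target scores much lower.
    """
    if len(path) < 2:
        return 1.0
    total = sum(hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in zip(path, path[1:]))
    straight = hypot(path[-1][0] - path[0][0], path[-1][1] - path[0][1])
    return straight / total if total else 1.0

# A driver walking straight to their car vs. someone wandering the rows
purposeful = [(0, 0), (5, 0), (10, 0), (15, 0)]
wandering = [(0, 0), (5, 0), (5, 5), (0, 5), (0, 10), (5, 10)]
print(directness(purposeful))  # 1.0
print(directness(wandering))   # about 0.45
```

A real system would of course have to cope with noisy tracking and people who legitimately forget where they parked, so any deployed threshold would flag candidates for human review rather than deliver verdicts.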

That's not the end: Nick Malleson, a researcher at the University of Leeds, has built a system that can predict the likelihood of a house being broken into, based on how close it is to routes that potential burglars might take around the city; he is meeting Leeds council this week to discuss how to use it in new housing developments, to reduce the chances of break-ins. So although pre-crime systems can't quite predict murder yet, it may only be a matter of time.
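Malleson's model presumably weighs many factors, but the core idea reported here, risk falling off with a house's distance from likely offender routes, can be sketched in a few lines. Everything below (names, the routes, the decay scale) is a hypothetical illustration, not his actual system:

```python
from math import hypot

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def burglary_risk(house, routes, scale=50.0):
    """Toy risk score in (0, 1]: houses close to a likely offender
    route score high; remote houses score near zero."""
    d = min(point_segment_distance(house, a, b)
            for route in routes for a, b in zip(route, route[1:]))
    return 1.0 / (1.0 + d / scale)

# Two hypothetical routes through a neighbourhood (metres)
routes = [[(0, 0), (100, 0)], [(0, 0), (0, 100)]]
print(burglary_risk((10, 5), routes))    # near a route: about 0.91
print(burglary_risk((500, 500), routes)) # remote: well under 0.1
```

A score like this could feed directly into planning decisions for new housing developments, e.g. flagging plots whose risk exceeds a chosen threshold.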

Spider robots
The US military is developing "insect robots", with the help of British Aerospace. They actually have eight legs (so, really, arachnid robots) and will be able to reconnoitre dangerous areas where you don't want to send a human, such as potentially occupied houses.

"Our ultimate goal is to develop technologies that will give our soldiers another set of eyes and ears for use in urban environments and complex terrain; places where they cannot go or where it would be too dangerous," Bill Devine, advanced concepts manager with BAE Systems, told World Military Forum. Give it 10 years and they will be there.

Thursday, June 17, 2010

IBM's Watson computer has been designed to play the TV game show Jeopardy, and it wins quite often. The New York Times has a feature article on Watson by Clive Thompson, whose title, "What Is I.B.M.'s Watson?", plays on Jeopardy's answer-and-question format. While computers like Watson represent a dramatic step toward Turing-level computing, they have clear limitations and can be problematic if used inappropriately.

Watson can answer only questions asking for an objectively knowable fact. It cannot produce an answer that requires judgment. It cannot offer a new, unique answer to questions like “What’s the best high-tech company to invest in?” or “When will there be peace in the Middle East?” All it will do is look for source material in its database that appears to have addressed those issues and then collate and compose a string of text that seems to be a statistically likely answer. Neither Watson nor Wolfram Alpha, in other words, comes close to replicating human wisdom.

CULTURALLY, OF COURSE, advances like Watson are bound to provoke nervous concerns too. High-tech critics have begun to wonder about the wisdom of relying on artificial-intelligence systems in the face of complex reality. Many Wall Street firms, for example, now rely on “millisecond trading” computers, which detect deviations in prices and order trades far faster than humans ever could; but these are now regarded as a possible culprit in the seemingly irrational hourlong stock-market plunge of the spring. Would doctors in an E.R. feel comfortable taking action based on a split-second factual answer from a Watson M.D.?
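The article's point that Watson collates source material into "a statistically likely answer" can be illustrated with a toy word-overlap scorer. This is a deliberately crude sketch with a made-up three-sentence corpus; Watson's actual DeepQA pipeline is vastly larger and more sophisticated:

```python
from collections import Counter

# Hypothetical mini-corpus standing in for Watson's database
corpus = [
    "The Louvre in Paris houses the Mona Lisa.",
    "Mount Everest is the highest mountain on Earth.",
    "Thomas Edison invented the phonograph in 1877.",
]

def tokens(text):
    """Lowercase bag of words with simple punctuation stripped."""
    return Counter(text.lower().replace("?", "").replace(".", "").split())

def score(question, passage):
    """Count question words that also appear in the passage."""
    q, p = tokens(question), tokens(passage)
    return sum(min(q[w], p[w]) for w in q)

def best_passage(question):
    """Return the passage that statistically best matches the question."""
    return max(corpus, key=lambda passage: score(question, passage))

print(best_passage("Who invented the phonograph?"))
# Thomas Edison invented the phonograph in 1877.
```

The limitation the article describes falls out immediately: a question like "What's the best high-tech company to invest in?" would still return whichever passage happens to share the most words, not a judgment.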

Back in the day, the founders of iRobot were against the weaponization of robots. Perhaps business and financial pressures are pushing the boundaries. Indeed, the military market is becoming ever more important, according to the company's first-quarter results. Finances were very tight in 2009, and many engineers, who in my professional opinion are very talented individuals, are being paid below industry wage standards. iRobot probably sees military systems as a market it will have to explore and expand.