With more and more robots in the military's active service, the New York Times has now run an article about the ongoing ethical debate over future autonomous military robots that may make their own decisions about life or death on the battlefield. The article, entitled "A Soldier, Taking Orders From Its Ethical Judgment Center", highlights the views of numerous experts including Noel Sharkey, Daniel Dennett and Ronald Arkin, whose research hypothesis is that "intelligent robots can behave more ethically in the battlefield than humans currently can".

Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, argues that the steady increase in the use of robots in day-to-day life poses unanticipated risks and ethical problems.

Prof Sharkey shrugs off doomsday scenarios in books such as Isaac Asimov's I, Robot about the threatening interaction between robots and humans, or in movies such as The Terminator, in which robots take over the world.

Such story lines will remain firmly in the realm of fantasy, even as societies hurtle towards greater automation, he said.

'I have no concern whatsoever about robots taking control. They are dumb machines with computers and sensors and do not think for themselves despite what science fiction tells us,' he said.

'It is the application of robots by people that concerns me and not the robots themselves.'

Let's assume that the doomsday scenarios portrayed in popular culture are indeed far off, that robots are far from outsmarting humans, and that they are no threat to humanity. If what we are worried about is truly the application of robots by people, then how is this problem different from the application of other types of technology?

Let's look at a few examples:

According to the Straits Times article you linked,

Professor Sharkey worries how robots - and particularly the people who control them - will be held accountable when the machines work with 'the vulnerable', namely children and the elderly ...

What makes robots special? Television sets also "work" with children (to stick with the strange terminology). They have been around for many years, and as we became accustomed to the technology we learned how to integrate it into our lives. This did take some time, but it required little in the way of ethical guidelines or special legislation.

As a second example, consider the semi-autonomous war robots currently on duty in Afghanistan and Iraq. They are just another form of smart weapon, like a torpedo, a laser-guided missile, or a smart bomb. Again, similar systems have been around for many years.

Should we instead be having a much larger discussion about taking humans out of the loop in any system, robotic or non-robotic: cars, airplanes, or tanks?