Monday, December 28, 2009

Mindclones—consciousness in post-biological media—will feel as full of life as we biological creatures do.

The differences between organic and cybernetic life are less important than their similarities. Both are mathematical codes that organize a compatible domain to perform functions that must ultimately result in reproduction. For organic life, the code is written in molecules and the domain is the natural world. For cybernetic life, the code is written in voltage potentials and the domain is the IT world. We call organic life biology. It seems fitting to call cybernetic life vitology.

Mindclones are alive, just not the kind of life we are accustomed to. They are functionally alive, albeit with a different structure and substance than has ever existed before. Yet that is the story of life. Before there were nucleated cells, eukaryotes (of which we are composed), such things had never been seen – not for nearly two billion years. That is how long bacteria had an exclusive claim to life on earth. Before there were multicellular creatures there were only single-celled creatures – from their perspective, the first slime molds were not so much a life form as a community of single-celled creatures. And so the story goes, down through the descent of man. We must judge life by whether it streams order upon itself – self-replicates pursuant to a Darwinian code and maintains itself against the tendency to disassemble – and not get picky over what it looks like or what flavor of Darwinian code it uses. By this objective yardstick, vitology will be alive. Mindclones, sitting at the apex of vitology, will feel as full of life as we do from our perch atop the summit of biology. Aware of themselves, with the emotions, autonomy and concerns of their forebears, mindclone consciousness will bubble as frothily alive as ours does.

John S. Canning has proposed that "armed unmanned systems offer us the opportunity to break this centuries-old paradigm of warfare, if we design them to target an enemy’s weapons instead of the people who are employing them." Canning is Chief Engineer of the G80 Division at the Naval Surface Warfare Center, Dahlgren Division. A presentation by Canning was brought to our attention by our sister blog, tin can thoughts.

Armin Krishnan, Visiting Professor for Security Studies at the University of Texas at El Paso and author of Killer Robots: Legality and Ethicality of Autonomous Weapons, was interviewed by Gerhard Dabringer.

In your recent book “Killer Robots: The Legality and Ethicality of Autonomous Weapons” you explore the ethical and legal challenges of the use of unmanned systems by the military. What would be your main findings?

The legal and ethical issues involved are very complex. I found that the existing legal and moral framework for war as defined by the laws of armed conflict and Just War Theory is utterly unprepared for dealing with many aspects of robotic warfare. I think it would be difficult to argue that robotic or autonomous weapons are already outlawed by international law. What does international law actually require? It requires that noncombatants are protected and that force is used proportionately and only directed against legitimate targets. Current autonomous weapons are not generally capable of distinguishing between legitimate and illegitimate targets, but does this mean that the technology could not be used discriminately at all, or that it will not improve to the point where it is as good as or even better than a human at deciding which targets to attack? Obviously not. How flawlessly would the technology be required to work, anyway? Should we demand one hundred percent accuracy in targeting decisions? That would be absurd considering only the most recent Western interventions in Kosovo, Afghanistan and Iraq, where large numbers of civilians died as a result of bad human decisions and flawed conventional weapons that are perfectly legal. Could not weapons that are more precise and intelligent than present ones represent progress in terms of humanizing war?

I don’t think that there is at the moment any serious legal barrier to armed forces introducing robotic weapons, even weapons that are highly automated and capable of making their own targeting decisions. Whether a particular use violates international law would depend on the specific case in which the weapons are used. The development and possession of autonomous weapons is clearly not illegal in principle, and more than 40 states are developing such weapons, indicating some confidence that legal issues and concerns can be resolved in some way. More interesting are the ethical questions that go beyond formal legality. For sure, legality is important, but it is not everything. Many things or behaviors that are legal are certainly not ethical. So one could ask: if autonomous weapons can be legal, would it also be ethical to use them in war, even if they were better at making targeting decisions than humans? While the legal debate on military robotics focuses mostly on existing or likely future technological capabilities, the ethical debate should focus on a very different issue, namely the question of fairness and ethical appropriateness. I am aware that “fairness” is not a requirement of the laws of armed conflict and it may seem odd to bring up that point at all. Political and military decision-makers who are primarily concerned about protecting the lives of the soldiers they are responsible for clearly do not want a fair fight. This is a completely different matter for the soldiers who are tasked with fighting wars and who have to take lives when necessary. Unless somebody is a psychopath, killing without risk is psychologically very difficult. Teleoperators of the armed Predator UAVs actually seem to suffer from higher levels of stress than jet pilots who fly combat missions. Remote controlling, or rather supervising, robotic weapons is not a job well suited for humans or a job soldiers would particularly like to do.
So why not just leave tactical targeting decisions to an automated system (provided it is reliable enough) and avoid this psychological problem? This brings the problem of emotional disengagement from what is happening on the battlefield and the problem of moral responsibility, which I think is not the same as legal responsibility. Autonomous weapons are devices rather than tools. They are placed on the battlefield and do whatever they are supposed to do (if we are lucky). The soldiers who deploy these weapons are reduced to the role of managers of violence, who will find it difficult to ascribe individual moral responsibility for what these devices do on the battlefield. Even if the devices function perfectly and only kill combatants and only attack legitimate targets, we will not feel ethically very comfortable if the result is a one-sided massacre. Any attack by autonomous weapons that results in death could look like a massacre and be ethically difficult to justify, even if the target somehow deserved it. No doubt, it will be ethically very challenging to find acceptable roles and missions for military robots, especially for the more autonomous ones. In the worst case, warfare could indeed develop into something in which humans figure only as targets and victims and not as fighters and deciders. In the best case, military robotics could limit violence and fewer people will have to suffer from war and its consequences. In the long term, the use of robots and robotic devices by the military and society will most likely force us to rethink our relationship with the technology we use to achieve our ends. Robots are not ordinary tools; they have the potential for exhibiting genuine agency and intelligence. At some point soon, society will need to consider the question of which uses of robots are ethically acceptable. Though “robot rights” still look like a fantasy, soldiers and other people working with robots are already responding emotionally to these machines.
They bond with them and they sometimes attribute to the robots the ability to suffer. There could be surprising ethical implications and consequences for military uses of robots.

Do you think that automated weapon systems used under the premise of, e.g., John Canning’s concept (targeting the weapon systems used and not the soldier using them) or concepts like “mobility kill” or “mission kill” (where the primary goal is to deny the enemy his mission, not to kill him) are ethically practicable ways to reduce the application of lethal force in armed conflicts?

John Canning was not a hundred percent happy with how I represented his argument in my book, so I will try to be more careful in my answer. First of all, I fully agree with John Canning that less-than-lethal weapons are preferable to lethal weapons and that weapons that target “things” are preferable to weapons that target humans. If it is possible to successfully carry out a military mission without using lethal force, then it should be done in this way. In any case it is a very good idea to restrict the firepower that autonomous weapons would be allowed to control. The less firepower they control, the less damage they can cause when they malfunction or when they make bad targeting decisions. In an ideal case the weapon would only disarm or temporarily disable human enemies. If we could decide military conflicts in this manner, it would certainly be great progress in terms of humanizing war. I have no problem with this ideal. Unfortunately, it will probably take a long time before we get anywhere close to this vision. Nonlethal weapons have matured over the last two decades, but they are still not generally considered a reasonable alternative to lethal weapons in most situations. In conflict zones soldiers still prefer live ammunition to rubber bullets or Tasers, since real bullets guarantee an effect and nonlethal weapons don’t guarantee to stop an attacker. Pairing nonlethal weapons with robots offers a good compromise, as no lives would be at stake if the nonlethal weapons proved ineffective. On the other hand, it would mean allowing a robot to target humans in general. It is not very likely that robots will be able to distinguish between a human who is a threat and a human who isn’t. It is hard enough for a computer or robot to recognize a human shape – recognizing that a human is carrying a weapon and is a threat is much more difficult.
This means that many innocent civilians, who deserve not to be targeted at all, are likely to be targeted by such a robot. The effects of the nonlethal weapon would need to be very mild in order to make the general targeting of civilians permissible. There are still serious concerns about the long-term health effects of the Active Denial System, for example. Restricting autonomous weapons to targeting “things” would offer some way out of the legal dilemma of targeting innocent civilians, which is obviously illegal. If an autonomous weapon can reliably identify a tank or a fighter jet, then I would see no legal problem in allowing the weapon to attack targets that are clearly military. Then again it would depend on the specific situation and the overall likelihood that innocents could be hurt. Destroying military targets requires much more firepower than targeting individuals or civilian objects. More firepower always means greater risk of collateral damage. An ideal scenario for the use of such autonomous weapons would be their use against an armored column approaching through uninhabited terrain. That was a likely scenario for a Soviet attack in the 1980s, but it is a very unlikely scenario in today’s world. The adversaries encountered by Western armed forces deployed in Iraq or in Afghanistan tend to use civilian trucks and cars, even horses, rather than tanks or fighter jets. A weapon designed to autonomously attack military “things” is not going to be of much use in such situations. Finally, John Canning proposed a “dial-a-autonomy” function that would allow the weapon to call for help from a human operator in case lethal force is needed. This is some sort of compromise for the dilemma of giving the robot lethal weapons and the ability to target humans with nonlethal weapons, and of taking advantage of automation without violating international law. I do not know whether this approach will work in practice, but one can always be hopeful.
Most likely weapons of a high autonomy will only be useful in high-intensity conflicts and they will have to control substantial firepower in order to be effective against military targets. Using autonomous weapons amongst civilians, even if they control only nonlethal weapons, does not seem right to me.

In your book you also put the focus on the historical development of automated weapons. Where do you see the new dimension in modern unmanned systems as opposed to, for example, intelligent munitions like the cruise missile or older teleoperated weapon systems like the “Goliath” tracked mine of the Second World War?

The differences between remotely controlled or purely automated systems and current teleoperated systems like Predator are huge. The initial challenge in the development of robotics was to make automatons mechanically work. Automatons were already built in ancient times, were considerably improved by the genius of Leonardo da Vinci, and were eventually perfected in the late 18th century. Automatons are extremely limited in what they can do and there were not many useful applications for them. Most of the time they were just used as toys or for entertainment. In terms of military application there was the development of the explosive “mine” that could trigger itself, which is nothing but a simple automaton. The torpedo and the “aerial torpedo” developed in the First World War are also simple automatons that were launched in a certain direction in the hope of destroying something valuable. In principle, the German V1 and V2 do not differ that much from earlier and more primitive automated weapons. With the discovery of electricity and the invention of radio it became possible to remotely control weapons, which is an improvement over purely automated weapons in so far as the human element in the weapons system could make the remote-controlled weapon more versatile and more intelligent. For sure, remote-controlled weapons were no great success during the Second World War and they were therefore largely overlooked by military historians. A main problem was that the operator had to be in proximity to the weapon and that it was very easy to make the weapon ineffective by cutting the communications link between operator and weapon. Now we have TV control, satellite links and wireless networks that allow an operator to have sufficient situational awareness without any need to be close to the remotely controlled weapon.
This works very well, for the moment at least, and this means that many armed forces are interested in acquiring teleoperated systems like Predator in greater numbers. The US already operates almost 200 of them. The UK operates two of the heavily armed Reaper version of the Predator and has several similar types under development. The German Bundeswehr is determined to acquire armed UAVs and is currently considering buying the Predator. Most of the more modern armed forces around the world are in the stage of introducing such weapons and, as pointed out before, the US already operates substantial numbers of them. The new dimension of Predator as opposed to the V1 or Goliath is that it combines the strengths of human intelligence with an effective way of operating the weapon without any need to have the operator in close proximity. Technologically speaking the Predator is not a major breakthrough, but militarily its success clearly indicates that there are roles in which “robotic” systems can be highly effective and can even exceed the performance of manned systems. The military was never very enthusiastic about using automated and remote-controlled systems, apart from mine warfare, mainly because it seemed like a very ineffective and costly way of attacking the enemy. Soldiers and manned platforms just perform much better. This conventional wisdom is now changing. The really big step would be the development of truly autonomous weapons that can make intelligent decisions by themselves and that do not require an operator in order to carry out their missions. Technology is clearly moving in that direction. For some roles, such as battlespace surveillance, an operator is no longer necessary. A different matter is of course the use of lethal force. Computers are not yet intelligent enough that we could feel confident about sending an armed robot over the hill and hoping that the robot will fight effectively on its own while obeying the conventions of war.
Certainly, there is a lot of progress in artificial intelligence research, but it will take a long time before autonomous robots can be really useful and effective under the political, legal and ethical constraints under which modern armed forces have to operate. Again, introducing autonomous weapons on a larger scale would require a record of success that proves the technology works and can be useful. Some cautious steps are being taken in that direction with the introduction of armed sentry robots, which guard borders and other closed-off areas. South Korea, for example, has introduced the Samsung Techwin SGR-1 stationary sentry robot, which can operate autonomously and controls lethal weapons. There are many similar systems being field tested and these will establish a record of performance. If they perform well enough, armed forces and police organizations will be tempted to use them in offensive roles or within cities. If that happened, it would have to be considered a major revolution or discontinuity in the history of warfare, and some might argue even in the history of mankind, as Manuel DeLanda has claimed.

Do you think that there is a need for international legislation concerning the development and deployment of unmanned systems? And what could a legal framework of regulations for unmanned systems look like?

The first reflex to a new kind of weapon is to simply outlaw it. The possible consequences of robotic warfare could be similarly serious as those caused by the invention of the nuclear bomb. At that time (especially in the 1940s and 1950s) many scientists and philosophers lobbied for the abolition of nuclear weapons. As it turned out, the emerging nuclear powers were not prepared to do so. The world came close to total nuclear war several times, but we have eventually managed to live with nuclear weapons, and there is reasonable hope that their numbers could be reduced to such an extent that nuclear war, if it should happen, would at least no longer threaten the survival of mankind. There are lots of lessons that can be learned from the history of nuclear weapons with respect to the rise of robotic warfare, which might have similar, if not greater, repercussions for warfare. I don’t think it is possible to effectively outlaw autonomous weapons completely. The promises of this technology are too great to be ignored by those nations capable of developing and using it. Like nuclear weapons, autonomous weapons might only indirectly affect the practice of war. Nations might decide to rely on robotic weapons for their defense. Many nations will stop having traditional air forces because they are expensive and the roles of manned aircraft can be taken over by land-based systems and unmanned systems. I would expect the roles of unmanned systems to be first and foremost defensive. One reason for this is that the technology is not available to make them smart enough for many offensive tasks. The other reason is that genuinely offensive roles for autonomous weapons may not be ethically acceptable. A big question will be how autonomous robotic systems should be allowed to become and how to measure or define this autonomy. Many existing weapons can be turned into robots and their autonomy could be substantially increased by some software update.
It might not be that difficult for armed forces to transition to a force structure that incorporates many robotic and automated systems. So it is quite likely that the numbers of unmanned systems will continue to grow and that they will replace lots of soldiers or take over many jobs that still require humans. At the same time, armed conflicts that are limited internal conflicts will continue to be fought primarily by humans. They will likely remain small-scale and low-tech. Interstate conflict, should it still occur, will continue to become ever more high-tech and potentially more destructive. Hopefully, politics will become more skilled at avoiding these conflicts. All of this has big consequences for the chances of regulating autonomous weapons and for the approaches that could be used. I think it would be most important to restrict autonomous weapons to purely defensive roles. They should only be used in situations and in circumstances where they are not likely to harm innocent civilians. As mentioned before, this makes them unsuitable for low-intensity conflicts. The second most important thing would be to restrict the proliferation of autonomous weapons. At the very least the technology should not become available to authoritarian regimes, which might use it against their own populations, or to nonstate actors such as terrorists or private military companies. Finally, efforts should be made to prevent the creation of superintelligent computers that control weapons or other important functions of society and to prevent “doomsday systems” that can automatically retaliate against any attack. These are still very hypothetical dangers, but it is probably not too soon to put regulatory measures in place, or at least not too soon to have a public and political debate on these dangers.

Nonproliferation of robotic technology to nonstate actors or authoritarian regimes, which I think is definitely an essential goal, might be possible for dedicated military systems but seems to be something that might not be easily achieved in general, as can already be seen from the use of unmanned systems by Hamas. In addition, the spread of robot technology in nonmilitary settings will certainly make components widely available commercially. How do you see the international community countering this threat?

Using a UAV for reconnaissance is not something really groundbreaking for Hamas, which is a large paramilitary organization with the necessary resources and political connections. Terrorists could have used remote-controlled model aircraft for terrorist attacks more than thirty years ago. Apparently the Red Army Faction wanted to kill the Bavarian politician Franz-Josef Strauß in 1977 with a model aircraft loaded with explosives. This is not a new idea. For sure the technology will become more widely available and maybe future terrorists will become more technically skilled. If somebody really wanted to use model aircraft in that way or to build a simple UAV that is controlled by a GPS signal, it can clearly be done. It is hard to say why terrorists have not used such technology before. Robotic terrorism is still a hypothetical threat rather than a real one. Once terrorists start using robotic devices for attacks it will certainly be possible to put effective countermeasures in place, such as radio jammers. There is a danger that some of the commercial robotic devices that are already on the market, or will be soon, could be converted into robotic weapons. Again that is possible, but terrorists would need to figure out effective ways of using such devices. Generally speaking, terrorists tend to be very conservative in their methods, and as long as their current methods and tactics “work” they have little reason to adopt new tactics that require more technical skill and more difficult logistics, unless those new tactics would be much more effective. I don’t think that is yet the case. At the same time, it would make sense for governments to require manufacturers of robotic devices to limit the autonomy and uses of these devices, so that they could not be converted easily into weapons. I think from a technical point of view that would be relatively easy to do.
National legislation would suffice and it would probably not require international agreements. Tackling the proliferation of military robotics technology to authoritarian regimes will be much more challenging. Cruise missile technology proliferated quickly in the 1990s, and more than 25 countries can now build cruise missiles. Countries like Russia, Ukraine, China, and Iran have proliferated cruise missile technology and there is little the West can do about it, as cruise missiles are not sufficiently covered by the Missile Technology Control Regime. What would be needed is something like a military robotics control regime, and hopefully enough countries would sign up for it.

A lot of people see the problems of discrimination and proportionality as the most pressing challenges concerning the deployment of unmanned systems. Which are the issues you think need to be tackled right now in the field of the law of armed conflict?

I think most pressing would be to define autonomous weapons under international law and to agree on permissible roles and functions for these weapons. What is a military robot or an “autonomous weapon” and under which circumstances should the armed forces be allowed to use them? It will be very difficult to get any international consensus on a definition, as there are different opinions on what a “robot” is or what constitutes “autonomy”. At the same time, for any kind of international arms control treaty to work it has to be possible to monitor compliance with the treaty. Otherwise the treaty becomes irrelevant. For example, the Biological and Toxin Weapons Convention of 1972 outlawed biological weapons and any offensive biological weapons research, but included no possibility of monitoring compliance through on-site inspections. As a result, the Soviet Union violated the treaty on a massive scale. If we want to constrain the uses and numbers of military robots effectively, we really need a definition that allows determining whether or not a nation is in compliance with these rules. If we say teleoperated systems like Predator are legal, while autonomous weapons that can select and attack targets by themselves would be illegal, there is a major problem with regard to arms control verification. Arms controllers would most likely need to look very closely at the weapons systems, including at the source code for their control systems, in order to determine the actual autonomy of the weapon. A weapon like Predator could theoretically be transformed from a teleoperated system to an autonomous system through a software upgrade. This might not result in any visible change on the outside. The problem is that no nation would be likely to give arms controllers access to secret military technology. So how can we monitor compliance? One possibility would be to set upper limits for all military robots of a certain size, no matter whether they are teleoperated or autonomous.
This might be the most promising way to go about restricting military robots. Then again, it really depends on how one defines military robots. Under many definitions of robots a cruise missile would be considered a robot, especially as cruise missiles could be equipped with a target recognition system and AI that allows the missile to select targets by itself. So there is a big question of how inclusive or exclusive a definition of “military robot” should be. If it is too inclusive there will never be an international consensus, as nations will find it difficult to agree on limiting or abolishing weapons they already have. If the definition is too exclusive, it will be very easy for nations to circumvent any treaty by developing robotic weapons that would not fall under this definition and would thus be exempted from an arms control treaty. Another way to go about arms control would be to avoid any broad definition of “military robot” or “autonomous weapon” and just address different types of robotic weapons in a whole series of different arms control agreements. For example, a treaty on armed unmanned aerial vehicles of a certain size, another treaty on armed unmanned land vehicles of a certain size, and so on. This will be even more difficult, or at least more time consuming, to negotiate, as different armed forces will have very different requirements and priorities with regard to acquiring and utilizing each of these unmanned systems categories. Once a workable approach is found in terms of definitions and classifications, it would be crucial to constrain military robots to primarily defensive roles such as guard duty in closed-off areas. Offensive robotic weapons such as Predator or cruise missiles that are currently teleoperated or programmed to attack a certain area/target, but that have the potential of becoming completely autonomous relatively soon, should be clearly limited in numbers, no matter whether they already have to be considered autonomous.
At the moment, this is not urgent as there are technological constraints with respect to the overall number of teleoperated systems that can be operated at a given time. In the medium to long-term these constraints could be overcome and it would be important to have an arms control treaty on upper limits for the numbers of offensive unmanned systems that the major military powers would be allowed to have.

Apart from the Missile Technology Control Regime, there seem to be no clear international regulations concerning the use of unmanned systems. What is the relevance of customary international law, like the Martens Clause, in this case?

Some academics take the position that “autonomous weapons” are already illegal under international law, even if they are not explicitly prohibited, as they go against the spirit of the conventions of war. For example, David Isenberg claims that there has to be a human in the loop in order for military robots to comply with customary international law. In other words, teleoperated weapons are OK, but autonomous weapons are illegal. This looks like a reasonable position to take, but again the devil is in the detail. What does it actually mean that a human is “in the loop” and how do we determine post facto that a human was in the loop? I already mentioned this problem with respect to arms control. It is also a problem for monitoring compliance with the jus in bello. As the number of unmanned systems grows, the ratio between teleoperators and unmanned systems will change, with fewer and fewer humans operating more and more robots at a time. This means most of the time these unmanned systems will make decisions by themselves and humans will only intervene when there are problems. So one can claim that humans remain in the loop, but in reality the role of humans would be reduced to supervision and management. Besides, there is a military tradition of using self-triggering mines, and autonomous weapons have many similarities with mines. Although anti-personnel land mines are outlawed, other types of mines such as sea mines or anti-vehicle mines are not. I think it is difficult to argue that autonomous weapons should be considered illegal weapons under customary international law. Nations have used remote-controlled and automated weapons in war before, and that was never considered to be a war crime in itself. The bigger issue than the question of the legality of the weapons themselves is their usage in specific circumstances. If a military robot is used for deliberately attacking civilians, it would clearly be a violation of the customs of war.
In this case it does not matter that the weapon used was a robot rather than an assault rifle in the hands of a soldier. Using robots to violate human rights and the conventions of war changes nothing with regard to the illegality of such practices. At the same time, using an autonomous weapon to attack targets that are not protected by the customs of war does not in itself seem to be illegal or to run counter to the conventions of war. Autonomous weapons would only be illegal if they were completely and inherently incapable of complying with the customs of war. Even then, the decision about the legality of autonomous weapons would be primarily a political decision rather than a legal one. For example, nuclear weapons are clearly indiscriminate and disproportionate in their effects. They should be considered illegal under customary international law, but we are still far away from outlawing nuclear weapons. The established nuclear powers remain determined to keep sizeable arsenals, and some states still seek to acquire them. One could argue that nuclear weapons are the one exception to the rule because of their tremendous destructive capability, which makes them ideal weapons for deterrence. Furthermore, despite the fact that nuclear weapons are not explicitly outlawed, there is a strong taboo on their use. Indeed, nuclear weapons have not been used since the Second World War. It is possible that in the long run autonomous weapons could go down a very similar path. The most technologically advanced states are developing autonomous weapons in order to deter potential adversaries, but a taboo against their actual use in war might develop. 
In military conflicts where the stakes remain relatively low, such as internal wars, a convention could develop not to use weapons with a high degree of autonomy, while keeping autonomous weapons ready for possible high-intensity conflicts against major military powers, which have fortunately become far less likely. This is, of course, just speculation.

Another aspect which has come up in the discussion of automated weapon systems is the locus of responsibility. Who is to be held responsible for whatever actions the weapon system takes? This may not be a big issue for teleoperated systems, but it becomes more significant the more humans are distanced from “the loop”.

Are we talking about legal or moral responsibility? I think there is a difference. The legal responsibility for the use of an autonomous weapon would still need to be defined. Armed forces would need to come up with clear regulations that define autonomous weapons and that restrict their usage. Furthermore, there would need to be clear safety standards for the design of autonomous weapons. The manufacturer would also have to specify the exact limitations of the weapon. The legal responsibility could then be shared between the military commander who made the decision to deploy an autonomous weapon on the battlefield and the manufacturer that built the weapon. If something goes wrong, one could check whether the commander adhered to the regulations when deploying the system and whether the system itself functioned in the way guaranteed by the manufacturer. Of course, the technology in autonomous weapons is very complex, and it will be technically challenging to make these weapons function in a very predictable fashion, which would be the key to any safety standard. If an autonomous weapon were not sufficiently reliable and predictable, it would be grossly negligent of a government to allow the deployment of such weapons in the first place. With respect to moral responsibility, the matter is much more complicated. It would be difficult for individuals to accept responsibility for actions that do not originate from themselves. There is a big danger that soldiers become morally “disengaged” and no longer feel guilty about the loss of life in war once robots decide whom to kill. As a result, more people could end up getting killed, which is a moral problem even if the people killed are perfectly legal targets under international law. The technology could affect our ability to feel compassion for our enemies. Killing has always been psychologically very difficult for the great majority of people, and it would be better if it stayed that way. 
One way to tackle the problem would be to give the robot itself a conscience. However, what is currently discussed as a robot conscience is little more than a system of rules. These rules may work well from an ethical perspective, or they may not. In any case, such a robot conscience is no substitute for human compassion and the ability to feel guilty about wrongdoing. We should be careful about taking that aspect of war away. In particular, there is the argument that bombers carrying nuclear weapons should continue to be manned, as humans will always be very reluctant to pull the trigger and will only do so in extreme circumstances. For a robot, pulling the trigger is no problem: it is just an algorithm that decides, and the robot will always remain ignorant of the moral consequences of that decision.

In addition to the common questions concerning autonomous unmanned systems and discrimination and proportionality you have also emphasized the problem of targeted killing. Indeed, the first weaponized UAVs have been used in exactly this type of operation, e.g. the killing of Abu Ali al-Harithi in Yemen in November 2002. How would you evaluate these operations from a legal perspective?

There are two aspects to targeted killings of terrorists. The first is that lethal military force is used against civilians in circumstances that cannot legally be defined as a military conflict or war. This is legally problematic no matter how targeted killings are carried out. In the past, Special Forces have been used for targeted killings of terrorists, so the Predator strikes are in this respect not something new. For example, there has been some debate on the legality of ambushes by the British SAS aimed at killing IRA terrorists. If there was an immediate threat posed by a terrorist, and if there was no other way of arresting the terrorist or of otherwise neutralising the threat, it is legitimate and legal to use lethal force. The police are allowed to use lethal force in such circumstances, and the military should be allowed to do the same. At the same time, one could question in specific cases whether lethal action was really necessary. Was there really no way to apprehend certain terrorists and bring them to justice? I seriously doubt that was always the case when lethal action was used against terrorists. This brings us to the second aspect of the question. I am concerned about using robotic weapons against terrorists mainly because it makes it so easy for the armed forces and intelligence services to kill particular individuals, who may or may not be guilty of serious crimes. “Terrorist” is itself a highly politicised term that has often been applied to oppositionists and dissenters out of political convenience. Besides, it is always difficult to evaluate the threat posed by an individual who may be a “member” of a terrorist organization or may have contacts with “terrorists”. 
If we define terrorism as war requiring a military response, and if we use robotic weapons to kill terrorists rather than apprehend them, we could see the emergence of a new type of warfare based on the assassination of key individuals. Something like that was tried by the CIA during the Vietnam War: the Phoenix Program. The aim was to identify the Vietcong political infrastructure and take it out through arrest or lethal force. In this context 20,000 South Vietnamese were killed. Robotic warfare could take such an approach to a completely new level, especially if such assassinations could be carried out covertly, for example through weaponized microrobots or highly precise lasers. This would be an extremely worrying future scenario, and the West should stop using targeted killings as an approach to counterterrorism.

Where do you see the main challenges concerning unmanned systems in the foreseeable future?

I think the main challenges will be ethical rather than technological or political. Technology advances at such a rapid pace that it is difficult to keep up with the many developments in the fields relevant to military robotics. It is extremely difficult to predict what will be possible ten or twenty years from now. There will always be surprises, in terms of breakthroughs that did not happen and breakthroughs that did. The best prediction is that technological progress will not stop and that many technological systems in place today will be replaced by much more capable ones in the future. Looking at what has been achieved in military robotics in the last ten years alone gives considerable confidence that the military robots of the future will be much more capable than today's. Politics is much slower to respond to rapid technological progress, and national armed forces have always tended to resist change. Breaking with traditions and embracing something as revolutionary as robotics will take many years. On the other hand, military robotics is a revolution that has already been 30 years in the making. Sooner or later politics will push for this revolution to happen. Societies will get used to automation, and they will get used to the idea of autonomous weapons. Considering the speed with which modern societies became accustomed to mobile phones and the Internet, they will surely become accustomed to robotic devices in their everyday lives just as quickly. It will take some time for the general public to accept the emerging practice of robotic warfare, but it will happen. A completely different matter is the ethical side of military robotics. There are no easy answers, and it is not even likely that we will find them any time soon. The problem is that technology and politics will most likely outpace the development of an ethics of robotic warfare, or of automation in general. For me that is a big concern. 
I would hope that more public and academic debate will result in practical ethical solutions to the very complex ethical problem of robotic warfare.

Thursday, December 17, 2009

In another story today, NBC is reporting that more than 7 drones were used to kill up to 17 people (numbers vary) in the North Waziristan region of Pakistan during two different drone attacks. One of these attacks reportedly involved at least five drones and ten missiles. Click here for the news report.

One of the bigger news stories today is the report that militants in Iraq used off-the-shelf software to intercept and view the live video feed from a Predator drone. Presumably this would help them evade a drone attack. U.S. officials are trying to play down the story and emphasize that the militants did not take control of any drone or interfere with their flight or mission. The Wall Street Journal broke the story in an article today titled, Insurgents Hack U.S. Drone: $26 Software Is Used to Breach Key Weapons in Iraq; Iranian Backing Suspected.

The drone intercepts mark the emergence of a shadow cyber war within the U.S.-led conflicts overseas. They also point to a potentially serious vulnerability in Washington's growing network of unmanned drones, which have become the American weapon of choice in both Afghanistan and Pakistan. . . .

U.S. military personnel in Iraq discovered the problem late last year when they apprehended a Shiite militant whose laptop contained files of intercepted drone video feeds. In July, the U.S. military found pirated drone video feeds on other militant laptops, leading some officials to conclude that militant groups trained and funded by Iran were regularly intercepting feeds.

The U.S. has apparently responded to these findings by attempting to add encryption to video feeds from drones. However, readily available encryption systems may not be compatible with the proprietary technology used by General Atomics Aeronautical Systems Inc. for communication between the drone and those remotely controlling the aircraft. Furthermore, encryption may slow down the sharing of time-sensitive information.
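The underlying problem is simply that an unencrypted downlink is readable by anyone with a receiver. The sketch below is a toy illustration of the symmetric-encryption fix (NOT real cryptography; a fielded system would use a vetted cipher such as AES): frames XORed with a keystream derived from a shared key are opaque to an interceptor who lacks the key. All names and the keystream construction are assumptions made for illustration.

```python
# Toy stream-cipher sketch: shows why a shared-key encrypted downlink
# defeats passive interception. Do NOT use this construction for real.
import hashlib

def keystream(key, nonce, length):
    """Derive a pseudo-random byte stream from key + nonce via SHA-256."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_frame(frame, key, nonce):
    """XOR a frame with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, nonce, len(frame))
    return bytes(a ^ b for a, b in zip(frame, ks))

frame = b"VIDEO FEED FRAME 0042"
key, nonce = b"shared-secret", b"frame-0042"
ciphertext = xor_frame(frame, key, nonce)      # what an interceptor sees
recovered = xor_frame(ciphertext, key, nonce)  # the ground station decrypts
print(recovered == frame, ciphertext != frame)  # True True
```

The per-frame key derivation and XOR also hint at the cost the article mentions: every frame now requires extra computation and key management on both ends of a time-sensitive link.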

Wednesday, December 16, 2009

Kenneth Anderson, a law professor at Washington College of Law, American University, commented on the NYTimes Magazine's Guilty Robots article at the Opinio Juris website:

Although I am strongly in favor of the kinds of research programs that Professor Arkin is undertaking, I think the ethical and legal issues, whether the categorical rules or the proportionality rules, of warfare involve questions that humans have not managed to answer at the conceptual level. Proportionality and what it means when seeking to weigh up radically incommensurable goods - military necessity and harm to civilians, for example - to start with. One reason I am excited by Professor Arkin’s attempts to perform these functions in machine terms, however, is that the detailed, step by step, project forces us to think through difficult conceptual issues regarding human ethics at the granular level that we might otherwise skip over with some quick assumptions. Programming does not allow one to do that quite so easily.

You know a subject has come of age when it is featured among the 'year in ideas' issue of the NYTimes Magazine, which comes out each December. Under the title, Guilty Robots, Dara Kerr writes:

[I]magine robots that obey injunctions like Immanuel Kant’s categorical imperative — acting rationally and with a sense of moral duty. This July, the roboticist Ronald Arkin of Georgia Tech finished a three-year project with the U.S. Army designing prototype software for autonomous ethical robots. He maintains that in limited situations, like countersniper operations or storming buildings, the software will actually allow robots to outperform humans from an ethical perspective. “I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time and thus be able to make better informed decisions,” he says. The software consists of what Arkin calls “ethical architecture,” which is based on international laws of war and rules of engagement. The robots' behavior is literally governed by these laws. For example, in one hypothetical situation, a robot aims at enemy soldiers, but then doesn't fire–because the soldiers are attending a funeral in a cemetery and fighting would violate international law. But being an ethical robot involves more than just following rules. These machines will also have something akin to emotions–in particular, guilt. After considering several moral emotions like remorse, compassion and shame, Arkin decided to focus on modeling guilt because it can be used to condemn specific behavior and generate constructive change. While fighting, his robots assess battlefield damage and then use algorithms to calculate the appropriate level of guilt. If the damage includes noncombatant casualties or harm to civilian property, for instance, their guilt level increases. As the level grows, the robots may choose weapons with less risk of collateral damage or may refuse to fight altogether.
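The excerpt describes rule-following plus an accumulating guilt variable that progressively restricts behavior. A minimal sketch of that idea follows; every name, weight, and threshold here is an invented assumption for illustration, not Arkin's actual software.

```python
# Hypothetical sketch of a guilt accumulator in the spirit of Arkin's
# "ethical architecture": collateral damage raises guilt, and rising
# guilt narrows the set of permitted actions.

def assess_guilt(guilt, noncombatant_casualties, property_damage):
    """Increase guilt in proportion to estimated collateral damage."""
    return guilt + 5.0 * noncombatant_casualties + 1.0 * property_damage

def select_action(guilt):
    """As guilt grows, restrict the robot to lower-risk options."""
    if guilt >= 100:
        return "refuse-to-engage"
    if guilt >= 50:
        return "low-collateral-weapon-only"
    return "full-weapon-set"

guilt = 0.0
# Three successive battlefield-damage assessments (casualties, damage):
for casualties, damage in [(0, 2), (3, 10), (10, 40)]:
    guilt = assess_guilt(guilt, casualties, damage)
print(select_action(guilt))  # accumulated guilt 117.0 -> "refuse-to-engage"
```

The design point Arkin's description suggests, and the sketch mimics, is that guilt is cumulative and one-directional during an engagement: it ratchets behavior toward restraint rather than being reset after each decision.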

The software creates a virtual "shell" around the robot's arm, allowing it to avoid obstacles in combination with machine vision. Bosscher says the company has applied for a patent and may try to market the collision avoidance program to businesses.

"European researchers have developed a new approach to artificial intelligence that could empower computers to respond intelligently to human behaviour as well as commands." Michael Conroy writes about, Popeye, the robot with brains not brawn, on the wired.co.uk website.

"The originality of our project was our attempt to integrate two different sensory modalities, namely sound and vision," project coordinator Radu Horaud explains to Science Daily. Their robot, named Popeye, was built to work out which voices are "relevant" amongst a cacophony of noise by combining video input and image recognition technology with sound analysis. "It is not that easy to decide what is foreground and what is background using sound alone, but by combining the two modalities – sound and vision – it becomes much easier," Horaud continues. "If you are able to locate ten sound sources in ten different directions, but if in one of these directions you see a face, then you can much more easily concentrate on that sound and throw out the other ones."

It may not sound like much of an achievement, but using multiple "senses" to infer meaning – rather than simply throwing more computational power and new algorithms at the problem – is a fundamentally different approach to artificial intelligence.
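Horaud's example, locating ten sound sources but keeping only the one where a face is seen, amounts to gating one sensory channel with another. A minimal sketch of that fusion step, with all names and the angular tolerance as illustrative assumptions rather than the Popeye project's actual algorithm:

```python
# Toy audio-visual fusion: discard sound-source directions that do not
# coincide (within a tolerance) with a visually detected face.

def fuse(sound_directions, face_directions, tolerance=10.0):
    """Keep only sound sources lying near a detected face (angles in degrees)."""
    relevant = []
    for s in sound_directions:
        if any(abs(s - f) <= tolerance for f in face_directions):
            relevant.append(s)
    return relevant

# Ten competing sound sources, but a face is visible at roughly 42 degrees:
sounds = [-170, -120, -80, -30, 0, 40, 75, 110, 150, 175]
faces = [42]
print(fuse(sounds, faces))  # only the source near the face survives: [40]
```

The point of the "fundamentally different approach" claim is visible even in this toy: the disambiguation comes from cross-checking modalities, not from throwing more computation at the audio channel alone.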

Sunday, December 6, 2009

AP article considers how product liability laws may apply to autonomous robots, and quotes Ron Arkin & George Bekey on the need for some sort of ethical guidance for robots. Unfortunately the article leaves the impression that Asimov's laws are the right starting point.

By BROOKE DONALD, Associated Press Writer – Sat Dec 5, 10:27 am ET

PALO ALTO, Calif. – Eric Horvitz illustrates the potential dilemmas of living with robots by telling the story of how he once got stuck in an elevator at Stanford Hospital with a droid the size of a washing machine.

"I remembered thinking, `Whoa, this is scary,' as it whirled around, almost knocking me down," the Microsoft researcher recalled. "Then, I thought, `What if I were a patient?' There could be big issues here."

We're still far from the sci-fi dream of having robots whirring about and catering to our every need. But little by little, we'll be sharing more of our space with robots in the next decade, as prices drop and new technology creates specialized machines that clean up spilled milk or even provide comfort for an elderly parent.

Now scientists and legal scholars are exploring the likely effects. What happens if a robot crushes your foot, chases your cat off a ledge or smacks your baby? While experts don't expect a band of Terminators to attack or a "2001: A Space Odyssey" computer that takes control, even simpler, benign robots will have legal, social and ethical consequences.

"As we rely more and more on automated systems, we have to think of the implications. It is part of being a responsible scientist," Horvitz said.

Horvitz assembled a team of scientists this year when he was president of the Association for the Advancement of Artificial Intelligence and asked them to explore the future of human-robot interactions. A report on their discussions is due next year.

Saturday, December 5, 2009

K21st-Essential 21st Century Knowledge site has an article that outlines Monica Anderson's concept of Artificial Intuitions.

Most humans have not been taught logical thinking, but most humans are still intelligent. Contrary to the majority view, it is implausible that the brain should be based on Logic; I believe intelligence emerges from millions of nested micro-intuitions, and that Artificial Intelligence requires Artificial Intuition. Intuition is surprisingly easy to implement in computers.

Thanks to Walter J. Freeman for bringing this article to our attention. In an email, Scott Brown notes "that this notion of brains as 'prediction machines' is also the basis of Jeff Hawkins's theory of cognition in his book On Intelligence." For a Wikipedia article on Hawkins's theory go here.

Existing sensors, such as those based on simple pressure switches and motor resistance, are limited in their ability to detect subtle changes in pressure and to distinguish between different textures. A key reason for this is that the electrical components and wires they are made from tend to be inflexible.

Building in a lot of sensors will give a robot additional useful information about what it is touching and handling. However, placing large numbers of traditional sensors close together increases the potential for electromagnetic interference.

To get around these obstacles, Jeroen Missinne and colleagues at Ghent University in Belgium have developed a flexible "skin" containing optical sensors.

The skin consists of two layers of parallel polymer strips lying perpendicular to each other to form a grid. These are separated by a thin sheet of plastic. Light is constantly fed into the polymer strips, which act like optical fibres in that their geometry encourages internal reflection and reduces light loss.

When pressure is applied anywhere on the skin it causes the strips to be pushed closer together and allows light to escape from one set into the other. The detection of this leakage of light provides a highly sensitive feedback mechanism.
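Because the strips form a row/column grid, a press can be localized from which row and which column leak light. A minimal sketch of that readout logic (the layout, baseline, and threshold are illustrative assumptions, not details of Missinne's design):

```python
# Toy readout for a crossed-strip optical skin: pressure couples light out
# of the strips at the pressed crossing, so it appears as light loss on
# one row strip and one column strip.

def locate_press(row_light, col_light, baseline=1.0, threshold=0.1):
    """Return (row, col) crossings whose strips show significant light loss."""
    rows = [i for i, v in enumerate(row_light) if baseline - v > threshold]
    cols = [j for j, v in enumerate(col_light) if baseline - v > threshold]
    return [(i, j) for i in rows for j in cols]

# Light levels measured at the end of each strip (1.0 = no loss):
row_light = [1.0, 0.7, 1.0, 1.0]   # row 1 leaks light
col_light = [1.0, 1.0, 0.6, 1.0]   # column 2 leaks light
print(locate_press(row_light, col_light))  # pressed at crossing (1, 2)
```

Note the scaling advantage this grid geometry shares with the real device: an N-by-N array of touch points needs only 2N readouts, one per strip, rather than one wire per sensing point.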

Thursday, December 3, 2009

The New York Times has another story on the CIA's drone offensive in Pakistan. The article titled, C.I.A. Is Expanding Drone Assaults Inside Pakistan, estimates that more than 400 enemy fighters have been killed by drones. Estimates of the civilian deaths range from a low of 20 to a high of around 250.

One of Washington’s worst-kept secrets, the drone program is quietly hailed by counterterrorism officials as a resounding success, eliminating key terrorists and throwing their operations into disarray. But despite close cooperation from Pakistani intelligence, the program has generated public anger in Pakistan, and some counterinsurgency experts wonder whether it does more harm than good.

Assessments of the drone campaign have relied largely on sketchy reports in the Pakistani press, and some have estimated several hundred civilian casualties. Saying that such numbers are wrong, one government official agreed to speak about the program on the condition of anonymity. About 80 missile attacks from drones in less than two years have killed “more than 400” enemy fighters, the official said, offering a number lower than most estimates but in the same range. His account of collateral damage, however, was strikingly lower than many unofficial counts: “We believe the number of civilian casualties is just over 20, and those were people who were either at the side of major terrorists or were at facilities used by terrorists.”

Wednesday, November 25, 2009

Arthur C. Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.” But if science fiction has taught us anything, it’s that any sufficiently advanced technology will inevitably rise up to enslave us. So if you want to get ready for the day when your Roomba declares that maybe it’s time for you to start crawling around on the floor sucking up dust, it might be a good idea to evaluate the Republican and Democratic approaches to this problem.

Monday, November 23, 2009

Swarm Bots Evolve Communications Skills and Deceit is an article by Aaron Saenz over at the Singularity Hub. Saenz provides an update on research with S-bots, swarming bots developed by EPFL in Lausanne, Switzerland. The article also contains three videos showing the bots avoiding poison and swarming around food, 'evolving' effective communication to join in a shared task, and jointly dragging a young child across the room (see below).

Gianmarco Veruggio and Fiorella Operto of the Scuola di Robotica (Genova) were interviewed by Gerhard Dabringer. The full interview is available here.

GIANMARCO VERUGGIO: Roboethics is not the “Ethics of Robots”, nor any “ethical chip” in the hardware, nor any “ethical behavior” in the software; it is the human ethics of the robots' designers, manufacturers and users. In my definition, “Roboethics is an applied ethics whose objective is to develop scientific – cultural – technical tools that can be shared by different social groups and beliefs. These tools aim to promote and encourage the development of Robotics for the advancement of human society and individuals, and to help prevent its misuse against humankind.” Actually, in the context of the so-called Robotics ELS studies (Ethical, Legal, and Societal issues of Robotics) there are already two schools. One, let us call it “Robot-Ethics”, studies technical security and safety procedures to be implemented in robots, to make them as safe as possible for humans and the planet. Roboethics, on the other hand, which is my position, is concerned with the global ethical studies in Robotics, and it is a human ethics.

FIORELLA OPERTO: Roboethics is an applied ethics that refers to studies and work done in the field of Science&Ethics (Science Studies, S&TS, Science Technology and Public Policy, Professional Applied Ethics), and its main premises are derived from these studies. In fact, Roboethics was not born without parents; it derives its principles from the global guidelines of universally adopted applied ethics. This is the reason a relatively substantial part is devoted to this matter before discussing Roboethics' sensitive areas specifically. Many of the issues of Roboethics are already covered by applied ethics such as Computer Ethics or Bioethics. For instance, problems arising in Roboethics – dependability; technological addiction; the digital divide; the preservation of human identity and integrity; the application of precautionary principles; economic and social discrimination; artificial-system autonomy and accountability; responsibility for (possibly unintended) warfare applications; the nature and impact of human-machine cognitive and affective bonds on individuals and society – have already been matters of investigation in Computer Ethics and Bioethics.

"I worry that in the absence of some good, up-front thought about the question of liability, we'll have some high-profile cases that will turn the public against robots or chill innovation and make it less likely for engineers to go into the field and less likely for capital to flow in the area," said M. Ryan Calo, a residential fellow at the Law School's Center for Internet and Society.

And the consequence of a flood of lawsuits, he said, is that the United States will fall behind other countries – like Japan and South Korea – that are also at the forefront of personal robot technology, a field that some analysts expect to exceed $5 billion in annual sales by 2015.

"We're going to need to think about how to immunize manufacturers from lawsuits in appropriate circumstances," Calo said, adding that defense contractors are usually shielded from liability when the robots and machines they make for the military accidentally injure a soldier.

"If we don't do that, we're going to move too slowly in development," Calo said. "When something goes wrong, people are going to go after the deep pockets of the manufacturer."

Scientists at Intel's research lab in Pittsburgh are working to find ways to read and harness human brain waves so they can be used to operate computers, television sets and cell phones. The brain waves would be harnessed with Intel-developed sensors implanted in people's brains.

The scientists say the plan is not a scene from a sci-fi movie -- Big Brother won't be planting chips in your brain against your will. Researchers expect that consumers will want the freedom they will gain by using the implant.

"I think human beings are remarkable adaptive," said Andrew Chien, vice president of research and director of future technologies research at Intel Labs. "If you told people 20 years ago that they would be carrying computers all the time, they would have said, 'I don't want that. I don't need that.' Now you can't get them to stop [carrying devices]. There are a lot of things that have to be done first but I think [implanting chips into human brains] is well within the scope of possibility."

NewScientist has a story and video on, Medibots: The world's smallest surgeons. Among the technologies discussed is the 20-millimetre HeartLander with "rear foot-pads with suckers on the bottom, which allow it to inch along like a caterpillar."

The HeartLander has several possible uses. It can be fitted with a needle attachment to take tissue samples, for example, or used to inject stem cells or gene therapies directly into heart muscle. There are several such agents in development, designed to promote the regrowth of muscle or blood vessels after a heart attack. The team is testing the device on pigs and has so far shown it can crawl over a beating heart to inject a marker dye at a target site (Innovations, vol 1, p 227).

Another use would be to deliver pacemaker electrodes for a procedure called cardiac resynchronisation therapy, when the heart needs help in coordinating its rhythm.

European researchers have developed the first semantic search platform that integrates text, video and audio. "The system can 'watch' films, 'listen' to audio and 'read' text to find relevant responses to semantic search terms." The MESH project "represents an emerging paradigm shift in search technology" according to an article in ScienceDaily titled, Listen, Watch, Read: Computers Search for Meaning.

Right now, text in computing is defined by a series of numbers, most commonly the Unicode standard. Each number signifies a particular letter, and computers can scan these codes very quickly. But when you enter a search term, the machine has no idea what those letters signify. It simply looks for the pattern – it has no inkling of the concept behind the pattern.

But in semantic search, every bit of information is defined by potentially dozens of meaningful concepts. When a copywriter invoices for his or her work, for example, the date could be defined in terms of calendar, invoice, billing period, and so on. All these definitions for one piece of information are called 'metadata', or information about information.

Collections of agreed metadata terms for a particular field or task, like medicine or accounting, are called ontologies.

So the computer not only searches for the term, it searches for related metadata that defines types of information in specific ways. In reality, the computer still does not 'understand' a concept in its semantic search -- it continues to look for patterns of letters. But because the concepts behind the search terms are included, it can return results based on concepts as well as text patterns.
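The invoice/date example above can be sketched as code: documents carry concept tags, a small "ontology" relates concepts to terms, and a query matches either the literal pattern or a related concept. The mini-ontology and documents below are invented for illustration, not the MESH project's actual platform.

```python
# Toy concept-aware search: a hit can come from the text pattern itself
# or from metadata concepts related to the query term.

ONTOLOGY = {"invoice": {"billing", "date", "payment"},
            "calendar": {"date", "schedule"}}

DOCS = [("doc1", "Invoice for copywriting, due 2009-12-01", {"invoice"}),
        ("doc2", "Team meeting moved to Friday", {"calendar"}),
        ("doc3", "Holiday photos from Spain", set())]

def semantic_search(term):
    """Return documents matching the literal term OR a related concept."""
    hits = []
    for name, text, concepts in DOCS:
        related = {c for c in concepts if term in ONTOLOGY.get(c, set())}
        if term in text.lower() or related:
            hits.append(name)
    return hits

print(semantic_search("date"))  # ['doc1', 'doc2']: found via metadata, not text
```

As the article says, the machine still only matches patterns; the "understanding" lives entirely in the metadata that humans (or upstream analysis) attached to each document.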

“Each neuron in the network is a faithful reproduction of what we now know about neurons,” [Jim Old] says. This in itself is an enormous step forward for neuroscience, but it also allows neuroscientists to do what they have not previously been able to do: rapidly test their own hypotheses on an accurate replica of the brain.

While the introduction of the simulator indicates that computer scientists are on track to build a simulator with the synaptic capacity of the human brain by 2019, it also suggests drawbacks in this approach for building supercomputers with human-level intelligence.

A major problem is power consumption. Dawn is one of the most powerful and power-efficient supercomputers in the world, but it takes 500 seconds for it to simulate 5 seconds of brain activity, and it consumes 1.4 MW. Extrapolating from today’s technology trends, IBM projects that the 2019 human-scale simulation, running in real time, would require a dedicated nuclear power plant.
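The extrapolation is simple arithmetic worth making explicit. The 500-seconds-per-5-seconds figure means a 100x slowdown, so running the same simulation in real time at the same efficiency would naively multiply the power draw by 100; scaling further to a full human brain multiplies it again. The brain-scale ratio used below is an assumed round number for illustration, not IBM's figure.

```python
# Back-of-the-envelope check on the power-consumption claim above.

slowdown = 500 / 5           # 500 s of compute per 5 s of brain activity = 100x
dawn_power_mw = 1.4          # Dawn's draw during the simulation, in megawatts

# Real time at the same efficiency naively needs the work done 100x faster:
realtime_mw = dawn_power_mw * slowdown
print(realtime_mw)           # 140.0 MW, for a still sub-human-scale simulation

# Assume (hypothetically) a further ~20x gap between the simulated network
# and full human scale; the naive estimate lands in the output range of a
# dedicated power plant, absent major efficiency gains:
human_scale_mw = realtime_mw * 20
print(human_scale_mw)        # 2800.0 MW
```

The numbers are crude, but they show why the projection points at a dedicated power plant rather than a bigger machine room, and why the bottleneck for this approach is energy efficiency, not raw transistor count.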

ABSTRACT: Future homes will be populated with large numbers of robots with diverse functionalities, ranging from chore robots to eldercare robots to entertainment robots. While household robots will offer numerous benefits, they also have the potential to introduce new security and privacy vulnerabilities into the home. Our research consists of three parts. First, to serve as a foundation for our study, we experimentally analyze three of today's household robots for security and privacy vulnerabilities: the WowWee Rovio, the Erector Spykee, and the WowWee RoboSapien V2. Second, we synthesize the results of our experimental analyses and identify key lessons and challenges for securing future household robots. Finally, we use our experiments and lessons learned to construct a set of design questions aimed at facilitating the future development of household robots that are secure and preserve their users' privacy.

The Roombas in a real-life version of Pac-Man are not dangerous. We read about the Roomba Pac-Man in a Nov 11th post at Robots.Net.

The Research and Engineering Center for Unmanned Vehicles (RECUV) at the University of Colorado at Boulder has been developing software that helps robots form ad-hoc networks and distribute cooperative control of their operations. Some of the individuals at RECUV decided to create a cool demo on their own time to show off what their software can do. They've implemented a real-life version of Pac-Man using Roombas. They are quick to point out that despite the fact that the Blinky, Inky, Clyde, and Pinky Roombas seem determined to kill the Pac-Man Roomba, all the robots are actually quite safe. This is because, they say, all are "instilled with the Three Laws of Roombotics".

By developing algorithms for integrating both auditory and visual input, Popeye, a robot built by a team of European researchers, was able to effectively identify a "speaker with a fair degree of reliability." ICT Results reports on this research in an article titled, Robotic perception on purpose.

“This was very difficult to do, because you are integrating two completely different physical phenomena,” he adds.

Vision works from the reflection of light waves from an object, and it allows the observer to infer certain properties, like size, shape, density and texture. But with sound you are interested in locating the direction of the source, and trying to identify the type of sound it is.
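One common way to combine the two modalities is to weight each visually detected candidate by how closely its bearing agrees with the direction of arrival estimated from the microphones. The sketch below is only an illustration of that general idea, not the Popeye team's actual algorithm; all numbers and the Gaussian weighting are assumptions.

```python
import math

# Hypothetical candidates detected by vision: (bearing in degrees, visual "person" score).
candidates = [(-30.0, 0.9), (10.0, 0.8), (45.0, 0.6)]

audio_bearing = 12.0  # direction of arrival estimated from the microphones, in degrees

def fuse(visual_score, bearing, audio_bearing, sigma=15.0):
    """Weight the visual score by how well the bearing matches the audio DOA."""
    audio_likelihood = math.exp(-((bearing - audio_bearing) ** 2) / (2 * sigma ** 2))
    return visual_score * audio_likelihood

speaker = max(candidates, key=lambda c: fuse(c[1], c[0], audio_bearing))
print(f"Most likely speaker at bearing {speaker[0]:.0f} degrees")
```

Here the strongest visual candidate (score 0.9) loses to a weaker one whose position agrees with the sound source, which is precisely the benefit of integrating the two phenomena.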

The drone program, for all its tactical successes, has stirred deep ethical concerns. Michael Walzer, a political philosopher and the author of the book “Just and Unjust Wars,” says that he is unsettled by the notion of an intelligence agency wielding such lethal power in secret. “Under what code does the C.I.A. operate?” he asks. “I don’t know. The military operates under a legal code, and it has judicial mechanisms.” He said of the C.I.A.’s drone program, “There should be a limited, finite group of people who are targets, and that list should be publicly defensible and available. Instead, it’s not being publicly defended. People are being killed, and we generally require some public justification when we go about killing people.”

Since 2004, Philip Alston, an Australian human-rights lawyer who has served as the United Nations Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, has repeatedly tried, but failed, to get a response to basic questions about the C.I.A.’s program—first from the Bush Administration, and now from Obama’s. When he asked, in formal correspondence, for the C.I.A.’s legal justifications for targeted killings, he says, “they blew me off.” . . . Alston describes the C.I.A. program as operating in “an accountability void,” adding, “It’s a lot like the torture issue. You start by saying we’ll just go after the handful of 9/11 masterminds. But, once you’ve put the regimen for waterboarding and other techniques in place, you use it much more indiscriminately. It becomes standard operating procedure. It becomes all too easy. Planners start saying, ‘Let’s use drones in a broader context.’ Once you use targeting less stringently, it can become indiscriminate.”

Tuesday, November 3, 2009

We belatedly noticed the publication of Artificial Beings: Moral Conscience, Awareness and Consciousness by Jacques Pitrat, from John Wiley & Sons.

It is almost universally agreed that consciousness and possession of a conscience are essential characteristics of human intelligence. While some believe it to be impossible to create artificial beings possessing these traits, and conclude that the ultimate goal of Artificial Intelligence is hopeless, this book demonstrates that not only is it possible to create entities with capabilities in both areas, but that they demonstrate them in ways different from our own, thereby showing a new kind of consciousness. This latter characteristic affords such entities performance beyond the reach of humans, not for lack of intelligence, but because human intelligence depends on networks of neurons which impose processing restrictions which do not apply to computers.

At the beginning of the investigation of the creation of an artificial being, the main goal was not to study the possibility of whether a conscious machine would possess a conscience. However, experimental data indicate that many characteristics implemented to improve efficiency in such systems are linked to these capacities. This implies that when they are present it is because they are essential to the desired performance improvement. Moreover, since the goal is not to imitate human behavior, some of these structural characteristics are different from those displayed by the neurons of the human brain, suggesting that we are at the threshold of a new scientific field, artificial cognition, which formalizes methods for giving cognitive capabilities to artificial entities through the full use of the computational power of machines.

Sunday, November 1, 2009

A recent collection of articles titled Ethics and Robotics, edited by Rafael Capurro and Michael Nagenborg, has been published by IOS Press. Among the contributors to this volume are Peter Asaro, Patrick Lin, George A. Bekey, and Keith Abney.

Thinking ethically about robots means no less than asking ourselves who we are… Ethics and robotics are two academic disciplines, one dealing with the moral norms and values underlying, implicitly or explicitly, human behavior and the other aiming at the production of artificial agents, mostly as physical devices, with some degree of autonomy based on rules and programmes set up by their creators. Robotics is also one of the research fields where the convergence of nanotechnology, biotechnology, information technology and cognitive science is currently taking place, with large societal and legal implications beyond traditional industrial applications.

Robots are and will remain, in the foreseeable future, dependent on human ethical scrutiny as well as on the moral and legal responsibility of humans. Human-robot interaction raises serious ethical questions right now that are theoretically less ambitious, but practically more important, than the possibility of the creation of moral machines that would be more than machines with an ethical code. The ethical perspective addressed in this volume is therefore the one we humans have when interacting with robots. Topics include the ethical challenges of healthcare and warfare applications of robotics, as well as fundamental questions concerning the moral dimension of human-robot interaction, including epistemological, ontological and psychoanalytic issues. It also deals with the intercultural dialogue between Western and non-Western as well as between European and US-American ethicists.

Saturday, October 31, 2009

An article in The Times titled, Can you give a drone a conscience? discusses not only plans by the British and US defence forces for unmanned aerial vehicles (UAVs) but also cites the formation of the International Committee for Robot Arms Control (ICRAC).

However, it is ethical issues arising from hunter-killer UAVs now in development, and not the use of Reapers, that are under discussion. Christopher Coker, Professor in International Relations at the London School of Economics and Political Science, will ask delegates at the RUSI conference: “Can you give a drone a conscience?”

The debate about if and when a UAV can be given an artificial conscience that will allow it to operate autonomously in theatre and discriminate, for example, between combatants and civilian targets has intensified since the launch last month of ICRAC, formed by Professor Sharkey with three academics from Australian, German and US universities. “With UAVs like the Reaper, there is still, at the moment, a human being in the loop, who will decide when, why and whom to kill,” Professor Sharkey says. “I know that British sorties make every effort to ensure the risk of collateral damage is minimised. However, UAVs now in development are creeping towards autonomy.”

Teaching a robot to recharge itself is just the first step in a long journey towards autonomy. Eventually robots will need to diagnose damage to their hardware or software, and either repair themselves or travel to a maintenance facility. Full automation will also require robots that are built, installed, and perhaps even designed by other machines. All along this path of development, robots will require progressively more complex sensors and reasoning capabilities. Hopefully other robotics engineers will take Marvin’s use of ROS as proof of the benefits that open source software and compatible hardware can have when trying to focus on sensing and programming. Working together, robot designers can better equip robots to work by themselves.
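The self-recharging behaviour described above can be thought of as a small state machine: work until the battery runs low, seek the dock, charge, and resume. The sketch below is a toy illustration with invented states and thresholds, not code from Marvin or from ROS itself.

```python
# A toy state machine for the self-recharge behaviour described above.
# States and the threshold are illustrative, not taken from any real robot.

LOW_BATTERY = 0.2  # fraction of full charge that triggers dock-seeking

def next_state(state, battery, at_dock):
    """Advance the recharge cycle given the current battery level and position."""
    if state == "WORKING" and battery < LOW_BATTERY:
        return "SEEKING_DOCK"
    if state == "SEEKING_DOCK" and at_dock:
        return "CHARGING"
    if state == "CHARGING" and battery >= 1.0:
        return "WORKING"
    return state

# Walk through one low-battery cycle.
state = "WORKING"
state = next_state(state, battery=0.15, at_dock=False)  # -> SEEKING_DOCK
state = next_state(state, battery=0.14, at_dock=True)   # -> CHARGING
state = next_state(state, battery=1.0,  at_dock=True)   # -> WORKING
print(state)
```

The later milestones mentioned above, such as self-diagnosis and self-repair, amount to adding states and transitions to a machine like this, driven by progressively richer sensing.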