Not What We Bargained For: The Cyber Problem

The New America Foundation hosted its launch for an interdisciplinary cybersecurity initiative. I was fortunate enough to be asked to attend and speak, but the real benefit was that I was afforded an opportunity to listen to some really remarkable people in the cyber community discuss cybersecurity, law, and war. I listened to a few very interesting comments. For instance, Assistant Attorney General John Carlin claimed that "we" (i.e. the United States) have "solved the attribution problem," and the National Security Agency Director and Cyber Command (CYBERCOM) Commander, Admiral Mike Rogers, said that he will never act outside of the bounds of law in his two roles. These statements got me thinking about war, cyberspace and international relations (IR).

In particular, IR scholars have tended to argue over the definitions of "cyberwar," and whether and to what extent we ought to view this new technology as a "game-changer" (Clarke and Knake 2010; Rid 2011; Stone 2011; Gartzke 2013; Kello 2013; Valeriano and Maness 2015). Liff (2012), for instance, argues that cyber power is not a "new absolute weapon," and it is instead beholden to the same rationale of the bargaining model of war. Of course, the problem for Liff is that the "absolute weapon" he utilizes as a foil for cyber weapons/war is not equivalent in any sense, as the "absolute weapon," according to Brodie, is the nuclear weapon and so has a different and unique bargaining logic unto itself (Schelling 1977). Conventional weapons follow a different logic (George and Smoke 1974).

One might object here and claim that the nature of the weapon does not matter, as it is the game and its frame that are important. But this is exactly where the game breaks down for cyber and for IR theorists. The classic bargaining model assumes two rational actors (usually states), where the sending state issues a public demand to the target state, usually due to some disagreement where negotiation and diplomacy are insufficient to resolve the dispute. Thus the first step of the bargaining model of war presupposes that there are two actors already publicly discussing some issue or good. Yet in cyber this is not the case. There is no discussion. If there is some good or issue in "dispute," it is more than likely some overarching foreign policy goal or objective and has little to do with a specific ultimatum. In fact, there is no ultimatum, and we do not (contrary to Mr. Carlin) know who our interlocutors are. We are off to a poor start, then.

The first step in the game tree for a bargaining model is for the sending state to issue the public ultimatum. The target can either accept or reject. If the target rejects, then the game goes to a second round and escalates. Here is the rub, though: if the causes of the kerfuffle are nowhere to be seen (Junio 2013), and the target is not aware of any problem and was never issued an ultimatum in the first round, then any subsequent step in the model is moot (and frankly impossible). There is no bargain.
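
To make the structure concrete, here is a minimal sketch of that game tree in Python. The payoff numbers are invented purely for illustration, and the first branch makes the point above explicit: without a public demand, the model's opening move never happens and the rest of the tree is unreachable.

    # A toy version of the classic bargaining model of war described above.
    # All payoff values are illustrative assumptions, not empirical estimates.

    def bargaining_game(demand_issued, target_accepts,
                        value_of_good=10.0, cost_of_war=6.0):
        """Walk the simple game tree: public ultimatum -> accept/reject -> war.

        Returns (sender_payoff, target_payoff).
        """
        if not demand_issued:
            # The cyber case: no public ultimatum is ever issued, so the
            # game tree is never entered and there is nothing to bargain over.
            raise ValueError("No public demand: the bargaining model does not apply")
        if target_accepts:
            # The target concedes the good; both sides avoid the costs of war.
            return value_of_good, -value_of_good
        # Rejection escalates to war; both sides now also pay war's costs.
        return value_of_good - cost_of_war, -value_of_good - cost_of_war

    # A completed round: demand issued, rejected, escalation to war.
    print(bargaining_game(demand_issued=True, target_accepts=False))  # (4.0, -16.0)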

Empirically, moreover, cyber attacks have not yet risen to the level of a use of force tantamount to an armed attack. In other words, the few cases we have of cyber attacks either cause physical damage (Stuxnet 2010; Saudi Aramco 2012) or widespread functionality issues (Georgia 2008; Estonia 2007; Sony 2014?), and they have not come on the heels of some sort of classic bargaining model. Georgia in 2008 is the only attack to have been perpetrated during an armed conflict, though Russia denies any involvement. What we have, then, is not a bargaining model of war, where war is the most costly tool in the statesperson's toolbox. Rather, we have a strategic interaction, where one side calculates what its payoff will be if it attacks below the threshold for war. The key to a strategic interaction is not what the target actually does in response, but what the attacker calculates the target is likely to do.
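
A rough way to see the difference is to write the attacker's calculation down. In the sketch below (with invented numbers and hypothetical parameter names), the expected payoff of a below-threshold attack depends entirely on the attacker's estimate of how the target is likely to respond, not on any bargain:

    # Illustrative expected-payoff calculation for a below-threshold attack.
    # Probabilities, costs, and gains are invented for exposition only.

    def expected_attack_payoff(gain, p_escalation, escalation_cost, punitive_cost):
        """Attacker's expected payoff for an attack below the threshold of war.

        gain            -- value of the data stolen or the disruption caused
        p_escalation    -- attacker's estimate that the target escalates militarily
        escalation_cost -- attacker's loss if the target does escalate
        punitive_cost   -- expected cost of weak, nonescalatory responses
                           (sanctions, indictments, naming and shaming)
        """
        return (gain
                - p_escalation * escalation_cost
                - (1 - p_escalation) * punitive_cost)

    # If escalation seems very unlikely and sanctions are mild, attacking
    # stays profitable -- the logic behind a continual barrage of such attacks.
    print(expected_attack_payoff(gain=5.0, p_escalation=0.02,
                                 escalation_cost=50.0, punitive_cost=1.0))  # 3.02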

For cyber "war" then, what we see is a continual barrage of below the threshold attacks (what I refer to as "sub limina attacks") undertaken on the assumption that the target will calculate that it is not worth responding in an escalatory manner. Due to this calculation, escalation does not occur. What does appear to happen, however, is some sort of public response that is nonescalatory and weakly punitive in nature. There may be tit-for-tatting covertly, but the publicly acknowledged responses are either to ignore or to respond in a nonmilitary way. A case in point is President Obama's use of the term "cyber vandalism" to refer to the Sony hack, and his "proportionate" response as imposing more economic sanctions on the Kim regime.

The public posturing of the Obama administration is thus very telling as to how it views, and would like to view, cyber weapons and cyber "war." First, cyber weapons are not akin to nuclear weapons. Cyber weapons have the potential to discriminate between combatants and noncombatants. Moreover, they do not (presently) risk destroying the world in which we live. In fact, treating them as anything more muddies our conceptual frameworks. Second, the norms emerging for the use of cyber weapons and the response to cyber attacks are proving to be nonescalatory and risk averse. Indeed, the very labeling of an attack as a crime and not a use of force signals to the target and to the rest of the international community that there is cyber restraint (Valeriano and Maness, 2014), and I would add that this restraint is intentional because states are trying to forge norms to govern the use of coercive cyber force.

Where does this leave us? Well, if Assistant Attorney General Carlin is right, and "we have solved the attribution problem," then leaders can bargain in private or name and shame in public. If, however, this statement is merely a ploy to deter would-be cyber attackers from hacking the US, then we are still in need of IR scholars to do some novel and creative work on how states can pursue foreign policy objectives in a coercive relationship where there are no public demands. If NSA/CYBERCOM Director Admiral Mike Rogers is correct, and he will not act outside of the bounds of law, then it is also imperative for the US and the international community to start making laws governing activities in cyberspace. The present strategy of slow norm development has not stopped, and will not stop, the militarization of the Internet and the proliferation of new and much more frightening cyber weapons. The bargaining model will not help us in either situation.

*Note: This blog first appeared on The Duck of Minerva.

The "Right" to Be Forgotten & Digital Leviathans

We hear about cyber attacks that steal data, such as credit card numbers, social security numbers, names, incomes, or addresses. We hear about attacks that steal intellectual property, from movies to plans for the F-35 Joint Strike Fighter. Indeed, we face a continual onslaught, not only from the cyber criminals but from the media as well. One of the lesser-reported issues in the US, however, has been a different discussion about data and rights protection: the right to be forgotten.

Last year, the European Court of Justice ruled in Google vs. Costeja that European citizens have the right, under certain circumstances, to request that search engines like Google remove links that contain personal information about them. The Court held that in instances where data is "inaccurate, inadequate, irrelevant or excessive," individuals may request that the information be erased and delinked from the search engines. This "right to be forgotten" is intended to support and complement an individual's privacy rights. It is not absolute, but must be balanced "against other fundamental rights, such as freedom of expression and of the media" (paragraph 85 of the ruling). Mr. Costeja asked that a 1998 article in a Spanish newspaper be delinked from his name, for the article contained information pertaining to an auction of his foreclosed home. Mr. Costeja had subsequently paid the debt, and so, on these grounds, the Court ruled that the link to his information was no longer relevant. The ruling did not require that information regarding Mr. Costeja be erased, or that the newspaper article be eliminated, merely that the search engine not make this particular information "ubiquitous." The idea is that in an age of instantaneous and ubiquitous information about private details, individuals have a right to try to balance their personal privacy against other rights, such as freedom of speech.

In a more recent case, Dan Shefet, a Danish lawyer, asked Google to delink defamatory material pertaining to him from Google's French site. Shefet's case, however, differed from Costeja's in that his request was ultimately to delink the material from the global search engine, and not merely the links associated with Google.fr. The idea was that the right to be forgotten would be undermined if information pertaining to someone could still be easily accessed through other national search engines (e.g. google.de, google.com, google.ch). Mr. Shefet won. The French judge ruled that a parent company can be held liable for the actions of its subsidiary (a sort of Respondeat Superior argument).

Yesterday, the advisory council to Google on the "right to be forgotten" issued its report. The members of the advisory council, however, disagreed with the French judge's ruling. Their opinion was that while the Internet defies territoriality, the right to be forgotten is a European Union (EU) right, and what one individual may claim within the EU may not be a right held by individuals outside it. In effect, requesting that data be erased or delinked from other countries' search engines would undermine those countries' sovereign rights to determine what information is present in their societies. As Luciano Floridi, one of the leading experts on the council, explains: "my place, my rules, but your place, your rules. How could one explain to Brazilians that some legally published information online should no longer be indexed in a Brazilian search engine because the European Court of Justice has ruled so?" The council thus advises that right-to-be-forgotten requests for erasure and delinking be limited to the EU.

The "newness" of the right to be forgotten is not, however, that new. In effect, the idea is about the right to a good reputation (i.e. laws against libel or defamation of character), and framing information in such a way that it unnecessarily impinges on one's right to privacy. I am uncertain if anything here is entirely new in concept; the only thing truly new is that we are able to access a wealth of information from a variety of sources instantaneously. We are creating new "data" about every mundane detail of our lives; moreover, through this generation of data we have not particularly cared about who owns that data or what they do with it.

What these two cases show, however, is that a balance was struck between reputation and privacy. In the case of Mr. Costeja, the balance was one where he had paid his debts and wanted to move on with his affairs, and the continual linkage of this piece of information was unnecessary. In the case of Mr. Shefet, the request was to take down defamatory material. Typically, the state would be the only locus required to adjudicate these disputes. However, we are facing a transnational issue where the actors are not merely individuals, but corporations and other states as well. Our international legal institutions are ill equipped to deal with this. Professor Floridi's notion of "my place, my rules" may make sense on some fronts, but not on others. For example, if I have a right, a right against you that I expect my government to enforce, but the enforcement of this right requires other sovereign states to cooperate, then that right is not really a "right." On Kant's reasoning, a right is "an authorization to use coercion," meaning that I can use coercive force to ensure that you uphold your duty and respect my right. To square various theoretical circles, the state is usually seen as the authoritative body to wield that coercion. But states cannot (lawfully) use coercive force against other states. Each is sovereign, and so none has sway over the affairs of the others. "My place, my rules" only works when I don't require another place to enforce my rules. In effect, what we have here is a right that is unenforceable, and so is not really a right. The right to be forgotten, then, means that one can only be forgotten in the EU. Yet we know that this is clearly not sufficient given the transnational and global reach of the Internet. If the right to be forgotten is really a right, then it is something on the order of what Kant identifies as a "right to a good reputation after one's death." It is a right that we have by virtue of our humanity, but attempting to make sense of it is a logical impossibility unless we create a digital Leviathan.

Autonomous or 'Semi' Autonomous Weapons? A Distinction Without Difference

I recently attended an event hosted by the Future of Life Institute. The purpose of the event was to think through the various aspects of the future of AI, from its economic impacts, to its technological abilities, to its legal implications. I was asked to present on autonomous weapons systems and what those systems portend for the future. The thinking was that an autonomous weapon is, after all, one run on some AI software platform, and if autonomous weapons systems continue on their current trajectory, we will see more complex software architectures and stronger AIs. Thus the capabilities created in AI will directly affect the capabilities of autonomous weapons, and vice versa. While I was there to inform this impressive gathering about autonomous warfare, these bright minds left me with more questions about the future of AI and weapons.

First, autonomous weapons are those that are capable of targeting and firing without intervention by a human operator. Presently there are no autonomous weapons systems fielded. However, there are a fair number of semi-autonomous weapons systems currently deployed, and this workshop on AI got me thinking more about the line between "full" and "semi." The reality, at least as I see it, is that we have been using the terms "fully autonomous" and "semi-autonomous" to describe whether all of the different operational functions on a weapons system are operating "autonomously" or only some of them are. Allow me to explain.

We have roughly four functions on a weapons system: trigger, targeting, navigation, and mobility. We might think of these functions like a menu that we can order from. Semi-autonomous weapons have at least one, though not all, of these functions operating autonomously. For instance, we might say that the Samsung SGR-1 has an "autonomous" targeting function (through heat and motion detectors), but is incapable of autonomous navigation, mobility or triggering, as it is a sentry-bot mounted on a defensive perimeter. Likewise, we would say that precision guided munitions are also semi-autonomous, for they have autonomous mobility, triggering, and in some cases navigation, while the targeting is done through a preselected set of coordinates or through "painting" a target with laser guidance.
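
One way to make the menu metaphor concrete is to treat the four functions as flags on a system and classify from there. This is only a sketch of my own framing, not any military standard:

    # A sketch of the four-function "menu" described above. The labels and
    # classification rule are my own illustration, not a military standard.

    from dataclasses import dataclass

    @dataclass
    class WeaponSystem:
        name: str
        trigger: bool     # fires without a human pulling the trigger
        targeting: bool   # selects targets without a human choosing them
        navigation: bool  # plots its own course
        mobility: bool    # moves under its own control

        def autonomy_label(self):
            functions = [self.trigger, self.targeting, self.navigation, self.mobility]
            if all(functions):
                return "fully autonomous"
            if any(functions):
                return "semi-autonomous"
            return "human-operated"

    # The SGR-1 as characterized above: autonomous targeting only.
    sgr1 = WeaponSystem("Samsung SGR-1", trigger=False, targeting=True,
                        navigation=False, mobility=False)
    print(sgr1.autonomy_label())  # semi-autonomous

Notice that by this simple rule, a fire-and-forget weapon with all four functions would already count as "fully autonomous," which is precisely where the trouble starts.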

Where we seem to get into deeper waters, though, is in the cases of "fire and forget" weapons, like the Israeli Harpy, the Raytheon Maverick anti-tank missile, or the Israeli Elbit Opher. While these systems are capable of autonomous navigation, mobility, triggering and, to some extent, targeting, they are still considered "semi-autonomous" because the target (i.e. a hostile radar emitter or the infrared image of a particular tank) was at some point preselected by a human. The software that guides these systems is relatively "stupid" from an AI perspective, as it is merely using sensor input and doing a representation and search on the targets it identifies. Indeed, even Lockheed Martin's LRASM (Long Range Anti-Ship Missile) appears to be in this ballpark, though it is more sophisticated because it can select its own target amongst a group of potentially valid targets (ships). The question has been raised whether this particular weapon slides from semi-autonomous to fully autonomous, for it is unclear how (or by whom) the decision is made.

The rub in the debate over autonomous weapons systems, and, from what I gather, some of the fear in the AI community, is the targeting software: how sophisticated that software needs to be to target accurately and, what is more, to target objects that are not immediately apparent as military in nature. Hostile radar emitters present few moral qualms, and when the image recognition software used to select a target relies on infrared images of tank tracks or ships' hulls, then the presumption is that these are "OK" targets from the beginning. I have two worries here. The first is that, from the "stupid" autonomous weapons side of things, military objects are not always permissible targets. Only by an object's purpose, location, use, and effective contribution can one begin to consider it a permissible target. If the target passes this hurdle, one must still determine whether attacking it provides a direct military advantage. Nothing in the current systems seems to take this requirement into account, and as I have argued elsewhere, future autonomous weapons systems would need to do so.

Second, from the perspective of the near-term "not-so-stupid" weapons, at what point would targeting human combatants come into the picture? We have AI presently capable of facial recognition with near-human accuracy (just upload an image to Facebook to find out). But more than this, current leading AI companies are showing that artificial intelligence is capable of learning at an impressively rapid rate. If this is so, then it is not far off to think that militaries will want some variant of this capacity on their weapons.

What, then, might the next generation of "semi" autonomous weapons look like, and how might those weapons change the focus of the debate? If I were a betting person, I'd say they will be capable of learning while deployed, will use a combination of facial recognition and image recognition software, as well as infrared and various radar sensors, and will have autonomous navigation and mobility. They will not be confined to the air domain, but will populate maritime environments and potentially ground environments as well. The question then becomes one not solely of the targeting software, as it would be dynamic and intelligent, but of the triggering algorithm. When could the autonomous weapon fire? If targeting and firing were time dependent, without the ability to "check in" with a human, or if there were just so many of these systems deployed that "checking in" were operationally infeasible due to bandwidth, security, and sheer manpower overload, how accurate would the systems have to be to be permitted to fire? 80%? 50%? 99%? How would one verify that the actions taken by the system were in fact in accordance with its "programming," assuming of course that the learning system doesn't learn that its programming is hamstringing its ability to carry out its mission objectives?
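
To see where the triggering question bites, consider a toy firing rule. Everything here is hypothetical -- the threshold, the parameter names, the deferral logic -- but it shows exactly where that unresolved accuracy number would sit:

    # A toy triggering rule for the scenario sketched above. The threshold
    # and parameter names are hypothetical, not drawn from any real system.

    def may_fire(target_confidence, human_available, threshold=0.99):
        """Return True if the triggering algorithm would release the weapon."""
        if human_available:
            # Defer to the human operator whenever a check-in is possible.
            return False
        # With no human in the loop, everything hangs on this one number.
        return target_confidence >= threshold

    # 97% confident, no human available: blocked at a 0.99 threshold, but
    # cleared at 0.95. Choosing between those numbers is the moral question.
    print(may_fire(target_confidence=0.97, human_available=False))  # False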

These pressing questions notwithstanding, would we still consider a system such as this "semi-autonomous"? In other words, the systems we have now are permitted to engage targets -- that is, target and trigger -- autonomously based on some preselected criteria. Would systems that utilize a "training data set" to learn from likewise be considered "semi-autonomous" because a human preselected the training data? Common sense would say "no," but so far militaries may say "yes." The US Department of Defense, for example, states that a "semi-autonomous" weapon system is one that "once activated, is intended only to engage individual targets or specific target groups that have been selected by a human operator" (DoD, 2012). Yet at what point would we say that "targets" are not selected by a human operator? Who is the operator? The software programmer with the training data set can be an "operator"; the lowly Airman likewise can be an "operator" if she is the one ordered to push a button; and so too can the Commander who orders her to push it (though the current DoD Directive makes a distinction between "commander" and "operator," which problematizes the notion of command responsibility even further). The only policy we have on autonomy does not define, much to my dismay, "operator." This leaves us in the uncomfortable position that the distinction between autonomous and semi-autonomous weapons is one without difference, and taken to the extreme it would mean that militaries need only claim their weapons system is "semi-autonomous," much to the chagrin of common sense.

* Note: This blog first appeared on The Duck of Minerva. Moreover, since this blog was first published, Elon Musk has publicly pledged $10 million in research support for the development of AI that is beneficial to humanity, including research into autonomous weapons.

Results of the UN CCW Meeting on Killer Robots

After four days of expert meetings, concomitant "side events" organized by the Campaign to Stop Killer Robots, and informal discussions in the halls of the UN, the conclusions are clear: lethal autonomous weapons systems deserve further international attention and continued action toward prohibition, and without regulation they may prove a "game changer" for the future waging of war.

While some may think this meeting on future weapons systems is the result of science fiction or scaremongering, the brute fact that the first multilateral meeting on this matter took place under the banner of the UN, and the CCW in particular, shows the importance, relevance and real danger of these weapons systems. Even more telling is the consensus that states are opposed to "fully autonomous weapons."

Now for the bad news: this meeting was a (crucial) first step, but many more steps will be required to gain an absolute and comprehensive ban on these systems. Moreover, as Nobel Peace laureate Jody Williams noted in her side event speech, the seeming consensus may be a strategic stalling tactic to assuage the worries of civil society and drag out or undermine the process. When pushed on the matter of lethal autonomous systems, there were sharp divides between proponents and detractors, and these divisions, not surprisingly, fell along lines of state power. Those who supported their creation, development and deployment came from a powerful and select few, and many of the experts citing their benefits were also affiliated in some way or another with those states. The narrative this tells, of course, is Thucydides all over again: the powerful do what they can and the weak suffer what they must.

There is hope, however, in the collective power and action of smaller and medium states, as well as in the collective voice of civil society. Indeed, invoking the Martens Clause as a potential legal justification to ban lethal autonomous systems implicitly and explicitly notes the power of public conscience. Many states and civil society delegates raised this potential avenue, thereby challenging some of the experts' opinions that the Martens Clause would be insufficient or inapposite as a source of law for a ban.

The meetings also surprised and pleased many by putting ethics on the table. Serious questions about the possibility of accountability, liability and responsibility arise from autonomous weapons systems, and such questions must be addressed before their creation or deployment. Paying homage to these moral complexities, states embraced the language of "meaningful human control" as an initial attempt to address these very issues. Any system must be under human control, but the level of control, and the likelihood of abuse or perverse outcomes, must be addressed now, not after the systems are deployed. Thus in the coming months and years, states, lawyers, civil society and academics will have their hands full trying to elucidate what "meaningful human control" entails, and how, once agreed upon, it can be verified when states undertake to use such systems.

From my perspective, as a representative of ICRAC, a speaker at one of the side events, and an academic studying these issues, the meetings gave me hope that we might be able to preemptively ban such terrifying and morally abhorrent weapons systems before they start killing, destroying, capturing, and maiming, or are used as tools to violate human rights. The excellent work must continue, and while this small victory tastes sweet today, we cannot let it satisfy our sensibilities and turn bitter. For the moral, legal and operational issues raised by lethal autonomous weapons push our way of thinking and our commitments to law and human rights to the brink. As Hannah Arendt once claimed, radical evil is the result not of evil intent, but of a very lack of intent (a lack of thinking). To delegate the decision to wage war and to kill to a machine is the highest example of Arendt's radical evil: for it means we have willingly accepted being unthinking and ignorant of the horror and atrocities that these systems will surely commit in our names.

Why 'Beating' a Russian Cyber Assault on Ukraine Is Trickier Than You Might Think

Jason Healey recently blogged about how the United States (U.S.) can "beat" a Russian cyber assault on the Ukraine. Today, we have reports of communications issues within the Ukraine. Specifically, there are reports that Russian troops are using jamming equipment to disrupt communication between Ukrainian forces, and that Russian troops allegedly cut Internet cables inside Crimea. As Shane Harris at Foreign Policy reports, there may also be signs of cyber attacks, though nothing is yet confirmed. What we do see is evidence of Russia's plan to disrupt, degrade and deny communications to Ukraine. This is not surprising given that, at the moment, Russia's activities amount to a bloodless invasion, and bloodless or not, an invasion of any country requires prepping the battlefield to your advantage. The first thing to do, therefore, is to cut as many of your enemy's lines of communication as possible.

While we are still watching events in Crimea closely, we should think about possible options to aid Ukraine. Given Healey's expertise in the cyber realm, perhaps we should listen to his advice. Unfortunately, Healey's message was mixed. First, he claims that low-level denial of service attacks ought to be overlooked. Others, though, that cause "major disruption to government services or critical infrastructure must be considered as crossing a line." He leaves us wondering, though, what types and degrees of attacks against which government services would constitute an act of war. Second, he claims that:

Some steps to help a nation facing a strategic assault require strong government action. The U.S. president, NATO secretary general and European leaders could call Putin to warn that they are not fooled by his use of nationalist proxies [to launch cyber assaults] and will hold him to account. Since warnings won't sway Putin, they should be backed with harder options. The U.S. Department of Defense could order its muscular Cyber Command to prepare to disrupt the attacks if asked to do so by Ukraine's government.

Here is the rub: if the U.S. decides to enter the conflict between Ukraine and Russia, it becomes a belligerent party to that conflict. In other words, the U.S. is no longer a neutral party and is subject to attack by Russia. We can think of it in simpler terms. For example, a much larger and stronger person is bullying your friend. The bully is threatening to take something that belongs to your friend. You have the option of staying out of it, or perhaps shaking a fist from afar, or even trying to dissuade the bully through discussion or threats. But once you decide to fight the bully on behalf of your friend, you open yourself to the bully attacking you too. That is the law of neutrality at work. Any state that assists one side in a conflict, aside from diplomatic efforts, is no longer a neutral party. Thus if the U.S. were to put "harder options" on the table, it would become a party to the Ukraine/Russia conflict.

Given that the U.S. is so far very reluctant to discuss overt military measures against Russia, and has moved to coerce Putin only diplomatically and economically, we might be hard pressed to see any "harder" options. While some might believe that putting U.S. Cyber Command into action is not a belligerent act, this is not so. It would be no different than the U.S. European Command marshaling troops to send to Ukraine. For if we believe that the use of cyber weapons against another state constitutes an act of war, then using cyber weapons in defense of others would constitute an act violating neutrality. I am not certain that President Obama would be very comfortable painting a giant target on U.S. networks and infrastructure, no matter how unjust Russia's actions against the Ukraine are. Thus we should be wary of suggestions like Healey's.

The God of Google
Recently Google purchased eight leading robotics companies (Boston Dynamics, Bot and Dolly, Autofuss, Holomni, Redwood Robotics, Meka Robotics, Schaft, and Industrial Perception), all of which are involved in creating cutting-edge robotics. Though this event slid by rather silently, we should look at some of Google's other recent activities to try to gain an understanding of what the tech giant is planning. In particular, a few days ago reports emerged that Google also bought DeepMind, an artificial intelligence (AI) company, for $400 million. This move comes alongside the creation of Google's "Open Automotive Alliance," an alliance between the company and a group of car manufacturers who agree to use Google's Android operating system as a platform for apps in their respective cars. Some see this move as a potential avenue to foster deeper relationships with automakers seeking to create self-driving cars, a well-researched and well-funded pet project of Google's. This project required a cadre of experts in the fields of robotics, machine learning, and engineering so that the car will not merely navigate obstacles, but mimic the reaction times and agility of a human driver. However, those experts were in Google's employ before -- well before -- the acquisition of the robotics firms and DeepMind.

While Google's activities taken in isolation may not raise too many eyebrows, taken together they point to some sort of strategic vision of which we are currently unaware. This strategic vision, though, may require careful execution if it is to uphold the company's motto of "do no evil." Indeed, reports that part of the DeepMind deal was the creation of an "ethics board" point to some knowledge that this technology has wide-ranging and potentially dangerous consequences. However, creating an ethics board is one thing, and having one's motto be "do no evil" is another. For if Google is cornering the market on the creation of artificially intelligent, or at least learning, machines, we may want to press them on what types of ethical boundaries they want to impose on these future (or present?) creations.

Ethics is a messy, messy business. For one thing, ethics is an attempt to aid a person -- or perhaps now a machine -- in understanding the vast, and rather complex, world in which we live, and to provide us with action-guiding principles. Ethics is not merely for thinking, pondering, and isolated sorts; it is for moral agents living in and amongst other moral agents. (I say moral agent here because it is unclear if we should reserve the word "person" for humans anymore, or when that word may extend to artificially created beings and intelligences, given the trajectory of machine learning and Google's interest in exploring and funding it.) So ethics is an attempt at providing moral agents with action-guiding principles. What does this entail? Well, for instance, the question "what should I do?" presupposes that the context of the situation is at least relatively clear, that one understands the variety of available options, that one has a fairly robust understanding of the relevant moral rules or precepts, and that one is capable of making a reasoned judgment. That one might make the wrong judgment is also, always, a possibility.

Here is the other thing about ethics: there is no one agreed-upon universal conception of what is right and wrong. If Sally is a consequentialist, she believes that the consequences of her acts dictate the moral worth of her actions. If Bob is a deontologist, he believes that it is the motives, or the maxim, of his action, and not the expected effects, that give the act moral worth. These are, of course, gross oversimplifications, but there is something we can learn from boiling centuries of debate down to two sentences: the same act, in the same set of circumstances, with the same available options, may be immoral to one person and moral to another. What is more, depending upon whom you ask, there may be some situations where there is no wholly and completely correct moral answer to a problem. One might be faced with a moral dilemma, where no matter what one does, harm, wrong, or a "moral remainder" will fall somewhere.

What does this have to do with the God of Google? Well, if one's motto is "do no evil," it seems to me that such a position presupposes what evil is and how to avoid doing it. If we apply this position to Google's recent acquisitions of robotics and artificial intelligence companies, it seems to me that there is some sort of plan at work that we are unable to quite see yet. That plan may be one where the private corporation shoulders the responsibilities of guiding the creation of technology fraught with moral questions, problems and dilemmas. Indeed, it appears that if Google is going to spearhead the creation of artificially intelligent agents, then it sits in a position very similar to the role of the Judeo-Christian god, as it now has the choice to pursue the creation of an intelligence that may or may not flirt with the concept of free will (and thus the ability to do evil), or to try to circumscribe the actions of artificial agents to do no evil. Of course, all of this depends, still, on one's vision for the future and one's definition of evil.

Robots Lactate! Humanoid Robots Make Their Debut

I recently wrote a piece for the Washington Post's Monkey Cage, where I argued that considerable attention must be paid to the physical design of humanoid robots. I argued that those engineers and roboticists creating humanoid robots should be very careful in their decisions to impart physical characteristics that make the robots appear male or female, as they are ultimately gendering what is ostensibly a gender-neutral object. Gendering humanoid robots sends implicit, and sometimes explicit, messages beyond "this is a male robot" and "this is a female robot."

Such messages can range from "this is what an ideal soldier should look like," to "this is what an ideal female should look like," to "this is merely ideal." For while some, like Lauren Wilcox, might think that my arguments about the physical characteristics of such robots are nothing more than the feared and decried "essentialism," that is not in fact true. The jobs that these robots are being designed to undertake, and the names that they have been given, speak volumes about the more ambiguous gendered relationships between and within masculinity, femininity, technology and politics. In my piece for the Monkey Cage, I pointed out that two robots in particular, DARPA's Atlas and the U.S. Navy's SAFFiR, looked to be "male" due to their broad shoulders and v-shaped torsos. I also wondered whether any roboticist designing a "female" robot would give it one of the most widely seen female physical traits: breasts. Today, I have my answer.

NASA has just unveiled its humanoid robot, Valkyrie, for the upcoming DARPA Robotics Challenge. In two weeks, challengers will enter their robots into a contest to see if they can complete all of the requisite tests for a chance to win a $34 million award. DARPA's main motivation is to create ground "disaster response" robots able to "execute complex tasks in dangerous, degraded, human-engineered environments." This challenge is ultimately directed to "advance the key robotic technologies of supervised autonomy, mounted mobility, dismounted mobility, dexterity, strength, and platform endurance," as well as to make "ground robot software development [and ground robot systems development] more accessible, and lower software acquisition cost while increasing capability." By NASA's own lights, it designed Valkyrie to win this competition.

Valkyrie, as opposed to her DARPA counterpart, Atlas, has breasts. One might argue that those two convex shapes clearly visible in the upper region of its torso are not breasts, and thus that it is an "it" and not a "she," but this is wishful thinking. Indeed, even her name -- Valkyrie -- is that of one of a host of female goddesses in Norse mythology who decide which soldiers will live and which will die in battle. Moreover, as Nicholaus Radford of the NASA JSC Dextrous Robotics Lab said in a recent interview with IEEE Spectrum, "we really wanted to design the appearance of this robot to be one that when you see it, you'd say, 'Wow. That's awesome.'" Thus, breasts were on the minds of the makers.

Why? What do breasts have to do with designing a machine to execute complex tasks in dangerous and degraded environments? Breasts are designed to lactate. And, as the title of my previous piece points out: robots don't lactate. The physical embodiment of breasts on Valkyrie thus reifies what a "woman" should look like; however, if her namesake says anything about her potential wartime function, she is a subordinate helper, doing the bidding of her male master.

Lauren Wilcox has charged me with simplifying the issue of sex, gender and robotics, arguing that in reality "the ties between gender and sexed embodiment may be even more unstable than a lactating robot would suggest; such a future calls for a critique not of 'robot' bodies as if they are other than our own but a critique about the ways our cyborg bodies are being made and put to use." I whole-heartedly agree with Wilcox's conclusion, just not that I've oversimplified things. For we must begin with the first appearance of gendering and essentialism: the body. We must then take a critical stance, and ask about the purpose, the story and the values that this confluence of factors tells. In the case of Valkyrie, she is not a "ridiculous" example cited in support of serious questions about the role of humanoid robots in warfighting, but the first "serious" female robot produced for a serious task. That this task is deemed "disaster response" is just a cover -- NASA wants to use her on Mars, and it does not take much skepticism to think that her capabilities could be used on a battlefield. For, as her name suggests, she might well choose who lives and who dies in the future.

Debating Killer Robots

Last week, Georgia Institute of Technology's Center for Ethics and Technology held a debate between Ron Arkin and Robert Sparrow over the ethical challenges and benefits of lethal autonomous robots (LARs) or "killer robots." The debate comes amidst increasing attention to LARs, as the United Nations has recently agreed to discuss a potential ban on the weapons under the Convention on Conventional Weapons framework early next year. Moreover, we see more attention in major media outlets, like the Washington Post, the New York Times, Foreign Policy and here on Huffington Post as well. With NGOs such as Human Rights Watch and Article 36 also taking up the issue, as well as academics and policy makers, much more attention may be on the horizon.

My purpose here is to press on some of the claims made by Prof. Arkin in his debate with Prof. Sparrow. Arkin's work on formulating an "ethical governor" for LARs in combat has been one of the few attempts by academics to espouse the virtues of these weapons. In Arkin's terms, the ethical governor acts as a "muzzle" on the system, whereby any potentially unethical (or, better formulated, "illegal") action would be prohibited. Prof. Arkin believes that one would program the relevant rules of engagement and laws governing conflict into the machine, and thus any prohibited action would be impossible to take. Sparrow, a pioneer in the debate on LARs, as well as one of the founding members of the International Committee for Robot Arms Control (ICRAC), is vehemently skeptical about the benefits of such weapons.

Some of the major themes in the debate over LARs revolve around the issues of responsibility, legality, and prudential considerations, such as the proliferation of these weapons in the international system. Today, I will focus merely on the responsibility argument, as that was a major source of tension in the debate between Arkin and Sparrow. The responsibility argument runs something like this: since a lethal autonomous weapon either locates preassigned targets or chooses targets on its own, and then fires on those targets, there is no "human in the loop" should something go wrong. Indeed, since there is no human being making the decision to fire, if a LAR kills the wrong target, then there is no one to hold responsible for that act, because a LAR is not a moral agent that can be punished or held "responsible" in any meaningful way.

The counter, made by those like Arkin in his recent debate, is that there is always a human involved somewhere down the line, and thus it is the human who "tasks" the machine that would be held responsible for its actions. Indeed, Arkin stated in his comments that human soldiers are no different in this respect, and that militaries attempt to dehumanize and train soldiers into becoming unthinking automatons anyway. Thus, the moment a commander "tasks" a human soldier or a LAR with a mission, the commander is responsible. Arkin explicitly noted that "they [LARs] are agents that make decisions that human beings have told them to make," and that ultimately, if we are looking to "enforce" ethical action in a robot, then designers, producers and militaries are merely "enforcing [the] morality made by humans."

However, such a stance is highly misleading and flies in the face of commonsense thinking (as well as legal thinking) about responsibility in the conduct of hostilities. For instance, if a commander tasks Soldier B with a permissible mission, where Soldier B will have very little, if any, communication with the commander, and in the course of attempting to complete the mission Soldier B kills protected people (like noncombatants, i.e. those not partaking in hostilities), then we would NOT hold the commander responsible. We would hold Soldier B responsible. For during the execution of his orders, Soldier B took a variety of intervening decisions on how to complete his "task." It is only in the event of patently illegal orders that we hold commanders responsible under the doctrine of command responsibility.

Arkin might respond here that his "ethical governor" would preclude any actions like targeting of protected persons. For instance, Arkin discusses a "school bus detector" whereby any object that looks to be a school bus would be off-limits as a potential target, and so the machine could not fire upon that object. Problem solved, case closed. But is it?

Not by a long shot. Protected status in persons or things is not absolute. Indeed, places of worship, while normally protected, become legitimate targets if they are used for military purposes (like a sniper in the bell tower, or storing munitions inside). Thus programming a machine that would never fire on school buses only says to the adversary -- "hey! You should go hide in school buses!" It is the dynamic nature of war and conflict that is so hard to discern, and attempts at codifying this ambiguity are so highly complex that the only way to accomplish the task is to create an artificially intelligent machine. Otherwise, creating a machine that gives tactical and strategic advantage to the enemy, or in Arkin's words provides "mission erosion," is beyond a waste of money. Thus creating a machine that would not become a Trojan Horse requires that it be artificially intelligent and able to discern that the school bus is really a school bus being used for nonmilitary purposes -- a machine that, it appears, Arkin would be uncomfortable with in the field.
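
To illustrate the point, here is a minimal sketch (with hypothetical class names) of why a static protected-class filter like the "school bus detector" fails: protection under the law is conditional on use, so the rule cannot be a fixed list.

    # Why a static "school bus detector" fails: protected status is
    # conditional, not absolute. Class names are hypothetical stand-ins
    # for whatever labels an image classifier would emit.

    PROTECTED_CLASSES = {"school_bus", "place_of_worship", "hospital"}

    def naive_governor_permits(target_class):
        """The hard filter: never fire on anything in a protected class."""
        return target_class not in PROTECTED_CLASSES

    def lawful_target(target_class, used_for_military_purpose):
        """What the law actually requires: protection is lost through
        military use (the sniper in the bell tower)."""
        if target_class in PROTECTED_CLASSES and not used_for_military_purpose:
            return False
        return True

    # The static filter advertises exactly where the adversary should hide:
    print(naive_governor_permits("school_bus"))                         # always False
    print(lawful_target("school_bus", used_for_military_purpose=True))  # True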

The final argument in Arkin's arsenal is that if the machine, artificially intelligent or not, performs better at upholding the laws of war than human warfighters, then so be it. More lives saved equals more lives saved, period. Yet this misses a couple of key points. The first is that the data we have regarding atrocities committed by service men and women are data points from when things go wrong. We do not have data on when things go according to plan -- for that is considered a 'nonobservation.' Think in terms of deterrence: one cannot tell when deterrence is working, only when it is not. Thus saying that humans perform so poorly tells only part of a much larger tale, and one that, I'm not certain, requires robots as a solution to all of humanity's moral failings. The second is Sparrow's main point: using such machines seems profoundly disrespectful of the adversary's humanity. As Sparrow argues, using machines to kill distant others, where no human person takes even a moment to consider their demise, robs warfare of what little humanity it possesses.

Thus I hope that while we continue to think about why using robots in war is problematic, from moral, legal and prudential perspectives, we also continue to press on their touted "benefits."

The Devil Is in the Details: Where We Are With Syria
This all sounds well and good. In fact, many believe this is the way out of a disaster waiting to happen. However, the sad fact is that the devil is in the details, and I fear that nothing -- nothing -- will actually happen, and the Syrian people will continue to suffer and be brutalized. But to understand why I hold such a pessimistic view, we should return to the history of this conflict. The initial protests began on March 18, 2011, during the Arab Spring. Within days of the first protests, security forces began killing protesters. At first the pace of Assad's police forces was slow -- kill five protesters here, six there -- but the community's response to the killing was not to retreat in fear but to motivate more people to protest. Within weeks Assad dismissed a majority of his cabinet, claiming that the sought-after political reforms were soon to come, and thus that the violence and protests should end. Nothing changed, and Assad began playing a very shrewd game: he vacillated between promising reform and shooting civilians. Within one month the death toll rose to 500.

Skip ahead eight months. Now we see defections from the Syrian security forces, and the Arab League's first attempt at a peace plan is publicized and quickly shown to be a joke. The Arab League thus suspends Syria's membership, and in response Assad promises more concessions, only to backtrack and renege. Ultimately the Arab League monitors leave Syria due to "worsening conditions." Fast-forward another four months. The United Nations Secretary-General appoints a "special envoy" to negotiate a peace. This special envoy is Kofi Annan. Former Secretary-General Annan comes up with a Six-Point Peace Plan, only to see it dashed on the rocks by Assad's cat-and-mouse game and his paced escalations in violence. Several months later, Annan resigns, citing a "lack of follow through" by the international community and "finger pointing" in the United Nations Security Council (due to the use of the veto by Russia and China on a resolution authorizing the use of force to stop Assad). At the time of the appointment of the second special envoy, Lakhdar Brahimi, the death toll nears 60,000. Brahimi puts forth several more peace plans, only to see the negotiated ceasefires end within hours or days.

After the last of these peace plans (almost one year ago) the international community became rather silent -- until the chemical weapons attack of August 21, 2013. Almost one year prior (in July of 2012), the Obama administration uttered its now infamous "red line," claiming that if Assad used these weapons against his people, the United States would be forced to act. Now here we sit in September of 2013. Assad has clearly crossed the red line, as the recently released United Nations report suggests. Sarin gas was used against a civilian center; 1,400 people were brutally murdered, and amongst them many, too many, children. Obama, true to his word, threatened the use of force to punish Assad and deter any future use of chemical weapons. However, Lavrov's deft political maneuvers have thwarted much of the momentum (and what little support there was) for any strikes against Assad.

So where are we now? We are exactly where we were in April of 2011. That is to say, Assad's killing machine continues, Russia's backing of the regime continues, and any political settlement is going to be bogged down in diplomatic squabbles. Currently the U.S., Britain and France want to push forward a(nother) Security Council resolution to enforce the tentative agreement reached between Kerry and Lavrov this past weekend. However, President Putin has, from day one, stated that the only way Russia will agree to a diplomatic settlement is if the use of force is taken off the table. The U.S., however, maintains that it is only the threat of force that even brought Russia (and Syria) to the negotiating table. Thus we have come full circle. Assad has yet again made concessions that cost him nothing, and now the only difference is that he is relying on his powerful ally to do the dancing for him. Russia can require certain conditions, and if those are not met, then diplomacy fails (yet again). Assad has lost nothing. He maintains his chemical weapons stockpiles, his monopoly on munitions, planes, tanks and helicopters, and the world stands by to watch the slaughter continue. Assad is not a stupid man. Indeed, he seems to have played this round expertly. He is free to commit atrocities, destabilize the region, and then blame the outcome on the inflexibility or intransigence of the West. Well done, Mr. Assad, well done. I only hope that your people can muster the strength to do what we cannot: oust you and hold you to account for the countless atrocities you have committed against defenseless and innocent people.

The Syrian Reprisal
To begin, international law is not typically enforced by one state. That the Syrian regime used chemical weapons does not necessarily entail that the U.S. becomes the international judge, jury and executioner doling out punishments. Indeed, reprisals are subject to strict criteria. This potential "limited" air strike against the Assad regime would be exactly that -- a reprisal -- and it does not appear to satisfy the necessary conditions. That Secretary of State Kerry discussed the "heinousness" of the weapon and Assad's breaking of international law shows that it is this particular instance, and not the almost three-year war with over 100,000 dead, that is triggering the response.

First, a reprisal must be in response to a "prior serious violation of international humanitarian law [IHL], and only for the purpose of inducing the adversary to comply with the law." Surely we can cite many instances of Assad violating IHL, and this particular use of chemical weapons fits here. But this is where the issues become murky: 1) Is Obama's strategic objective to stop Assad from carrying out any additional chemical weapons attacks? If so, then there will need to be more than limited air strikes. There would need to be a lot more. In fact, Micah Zenko persuasively argues this today in the New York Times. Obama would need to take out Assad's ability to deliver any attacks. That would include launching platforms, and potentially Assad's air force, weapons caches, and command and control centers. What is more, if one really wants to remove the ability to use this weapon, one has to eliminate those who want to use it. The war aims grow. And 2) the purpose must be only to induce the adversary to obey the law. But would Assad ever (as he hasn't yet) obey the law? Chances are no.

Second, a reprisal must be a "measure of last resort." Some might say that the U.S. has tried diplomacy, and that there have been years of attempts at reining in Assad through nonmilitary means -- sanctions, suspensions from international leagues and bodies, and the freezing of assets. But is this really Obama's last resort? Or is it his first? If we frame the issue in terms of Assad's overall behavior, then we might make this justification. If, however, we frame the issue as a response to chemical weapons (the lawyer's way of justifying this particular reprisal), then it isn't the last resort, but the first.

Third, "reprisal action must be proportionate to the violation it aims to stop." However, what would be the measure of proportionality here? Lives lost? For if we restrict our case to that of this particular instance of chemical weapons use, then the damage delivered to enforce compliance would have to mirror that. If, however, Obama justifies the reprisal on the three-year conflict, then he has more wiggle room. However, if we do not take lives lost or property damage as the metric for proportionality, then justifications for regime change look more likely.

Fourth, the decision to resort to a reprisal must be made at "the highest level of government." This one is probably the easiest to meet. Finally, we have the condition of "termination": the "reprisal action must cease as soon as the adversary complies with the law." Yet how would this work? Unless the U.S. destroys Syria's capacity to use chemical weapons against its people, there is no guarantee that it would not do so again. Destroying this capacity, though, is not something that can be achieved with limited air strikes from offshore carriers or high-altitude bombing campaigns. Even if the U.S. managed it, there would need to be verification that all stockpiles were secured and the necessary delivery platforms destroyed.

The biggest fly in the ointment, however, is that the law governing reprisals is international humanitarian law. That is, it is the law governing hostilities. But the U.S. is not engaged in hostilities with Syria; the two states are not at war with each other. Does IHL govern the actions of the Syrian rebels and the Syrian government? Certainly. However, the U.S. is not a party to this dispute, and so it is justifying an act of law enforcement whose customary international legal requirements it could not possibly meet, because it has no standing. Any legal excuse that President Obama presents in the coming days is, therefore, fiction.

All in all, the recent maneuvering to rationalize a use of force against Syria on the basis of its recent use of chemical weapons is a terrible justification. Syria is not a party to the Chemical Weapons Convention, and so has broken no treaty law. If we justify a use of force based on customary international law, then we still face all the problems outlined above. And since reprisals typically have motives related to punishment and deterrence, we might want to ask whether Obama's actions would even have purchase. As Mary Ellen O'Connell recently noted, "the usefulness of a military strike to prevent future chemical weapons use is highly doubtful."

The upshot is not that the Obama administration ought to refrain from using force, but that the way in which it is going about (potentially) justifying such an action is problematic at best. While the use of chemical weapons is truly horrific, and we should all be outraged and sickened by Assad's atrocities, this is nothing new. Are the 100,000 people already killed -- by cluster munitions (also illegal), by the Shabiha (the civilian killing squads), and by the torturers and rapists -- not also victims of horrific acts? 100,000 people, mostly women and children, are already dead. Yet this was not enough to trigger an international response? As Stephen Walt said in the New York Times, "Dead is dead, no matter how it is done." Instead of looking for some phantom legal justification for a limited -- and probably inefficacious -- air strike against Assad, the Obama administration ought to be looking at ways to actually end the violence and killing. Indeed, if military force is on the table, then coming in, dropping bombs and leaving is not going to solve the problem. One must stop the use of chemical weapons, certainly, but one must also stop the killing, torturing and raping. One must make room for the millions of refugees to return to their homes, and one must seek to secure the millions of internally displaced persons. For this is not about a reprisal -- this is about a humanitarian intervention.

*Update: Since writing this piece, the Arab League has declined to support any limited strikes against Syria, and Iran is pledging to strike Israel if the U.S. goes ahead with any sort of military campaign. While the United Kingdom is presenting a resolution in the United Nations today to back "all means necessary" to protect civilians, there appears to be no regional support for such an endeavor, and Russia and China are likely to exercise their vetoes. The U.S. would therefore have to rely on a few allies to engage in such a reprisal, and if it does so, it will face many political, as well as tactical, problems.

A (Sort of) Red Line: Obama's Syrian Dilemma

In light of recent reports of the Syrian government's use of chemical weapons against its own civilian population, the United States faces a particularly difficult decision. President Obama has stated multiple times over the past two years that if Bashar al-Assad were to use chemical weapons against his own people, this act would constitute a "red line" drawing a required response from the U.S. (and perhaps its allies). Yet the Syrian government's reported use of chemical weapons is not isolated to this past Wednesday's horrific attack. Allegations about the use of such weapons have surfaced multiple times, and have led the U.S. government to pledge (though so far not deliver) military support to the rebels.

The question on everyone's mind at the moment is: What is going to be done? The answer, I'm afraid to predict, is: nothing new. There may be some finger wagging, there will certainly be moral outrage and condemnation, and even assessments by those like the UN Secretary General that the use of such weapons "violate[s] international humanitarian law." So what? Bashar al-Assad has slaughtered his people for years while the world has stood by and watched. He has already reportedly used chemical weapons on his people; this attack is therefore no different from any previous attack. The line was crossed already, and this particular event amounts not to the violation of some new principle but to a question of degree. In the history of this conflict, Assad has never faced a deterrent, nor has he suffered any consequences for his actions. Given these facts, one can only assume that the brutal dictator will continue to escalate his atrocities.

Thinking that international law will come to the rescue is naïve. International "law" has purchase only when (1) states willingly abide by its dictates or (2) other states, "coalitions of the willing," or international bodies (such as the United Nations) enforce that law. More often than not, such "enforcement" requires the arms and influence of powerful countries. If those countries are reluctant or unwilling to act, then nothing gets done (as we have seen many times before).

Yesterday, President Obama stated: "Sometimes what we've seen is that folks will call for immediate action, jumping into stuff that does not turn out well, [which] gets us mired in very difficult situations, [and] can result in us being drawn into very expensive, difficult, costly interventions that actually breed more resentment in the region." Nothing in this statement is untrue. But in terms of Obama's foreign policy, it is not a principled account of when or whether to engage in intervention for humanitarian purposes. In the case of Libya, Obama seized the opportunity early on, committed the requisite assets, and used a rather convenient loophole in U.S. law to employ force before requesting congressional approval. Such actions show that he is not, in principle, against foreign intervention. The problem with Syria is that the momentum for actually committing to such a large undertaking is not there, and U.S. allies are in no rush to send money, arms or people to fight against Assad. The U.S. faces resentment in the "region" no matter what it does. This is a brute fact about American power, ideology, and past behavior in relation to "the region." What is more, inaction threatens to breed just as much resentment among those populations asking for help. Indeed, the U.S. has already missed the boat when it comes to co-opting these individuals to its side.

To be sure, one will ask, "Well, what should we do then?" We face a couple of options, and none of them are particularly savory. The first is the most extreme: send in the support necessary to stop Assad from committing mass atrocities and war crimes. That would be a very large commitment. We are looking at committing almost every branch of the U.S. military to some sort of long-term operation. We would have to prepare to spend billions of dollars to support such an effort, and then more money... much more money... on post-conflict rebuilding. That the U.S. does not trust any one side of the conflict means that if it took out Assad, it would have to run the country. We don't like to say such things publicly, but it is true: the U.S. would have to make Syria a client state and rule it by proxy.

The second option is still pretty undesirable for many in Washington: commit limited assets and lots of money to the rebels. In this instance, we might see something like a more expensive Libyan venture. The U.S. would cover air defense as well as intelligence, surveillance and reconnaissance. This "no fly zone" approach would attempt to let the rebels advance and face Assad's forces without the fear of shelling. Unfortunately, no fly zones are still very expensive and can extend for years if one does not remove the leader or government in power. Just look to the "no fly zones" the U.S. enforced in northern Iraq beginning in the early 1990s. After the Gulf War, coalition forces established a no fly zone against Saddam Hussein to protect the Iraqi Kurds from retaliation and massacre; it lasted roughly from 1992 until the U.S. invasion in 2003. In the Syrian case, however, it is unclear where such zones would be -- more than likely over the entire country -- and financial or military support to the rebels does not mean that they would then succeed in overthrowing Assad. In fact, the probability is that the U.S. would end up spending more money and losing more people to a cause to which, although just, it is not entirely committed. And even if the rebels were successful, a large question mark hangs over the "post-conflict" phase.

Would the U.S. then be committed to rebuilding? How much? To what extent?

Finally, the most likely avenue is that the U.S. will pursue a combination of low intensity and low commitment measures. In other words, Obama's red line was a public commitment to "do something," but like any good lawyer he parsed his words carefully. The American people and the world at large assume that "something" means guns blazing, while Obama more likely meant "small arms, money and pressure." Publicly, the U.S. must be seen to uphold its commitments. But despite Senator McCain's statements, this does not entail sending in the entire Air Force or thousands of troops. If we have learned anything about Obama's foreign policy style over the years, it is that he is a cautious hawk. When the stakes are relatively high, the cost to the U.S. must be relatively low, and there must be a serious U.S. interest in the mix, to justify using force. If the stakes are moderately high, the cost must be quite low to justify force. If the stakes are low, then there must be virtually no cost at all. The Syrian situation, while morally abysmal, is a moderate to high stakes endeavor, and it is anything but low cost.

Syria's political ties to Iran, Russia's continued support of Assad, the factionalized rebel opposition, and the increasing presence of terrorist elements all mean that the U.S. would be walking into a political and tactical nightmare -- and that it would, in essence, be committing itself to rebuilding a country and a people from the ashes.

It is, as always, with a dose of reality that one understands, and in some way empathizes with, Obama's role as "the decider" (assuming, of course, that he would not have to seek congressional approval for any action in Syria). On the one hand, there is the moral outrage at doing nothing. On the other hand, there is the moral outrage of "what have you gotten us into!?" It is not an easy position.

Perhaps the answer for Obama is to return to Machiavelli. Look to the advice in The Prince.

"Never let any Government imagine that it can choose perfectly safe courses; rather let it expect to have to take very doubtful ones, because it is found in ordinary affairs that one never seeks to avoid one trouble without running into another; but prudence consists in knowing how to distinguish the character of troubles, and for choice to take the lesser evil."

This, of course, means that the choice between fighting for the rights and lives of others and abstaining, thereby letting the Syrian people suffer, must be parsed in consequentialist terms. The lesser evil is still evil all the same.

Foust's Liberal Case for Drones or Wishful Thinking?

Joshua Foust recently wrote a piece in Foreign Policy magazine titled "The Liberal Case for Drones." In it, he outlined why the "phantom" fear of autonomy in weapons is overblown and why we should pretty much just embrace the use of unmanned aerial vehicles and increasingly autonomous weapons. Citing the U.S. Navy's successful launch of the X-47B stealth unmanned aircraft as a portent of the future, then moving quickly to whether such portents are good or bad things, Foust eschews the debate about autonomy entirely and whitewashes the Pentagon's plans for creating and fielding autonomous weapons.

Aside from vacillating between claiming that increased autonomy is the future and claiming that complex autonomous weapons are not going to be developed (a blatant misreading of Directive 3000.09), Foust's entire argument falls flat. First, the experts who worry about increased autonomy in weapons systems worry about weapons that have the ability to target and fire without a human being's direction. For the most part, they are not concerned with weapons that involve a human operator or even most "fire and forget" weapons. Yet Foust's attempt to make a "liberal case" (whatever that means) for drones is to claim that they will be more discriminating than human soldiers when it comes to obeying the laws of war and protecting the lives of civilians. This is the common mantra: a machine isn't fatigued, it doesn't need bathroom breaks, and it isn't emotionally involved when it sees a fellow machine (or human) blown up by an adversary. All of the emotional failings are thus avoided, and the machine can act better than a human. This is why he concludes that "the concern [over autonomous lethal robots] seems rooted in a moral objection to the use of machines per se: that when a machine uses force, it is somehow more horrible, less legitimate, and less ethical than when a human uses force. It isn't a complaint fully grounded in how machines, computers, and robots actually function."

But that is not the moral objection. The moral objection, at least from this "expert," is the one he raises in the very next paragraph: responsibility. A machine that does not obey the laws of war and annihilates an entire village leaves us with a variety of questions about whom to hold responsible. If this were a human soldier, with all of his moral failings, we'd point the finger at him and prosecute him. We'd blame him. But how do you blame a machine? It is like blaming your toaster for burning you, and saying you want to hold your toaster accountable for battery. Sure, we can say that we could create new laws to deal with this situation, but those laws might threaten to undermine the existing laws regarding responsibility and liability for harm -- especially if we say that we've created an artificially intelligent agent capable of learning and acting in the world, capable of making life or death decisions, but not really bound by laws or norms or any of those "emotions" so pesky that they stop us, most of the time, from committing atrocious violations of law and morality. Thus Foust's case for drones falls apart; he gives the game away when he concedes that accountability for the actions of such weapons is "tricky." It is more than tricky; it is central to the entire notion of fighting war in any rule- or law-governed way.

How Automated Wars Rob Us Of Humanity
We are at a similar juncture with regard to a "lack of thinking." In our case, however, it concerns the delegation of thinking to a machine, and a lethal machine in particular. What I mean here is that militaries, and the U.S. military in particular, envision a future where weapons do the thinking -- that is, the planning, target selection and engagement. Already the U.S. military services field capabilities that enable weapons to seek out and queue targets, such as the F-35 Joint Strike Fighter and some targeting software platforms on tanks like the M1 Abrams, as well as systems that seek out targets and automatically engage them, like the Phalanx or Counter Rocket, Artillery and Mortar (C-RAM) systems.

The U.S. decision to rely on unmanned aerial vehicles, or "drones," attests to the appeal of fighting at a distance with automated technology. The drones currently in combat operations, such as the Predator and Reaper, show the ease with which killing by remote control can be accomplished. While drones are certainly problematic from a legal and moral standpoint with regard to targeted killings, human beings still ultimately control this type of technology. Human pilots are in the "cockpit," and for better (or worse) human beings are making the targeting decisions.

The worry, however, is that militaries are planning to push autonomy further than the F-35 Joint Strike Fighter (which is far more autonomous than the Predator or Reaper) toward "fully autonomous" weapons. Moreover, while we might try to push this worry aside and claim that such weapons are a long way off, or too futuristic, we cannot deny the middle term between now and "fully autonomous" weapons. In this middle term, the warfighter will become increasingly dependent upon such technologies to fight. Indeed, we already see this in "automation bias" (the over-reliance on information generated by an automated process as a replacement for vigilant information seeking and processing). With increased dependence on the technology, this automation bias will only grow, and it will lead not only to a degeneration of strategic thinking in the services but, as in the case of Eichmann, to a lack of thinking more generally.

The evil here is that, through the banality of autonomy, we risk not only creating a class of unthinking warfighters but also letting the entire business of making war become so removed from human judgment and critical thinking that it, too, becomes commonplace. In fact, it might become so banal, so removed from human agency, that even the word "war" starts to lose meaning. For what would we call a conflict where one side, or both, hands over the "thinking" to a machine, doesn't risk its soldiers' lives, and perhaps doesn't even place human beings outside its own borders to fight? "War" does not really seem to capture what is going on here.

The danger, of course, is that conflicts of this type might not only perpetuate asymmetric violence but also further erode the very foundations of humanity. In other words, if we are not careful about the increasing push towards autonomous weapons, we risk vitiating the thinking, judging and thus rational capacity of humanity. What was once merely automation bias becomes the banality of autonomy, and in an ironic twist, humans lose their own ability to be "autonomous."

The human warfighter is now the drone.

Reining in the Killer Robot? The DoD's Directive on Autonomous Weapons

The Department of Defense recently issued DoD Directive 3000.09, addressing autonomy in weapons systems. The Directive is a first cut at framing policy prescriptions and demarcating lines of responsibility for the (future) creation and use of semi-autonomous, "human supervised" autonomous, and fully autonomous weapons systems. In layman's terms, it attempts to answer the who, what, when, where and how of autonomous systems in military combat.

Since many of us, myself included, have worried over the attribution of responsibility for such systems in the case of what the Directive terms an "unintended engagement," the Directive is a welcome first step. Why? Because it lays out clear lines of responsibility for creating guidelines for system development, testing and evaluation, and equipment/weapons training, as well as for developing doctrine, tactics, techniques and procedures. Indeed, the explicit purpose of the Directive is to establish such guidelines to "minimize the probability and consequences of failures in autonomous and semi-autonomous weapons systems that could lead to unintended engagements." These "unintended engagements" refer to "the use of force resulting in damage to persons or objects that human operators did not intend to be the targets of U.S. military operations, including unacceptable levels of collateral damage beyond those consistent with the law of war, ROE, and commander's intent." Thus it would appear that the Directive should assuage not merely my worries, but those of people in and outside the Beltway, and in and outside the ivory tower.

Unfortunately, it does not. In fact, it only fosters more questions and worry. The principal cause of that worry: a provision for overriding all of the guidelines and policy put forth in this 15-page directive. How? The Directive essentially creates a loophole that allows its policies to be overridden when two Under Secretaries of Defense and the Chairman of the Joint Chiefs of Staff deem it so. In short, given a quorum of the Under Secretary of Defense for Policy, the Under Secretary of Defense for Acquisition, Technology and Logistics, and the Chairman of the Joint Chiefs, these three can skirt the very "safeguards" that the Directive lays down as DoD policy. This is disconcerting because the decision to field autonomous weapons then does not even rise to the level of the Secretary of Defense, let alone the president. Indeed, the potential for such "unintended engagements" does not even reach cabinet-level decision making. Whether this is done for expediency or political cover is open to question; what is not open to question is how such a policy undermines U.S. strategic command (as it removes the two most crucial players, the Secretary of Defense and the president) and erodes the very notion of "proper authority" in the jus ad bellum considerations of just war. Thus, while we might at first glance welcome the Directive, we should instead be highly critical of it, and further press the Pentagon to align itself with the laws of war and the requisites thereof.

The Distinguished Warfare Medal: A Sign of the Changing Times

Secretary of Defense Leon Panetta recently announced the creation of the Distinguished Warfare Medal to recognize outstanding achievements by unmanned aerial vehicle pilots. The new medal will rank above the Bronze Star with Valor and just below the Distinguished Flying Cross. This is an interesting twist in military culture, as medals and honors are traditionally bestowed upon individuals whose acts of bravery and valor, in the face of grave physical danger, go above the call of duty. The entire notion behind bestowing such recognition is that a service member has acted selflessly, facing danger and possible death, for the sake of his comrades, the mission, or country. These medals are tokens and symbols of the military virtue par excellence: courage.

Yet how can one evaluate acts in war when the fighter is not on the battlefield and is in no physical (or even imminent) danger? Panetta's feelings on the matter are quite clear: "I've always felt, having seen the great work that they do, day in and day out, that those who performed in an outstanding manner should be recognized. Unfortunately, medals that they otherwise might be eligible for simply did not recognize that kind of -- of contribution." While such a sentiment is thoughtful, it is misplaced. "Doing great work" is fundamentally different from acting courageously. Aristotle, for instance, reminds us that the virtue of courage is best understood as a "mean concerning matters that inspire confidence and fear," where one acts in the right way, at the right time and with the right motivations in the face of such fear. Combat and war are, of course, the primary theaters of fear. Yet unmanned aerial vehicle pilots are not in any way in danger and thus do not face the kinds of "fear" that traditional manned-aircraft pilots or combat soldiers face. While they are technically engaging in "combat" operations, they are not in the theater of combat.

Panetta seems to recognize this when he claims that "the medal provides distinct, department wide recognition for the extraordinary achievements that directly impact on combat operations, but that do not involve acts of valor or physical risk that combat entails." But why, then, incorporate such acts into a system of recognition based on courage? Indeed, one must ask what purpose a medal for UAV pilots serves. If it is "department wide recognition," then some other merit scheme that does not presuppose valor or courage on the battlefield could achieve this. Of course, the fear may be that failing to recognize such achievements threatens to create two classes of soldiers.

Or, perhaps more tellingly, the entire notion that we must determine how to assign merit to UAV pilots -- or perhaps future "cyber warriors" -- points to a different set of questions (and problems) in contemporary war-fighting. In other words, what is the nature of war, and of "courage" in such wars, when one side (or both) is no longer in any sort of danger? Can we even begin to call these acts war, or are they only "war" for those experiencing the violence? War is a conflict between two or more parties carried on with a "force of arms," and the entire purpose of the use of such force is to make one side capitulate to the demands of the other, usually for some political purpose. The coercion employed is supposed to be costly on both sides of the equation, which is why there is typically reticence to embroil oneself in such conflict. In this new terrain of warfare, though, the costs appear -- for at least one side -- to be only monetary. Blood is not spilt; only equipment (or property) is damaged.

Another way of thinking about the new nature of war, and how this new medal fits into recognizing a new generation of warriors, is from the opposite end of the spectrum: cowardice. If we believe that we can attribute acts of courage and valor to soldiers whose actions achieve "extraordinary... impact on combat operations" even though those actions involve no physical risk, how might we think of the opposite? What would constitute an act of UAV pilot cowardice? Is it even conceivable? If it is not, then we have learned something very telling about the new nature of war. For if soldiers cannot act cowardly in battle, then they also cannot act courageously, and so cannot be awarded medals premised on those assumptions.