The rapid increase in the use and capabilities of Unmanned Aerial Vehicles (UAVs), or “drones,” has led to debates on their place in US strategy, particularly their use in assassination missions, or so-called targeted killings. However, this debate has tended to focus narrowly on two questions: first, whether the US use of UAVs to assassinate its enemies, including US citizens, is legal, and second, whether drones should be given the autonomy to decide when to kill humans. This paper uses the concept of “necro-politics”—the arrogation of the sovereign's right both to command death and to assign grievable meaning to the dead—as it emerges in the work of Achille Mbembe to criticize the assumptions of these questions. It is argued that debates over endowing drones with the autonomy to kill humans assume that the current human operators of drones work outside of the context of racial distinction and colonial encounter in which they already make decisions to kill. The paper supports this argument with reference to the text of a US investigation into a strike which killed civilians in Uruzgan province, Afghanistan.

The decade between 2002 and 2012 saw a remarkable change in killing, from a time when no one had ever been the subject of a targeted killing by an unmanned flying weapon system to one in which several thousand people have been. Most of these people have been killed by Unmanned Aerial Vehicles (UAVs), or “drones,” in the service of US, UK, or Israeli strategy. Most drone killings have been carried out by US (overt) military operations in Afghanistan or (notionally secret) CIA-led attacks in Pakistan's Federally Administered Tribal Areas, and they increased markedly during the Obama administration. Whereas President Bush sanctioned, on average, one drone attack every 47 days, the average for President Obama was one every four days (Lorenz, von Mittelstaedt and Schmitz ). Of the 308 drone strikes from 2004 until the time of writing, 256 took place under Barack Obama (Rushing ).

As is often the case with deadly innovations in the arsenals of Western states, the use of drones has been subject to a debate about their tactical conditions of use rather than the strategies they serve or the structures of power of which they form a part. In particular, this debate has been largely confined to two inter-related aspects, the legal and the ethical (as understood in the Anglo-American philosophical tradition). The legal debate focuses largely on whether the US tactic of using drones to assassinate enemies identified as Al-Qai'da is legal from the point of view of either Pakistani sovereignty or international law more broadly. The connected ethical debate concerns whether drones ought to be given more autonomy in decisions to take human life and whether this would be more or less compatible with ethical behavior in war.

Whether for or against their use, the participants in these debates are concerned with what may go wrong with drones and how their effective use may be assured. The horizon of these debates, to be feared or welcomed, is the prospect of an autonomous killing machine, empowered to take human life on the basis of an algorithmic distinction between potential targets. In the following, I challenge the common assumptions that lie behind these positions, in particular the core assumption which presents the current drone operators (or indeed, Western military personnel in general) as perfectly rational, liberal subjects sifting information about potential targets to carry out just acts of killing—with massacres and atrocities resulting from soldiers' failure to approximate such ideals. For the prodrone camp, autonomous or semi-autonomous killer robots will reduce such failure by removing the emotional complications of human combat. For the antidrone camp, the risk of drone failure and the absence of human responsibility for any killer algorithm pose a greater danger.

Rather, I argue that drones already operate within an algorithm of racial distinction, as the term is used in Foucault's later lectures (Foucault ; Su Rasmussen )—that is, the drawing of a line between populations, some of which are to be fostered and managed and others rendered subject to the sovereign right of death. In particular, I use the concept of “necropolitics” as it emerges in the work of Achille Mbembe to criticize the assumptions of the debate on drones. Necropolitics refers to the arrogation of, in Foucauldian terms, the sovereign's command of death, but within the apparatuses of surveillance, auditing, and management which characterize “biopower.” It is this dual character, defining a population through the methods of biopolitics yet rendering it as the potential object of the sovereign power of death, which distinguishes necropolitics from a broader logic of racism. I support this argument with an illustration from an unusually well-documented empirical instance of drone warfare from Uruzgan province in Afghanistan. I demonstrate how, in this instance, drone operators assigned Afghan civilians beneath their gaze to membership of a population worthy of death, the Afghan Military Aged Male.

The Drone Debate

What is at stake in the debate on robotic warfare? Although a body of critical work on the topic is emerging (for example, Chamayou ; Gregory , ; Shaw ; Shaw and Akhter ), the rise of drones has largely been discussed across two interconnected axes. The first of these concerns whether US assassination missions in Pakistan (and by extension Somalia and Yemen) using drones are permitted under international law (see O'Connell ; Shah ; Aslam ; Plaw, Fricker and Glyn Williams ; Johnston and Sarbahi ). Some of the substance of this debate touches upon the argument made here, but since most of the controversy concerns the legality of attacks carried out with drones, rather than the nature of drones themselves, I will not engage with it directly. Connected to this legal controversy, there is an ongoing debate over whether drones can be ethically used in war and the implications they hold for the future of war. As I argue below, the positions in this debate present the problem not as the Western use of military force for imperial ends, but rather as how the perceived failure or excess of that force can be prevented.

The positions in this debate can be sketched out, very broadly, into pro- and antidrone arguments. On the prodrone side, scholars such as Ronald Arkin (), Bradley Strawser (, ), and Armin Krishnan () argue that the central point is not the use of drones as such, but the regulation of such use in accordance with the basic principles of the laws of war: just cause, proportionality, and discrimination between combatants and noncombatants. Against this broadly positive assessment of drone use, critics such as Noel Sharkey (, ) argue that the risk of empowering robots with lethal force is too great, either in particular decisions to kill or in the lowering of barriers to declaring war in general. Both sides are concerned not just with the uses of drones in the present, but with the framework of their use as they become more autonomous—autonomy meaning the “capability of a machine (usually a robot) for unsupervised operation,” with the “smaller the need for human supervision and intervention, the greater the autonomy” (Krishnan :4). The debate is, therefore, directed toward the horizon of the development of an autonomous killer robot: not necessarily one resembling the androids of the Terminator franchise, but nonetheless a machine endowed with the capability to decide when to take human life (Arkin :9).

The argument in favor of using killer drones, even those with a high degree of autonomy, draws on an analogy between these systems and less sophisticated weapons. In this view, lethal drones will, like other weapons, be used under the supervision of a human at some point and therefore simply represent a heightened version of the phenomenon of prosthesis provided by firearms, missiles, and the like. Even if an autonomous weapons system were to “pull the trigger,” it would do so under human supervision. Nor are drones the first autonomous killing system: antitank mines, for example, respond to a stimulus (weight) that triggers their lethal response without any human supervision (Arkin :38). If the war in which such weapons are used is a just one, then there is no reason a priori for drone use to be unjust (Caroll ), and given this argument's assumption that war is an inevitable part of human life, it is better to make humane war-drones than to oppose them (Arkin :2).

Against these arguments, those holding to the antidrone position argue that the development of autonomous killer drones represents a change from their simple use as weapons, even though that use has already proved inhumane and unjust. A drone tasked with killing an enemy it selects, even on the basis of some preprogrammed criteria, violates the chain of moral accountability necessary for there to be any enforcement of justice in war. If an autonomous robot killed someone in error, it could not be considered a morally responsible agent—and even if it could be, how meaningful would it be to punish a machine? Conversely, it would surely be unjust, so the argument goes, to punish a programmer or operator for a malfunction that was not a moral choice of their own.

Scholars such as Sharkey go on to claim that drones lower the barrier both to the individual acts of killing that make up war, and to starting wars. This is because of the ease with which the drone-using side can kill its enemies, without guilt, agony, the sight of blood, or the loss of young lives: In other words, drones make war too easy (Sharkey :371). Sharkey () envisions autonomous drones leading to “automated killing as the final step in the industrial revolution of war—a clean factory of slaughter with no physical blood on our hands and none of our own side killed.” Sharkey hits on an important point here, which I expand upon below, that drones are unlikely to be used in warfare between major industrial powers. Rather, as drones become more autonomous, they are likely to be employed in the contexts in which they are currently used: asymmetric and largely aerial warfare, exacerbating the tendency to view the civilian casualties of such strikes as mere figures on a screen. Moreover, Sharkey () claims there is simply no way to program a drone to discriminate between civilians and combatants.

Arkin (:xvi), by contrast, asserts that drones can be programmed to act humanely—indeed that such systems could be programmed to act more humanely than human “warfighters.” He suggests it would be possible to establish a system of accountability, or “responsibility adviser,” for autonomous killing systems (Arkin :40). According to Arkin (:39), principles pertaining to the proportionate and discriminate use of violence could be literally hardwired into drones, improving the capability for wars to be fought humanely, even if perfection in this regard is impossible to reach. Of course, this would require some kind of algorithm by which the drone would decide whom to kill. Krishnan describes how DARPA, the US Defense Advanced Research Projects Agency, has developed an automated target recognition system, which “would allow a robot or robotic weapon to independently identify an object as a target and to make a decision whether or not to engage this target … based on a computer analysis of the signatures and movements of an object in the battlespace,” although “in the long run it would always be very difficult for any ATR system to divide humans, which it could someday certainly distinguish reliably from other objects, into combatants and civilians” (Krishnan :55–56). Arkin argues that because drones would not have the human instinct for self-preservation, they could more easily approach potential targets and ascertain whether they were combatants or not, with less likelihood of using lethal force (Arkin :46). Indeed, prodrone scholars such as Bradley Strawser claim not only that drones are just, but that one is morally obliged to use them because they are more accurate than humans and do not risk the pilot's life, leading to an overall gain (Strawser ; see also Arkin ).

Against these claims, antidrone voices focus on the potential for lethal mistakes. Indeed, in 2007 a semi-autonomous cannon malfunctioned at a military display in South Africa, killing nine soldiers (Lin ). Events such as this fatal accident may be behind the reluctance to have the weaponized versions of US ground robots (systems such as TALONS and SWORDS) fire shots in battle (Singer :29). Yet, here antidrone and prodrone ethicists converge, both concerning themselves with the idea of the accidental, the unforeseen, and the precautionary. Ronald Arkin (:36) and Armin Krishnan (:4–5) argue that there may indeed be lethal drone mistakes in war, but these are likely to be fewer than those of human soldiers and more predictable. Atrocities in war, Arkin argues, result from human failings not shared by drones: fear of one's own death, rage at the loss of a comrade, “revenge,” “power dominance,” “punishment,” “asymmetrical necessity,” “genocidal thinking,” and “dualistic thinking—separating the good from the bad” (Arkin :35).

It is at this point that the convergence in arguments from both sides of the debate reveals the lacuna addressed by this paper. Both camps are concerned with what goes wrong either with drones (going haywire, killing people indiscriminately because of a programming error) or with human soldiers (submitting to their emotional drives and committing atrocities). Yet might not the atrociousness of drones arise from their “correct” and quotidian use—especially in the context of the counterinsurgent battlefield in which they are most likely to be used? The debate on drone autonomy assumes that the current, human operators of drones are perfectly separable from both the drone and the hierarchical structure of violence that produced it. The claim that drones would be less likely to kill civilians because they lack emotions assumes that human soldiers commit atrocities because of their emotions. On the contrary, I argue that any drone atrocities, in the context of counterinsurgency in which the machines are likely to be used, reflect precisely those apparatuses that “distinguish those who are to be protected from those who are to be feared or destroyed” (Khalili :1476). The operation of these apparatuses, of which any putative autonomous systems will remain a part, reflects a necropolitical logic, “separating the good from the bad” and establishing who is “an object in the battlespace.” Before looking at an example of such logic, it is necessary to answer the questions: What is “necropolitics,” and what is at stake in using such a concept to intervene in the debates on drone warfare?

Drone Necropolitics

What does it mean, then, to speak of necropolitics and of a racial algorithm of distinction? Before demonstrating the empirical usefulness of the concept in the final section of the paper, we must establish the definition and theoretical coordinates of necropolitics as an apparatus of racial distinction. The transformation of the enemy into an object to be destroyed in an abstract matrix, rather than a human life, is of course a familiar maneuver in the study of war. For example, Carol Cohn's (:704–706) seminal work demonstrates how the language of defense intellectuals serves to domesticate, obfuscate, and gender nuclear warfare, thereby rendering the fate of potential victims literally unspeakable. In an intervention pertinent to the physical and psychological separation characteristic of drone warfare, Der Derian (:10–13) has argued that the collapse between the reality and “virtuality” of warfare (the increasing interpenetration of warfare and entertainment technologies) has made the pursuit of a putatively virtuous war more possible. These are instances of the objectification of an enemy, their reduction to nonlife, which is certainly part of the process of drone war. The object of interest in this paper, however, is a specific variant of this operation: the identification of populations as less than life, or as dangerous life whose extinguishing must be managed in order for valuable life to flourish.

It is in the above sense that the term racism is used here: a specific meaning found in Foucault's () later lectures and developed by Achille Mbembe. This sense is somewhat different from—although not incompatible with—that in which “race” and “racism” are used in the broader field of Critical Race Theory in its legal and sociological variants. The concern of scholars in that context is to demonstrate the socially constructed, rather than biological, nature of “race,” and its fundamental emergence from structures of oppression and “privilege,” conceived of as unearned benefits accruing to the dominant. Racism makes “races,” such that “whiteness” consists of “a location of structural advantage, of race privilege… a ‘standpoint’; a place from which white people look at [them]selves, at others and at society” and a “set of cultural practices that are usually unmarked and unnamed” (Frankenberg :1). Critical race theorists thus seek to undo this invisibility, demonstrating the historical nature and social genesis of the structures that come to construct race, and especially the dominant “white race” (Ignatiev ; Allen ; Fields and Fields ).

Racism in this sense is connected to the usage involved in this paper, in particular in its connection to colonialism (Frankenberg :16–17). Racism inevitably involves a process of racialization: “a social and political process of inscribing group affinity and difference primarily onto the body … as well as on other markers of lived experience” (Vucetic :7). Such processes of dehumanization and ascription of stock characteristics to a subject population—their passivity, “femininity,” and simultaneous warlike and treacherous nature—are found throughout the history of Western warfare in colonial and postcolonial lands (Gregory :7–8; Porter :37–41; Khalili :233–236) and will appear further in the discussion below. However, I adopt here a distinct version of the notion of racism deriving from Foucault's lectures on territory, war, and security. The distinction in this work, and the later elaboration by Mbembe, lies in the place of racism in Foucault's categorization of the forms of operation of power. Outlining the steps of this categorization also demonstrates the contribution made by the concept of necropolitics to the body of work applying Foucauldian categories to the War on Terror, which has hitherto viewed the phenomenon through the lens of biopolitics, biopower, and the “biohuman” (Duffield :5–7; Dillon and Reid :20–24).

Racism and necropolitics occupy a particular position in the steps of Foucault's genealogy of the operative modes of power. We agree with Morton and Bygrave (:5) that these modes might operate simultaneously, having taken form successively, while also distinguishing between them—a necessary, if basic, operation if we are to outline the place of necropolitics in this map of power, and in particular in that period in which Foucault speaks of biopower rather than governmentality (Su Rasmussen :36). If the power of the sovereign operates through the right to command death through spectacular violence inscribed on the body, the “society of the disciplines” manipulates the body into docility through surveillance and regulation. The transition to a biopolitical form of power lies in the fact that biopolitics (Foucault :240–246) manages the life of a population at the level of the population; its characteristic is not to command death but to “make life live” (ibid.:241).

What is the relationship of racial distinction to this familiar schema, and how does Mbembe's notion of necropolitics derive from it? The origin of the concept lies in Foucault's intuition that racism is a form of operation of power in the interstice of sovereign power and biopower (Foucault :254; Su Rasmussen :40). It is the old sovereign power of death, operative in a biopolitical setting, implying therefore a division between populations, the making of a “caesura” between the population worthy of being made to live, and that subject to the right to command death (Foucault :253; Su Rasmussen :40). Foucault (:60–62) traces what he believes to be the history of this distinction as a variety of “race war” myths from the putative conflict between Gauls and Franks to the essence of Nazism, which he sees as the utmost expression of the logic of the sovereign right of death within the techniques of biopower (ibid.:259–260). Foucault demonstrates the link between biopower as the management of populations, and racism as the delineation and destruction of “unhealthy” populations, in that the organicist conception of the “race” sees destruction as a matter of increasing the vitality of the remaining body (Su Rasmussen :40).

Su Rasmussen's (:34) invocation of necropolitics thus proceeds from Foucault's intuition, never fully fleshed out, that racism was a technology of biopower at work in the European colony. Mbembe (:23) locates necropolitics as the “synthesis of massacre and bureaucracy,” perfected in the colony and returned to Europe in the Second World War. Thus, racism is not a matter of individual prejudice or (just) the ascription of certain characteristics to a subjugated population, but a paradoxical form of biopolitics: “necropolitics” in the sense that it establishes the distinction between populations worthy of life and subject to the right of death, and that the command of death operates through the auditing, surveillance, and management of the population thus identified. Mbembe (:24) provides a prospectus of the operation of this form of power in the colony, outlining how the placing of the colonial population beyond legal codification permits the exercise of the sovereign right of death through means of the surveillance and assignation of populations.

Mbembe's concept forms a useful addition to Foucauldian approaches to the War on Terror, one that can be particularly productively invoked in the discussion of drones. The prevailing lens through which these categories have been applied has seen the War on Terror as an extension of the logics of biopolitics and biopower: the management, surveillance, and auditing of life at the level of populations (Duffield ; Reid and Dillon ). Thus, Dillon and Reid (:20) argue that the War on Terror represents an extension of the “liberal way of war”: waging a global struggle for “biohumanity.” Reid has extended this argument to claim that, far from representing the recrudescence of territorial sovereignty implicated in military competition, the US invasions of Iraq and Afghanistan represented precisely that operation of decentered biopolitics theorized by Hardt and Negri a year prior to the September 11 attacks (Reid :238–240). The justifications for the War on Terror—apparently without geographic or temporal limit and proclaiming as a war aim “a balance of power that favors freedom”—indeed appear to present an international parallel to the passage from sovereign to disciplinary power: from the spectacular inscription of the power of the sovereign on the body of the subject, to the formation of subjects disciplined into a form of behavior. The apparatuses of the management of the processes of life by which Foucault characterizes biopower hinge upon the technologies of disciplinary codification, auditing, and mapping: technologies identified by Derek Gregory (:39–40) in the occupation of Baghdad.

Yet a tension remains between this biopolitical reading and the apparent operation of exceptional sovereign power, of the kind Foucault distinguished from disciplinary power over populations. Judith Butler (:94) has pointed to this tension, identifying how a kind of uneven and combined sovereign power is at work in the biopolitical apparatuses of the War on Terror, producing countless instances of the power of the sovereign wreaked upon the body of the subject, albeit brought to this point through techniques of the surveillance and management of populations. The necessary corollary of this articulation of sovereign and biopolitical modes is the constitution of certain populations as dangerous elements, as less than life (Butler :99).

Where do drones fit into all this? As outlined above, the concept of necropolitics fruitfully engages with both sides of the tension—with, as it were, the questions of “when is sovereign power and when is biopolitics?”—by identifying racism as the technology of power that unites the exercise of sovereign power with technologies of the surveillance, auditing, and management of populations. The drone is precisely a technology of the management of populations: of the drawing of a “caesura” between worthy and unworthy life (Su Rasmussen :40). Approaching the drone in this way offers a way out of the bind of sovereignty versus biopolitics. The drone's-eye view is a fundamentally biopolitical one, in the sense that it surveys and audits populations in their “patterns of life.” Yet the drone is not an instrument of making life live among those it surveys. Its purpose is to destroy bodies, not render them docile.

For the drone is not merely a new technology in the everyday sense of a mechanical and electrical assemblage: It is a technology of racial distinction. What else is the drone operator's screen, or any potential automated target recognition (ATR) system, but a means “to define who matters and who does not, who is disposable and who is not” (Mbembe :27)? Circling and swooping above entire territories, the drone defines who is an “object in the battlespace” and who is not, delineating those areas and populations characterized by the “acceptability of putting to death.” The current debate on drones and their potential autonomy misses this point, not by underestimating the autonomy of drones but by overestimating that of their operators: There is already a target recognition system at work in the technology of racial distinction that embraces both the mechanical drones and their fleshly operators.

The necropolitical logic discussed in this paper thus refers to the drawing of the caesura between populations worthy and unworthy of life, and the consequent exercise of the sovereign power of death against the members of these populations. Such a process is at work in the visible techniques of distinction and allocation to a killable population used by the drone pilots in the example I give below. This population is known and audited through the gaze of the drone, but for the purpose of death rather than life. It is in this sense that I speak of the necropolitics of drone warfare—the operation of a logic revealed, but not comprehended, in the mainstream literature on drones, which speaks of “separating the good from the bad” and establishing who is “an object in the battlespace.” The example of the drone strike in Uruzgan illuminates this logic starkly.

“The Desire to Go Kinetic”: Drone Necropolitics in Uruzgan

In the following section, I take up an unusually well-documented instance of a strike that killed Afghan civilians to substantiate my argument about the necropolitics of drone warfare. This is only one instance, of course, but it evokes a series of responses and slippages also visible in the considered pronouncements of US military personnel of varying ranks. For example, the implication of the view that drones are “our answer to the suicide bomber” (Singer :63) is not difficult to draw out: The fanatical and uncivilized opponents of the United States do not fear death and therefore must be met with the ultimate product of technical civilization, a killer robot without the capacity to fear death (see also Gregory ). A similar slippage is at work when a US military journalist predicts the development of autonomous robotic networks that will “help save lives by taking humans out of harm's way” (Lawlor in Graham :169). Of course, a combat drone that does not put humans in harm's way would be useless. The premise of this statement is that those who take up arms against the United States have forfeited their humanity, becoming “savage life … just another form of animal life” (Mbembe :24). The process of racial distinction relies upon an apparatus of knowledge that identifies those whom it is acceptable to put to death. This apparatus of knowledge, the necropolitical logic of distinction of the drone, in particular defines Afghan “MAMs” (Military Age Males) as those whom it is “acceptable to put to death,” and assimilates every decision to kill to the identification of members of such a category. As the NYU/Stanford report “Living under the Drones” relates, “the US government counts all adult males killed by strikes as ‘militants,’ absent exonerating evidence” (International Human Rights Clinic at Stanford Law School and Global Justice Clinic at NYU School of Law 2012:x).

Necropolitical logic—distinguishing between populations whose life is to be managed and those who are subject to the right of death—is visibly at work in the instance I examine below. On February 21, 2010, a US strike was launched at a convoy of vehicles passing through the Afghan province of Uruzgan, killing between 15 (the US estimate) and 23 (the local elders' estimate) civilians (McHale :1). The incident was widely reported at the time as an especially awful example of civilian deaths due to US airpower in Afghanistan. A reading of the text of the US military report into the incident reveals the operation of necropolitics—the making of a “caesura” between life and death—by the drone team. The text used here comes from the US military report into the incident, which is available in full from the American Civil Liberties Union. In particular, I use the executive summary and the transcript of an interview with one of the victims of the strike. The report is now in the public domain, and all identifying information in it has been redacted.

The US military “kill chain” involved in the Uruzgan incident comprised ground troops, referred to in the text as “Operational Detachment Alpha” (ODA), the Predator drone operators based at Creech Air Force Base in Nevada, the “screeners” processing information from the Predator video feeds at Hurlburt Field Base in Florida, and helicopter gunships known as “OH-58D” in the text. The helicopters fired the actual missiles, but they did so on the basis of a decision made by the drone operators, informed by their interpretation of what the screeners reported. The text demonstrates that this decision was not the result of emotional distortion under the stress of battle—neither screeners nor drone operators were physically present in the battlefield, and the report describes the NCO of the troops actually present on the ground as “the most mature voice on the radio” (McHale :3). Rather, a picture of who is an acceptable target, a defined category within the apparatus of knowledge of the US military and consistent with the colonial apparatus of knowledge in the occupation of Afghanistan as a whole, is used to assimilate all the “objects in the battlespace.” This population, those whom it is acceptable to put to death, is Afghan Military Aged Males. As the text below demonstrates, the drone operators worked persistently to construe the people beneath their gaze as members of this population, and consequently as falling on the fatal side of the caesura between worthy and unworthy populations.

There was indeed combat in the vicinity of the air strike. US forces and their Afghan allies were mounting an attack on their enemies in the village of Khod, some 12 km away (McHale :1). The convoy of three vehicles was first spotted on the road at 5 am: The Predators observed its progress for some three and a half hours before the Hellfire missiles were launched. The report describes how “[a]dult men were observed gathering in and around the vehicle moving tactically and appearing to provide security” (2010:2). The “tactical” nature of this movement is not explained, nor the appearance of providing security—surely a wise precaution in any case, given that not only were the vehicles passing through Taliban-held territory, but they were in fact about to be subject to armed attack. However, one of the women who survived the attack (referred to in the report as “the female”) stated that the cars made stops to pray, and all of the passengers were unarmed (CENTCOM :2).

The operation of a necropolitical logic becomes especially visible in the back-and-forth over whether the children visible on the screen are children or adolescents and whether there are any weapons present at the scene. Here the drone operators mark out that line which divides those whom it is acceptable to put to death from those whom it is not—on the fatal side of which lie Military Aged Males—and cognitively assimilate all the Afghans present on their screens to this category. They are rendered “a population understood as, by definition, illegitimate, if not dubiously human” (Butler :77).

About three-quarters of an hour into the Uruzgan incident, the screeners at Hurlburt Field identified children among the passengers in the vehicles: at 5:38, 5:40, and 5:47 (McHale :3). There follows a revealing exchange about the category to which these figures belong: whether they were “children or adolescents.” When asked by the ground troops about the presence of children at 7:38, “the Predator crew discusses with the Screeners and the Screeners change their assessment to ‘adolescents.’” Although there was no agreed definition of “adolescents,” the Predator pilot reports to the JTAC [ground commander] “We're thinking early teens… adolescents” (2010:3).

This categorization of childhood, it becomes evident, is both fatal and opaque. The crews do not agree on what constitutes an adolescent or on the nature of adolescent personhood. In any case, the age ranges identified as “adolescent” (nine to fourteen or seven to thirteen years old) begin much younger than the normal English usage of the word for someone who has begun puberty but not reached full adulthood. A similar concern seems to exercise the US major interviewing the surviving victim of the attack, who asks her twice whether children “under ten” were present (CENTCOM :3).

When asked “[i]s adolescent a different call out then [sic] child or children?” a screener replies:

‘I think it varies from Screener to Screener. One Screener may be more comfortable with calling out adolescent. It's very difficult to tell. I personally believe an adolescent is a child, an adolescent being a non-hostile person’ [Reference redacted]. He stated he believed an adolescent to be 9–14 years old [Reference redacted]. [Name redacted] the primary Screener at the time, said she believed an adolescent to be 7–13 years old, and ‘in a war situation they're considered dangerous’ [Reference redacted]. (McHale :3)

This indistinct distinction takes on a fatal aspect at 8:37, shortly before the missiles are launched, when the screeners change their assessment of “child” to “adolescent” and therefore “dangerous.” The screeners issued a call

stating that the ‘*2 children were to be adolescent’…This * indicates a corrected assessment [Reference redacted]. Ultimately, the distinction between children, adolescents and MAMs disappeared. The Predator crew immediately before the strike was ordered, only identified militarily capable war-fighting age males being on the convoy. (McHale :3)

When the Predator crew suggests firing on the vehicles on the basis of a supposed weapon sighting, the ground commander

responds ‘we notice that but you know how it is with ROEs [Rules of Engagement] so we have to be careful with those, ROEs. In contrast the Predator crew acted almost juvenile in their desire to engage the targets. When the Screeners first identified children, the Predator sensor responds ‘bullsh*t, where?’ The Predator pilot follows with ‘at least one child… Really? assisting the MAM, uh, that means he's guilty/yeah review that (expletive deleted)… why didn't he say possible child, why are they so quick to call (expletive deleted) kids but not to call (expletive deleted) a rifle.’…The Predator sensor says on internal comms, ‘I really doubt that children call, man I really (expletive deleted) hate that.’ (McHale :3)

The tangible sense of these extracts is not one of the Predator operating team committing an atrocity out of rage or fear, which would be ameliorated by the machine rationality of an autonomous drone, but rather the operation of an existing knowledge of distinction in putting to death, of which any potentially autonomous drone would still be part. The entire incident lasts a long time, more than three hours, and involves a series of back-and-forth deliberations based on the shared assumption that a Military Aged Male in Afghanistan is a member of a dangerous population, liable for putting to death; thus the revealing protest of the Predator pilot: “at least one child… Really? assisting the MAM, uh, that means he's guilty.” Further, when seeking to clarify the age of the children, the team asks not “are they children” or “are they a threat” but rather “[i]s adolescent a different call out then [sic] child or children?” That is, the concern is not with what these actual humans are doing, their age, or the threat their actions pose, but where they fit in the established categories of the US military. If “adolescent” forms a different such category to “child,” then persons found to belong to that category belong once again to the population that is liable for putting to death. The subsequent elision of children into “adolescents,” followed by the deadly missile launch, illustrates precisely that drawing of a “biological caesura” of which Mbembe writes.

“A Beautiful Target”

The assimilation of children to the status of “Military Aged Male” reflects a further step in the necropolitical logic: the assumption that all members of the MAM population pose a lethal threat, to be met with equally lethal violence. The MAM belongs to “a population understood as, by definition, illegitimate, if not dubiously human” (Butler :91). The drone pilots of the Uruzgan incident enact this understanding in their pursuit of a definition of the human forms visible on their screens as MAMs and therefore as carrying weapons: weapons that pose no threat to the drone crew (nor to the ground troops 12 km away) but whose hypothetical presence renders permissible the putting to death of the passengers in the vehicle.

The survivor who was interviewed after the incident by the US military confirmed there were no weapons in the vehicles—only poultry and other gifts for the trip being made to Kabul (CENTCOM :3). The drone pilot and screeners, however, claimed to identify three weapons throughout the three-hour-long incident (McHale :3). We have already seen how the drone pilot reacts with frustration to the identification of children rather than MAMs bearing arms: “why are they so quick to call (expletive deleted) kids but not to call (expletive deleted) a rifle.” Earlier in the text, we saw how “adult men gathering in and around the vehicle” was rendered as “tactical movement.” The details of how these adult men come to be weaponized in the imaginaries of the drone team are revealed in the executive summary (2010:3): “[a]t 0533D, the Screeners from Hurlburt Field Florida first identified a possible weapon with the MAMs in the convoy. There are additional reports of weapons at 0622D, 0730D, and 0734D.”

The Predator pilot and the screeners argue about what they are seeing, ratcheting up the dangerous nature of the MAMs under their gaze. What matters is not whether a male Afghan between the ages of 13 (or possibly even younger, as seen above) and 65 is actually engaged in combat of any kind but rather, first, that he belongs to this population and, second, that he is associated with an object that could be perceived to be a weapon. Thus, relates the report (2010:4), “[t]he Predator crew used the term ‘PID’ to mean positive identification of an object rather than as used in the Rules of Engagement to mean positive identification of a target.” The Predator crew shows a persistent eagerness to assimilate the people in the convoy, not to mental artifacts of their own making or to simple rage or fear, but to a prior and given category: those members of the Afghan population whom it is permissible to put to death. Thus,

[a]dditionally, on several occasions the Predator crew identified weapons on their own, independent of the screener's assessment. At 0511D the Predator makes a radio call to [redacted]… They prompted the screeners in mIRC [the online chat relay system used by the US military] to let them know if they could PID any weapons, but at 0518D, the screeners reported that they could not confirm any weapons. (2010:4)

The screeners interviewed after the incident confirmed the absence of weapons. It may be that the objects seen as dangerous (in an abstract sense, thousands of miles from where they were sitting) by the Predator crew were the turkeys mentioned by the injured survivor. Since both the object and the man holding it were obliterated by the US strike, we will never know.

The particular interlude prompted by the demand for weapons to be seen at 5:18 leads to a highly revealing exchange between the Predator team and the screeners, which is worth quoting at length:

At 0529D the Predator pilot states to the crew ‘does it look like he is ho'n something across his chest. It's what they've been doing here lately, they wrap their *expletive* in their man dresses so you can't PID it.’ Then on the radio to [redacted] he says ‘looks like the dismounted pax on the hilux pickup on the east side is carrying something, but we cannot PID what it is at this time but he is carrying something.’ After the Predator crew prompted them twice in mIRC, the screeners call out a possible weapon and then ask the crew to go white hot to get a better look. The response from the sensor operator is ‘white hot is not going to give us anything better, that truck would make a beautiful target.’ The Predator pilot then at 0534D made this radio call ‘All players, all Players from [redacted] from our DGS, the MAM that just mounted the back of the hilux had a possible weapon, read back possible rifle.’ During their post-strike review, the screeners determined that this was not a weapon. At 0624D the screeners called out a weapon, this the only time that the Screeners called out a weapon without being prompted by the Predator crew. At 0655D, the Predator pilot called [redacted] and told him that the Screeners called out two weapons. The Screeners had not made any call outs of weapons. At 0741 the Predator pilot calls [redacted] and says ‘there's about 6 guys riding in the back of the hilux, so they don't have a lot of room. Potentially could carry a personal weapon on themselves.’ (2010:4)

A great deal can be understood about the necropolitical logic at work in the occupation of Afghanistan through this passage. As an indicator of the role of Orientalist fantasy in the tendency of Western militaries to “effeminise the men of the [occupied] population through both symbolic and practical emasculation” (Khalili :1480), the Predator pilot's characterization of the Afghan man's clothing is quite stark: “their mandresses.” Nor does this phrase refer solely to the Predator pilot's notion of what men ought to wear (presumably trousers), and the implied denigration of those whose clothing does not meet this norm. It also reveals the drawing of a caesura, a mental and political cordon around those whose actions inherently render them part of the population it is acceptable to put to death.

We can consider this act of delineation at the basic level of pronouns. The Predator pilot describes how “what they've [emphasis added] been doing here lately” is to “wrap their *expletive* in their man dresses so you can't PID it”. Before this, he asks for confirmation that the man on the screen does indeed look like he is holding something across his chest. Now, it may be objected that “they” is simply a pronoun here—which it is, but this usage is in no sense simple. The pilot could have said “that's what the Taliban have been doing here lately,” or “the enemy” or “the insurgents” or a similar noun. By using “they,” the pilot shows that he already considers the man he is looking at to be one of “them,” and this “they” have very definite characteristics, culled from the imaginary of what Patrick Porter () calls “military orientalism.” “They” are effete, exotic, and treacherous in transgression of gender boundaries by, for example, their wearing “mandresses.” Nor is the mandress, however comfortable or stylish it may sound in comparison with US military uniform, a simple piece of clothing. It is itself weaponized, a tool of the MAM's underhand concealment of the arms he is assumed to bear, and which the action of carrying something across the chest inadvertently reveals.

The unspoken frustration behind the Predator pilot's ascription of a motive to the Afghan man's concealment of a (nonexistent) weapon is doubly instructive. Why do MAMs hold things across their chests and inside their clothes? They do so “so you can't PID it.” This implies that the pilot believes that the Taliban are manipulating US rules of engagement to the degree that they know what constitutes a positive identification of a weapon for a drone pilot and are deliberately preventing this identification, thus hampering the use of lethal force against them. The pilot therefore inverts the rules of engagement by evoking the tactical wrapping-up of objects in the “mandress”: the very absence of a visible weapon on an Afghan male thereby becomes grounds for treating him as a threat.

It is the potentiality of the Afghan MAM—to which category all Afghans beneath the Predator's gaze have by this point been assimilated—to bear arms against the United States that renders them acceptable to put to death. To define someone as bearing arms (even, or especially, concealed arms) is to place them in this population. The Predator crew rejects any impediment to their attacking “the beautiful target” of the pick-up truck, exclaiming that “white hot is not going to give us anything better.” The pilot raises a wide alert on the basis that a man in the back of the pick-up truck may have a rifle: “[a]ll players, all Players … MAM that just mounted the back of the hilux had a possible weapon, read back possible rifle.” Again, two hours later and without any apparent referent in this instance, the pilot claims to see “about 6 guys riding in the back of the hilux, so they don't have a lot of room. Potentially [emphasis added] could carry a personal weapon on themselves.” Not only does the pilot act on the basis of potential rather than actual weapons, he directly invents such weapons, calling another member of the team to tell “him that the Screeners called out two weapons” when no such call had been made.

Conclusion

What does this instance of drone war contribute to the debate on autonomous killing machines? One likely objection to my argument is that this is an isolated incident, at most an aberration of precisely the kind that greater drone autonomy would make less likely (a case made particularly forcefully in Arkin ). Yet this objection is based on a view of the individual rationality—or irrationality—of the drone pilot or soldier as the source of atrocious behavior, to be constrained by rules of engagement or programmed away with a more efficient algorithm. This view is at odds with close consideration of what went on in Uruzgan: not madness, rage, or prejudice on the part of the drone pilots but rather their operating within a predefined mental apparatus that renders acceptable the putting to death of Afghan adult men.

The Uruzgan incident is indeed only one instance (although given the level of reported civilian casualties from drone strikes, not an isolated one), but its uniquely rich documentation evokes the necropolitical logic in which the drones are embedded. These structures delineate places and populations for the exercise of sovereign power: “who matters and who does not, who is disposable and who is not” (Mbembe :27). The impulse of the Uruzgan drone operator to “go kinetic” was not simply a violation of the US Rules of Engagement: It was also a confirmation of them in the act of violation. The designation of a category of inherently dangerous people, Afghan Military Aged Males, leads to the assimilation of all members of that category to a threat that must be eliminated by death, and the further assimilation of all humans in sight of the drone to that category. This operation of necropolitics, of the technology of delineating between the sovereign right of death and the discursive command to live, is surely visible in the transcript of the Uruzgan incident. When we consider the meaning of the drones and their humans, it is to this logic of racial distinction that permits death—rather than the development of ever-more powerful or autonomous algorithms—that we must look.

International Human Rights Clinic at Stanford Law School and Global Justice Clinic at NYU School of Law. (2012) Living under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan . Available at http://www.livingunderdrones.org/download-report/. (Accessed September 1, 2012.)

The Bureau of Investigative Journalism (TBIJ) estimates that at least 2,562 people have been killed in drone strikes in Pakistan. However, given the secrecy that surrounds these and other CIA-run programs of drone killing, it is impossible to be sure exactly how many people have been killed by drones. See The Covert Drone War, available at http://www.thebureauinvestigates.com/category/projects/drones/. (Accessed September 9, 2013.)

Steve Niva (:190–192) has also placed the evolution of drone warfare within the context of the emergence of “network-centric” and “chaoplexic” warfare in US military strategy.

The related terms “imperial” and “colonial,” although connected, should be distinguished and explained here. I follow and expand upon Robert Young's (:26–27) distinction between the two. Imperialism, in this sense, is a “general system of economic domination with direct political domination being a possible, but not necessary, adjunct … think of the Pentagon and the CIA in Washington, with their global strategy of controlling events in independent states all over the world to defeat communism or Islamic resistance and further US interests.” Colonialism is, by contrast, a practice: the relationship between the dominant and the dominated at the site of their encounter, the colony—most commonly in histories of settlement by metropolitan colonists (Young :17). Drone warfare, of course, does not involve the settlement of Pakistan or Afghanistan by US colonists, but it is at the seam of an encounter between the dominant and the dominated in a system of global domination, thereby exhibiting continuity with the colonial projects of the nineteenth and twentieth centuries (Gregory :7–8). Thus, the system of strategic relationships intended to maintain US global dominance is imperial: The encounter between the operatives of that system (such as drones and their “pilots”) and the dominated is colonial.

However, it should be noted that Strawser () argues “that autonomous drones—weapons with an artificial intelligence, which could make lethal decisions on their own—are morally wrong in principle.”

Or biopower—the distinction is not made fully clear (Su Rasmussen :36).

As Su Rasmussen (:42) notes, the original lectures at the Collège de France on this point resemble Hannah Arendt's arguments on totalitarianism, but this comparison is not pursued by Foucault.