Chapter IV: Globalization and the Changing Face of Identification

INTRODUCTION

National security measures can be defined as those technical and non-technical measures initiated to curb breaches in national security, irrespective of whether the breaches are committed by nationals or aliens, from within or from outside the sovereign state. National security includes such government priorities as maintaining border control, safeguarding against pandemic outbreaks, preventing acts of terror, and even discovering and eliminating identification fraud. Governments worldwide are beginning to implement information and communication security techniques as a way of protecting and enhancing their national security. These techniques take the form of citizen identification card schemes using smart cards, behavioral tracking for crowd control using closed-circuit television (CCTV), electronic tagging for mass transit using radio-frequency identification (RFID), ePassports for travel using biometrics (figure 1), and 24×7 tracking of suspected terrorists using global positioning systems (GPS).

The electorate is informed that these homeland security techniques are deployed to assist government in protecting its citizenry and infrastructure. The introduction of these widespread measures, however, is occurring at a rapid pace, without equivalent deliberation over their potential longer-term impacts on both citizens and business. This chapter explores the background context to the proliferation of automatic identification and location-based service techniques post September 11, 2001. Such themes as globalization, the role of intelligence in preserving national security, the rise of the new terrorism, and the ability to securitize a nation state are explored.

THE IMPACT OF GLOBALIZATION

Globalization is defined by Findlay (1998, p. viii) as “…the collapsing of time and space – the process whereby, through mass communication, multinational commerce, internationalized politics and transnational regulation, we seem to be moving inexorably towards a single culture…” For Findlay, crime (and more specifically transnational crime), “its representation and its impact are part of globalization.” Some scholars have gone as far as to pronounce that globalization is a facilitator of modern transnational crime (TNC). Globalization is a paradoxical and reflexive concept: it generates two opposing forces. On the one hand, it attempts to bring together people of all nations, to break down borders and barriers alike; it is about coordination, integration and harmonization in a bid to reduce global insecurity by increasing knowledge-sharing activities. Yet this same openness and interdependence enables “various risks to destabilize the international economy” (Bruck, 2004, p. 116). For the greater part, modern TNC piggybacks on global supply chains (Shelley, 2006); in this manner, organized crime groups can quickly form, act, and then disband after fulfilling an objective.

Terrorist organizations engaged in transnational crime, for instance, like any other transnational company, can take advantage of open markets, the global reach of customers, technological innovation, new recruits (of all backgrounds and talents), international financing sources and so on. A convergence is occurring between criminal groups, between crime types and between crime regions, facilitated by globalization. Some of these TNC groups even claim that their involvement in organized crime is a direct response to globalization pressures.

Buzan, Wæver and de Wilde (1998, p. 4) call this the “dark side” of globalization, “where criminal organizations are said not only to have benefited from the increasingly open global economy, but to have developed powerful tools, techniques and relationships to thwart the state.” Barrie Stevens (2004, p. 10) calls this view the “flip side of the coin” where transnational organized criminal groups use the very same channels (i.e. transport and communications) to conduct their illegal activities, as if they were legal entities. These channels are “vulnerable to abuse through theft, fraud, the trafficking of humans and animals, terrorist operations and so on” (Stevens, 2004, p. 21).

In summary, there have been structural changes in transnational organized crime as a result of globalization. Transnational criminal groups now reflect a typical business set-up with established “core competencies”: they are cellular, small and flat, as opposed to hierarchically structured, and they focus on their strengths in a given service (i.e. transnational crime). The groups form temporary alliances, and form them rapidly, so as to go undetected. Members are sub-contracted for specific tasks rather than recruited as permanent staff, analogous to the notion of virtual teams in corporations.

Globalization, it should be noted, is often cited as one of the “root causes” for terrorism today, as the rich-poor divide continues to widen. According to Paul Rogers (2007), “…the combination of a widening rich-poor gap with an increasingly knowledgeable poor, is leading to a ‘revolution of unfulfilled expectations’, a prominent feature of many insurgencies and instability in Latin America, North Africa and the Middle East.”

THE RISE OF THE "NEW" TERRORISM

The key difficulties in formulating a universally acceptable definition of “terrorism” include: (a) the fluidity of the term, particularly over the last two hundred years; (b) the subjective nature of any definition, based on the biases of individuals or agencies; (c) the political vernacular that has entered the discourse on terrorism; and (d) media rhetoric and the overly broad labeling of acts as terrorist. Commenting on the much-venerated Oxford English Dictionary definition of “terrorism”, Bruce Hoffman (2006, p. 3) states that the definition is “too literal and too historical to be of much contemporary use…”

Brian Jenkins has written that what is called “terrorism” depends on one’s point of view. The term is inherently ‘negative’ and implies a moral judgment. Once the “terrorist” tag is given to an opponent, it is highly likely that others will also be persuaded to use it in the same manner. “Hence the decision to call someone or label some organization “terrorist” becomes almost unavoidably subjective, depending largely on whether one sympathizes with or opposes the person/group/cause concerned. If one identifies with the victim of the violence, for example, then the act is terrorism. If, however, one identifies with the perpetrator, the violent act is regarded in more sympathetic, if not positive… light, and it is not terrorism” (Hoffman, 2006, p. 23). Consider that terrorist organizations themselves today will never use the term “terrorist” in any form to describe themselves or their activities. Terrorists do not see themselves as “terrorists” but as “reluctant warriors, driven by desperation” (Hoffman, 2006, p. 22).

What is also disturbing is that the United Nations and other organizations have taken action against terrorism even while unable to agree on a definition (Boulden & Weiss, 2004, p. 4). In the United States, definitions of “terrorism” abound; each agency (and even each department within an agency) chooses a construct that caters to its main objectives. The depth of the definitional problem can be seen in the aphorism that “my freedom fighter is your terrorist” (Ganor, 2002, p. 287). As Hoffman (2006, p. 23) elaborates, he who has been called a terrorist will almost always claim that the real terrorist is the ‘system’ – whether society, law or government – that they are fighting against. What the media often labels as terrorism is not truly representative of what a terrorist act is; the term seems to be thrown about loosely to depict wide-ranging acts of violence. Hoffman points to the role of the news media that divided the United Nations in the 1970s and continues to do so today. He believes the news media “…has further contributed to the obfuscation of the terrorist/“freedom fighter” debate, enshrining imprecision and implication as the lingua franca of political violence in the name of objectivity and neutrality” (Hoffman, 2006, p. 28).

Characteristics of the New Terrorism

Hoffman (2006, p. 19) has written that the terrorist attacks of September 11, 2001 “redefined terrorism yet again… more than twice as many Americans perished on 9/11 than had been killed by terrorists since 1968 – the year acknowledged as marking the advent of modern, international terrorism.” This redefined terrorism has been given the name “new terrorism” by Simon and Benjamin (2000, p. 12), among others. The increasing trend is towards mass-casualty terrorism, a significantly more lethal preference than in the past (figures 2 and 3). According to Dolnik and Fitzgerald (2008), “new terrorism”, as opposed to “traditional terrorism”, is characterized by: increasing lethality; religion replacing politics; mass-casualty justification; a reduction of “taboo targets”; transnational networks; advanced technologies; decentralized leadership; ad hoc groups; one-off events; and the increased prominence of suicide terrorism.

What we are witnessing today is terrorists attempting to outdo previous attacks in a bid to intensify their campaigns and muster support for their cause from sympathizers in the international arena. Not only are the new terrorists attempting to shock their audience; as the audience becomes desensitized by wide media coverage of past attacks (Ben-Shaul, 2006), each new attack must become more and more violent to maintain the same level of fear and panic in society. The incidence of suicide operations is indicative of the new terrorism.

The increasing lethality of the new terrorism is also an attractive tool for the recruitment of new members into terrorist organizations, and even for the formation of new splinter groups that end up being more radical than their predecessors. Another important characteristic of the “new terrorism” is the expansion of target categories and a reduction of taboo targets. In the past, certain groups were plainly considered “off-limits”, but today anyone can be a legitimate target, including women and children, the elderly and the young, even members of the same constituency, depending on their place of residence and to whom their taxes are paid. This all has to do with terrorists enforcing a “common grief” on their victims.

Another characteristic of the “new terrorism” is the advanced technologies that are at the disposal of today’s terrorist organizations for the careful planning, operation and execution of violent acts (McNeal, 2007, p. 789). Among these technologies are specific Internet applications such as electronic mail, bulletin boards, social networking sites and web sites (Table 1). Today it is possible to do meter-by-meter reconnaissance of targets via virtual tours freely available for viewing on the Internet. It is these types of technologies which are also said to be responsible for the rise of “home grown terrorism”. Pre-paid mobile phones are even being used to detonate bombs remotely. Analogous to the network-centric nature of the Internet, the structures of terrorist organizations have also transformed into loose networks of cells which operate without any real central command.

Suicide Missions

The most popular and rapidly proliferating terrorist tactic today is the suicide mission (Dolnik & Fitzgerald, 2008, p. 14). A suicide mission can be defined as an act in which a bomber, with full premeditation, attaches explosives to their body or to a vehicle they drive, approaches a chosen target and detonates the bomb, killing themselves in order to cause maximum damage. Luca Ricolfi (2006, p. 103) categorizes suicide missions into three levels based on the death toll of the attack: high-efficiency attacks (e.g. al-Qaeda style), medium-efficiency attacks (e.g. Hamas style), and low-efficiency attacks (e.g. PKK style).

Suicide attacks have become the modern day symbol of terrorism. “More than any other form of terrorism these attacks demonstrate terrorists’ determination and devotion, to the extent of killing themselves for their cause” (Merari, 2005). Bruce Hoffman, citing a Rand Corporation report, also rightly points to the effectiveness of suicide attacks, noting that such missions on average kill four times as many people as other terrorist acts (Holmes, 2006, p. 158).

Suicidal terrorism also attracts a great deal of public attention, with the added value that so-called ‘martyrdom’ places on it further attracting new recruits into terrorist organizations. “Etymologically, a ‘martyr’ is a witness giving testimony before listeners on a jury or tribunal. Their desire to bear witness before a world audience seems to be an essential reason why 9/11’s planners decided to mount an exploit of such staggering magnitude. Referring to the attacks as ‘speeches’, bin Laden himself boasted that ‘The speeches are understood by both Arabs and non-Arabs – even the Chinese’” (Holmes, 2006, p. 159).

The organizers of 9/11 were strategic in choosing their targets, mounting a spectacular operation in one of America’s busiest cities that was to wreak as much havoc as possible and was destined to receive long-term coverage on global television and media. Dolnik and Fitzgerald (2008, p. 15) rightly point out that this type of coverage may even spur on popular inquiry by the international community into the motivations behind such attacks. These attacks also ensure that enemy despair is long-lasting.

Suicide attacks show that David can defeat Goliath (Rogers, 2007): that the ‘system’ at its very core is still weak, and that by throwing a small stone an avalanche can be let loose to crush the giant superpower, delivering a “knockout blow” that has the ability to excite the desire of the warriors (Holmes, 2006, p. 161). In this way, suicide attacks deliver a psychological victory, because the terrorist is seen as brave enough to sacrifice their life for the cause, while the enemy is seen scurrying in fear from death and destruction.

INTELLIGENCE FAILURE

According to Hannah, O’Brien and Rathmell (2005, p. iii) of the Rand Corporation, “[i]ntelligence is a special kind of knowledge, a specialized subset of information that has been put through a systematic analytical process in order to support a state’s decision and policy makers. It exists because some states or actors seek to hide information from other states or actors, who in turn seek to discover hidden information by secret or covert means.” Intelligence failure can be defined within organizational, political and even psychological parameters. Copeland (2007, pp. 4-8) provides numerous definitions of “intelligence failure” from a variety of perspectives, pointing to well-known events in American history such as Pearl Harbor, the placement of Soviet missiles in Cuba, and the Iraqi invasion of Kuwait (Higgins, 1987; Matthias, 2001; Griffin, 2004). While it is difficult to settle on a single definition, the following is offered as a synthesis of the literature: intelligence failures occur when a policy maker or analyst knows, or should have known, enough, given the information available to them, to accurately assess the probability and consequences of an event taking place, and fails to act on that knowledge (Copeland, 2007, p. 6).

Intelligence fails because of failures of “communication, of bureaucratic structure and behavior, of estimation and analysis, of warning, of policy, or of judgment… [it incorporates] leadership failures, organizational obstacles, problems of warning information, and analytical challenges” (Copeland, 2007, pp. 19-20). Many still believe that the September 11, 2001 terrorist attacks could have been prevented if appropriate measures had been taken in response to intelligence information available prior to the attacks (Wilkie, 2004; Hersh, 2005; Neumann & Smith, 2005). Scholars often point to policy failures as the main cause of intelligence failures. How often have intelligence analysts assessed the likelihood of a terrorist event only for their warnings to go unheeded? As Matthias (2001, p. 12) highlights, “[t]he intelligence “failure,” if there was such, lay in the question of warning: how soon was it given, to whom, with what degree of alarm, and from what level of command.”

Despite the hundreds of millions of dollars being invested in the development and implementation of intelligence collection systems (for the analysis of phone calls, e-mails and financial transactions), terrorists have, for the greater part, been able to bypass these measures. One has to ask whether the information revolution has led to information overload: too much information and too little knowledge. That is, we can collect the data we need and pull all the facts together, but we cannot make sense of them, extract what is useful, and take appropriate action. It is what Roberta Wohlstetter in the 1960s called “the problem of signals vs. noise ratio” (Copeland, 2007, p. 13).

In the context of terrorism and today’s climate of asymmetric warfare, there is nothing to stop a suicide bomber “working alone” from detonating a device in a busy street (Posner, 2005). It would take a very lucky analyst to pinpoint this kind of incident; this element of surprise is beyond the capability of any intelligence organization, since we cannot yet enter people’s private thoughts. Larger terrorist plots like the Mumbai attacks (26-29 November 2008), however, can be foiled by law enforcement agencies if the right sources of intelligence are made available to relevant authorities in time. But it must be underscored that adding layer upon layer of digital touch-points on the humble citizen is not the way to foil potential attacks; it is not a solution to the problem.

INTELLIGENCE REFORMS

A number of improvements were made to the way intelligence was conducted after the 9/11 terrorist attacks, which may be collectively regarded as intelligence reforms. Intelligence reform is not a new concept; the U.S. has been practicing ‘reform’ since intelligence legislation was first instituted in World War II, and it is a continuous process (Taylor & Goldman, 2004). The Reagan and Clinton Administrations, for instance, were well known for their efforts to reform intelligence. According to Berkowitz (1996, p. 40), “[t]he Clinton administration concentrated on two areas of intelligence reform: making the intelligence organization more efficient and responsive and defining roles, missions and priorities within the intelligence community.” President Clinton instituted efficiency studies and called for better streamlining between agencies and better prioritization and planning during his time in office. Where he fell short was in assuming that nothing was wrong with the administration of intelligence itself, and that better management would fix any shortcoming of the sprawling bureaucracy that had amassed during the Reagan era. The 9/11 attacks, however, triggered new questions about intelligence agencies in the U.S., and a variety of studies proposed new types of reform: structural (reorganization) and process-oriented. Different studies, it should be noted, pointed to different recommendations. The one thing the studies all agreed on was that the mission of the intelligence community remains the same, but that the intelligence “agenda and priorities” changed after 9/11 (Lowenthal, 2006, pp. 232-3).

Given the events of 9/11 and Iraq, the intelligence cycle is moving away from a “linear and single tracked” model, one about defining requirements, assigning collection responsibilities, having technicians process the collected data, and then using analysts to produce products for later dissemination to consumers. The new model, already used by the U.S. military, is built around dispersed intelligence networks in which intelligence consumers are directly linked to intelligence collectors. It is less centralized and more flexible, and it allows every intelligence consumer to speak directly to an intelligence collector, increasing communication and coordination. The information and communication technologies exist to facilitate this kind of ‘real-time’ collaborative exchange. According to Goodman (2003, p. 60), “[w]e now know from the preliminary report that the timely use and distribution of intelligence data could have prevented the terrible acts of terrorism in 2001. And the refusal of the White House and the CIA to declassify the information…” In fact, one major criticism of intelligence agencies is that they have locked themselves into particular technologies and cannot keep pace with changes in information and communication technology (ICT). Some American collection systems, for instance, date back to the 1970s, and there is resistance to bringing new technology on board. Even so, technological change is much easier to effect than process change, since process tends to be deeply rooted in the intelligence culture.

Inextricably linked to the alternative intelligence cycle, which seeks to be effective in real time, are structural changes to intelligence agencies, which have traditionally not moved fast enough. The organizational approach must “readily adapt as requirements for information change and as the ability of the outside world to meet these requirements improve” (Berkowitz, 1996, p. 42). After 9/11, the U.S. attempted to circumvent the siloed-intelligence problem by creating the Department of Homeland Security (DHS), which included an Office of Intelligence and Analysis. According to Chalk and Rosenau (2004, p. xi), “[p]roponents argue that establishing an agency that is solely concerned with information gathering, analysis, assessment, and dissemination would decisively ameliorate the type of hybrid reactive-proactive mission that so often confounds police-based intelligence units. Opponents counter that an institution of this sort would merely undermine civil liberties, unduly hinder interagency communication and coordination, and create additional barriers between intelligence and law enforcement.” The office is now responsible for funneling intelligence from the CIA, FBI and other agencies, for data analysis, and for ensuring that important knowledge does not slip between the “foreign intelligence-domestic intelligence divide” (Lowenthal, 2006, p. 235). Goodman (2003, p. 67) calls this “demilitarizing the intelligence community” and calls for a resolution of key turf issues between agencies. But he believes that even if the thirteen agencies and departments were willing to share information under the same umbrella agency, their “anachronistic computer systems would not allow it.”

While it was recognized that technology plays a crucial role in conducting intelligence, a debate over the balance between human intelligence (HUMINT) and technical intelligence (TECHINT) resurfaced after 9/11. Claims were made that the U.S. had become too reliant on TECHINT, needed more HUMINT, and should reexamine how best to combat terrorism. This in itself did not stop the U.S. from continuing to invest in operational-level technology for the unique identification of every citizen, beyond the often unreliable Social Security Number (SSN). Chambliss (2005, p. 5) agreed that HUMINT needed the right emphasis in the intelligence reform debate. The increased role of open source intelligence (OSINT) was also widely reported, with some questioning the role of intelligence analysts altogether. In summary, proposals to reform intelligence included: sweeping administrative changes; better correlation between strategy and intelligence; removal of analytical redundancy between agencies; better-quality analysis by expert intelligence analysts; and an improvement in the intelligence collection process itself. Reforms that were not considered appealing included boosting the number of intelligence agencies and increasing funding for the U.S. intelligence community.

THE NATIONAL SECURITY AGENDA

Towards the Securitization of “All” Things

Securitization means taking a broader view of security, beyond military force and war, to include issues such as transnational crime. Securitizing transnational crime means trying to understand why it happens and how best to combat it within the context of a state, a region, and the globe. One of the major talking points, as identified, is the purported link between transnational crime and terrorism. On preventing terrorism, Schmid (2005, p. 223) writes: “…there is really no way that one can disregard the conditions that enable terrorism, whether these are called breeding grounds of terrorism or root causes… The root causes of terrorism are a subject that offers some intellectual challenges. When the United Nations first took up the issue of terrorism in 1972, there were two schools of thought. On the one hand there were those who were primarily interested in addressing the causes of terrorism. On the other hand, there were those who were more concerned with fighting the manifestations of terrorism itself. The second school of thought has become more prominent over the last three decades.”

After the Cold War ended, many scholars argued for expanding the notion of “security” to include transnational crime, among numerous other matters (Gromes & Bonacker, 2007, p. 2). Buzan, Wæver and de Wilde (1998), initiators of the securitization concept, identified five distinguishing sectors: the military, political, societal, economic and environmental sectors. Thus we can now speak of military security, political security, societal security, economic security and environmental security. Plainly, securitization can be considered an “all-hazards” approach to security. Allan Castle (1997, p. 4) regards securitization as “survival across a number of dimensions.”

To some in the traditionalist field of security studies this was considered a backward step: a watering down of the discipline to the point of rendering “security” meaningless. Ralph Emmers (2002, p. 6) has written on this diverging point: “The question which arises is why we should bother cataloguing a whole series of new concerns, to be christened “security issues,” when such a practice may render our use of the term so loose as to make it meaningless. Why not simply state that security issues promise to be increasingly minimized amongst the core states? For the traditionalist, if one adds the contribution to this debate of Ole Waever, for whom the securitization of non-military issues seems closer to a subjective manipulation of language rather than the objective emergence of new threats to core values comparable to previous military threats, the picture becomes even muddier, and may raise the suspicion that security is now what one makes of it.” [underline ours]

For Bruck (2004, p. 103), the security economy has to do with “activities preventing, dealing with and mitigating insecurity in the economy. That broad definition would include private and public activities in both legal and illegal areas of the economy.” For Stevens (2004, p. 8) the idea of a security economy “attempts to describe a kaleidoscope cluster of activities concerned with preventing or reducing the risk of deliberate harm to life and property.” In the same light then, we can speak of military security “relationships of force; the reference object usually is the state and the survival of the armed forces”; political security which has to do with “relations of authority, governance, and recognition, where an existential threat concerns a state’s sovereignty and ideology” and societal security which has to do with “collective identities” (Buzan, 1998, pp. 5-8, 21-3). In this study it is the latter notion of security which is most relevant.

Wensink (2009) from Brandeis University believes that collective identity has been a major catalyst for change throughout history. The concept more explicitly refers to “the component of one's identity held in common with a larger group. It manifests as a shared feeling of "we" or "groupness," and often coalesces around common social or political objectives. These common goals derive in part from the group's shared sense of identity, and also contribute, in circular fashion, to binding and reinforcing the group's sense of solidarity and collective identity. This mutual reinforcement between identity and goals may help explain the tremendous transformative power collective identity has historically produced.” Among the social and political movements of the 20th century, Wensink (2009) cites: the Bolshevik revolution, the Nazis, movements for colonial independence, civil rights, feminist and gay pride, and various movements involved in conflict in the Middle East. Collective identity is different from individual identity (i.e. comprised of personal traits unique to the individual, such as one’s physical characteristics) and from social identity (i.e. the interactions of individuals in society, such as the class of a family or employment in a given profession).

Questioning the Role of Auto-ID and LBS Technologies in National Security

Automatic identification technologies can be, and have been, instituted by governments to either include or exclude someone from a group (e.g. in relation to citizenship, permanent residency, refugee status or alien status). In the United States, after the terrorist attacks of September 11, 2001, several bills were passed in Congress leading to three new Acts related to the biometric identification of citizens and aliens: the Patriot Act, the Aviation and Transportation Security Act, and the Enhanced Border Security and Visa Entry Reform Act. Many civil libertarians were astounded at the pace at which these bills were passed and related legislation was created. The U.S. has even placed pressure on international travelers and their respective countries to comply with biometric passports or forgo visiting altogether. To some degree, national security measures are moving from a predominantly “internalized” perspective to a transnational one. With this change has come a re-shaping of nation-specific requirements for citizens, both in-country and outside a nation's borders, to comply with obligatory conditions.

Heightened national security sensitivities have meant a reorganization of our priorities and values, especially when it comes to identification. It seems we have become obsessed with identification as a means of providing additional security, as if this were the answer to national security. This is not to say that clear advantages do not exist in the use of automated systems. For example, in 2004, unidentified tsunami victims who lost their lives in Thailand were fitted with RFID chips so that their loved ones might later identify them (Smith, 2005). But by and large, governments are now introducing sweeping changes to citizen ID systems without considering the probable repercussions into the future, and doing so under the guise of a national security agenda. The rhetoric is the same in all instances: we need to do ‘x’ to ensure effect ‘y’ is achieved. The problem with this line of thinking is that the evidence for such claims is almost non-existent; new technologies are introduced with little proof of their success in combating a given national problem (Michael & Michael, 2008).

What started out as a need to identify individuals within one’s borders (i.e., personal identification for the self) has now evolved into a nation-wide scheme and is poised to make a debut as an international solution (personal identification for the ‘collective’). It is difficult not to compare these shifts in government policy with the political movements of the twentieth century. The collective (i.e., cooperation between individuals) was considered by Karl Marx to hold civil society together (Humphrey, 1983). More explicitly, for Marx, “only in the community is personal freedom possible” (Rick, 2003).

Blocs like the European Union, with its single currency, are potentially the first test-beds for larger-scale ID and location-based schemes (Michael & Michael, 2009). The trend began in the early 1990s with livestock, and human-centric schemes followed a decade later. EU legislative directives were clear in their requirements for livestock to be identified uniquely based on a common standardized approach. Today, it is people, especially persons suspected of crimes, who are being tracked (Michael & Michael, 2009) via a host of surveillance techniques (Laidler, 2008; Lyon, 2002; Garfinkel, 2001; Norris & Armstrong, 1999; Whitaker, 1999; Brin, 1998). The future of homeland security is draped in an even more invasive technology: nanotechnology (Ratner & Ratner, 2004). The question to ask, however, is who can ensure that current and future schemes are not misused by a ruling individual or power base? And more importantly, do these schemes really work? Have technologies like ePassports really kept the terrorists and criminals out (Hunt, Puglia & Puglia, 2007)? And what of homegrown terrorists (figure 4)? How does a national identity card prevent a legitimate citizen from causing harm to others?

Bruce Schneier (2009) writes of the current dilemmas facing society with regard to open-access sources of intelligence. It is worth quoting him at length: “[i]t regularly comes as a surprise to people that our own infrastructure can be used against us… According to officials investigating the Mumbai attacks, the terrorists used Google Earth to help find their way around… Such incidents have led many governments to demand that Google removes or blurs images of sensitive locations: military bases, nuclear reactors, government buildings, and so on.” Schneier (2003) is correct in arguing that the good uses of infrastructure far outweigh the bad, and that by threatening to dismantle systems like Google Earth we are only harming ourselves. He is quite correct in his assessment, for the main reason that once a capability diffuses, it is almost impossible to go back to the way things were before. Law enforcement personnel, for instance, rely on location-based services to help them conduct covert policing (Harfield & Harfield, 2008). We cannot switch off the mobile phone network because of the possibility that terrorists may use it to detonate a bomb remotely (Michael & Masters, 2006). However, this can never mean that we abandon ethical debate on the consequences of innovation or stop short of arguing for the introduction of technological safeguards; or as Sara Baase (2008, p. 479) underlines, “we must always be alert to potential risks.” We are indeed living in an electric universe (Bodanis, 2005, pp. 1-3).

The Privacy Risk vs the Security Risk

While automatic identification schemes purportedly offer convenience, speed, higher productivity, better accuracy and efficiency, they are by their very nature “controlling” techniques: they either grant access or deny it (figures 5 and 6). They inevitably suffer from problems related to function creep (Hayes, 2004). History has also shown what was possible with largely manual techniques during WWII; auto-ID techniques at the disposal of a similar head of state could be manifold more intrusive. One need ask now what safeguards have been put in place to prevent the misuse or abuse of one’s personal ID (Tootell, 2007). Some auto-ID technologies even pose legal dilemmas. One could claim that biometric techniques, for instance, and beneath-the-skin RFID transponders encroach on an individual’s privacy when used for ID. Biometrics like fingerprints or DNA are wholly owned by the individual, yet requested and stored by the state in large citizen databases.

While in today’s society the need for ID is unquestionable, we need to ensure we do not enforce changes that are irreversible and perhaps even uncontrollable. While national ID schemes were introduced by a number of countries after the Great Depression of the 1930s, what has changed since their inception are the technological capabilities that we have (often quite literally) at our fingertips. These auto-ID technologies are manifold more powerful and, when joined to other automated processes, an order of magnitude more invasive. The periodic census is a fine example of something that was introduced by church and state to collect data in order to help provision services for citizens. Today, however, aggregated census data is being sold as a commodity to help private organizations perform more precise “target marketing”. Perhaps it will not be long before our “private” IDs undergo a similar transformation: “DNA for sale, anyone?” But what of the rhetoric that in order to enhance our personal security we must give up certain privacy rights to the collective, for the common good, so to speak (Perusco, Michael & Michael, 2006)? Certainly, ID systems and location systems are useful in emergency management situations such as that played out during Hurricane Katrina, which struck New Orleans in 2005 and affected millions of lives (Tootell, 2008; Aloudat, Michael & Yan, 2007). The case for all-pervasive systems, however, which require blanket citizen coverage for the sake of potentially apprehending only a few suspects, is less appealing. This is the very topic of an excellent dissertation completed by Holly Irene Tootell at the University of Wollongong in 2007, titled The Social Impact of Auto-ID and Location Based Services in National Security.

CONCLUSION

The growing interconnectedness of systems means that any ICT solution proposed by powerful nation states will be rapidly adopted by other nations. Truly global solutions, while seemingly convenient on the surface, lend themselves to wide-ranging dangers. Certainly, policies and procedures are important, as are laws, regulations, standards and guidelines, but these all seem, more exactly, ‘reactionary’ to the status quo. Studies have recently shown that at the height of terrorist events or other national security crises, public sentiment is swayed by media coverage, public perception itself, and government statements. As a result, sweeping changes are introduced in a short period of time, particularly ‘changes’ carried in large pieces of legislation. There never seems to be enough time for additional public consultation, for broad debate and discussion; time to consider the consequences of implementing these far-reaching decisions and to scrutinize their overall effect on the community in the long term.

We seem to have become captive to a whirlpool cycle of surplus change, a capital accumulation of powerhouse capabilities without the follow-on forethought. New government and business challenges are created as emerging technologies are prematurely released to the market; still newer technologies are invented to overcome the challenges; laws are instituted to set the bounds of how technology should and should not be used; and people are ultimately expected to learn to live with the implications and complications. Information and communication security measures adopted in haste in response to terrorism and other national security breaches have only acted to accelerate this cycle of change. There is also an underlying paradox in all of this, which political skeptics will have already noted: though in recent years governments have been ostensibly committed to reducing state power, they have in reality increased it massively.

REFERENCES

Tootell, H. (2008). The Social Impact of National Security Technologies: ePassports, E911 and mobile alerts. In K. Michael & M. G. Michael (Eds.), Australia and the New Technologies: Evidence Based Policy in Public Administration (pp. 87-99). Wollongong: University of Wollongong.

Tootell, H. I. (2007). The Social Impact of Using Automatic Identification Technologies and Location-Based Services in National Security (Doctoral dissertation). University of Wollongong, Wollongong.