Highlights

• The ‘exposure’ of a technological system can be derived from its configuration.

• Analysis of system ‘exposure’ allows valuable insights into vulnerability and its reduction.

Abstract

Urban dwellers are increasingly vulnerable to failures of technological systems that supply them with goods and services. Extant techniques for the analysis of those technological systems, although valuable, do not adequately quantify particular vulnerabilities. This study explores the significance of weaknesses within technological systems and proposes a metric of “exposure”, which is shown to represent the vulnerability contributed by the technological system to the end-user. The measure thus contributes to the theory and practice of vulnerability reduction. The results support conclusions both specific to the systems analyzed and general to vulnerability reduction.

Keywords

Technological vulnerability; Exposure; Urban individual; Risk

Biographies

L Robertson is a professional engineer with a range of interests, including researching the level and causes of vulnerability that common technologies incur for individual end-users.

Dr Katina Michael, SMIEEE, is a professor in the School of Computing and Information Technology at the University of Wollongong. She has a BIT (UTS), MTransCrimPrev (UOW), and a PhD (UOW). She previously worked for Nortel Networks as a senior network and business planner until December 2002. Katina is a senior member of the IEEE Society on the Social Implications of Technology where she has edited IEEE Technology and Society Magazine for the last 5+ years.

Albert Munoz is a Lecturer in the School of Management, Operations and Marketing in the Faculty of Business at the University of Wollongong. Albert holds a PhD in Supply Chain Management from the University of Wollongong. His research interests centre on experimentation with systems under uncertain conditions, typically using discrete event and system dynamics simulations of manufacturing systems and supply chains.

Introduction

In November 2015, as editors, we started sorting through submissions for a special collection of articles examining the paradox of technological potential. Separately, we published a call to the public at large, asking “What comes to mind, when you think about technology, and unintended consequences?”

Then using a smart phone, we documented handwritten responses from people whose paths we crossed, collecting submissions from individuals in both hemispheres. While geographically the contributors could not have been further apart, psychologically, their responses echoed one another in a resonant, prescient way. In this way we were able to physically capture the thoughts of a variety of participants, and share them with our co-editors across the globe in near real time thanks to the marvels of technology.

At the same time, the responses we gathered raised some recurring and challenging concerns about time, presence, and disconnection. Many of these responses have been archived in a photographic collection [1].

Unanticipated consequences are a sociological concept emphasized by Robert K. Merton [2]. Although later writers have used the notions of unintended and unanticipated consequences interchangeably, the two phrases are subtly but significantly different, despite being deeply connected [3]. In general, unintended consequences are those not intended by a purposeful action. Unanticipated consequences are those whose outcomes were not foreseen. It follows that an unintended consequence might or might not have been anticipated (Table 1). It is also important to state that unintended consequences can have positive, negative, or even perverse [4] impacts on individuals, groups of people, or society at large (Figure 1).

“Tons of information but not enough understanding”

“Words cannot be swallowed once spoken.”

“Too many distractions.”

“…That unfortunate moment when you hit reply all.”

Figure 1. The social implications of unintended consequences.

Some sociology of science scholars refer to this phenomenon more generally as the law of unintended consequences: actions of people, groups, organizations, or governments produce effects, anticipated or unanticipated, that were not intended as outcomes but happened regardless. The vast majority of people consider unintended consequences to be disadvantageous, counterproductive, fraudulent, or at times detrimental and even dangerous.

Table 1. (Un)Anticipated (Un)Intended consequences.

It is a paradox that today we think of disruption as a deliberate action intended to trigger market forces and spur adoption of new technologies. By contrast, unintended consequences have no purposeful intentionality about them. Quite often, the creator of an innovation does not attempt to steer the adoption and use of their product in the direction that eventually results. This leaves us only to ponder “after the fact” that the consequences were unintended by the creator.

Yet there is nothing stopping the rest of society from speculating as to what some of these unintended consequences might be prior to commercialization.

The former United States Secretary of Defense, Donald Rumsfeld, brought the idea of “known knowns” to prominence when he answered a question at a U.S. Department of Defense (DoD) news briefing on February 12, 2002, linking the distribution of weapons of mass destruction (WMD) with the Iraqi government [5]. He constructed three contexts, saying:

“Reports that say that something hasn't happened are always interesting to me, because as we know:

There are known knowns, there are things we know we know.

We also know there are known unknowns; that is to say we know there are some things we do not know.

But there are also unknown unknowns - the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones” [6] (numbering added).

The idea of unknown unknowns originated with Joseph Luft and Harrington Ingham's development of the Johari window in 1955 [7], which centered on understanding the self and our relations with others. The concept was widely used at NASA with respect to decision-making and risk.

While Rumsfeld referred to three categories, some philosophers such as Slavoj Žižek have proposed a fourth category of the “known unknowns.” Žižek writes that these are: “the disavowed beliefs, suppositions and obscene practices we pretend not to know about, even though they form the background of our public values” [8]. German sociologists Daase and Kessler [9] point out that while Rumsfeld emphasized the cognitive frame for political practice in what we know, what we do not know, and what we cannot know, he failed to address what we do not like to know.

If we take these underlying concepts and integrate them into the technological realm as in Table 1, we get:

The known knowns are those things that are anticipated consequences that history has taught us might happen in a given context. For example, some members of society may not be able to afford new technologies, and so a digital divide between the haves and have-nots sets in with the increasing number of consumer technologies introduced into the market (e.g., the differences among mobile phones, smart phones, smart watches, and microchip implants) [10].

The known unknowns are again things that can be predicted, but often by applying good judgement and common-sense principles about the possibilities. For example, consumer products like Hello Barbie, Alexa, DropCam, and Nest devices have been deployed with significant social, privacy, and security problems; they will inevitably create some benefit, but even greater drawbacks through unknown effects. Some would go so far as to say that these types of consumer products may have perverse impacts on particular groups like children or the mentally ill.

The unknown unknowns are completely unexpected, and are for the most part unpredictable because no evidence exists to identify particular risk factors or even attributes linking them to a given phenomenon. As technologies increase in complexity, we should expect a greater number of unknown unknowns, with a greater severity of consequence. So-called “humane robots” that are used in assisted living contexts to aid the elderly (e.g., to get dressed, remember to take their tablets, provide fall-down alert support and more) presently have a lot of unknown unknowns attached to them [11], though equally they may have benefits that cannot be disputed.

The fourth and final category is the unknown knowns. This category is somewhat of an oxymoron. It cannot be and yet it is. While creators of new technologies are likely in many ways to be in the best position to critique their own creations, most innovators openly say that they have no idea how their creations will be used by society, and that we cannot prejudge ethics. We think of technologies such as gaming apps for the smartphone that are built to encourage return visits with stickiness features. Yet some software developers will never admit to triggering addictions in members of the populace prone to addictive behaviors [12].

As in the fourth category above, some technologists in the areas of artificial intelligence and robotics tend to shrug off plausible anticipatory outcomes, knowing full well what the outcomes could mean for society at large in the contexts of social and behavioral problems, or even privacy and security. Propelled by a yearning to create, develop, and deploy, and to be the first movers, they may a) play down their creations and the impact they will have; b) refuse to acknowledge that anything might go wrong; or c) when they do admit a potential risk, still ask the rest of us to go down that bleak road with them. At the point of development and potential launch of a new product, the momentum is likely to be such that we consumers do follow after them. We follow by purchasing the new systems or products, or by asking few questions. We might fail then to ask questions, say about how a new technology might further distance us from reality, or about how it might distance us further from our human relationships [13].

At the moment of writing this editorial (October 21, 2016), nearly half of the Internet in the United States is down. A massive distributed denial of service (DDoS) attack targeted a company that functions essentially as a switchboard for the U.S. Internet, translating human-facing web addresses to the numerical mode of communication used by computers. This attack is different from DDoS attacks we have seen in the past. It is larger and more powerful. By targeting a company that manages the infrastructure of the Internet the attack has impacted several mainstream websites, as opposed to a single corporation or organization.
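The translation role that the attacked company performed can be illustrated with Python's standard library. This is a sketch of the general DNS lookup mechanism only, not the provider's actual machinery; "localhost" is used simply because it resolves locally, without a network connection.

```python
import socket

def resolve(hostname):
    """Translate a human-facing name into the numerical (IP) address computers use.

    This is the lookup role a DNS provider performs. When that provider is
    overwhelmed by a DDoS attack, lookups for the domains it serves fail,
    so sites appear "down" even though their own servers are still running.
    """
    return socket.gethostbyname(hostname)

# "localhost" resolves locally; typically to 127.0.0.1.
print(resolve("localhost"))
```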

These types of attacks are actually known knowns. That is, we can have a constant expectation of such attacks on the Internet. We can predict that these types of attacks will only increase in number exponentially, and that they will have ever greater global economic impact.

But then there are also unknown knowns related to this latest attack. Unknown knowns in this case would be vulnerabilities embedded into a world built on the Internet of Things (IoT). This October 21 attack relied on infected smart devices - networked items such as security cameras, home routers, and baby monitors - to direct traffic towards the target in an effort to overwhelm and compromise its servers. These distributed devices form a botnet, a network of Internet-connected private computers, infected with malicious software and controlled as a group. In this case, these IoT smart devices would have been in people's homes and businesses, and infected without their owners' knowledge.

As the blog KrebsonSecurity notes, these “inexpensive, mass-produced IoT devices are essentially unfixable, and will remain a danger to others unless and until they are completely unplugged from the Internet.” Of course, unplugging these devices is an unlikely scenario, given that the owners of these devices presumably do not even know that their cameras or baby monitors are infected [14]. The phenomenon of smart devices, which are actually extremely dumb when it comes to security, is a timely example of unknown knowns. As developers race to get these impressive tools to market, they overlook key security and privacy considerations, and ask that the public turn a blind eye as well.

While we consider that which is known and that which is unknown, it is important to define technological potential within a context of sustainability. Competence, skills, and effectiveness in R&D activities, as well as scientific-industrial relations within the economy, must be considered [15]. We want to glean the future and consider what might aid users in the longer term [16]. We should seek new ways to meet challenges in terms of materials, processing capabilities, product functionality, and use-value. In this manner the idea of “potential” is steeped in technological forecasting. But realizing technological potential cannot be accomplished in a vacuum. The forecaster must weigh things like public opinion and pricing pressures, “the possibility of changes in institutional resistances, and the probable future marginal preferences of the society” [17]. Understanding target markets with respect to new innovations, and designing technologies with those markets and their likely uses of a future product in mind, is critical. Entrepreneurs who embrace technological potential at the core of their efforts are usually more optimistic than pessimistic about the social impact of their investments.

Advancement of new technologies will be essential to solving many of the most complex challenges of the 21st century—from diseases to climate change. We need only look at research in the field of alternative renewable energy sources, e.g., in various types of fuel cells, to be inspired and to consider the potential possibilities.

But what happens when a given technology is adopted for perverse ends? Or when technology is knowingly deployed to propel unhealthy practices beyond theoretical limits of its use? What if the consequences of a given technology are contrarily subsuming healthy human emotions in adults and children alike? Who then is responsible for that innovation and for the asymmetric impact it will inevitably have on many people's lives?

These issues are front and center, or should be, in the development of artificially intelligent agents, as we are essentially creating a new technological “species,” an undertaking ripe with controversy and complexity.

Our original Call for Papers for this topic, for IEEE Potentials magazine, examining the Paradox of Technological Potential (PTP), was answered with so many strong articles on the multifaceted relationship between human and machine, and the complexity of living with AI, that we have additionally chosen to also devote this special section of IEEE Technology and Society Magazine to this important subject.

Assisted living machines, for example, that were built to aid humans, could be rendered killing machines if misused or misapplied against state enemies. What was once the province of science fiction visions is now within the realm of reality. The paradox is in the contradiction of the potential, which can be used for both good and bad, but not for both at the very same time in a given implementation [18].

Part I of this project was published in IEEE Potentials in September/October 2016 as a Special Issue. Much of that special issue had to do with the relationship between the veillances (sur-, sous-, über-), evidence-based/intelligence-led policing and counter-strategies to mass surveillance, predictive profiling, and finally privacy and security by design.

In this special section of IEEE Technology and Society Magazine [20], our focus is more on a futurist vision of the technological potential of AI, with a check on unintended consequences, and a specific focus on the complexities of life during the coming of age of artificial intelligences. Our question is, in the development of artificial intelligence, what is driving us? Are visions of science fiction propelling us to a Silicon Valley dreamworld filled with technologies that we well know will be dystopic?

Edward Tenner, who has deeply studied the cultural aspects of technological change, has noted that although our capabilities and technology have been expanding geometrically, our “ability to model their long-term behavior” has not kept pace with the change [21]. Tenner believes that one of the problems of our time is how we will close the gap between capabilities and foresight. In closing his March 2011 TEDtalk, he emphasized that we are living in a time of unexpected possibilities and that the secret to our future may well be to take a “really positive view” of unintended consequences in going forward.

We as guest editors of this Technology and Society Magazine special section prefer to take the approach of “cautious optimism,” whereby we can steer technologies toward sustainable causes, and then expect at least some positive return. Unknown unknowns may not be the biggest problem we face; the deliberate covering up of the so-named unknown knowns may well be.

We would like to thank the authors and reviewers for their contributions. In this section we have incorporated perspectives that explore methodologies that could be used to build a better future, and that investigate the potential of new technologies like wearables. We consider the impacts of artificial intelligence through science fiction, looking at the way that the future field of robotics might apply to everyday people, including the implications for everyday citizenry. We've also included a fiction piece on the possibilities of crossing the human evolutionary gap, and even an original interview with a humanoid robot, a sure sign of the times we live in. Augmenting this material, the editor has included relevant articles on the future relations between humans and artificial intelligence, on help for those subject to Internet externalities like cybersex addiction and Internet addiction, on how to reclaim conversation, and more.

Acknowledgements

This T&S Magazine special section on “Unintended Consequences of Living with AI, The Paradox of Technological Potential - Part II,” supplements a Special Issue of IEEE Potentials published Sept./Oct. 2016, “The Paradox of Technological Potential - Part I.”

Introduction

We are rapidly entering the uncharted and precarious terrain of an interconnected world of pervasive technologies. There will be amalgamations of networks. Machines, connected to networks of other machines, will act more autonomously and make decisions for humans. Devices will become far more intelligent and ubiquitous, thereby thinking and acting for us unobtrusively behind the lines of visibility. Therefore, we considered pervasive technology as a risk category to examine.

Using aspects of the International Organization for Standardization's framework for risk assessment (ISO 31000:2009), we sought to mine out risk sources, as well as risk events in pervasive environments. This article is written to invite you—the students, young professionals, and future leaders—to contemplate the consequences and consider appropriate risk treatments.

Risk Source: The Converging Veillances

In the context of these emerging pervasive environments, we considered a veritable source of risk: the converging veillances (Fig. 1). Such environments as the Internet of Things are creating systems in which the reach and impact of the veillances may become critically extensive. Veillance, which is watching or being watched, could now extend from the sky (surveillance) to the street (dataveillance) to the person around you (sousveillance) to within you (überveillance) and then ripple out and back to the sky. These veillances, as represented in Fig. 1, are as follows.

Fig 1. The veillances.

Surveillance (e.g., Satellite View)

The term “surveillance” entered English in the early 19th century, from the French “sur,” meaning “over,” and “veiller,” meaning “to watch.” This is the veillance of authority, the powerful monitoring the less powerful. Examples include satellites, municipal cameras in streetlights or on/within buildings, and the interception of data for intelligence gathering by a government.

Dataveillance (e.g., Street View)

Dataveillance is the methodical and organized collection or use of digital personal data in the investigation or monitoring of one or more persons. This veillance extends from an authority watching to nonauthorities also watching us. Examples include systematic digital monitoring of people as they use the Internet or commercial data mining practices by a company with advanced capabilities in analytics to understand consumer behavior.

Sousveillance (e.g., Person View)

Sousveillance is the capturing of activities from the perspective of one participant in a shared activity with other participants. This is a veillance happening from the person's view to other people in the vicinity. Examples include a lifelogger capturing images of others attending an event or peer-to-peer social media in which your posts are viewed.

Überveillance (e.g., Sensor View)

Überveillance is electronic surveillance within the human body. Some contend it is analogous to Big Brother on the inside looking out. This veillance deals with the watching of the fundamental who (identification), where (location), and when (time) of a human being. There is the potential for deriving the why (motivation), the what (result), and the how (methods/thoughts). Examples include medical and nonmedical implants (e.g., contact lens “glass” with Internet access or iPlants within the human body) or wearables collecting health and sleep data (e.g., heart rate, perspiration, pulse, activity, and temperature).

The Convergence Intensifies

With pervasive technologies, the veillances are rapidly converging. Society is encountering shifting paradigms relative to human-machine interactions. The circles in Fig. 1, shown with faintly dotted lines, represent increasingly more permeable boundaries between the four veillances, as networks are networked to other networks. With pervasive technologies, we have more interoperable veillance networks that connect buildings to vehicles to other vehicles to wearables to spatio-temporal tracking bearables, to biosensor data from inside us and back out to be analyzed through advanced algorithms. Information exchanges can now move seamlessly and automatically in and through the human and out across multiple platforms in each of the veillances. Pervasive technologies fuel the intensification of convergence.

Überveillance is centrally positioned, because it can uniquely bring together all forms of watching from above, below, beside, and within by involuntarily or voluntarily using obtrusive or unobtrusive devices. As pervasive environments develop, internal data gleaned from the human can be combined and synthesized with additional data from across the spectrum of veillances. The consequence is rich, broad, deep, sensitive, and highly private personal data mining. The data can be analyzed relative to the current physiological and/or psychological state, and predictive analytics can help forecast the future state of the human.

Six Risk Events to Consider

Within the context of pervasive technology, which we defined as the risk category, and the converging veillances, which we defined as a source of risk, we mined out six risk events that are likely to influence the sociocultural realm, and they are as follows.

Insightfulness

With context-awareness and context-adaption, networks of ubiquitous devices will be continuously “on” and autonomously learning behaviors. With data gleaned across all veillances, devices will assess humans in multiple contexts, capacities, and over time. This is likely to lead to a capability for the system to have rich insightfulness or a precise and profound understanding of humans in the current, but also future, state. As the veillances converge, will this yield a world in which the watchers have a unique advantage with profound insight derived through an accurate, multilayered, intuitive understanding of the human?

Imperceptibility

As networks are operating behind the line of visibility, humans are not likely to comprehend the scope, reach, or even timing of data practices. The processes and procedures are likely to be imperceptible. Users could be blinded to what is collected, by whom, for how long, how it is synthesized with other data, and who owns the data now—or in the future. As the veillances converge, will this yield a world in which the human does not perceive the watching and, as a result, also not the consequences of being watched?

Incomprehensibility

Our current state of terms and conditions is often murky and/or mutable. Additionally, the average human is not likely to comprehend the wide-ranging system or the risks associated with multiple organizations sharing data. The system is likely to be incomprehensible for the consumer. Simpler technologies have already proven to be complex and convoluted to the average consumer. As the veillances converge, will this yield a world in which a human must opt-in to stipulations that are unrealistic to comprehend?

Indelibility

Data may become ineradicable. Our digital footprints are likely to leave an indelible history of analyzable behaviors, especially if we do not own our data or if data were shared and stored elsewhere in the veillances. As the veillances converge, will this yield a world in which the human's behaviors cannot be forgotten? Will humans comprehend the long-term effects of being watched?

Invasiveness

As we allow technology to listen inside of us and to our relationships, we are likely to create systems in which not only our behaviors are predicted but perhaps even our intent. As the veillances converge, will this yield a world in which such intrusion into the inner sanctum of a human could place dignity at risk—even if unintended?

Involuntariness

It is ever more compulsory for an individual to subscribe to cloud-based e-mail to be gainfully employed or to receive extensive services across disciplines (e.g., a hospital). More often, individuals are pressured to opt-in to belong and benefit socially or financially (e.g., discounts offered by an insurance company). As the veillances converge, will this yield a world in which a person must opt-in to technology to participate in society?

Conclusion

We now invite you to contemplate with us the consequences of not only the six individual risks but also the six risks collectively. If we are compelled to inattentively opt-in to a system within which we somewhat unknowingly rescind control over our data to participate in society, isn't an outcome decreased autonomy for the person? If we share personal data that can be analyzed and synthesized and reanalyzed across the veillances relative to our ongoing physiological state, while also naively rescinding our right to be forgotten, might human dignity be at stake either now or in the future?

We established the context of risk (environments of pervasive technologies), defined a substantial source (the convergence of the veillances), and identified six emerging risks. We then offered a few possible consequences to consider in the sociocultural realm. Now, in the spirit of robust risk assessment, we ask you young, brilliant students and professionals to consider the likelihood of the aforementioned risk events, so as to ensure appropriate controls are built into the design and operation of these shifting human-machine interactions.

Biometrics are the unique characteristics of an individual that differentiate him or her from any other person. Down and Sands [1] explained that physiological characteristics refer to the inherited traits that are shaped in the early embryonic stages of human development. Physical biometrics include, among other things, DNA, fingerprints, hand geometry, vein patterns, face structure, skin luminescence, palm prints, iris patterns, periocular features, retina patterns, ear shape, lip prints, heartbeats, tongue prints, and body odor/scent [2]–[8]. Behavioral characteristics are not inherited but acquired and learned throughout the life of the individual [1]. These include, but are not limited to, signature, handwriting, vocal prints, keystroke dynamics, and gait—body motion [3]. As a result, the biometrics of a person cannot be stolen, forgotten, or forged. It is what we are [2].

Biometric Systems Overview

Independent of which biometric identifiers are under consideration for a given application, they are all viewed as automated pattern recognition systems. Typically, a biometric system includes a biometric reader, feature extractor, and feature matcher. Biometric readers act as sensors, feature extractors take the input signals and compute those special attributes that are unique, and feature matchers compare biometric features, attempting to find a match. A biometric authentication system consists of an enrollment subsystem, an authentication subsystem, and a database.

Figure 1. An iris scanner. An LED light flashes on the scanner if the biometric is accepted or rejected. (The photo was taken at the U.S. National Cryptologic Museum. Courtesy of Mark Pellegrini, 2007.)

For a biometric system to work, an individual must be enrolled, at which point the person's basic measurements of one or more biometrics are taken by the feature extractor and stored in the database. Relevant associated details may be stored alongside the biometric, such as the enrollee's name and unique ID. If the method of authentication uses verification, then typically a type of card is also linked to a person's biometric feature. A subject provides an identifier, like a smart card, and places his or her biometric on a reader. The reader senses the biometric measurements, extracts the features, and compares the input features with what is stored in the database (Figure 1). The system either accepts or rejects the subject from the given application. In the case of straightforward identification during authentication, a biometric sample from the subject is taken and the entire database is searched for matches [9, p. 7]. In practice, two separate steps occur: First an authentication mechanism will verify the identity of the subject, and second, an authorization mechanism ties the appropriate actions to someone's identity [10].

Simply put, identification is a declaration of who we are. This may include who we claim to be as a person or who a computer claims to be over a network [11]. The process of identification itself does not involve any sort of authentication, verification, or validation of the identity. That part of the process is referred to as verification, and it is usually processed as a separate transaction [11]. Recognition, on the other hand, is a notion that generally includes both identification and verification [12]. There are three modes of authentication: 1) possessions (e.g., using a smart card), 2) knowledge (e.g., recollecting a password), and 3) biometrics (e.g., using a physiological characteristic of an individual to distinguish them from others). Smith [10] describes these modes as 1) something you have, 2) something you know, and 3) something you are. During automated authentication in biometrics, two methods are common: 1) verification and 2) identification. Verification is based on a unique ID that singles out a person and that person's biometrics, while identification is based only on biometric measurements that are compared to a whole database of enrolled individuals [9, p. 5]. Depending on the manner in which biometrics are used, the process of authentication differs. Today, multifactor authentication is prevalent in most biometric systems [e.g., the use of personal identification numbers (PINs), automatic teller machine (ATM) cards, and a biometric for withdrawing cash from a biometric-enabled ATM].
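The contrast between verification (1:1) and identification (1:N) can be sketched as follows. This is a minimal illustration: the two-dimensional feature vectors, the plain Euclidean distance, and the threshold value are assumptions chosen for clarity, whereas real systems use far richer templates and tuned matching algorithms.

```python
import math

THRESHOLD = 0.5  # illustrative match threshold, not a real-world value

def distance(a, b):
    """Dissimilarity between two feature vectors (plain Euclidean here)."""
    return math.dist(a, b)

def verify(claimed_id, sample, database):
    """Verification (1:1): compare the sample only against the claimed ID's template."""
    template = database.get(claimed_id)
    return template is not None and distance(sample, template) <= THRESHOLD

def identify(sample, database):
    """Identification (1:N): search every enrolled template for the best match."""
    best_id, best_score = None, float("inf")
    for person_id, template in database.items():
        score = distance(sample, template)
        if score < best_score:
            best_id, best_score = person_id, score
    return best_id if best_score <= THRESHOLD else None

# Hypothetical enrolled templates (toy two-dimensional feature vectors).
db = {"alice": (0.1, 0.9), "bob": (0.8, 0.2)}
print(verify("alice", (0.12, 0.88), db))  # 1:1 check against alice's template only
print(identify((0.79, 0.21), db))         # 1:N search over the whole database
```

Note that the 1:1 check touches a single template, while the 1:N search scales with the size of the enrolled database, which is why identification is the costlier mode.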

There are four steps that typically take place when using a biometric system. First, data are acquired from the subject: a digital image of the biometric is captured and transferred to the signal-processing function (also known as image processing). Usually the data-acquisition apparatus is collocated with the signal processor, but if it is not, the image is encrypted prior to transmission. Second, the transmission channel that links the primary components transfers the data, either internally within the device or over a distributed system, usually a private network. On occasion, data may be acquired remotely at branch locations and stored centrally. Third, the signal processor takes the raw biometric image and begins the matching process: segmentation occurs, yielding a feature extraction and a quality score, and the matching algorithm attempts to find an identical record, producing a match score. Finally, a decision is made based on the resultant scores, and an acceptance or rejection is determined [13, p. 29f].
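The four steps can be strung together as a sketch. This is a toy illustration under stated assumptions, not real signal processing: each stage function, the "extraction" by rounding, and both thresholds are invented for the example.

```python
# Illustrative sketch of the four-stage pipeline described above:
# acquisition -> transmission -> signal processing -> decision.
# All functions and thresholds are hypothetical placeholders.

def acquire():
    """Step 1: capture a raw biometric image from the subject."""
    return {"pixels": [0.2, 0.8, 0.5, 0.9], "encrypted": False}


def transmit(image, collocated=True):
    """Step 2: move the data to the signal processor; encrypt first
    if acquisition and processing are not collocated."""
    if not collocated:
        image = dict(image, encrypted=True)
    return image


def process(image, enrolled_template):
    """Step 3: segmentation and feature extraction yield a quality
    score; matching against the stored record yields a match score."""
    features = [round(p) for p in image["pixels"]]  # toy "extraction"
    quality_score = sum(image["pixels"]) / len(image["pixels"])
    agreement = sum(1 for f, t in zip(features, enrolled_template) if f == t)
    match_score = agreement / len(enrolled_template)
    return quality_score, match_score


def decide(quality_score, match_score, q_min=0.5, m_min=0.75):
    """Step 4: accept only if both scores clear their (assumed) thresholds."""
    return quality_score >= q_min and match_score >= m_min


template = [0, 1, 1, 1]                       # stored at enrollment
image = transmit(acquire(), collocated=False)  # remote capture, so encrypted
q, m = process(image, template)
print(decide(q, m))
```

Gating the decision on a quality score as well as a match score mirrors the practice of rejecting poor captures before matching is attempted.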

Iris Recognition Technologies

The iris is the colored part in the middle of the eye, just in front of the lens: "a thin diaphragm stretching across the anterior portion of the eye and supported by the lens" [25, p. 1,344]. The main function of the iris is to control the amount of light entering the eye, and it is the only internal organ of the human body that is externally visible [14]. Unlike other biometrics, such as fingerprints, the iris does not wear off and is not easily affected by surgery or disease, as it is physically protected by the eyelid and the cornea [15]. The iris is also said to remain stable and permanent from about the age of 18 months throughout a person's entire life [16].

The iris has gained much attention because of a set of desirable qualities. Technologies that scan or read the iris are noninvasive and make no contact with the human body. No communicable diseases can therefore be transmitted from one individual to another, so the technology is considered quite hygienic compared with others, such as fingerprint readers whose touch surfaces require direct contact. Technologies built on iris recognition have also gained cultural acceptance, specifically in Islamic countries where the burqa is common and women are usually prohibited from physical contact with strangers and from unveiling any part of the body except the eyes.

Other significant qualities of the iris include uniqueness, universality, longevity, collectability, and antitampering, which together ensure accurate identification that is not subject to duping with an impostor's traits [17]. Universality refers to the existence of iris characteristics in every person, whereas uniqueness refers to the ability to distinctly identify each individual from his or her iris characteristics. The subtle textures shaping the iris form completely distinctive patterns that differentiate one person from another far more than most other biometrics [17]. No two people in the world share the same iris print; even an individual's left and right irises differ, as do those of identical twins [18]. Artificial duplication of the iris is virtually impossible because of its unique properties. In addition, because the iris is closely connected to the human brain, it is one of the first parts of the body to degenerate after death; it is therefore impossible to forge an artificial iris or to use a dead person's iris to fraudulently bypass a security system [19].

Iris-based technologies are claimed to be 100% accurate, the most accurate of available biometric solutions and the fastest among all biometric security solutions [20]. To be recognized, the individual need only look at a scanner/reader that takes a high-resolution picture of the eye; a match is then performed between the "live" digital image of the iris and a previously recorded image or template of the individual's iris [21].

Figure 2. Staff Sgt. John Silvia, 45th Expeditionary Security Forces Group entry control point, scans an Afghan woman's iris in the waiting area of the Korean hospital at Bagram Airfield, Afghanistan, 2 December 2012. Medical teams use biometrics to identify and track the records of all incoming patients by scanning their irises and fingerprints and then inputting the information into a database. (Courtesy of U.S. Air Force/Senior Airman Chris Willis.)

The spatial patterns of the iris are highly distinctive. According to Williams [22, p. 24], the possibility that two irises would be identical by random chance is approximately 1 in 10^52. Each iris is unique (like the retina). Some have reckoned automated iris recognition as second only to fingerprints, while others claim that it is the most accurate biometric identifier available today [23]. According to [24, p. 1,349], these claims can be substantiated from clinical observations and developmental biology. While some manufacturers claim to be able to capture a digital iris image at up to 10 m, commercial systems typically have a focal distance of not more than an arm's length (e.g., ATMs based on iris recognition). See Figure 2.
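A back-of-the-envelope calculation shows why such tiny chance-match odds matter for 1:N searches: with a per-comparison false-match probability p and a database of n enrolled templates, the expected number of chance hits is roughly n × p. The 1-in-a-million comparison rate used below is purely illustrative, not a figure from the cited sources.

```python
# Back-of-the-envelope: chance matches in a 1:N database search,
# assuming independent comparisons with per-comparison false-match rate p.

def expected_false_matches(p, n):
    """Expected number of chance matches when one probe is compared
    against n enrolled templates."""
    return p * n


def prob_at_least_one(p, n):
    """Probability of at least one false match in n comparisons."""
    return 1 - (1 - p) ** n


# An (illustrative) per-comparison rate of 1 in a million, versus the
# claimed iris odds of ~1 in 10**52, searched against 2 million records.
n = 2_000_000
print(expected_false_matches(1e-6, n))   # 2.0 expected chance hits
print(prob_at_least_one(1e-6, n))        # ~0.86: a false hit is likely
print(expected_false_matches(1e-52, n))  # ~2e-46: effectively zero
```

The contrast is the point: a merely good matcher drowns a large database in chance hits, while odds on the order of 1 in 10^52 leave effectively none even at national scale.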

Since iris recognition systems are noninvasive and contactless, extra protections have been devised to guard against attempts to fool the system with a still image. Scientists have developed a method to monitor the constant oscillation of the diameter of the pupil, thereby confirming that a live specimen is being captured [24, p. 1,349]. A transaction time of between 4 and 10 s is required for iris recognition, although most of that time is spent aligning the subject for the digital image capture.
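The pupil-oscillation liveness check can be sketched as a simple test of variability in the pupil diameter measured across video frames: a live eye oscillates slightly (a phenomenon known as hippus), while a printed photograph yields an essentially constant reading. The measurements and the threshold below are illustrative assumptions, not values from the literature.

```python
# Toy liveness check: a live pupil's diameter oscillates slightly frame
# to frame; a still photograph yields an essentially constant diameter.

def is_live(diameters_mm, min_std=0.05):
    """Return True if the pupil diameter varies enough to suggest a
    live eye. min_std is an assumed, illustrative threshold."""
    n = len(diameters_mm)
    mean = sum(diameters_mm) / n
    variance = sum((d - mean) ** 2 for d in diameters_mm) / n
    return variance ** 0.5 >= min_std


live_eye = [3.1, 3.3, 3.0, 3.4, 3.2, 3.3]   # oscillating (hippus)
photo = [3.2, 3.2, 3.2, 3.2, 3.2, 3.2]      # static image

print(is_live(live_eye))  # True
print(is_live(photo))     # False
```

A production system would of course use many more frames and a calibrated threshold, but the principle is the same: liveness is inferred from temporal variation that a static spoof cannot reproduce.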

Applications

Since 2007, biometric technologies, especially iris-recognition technologies (IRTs), together with fingerprint recognition systems, have become the preferred multimodal techniques in the security domain, especially with respect to citizen identification by government. IRT is increasingly being considered and applied in banking, e-commerce, border control, national security, and other security application areas.

One of the most prominent examples is in border security and control in several countries around the world, including the United Arab Emirates (UAE), Canada, the United States, and the United Kingdom [26]. IRTs are used for different purposes within border security. Passengers who have been preregistered in an iris database can enjoy speedy access, entering or leaving a state without the need for a passport or any other identity document. Airline crew members and airport employees can use iris recognition to gain access to secure air-sides or restricted areas. Arrivals can be screened against a watch-list database recording the irises of persons considered dangerous, illegally returning immigrants, or expellees excluded from entering the country [26]. In the UAE, for example, IRTs are used to identify each passenger; the process takes about 2 s against a database of over 2 million expelled-foreigner records. By the beginning of 2016, the technology had scanned up to 42 million people, with an average of 30 individuals caught every day and denied entry [20].

Another application of IRTs is within law enforcement. The technologies have been used in Jordan, as an example, for narcotics control to keep track of drug dealers and suspected drug traffickers. IRTs are also utilized for prison control in the United States. These technologies are used when booking and releasing inmates to make sure that no mistakes happen when releasing a prisoner [20], [27].

IRTs are currently used by several banks around the world as a fast and convenient method to verify clients. In 2008, the Cairo Amman Bank of Jordan became the first commercial bank in the world to integrate iris security scanning technology into its core banking systems [28], [29]. The technology enables the bank to register its clients through a dedicated iris imager fitted next to the desk of each customer service officer. The iris print is stored in a central database, from which it is later securely retrieved, almost instantly, for recognition purposes. The client can then enjoy an easy-to-use and secure banking experience either through an iris-reader-enabled ATM or at a teller desk, eliminating the need for a bankcard, a PIN, or even a personal identity card. See Figure 3.

Figure 3. IRT in action at one of Cairo Amman Bank's ATMs. (Courtesy of Anas Aloudat.)

Another domain that has found an application for IRTs is humanitarian relief. The United Nations High Commissioner for Refugees (UNHCR) has begun a financial inclusion project to deliver micropayments to Syrian refugees living in Jordan using their irises at ATMs, without the need for bankcards or PINs, enabling cash-dispensing transactions discreetly and in limited amounts [30]. Solutions based on IRTs are also being expanded to include other services for refugees, such as medical care, food, and other financial subsidies, for example by providing vans inside refugee camps fitted with iris-reader-enabled ATMs [31].

IRTs have also been trialed to create safe school zones. In New Jersey, the technology was deployed as an entry access control system that identifies the individual seeking to enter the school, decides whether to grant entry, and unlocks the entrance if the person is approved. A second application of the technology was as an identification system for parents who wanted to pick up their children before the end of the school day. Parents voluntarily participated, having their irises scanned rather than signing in and showing identification to the school staff [32].

Perhaps the most notable application of iris-based technologies is in national identity programs, such as the Unique Identification Authority of India (UIDAI) project. Tens of millions of Indian nationals have presented their irises for processing and registration in thousands of centers throughout India, making the project the world's largest biometric national ID of its kind [33]. But how far can these technologies actually permeate into our personal lives before they incur scope creep? Already, for instance, the national ID system in India, known as Aadhaar, is being extended for use as an employee ID by the private sector, and there are even calls now to link it to private bank accounts (Figure 4).

Figure 4. A biometric data collection camp of the Aadhaar project of the Unique Identification Authority of India (UIDAI), Government of India, Salt Lake, Kolkata. (Courtesy of Biswarup Ganguly.)

In the next section, we present a closer look at the social and ethical issues raised by IRT use relating to the ownership of individual iris data by government and business. Alongside the apparent benefits of IRTs presented in this article, there are commensurate concerns about their acceptable use. There are significant issues about the role of IRTs that cannot be ignored, for example, the intangible adverse impacts of these technologies on individuals beyond the realm of individual recognition for security purposes.

Social Implications

First, we should point out that the IRT does not uniquely identify a person; the technology uniquely identifies an iris by matching it against templates of irises stored in a database and then mapping a name to the match. If that database has been altered or tampered with, the match will not yield the person's true identity. But the following discussion leaves this fundamental problem to the side and makes the assumption that the databases and systems built on iris-related information are almost impossible to tamper with since they continuously adhere to strict control procedures and rigorous protection by governments and businesses.

There is a general lack of awareness that the iris itself reveals more than just the information used in the process of identifying an individual. Information derived from the individual's iris can tell us a lot about what the person is, not merely who the person is. A striking example is that information taken from an iris scan may, in the future, provide medical information about an individual [34]. This information could subsequently be used unethically to deny a prospective employee a job or other benefits owed, on the inference that the individual is unfit for a given position.

Another serious pitfall related to IRTs in the literature is the taken-for-granted assumption that iris patterns do not change over time. Rankin et al. [35], however, have shown that changes in iris texture appearance occur with age, disease, and medication. In their study, the researchers noted iris recognition failure rates of up to 20% over six-month intervals. Any technology is undoubtedly vulnerable to faults, but in some situations faults may have severe consequences for individuals, such as outright exclusion from physical access. In border-control environments, high error rates may produce false positives or false negatives that affect individuals in harmful ways: limiting their travel, denying them entry to a country, or even leading to detention or criminal charges based on misinterpreted iris data.


It is a fact that most refugees live in miserable conditions. Take, for example, refugees in Jordan who, despite all the support and assistance provided by the Jordanian government and people, live year round in poor sanitary and housing conditions and in extreme weather [36]. This takes a serious toll on their health, including the condition of the iris. A refugee can be denied a payment when it is needed most because the iris-enabled ATM cannot read his or her iris. Unfortunately, using IRTs for refugees also does not eradicate opportunities for corruption. Refugees reported that some locals demanded a commission from families in exchange for providing transportation to an ATM to redeem their payments. Another risk is the danger of retaliation by the country of origin should it somehow obtain the iris data of its own nationals [21].

Technologies built on scanning and reading the human iris also do not solve the problem of registering very young people. As stated earlier, iris patterns do not fully stabilize until about 18 months of age, so it may be difficult to record an iris scan before this age; as a result, IRTs cannot be used as the sole technology in current or even future national identity programs.

There are several substances, such as alcohol, cocaine, and marijuana, that affect the condition of the iris. In an experiment carried out by Arora et al. [37], the researchers matched iris images captured before and after alcohol consumption. The consumption of alcohol causes the pupil to dilate, which deforms the iris patterns and, in turn, significantly degrades the matching performance of iris scanners. With the continuing reduction in the cost of iris scanners, we are most likely to witness applications of IRTs in environments that were not previously possible. For example, a work environment in which a manager can notice changes over time in employees' irises because of issues with alcohol or other substances poses a serious invasion of privacy. Decisions might be taken that impact an individual's career or his or her ability to function socially once such personal issues are brought into public view.

British Telecom developed a high-speed iris scanner that can capture the iris print of a person in a car driving at 50 mi/h [38]. As this technology advances and falls in price, it is likely that IRTs will find their way into law enforcement in a plethora of applications, such as screening for wanted suspects at a distance, detecting individuals driving under the influence of illegal drugs, or serving as evidence in support of a criminal investigation, if required.

Empirical evidence from Xianchao et al. [39] and Lagree and Bowyer [40] suggests that ethnicity and gender prediction are possible from iris textures. Some people have secrets to hide, sometimes even their true ethnicity, to avoid social isolation. Ahn [41], for example, reported that many Korean-Japanese of Korean ethnic origin and Japanese nationality still try to hide their ethnicity and pretend to be 100% Japanese for fear of discrimination and because of the insufficient academic support given by schools and teachers to non-Japanese students.

Many migrants change their original names to names that integrate easily into their new societies. The potential future use of IRTs in work environments, for example at job interviews, could largely undo such efforts at integration by exposing the ethnicity they sought to keep private.

There is also a significant population of people who do not wish to disclose their gender. In a world where, someday, you can shop using your iris-reader-enabled webcam instead of your credit card, the potential for targeted marketing and customer profiling becomes more invasive than ever before.

Research by Larsson et al. [42] explored the associations between personality and iris characteristics and found that people with different iris configurations tend to develop along different personality trajectories and that the characteristics of the iris are significantly associated with several approach-related behaviors, including feelings, tender-mindedness, warmth, trust, positive emotions, and impulsiveness. Although no other study has reinforced these results, the implications are still profound: they open the door to using iris data in future genetic personality research and to providing a tool for inferring an individual's personality from nothing more than an iris scan.

Another issue is that iris data can be analyzed for additional information, with no requirement that individuals ever be notified. To illustrate, a bank using an iris-scanner-enabled IRT can process the physiological characteristics of a customer performing a transaction, and the system may detect changes indicating that a female customer is pregnant. Indeed, as previous research has suggested, it might be possible to use an iris scan to determine not only that a woman is pregnant but also the gender of the unborn child [43], [44]. Haag and Cummings [44] provided an interesting example of how IRTs could be used for smarter customer profiling in the future. Consider the possibility that an IRT system infers that a customer is expecting a baby girl and then proceeds to print a pink-colored receipt with an attached coupon for 10% off any purchase of girls' clothes at specific stores. The bank might go further and offer financing for a minivan, a special second mortgage so a room for the baby can be added, or a tuition account for the child. The point here is that the IRTs of the future will capture and process the physiological characteristics of the person performing the transaction and reveal information that the individual does not yet know or does not want to know. Many parents choose not to learn the gender of their unborn child, and iris scanning has the potential to dramatically disrupt that choice.

The analysis of physiological characteristics extends beyond pregnancy and may include, as stated previously, the presence of alcohol or illegal drugs and even hair loss, low blood sugar, and vitamin deficiencies [44]. It is worrisome to think about governments and businesses using such depth of analytics. The level of invasion into individuals' privacy would be unprecedented, especially given today's lack of policies and laws protecting iris information, and most other biometric information, in both the public and private sectors [11].

Conclusion

Biometrics, specifically iris recognition, is gaining much interest today from governments and businesses, mainly for security enforcement and intelligent customer profiling. In this article, we touched upon the concepts of biometrics based on the unique physiological features and behavioral patterns of the human body, explained the iris and the unparalleled characteristics that single it out from other biometrics, and presented a range of applications in which IRTs have been successfully utilized worldwide. We then discussed the social implications of IRTs in relation to privacy, iris recognition failure, and the ability of these technologies to reveal information from an individual's iris data that goes beyond the purposes of security and profiling, such as predicting pregnancy, gender, ethnicity, personality, and alcohol and other substance consumption, as well as medical conditions such as low blood sugar and vitamin deficiencies.

One guideline, as Alterman [21] correctly argued, is based on the fact that the ethics of biometrics, including the iris, cannot rest on the assumption that the related data are absolutely secure. Threats to privacy in the form of uncontrolled collection of personal data and unauthorized access to personal information are as possible with iris data as with any other. Privacy policies should be in place, publicly posted, and should clearly state when, how, and why iris information is collected and used. Limited access points and strong encryption measures should be implemented to audit official access to iris data and to eliminate unofficial access entirely.

Another guideline is to take personal decisions seriously when it comes to IRTs. Several countries, India and Jordan among them, have started to establish national iris-ID systems in which citizens are obligated to scan and store their iris prints in national biometric databases (Figure 5). For ethical reasons, and to alleviate personal concerns, a person should be free to choose whether his or her iris information can be used by governments or businesses, and the person's own decisions about how, when, and for how long that information is used and kept should also be respected. After all, the iris is not the only method of identification and recognition; the person should be free to select from a set of choices when registering his or her details with a government or business database. The potential for retrospective use without the enrollee's permission is another major problem: users have no idea where their data are being stored or who has access to them for further investigation.

Advocates of IRTs would strongly argue that the substantial problems these technologies solve and the benefits they bring far outweigh any "Orwellian concerns" there might be about privacy, the burdens of technology failures, or technology outstripping our ability to understand the intended or unintended consequences of its uses. Nonetheless, it is important to investigate the social implications of IRTs, as this article has, before a world arrives in which iris-based technologies, coupled perhaps with other potential surveillance technologies, such as GPS-enabled devices [45], location-based tracking and monitoring [46]–[49], the Internet of Things [50]–[52], and big data [53]–[55], become pervasive in all aspects of our daily lives. A final creepy reminder of the possibilities is illustrated in Minority Report, where each and every individual on earth is instantaneously and remotely identified via a scan of his or her iris. We are speaking of fundamental human rights here that may well be impinged upon, heralding an age of uberveillance.

36. B. Staton, Life is so miserable for Syrian refugees that thousands are returning home to a war zone, Jan. 2015, [online] Available: http://www.dailynews.com/general-news/20151029/life-is-so-miserable-for-syrian-refugees-that-thousands-are-returning-home-to-a-war-zone.

Wearable devices with independent computing and networking capabilities change the proximity of people and visual information to self-presentation and self-perception. This article examines the disruptive effect that wearable technologies like the Digital Eye Glass present in documenting and representing the self in a surveillant world. We look at how the power relationships in self-presentation and self-interpretation are changed by sousveillant apparatus, and we explore how these practices of “looking” mediate the subject and power in the changing ethics and politics of human-to-human and human-to-computer interaction.

Behind us are the antiquated 19th-century anthropological notions of humans as tool users and the 1960s McLuhanesque ideas of tools as extensions of our senses. In front of us is an abyss where computing moves beyond mobile tablets and wearables, beyond "smart" devices and ambient computing strategies, into our bodies and brains, and into the very definition of what it means to be human. As computers and computing move from desks and carrying cases into our watches and glasses, onto our clothes and bodies, we stand at a precipice of a new definition: what does it mean to be a "mediated" human? What is the relationship between the tool and the human? In this future, we move beyond notions of mediated reality, beyond representation, and into presentation, the construction of what is "real," and the construction of semiotic and semantic meaning by the human-machine. The coming tsunami of wearable devices represents the normalization of liminal and diffused spaces that will redefine our understanding of the "human" and the relationships of power, agency, and subjectivity that they mediate.

Wearable technologies—technologies that are worn on the body for extended periods and incorporate circuitry, independent processing, and wireless connectivity—are intercessors for the semiotic integration of the "tool" into our experience of the world. They represent a paradigm shift from the established trajectory of ethical and sociotechnical norms toward an undiscovered country where there is no separation between what is human and what is machine. The human-machine unit creates semiotic meaning about the world around us, yet simultaneously presents a dreamy technological utopia that may be hiding dystopian undercurrents. Wearables may signal a future where machines are an integral unit of our meaning production and when sensemaking becomes dependent on the human-machine. Our practices of looking, presentation of self and interpretation of other, and way of seeing our environment's affordances are inextricably linked to the human-machine hybrid. We need to be aware that the hybrid is monitored by the corporations and governments that produce, control, and surveil the devices and their supporting infrastructures. From this perspective, wearable devices represent more than just a potential economic disruption, but, in a broader sense, a disruption of the ethics by which we live [2].

Much like the digital revolution that brought computing to our homes and workspaces in the 1980s and 1990s, the current movement toward wearable technology is threatening our established norms and blurring the lines between technology and the body, individuals and groups, and power and the subject. For example, consider the much-hyped Google Glass project. Google Glass is a commercialized glasses frame that incorporates video capture, independent processing, connectivity, and visual feedback to allow users immediate contextual interaction with digital information—kind of like a wearable cell phone. Although many of the features of Glass are similar to what smartphones allow people to do today, the form represents a departure from established social norms by allowing unprecedented portability and capture—to go where no digital eye has gone before. This article examines how the mass adoption of wearable computing is disrupting ethical norms in the presentation of self and other through power relationships between individuals and technology, private and public spaces, and surveillance and sousveillance.

Wearables as Disruptive Technology

In The Innovator's Dilemma, Harvard Business School Professor Clayton Christensen [4] categorized technologies as either sustaining or disruptive to an established economic market trajectory. He argued that sustaining technologies rely on incremental improvements that generally maintain the status quo. Disruptive technologies, on the other hand, destabilize established markets or market segments and, through a process of refinement, improvement, and eventual innovation, create new norms. Christensen argued that these technologies, which start at the fringes with early adopters, generally capitalize on inefficiencies, limitations, or gaps in existing technologies. As these disruptive technologies move from the periphery to mass adoption, they establish new opportunities, new norms or market niches, and, through processes of innovation and black-boxing, create a technological paradigm shift in which disruptive technologies transgress periods of stabilized social practices and norms.

The Apple iPad can be used to illustrate this point. By the time the original iPad was released in August 2010, mobile computing was an already-established concept. Companies like Bell Canada tried replacing their technicians' laptops with tablets in 2002 [27]. Companies like Motion Computing were also developing enterprise markets with analogous, if not homologous, devices. Motion Computing launched a Windows XP-based tablet PC as early as 2005; however, despite some early adoption, many felt that tablets were undesirable. Viseu [27] found that Bell technicians did not want their laptops replaced; they had gotten used to bringing their work laptops home for personal use, which they considered an employee benefit. The embryonic tablet market of the time did not permit similar uses, so the technicians resisted the idea of giving up what was perceived as a perk. Despite the existence of tablets from 2000 through 2010, the culture of computing surrounding early use proved one of the barriers to mass adoption.

The introduction of the iPad in August 2010, along with an established ecosystem of software (mostly from the existing iPhone apps), disrupted desktop, laptop, and netbook markets and, perhaps more significantly, led to major social shifts in mass adoption and use of mobile computing devices. The disruption of established paradigms and the resulting social, technological, and ethical shifts have since transformed the status quo much more broadly than even Christensen's mainly economic lens could predict. The iPad moved the masses off of desks and tables and contributed to a new norm for mobile and pervasive computing. New practices, like watching media on the second screen, began to augment, rival, and even displace established TV-viewing habits. The iPad proved to be a truly disruptive technology not just in its own market segment, but also through broad social and ethical changes that accompanied the technology. Likewise, the eventual mass adoption of wearable technologies will be economically and, more importantly, ethically disruptive to human interactions and institutions.

Like tablets, wearables are also moving from the periphery toward mass adoption. In the process, they are moving liminal cyborg identities from the margins of sociotechnological practice toward the mainstream. These devices, which provide individuals continuous bidirectional connectivity, will change the presence and immediacy of information in daily experience as well as our presentations and representations of ourselves. Self-presentation, mediated by wearable technologies, has direct and immense implications for the relationship of power between self and other or, as Goffman [10] put it, between the actor, the audience, and the very nature of the "stage" upon which we construct our identities.

Presentation of Self

In the 1950s, Goffman used the stage as a technological metaphor for social interactions between self and the “other” in institutional and institutionalized settings. In his writings and his seminal book, The Presentation of Self in Everyday Life [10], Goffman frames the performance of self as a theatrical activity that is fundamentally asymmetric. He stresses that actors see from one perspective but are viewed (to paraphrase Lacan) by their audiences from many perspectives. He suggests that individuals can maximize the effectiveness of their performance by hiding aspects of the self in the backstage while allowing other aspects to appear on the front stage. Audiences receive verbal messages from the individual but are simultaneously able to read multiple cues of the performance beyond what is simply said or overtly offered by the presenter. For example, an individual who is trying to portray a particular identity at a border crossing and who appears nervous and fidgety may give inspectors unintended cues about his or her actual identity. Although the individual may wish to be viewed in one way, according to the presented documents, the performance may be interpreted differently by border officials, resulting in asymmetric viewing (i.e., individuals display only chosen aspects of self, while the audience reads more into the performance than was intended) [10].

“Backstage” becomes the metaphor and mechanism by which individuals hide aspects of themselves that are disadvantageous to the goals of an exchange. This private psychological or technological space is where individuals can be themselves and is ultimately a social imperative for what we consider privacy and democracy (see [28]). According to Goffman, we, as actors, use props and strategies to establish our identities depending on contextual needs. He also argues that we are both actors and audience members in institutionalized exchanges, implying a contextual and iterative exchange between the self and other and suggesting an agency on the part of both parties to construct and deconstruct performances as they occur. Although this exchange does not completely mitigate the asymmetries of individuals dealing with others, it does afford the self a presentation “toolbox” that can help achieve specific, contextually integral identities.

In real life, these exchanges can be seen in basic interactions between individuals who pose for ID pictures at government-authorized agencies. People posing for a photo for a driver's license or other photo ID—a 20th-century technology currently under siege by biometrics and (digital) visual analytics—tend to perform identities by subtly positioning their heads or posture for the picture and, more explicitly, by choosing appropriate outfits for the experience. Before some countries announced rules catering to early face-recognition systems in the mid-2000s, individuals posing for institutional IDs could smile to make themselves seem more friendly, pleasant, and personable. This strategy is significant not only at the initial enrollment into the institution but also later, during future performances, when individuals are asked to provide their photo IDs [6].

Shifting Paradigms

Wearable computing changes the way we present the self in everyday life. First, technological mediation allows for a new stage for presenting the self. Playing on Donna Haraway's cyborgian ideal, wearables like digital eye glasses allow individuals to construct and share potentially unconventional perspectives of self. Although by no means mainstream yet, this is seen in the practice of lifeblogging, which can be framed as an ethically disruptive practice that changes the presentation of self. Also known as cyborglogging, glogging, lifeglogging, lifelogging, and lifecasting, lifeblogging is an individual's continuous broadcast of his or her everyday experience [14]. Lifebloggers present aspects of the self in the context of everyday life. The first person to officially document a lifecast (to stream continuous, live, first-person video) from a wearable camera was Steve Mann, on 22 February 1995. Mann serendipitously used his “wearable wireless webcam” as a roving reporter on the MIT campus when he happened to encounter and capture a fire that had broken out. Hove [11] maintained that the wearable web camera went too far and prophesied that constant video broadcasting would have ethically transformative implications. However, Steve Mann was a single individual, so it was impossible to know at that time the exact implications of wearable technology on a mass scale.

Historically, radio and television generally provided curated and produced materials from centralized broadcasters. Lifecasting gathers and broadcasts ubiquitous information “from below,” streaming 24/7 the incidental observations and events captured as part of the continuous flow of information. Ordinary people's audiovisually recorded events become another official record that may be used to challenge authoritative history (serving, for example, as an alibi), referred to by Michael and Michael [21] as crowd-sourced sousveillance. Furthermore, these incidental vignettes of life become more real than the truth produced by centralized media channels—a kind of Baudrillardian hyperreality. Somehow, captured experience seems perceptibly more raw, more real, and more genuine than produced media (see, for example, https://giveit100.com, where participants are challenged to create 10-second videos of themselves for 100 days). This is not to say that video captured from below is not subject to the same limitations as video captured from above; both can be tampered with, but crowd-sourced “gazing” usually yields many versions that can corroborate the authenticity of an event, as opposed to only one version of CCTV from above. Even though most of the content stemming from lifecasts can be considered banal, it does demonstrate the potential of mobile and pervasive media to capture marginal and marginalized narratives, news, and views.

This romanticized view of lifeblogging promises to propel integration of mediated experience into the “everyday” of human experience. This is particularly significant when trying to understand a subject's relationship to a means of power and self-presentation in our everyday lives. With wearables, big “P” power, or the institutionalized mechanism of agency, control, and surveillance of bodies in modern societies, becomes increasingly mediated, surreptitious, ubiquitous, and coconstituted by its subjects. Mediating power changes the subject's ethical conduct in everyday life. However, the promise of lifeblogging to construct individual perspectives hinges on the design and control of the supporting information architecture. Information flow and the power over its storage, analysis, and distribution have great implications for individuals' empowerment. The institutionalization of these new practices of looking, rather than giving agency to the individual, may simply entrench the institutions' established power structures. This is particularly true of devices like Google Glass, where the corporation mediates the storage, broadcast, and (likely) analysis of the visual data and user interactions. Further, the information a lifeblogger's stream gathers about the blogger's own backstage (and about others) may pose risks that outweigh the benefits of checking institutional power and surveillance. For example, a lifeblogger's stream may alert a potential robber that the blogger is away from home, or it may provide information about other individuals who are caught surreptitiously in the blogger's stream that is then used against them by the custodians of the information (Google in the case of Glass) or third parties with whom the custodians share information (for example, a national security agency).

Steve Mann and the thousands of early adopters of his EyeTap technology were on the fringes of these sociotechnical practices for decades. The promise of those on the fringes and edges, at least according to scholars like Haraway and Derrida, is that they are often the ones who help establish the spaces that eventually define the mainstream, so the mass adoption of lifecasting using technologies like Google Glass has significant implications for the ethics of social and institutional interaction as well as for the overall power relationships of gazing—the visual construction of meaning.

Shifting Gaze of the “Other”

The second potential shift presented by wearables involves the perceptions based on one's self-presentation. The promise of immediacy built into devices like the Digital Eye Glass allows audiences to view an individual's performance and access information about the performer in real time to an unprecedented degree. Wearable computing potentially gives synchronous access to asynchronous information about an individual's backstage to a degree that was previously unimaginable and unavailable. This synchronous access to information about the self, facilitated by ubiquitous, mobile, and wearable computing, challenges a person's ability to maintain a Goffman-style identity backstage. For example, emerging social media allow individuals to present what Sherry Turkle, in Life on the Screen (1995), termed the multiple distributed self, in which one's identity is contextually compartmentalized. On Facebook, an individual can construct one facet of the self for presentation to the “other,” whereas on Twitter, the same “body” may present a different persona or anonym, a different version of the self. This phenomenon has also been discussed as the idio-technopolis [19]. The practice becomes problematic, however, when barriers between different self-presentations break down because of algorithmic surveillance strategies. When a boss monitors an employee on Facebook or a parent “spies” on a child, that individual's backstage is challenged; wearable technologies make such surveillance easier. Moreover, the erosion of an individual's backstage is not limited to a temporal presentation of self. Data captured in one context can be retroactively searched and shifted to a completely different contextual construction of self. This temporal dissonance represents an identity function creep that was too difficult and resource-intensive to perform on mass scales before digitization and the currently evolving signal processing of big data.
The control that groups of “others” exert over an individuated “self” is deeply rooted in the human condition and in the formation of society. In other words, despite changing technological paradigms, the presentation of self, whether on a stage, on an ID card, or through a biometric measurement, remains an archetypical enactment of power over the subject. Mobile, wearable, and ambient computing may, for some, represent an erosion of that surveillant control, a recovery of power on the part of the subject.

Mediating Technologies, Self-Presentation, and Power

The nature of the stage—the technological mediation involved in self-presentation—and the audience's gazing potential are key to understanding the shifting relationships of power embedded in the affordances for meaning that wearable computing represents. Currently, we are experiencing technological power mediation between the subject and “other.” This mediation is essentially a reconfiguration of power—of how technology is used to mediate agency and subject construction. According to Foucault [7], power is the mechanism by which humans are made into subjects of economies, institutions, and other forms of classification. Power is the underlying means by which we structure our societies and govern our ethical conduct. Human beings are objectified into systems of governance through the production of subjects. Producing an objectified subject is, according to Foucault, a key process in enacting power, and that production is mediated and changed by wearable technologies. In the case of the Digital Eye Glass, the wearer, the audience viewing the broadcast, and anyone caught in the gaze of the glass all become subjugated to the power networks maintained by the device.

For Foucault, the use of looking as a form of internalized discipline has become increasingly relevant when discussing multiple forms of visual and video surveillance. Foucault describes isolating the body of citizens into individualized identities that are constantly under obfuscated and asymmetric surveillance. In Discipline and Punish: The Birth of the Prison [8], he depicts the prison as the model for a modern economy of power that distributes and internalizes control through gazing based on a generalized (and generalizable) body. In Foucault's panopticon, a system based on asymmetric gazing between guards and prisoners, the watcher sees the body of the prisoner without being seen in return. Agents of the institution generally write, maintain, store, and interpret a record or identity, as opposed to the subject of the gaze, to whom the system generally remains opaque. The guards (metaphorical authoritarians) use their ability to “see-but-not-be-seen” to observe and discipline people. This model suggests that we, as citizens, generally observe the rules of the authority in power because we fear repercussions. Foucault calls this internalized discipline. For this surveillance mechanism to monitor mass populations effectively, a tension must be managed between the forms of localization (from literal to metaphorical imprisonment) and their resource implications for oversight. For the most part, in modern democracies, the gaze of surveillance and the threat of being caught encourage most citizens to obey (become docile bodies). A national ID system may thus be understood as a way of disciplining individuals into localized identities to create “known” docile bodies.

The knowledge produced by the state's gaze upon its citizens becomes a form of power. Bodies are linked (metaphorically localized) through institutional mechanisms to a specific file or record (i.e., identity), institutionalizing the presentation of self. These systems are often biased and depend on gazing asymmetries [18].

Foucault's metaphor of the citizen as prisoner in an institutional (identity) panopticon “without walls” has revolutionized the understanding and analysis of “power” in modern societies. Arguably, surveillance has pushed well beyond Foucault's vision of one-way gazing onto a localized body. Surveillance has diversified, using new ways of looking (e.g., data analytics) and new power relationships, and it has become abstracted to the level of symbols, or binary codes, aggregated and reconstituted at will by those who control it. Although surveillance and oversight once literally meant “to watch from above,” increasingly, the word surveillance (and the word oversight, its literal English translation) has taken on a broader meaning. Now, the “sight” (veiller) is being used more broadly than only for visual sensing; surveillance now also refers to audio monitoring, pressure sensing (e.g., “smart” floor tiles), and other means of data collection. The “above” (sur) no longer only means to be in a physically high place (such as a high mountaintop lookout, as suggested by Sun Tzu in The Art of War): it is now thought of metaphorically, as in to be in a position of power in a hierarchy (e.g., police keeping watch over citizens, shopkeepers keeping watch over their shoppers), regardless of one's physical location. For example, the computationally intensive surveillance infrastructure depicted in such television shows as Person of Interest often, in fact, accumulates in a basement computer room or a deep underground data vault, rather than at the top of a high mountain, although its network of cameras often watches us from above. A police officer recording telephone conversations from the basement of a police headquarters is still doing “surveillance,” even if the officer is physically underground, listening intently on earphones.
A presidential oversight committee eavesdropping on that police officer, unbeknownst to the officer, is yet another form of surveillance—meta-surveillance (i.e., a form of “surveillance of the surveillance” that is not necessarily “sousveillance”).

The data, information, and knowledge we generate online and offline are increasingly the subjects of inspection, analysis, and aggregation by those in high places (governments, corporations, and other large organizations), as the NSA data net suggests. Surveillance has become more a matter of collecting and analyzing information than merely “looking down at people.” The very act of “looking” has become abstracted into algorithms and databases hidden behind the data shadows we leave behind [25]. We are no longer “looked at” from a hilltop or a high turret; we are now inspected in government and corporate databases years after our lives have changed. Data from our present and past can be searched and looked at by authorities and corporations across the boundaries of time and space (e.g., from distant cities or at times in the distant future). The “looking” does not stop there. Increasingly, algorithms are being taught to look ahead, to anticipate our potential actions in what is now called predictive analytics. What has remained constant is the power relationship between the gaze and its subject, which continues to favor the institutionalized agent (government, corporate, or hybrid entities).

Society and technology have moved us beyond the monolithic panoptic systems of surveillance into a regime often characterized by multiple levels, methods, and agents of looking. The central thesis that we are regulated by acts of looking and being seen remains a powerful argument. With advances in mobile computing, the surveilled is increasingly a surveiller, just as a Goffman performer is increasingly a member of his own audience. The act of exerting power by gazing at subjects is being further refined and undermined by increasingly fractionated practices of looking [23].

Politics of Sousveillance

Surveillance scholars have argued that we do not live in virtual Foucauldian prisons—at least not ones without some form of agency [30], [13]. From this perspective, new media, particularly the personalized broadcasting facilitated by devices such as networked wearables, have significant implications for the power dynamics within society; they are the windows that allow individuals to gaze back. Imagine that prisoners in Foucault's panopticon could look back, see their guards, and record their interactions. In The Matrix (1999), human beings are reduced to a shared delusional state and serve as mere batteries to their mechanical overlords. In George Orwell's 1984, citizens cannot see who is behind the telescreen. The power of mobilized media that are always on and able to broadcast and access a network of followers changes the broader notions of surveillance and oppression proposed by The Matrix and 1984. We are entering an age in which people can not only look back but, in doing so, can potentially drive social and political change. To paraphrase the film, no one can tell you what the Matrix is, but once you've seen it, you are immediately embroiled in the power politics of sousveillance.

Sousveillance has the potential to change the relationships embedded in the asymmetric social control that Foucault discussed as the formational characteristic of modern societies: the panoptic gaze. But sousveillance differs critically from surveillance in the relationship of power between the observing gaze and its subjects. Sousveillance represents a “gazing” from below. The viewer is, by definition, at a lower power potential than the subject of the gaze [16]. In this power triangle, if the viewer's incline is small, the efficacy required for effective undersight is relatively little. However, if the inclination is steep, the power required for effective change through sousveillance is much greater. When coupled with political action, the practice of viewing from below becomes a balancing force that helps democratic societies move the overall state toward a kind of veillance equilibrium, which has been referred to as equiveillance [5].

Sousveillance allows individuals to enact power over organized gazing units by establishing counter records—distributing data about illegality or abuse of power over individuated subjects. This individuates institutional practices and creates subjects of the institutional “other.” Although this is not new in the entire scheme of human society and experience (institutional watchdogs like civil liberties associations have existed for some time), wearables may extend the scope and potential of individuals opposing institutionalized power.

In relation to systems of institutionalized power, sousveillance can also be conceptualized as a construct of organization. Highly organized, institutionalized surveillance is often only possible through complex institutional systems, whereas less organized, decentralized mechanisms that are more informally distributed could be designated as sousveillant mechanisms. Sousveillance can then be conceptualized as distributed rather than bottom-up gazing. Thus, wearing a camera does not necessarily mean we are “shooting back” against surveillance but that we are distributing the surveillance. This perspective further suggests a kind of continuum, with different orders of gazing that are not mutually exclusive; the potential to design systems of distributed undersight with unprecedented capacity for democratization, provided one can afford the technology and access, now exists like never before.

Technologies of Sousveillance

Arguably, sousveillance depends more on technology than does surveillance. Technology is one mechanism that can help mediate the asymmetries of power between a viewer and a subject. In the case of surveillance, technology, as in Foucault's panopticon, can intensify the viewer's power over the subject. Although not absolutely necessary for sousveillance, pervasive digital mobile technologies can make sousveillance more effective through archiving, transmission, and distribution. Technology extends the range of veillance by facilitating the capabilities necessary to see (and record) the subject and to mobilize political force against the power incline.

Wearable technologies like the Digital Eye Glass can play a significant role in sousveillance precisely because looking from below is both practically and metaphorically disadvantageous (if unmediated by technology, such as glasses or a telescope). In this case, portable computing has not proved to be enough. In the early days of the World Wide Web, scholars jumped at the gnostic and democratic potentials of portable computing. As with other technologies, once the initial optimism and excitement began to wane, people realized that computing and transmission did not provide the critical mass necessary to create a cascade of large-scale institutional change. In recent years, however, mobile-networked devices have been combined with social networks that can trigger political disruption and change. Coupling portability, capture, storage, and distribution, portable media have allowed us to bring along content, but mobile media (portable media with dedicated Internet infrastructures) provide significant opportunities for individuals to capture power abuse or corruption as well as to quickly distribute and communicate it to others for political action. For example, a personal safety device that simply transmits and records data at a remote location protects the data from being destroyed by an attacker or perpetrator, regardless of whether the perpetrator is a low-level street thug or a high-ranking corrupt police officer. The coalescing of power through wearables and social media distribution represents a mechanism for potential undersight; adding sousveillance to the veillance mix supports the (re)shaping or re-envisioning of society into a more continuous dialogue spectrum between prisoners and guards, politicians and citizens, and bureaucrats and people. This relationship implies that there is power in the act of looking back and in the acts of many people looking, when mediated by technology. 
Even if an individual cannot see his or her guard, the looking back by many provides a kind of backchannel or social check-and-balance to ensure that the surveillance is operated within regulatory and sociopolitical boundaries. Looking back allows the individual presenting the self in everyday life to broadcast a subject's reaction and motivate an audience. Instead of one individual looking back at the panoptic guards, the wearable technologies make it possible to have multiple eyes looking back.

The potential of sousveillance is, however, also clearly linked to who controls the flow of the captured data, who commands the resources to analyze the data, and who has the power to act on the information derived from those data. Even if sousveillance becomes a tool for individuals and grassroots organizations, if state agencies are able to surreptitiously tap into the multitude of gathered data, it is far more likely that their analytic resources and established power structures will co-opt these practices and revert to top-down surveillance.

With mobile and pervasive computing quickly becoming part of our reality, sousveillance becomes increasingly possible. This shift toward the mainstream makes it particularly apropos to rethink the politics and ethics of sousveillance. The politics of sousveillance are themselves divisible into those that deal with the channels, media, and technology of sousveillance, which in turn shape its power and efficacy. The relationship between mediated and distributed undersight, technology, communication, and politics is forged at the systems design stage. If the media technology is designed to appease corporate interests or institutionalized surveillance lobbies, the final products will likely favor established power interests. As social constructionists of technology like Latour, Callon, Pinch, Bijker, and others (see [3] and [9]) have suggested, technologies have politics “baked right in,” as designers and engineers make decisions and tradeoffs even before users make choices about use. Therefore, the politics of sousveillance can potentially be “baked into” wearable technologies if gazing neutrality gives individuals more control over the information they see and record than the power preloaded by corporations and governments, the custodians and arbiters of the gathered information.

People Looking Back

Although the political recalibration between individuals and institutions resulting from the sur/sousveillant potentials of wearable technologies will have a direct ethical impact on social and political interactions from democracies to dictatorships, more immediate changes can already be seen in the implications for interpersonal interactions—the performing of identity between people in everyday life. The indiscriminate capture of information in public and quasi-public spaces (malls, bars, restaurants) will lead to changes in reasonable expectations of privacy. A fundamental ideal of privacy is that individuals can access information about themselves, correct it when wrong, and challenge the record using evidence. To deny sousveillant technologies is to deny a fundamental aspect of individual power over the institution as subject. But at the same time, is it right to allow privileged technophiles to create subjects of others in public and quasi-public spaces without their consent or their ability to screen or correct the broadcast or archive?

A video short by Infinity AR [12] asks the viewer to imagine a future where a mediated individual cheats at pool and picks up a bartender, illustrating some of the changes that could develop in daily life with the use of wearable technology [29]. The immediacy of the information provided by the Digital Eye Glass allows the protagonist, an obviously affluent and extraordinary specimen of humanity, to visualize his wardrobe, interact with his Ferrari, and navigate New York City streets. All these seem like credible and innocuous uses of the technology, but when he enters a Manhattan bar, there is a distinct repositioning of how the technology can be used. First, he plays a game of pool against an “ordinary” human—an individual unaugmented by the Digital Eye Glass. The pre-glass human does not benefit from the virtual force lines and angle options the technology provides to the protagonist, and it is easy to see how this lack of information disadvantages him and forces him to use his skill, mind, and imagination. From at least one vantage point, the discrepancy between the two players could be called cheating—steroids for the eyes and mind. The two are clearly on separate playing fields, but this game is also on a different interpersonal playing field from the next exchange.

After the pool game, the augmented human goes to the bar, where an attractive young bartender stands waiting. The face-recognition algorithm, seemingly unprompted, searches for her identity and pulls up her Facebook profile. Farfetched? Perhaps. Depending on the thresholds and tolerances of the algorithm and on its access to and integration with social media, the chances of such an exact and instant match seem questionable at this point, but progress is being made [1]. The Digital Eye Glass-equipped human sees before him the woman's profile, birth date, and astrological sign—tools that allow him to engage her in conversation and link to her in the social media universe. As in the case of the pool game, this, too, can be seen as cheating, although this time as a gendered game of power and self-presentation. Generally, men tend to be early adopters of new technologies, and as an early adopter, this man clearly has a technological advantage for personalized surveillance. He is able to use the Digital Eye Glass to search out information about the woman prior to, during, and after their encounter. Positioned as an audience member to the woman's self-presentation, the Glass-augmented man is no longer just reading the front-stage performance—the verbal and body-language cues—but is now able to technologically gauge physiological responses (pupil dilation, heart rate) at previously unattainable levels. Moreover, he now has unprecedented access to backstage information about the woman's identity. The technology and supporting infrastructures allow him to “friend” her on Facebook, invite her to his apartment, and “guess” her favorite wine. These capabilities further extend the asymmetries between the identity of the performer/performance and the audience's gaze, increasing the power differentials between the individuals; they could even be considered a form of social engineering.

There is another layer to this example—one that goes beyond the power politics of gender and self-presentation between peers. All the interactions between the individuals, the device, the social networks, and the data produced by the searches, face-recognition algorithms, geopositioning, and other forms of signal processing are likely stored and mined by the service providers. These data can likely be accessed and assessed by third parties, including government agencies, individualized or aggregated, and sold or distributed throughout multiple networks. M. G. Michael calls this the “axes of access” [22]. The foods eaten by the Glass wearer may be of interest to his health insurance provider, doctor, or local grocery outlets. The clothes the man chooses may be recorded, identified, and used to inform targeted advertising or other algorithmic surveillance. In this space, where decisions were once made privately (what we eat, how we dress) and in private places (personal kitchens, bedrooms, living rooms), the real collides with the virtual. The proximity of information to individual decision-making changes the immediacy of information to power and subject (see augmented reality implemented in contact lenses as an example [26]).

Conclusion

The iPad moved the masses off of desks and tables and contributed to a new norm for mobile and pervasive computing.

Wearables make us simultaneously potential sousveillant observers—with our own credible records—and passive drones of data collection for the institutional “other”—be it Google or the NSA. As Michael and Michael [20] write, “all this monitoring might also mean that we become acutely aware that we are being constantly watched and expected to act in particular ways in particular situations.” This negotiation of self ultimately impacts our own ability to be creative, different, diverse, and individual. It is not only privacy that is increasingly at risk but also the wonder of improvisation; we will be playing to a packed theater instead of being comfortable in our own skins and identities. The ethics of looking back are by no means monolithic; sousveillance itself can create subjects of the observed audience. When those who do not have looking-back technology get caught in the veillance net, they become subjects of the power that wearables establish in their agents. People who are caught in the gaze of the Digital Eye Glass become aspects of veillance records and data for analysis. The Digital Eye Glass worn into bars, on open streets, and even in operating theaters will invariably challenge reasonable privacy expectations, making casualties of those caught in its gaze and indiscriminately capturing data to be analyzed in real time or retrospectively. Those casualties may also act against the data gatherer in ways never imagined.

As much as we are subjects of institutional gazes, we are increasingly gazing back at institutions using technology, new media, and distributed “cloud” politics. More akin to the telegraph than to radio or television, new media are not only channels for unidirectional broadcasting but incorporate a feedback capacity and a mechanism for organization and action. When wearable devices reach mass adoption, they will change the power politics of looking. Wearables will undoubtedly be disruptive technologies in the economic sense, but they will also likely be disruptive in the ethical and political sense.

Where virtual spaces provide affordances to distribute multiple self-identities, the hybridization of space and information will create new links between performed identities and bodies. Bodies tracked by analytics will inevitably find it harder to seek refuge behind masks or identities performed on different stages. This will change the interactions between people and communities. If a person wearing the Digital Eye Glass can access your various profiles in real time and review your likes, dislikes, and network connections, the first meeting becomes an entirely different interaction than one between two unmediated people. In many ways, the mystery, spontaneity, and discovery are taken away. It is not an interaction of learning in the moment but an interaction of the lived and learned, already time past.

Wearables create a much more complex and nuanced system of mediated looking in which individuals become nodes of sousveillance and surveillance, depending on the intent, context, and proximity of information. Closer integration between external and internal information processing will mark an intermediate point that redefines our understanding of public and personal spaces, our understanding of privacy and instant mass information distribution, and our relationships with our information and ourselves. By breaking down the boundaries between the virtual and the real and by establishing mediated space, technologies like Google Glass, when adopted by the masses, will rewrite any reasonable expectation of privacy.

The data, information, and knowledge we generate online and offline are increasingly the subjects of inspection, analysis, and aggregation by those in high places.

Wearable technologies promise to augment the world by reducing the distance to information and communication technologies. Access to information and augmenting our looking practices will change what we see and sense as well as how we see and sense our world. When the technologies move beyond mediating our senses, do they become replacements for our senses, our way of understanding the world, and our ways of seeing affordances in the natural landscape? At what point does the technology begin to construct—rather than just mediate or augment—meaning? When the technology becomes an agent in constructing meaning (likely before it is even integrated into traditional biological boundaries), does the tool define what it is to be human? These are important questions to consider, especially at a time when we are already speculating on the socioethical implications of “uberveillance”—embedded surveillance devices for the body—which will herald an even greater pervasiveness.

References

1. J. Boone, Just when you thought Google Glass couldn't get creepier: new app allows strangers to ID you just by looking at you. E! Online, Feb. 2014, [online] Available: http://www.eonline.com/news/507361/just-when-you-thought-google-glass-couldn-t-get-creepier-new-app-allows-strangers-to-id-you-just-by-looking-at-you.

Introduction

The “internet of things” mantra promotes the potential for the interconnectedness of everyone and everything [1]. The fundamental premise is that embedded sensors (including audio and image) will usher in an age of convenience, security, and quick response [2]. We have become so oblivious to the presence and placement of sensors in civil infrastructure (e.g., shopping centers and lampposts) and computing devices (e.g., laptops and smartphones) that we do not question their placement in places of worship, restrooms, and, especially, children's toys [3].

The risk with consumer desensitization over the “sensors everywhere” paradigm is, at times, complacency, but, for the greater part, apathy. When functionality is hidden inside a black box or is wireless, consumers can underestimate the potential for harm. The old adage “what you don't know won't hurt you” is not true in this context and neither is the “I have nothing to hide” principle. Form factors can play a significant role in disarming buyers of white goods for households and gifts for minors. In context, the power of a sensor looks innocent when it is located in a children's toy, as opposed to sitting atop a mobile closed-circuit television policing unit.

Barbie is Watching

The Mattel Vidster is a digital tapeless camcorder that was marketed as a children's toy. It features a 28-mm LCD display and a 2x digital zoom, and records 320 × 240 AVI video files encoded with the M-JPEG codec at 15 frames/s, with 22-kHz monaural sound. It also takes still photos.

An example of this shift in context is Mattel's Video Girl Barbie doll, launched in July 2010 [4]. It features a fully functional standard-definition pinhole video camera embedded in Barbie's chest, with a viewing screen on her back. Young children (Mattel targets ages six and above) are supported by the user design to record from a “doll's-eye view”—Barbie's point of view—for up to 30 min. They can then create movies using the accompanying StoryTeller software. Video Girl comes with a (pink) USB plug-in cord for easy upload of the recorded footage. Initially, Mattel provided storage space in the cloud for video makers to share movies (http://barbie.com/videogirl), but the company later backtracked and eliminated this video-sharing capability. We have speculated that one of Mattel's reasons for doing so was that it faced the prospect of footage recorded at ground level exposing young, carefree children at play.

The Barbie Video Girl doll—Create movies from Barbie's point-of-view with a real video camera inside the doll (the camera lens is in the necklace, and the video screen is on her back).

In his book Cybercrime, Jonathan Clough makes it clear that offenses for child pornography are stipulated in Title 3, Article 9 of the Cybercrime Convention as producing, offering or making available, distributing or transmitting, procuring, or possessing child pornography [5], [p. 281]. While definitions of what constitutes an offense under child pornography laws vary greatly from one country to the next, court cases worldwide are providing clear precedents for unacceptable behaviors. It is quite possible that Mattel did not wish to find itself in the precarious situation of “offering or making available” debatable imagery of young children or as a potential, albeit accidental, accessory for possession. In essence, this places the manufacturer at the mercy of those who would label them as groomers or even procurers of child pornography, engineers of another insidious arm of the child pornographer. Three of the offenses that constitute the “making available” category of child pornography laws include to publish, make available, and show [5], [p. 287]. Mattel had obviously not thought through all the pros and cons associated with video sharing by minors. In fact, in most social media web sites, Facebook and Instagram included, policies preclude those under the age of 13 from registration and participation.

Four months after the official launch of Video Girl, the U.S. Federal Bureau of Investigation (FBI) privately issued a warning that the doll could be used to produce child pornography [6]. On 30 November 2010, in a situational information report “cybercrime alert” from its Sacramento field office, the FBI publicly stated that there was “no reported evidence that the doll had been used in any way other than intended” [7], [8]. However, the report also noted an instance in which an individual convicted of distributing child pornography had given the Barbie doll to a six-year-old girl, as well as numerous instances in which concealed video cameras had been used to record child pornography. None of these events is surprising [9]. The most obvious form of possession, with respect to the Barbie, would be if the accused had the item in his or her “present manual custody.” For example, if the defendant was found to be holding a Video Girl Barbie doll containing child pornography images or video, then, subject to the requirement of knowledge, he or she would be in possession of those images or video. Likewise, if the doll was found within the defendant's physical control (e.g., in his or her house), even that would constitute an offense.

There are professionals who have filmed Video Girl Barbie in a sexualized manner [10], but that in itself is not an offense. Although the YouTube video that compares the camera quality of the Canon 7D to Video Girl is unlisted (only people who know the link can view it, and unlisted videos do not appear in YouTube search results), it sadly shows what distortion is possible through adult eyes, using arguably borderline “adult” humor. In the YouTube comments for the video, Naxell wrote, “[t]hat USB in the back and the leg batteries make this seem like some kind of bizarre multipurpose sex gynoid,” while Marcos Vidal wrote, “Well, think on the Barbie's use; it can spy—with Cannon 7D, it's a lot harder.” While no one is claiming that Vidal was referring to the recording of a child for duplicitous reasons, his comment certainly suggests that Barbie could be used as a covert camera. Essentially, the doll takes a form of child's play and makes it an asset of the cloud, available for future use and possible manipulation. And this is a fundamental issue in the new type of cybercrime: “the advent of digital technology has transformed the way in which child pornography is produced and distributed” [5], [p. 251]. In essence, child pornography can be defined as “the sexual depiction of a child under a certain age” [5], [p. 255].

Marketing Mishaps

While we do not need to point to a video someone has made of Barbie and her super-power recording prowess “under the hood,” we can simply look at Mattel's poor taste in advertising strategy for the Video Girl doll as a children's toy. The key question is whether those who engineered the doll at Mattel understand that they are accountable for the purposeful user design and user experience they have created [11]. In a press release, the company stated, “Mattel products are designed with children and their best interests in mind. Many of Mattel's employees are parents themselves, and we understand the importance of child safety—it is our number one priority” [12].

The Barbie Video Girl doll is “doll vision” for ages 6 and above.

At the time of the online media content review in early 2011, one of the authors, Katina Michael, was horrified to find some of the disturbing ways in which Mattel had softly launched the product. In fact, the doll sold out at Wal-Mart in its first release. The other author, Alexander Hayes, purchased a Barbie Video Girl in 2010 to inform his Ph.D. research on point-of-view technologies, and he told Katina that the doll was “hideous…a manifestation of the most cruel manner in which to permeate a child's play.” Katina agreed and noted that the purchased Barbie would remain forever unopened, because the packaging itself formed part of the bigger picture and would serve as a stimulus for discussion with public audiences. Katina used the packaged Barbie during her presentation at the Fourth Regional Conference on Cybercrime and International Criminal Cooperation, which was well attended by law enforcement agencies, legal personnel, and scholars in the social implications of technology [13]. The Video Girl Barbie also made further appearances at the February 2012 SINS Workshop, “Point-of-View Technologies in Law Enforcement” [14], and an invited workshop at which Katina and Alexander spoke, the 2013 INFORMA Policing Technology Conference on the theme “Bring Your Own Body-Worn Device” [15].

In July 2010, Mattel released Barbie Video Girl, a doll with a pinhole video camera in its chest enabling clips up to 30 min to be recorded.

Perhaps the most disturbing and disappointing aspect of the Video Girl Barbie was the way in which the doll was marketed. On the packaging was the statement “I am a real working video camera.” This vernacular is akin to that of adult sex work and does not fit with the societal moral and ethical frameworks by which we protect innocent children. It is questionable why the word working was introduced into the phraseology. In essence, Video Girl Barbie is a photoborg [16]. She is reminiscent of Mattel's Vidster video camera toy for kids [17], cloaked in the form of a Barbie doll. Elsewhere, Mattel mentions: “Necklace is a real camera lens!” But the location of the camera on the chest looks less like a necklace and more like cleavage, with an additional statement: “This Barbie has a hidden video camera” [18]. There was also a picture of Barbie depicted on her knees with a visual didactic stating “for easy shooting,” indicating the three steps to making a movie. The storytelling video demo scenario Mattel used had to do with cats at the vet and was generally in poor taste. The cat was depicted getting her heartbeat monitored in one scene, getting an X-ray in another, and then finding herself in a basket with another cat and finding love, with a heart symbol depicted above the cats' heads.

Comments varied for iJustine's video “OMG Video Girl,” which has more than 1.4 million YouTube views [19]. Here was a female adult commenting on a toy for kids. Taylor Johnson wrote, “My Favorite was the vet Barbie! Haha!” Mssjasmine commented, “That doll is kinda creepy (like a pedophile would buy that to watch little kids…ew).” Sam Speirs similarly wrote, “This ‘toy’ of yours will/could be used as a major predator trap! And I know that the idea was for the girls to have a camera [to] do stuff, but, seriously, it's a concealed camera in a popular little girl's toy…Creepy, if you ask me!” Another product reviewer of children's toys wrote: “Barbie sees everything from a whole different angle” [20]. There were several “Boycott Barbie” websites found in 2011: “Get Rid of Barbie Video Girl” Facebook page and “Boycott Porno Barbie.”

A child plays with traditional dolls. Today, we are making dolls that are connected to the cloud and use artificial intelligence to listen to questions from children and provide them answers over the Internet without human intervention. Soon, we will be asking the question “what is real?”

Perhaps the worst example of Mattel's approach to this product was its initial press release (sent to TechCrunch by the PR firm responsible), which stated: “Unsuspecting subjects won't know that Barbie is watching their every move…” [21]. The issues for Mattel to consider have much to do with corporate responsibility. Even setting aside the potential for pedophiles to use this technology to cause harm, what happens if innocent children produce illegal content, an act that would otherwise mean criminalization? Could the doll be used to groom and seduce victims of child pornography?

Hello? Barbie is Listening

But Mattel, like most high-tech manufacturers, has not stopped there. Convergence has become an integral part of the development cycle. If the Barbie Video Girl doll seemed amazing as a concept, then the Hello Barbie doll has outdone it. In its own words, Mattel states that the Hello Barbie is “a whole new way to play with Barbie!” She differs from Barbie Video Girl in several ways. The doll still comes equipped with a whole bunch of electronics, but Hello Barbie uses speech-recognition technology to hold a conversation with a child and only allows for still-shot photo capture. The product information page on Mattel's website reads:

Using Wi-Fi and speech-recognition technology, Hello Barbie doll can interact uniquely with each child by holding conversations, playing games, sharing stories, and even telling jokes! […] Use is simple after set up—push the doll's belt buckle to start a conversation, and release to hear her respond […] To get started, download the Hello Barbie companion app to your own smart device from your device's app store (not included). Parents must also set up a ToyTalk account and connect the doll to use the conversational features. Hello Barbie doll can remember up to three different Wi-Fi locations [22].

Thus, the doll transmits data back to a service called ToyTalk. Forbes reported that ToyTalk has terms of service and a privacy policy that allow it to “share audio recordings with third-party vendors who assist [Mattel] with speech recognition.” Customer “recordings and photos may also be used for research and development purposes, such as to improve speech recognition technology and artificial intelligence algorithms and create better entertainment experiences” [23]. There is, however, a “SafePlay” option, where parents and guardians are still “in control of their child's data and can manage this data through the ToyTalk account at any time” [22].
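The conversation loop described above—press the belt buckle, record, upload over Wi-Fi for third-party speech recognition, hear a reply—can be sketched generically. The following Python mock is purely illustrative: every name in it is hypothetical, and it stands in for the generic record–upload–recognize–respond pattern of cloud-connected toys, not ToyTalk's actual API.

```python
# Illustrative sketch of a cloud-connected toy's conversation loop.
# All function and variable names are hypothetical.

def cloud_recognize(audio_bytes: bytes) -> str:
    """Stand-in for a third-party speech-recognition service.
    In a real toy, this is a network call that ships the child's
    voice recording to remote servers -- the privacy concern."""
    return audio_bytes.decode("utf-8", errors="ignore")

def choose_reply(utterance: str) -> str:
    """Stand-in for the dialogue engine that picks a scripted answer."""
    scripted = {
        "hello": "Hi! Want to play a game?",
        "tell me a joke": "Why did the doll cross the road?",
    }
    return scripted.get(utterance.lower(), "Tell me more!")

def on_button_press(recorded_audio: bytes) -> str:
    """The belt-buckle press: capture while held, respond on release."""
    utterance = cloud_recognize(recorded_audio)  # audio leaves the home here
    return choose_reply(utterance)
```

The privacy concern raised in the Forbes report maps to a single line: the call that sends the recording out of the home. Every utterance becomes a network transaction with a third party, which is why the terms of service governing those recordings matter so much.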

To manage SafePlay, parents must visit www.mattel.com/hellobarbiefaq to get more information, or call +1 888 256 0224—and every parent will certainly have time to do this [24]. “Parents must also set up a ToyTalk account and connect to use the conversational features…Use of Hello Barbie involves recording of voice data; see ToyTalk's privacy policy at http://www.toytalk.” Of course, it is not the parents who will end up downloading these apps but the children.

Continued Infiltration

This raises many questions about the trajectory of toys and everyday products that increasingly contain networked features, introducing new parameters to what was once innocent, unseen, and carefree child's play. First, Samsung launched a television set that can hear household conversations [25], and now we are to believe that it is the real Barbie who is “chatting” with our children. Are we too blind to see what is occurring? Is this really play? Or is it the best way of gathering marketing data and manipulating those too young to know that the Barbie talking to them is not real but, in fact, a robot of sorts? Just as we were once oblivious to the fact that our typed entries in search boxes were being collated to study our habits, likes, and dislikes, we are presently oblivious to the onslaught of products that are trying to infiltrate our homes and even our minds.

A spate of products has entered the market doing exactly the same thing as Hello Barbie but targeting a variety of vertical segments—from Amazon Echo for families who allegedly need a cloud connector because they cannot spell words like cantaloupe [26], [27] to NEST's thermostat and smoke-detection capability that doubles as human activity monitoring and tracking (NEST says so openly in its promotional commercials) [28], to DropCam's reconnaissance video recordings of what happens in your household 24/7, just in case there is a perpetrator who dares to enter [29].

Cayla is Talking—And It's Not Always Pretty

Perhaps our “favorite” is the My Friend Cayla doll [30], which connects to the cloud like the Hello Barbie. She is seemingly innocent but has shown herself to be the stuff of nightmares, akin to the horror movie Child's Play featuring the character Chucky [31]. On the Australian Cayla page, potential buyers are again greeted by a splash page with a cat on it: “I love my cat Lily. I will tell you her story.” Cayla is depicted talking to two little girls. The British Christmas best seller is effectively a Bluetooth headset dressed as a doll. With the help of a Wi-Fi connection (like Hello Barbie), she can answer a whole lot of tough questions, Amazon Echo style, and you would be surprised at her capacity [32]. But security researcher Ken Munro from Pen Test Partners put Cayla to the test and identified some major security flaws that could give perpetrators a way in. In essence, Cayla was hacked. She was made to speak a list of 1,500 strong words and expletives, and her responses to questions were modified [33].
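The reported modification of Cayla's responses is consistent with a general weakness: when a toy's word filter or response database lives client-side on the paired device, anyone with access to that device can edit it. The following is a minimal sketch of why such a filter is bypassable; the word list and function names are hypothetical illustrations, not Pen Test Partners' actual findings.

```python
# A client-side profanity filter: the banned-word list ships with the
# companion app on the paired phone, where a user (or attacker) can edit it.

def filter_reply(reply: str, banned: set) -> str:
    """Suppress a reply if it appears on the locally stored ban list."""
    return "I can't say that!" if reply.lower() in banned else reply

shipped_blacklist = {"expletive1", "expletive2"}  # hypothetical terms

# With the shipped list, a banned reply is suppressed;
# after tampering (e.g., emptying the local list), nothing is blocked.
tampered_blacklist = set()
```

The design lesson is that any safety control enforced only on a device the attacker can touch is advisory at best; robust filtering has to happen server-side, behind authentication.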

This reminds us of the 2015 article in IEEE Technology and Society Magazine by K. Albrecht and L. McIntyre on IP cameras that double as baby monitors [34]. The moral of the story is the same whether the cloud-connected device is a children's monitor, children's toy, desktop game for kids, television console, Q&A tool for households, or a plain-old Wi-Fi-enabled smoke detector or thermostat: if it's connected, then it's vulnerable to security hacks and breaches in privacy [35]. Worse still, if it can talk back to you in the spoken word, then you need to think about the logic behind the process and what we are teaching our children about what is human and what is not. If these electronics products are going back to the Internet seeking results, then don't be surprised if nonphysical autonomous software robots one day begin to spit out bizarre answers and manipulative responses based on what is out there on the Internet.

As Kate Darling said in a Berkman talk at Harvard University in 2013, “[s]o not to undermine everything that I've just said here, but I do wonder…Say McDonald's gets its hands on a whole bunch of children's toys that are social robots and interacts with the kids socially, and the toys are telling the kids…to eat more McDonald's, and the kids are responding to that. That is something that we also need to think about and talk about, when these things start to happen. They could be used for good and for evil” [36]. If only that is all they will be saying to the next generation!

Katina visited the My Friend Cayla website recently and found this message: “Due to changes in the external website which Cayla gets some information from, she is temporarily unable to answer some types of questions. Cayla can still talk about herself, do maths and spelling, and all other functions are unaffected. A free app update will be issued (for both iOS and Android users) within the next two weeks with a fix. Thank you for your understanding” [37]. Keeping our children safe and aware of the difference between virtual and real is one thing, but, if we aren't careful, we will soon welcome a future where My Friend Cayla might well be facing off against Hello Barbie in another Child's Play blockbuster.

13. K. Michael, "The FBI's cybercrime alert on Mattel's Barbie Video Girl: A possible method for the production of child pornography or just another point of view," in Proc. 4th Regional Conf. Cybercrime and International Criminal Cooperation, May 19–20, 2011.

14. K. Michael and M. G. Michael, "Point of view technologies in law enforcement," in The Social Implications of National Security. Sydney, Australia: Sydney Univ., 2012.

Introduction

What happens when experimental technologies are deployed into society by market leaders without much forethought about the consequences for everyday life? When state-based regulations are deliberately ignored by rapid innovation design practices, giving birth to unconventional and radical products, a whole series of impacts plays out in real life. One such example is Google's Glass product: an optical head-mounted display unit that is effectively a wearable computer. In early 2013, Google reached out to U.S. citizens, asking potential Glass users to send a Twitter message with the #IfIHadGlass hashtag to qualify for consideration; those deemed eligible could then pay US$1,500 for early adoption of the product. About 8,000 consumers in the United States were allegedly invited to purchase the Explorer edition of Glass. By April 2013, Google had opened up Glass to its “Innovation in the Open” (I/O) developer community, and by May 2014, it allowed purchases of the product from anywhere in the world.

The early adopters of the open beta product quickly became tech evangelists for the Google brand. As was expected, the touted benefits of Glass were projected by the self-professed “Glassholes” as mainstream benefits to society via YouTube and Hangouts. Tech-savvy value-added service providers who stood to gain from adoption, and citizens who wished to be recognized as forward-thinking, entrepreneurial, and cool, came to almost instantaneous fame. Only a few dissenting voices were audible during the trialability phase of diffusion, with most people in society either not paying much attention to “yet another device launch” by Google or dismissing the wearers as geeks working on hip stuff. About the biggest thought people had when confronted by one of these “glasses” in reality was “What's that?” followed by “Are you recording me?” The media played an interesting role in at least highlighting some of the potential risks of the technology, but, for the most part, Glass was depicted as a next-generation technology that was here now and that even Australia's own then-Prime Minister Julia Gillard had to try out. Yep, another whiz-bang product that most of us would not dare to live without.

With apparently no limits set, users of Glass have applied the device to diverse contexts, from the operating theater in hospitals to preschools in education and evidence gathering in policing. Yes, it is here, right now. Google claims no responsibility for how its product is applied by individual consumers, and why should they—they're a tech company, right? Caveat emptor! But from the global to the local, Glass has received some very mixed reactions from society at large.

Scenario-Planning Approach

This article focuses on the social-ethical implications of Glass-style devices in a campus setting. It uses secondary sources of evidence to inspire nine short scenarios that depict a plausible “day in the life” of a person possessing a body-worn video camera. A scenario is “an internally consistent view of what the future might turn out to be” [1]. One gleans the current state of technology to map the future trajectory [2, p. 402]. Scenarios afford us, as researchers, two distinct opportunities: 1) to anticipate possible and desirable changes to society brought about by the introduction of a new technology, known as proactivity, and 2) to prepare for action before a technology is introduced into the mainstream, known as preactivity [3, p. 8]. While change is inevitable as technology develops and is diffused into society, we should be able to assess possible strategic directions to better prepare for expected changes and, to an extent, unexpected changes. This article aims to raise awareness of the possible social, cultural, and ethical implications of body-worn video recorders. It purposefully focuses on signs of threats and opportunities that body-worn recording devices presently raise in a campus setting such as a university [1, p. 59]. A similar approach was used successfully in [4] with respect to location-based services in 2007.

In February 2013, Katina and M.G. Michael were invited to write an opinion piece about the ethics of wearable cameras for Communications of the ACM (CACM) [5]. Upon the article's acceptance in September of the same year, the CACM editor provided the option of submitting a short video to accompany the article online, to act as a summary of the issues addressed. After some initial correspondence on prospective scenarios, and encouraged by the University of Wollongong's videographer, Adam Preston from Learning, Teaching and Curriculum, it was jointly decided to simulate the Glass experience with a head-mounted GoPro camera [6] and to discuss on camera some of the themes presented in the article within a university campus setting (Figure 1). A few months prior, in June, Katina had hosted the International Symposium on Technology and Society (ISTAS13) with wearable pioneer Prof. Steve Mann [7]. Ethics approval for filming the three-day international symposium with a variety of wearable recorders was gained from the University of Wollongong's Human Research Ethics Committee (HREC) for the University of Toronto-based event. Importantly, it must be emphasized that the scenarios themselves are fictitious in terms of the characters and continuity. They did not happen in the manner stated, but, like a tapestry, they have been woven together to tell a larger story. That story is titled: “Recording on the Run.” Each scenario can be read in isolation but, when placed side by side with the other scenarios, becomes a telling narrative of what might be with respect to societal implications if such recording devices proliferate.

Figure 1. A GoPro device clipped to an elastic headband, ready to mount on a user. Photo courtesy of Katina Michael.

Having hired the videographer for 2 h to do the filming for CACM, we preplanned a walkthrough of the University of Wollongong's campus (Figure 2). Deniz Gokyer (Figures 3 and 4) was approached to play the protagonist GoPro wearer in the video, as he was undertaking a master's major project on wearables in the School of Information Systems and Technology. Lifelogging websites such as Glogger.mobi, which publish point-of-view (POV) video content direct from a mobile device, were also used to support claims made in the scenarios. The key question pondered at the conclusion of the scenarios is, how do we deal with the ever-increasing complexity in the global innovation environment that continues to emerge around us with seemingly no boundaries whatsoever? The scenarios are deliberately not interpreted by the authors, to allow for debate and discussion. The primary purpose of the article is to demonstrate that body-worn recording products can have some very significant expected and unexpected side effects, and can additionally conflict with state laws and regulations and with campus-based policies and guidelines.

Figure 2. (a) The making of a short video to discuss the ethical implications of wearable devices for CACM. (b) The simultaneous GoPro view emanating from the user's head-mounted device. Screenshots courtesy of Adam Preston.

Figure 4. The aftereffect of wearing a GoPro mounted on an elastic band for 2 h. Photo courtesy of Katina Michael.

Recording on the Run

Scenario 1: The Lecture

Anthony rushed into his morning lecture on structures some 10 min late. Everyone had their heads down taking copious notes and listening to their dedicated professor as he provided some guidance on how to prepare for the final examination, which was worth 50% of their total mark. Anthony was mad at himself for being late, but the bus driver had not accepted his AUD$20 note in lieu of the Opal card that was now in use. Prof. Markson turned to the board and began writing the practice equations wildly, knowing that he had so much to get through. Anthony made sure to keep his hands free of anything that would sidetrack him. Instead, he recorded the lecture with a GoPro on his head. Some of the girls giggled in the back row as he probably looked rather stupid, but the laughter soon subsided and everyone got back to work, copying down Markson's examples. At one stage, Markson turned to look at what the giggles were about, made startled eye contact with Anthony, and probably thought to himself: “What's that? Whatever it is, it's not going to help him pass—nothing but calculators are allowed in exam situations.”

Anthony caught sight of Sophie, who motioned for him to go to the back row, but by then, he thought it would probably be better to record from the very front, and that he would cause less disruption by just sitting there. Markson was a little behind the times when it came to innovation in teaching, but he was a brilliant lecturer and tutor. Anthony thought to himself that if anyone asked for the recording, he would make sure it was available to them. The other students took note of the device that was firmly strapped to his head with a band but were somewhat unfazed. Anthony had always argued that recording with a GoPro is nothing more than recording with a mobile phone. He surfed a lot at Austinmer Beach, and he thought the video he took of himself on the board was just awesome, even though his girlfriend thought it was vain. It was like a motion selfie.

Scenario 2: The Restroom

It had been one long day, practically like any other, save for the fact that today Anthony had chosen to wear the GoPro on a head-mounted bandana to record his lectures. They were in the serious part of the session, and he wanted to make sure that he had every opportunity to pass. Anthony was so tired from pulling an all-nighter with assessment tasks that he didn't even realize that he had walked into the restroom toward the end of his morning lecture with the device switched on and recording everything in full view. Lucky for him, no one had been accidentally caught on film while in midstream. Instead, as he walked in, he was greeted by someone who was walking out and a second guy who avoided eye contact but likely noticed the camera on Anthony's head from the reflection in the mirror while washing his hands. The third one didn't even care but just kept on doing what he was doing, and the fourth locked his eyes onto the camera with rage for a while. The man didn't speak, but Anthony could sense what he was thinking—“what the heck?” Anthony was an attractive young man who sported tattoos and always tried to look different in some way. He hated conformity. Later, when he watched the video to extract the lecture material, he wondered why no one in the restroom had stopped him to punch the living daylights out of him. Anthony had thought people were getting used to the pervasiveness of cameras everywhere—not just in the street and in lecture theaters but also in restrooms and probably soon in their homes as well.

Scenario 3: The Corridor

By this time, Anthony was feeling rather hungry. In fact, he was so hungry that he was beginning to feel very weak. All of those late nights were beginning to catch up now. Sophie demanded that they go eat before the afternoon lecture. As they walked out of the main tower building, they bumped into an acquaintance from the previous session. Oxford, as he was known by his Aussie name, was always polite. The conversation went something like this. “Hello Oxford! How are you?” said Sophie. Oxford replied, “I'm fine, thank you. Good to see you guys!” Sophie quickly pointed to Anthony's head-mounted camera and said, “Oxford, can you believe how desperate Anthony has become? He's even recording his lectures with this thing now!” Oxford, who was surprised, remarked, “Oh yeah. I've never seen one of these before. Are you recording right now, Anthony?” “Yes, I am,” Anthony affirmed, “but to be honest, I completely forgot about it—I'm dreaming about food right now.” Anthony patted his tummy, which was by now making grumbling noises. “Want to come with us to the café near the gymnasium?” Anthony asked.

“He just filmed most of the structures lecture—I'm thinking like, this might be the coolest thing that might stick,” Sophie reflected, ignoring Anthony. “No kidding,” Oxford said, “You're recording me right now? I'm not exactly thrilled about this, but ‘hi,’ for what it's worth.” Oxford waved to the camera and smiled. Sophie interjected, “Oxford, it is not like he's making a movie of you, haha!” Sophie grabbed Oxford's arm to pull it toward her—the jab was meant to make it clear she was joking. But suddenly, things turned serious rather than lighter. Oxford continued, “No, I'm not quite good in front of the camera…like I don't like pictures being taken of me or even recordings of my voice. It's probably the way I was raised back home.”

Anthony told Oxford not to worry because he was not looking at him, and so nothing but his voice was really being recorded. Little did he realize that he was breaking local New South Wales laws—or at least that was what he would find out later in the day when someone from security spotted him on campus. Sophie asked with curiosity, “Do you think someone should ask you if they want to record you on campus?” Oxford thought that was a no-brainer—“Of course they should ask. You're wearing this thing on your head, and there's nothing telling people passing by whether you are watching them and recording them. C'mon Anthony, you're a smart guy, you should know this stuff; you're studying engineering, aren't you? We're supposed to be the ones that think of everything before it actually happens. You might as well be a walking CCTV camera.” There was dead silence among the friends. Then Anthony blurted out, “But I'm not watching you; you just happen to be in my field of view.”

Sophie began to consider the deeper implications while Anthony was getting flustered. He wanted to eat, and they were just beginning a philosophical conversation. “C'mon Oxford, come with us, we're starving…and we can talk more at lunch, even though we should be studying.” As they walked, Sophie continued: “It's not like this is the worst form of camera that could be watching. I saw this thing on the news a couple of weeks ago. The cameras are getting tinier; you cannot even see them. The company was called OzSpy, I think, and they're importing cheap stuff from Asia, but I don't think it's legal in every state. The cameras are now embedded in USBs, wristbands, pens, keyfobs, bags, and t-shirts. How do you know you're being recorded with that kind of stuff?” Oxford was beginning to feel uneasy. Anthony felt like taking off the contraption but left it on because he was just too lazy to put the thing back in its box and then back on again in less than 2 h. Oxford confessed again: “I feel uncomfortable around cameras, and it's not because I'm doing anything wrong.” They walked quietly for a few minutes and then got to the café. Sophie pointed to the wall as they queued. “Look up there. It's not like we're not always under surveillance. What's the difference if it is on a building wall versus on someone's head?”

Anthony wished they'd change the subject because it was starting to become a little boring to him. Oxford thoughtfully replied to Sophie, “Maybe it's your culture or something, but I even wave to CCTV cameras because it's only for security to see on campus. But if someone else is recording me, I don't know how he or she will use the footage against me. I don't like that at all. I think if you're recording me to show other people, then I don't think it's okay at all.” Sophie chuckled, “Hey, Oxford, this way Anthony will never forget you even when you have finished your degree and return to Thailand in ten years; when he is rich and famous, he'll remember the good old days.” The truth was that Oxford never wanted to return to Thailand; he liked the opportunities in Australia but added, “Okay, so you will remember me and my voice forever.”

By this time, Anthony was at the front of the queue. “Guys, can we forget about this now? I need to order. Okay, Oxford, I promise to delete it if that makes you feel better.” Oxford said, “No, Anthony, you don't understand me. I don't mind if you keep this for old times' sake, but just don't put it on the Internet. I mean don't make it public, that's all. Guys, I just remembered I have to go and return some library books so I don't get a fine. It's been nice chatting. Sorry I cannot stay for lunch. Good luck in your finals—let's catch up and do something after exams.” “Sure thing,” Sophie said. “See ya.” As Oxford left and Anthony ordered food, she exclaimed, “Your hair is going to be great on the video!” Oxford replied, “I know my hair is always great, but this jacket I am wearing is pretty old.” Oxford continued from afar, “Anthony, remind me to wear something nicer next time. Bye now.” Sophie waved as Oxford ran into the distance.

Scenario 4: Ordering at the Cafe

Anthony ordered a cappuccino and his favorite chicken and avocado toastie. The manager, who was in his 50s, asked for Anthony's name to write on the cup. “That will be $10.” Anthony handed over a note and waited for change. “And how are you today?” asked the manager. “I'm fine thanks.” “Yeah, good,” replied the manager, “Okay, see you later, and have a good one.” Anthony muttered, “I'll try.” Next it was Sophie's turn to order. “What's up with him?” asked the café manager. “What's that thing on his head? He looks like a goose.” Sophie cracked up laughing and struck up a conversation with the manager. She was known to be friendly to everyone.

Anthony went to the service area waiting for his cappuccino and toastie. For once, the line was not a mile long. The male attendant asked Anthony, “What's with the camera?” By then, Anthony had decided that he'd play along—sick of feeling like he had to defend himself, yet again. He wasn't holding a gun after all. What was the big deal? He replied, “What's with the camera, mate? Well, I'm recording you right now.” “Oh, okay, awesome,” said the male attendant. Anthony probed, “How do you feel about that?” The male attendant answered, “Well, I don't really like it man.” “Yeah, why not?” asked Anthony, trying to figure out what all the hoo-ha was about. There were CCTV cameras crawling all over campus, and many of them were now even embedded in light fixtures.

“Hey, Josie, Josie—how do you feel about being filmed?” exclaimed the male attendant to the female barista cheekily. “I don't really mind. I always wanted to be an actress when I was little, here's my chance!” “Yeah?!” asked Anthony, in a surprised tone. “Are you filming me right now? Are you going to make me look real good?” laughed the barista in a frisky voice. Anthony smiled and, by then, Sophie had joined him at the service area, a little jealous. “What's this for?” asked Josie. She had worked on campus for a long time and was used to serving all sorts of weirdos. “No reason. I just filmed my structures class. And now, well now, I've just decided to keep the camera rolling.” Josie asked again, “Are you really filming me right now?” Anthony reaffirmed, “Yes.”

Sophie looked on in disbelief. The camera had just become the focal point for flirtation. She wasn't liking it one bit. Josie asked Anthony again, “Why are you filming?” Anthony didn't know why he blurted out what he did, but he said, “Umm…to sort of get the reactions of people. Like how they act when they see someone actually recording them.” The male attendant interrupted, pointing to the manager: “You know what you should do? You should go up to him and just stare at him, like just stare him in the face.” “I will, I will,” said Anthony. Egging Anthony on, the male attendant smiled, “Stand in front of the queue there, and just stare at him. He'll love it, he'll love it, trust me. You'd make his day man.” “Hey, where's my cappuccino and toastie?” demanded Anthony. The male attendant handed the food over and got Sophie's food ready too. “And this must be yours.” “Yes,” Sophie replied. The male attendant insisted: “Focus on him now, don't focus on me, all right?” “Yup, ok, see you later. Cheers.” Anthony felt a little diminished; although he was surprised that the barista had talked to him for as long as she did, he wasn't about to pick a fight with an old bloke. What he was doing was harmless, he thought; he left the counter to take a seat but considered switching off the device.

Scenario 5: Finding a Table at the Cafe

Sophie found a table with two seats left in a sunny spot and put her things down. Lack of sleep during exam time meant that everyone generally felt cold. Anthony sat down as well. At the large oblong table was a small group of three—two girls and a guy. Sophie went looking for serviettes, as they had forgotten them at the counter. As soon as Anthony pulled up a chair to sit down, one of the girls got up and said, “And you have a lovely afternoon.” Anthony replied, “Thank you and you too.” Speechless, the other two students at the table picked up whatever was left of their drinks and left not long after. As Sophie returned, she saw the small group leaving and whispered, “Anthony, maybe you should take that thing off. You're getting quite a bit of attention. It's not good. A joke's a joke. Alright, I could cope with the classroom situation, but coming to the café and telling people you're recording. Surely, you are not, right? You're just kidding, right?” “Listen, Sophie, I'm recording you now. The battery pack lasts a while, about an hour, before it needs replacing. I'm going to have to charge the backup during the next lecture.” “Anthony,” Sophie whined, “c'mon, just turn it off.” Anthony made a show of reluctantly turning it off, although he had not. “Now put it away,” Sophie insisted. “No, I'm going to leave it on my head,” Anthony said. “I couldn't be bothered, to tell you honestly. Just don't forget to remind me to turn it back on when we are in class.” “Good,” said Sophie.

By then, two girls asked if they could sit down at the table. “Sure,” said Sophie. The girls were known to Sophie from the Residence, but they had merely exchanged niceties before. “My name is Klara,” said one of the girls. “And my name is Cygneta,” said the other. “I'm Sophie, and this is my boyfriend Anthony. Nice to finally get to talk to you. That'd be right. Just when we should all be studying, we're procrastinating and socializing.” Anthony was happy for the change of conversation, or so he thought.

“I know what that is, Anthony! It's a GoPro,” Cygneta exclaimed. “Sophie, Sophie, I wouldn't let my man carry that thing around on campus filming all those pretty ladies.” Cygneta giggled childishly, and Klara joined her in harmony, though she did not know anything about the contraption on Anthony's head. Sophie was reminded why she had never bothered approaching Cygneta at the Residence. Those two were inseparable and always too cute—the typical creative arts and marketing students. Sophie retorted, “Well, he's not filming right now. He just filmed the lecture we were in.” Anthony made Sophie think twice. “How do you know I'm not filming right now?” Sophie said, “Because the counter on the LCD is not ticking.” Cygneta, who had used a GoPro to film her major project, shared with the group that the LCD could be toggled not to show a counter. Sophie didn't like it one bit. It made her doubt Anthony.

Anthony proceeded to ask Klara, “How do you feel when you see someone recording you?” “Yeah, not great. I feel, like, really awkward,” confessed Klara. Then Anthony asked the million dollar question: “What if most people wore a Google Glass on campus and freed themselves of having to carry an iPhone?” Klara at this point was really confused. “Google what?” Sophie repeated, “Google Glass” in unison with Anthony. Shaking her head from side to side, Klara said, “Nah, I'm not into that kind of marketing at all.” “But it's the perfect marketing tool to gather information,” considered Anthony. “Maybe you're going to start using it one day as well? Don't you think?” Klara looked at Sophie and Anthony and replied, “What do you mean? Sorry?” Anthony repeated, “Do you reckon you're gonna be using Google Glass in a couple of years?” Klara turned to Cygneta for advice. “What in the world is Google Glass? It sounds dangerous?” Anthony explained, “It's a computer that you can wear as glasses. But it's a computer at the same time.” Klara let out a sigh. “I had no idea that even existed, and I think I'm a good marketing student and on top of things.”

By this stage, Sophie was feeling slighted and decided to finish her food, which was now cold. Anthony, caught off guard by Klara's lack of awareness, reaffirmed, “So you don't reckon you'd be wearing glasses that can record and work as a phone, or a headband capable of reading brain waves?” Cygneta said, “Probably not,” and Klara also agreed, “No. I like my phone just fine. At least I can choose when I want to switch it off. Who knows what could happen with these glasses? It's a bit too out there for me. That stuff's for geeks, I think. And anyway, there's nothing interesting in my life to capture—just one big boring stream of uni, work, and home.”

Sophie pointed out an interesting fact: “Hey girls, did you know that there's no law in Australia that forbids people from video recording others in public? If it's happening out on the street, then it ain't private.” Cygneta replied, “Yeah, I heard this on the news the other day; one of the ministers was caught on video heavily cursing at another minister while listening to his speech. He was waiting for his turn to give a speech of his own, apparently, and he didn't even notice someone was recording him. What an idiot!”

Sophie asked Anthony to accompany her to the bank. Lunch was almost over, and the lecture was now less than an hour away. The pair had not studied, although at the very next table was a group of six buried in books from the structures class. Klara and Cygneta said goodbye and went to order a meal at the café. Anthony reluctantly got up from the table and followed Sophie to the study group. Sophie bravely asked, “Anyone got any solutions yet to the latest practice questions?” People looked up, and the “little master,” as he was codenamed for his genius, said, “Not yet.” The other engineering students, mostly of Asian background, could not have cared less about the camera mounted on Anthony's head. Sophie found this disturbing and startling. She immediately thought about those little drones being developed and how men seemed to purchase such toys far more than any woman she knew. Who knew what the future held for humankind, she thought. Maybe the guys would end up loving their machines so much they'd forget about spending time with real people! Sophie liked the challenge of engineering, but it was at times strange to be in a room full of guys.

The power to exclude, delete, or misrepresent an event is with the wearer and not the passive passerby.

Scenario 6: A Visit to the C.A.B. Bank

Sophie was beginning to really tire of the GoPro shenanigans. She asked Anthony to wait outside the bank since he would not take off the contraption. Sophie was being pushed to the limit. Stressed out with exams coming up and a boyfriend who seemed preoccupied with proving a point, whatever that point was, she just needed things to go smoothly at the bank. Luckily this was the less popular bank on campus, and there was hardly anyone in it. Sophie went right up to the attendant and called out for Anthony to hold her bag while she rummaged in her handbag for her driver's license. Anthony sat down on one of the sitting cubes and, looking up, realized that he was now in the “being recorded” position in the bank himself. One attendant left the bank smiling directly into the camera and at Anthony. He thought, “How's that for security?” A third teller leaned over the screen and asked Anthony, “Is there anything we can help you with?” Anthony said, “I'm waiting for my girlfriend,” which seemed to appease the teller all too easily.

It was now time for Sophie to withdraw money at the teller. Anthony really didn't mind the wait because Sophie was always there to support him, no matter how long it took. They reflected that they had no more than 30 min left for a couple more errands, including a visit to the ATM and a trip to the library. There were four people in the queue at the ATM. Anthony grabbed Sophie's hand and whispered in her ear, “Sophie, do you realize something? If I was recording right now, I'd be able to see the PINs of all the people in front of us.” Sophie shushed Anthony. “You're going to get us in trouble today. Enough's enough.” “No really, Sophie, we've got to tell security. They're worried about tiny cameras looking down and skimming devices, but what about the cameras people are wearing now?” Sophie squeezed Anthony's hand—“Anthony, you are going to get us in serious trouble. And this is not the time to be saving the world from cybercriminals.” Anthony moved away from the queue, realizing that his face was probably being recorded on CCTV. The last thing he ever wanted was to be in trouble. He went to pry the GoPro off his head at once; it was becoming rather hot even though it had been a cool day, and it was beginning to feel uncomfortable and heavy on his neck and back muscles. By the time he got his act together, Sophie had made her transaction and they were hurriedly off to the library just before class.

Scenario 7: In the Library

As they rushed into the library to get some last-minute resources, Anthony and Sophie decided to split up. Sophie was going to the reserved collection to ask for access to notes that the special topics lecturer had put on closed reserve, and Anthony was going to do some last-minute bibliographic searches for the group assignment that was due in a few days. Why was it that things were always crammed into the last two weeks of the session? How on earth was any human being able to survive those kinds of demands? Anthony grabbed Sophie's bag and proceeded to the front computers. The library was packed because everyone was trying to finish their final assignments. As Anthony hovered behind the other students, he remembered the shoulder-surfing phenomenon he had considered at the ATM. It was exactly the same. Anthony made sure not to look forward. As soon as there was an empty computer, he'd be next. He conducted some library searches standing up and then spotted two guys moving away from a sit-down desk area. Given all the stuff he was carrying, he thought he'd ask the guys nearby if they had finished. They said yes and vacated the space as fast as they could, courteously making way for Anthony. By this time, Anthony was sweating profusely and had begun to look stressed out.

The cameras are now embedded in USBs, wristbands, pens, keyfobs, bags, and t-shirts.

Anthony dumped his stuff on the ground, and the shorter of the two men said, “Are you wearing a camera on your head?” Anthony muttered to himself, “Oh no, not again.” Had he been able to take the device off his head effortlessly, he would have. After wearing it for over 2 h straight, it had developed an octopus-like suction to his forehead. “Yeah, yeah, it's a camera.” This camera had brought him nothing but bad luck all day. Okay, so he had taped most of the first lecture in the morning, but it had not been any good since. Sophie was angry with him over the café discussions, Oxford was not interested in being filmed without his knowledge, and Anthony's shoulders were really starting to ache and he was developing a splitting headache. “You guys would not happen to be from civil engineering?” Anthony asked in the hope that he and Sophie might get some hints for the forthcoming group assignment. “Nah, we're from commerce.” Both men walked away after saying goodbye, and Anthony was left to ponder. Time was running out quickly, so he left his things where they were and decided to go to the desk and ask for help directly.

“Hello, I am wondering if you would be willing to help me. My name is Anthony, and I am doing research on…” The librarian studied Anthony's head closely. “Umm…can I just ask what's happening here? Please tell me you are not recording this conversation,” asked the librarian politely. “What?” said Anthony, completely oblivious to the camera mounted on his head. He then came to his senses. “Oh that? That's just a GoPro. I've not got it on. See?” He brought his head nearer to the librarian, who put on her glasses. “Now, I'm looking for…” “I'm sorry, young man, I'm going to have to call down the manager on duty. You just cannot come into the library looking like that. In fact, even onto campus.”

Anthony felt like all of his worst nightmares were coming true. He felt like running, but his and Sophie's belongings were at the cubicle and, besides, the library security CCTV had been recording for the last few minutes. His parents would never forgive him if anything jeopardized his studies. Sophie was still likely photocopying in closed reserve. What would she think if she came out to be greeted by all this commotion? The manager of “The Library”—the very thought gave him a bad feeling in the pit of his stomach. Anthony knew he had done nothing wrong, but that was not the point at this time. The librarian seemed even less informed of citizens' rights than he was, and while she was on the phone, hurriedly trying to get through to the manager, Sophie returned with the materials.

“Where are our bags? My laptop is in there Anthony.” Anthony signaled over to the cubicle, didn't go into details, and asked Sophie to return to the desk to do some more searches while he was with the librarian. Surprisingly, she complied immediately, given the time on the clock. Anthony was relieved. “Look,” he said to the librarian, “I am not crazy, and I know what I am doing is legal.” She gestured to him to wait until she got off the phone. “Right-o, the manager's at lunch, so I'll have to have a chat with you. First and foremost, when you're taking footage of the students, you need permission and all that sort of thing. I'm just here to clarify that to you.” “Look, umm, Sue, I'm not recording right now, so I guess I can wear whatever I want and look as stupid as I want so long as I'm not being a public nuisance.” “Young man, can I have your student ID card please?” Anthony claimed he did not have one with him, wanting to avoid returning to where Sophie was and being hit with even more questions. Anthony then gave the librarian his full name.

“Well, Anthony Fielding, it is against university policy to go around recording people in a public or private space,” stated the librarian firmly. Anthony, by now, had had enough. “Look, Sue, for the second time, I've not recorded anyone in the library. I did record part of my lecture today with this device. It is called a GoPro. Why hasn't anyone but me heard about it?” “Well we have heard of Google Glass here, and we know for now, we don't want just anyone waltzing around filming indiscriminately. That doesn't help anyone on campus,” the librarian responded. “Okay, based on my experience today, I know you are right,” Anthony admitted. “But can you at least point me toward a library policy that clearly stipulates what we can and cannot do with cameras? And why is this kind of camera one that you're alarmed about rather than a more flexible handheld one like this one?” Anthony pulled out his iPhone 6. The librarian seemed oblivious to what Anthony was trying to argue. Meanwhile, Anthony glanced over at Sophie, half-smiling, pointing at his watch and then the exit to indicate that they would have to make a move soon.

“Look, I know you mean well. But…” Anthony was interrupted again by the librarian. “Anthony Fielding, it is very important you understand what I am about to tell you; otherwise you might end up getting yourself in quite a bit of trouble. If you're recording students, you actually have to inform the student and ask if it's okay, because quite a lot of them are hesitant about being filmed.” Anthony retorted, “I know, I know, do unto others as you'd have them do unto you, but I already told you, I'm not recording…But which policy do you want to refer me to? I'll go and read it, I promise.” The librarian hesitated and murmured behind her computer, “Ah…I'll have to look…look…look and find it for you, but I just…I just know that…” The librarian realized the students were going to be late for a lecture. “Look, if you're right and there is no policy, assuming I've not made an error, then we need to develop one.” “Look, Sue, I don't mean to be rude, but we've already filmed in a lecture theater today. I wouldn't call a public theater private in any capacity. Sure, people can have private conversations in a theater, but they shouldn't be talking about private things unless they want to actively share them during class discussion time.” “Look, that's a bit of a gray area,” the librarian answered. “I think I am going to have to ask security to come over. It's just that I don't think the safety of others is being put first. For starters, you should take that thing off.” Anthony realized that things were now serious. He attempted to take off the band, which was soaking wet from sweat given his latest predicament.

Sophie realized something was wrong when she was walking with the bags back to the information desk. “Anthony, what's happening?” Sophie had a worried look on her face. “I've been asked to wait for security,” said Anthony. “Can you please not worry and just leave for class? I won't feel so bad if you go on without me.” Sophie responded, “Anthony, I told you this thing was trouble—you should have just taken it off—oh Anthony!” “What now?” said Anthony. “Your forehead…are you okay? It's all red and wrinkly and sweaty. Are you feeling okay?” Sophie put her hand on Anthony's forehead and realized he was running a fever. “Look, is this really necessary? My boyfriend has not done anything wrong. He's taken off the device. If you want to see the lecture footage, we'll show you. But really, the guy has to pass this subject. Please can we go to the lecture theater?” The librarian was unequivocally unemotional. Anthony looked at Sophie and she nodded okay and left for class with all the bags. “Please ring me if you need anything, and I'll be here in a flash.” Sophie kissed Anthony goodbye.

Scenario 8: Security on Campus

Moments later, security arrived on the scene. Anthony challenged the security guards and emphasized that he had done nothing wrong. He was escorted back to the security office on campus some 500 m away. At this point, he was told that he was not being detained and that university security staff simply wanted to have a chat with him. Anthony became deeply concerned when several security staff greeted him at the front desk. They welcomed him inside, asked him to take a seat, and asked whether he'd like a cup of coffee.

“Anthony, there has been a spate of thefts on campus of late. We'd like to ask you where you got your GoPro camera.” “Well, it was a birthday present from my older brother a few months ago,” Anthony explained. “He knows I've always made home movies from when I was a youngster, and he thought I might use it to film my own skateboarding stunts.” “Right,” said the security officer, “Could you let me take a look at the serial number at the bottom of the unit?” “Sure,” said Anthony, “and then can I go? I haven't stolen anything.” The security staff inspected the device and checked the serial number against their database before handing it back to Anthony. “Ok, you're free to go now.” “What? And I thought you were going to interrogate me for the footage I took today!”

“Look Anthony, that's a delicate issue. Under the Surveillance Devices Act, for you to be able to record somebody you need their explicit permission, which is why wherever we've got cameras we've got signage stating that you're being filmed, and even then we've got a strict policy about what we do with the recordings. We can't let anybody view them unless it's the police and so on; it's really strict.” Anthony replied, “What happens when Google Glass begins to proliferate on campus? It won't be the GoPro, which is obvious, that you're looking out for, but rather Glass being misused, or covert devices.” The security manager shrugged. “Look, the way security works at universities is that we are concerned with the here and now. I can't predict what will happen in about three months' time, right?” At this point Anthony was thinking about his lecture and how he was running late yet again, this time through no fault of his own.

“Is she with you?” asked the security manager. “Who do you mean?” questioned Anthony. “That young lady over there,” the manager replied, pointing through the screen door. “Oh, that's my girlfriend, Sophie. I reckon she was worried about me and came to see what was going on.” Sophie had her iPhone out and was recording the goings-on. Anthony just had to ask, “Am I right? Is my girlfriend allowed to do that? She isn't trespassing. The university campus is a public space for all to enjoy.” The security manager replied, “Actually, she's recording me, and she's not really allowed to do that without giving me some sort of notification. We might have cameras crawling all over this campus for student and staff safety, but our laws state that if people don't want to be recorded, then you should not be recording them. On top of that, you've probably noticed that when you walk around campus, the cameras in large areas like the walkways are actually facing the road, not people. So yes, she needs permission for what she's doing there, or adequate signage explaining what is going on.”

Sophie put the phone down and knocked on the door. “Can I come inside?” “Of course you can,” said the security manager. “Join the party!” “Anthony, Prof. Gabriel is asking for you; otherwise, he'll count you absent and you won't get your 10% participation mark for the session. I told him I knew where you were. If we get back within 15 min, you're off the hook.” “Hang on Sophie,” Anthony continued, “I'd like to solve this problem now to avoid any future misunderstandings. After all, I'm about to enter the classroom and record it for my own learning and progress. What do you think? Is that against the law?” Anthony asked the security manager. The security manager pondered for a long while. “Look, we get lots and lots of requests asking us to investigate the filming of an individual; we take that very seriously. But there is no law against that taking place in a public space.” “Is a lecture theater a public space?” Anthony prompted. The security manager replied, “I think you should be allowed to use head-mounted display video cameras as long as it's obvious what you're doing, unless a bystander asks you to cease recording. The lecture rooms are open and are usually mixed with the reception areas, which makes them public areas; so if you want to gain access to the room, obviously you can, because it's a public area. You don't have to use a swipe card to get in, you see. But then there are still things that you can't do in a public area: you can't ride a bicycle in there, or if someone is giving a lecture, you can't interrupt the lecture. That sort of thing.”

Anthony started speaking from the experience of his day. “I was queueing in front of the ATM today, and I realized that I could easily see the activities of the people in front of me, and the same in the library. When I hover around somebody's computer, I can see their screen and what they're up to on the Internet. Even after my own experience today, it bothered me: unintentionally I'm seeing someone's ATM PIN; I'm seeing someone searching on Google about how to survive HIV, which is personal and highly sensitive private stuff. No one should be seeing that. I just wore my GoPro to record my lecture for study purposes, but these kinds of devices in everyday life must be very disturbing for the people being recorded. That's why I'm curious what would happen on campus.” The security manager interrupted, “We already have some policies in place. For example, you can make a video recording, but what are you going to do with it? Are you going to watch it yourself or are you going to e-mail it around? You can't do that using your university e-mail account. You can't download, transfer, or copy videos using university Internet, your university account, or your university e-mail account. Look it up; there are also rules about harassment. It's fairly strict and already organized in that regard. But if you're asking where the university is applying policies, you're asking the wrong people, because we don't get involved in policy making. You should be talking to the legal department. We don't make the policies; we just follow the procedures. Every citizen of this nation also has to abide by state and federal laws.”

The explanation satisfied Anthony. He realized that the security manager was not the person to talk to for any further inquiries. “Thank you for taking the time to answer my questions; you've been very helpful,” Anthony said as he headed to the door to attend his class with Sophie. He did need that 10% attendance mark from Prof. Gabriel if he wanted to be in the running for a Distinction grade.

Scenario 9: Sophie's Lecture

After their last lecture together, Anthony was happy, thinking he was almost done for the day and would soon be heading home, but Sophie had one more hour of tutorials. Anthony walked Sophie to her last tutorial's classroom. “C'mon Anthony, it'll only take half an hour tops. After this class, we can leave together; bear with it for just a while,” Sophie insisted. “Okay,” said Anthony, his mind overflowing with thoughts of the final exams and the questions raised by his day-long experience with the GoPro.

They arrived a few minutes late. Sophie quietly opened the door as Anthony walked in behind her. The lecturer caught a glimpse of Anthony with the GoPro on his head and asked him, “Are you in this class?” “No, I'm just with a friend,” replied Anthony as he was still trying to walk in and take a seat. “Okay, and you're wearing a camera?” “Yeah?!” Anthony replied, confused by the tone of the lecturer. “Take it off!” the lecturer exclaimed. “You don't have permission to wear a camera in my class!” Silence fell over the classroom. As the lecturer's tone became more aggravated, everyone stopped, trying to understand what was going on. “Ok, but it's not…” The lecturer refused to hear any explanation. “You're not supposed to interrupt my class, and you're not supposed to be wearing a camera, so please take the camera off and leave the class!”

Anthony saw no point in explaining himself and left the class. Sophie, in shock, followed Anthony outside to check up on him and make sure he was all right. “Oh Anthony, I don't know how many times I told you to take it off all day…Are you ok?” Anthony was shocked as well. “I don't understand why he got so upset.” Anthony was facing the lecture theater's glass door; it opened and the lecturer stepped out and asked, “Excuse me, are you filming inside the class?” “Professor…” Anthony tried to say he was sorry for the trouble and that he wasn't even recording. “No! Were you filming inside the class?” the lecturer asked again. “I'm sorry if I caused you trouble, professor, the camera is not even on.” The professor, angry at both of them for interrupting his class with such a silly incident, asked them to leave and returned to the lecture theater. Sophie was surprised. “He's a very nice person; I don't understand why he got so upset.” Anthony's shock turned into anger. “I thought this was a public space and I don't think there's any policy that forbids me to record the lecture! Couldn't he at least say it nicely? You get back in, I'll see you after your class, and meanwhile I'll take this darn thing off.” Anthony kissed Sophie goodbye and left for the library without the GoPro on his head.

Conclusion

Wearable computers—digital glasses, watches, headbands, armbands, and other apparel that can lifelog and record visual evidence—tell you where you are on the Earth's surface and how to navigate to your destination, alert you to your physical condition (heart and pulse rate monitors), and even inform you when you are running late to catch a plane, offering rescheduling advice. These devices are windows to others through social networking, bridges to storage centers, and, on occasion, even companions as they listen to your commands and respond like a personal assistant. Google Glass, for instance, is a wearable computer with an optical head-mounted display that acts on voice commands like “take a picture” and allows for hands-free recording. You can share what you see live with your social network, and it provides directions right in front of your eyes. Glass even syncs your deadlines with speed, distance, and time data critical to forthcoming appointments.


But Google is not alone. Microsoft was in the business of lifelogging more than a decade ago with its SenseCam device, which has now been replaced by the Autographer. Initially developed as a memory aid for those suffering from dementia, the Autographer takes a 5-megapixel picture about 2,000 times a day, and a day's images can be replayed in fast-forward mode in about 5 min. It is jam-packed with sensors that provide a context for each photo, including an accelerometer, light sensor, magnetometer, infrared motion detector, and thermometer, as well as a GPS chipset. The slim-line Narrative Clip is the latest gadget to enter the wearable space. Far less obtrusive than Glass or Autographer, it can be pinned onto your shirt, takes a snapshot every 30 s, and is so lightweight that you quickly forget you are even wearing it.

These devices make computers part of the human interface. But what are the implications of inviting all this technology onto the body? We seem to be producing innovations at an ever-increasing rate and expect adoption to match that cycle of change. But while humans have limitations, technologies do not. We can keep developing at an incredible speed, but there are many questions about trust, privacy, security, and the effects on psychological well-being that, if left unaddressed, could carry major risks and negative societal effects. The most invasive feature of all of these wearables, however, is the image sensor.