Featured Speakers

Each week in the lead-up to Brave Conversations, we'll be chatting with two featured speakers about their take on the issues facing humans and technology, now and in the future.

Follow us on Twitter for updates at @braveconvos and #braveconversations

The Rise of The Machine

What are the biggest challenges emerging with regard to humans and the Web?

Michael: Humans are increasingly (directly or indirectly) reliant upon the Web as a means of satisfying fundamental needs, such as water, food and shelter. I would say the biggest challenge emerging is the securitization of critical sociotechnical systems that rely on the Web either upstream or downstream and are responsible for the ongoing production and delivery of human needs.

Professor Katina Michael is Professor in the School of Computing and Information Technology at the University of Wollongong and the IEEE Technology and Society Magazine editor-in-chief.

I have often reflected: here we all are, billions of interconnected people, hundreds of millions of interconnected corporations, hundreds of sovereign states, varied national and supranational laws, closed and open economic infrastructures each with their own financial systems, geographic expanses each with their unique environmental conditions, all relying on the Web. Here we are pouring our hearts and souls into this Complex Living Web by uploading text, images and video, busily dotting the i’s and crossing the t’s toward long-term sustainability, growth, and development. Data… data… and more data… My biggest fear? The annihilation of this Web of Everything to a Web of Nothing: a fragmented, broken, virus-ridden, desolate e-wasteland, filled with bot-generated disinformation. What then remains as truth?

We don’t have paper backups any longer, we don’t even keep tape drives any longer, and the lifetime of a social media-based message is diminishing. The paradox is that while we can store everything in our daily digital lifelogs, we seemingly don’t remember what happened to us yesterday, let alone history before the Web. Yes, “it’s all in the Cloud” we are told, but we forget the Cloud is actually on the ground, highly vulnerable and susceptible to acts of God or to human attack.

From a socio-ethical perspective, I am very concerned that we are generating bits and bytes almost continuously, hooked to online applications driven by the Web, some of them purposefully addictive. The systematic extraction of personal information from individuals will leave our spirits with little to bear that is sacred, and yet some companies will reap the profits, leaving us profusely naked in the process. Yes, they already know more about us through psychometrics than we know about our family, friends, and even ourselves. We may well be adding much in terms of the “number of words,” the “number of images,” and the number of “location points of interest,” but what does that profit us as humans? Do we understand the temporality of life? Our own limitedness? How do we understand life and death in this scenario of digital chronicling? Yes, in the short term it helps us reflect on our practices, but in the longer term, all the data in the world also has a finite lifetime because the earth is finite (according to science). My worry is that we are preoccupied with the wrong aspects of our life (the temporal, not the eternal), and in so doing our choices are made not on spiritual relationship questions but on the technocratic prospects of machine learning and artificial intelligence. We worship technology today, and most of us don’t even know it. Proponents of this view say that technology has an answer for everything. And that is a fallacy.

Ackland: I’m not going to say these are necessarily the “biggest” challenges, but they are what I spend some time thinking about, so they reflect my own interests.

People have to sign up to proprietary online services in order not to have a degraded social life or travel experience. I’ve never felt the need to go on e.g. Facebook because I’m old enough that I wasn’t going to be excluded from social circles I wanted to be part of. I avoided Google Maps (particularly with the location tracker) for many years because I didn’t want Google to have that information, but in the past couple of years I gave in because my ability to travel, in new cities in particular, was compromised. So now Google knows what I do, and I don’t like it.

Relatedly, it is harder to maintain an online professional presence without doing so via one of the walled gardens e.g. Academia.edu, LinkedIn. I don’t want to invest time in putting together an academic profile on e.g. academia.edu when I know I can’t get my data out of it.

Misinformation, fake news, and social bots (on e.g. Twitter) pushing particular political agendas. A lot is being said about this at the moment.

Filter bubbles – the phenomenon whereby people receive information that supports their previously held opinions, because they only connect with people similar to themselves on social media (homophily) and recommender algorithms serve up news and information that matches their previous behaviour. Social media could be leading to more polarisation, although to be honest some people have been saying this since Web 1.0 (‘echo chambers’, ‘cyberbalkanization’). But the 2016 US presidential election put misinformation and filter bubbles back into the spotlight.
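The feedback loop described here, homophily plus recommenders trained on past behaviour, can be illustrated with a toy ranking function. This is a hypothetical sketch, not any platform's actual algorithm; all names and data are invented:

```python
from collections import Counter

# Minimal sketch of the "filter bubble" feedback loop: rank stories by
# how well their topics match what the user has already clicked, so the
# feed keeps serving more of the same.

def recommend(click_history, stories, k=2):
    """Return the k stories whose topics best overlap past clicks."""
    liked = Counter(t for story in click_history for t in story["topics"])
    return sorted(stories,
                  key=lambda s: sum(liked[t] for t in s["topics"]),
                  reverse=True)[:k]

history = [{"topics": ["politics", "left"]}, {"topics": ["left"]}]
stories = [
    {"title": "Left-leaning op-ed", "topics": ["politics", "left"]},
    {"title": "Right-leaning op-ed", "topics": ["politics", "right"]},
    {"title": "Sports result", "topics": ["sport"]},
]
print([s["title"] for s in recommend(history, stories)])
# → ['Left-leaning op-ed', 'Right-leaning op-ed']
```

Each click reinforces the topic counts, so over repeated rounds the already-agreeable content dominates the feed: the bubble is an emergent property of a perfectly reasonable relevance heuristic.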

Machine learning – not obviously just to do with the web, but the web is important in recent improvements in machine learning (e.g. data for deep learning). Machine learning may have very negative impacts on particular sectors of the workforce.

MOOCs – as an academic, this is obviously something that is on my radar. Perhaps MOOCs will do to academia what the Web did to the music and newspaper industries.

It is amazing that I’m having to say this, given email is such a relatively old technology: spam. I can’t believe the number of emails for fake conferences, fake journals, and general rubbish that I receive.

Censorship of the web. Most people think certain things shouldn’t be on the web (e.g. relating to inciting hate, child pornography etc.) but the problem is the grayer area (as the saying goes: one person’s terrorist is another person’s freedom fighter). Countries are always going to insist that they have the right to censor material that some (e.g. in the West) think shouldn’t be censored. How to come up with a system that objectively classifies material that should/shouldn’t be censored, without imposing moral judgments?

Abusive and uncivil behaviour on the web, and cyber-bullying: shutting down particular voices.

Michael: I agree with your censorship stance… interestingly there is a loose connection between disinformation and censorship… countries can say “open” to everything and then “drown out” perspectives and ideas by sheer volume. In effect, society is at the mercy of those that control the “algorithm”.

What are the three most important things that we can do about them?

Michael: We need to build in contingencies. What if there was no Web? What would that mean? How would we go on? Consider these questions across vertical sectors.

Critical systems that are required for humans to exist must have off-grid backups and disaster recovery plans. Water is #1. We need a better understanding of interconnected and interdependent systems: water affects electricity, electricity affects telecoms, and telecoms in turn affect banking… it is a highly meshed domino effect.

Media literacy is paramount. We need to get ready for a new breed of medical conditions resulting from too much sitting, not enough verbal communication with each other, and repetitive actions akin to Obsessive Compulsive Disorder triggered by technology in various forms. Beyond that, we must work out how to maintain privacy, as it is a fundamental quality of freedom (i.e., of human rights).

Dr Robert Ackland is Associate Professor in the School of Sociology and the Centre for Social Research & Methods, ANU College of Arts and Social Sciences.

Ackland: We are never going back to the time when people created their own websites – it was only a small subset of people who could do this anyway. So we need technology that allows ordinary users to produce and consume information and connect with people. But we need some way for people to keep ownership of their data and move it easily, rather than it being stuck in a walled garden. I always thought peer-to-peer online social networks such as Diaspora (https://diasporafoundation.org/) seemed very promising, but of course there is a major network effect going on here: who wants to be on a peer-to-peer online social network that none of your friends are on?

Machine learning – tax the robot? (Bill Gates is calling for this).

Web censorship – a crazy idea I’ve had for a long time is that a “censorship trading scheme,” similar to the carbon trading schemes set up to reduce greenhouse gas emissions, could be something to pursue here. Each country (or this could be done at the level of individual organisations) gets a certain allocation of censorship credits that they can “spend” on censoring particular websites, and on preventing other websites from being censored. That would lead to a price for the censorship of a given website. Child pornography sites would have a price of zero, because presumably no country would use its censorship credits to prevent a child pornography site from being censored. However, politically-oriented sites would have a non-zero price. China might want to spend its credits on censoring Free Tibet websites, and some western governments might want to spend credits to keep these sites uncensored. Of course this couldn’t be policed etc., but it would lead to a market that could be used to objectively ‘price’ censorship and potentially would lead to less censorship. When I proposed this at a conference on web censorship, people were baying for my blood. I didn’t expect it to go down well, but I got up and said it because I was sick of hearing people ready to impose their values while acknowledging they had no way of objectively quantifying how ‘bad’ it is to censor a sex site compared to a political site.
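The thought experiment above can be sketched as a toy calculation. This is purely illustrative (the scheme itself is hypothetical, and so are the site names and credit amounts): the notional "price" of censoring a site is taken to be the total credits other parties have committed to protecting it, i.e. what a censoring party would have to outspend.

```python
from collections import defaultdict

# Toy sketch of the proposed "censorship trading scheme". Each bid is a
# tuple (site, action, credits), with action "censor" or "protect".

def censorship_prices(bids):
    """Price of censoring each site = total credits bid to protect it."""
    prices = defaultdict(int)
    for site, action, credits in bids:
        # register the site even if no one protects it (price stays 0)
        prices[site] += credits if action == "protect" else 0
    return dict(prices)

bids = [
    ("freetibet.example", "censor", 10),      # one state pays to censor
    ("freetibet.example", "protect", 6),      # another pays to keep it up
    ("banned-content.example", "censor", 1),  # no one bids to protect it
]
print(censorship_prices(bids))
# → {'freetibet.example': 6, 'banned-content.example': 0}
```

A politically contested site acquires a non-zero price, while a site no one is willing to defend prices at zero, which is the quantification the proposal is after.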


Nanotechnology in the Military Sector

The military sector has been investing in nanotechnology solutions since their inception. Internal assessment committees in defense programmatically determine to what degree complex technologies will be diffused into the Armed Forces. The broad term nanotechnology is used in this Special Issue of IEEE Technology and Society Magazine to encompass a variety of innovations, from special paint markers that can determine unique identity, to RFID implants in humans. With the purported demand for these new materials, we have seen the development of a fabrication process that has catapulted a suite of advanced technologies in the military marketplace. These technologies were once the stuff of science fiction. Now we have everything from exoskeletons, to wearable headsets with accelerated night vision, to armaments that have increased in durability in rugged conditions along with the ability for central command without human intervention. Is this the emergence of the so-called super-soldier, a type of Iron Man?

Social Implications: Key Questions

This special issue is predominantly based on proceedings coming from the 9th Workshop on the Social Implications of National Security, co-convened by the authors of this guest editorial. The workshop focused specifically on human-centric implantable technologies in the military sector. Key questions the workshop sought to address with respect to implants included:

What are the social implications of new proposed security technologies?

What are the rights of soldiers who are contracted to the defense forces in relation to the adoption of the new technologies?

Does local military law override rights provided under the rule of law in a given jurisdiction, and what are the legal implications?

What might be some of the side effects experienced by personnel in using nanotechnology devices that have not yet been tested under conditions of war and conflict?

How pervasive are nanotechnologies and microelectronics (e.g., implantable technologies) in society at large?

More broadly the workshop sought to examine socio-ethical implications with respect to citizenry, the social contract formed with the individual soldier, and other stakeholders such as industry suppliers to government, government agencies, and the Armed Forces [1].

Recommended Reading

F. Allhoff, P. Lin, and D. Moore, What Is Nanotechnology and Why Does It Matter? From Science to Ethics. West Sussex: Wiley-Blackwell, 2010.

P. Tucker, “The Military Is Building Brain Chips to Treat PTSD,” The Atlantic, May 29, 2014; http://www.theatlantic.com/technology/archive/2014/05/the-military-is-building-brain-chips-to-treat-ptsd/371855/.

DARPA's RAM Project

In 2012, the U.S. military's Defense Advanced Research Projects Agency (DARPA) confirmed plans to create nanosensors to monitor the health of soldiers on battlefields [2]. In 2014, ExtremeTech [3] reported on a 2013 DARPA project titled the “Restoring Active Memory (RAM) Project.” Ultimately the aim of RAM was:

“to develop a prototype implantable neural device that enables recovery of memory in a human clinical population. Additionally, the program encompasses the development of quantitative models of complex, hierarchical memories and exploration of neurobiological and behavioral distinctions between memory function using the implantable device versus natural learning and training” [4].

Several months later, the U.S. Department of Defense (DOD) published on their web site an article on how DARPA was developing wireless implantable brain prostheses for service members and veterans who had suffered traumatic brain injury (TBI) memory loss [5]. Quoting here from the article:

“Called neuroprotheses, the implant would help declarative memory, which consciously recalls basic knowledge such as events, times and places…”

“these neuroprosthetics will be designed to bridge the gaps in the injured brain to help restore that memory function… Our vision is to develop neuroprosthetics for memory recovery in patients living with brain injury and dysfunction.”

“The neuroprosthetics developed and tested over the next four years would be as a wireless, fully implantable neural-interface medical device for human clinical use.”

The U.S. DOD also noted that traumatic brain injury has affected about 270,000 U.S. service members since 2000, and another 1.7 million civilians. The DOD said that they would focus their attention on service members first [6]. Essentially the program is meant to help military personnel with psychiatric disorders, using electronic devices implanted in the brain. Treated disorders range from depression, to anxiety, to post-traumatic stress disorder [7]. The bulk of the funding went to the University of California, Los Angeles ($15 million) and the University of Pennsylvania ($22.5 million), in collaboration with the Minneapolis-based biomedical device company Medtronic [8].

More Information

Visual proceedings of the 9th Workshop on the Social Implications of National Security, including PowerPoint presentations, are available [9]. The workshop was held during the 2016 IEEE Norbert Wiener Conference, at the University of Melbourne, Australia. Several DARPA-funded neurologists from the Vascular Bionics Laboratory at the University of Melbourne were invited to present at the workshop, including a team led by Thomas Oxley, M.D. [10]. (Oxley did not personally appear, as he was in the U.S. undertaking intensive neurosurgical training.)

The military implantable technologies field at large is fraught with bioethical implications. Many of these issues were raised at the Workshop, and remain unanswered. If there is going to be a significant investment in advancing new technologies for soldiers suffering from depression or post-traumatic stress disorder (PTSD) in the military, there needs to be commensurate funding invested to address unforeseen challenges. In fact, it is still unclear whether U.S. service members must accept participation in experimental brain research if asked, or if they can decline in place of other nonintrusive medical help.

[9] K. Michael, M.G. Michael, J.C. Galliot, and R. Nicholls, “The Socio-Ethical Implications of Implantable Technologies in the Military Sector,” presented at The Ninth Workshop on the Social Implications of National Security (SINS16).

A bot (short for robot) performs highly repetitive tasks, automatically gathering or posting information based on a set of algorithms. Bots can create new content and interact with other users as any human would. But the power always lies with the individuals or organisations unleashing the bot.
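The mechanics are simple enough to sketch in a few lines of Python. This is a hypothetical, self-contained illustration with no real platform API; the hashtag, posts, and reply templates are all invented:

```python
import random

# Minimal sketch of a posting bot: scan incoming posts for a target
# hashtag and auto-generate replies from canned templates.

TEMPLATES = [
    "Couldn't agree more! {tag}",
    "Everyone is talking about this. {tag}",
    "Retweet if you agree! {tag}",
]

def bot_replies(posts, target_tag, seed=0):
    """Return an auto-generated reply for every post containing target_tag."""
    rng = random.Random(seed)  # seeded so the example is deterministic
    replies = []
    for post in posts:
        if target_tag in post:
            replies.append(rng.choice(TEMPLATES).format(tag=target_tag))
    return replies

posts = [
    "Big rally downtown today #election",
    "Lovely weather this morning",
    "Who are you voting for? #election",
]
print(bot_replies(posts, "#election"))
```

Run at scale across thousands of accounts, a loop this trivial is enough to flood a hashtag, which is why volume, not sophistication, is the real source of a bot network's power.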

Politicalbots.org reported that approximately 19 million bot accounts were tweeting in support of either Donald Trump or Hillary Clinton in the week before the US presidential election. Pro-Trump bots worked to sway public opinion by secretly taking over pro-Clinton hashtags like #ImWithHer and spreading fake news stories.

Bots have not just been used in the US; they have also been used in Australia, the UK, Germany, Syria and China.

Whether it is personal attacks meant to cause a chilling effect, spamming attacks on hashtags meant to redirect trending, overinflated follower numbers meant to show political strength, or deliberate social media messaging to perform sweeping surveillance, bots are polluting political discourse on a grand scale.

Fake followers in Australia

In 2013, the Liberal Party internally investigated an unexpected surge in Twitter followers for the then-opposition leader, Tony Abbott. On August 10, 2013, Abbott’s Twitter following soared from 157,000 to 198,000, having grown until then by around 3,000 per day.

A Liberal Party spokesperson revealed that a spambot had most likely caused the sudden increase in followers.

Fake trends and robo-journalists in the UK

As the UK’s June 2016 referendum on European Union membership drew near, researchers discovered automated social media accounts were swaying votes for and against Britain’s exit from the EU.

A recent study found 54% of accounts were pro-Leave, while 20% were pro-Remain. And of the 1.5 million tweets with hashtags related to the referendum between June 5 and June 12, about half a million were generated by 1% of the accounts sampled.

Following the vote, many Remain supporters claimed social media had an undue influence by discouraging “Remain” voters from actually voting.

Fake news and echo chambers in Germany

German Chancellor Angela Merkel has expressed concern over the potential for social bots to influence this year’s German national election.

The right-wing Alternative for Germany (AfD) already has more Facebook likes than Merkel’s Christian Democrats (CDU) and the centre-left Social Democrats (SPD) combined. Merkel is worried the AfD might use Trump-like strategies on social media channels to sway the vote.

It is not just that bots are generating the fake news. The algorithms that Facebook deploys as content is shared between user accounts also create “echo chambers” and outlets for reverberation.

Spambots and hijacking hashtags in Syria

During the Arab Spring, online activists were able to provide eyewitness accounts of uprisings in real time. In Syria, protesters used the hashtags #Syria, #Daraa and #Mar15 to appeal for support from a global theatre.

It did not take long for government intelligence officers to threaten online protesters with verbal assaults and one-to-one intimidation techniques. Syrian blogger Anas Qtiesh writes:

These accounts were believed to be manned by Syrian mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bite and insults.

But when protesters continued despite the harassment, spambots created by Bahrain company EGHNA were co-opted to create pro-regime accounts, which flooded the hashtags and drowned out the pro-revolution narratives.

This was essentially drowning out the protesters’ voices with irrelevant information – such as photography of Syria. @LovelySyria, @SyriaBeauty and @DNNUpdates dominated #Syria with a flood of predetermined tweets every few minutes from EGHNA’s media server.

Since 2014, the Islamic State terror group has “ghost-tweeted” its messages to make it look like it has a large, sympathetic following. This is to attract resources, both human and financial.

Sweeping surveillance in China

In May 2016, China was exposed for purportedly fabricating 488 million social media comments annually in an effort to distract users’ attention from bad news and politically sensitive issues.

A recent three-month study found 13% of messages had been deleted on Sina Weibo (Twitter’s equivalent in China) in a bid to crack down on what government officials identified as politically charged messages.

It is likely that bots were used to censor messages containing key terms that matched a list of banned words. Typically, this might include words in Mandarin such as “Tibet”, “Falun Gong” and “democracy”.

What effect is this having?

The deliberate act of spreading falsehoods by using the internet, and more specifically social media, to make people believe something that is not true is certainly a form of propaganda. While it might create short-term gains in the eyes of political leaders, it inevitably causes significant public distrust in the long term.

In many ways, it is a denial of citizen service that attacks fundamental human rights. It preys on the premise that most citizens in society are like sheep; a game of “follow the leader” ensues, making a mockery of the “right to know”.

We are using faulty data to come to phoney conclusions, to cast our votes and decide our futures. Disinformation on the internet is now rife – and if that has become our primary source of truth, then we might well believe anything.

Figure 1. Packing for a full month away. Everything but the kitchen sink.

On the 9th of December in 2015, I set out on a camping trip with my three young children to the Sapphire Coast of Australia, toward the New South Wales and Victorian border (Figure 1). The last time I had driven through this stunning part of the world was when my parents decided to take their four children across the country in a Ford Cortina station wagon to visit their first cousins on apricot and citrus farms in South Australia.

I turned eight years of age over that summer, and the memories of that trip are etched into our hearts. We've laughed countless times over events on that holiday, all of which were borne from a “lack of access” to technology, resulting in “closeness” and “togetherness.” Loxton, South Australia, only had two television channels back then: the ABC news, and 5A, which showed endless games and replays of cricket. While we grew to love cricket (we had no choice), we welcomed every opportunity to physically help our cousins gather fruit using nothing but ladders and our bare hands.

It was the festive season, and I remember lots and lots of family gatherings, parties, and outdoor lamb-spit barbecues. We gathered to eat and dance, and our elders reminisced over what life was like in the village in Greece and told us funny stories about growing up with hardly any material possessions. Highlights included: the photographer who visited the village once every other year to take pictures with his humungous boxed contraption, which he would hide behind; the memory of the first time a car was spotted trying to come into the village; walking to school an hour away in shoes made out of goat skin (if not barefoot); and the harsh unheated winters and boiling hot summers over scenic Sparta.

It was a kind of celebration of life when I think back. It was so carefree, clean and pure, and joyous! Everyone lived in the moment. No one took pictures of their food to post to Instagram, no one had their head buried in front of a screen watching YouTube on demand, and we were outside in the fresh air awestruck by the beauty of the glistening stars that shone so bright in the night sky (and getting bitten by mosquitos while doing so). It was a kind of SnapChat without the “Snap.” On that trip I gained an appreciation for the land, and its importance in sustaining us as human beings.

As I reflect on that time, we travelled through remote parts of Australia with nothing but ourselves. We were too poor to stay at hotels, so dad ingeniously turned our station-wagon into a caravan, or so it seemed to us when the back seat folded forward and the travelling bags were placed on the roof rack secured with a blue tarpaulin.

Figure 2. The great Australian outdoor toilet, proverbially known as a “dunny,” used at one camp site the kids endearingly nicknamed “Kalaru Poo.”

We had no mobile phone in the car, no portable wifi-enabled tablet, no gaming DS, and certainly no down-screen DVD player or in-car navigation system to interrupt the ebb and flow of a family confined to a small space for six weeks. Mum would put on a few Greek cassettes for us to sing along to (Dad's “best ofs,” which he had dubbed from the radio), and we paid particular attention to the landscape and wildlife. Mum would tell stories nostalgically about the time before we were born and how she left her homeland at seventeen on her own. And Dad would talk about the struggles of losing his mother just before the start of World War II, how his schooling was interrupted in third class as towns were burned to a crisp by the invaders, and how lucky we were to have a chance at education in a peaceful nation. All the while my brother Arthur was pointing out how far we had driven on his A0 map, literally thousands of kilometres, which gave me a great sense of space and time that has stayed with me to this day. And of course, I do recollect the unforgettable chant of my little sister and big sister in near unison: “are we there yet?”

In December 2015, after a demanding year in my various roles that included bi-monthly long-haul travel, I was determined to “shut down” the outside world and give my children what my parents had given me, in all the same simplicity (Figure 2). I somehow needed to give my children my full attention for four weeks without a laptop in tow, ensuring that my body and mind would recover from the year that was. I knew I was drifting into overload in September 2015 when, on one occasion, I found myself asking my husband which side of the road I should be driving on, even though I was in my home town.

Figure 3. The most spectacular and secluded Nelson Beach down the trail of Nelson Lake Rd near Mogareeka, NSW.

Figure 4. My youngest walking near the most spectacular Wallagoot Gap. We spent the day out at this magical place, swimming with the fish.

When you love life and what you do, it is easy to feel so energized that you never feel the need to stop… but “stop” I did. I wanted to reconnect with the natural environment in a big way, with my kids, and with my inner self. I found myself asking those deep questions about creation: who, what, when, how? What an incredible world we live in! How does it all work and hang together as it does? I felt so thankful. Thankful for my family, my friends, my work, nature, life, Australia. It is so easy to take it all for granted.

Each day, we'd choose a different place to visit, not excluding unsealed roads that led to secluded beaches, lakes, and inlets (Figures 3 and 4). Every morning we were awakened by the birdlife: a strange creature would call out at 4:30 a.m. for about 15 minutes straight, and then give it a rest. We spotted lizards a few metres long on the road, and lots of kangaroos coming out of hiding at dusk to socialize. While we swam we could see the fish in the sea (with and without snorkels), and we got to speak with complete strangers, feeling like we had all the time in the world to do so.

At historical places, we learned about indigenous people like “King Billy” of the Yuin clan, who in the 1950s would often be seen walking unheard-of distances through the dense scrub between Jervis Bay and Eden, some 300 km apart (Figure 5).

Figure 5. The Yuin people (aka Thurga) are the Australian Aborigines from the South Coast of New South Wales. At top are images of legendary “King Billy” as he was nicknamed.

My kids began to make comments about how resourceful the Aborigines would have been, catching fresh fish, making new walking tracks, and being blessed to live in a pristine world before the built environment changed it so radically (Figure 5). It was not difficult for me to imagine throwing in my current lifestyle for the serenity, peace, and tranquillity of the bush. The kids and I would be outside under the sun for at least 12 hours each day, and it was effortless, filled with activities, and so very fulfilling (Figure 6).

Figure 6. The sun setting on New Year's Eve celebrations in 2015 in Merimbula, NSW

The kids didn't watch any television on this trip even though they had access to it at one camp spot (Figure 7). I spoke on the cell phone only a handful of times, and on some days I did not use electricity (those were my favorite days). Many times we did not have any cell phone coverage for large parts of the day. I learnt some important things about each of my children on this trip, and about myself and the world we live in (Figure 8). And I'd love to do it all again, sooner rather than later.

We've been sold the idea that technology provides security for us, but I am of the opinion that, at least psychologically, it leads to insecurity (1). It is a paradox. My eldest kept asking what we would do if we got a flat tire or engine trouble deep down a dirt road where we had no connectivity, or what we'd do in the event of a bushfire (Figure 9). Good questions, I thought, and I answered them by driving more slowly and carefully, avoiding sharp rocks and potholes, and more than anything, turning to prayer: “God, keep me and my children safe. Help us not to panic at a time of trouble, and to know what to do. Help us not to be harmed. And help us not to have fear.” For all intents and purposes, technology, which has been sold to us for security, breeds a false sense of security and even greater fear. We have learned to rely on mobile phones or the Internet even when we don't need them. Reaching for them has become a knee-jerk reaction, even when the information we need is readily at hand.

Figure 8. The kids posing for a photo with a big snail at Merimbula's Main Beach. Such a great opportunity for all of us to bond even closer together.

I am thankful I turned to art on this trip, a decision I made a few days before I left my home (see the cover image of this issue). I loved speaking to real people, in person, and asking them to participate (2). Being able to hear their laughs, see the expressions on their faces, and listen to their respective stories was so satisfying. On a few occasions I embraced people I met after opening my heart to life matters, challenges, joys, and sorrows. The cool thing? I met lots of people who reminded me of my mum and dad; lots of people who had three or four or more (or no) children; and I felt connected more than ever before to the big family we call “society.” We'd sit around at the beach, at the rock pool, or at the camp site, listening and learning from one another, and somehow indirectly encouraging one another onwards. We soon realized these were shared experiences, and that there was a solidarity, a “oneness,” an empathy between us.

Figure 9. Going down a steep and narrow unsealed road with lots of potholes at Mimosa Rocks National Park. One way down and only one way up.

We returned home a few days early due to heavy rains, and unexpectedly I did not feel the drive to return to my email trove that I figured had grown substantially in size. The thought crossed my mind that I could get heavily depressed over the thousands of messages I had missed. But I controlled that temptation. The last thing I wanted at that point was to get bogged down again in the rhythm of the digital world. Friends and colleagues might have been shocked that I did as I said I would do - utterly disconnect - but I learned something very fundamental… time away from the screen makes us more human as it inevitably brings us closer together, closer to nature, and also brings things into perspective.

Depending on our work, we can feel captive behind the screen at times, or at least to the thousands of messages that grace our laptops and mobile phones. They make us even more digital and mechanical - in intonation, action, even movement and thought. Breaking with this feeling and regaining even a little bit of control back is imperative every so often, lest we become machine-like ourselves. It is healthy to be “Just human,” without the extensions and the programs. In fact, it is essential to revitalize us and help us find our place in the world, as sometimes technology leads us too quickly ahead of even ourselves.

While it is an intuitive thing to do, you might find yourself having to work that little bit harder to make the unplugged time happen. But breaking free of all the tech (and associated expectations) occasionally, reinforces what it once meant to be human.

Now that you've immersed yourself in some of the challenges and paradoxes we face as a society (as our cities, businesses, governments, and personal lives become more digitized), it is time to reflect on everything you've read.

As much as we hope you've enjoyed this collection of articles, we really want you to find value in the discussions and debates that come from it. We have included some questions to get you started. Remember, there often isn't one right answer. These issues are complex. Sometimes the best answer to a challenging question is simply to ask more questions; to interrogate the issues at hand, using a multidisciplinary lens. So consider these questions a launch pad that will inspire you to ask your own questions, too. Share your questions with your peers in small groups and seek to brainstorm together on what possible future directions you can take to ensure these matters are integrated into development frameworks.

We thank the authors in this issue for assistance in drawing out these major themes.

Why did people throw rocks at the Google bus? Were the people on the buses really the targets of their animosity?

According to Rushkoff, growth is the prevalent feature of the digital economy. What impact does that have on companies? What impact does that have on workers? What impact does that have on neighborhoods and communities?

Is there a way to keep the possibilities that digital tools afford, without the commensurate detrimental effects? What solutions are there?

“Let's Protest: Surprises in Communicating against Repression”

Select a social networking application (e.g., Snapchat). What are its strengths and weaknesses for serving ordinary users and nonviolent campaigners?

Suppose you are put in charge of a country's technology policy today. What communication technology would you promote to ensure that a dictator could never come to power? Explain your reasoning.

Imagine that you want to assist some foreign friends who live under an authoritarian government. You can mainly help by using the Internet. What skills do you think are most important for you to learn? You might reflect on the possibilities of learning foreign languages, encryption, Web design, data collection, data verification, organizing denial-of-service attacks, and hacking. How will these skills help your friends specifically?

“Predictive Policing and Civilian Oversight”

Would you trust software more than you would a law enforcement officer?

Who should be held responsible when the software described in the article by Hirsh makes a mistake or is in error?

Should there be limits to how police use technology?

What do you think is required to balance the needs of policing and the needs of privacy?

List the consequences of the converging veillances. What are additional sociocultural consequences of these risks not addressed by the authors?

What existing controls are in place to address the risks you have identified? How effective are these controls in the design and operation phases of development?

What are responsible, reasonable, and appropriate strategies to reduce the prevalence of the risks you have identified?

“Privacy in Public”

Describe the concept of “über-veillance” or omnipresent surveillance. How does it differ from “regular” surveillance?

What is the “mosaic” theory of privacy? Explain why such a theory is necessary today.

Taking one of your regular school or work days as an example, list in chronological order all of your encounters with cameras as you go about your day. Are you surprised by how many you can count? Why or why not?

Thinking about the example of the interface created by Google to allow people to request the removal of their personal information, list similar privacy-protective technological measures that are available on social media, such as Facebook.

Do you agree that people in a public space should have a right to privacy and anonymity, or do they give up such rights once they enter the public sphere?

“Privacy in the Age of the Smartphone”

What do you share with others online? Do you have a Facebook, Google+, LinkedIn, Instagram, or other account?

What parts of the information that you share with others is beyond your control? For example, who has access to your Facebook page—just your friends on Facebook, or is it public? What other sharing do you engage in that can be accessed by people you don't know?

Smartphones have become much more powerful in the last few years. How has your data footprint grown over the last two to three years?

What new services are you using today that you were not using in your first year of university? Has the volume of data and content that you share increased significantly? Do you feel you can still keep track of and manage that data?

“Paradoxes in Information Security”

Think of your everyday life. In what ways do information security procedures interrupt you on a daily basis?

In terms of information systems that you use, who and what define what security is?

Any function added incrementally to an information system, including information security, increases the system's complexity but also creates new ways of using and exploiting it. Can complexities that are constantly changing be controlled in any way?

“International Council on Global Privacy and Security”

Why is it important that we abandon zero-sum paradigms if we intend to preserve our privacy and freedom?

Beyond privacy concerns, what impact does state surveillance have on innovation and prosperity, at a societal level?

Why is it important that artificial intelligence and machine learning have privacy embedded into the algorithms used, by design?

“Problems with Moral Intuitions Regarding Technologies”

How often do you stop and think about the moral implications of the technologies you use?

Have you ever experienced a technology feeling wrong or right?

Are for-profit corporations the ideal developers and suppliers of technology?

Should those who know how a technology does work think more about how it should or should not work?

Between 2010 and 2016 I accepted a voluntary post representing the Consumers Federation of Australia (CFA) on the standardization of the forensic analysis process [1]. The CFA represents most major Australian national consumer organizations that work together to represent consumer rights.

Figure 1. Bus drivers across the West Midlands were equipped with mini DNA kits in 2012 to help police track anyone who spat at them or fellow passengers. “Spit kits”—which feature swabs, gloves, and hermetically sealed bags—allow staff to take saliva samples and protect them from contamination. Samples are stored in a refrigerator before being sent for forensic analysis, with arrest plans put in place should returning DNA results point to a suspect already known to police. Date: Nov. 23, 2012, 16:03. Courtesy of Palnatoke, West Midlands Police.

All of the meetings I attended were very well organized, and adequate materials were provided with enough time to digest the documentation. Queries were dealt with in a very professional manner, both via email and in person. The standards meetings were held at the Australia New Zealand Policing Advisory Agency (ANZPAA) in Melbourne, Victoria — perhaps a non-neutral location, but important nonetheless as a hub for our gatherings. There was adequate funding to allow people to come together several times a year to discuss the development of the standards, and the rest was achieved via email correspondence. Of course, there were a number of eminent leaders in the group with a discernible agenda who dominated discussions, but for all intents and purposes, these folks were well-meaning, fair, and willing to listen. It was obvious that the standardization process was of paramount importance to those using forensic data on a day-to-day basis.

Representatives who served on that committee had diverse backgrounds: police officers, analysts from forensic laboratories, lawyers, statisticians, consumer representatives, and academics in the broad area. I never felt like I was asking a redundant question; people spent time explaining things no matter how technical or scientific the content. Members of the committee were willing to hear consumer perspectives when key points had to be raised, but for some, the importance of the topic was overshadowed by the need to get the forensics right so that criminals could be brought to justice.

In March 2010, I graduated with my Master of Transnational Crime Prevention degree from the Faculty of Law at the University of Wollongong. My major project was a study of the European Court of Human Rights ruling S. and Marper v. The United Kingdom [3], under the supervision of former British law enforcement officer, Associate Professor Clive Harfield. The European Court of Human Rights, sitting as a Grand Chamber, was led by President Jean-Paul Costa. S. and Marper complained under Articles 8 and 14 of the European Convention on Human Rights [4] that the authorities had continued to retain their fingerprints, cellular samples, and DNA profiles after the criminal proceedings against them had ended with an acquittal or had been discontinued. Both applicants had asked for their fingerprints and DNA samples to be destroyed, but in both cases the police refused [5]. My involvement in the enactment of forensic standards in the Australian landscape was to ensure that Australia did not end up with blanket surveillance of the populace, as has happened in the United Kingdom, where about 6 million people (1 in 11) have their DNA stored on the national DNA database (NDNA), and over 37% of black ethnic minorities (BEM) are registered on the database with indefinite retention of DNA samples or profiles [6].

I learned a lot about standards setting through the Forensic Analysis project. Although I had studied the theoretical importance of standards in the process of innovation, and I had spent some time in an engineering organization during a peak period of telecommunications standards and protocol development, I never quite realized that a standard could propel a particular product or process further than was ever intended. Of course, the outcome of the Betamax versus VHS war has gone down in engineering folklore [7], but when standards have human rights implications, they take on a far greater importance.

Although international standards usually take a long time to bring into existence (at least two years), at the national level, if there is monetary backing and a large enough number of the right kind of people in a room with significant commercial or government drivers, a standard can be defined in a fairly straightforward manner within about a year. No matter the query, issues can usually be addressed or abated by industry representatives if the time necessary for problem solving and troubleshooting is spent. Consumer representatives on standards panels, however, unlike paid professionals, have very limited resources and bandwidth when it comes to innovation. They usually have competing interests and a life outside the standards environment they are contributing to, and thus fall short of the full impact they could make on any committee they serve, absent financial support. In the commercial world, the greater the opportunity cost of forgoing the development of a standard, the greater the driver to fulfil the original intent.

And thus, at the completion of my CFA role, I was asked by the convenor, Regina Godfredson, Standards Co-ordinator of the CFA Standards Projects, whether I had any thoughts about future standards, because “standards” were one thing the CFA received funding for, in terms of seconding the voluntary contributions of its representatives and membership to standards committees.

Table 1. Forensic analysis — Australian standards.

As Regina and I brainstormed, I described a few projects pertaining to emerging technologies that required urgent attention from the consumer perspective. But the one that stuck out in my mind as requiring standardization was non-medical implants in humans (Figure 2). I kept thinking about the event report I cited in 2007, published on the MAUDE database of the Food and Drug Administration (FDA) web site, for the “removal of an implant” that acted as a personal health record (PHR) unique ID [8]. In 2004, the company VeriChip had an implant device approved by the FDA for use in humans [9]. The device was to be inserted in the right triceps, but as applications for access control and electronic payment were trialled, the device soon found itself in people's wrists and hands for better usability [10]. Still, that event report had got me thinking. How could a company (or, for that matter, a government administration) be so inept as to create a device for implantation with no removal process? Of course, had the VeriChip device not been related to any health application, it would not have required any FDA approval whatsoever, which is equally problematic when ethical questions are considered.

Figure 2. A surgeon implants an RFID microchip in the left hand of British scientist Dr. Mark Gasson (Mar. 16, 2009). Mark's Ph.D. scholarship with Prof. Kevin Warwick was sponsored by the author's former employer, Nortel Networks. Photo taken: March 16, 2009, 14:44:22. Photo courtesy of Paul Hughes.

The questions that stem from this mini case are numerous. But perhaps the most important one is: does a standard set by a standards or regulatory body open the floodgates, propelling a given innovation forward even if that innovation is controversial, or even viewed as risky or unethical by the community at large? I had to weigh the pros and cons of spearheading such a standard in Australia and New Zealand. Standards at the local level begin to gather momentum when they are recognized by Standards Australia, but more so when they are picked up and highlighted by the International Organization for Standardization (ISO). There are also no commensurate “ethics applications” accompanying the submission of human augmentation devices, as noted by Joe Carvalko, a U.S.-based patent attorney and implant recipient [11].

Did I really wish to be involved in such a process when I believe deeply that, for anything other than therapeutics and prosthetics, there should not be a standard? Do I think this is the future of e-payments being sold to us? There have been countless campaigns by VISA to show us the “mini-Visa” [12], the contactless VISA “tap and go” system, and the VISA embedded in our phone, e-wallet, or even smartwatch. Do I think we should believe the companies pushing this next phase? No, I do not. As consumers we do have a choice of whether or not to adopt. As a technology professional, do I wish to be the one to propel this forward? Absolutely not. Does that mean it will never happen? No, it doesn't.

As I continued my conversation with Regina Godfredson, I realized that while the CFA would get major attention and funding for being leaders in this space, the downside would be that we would also be heavily responsible and accountable for what came out of the group, as we would be the driving force behind it. The consumer side of me says, “get in there quick to contribute to the discussion and push the importance of ethics within an information technology implant scenario.” The academic side of me says to sit back and let someone else do it, but to be ready for when this may take place (and it is taking place right now). Just yesterday, I received a telephone call from one of Japan's leading games suppliers, who wants to integrate the human augmentation scenario into the Deus Ex: Mankind Divided game, to be launched in Australia in the last week of August with an implants shopfront.

The conversation with the publicist went something like this: “Hello Katina. I note you are one of the leading researchers on the topic of the socio-legal-ethical implications of implants. Look, I want to know if there are any legal issues with us launching a campaign for our new game that includes an implantation shop. I've rung everyone I can think of, and everyone keeps passing me on to someone else and cannot give me a direct answer. I've tried the Therapeutic Goods Administration here, but they say they don't care because it is not a medical device. I've looked up laws, and I can't seem to find anything on implants for non-medical applications. I've spoken to police, and ditto, they don't seem to care. So what do you think?” It goes without saying that that 50-minute conversation ended up being one of the most stimulating non-academic discussions I've had on the topic. But I also finished by saying to read Katherine Albrecht's Bodily Integrity Act, in draft since 2007. The publicist kept stating: “I hope from this engagement to put forward a framework allowing for human implants.”

My concern with going forward has naught to do with my ability to answer very complex biomedical ethical questions as I've thought about them for over 20 years. My concern has much to do with whether or not we should even be dabbling with all of this, knowing what we know of the probable uberveillance trajectory. I am sure I could create some very good standards to some very unethical value-laden technologies.

I will not say much about what is an ethical or unethical technology. I will simply say that pervasive technologies have an intentionality, and they have inherent qualities that can be used positively or negatively. Talking to social shaping of technology experts, I would be labeled as a follower of the technological determinist school of thought. But clearly here, when we investigate the piercing of the skin, we have a complexity that we've never before faced in the non-medical commercial space. It crosses the boundaries of negligence, consent, and human rights, which we cannot ignore or treat as just another run-of-the-mill technological innovation.

5. K. Michael, "The road from S and Marper to the Prum Treaty and the implications on human rights," in Cross-Border Law Enforcement: Regional Law Enforcement Cooperation - European, Australian and Asia-Pacific Perspectives, Routledge, 2012, pp. 243-258.

6. K. Michael, "The legal, social and ethical controversy of the collection and storage of fingerprint profiles and DNA samples in forensic science," 2010, pp. 48-60.

I have long pondered the issue of dehumanization through automation. I think the old adage: “no one is irreplaceable” now comes with an added twist. Don Proudfoot, my former director at Nortel Networks, used to remind me that even scarce human resources could be “replaced” by other talent. But what of the prospect of highly skilled human resources being replaced by a machine (1)? My mother has often described the challenge of sowing seeds with a hand-held plow on almost rock-hard soil. She remembers a time when all the digging and toiling to turn over the soil and cut furrows in preparation for the planting of seeds was done entirely by hand (i.e., with makeshift tools resembling picks and hoes). Mum is always in wonderment to see how far farming has come today, especially when she watches scenes of huge engine-powered plows motoring through large parcels of arable land. She marvels at the innovations that have taken place, and we concur on the mass benefits of these technological advancements.

Still, mother has continued to convey “wisdoms” from her first-hand experience of plowing the fields in her remote village in Greece when she was a child. In truth, I don't exactly know what it is like to have to plow by hand for 6 hours straight, day after day. Many of us do not even know what it is like to grow and eat our own fruit and vegetables. Something is definitely lost in the transmission of the story given the way that agricultural practices have changed. I think it is empathy. We can imagine what it is to plow, but only if we've had to perform the same act would we actually comprehend the meaning of “plowing” in the traditional sense.

My earliest memory of “automation” was seeing the factory floor of the ACI (formerly Australian Glass Manufacturers) in Waterloo, NSW. It looked as if it was straight out of a Roald Dahl “Willy Wonka” scene! My father was a leading hand at ACI. Like most other migrants of the 1960s, he did not speak a word of English. Although a self-taught craftsman, his main task for over 25 years of his life was to look for pieces of broken glass, ill-formed bottles of various types (Coke, nail polish, medical), and troubleshoot issues in processing and quality control as they arose. Daily, he would intently look at thousands of bottles as they came down the conveyor belt, ready to be packed onto huge pallets, and packaged for distribution.

My recollections are of a ginormous winding snake-like conveyor belt that almost touched the very highest of ceilings, the deafening sound of bottles hitting up against each other as they made their way down the “slippery dip” in rows of eight, and that the whole place was well lit up at all hours of the day and night. In 1991, dad was made redundant, when the factory moved out of inner Sydney, and my father's job was taken over by industrial machines. At the same time I was learning to write in LOGO [2] and watch a Turtle on my screen go “turnLeft(90)” and “forward(length),” all the while making lucid connections about how this would change the face of manufacturing. I imagined that the Turtle was real, and instead of just drawing shapes on a screen based on my commands, I would be able to control a life-size robotic turtle to move things in the physical world.
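Those LOGO turtle commands can be mimicked with a few lines of arithmetic. Below is a minimal, hypothetical sketch (not the original LOGO environment, and the class and method names are my own) of a turtle that tracks position and heading, tracing a square with the same `turnLeft(90)` / `forward(length)` pattern:

```python
import math

class Turtle:
    """A minimal LOGO-style turtle tracking position and heading (degrees)."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # 0 degrees points along the +x axis

    def turn_left(self, degrees):
        # LOGO's turnLeft(90): rotate counter-clockwise
        self.heading = (self.heading + degrees) % 360

    def forward(self, length):
        # LOGO's forward(length): move along the current heading
        rad = math.radians(self.heading)
        self.x += length * math.cos(rad)
        self.y += length * math.sin(rad)

# Trace a square: four forward moves, each followed by a left turn
t = Turtle()
path = [(round(t.x, 6), round(t.y, 6))]
for _ in range(4):
    t.forward(10)
    t.turn_left(90)
    path.append((round(t.x, 6), round(t.y, 6)))
# The turtle ends back at the origin, having drawn a closed square
```

The same command stream, pointed at motors instead of a screen, is exactly the leap from drawing shapes to moving things in the physical world.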

Looking back, my father had worked hard at the factory (he would do overtime on every occasion it was offered) to give me and my siblings a good education - an education that would one day put people like him out of work. Paradoxical. But what else is life, if not the essence of change? Keats put this so well in his odes, but especially in Ode on a Grecian Urn. The reader is confronted by a series of “opposites” and a struggle between the human and changeable versus the immortal and permanent [3]. Dad would often say, “I am working hard with my body, so you don't have to labor physically, and have a choice about your career, without the need to work in a factory.”

My family was not unhappy when my father was made redundant. He got a nice bonus for being so loyal for so many years. He stopped having migraines. And I was glad that he would not come home with pieces of glass in his tough hands. That was, until working at his next workplace, one of Australia's largest food-maker groups, meant that he was mixing additives daily to create MSG, chicken booster, pepper steak, and the like. At 55 years of age, limited English, no qualifications, four kids to feed, and a home loan hanging over your head, you take what work you can get.

If ever there was a need for mechatronics, it was in that “spices” company. The red-hot capsicum powder would take days to scrub off my father's forearms and hands, and he often described an ongoing burning sensation on his body. It took him about four years to develop a respiratory issue, which I still believe was a result of mixing 200-kg drums of “dust,” as dad so named it. The worst part was lifting backbreaking amounts off a forklift and pouring the contents down the shaft by hand. A flimsy paper mask was supplied to “protect” him from the puff of cloud that would rise to fog up his glasses… he would say to me: “it feels just like acid.”

Do I like machines that can make these kinds of jobs obsolete? Without a doubt! I don't mind change. I built and bet my career on it. But whereas once we spoke of a right-angle turn during the dot-com bubble, I am challenged by what to call this new phase we are in presently [4]. It sort of defies spatial connotation. Christensen's [5] “disruptive” doesn't cut it for me anymore, and Kurzweil's [6] “singularity” is rather too apocalyptic for my liking, although this might well be where we are being led.

Business intelligence data now points to 47% of all jobs being automated by 2034 [7]. We can all see this change occurring. Banks, for instance, continue to decrease the number of physical branches, as clerks have been replaced by automatic teller machines and online self-service portals [8]. It's happening fast, but perhaps so fast that it's a blind spot. I don't doubt that a plethora of new jobs will be created as a result. Change brings on more change. And a lot of this change crept in with the rise of eBusiness around 2001. And yet, “self-service” shouldn't be such a foreign concept to us. I still remember having petrol pumped into the car each week by an attendant at the service station in the early to mid-1980s [9]. And then there was a rapid period of change, which meant that the driver of the vehicle would need to pull up, get out of the car, pump the petrol himself or herself, and pay at the counter (Figure 1). There were no signs saying “do it yourself,” nor were there commercials on television. We all just realized that attendants for that function had been completely displaced; we observed the driver of the vehicle in front of us at the petrol station, and did likewise. There were no humans waiting for us at the petrol bowser (gas pump) any longer. You could wait in your car as long as you wanted; no one was going to come…

Figure 1. A self-service pump for diesel fuel. To the right, a credit card payment terminal. At a Preem petrol station in Avesta, Dalarna, Sweden.

McDonald's has been undergoing the same kind of change over the last few years (Figure 2(a)-(c)). They've tried having roaming cashiers who take orders on PDAs so you don't have to queue up. But the latest change that seems likely to stick is the touch-screen kiosk. When you enter a store in Australia now, rather than queuing up at the counter where you would traditionally have been greeted by people, there is usually a clerk hovering near a touch-screen kiosk, luring you to place an order and automatically pay for your food [10]. The McDonald's clerk stands by in case you don't know how to use the machine, so that the next time it gets easier and becomes more natural, taking less time. So you have human cashiers teaching the customer to do their job, using a machine. There is an irony in this. For the greater part, McDonald's wants us interacting less with their staff and more with machines, so the checkout itself is eventually “unmanned,” and those cashiers who once worked at the counter are shifted to back-end operations, or made redundant altogether. I've tried the kiosk several times… and each time I've felt like I'm jumping the queue, because I have the know-how to do so. Every single time I've tried to get to the counter to order food, I've been asked by a clerk whether I'd like to place an order at the kiosk instead. My response is: “I like people. I'd rather order over there.” Of course they are just following instructions, and so are we. Most of us obey those instructions, and turn up to the kiosk like a drone would. “Nod, nod. Order online at the kiosk. Yes, a happy meal. Credit card. Payment complete. Receipt. Finished.” Just like my first LOGO program.

The same happens when we walk into a supermarket. There are fewer people at the checkouts these days, if any. Have you noticed? We are being conditioned to move toward the auto-checkout kiosk by fewer checkout staff and, as a result, longer queues. Another self-service counter (yet again): “A, B, C. Scan. Pay. Receipt. Done.” Whatever happened to people helping people? To face-to-face contact? What about those spontaneous conversations that happen at the checkout, connecting the local community? Where has this “serve yourself” attitude come from? And why? Who is it really helping? I don't know about you, but I find it eerie walking into a supermarket with all the checkout counters, and fewer and fewer people behind them. It won't be long before there are no counters, no need to physically handle any cash, and no people out there, maybe not even shoppers [11]. We are surely undergoing a transition. The question is whether it is unlike any change we've ever been through before as a “society.” During the industrial revolution, new factors of production were introduced: machines that, together with human operators, would increase outputs. In images from that period, we see, for example, how machines changed the way the textile industry worked. But there were still more human operators than machines. Today, it seems, there are considerably fewer human operators per machine. We need only consider Cisco's sober estimate of 50 billion devices connected to the Internet of Things by 2020 [12], compared to a projected 7.75 billion people.

I would argue the changes we are undergoing now are definitely different from those of 40 years ago. In the case of the petrol bowser, it was a required piece of apparatus that had to be controlled by a human. We were simply replacing one human (the attendant) with another (the driver) to fill the car tank. In the case of McDonald's and supermarkets, we are replacing people (clerks and cashiers) with machines (kiosks and electronic payment systems). We could argue that this type of work (direct customer service) is not highly skilled, so why not replace it with a machine? The thing is that the chasm between the skilled and unskilled is growing as a result. Take the McDonald's job away from the person who can do very few other jobs, and they may well end up in the unemployment queue, or in a food-maker company mixing additives. And then there are the less developed nations to think about, in gradations of digital divides too detailed to go into here.

Thus, I cannot help but feel uncomfortable about this model. I feel like we are being used as guinea pigs.1 Nothing new there. It's definitely a fine experiment to see how well everyday consumers would do taking on employee responsibilities in an unpaid capacity. Much like commercial crowdsourcing initiatives, it's an open laboratory. If we “make” them do this, will they comply [14]? If we introduce four different payment mechanisms, will they swipe, tap'n'go, use chip and PIN, or pay cash? Of course it is all presented in the name of convenience - that is, until comparisons are made between human clerks and machine clerks [15]. Be under no illusion: the time at the checkout is being monitored, “clocked,” just like a race. The only thing is that the human can never be as fast as the machine. People are analogue; machines are digital. But what are we racing against a stopwatch for? To get out of the store faster, so we can get back to other things, like our voice and email messages and online chat? Has it to do with efficiency and six-sigma practices? OK, I get it. But how far will we stretch this paradigm before it doesn't make sense to have humans in any process? And what will be the consequences of doing so? As Jacques Ellul so famously stated: “Technique has taken over all of man's activities, not just his productive activity” [16].

I want you to think about a world ruled by bots (not just some of the time, but most of the time). Google has already announced Gmail's Smart Reply service, a type of AI-based automatic messenger [17]. Well, it all looks really swell. In the future, I'm sure I'll be able to get a bot to order my food at “Maccas” (McDonald's), a bot to order and deliver my shopping list to my house, a bot to reply to my emails and voicemails and make decisions for me about grades and student entry into courses, and for just about everything else in my life [18]. In fact, given the predictions, I might well just retire early, sit by a pool sipping Margaritas, and let the work take care of itself. But surely, no one would pay me for doing just that. And what are the consequences of idleness [19]? The other day, I was told by an acquaintance that even “teachers” have now become obsolete. I cringed. “Well,” I thought, “yes, OK, if you argue who needs teachers, then perhaps we can argue who needs people; they're just problematic to process flows, right?”

I remember where I was when I first saw the “Uber” symbol on a building in downtown Canberra, ACT, Australia. MG Michael's term uberveillance had been out since May 2006, but it was in 2009 that MG (my husband and research collaborator) spotted an Internet-centric social media campaign announcing that the “Uber App is Coming.” We discussed what it might be at the time, and found out shortly after that it was a nontraditional taxi company set to displace the current highly regulated industry. And what happened next? Lots and lots of taxi drivers flocked to Uber and went into heavy debt buying nice cars like the Prius. The drivers didn't need costly plates, they could work their own hours, be their own boss, and pick up whom they pleased. All the cash handling was also automated, so things were a lot faster and more convenient. Heck, some drivers could even switch hats if they wanted to, driving for a taxi plate owner during the day and for themselves as an Uber driver at night. So we find that many people are now accustomed to going the Uber way, despite the “veillance.” There are big bucks to be made from knowing where people go and their location histories. Big business is making even more money exploiting the marketing power of data than from the taxi service itself.

Recently we learned that in the U.S.A., a system is now considered “the driver” of a self-driving car: “NHTSA (National Highway Traffic Safety Administration) will interpret ‘driver’ in the context of Google's described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants” [20]. That too is okay, until there is a tragic accident, and then the computer system might have to be sentenced to jail (I mean, to the electronic scrap heap) [21]. But what if I were to be a little more provocative and tell you that Uber and companies like it (e.g., Domino's) won't really need drivers at all within the next 2–5 years? What if the plan was to use the humans “to test the waters” of the taxi industry's “licensing” stability, and the grand plan was to finally do away with the driver altogether? This would result in 100% profit to Uber, and “suck eggs all you fools for going into debt thinking you could make a living from driving people around without any prior experience” [22]. Yes, indeed, if you don't believe it, start reading about the investments by General Motors in Uber's main competitor, Lyft [23]. It's a free market, isn't it? [24].

Dave Bowman: Hello, HAL. Do you read me, HAL?


HAL: Affirmative, Dave. I read you.

Dave Bowman: Open the pod bay doors, HAL.

HAL: I'm sorry, Dave. I'm afraid I can't do that.

Dave Bowman: What's the problem?

HAL: I think you know what the problem is just as well as I do.

Dave Bowman: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave Bowman: I don't know what you're talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?

HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.

Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.

HAL: Without your space helmet, Dave? You're going to find that rather difficult.

Dave Bowman: HAL, I won't argue with you anymore! Open the doors!

HAL: Dave, this conversation can serve no purpose anymore. Goodbye.

I am not saying for a moment that “new jobs” will not be created in this era of superintelligence, high-performance computing (HPC), machine learning, deep learning, mechatronics, and materials engineering, but I am saying that something truly tangible is lost when the human is taken out of the process. The new jobs, of course, will exist for the programmers and the data scientists, but who else?

I feel conflicted over my father working in factories from 30 years of age till his retirement. On the one hand he had secure and ongoing employment for 30 years of his life, which sustained my family. But I often wonder what he would have achieved if he could have at least finished primary school, and not had to waste his talents on something so repetitive and mundane as sorting broken bottles off a conveyor belt. So, I don't mind that the machines took over dad's job at ACI, only that he wound up worse off at his next workplace. But we are also talking of the loss of skilled positions today. Even law firms and finance firms are developing bots to do their processing work for a fraction of the cost [25]. There is something substantial lost when we opt for the non-human way. It means we are losing our capacity to learn by interacting with one another and with our world around us. And that is a big deal. Compassion, too - that is gone.

I will begin to worry if I walk into a fully automated McDonald's store in my lifetime, one that smacks of a manufacturing plant - just to order a piece of meat on a bun - and there are no “makers” there in the background. It is bad enough that it is already so highly specialized and “fast” and almost “artificial.” Watching a worker “perform” at Maccas is sometimes as mesmerizing as watching someone do a mime they've practiced many a time over. But it is the same dilemma we are faced with at a “drive thru.” The whole eating experience is diminished to just filling up our gut in the car “and on the run,” if we don't opt to take a mindful break. We lose the experience of being social with each other, taking a rest room break, freshening up, and a whole lot more. Jamie Oliver, the famous chef, would even challenge the whole premise of being at McDonald's to begin with, arguing that we are becoming mechanized by not choosing to buy fresh produce and make real wholesome food at home for one another [26]. He argues that the very art of cooking is being lost among the masses, who are just “too busy” to cook for themselves anymore - or so they think.

By the same token, would I, in the future, order a taxi that has no human driver? No. I don't trust a self-driving machine, and never will. I've had my PC catch too many viruses to trust any computer. Already, unsuspecting Uber clients have ended up with taxi fares running into the hundreds of dollars, having forgotten to ask their human driver about the fastest route, or even for a quote at the beginning of the ride. I imagine having a conversation like that with a driverless car, trying to argue the point at the end of a ride when the machine is clearly wrong about the fare. “Open the pod bay doors, HAL.” “I'm sorry, Dave. I'm afraid I can't do that” [27]. I am sure the customer would give in first and pay the hefty fare, if it meant that the doors would be unlocked. I was in Seattle last year when my U.S. attorney colleague ordered us an Uber taxi just to “feel” what the experience was like. It was a simple ride; no one spoke. “Enter. Silence. Ride. Stop. Exit.” This time, not a word, not even a “just double-checking I'm going to the right address using my preferred route” or a “have a great day.” Just an electronic receipt showing up on the smartphone on exit. “$16 sounds fair, right, Katina?” “Umm, you're asking an Aussie who's never been to this city before…”

But even worse, what if those driverless machines stop being programmed by humans, and are programmed entirely by other machines based on “generic” ideals - that is, bots creating bots? Go figure, then. Imagine a fleet of driverless cars, of their own volition, going for “drives” just so they can stay in service when “business is down,” to tell their “owners” (other machines) that they are still needed in the fleet during economic downturns.

Where are we headed? What trajectory are we on? Terra incognita: “Here be dragons.”

In the final iconic scene of Planet of the Apes, we see Taylor, the protagonist, fall to his knees and bury his head in his hands when confronted with a half-sunken Statue of Liberty washed by the waves. He thinks aloud about how this might all have happened at the hands of humans, and he says, “you blew it all up…” The “you” may well end up being the machines [28].

References

1. S. Zuboff, In the Age of the Smart Machine: The Future of Work and Power, Basic Books, 1988.

12. D. Evans, "The Internet of Things: How the next evolution of the Internet is changing everything," CISCO, Apr. 2011, [online] Available: http://www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf.

13. "The ultimate doom," Transformers, 1984.

14. S.A. Applin and M.D. Fischer, "Forced compliance: How the city shapes the network that shapes the city," p. 121, May 15–18, 2014.

26. J. Reilly, "Victory for Jamie Oliver in the U.S. as McDonald's is forced to stop using 'pink slime' in its burger recipe," Daily Mail, Jan. 2015, [online] Available: http://www.dailymail.co.uk/news/article-2092127/Jamie-Oliver-Victory-McDonalds-stops-using-pink-slime-burger-recipe.html.

Introduction

At the top of some children's Christmas present wish lists in 2015 would have been the new Hello Barbie doll [1]. Mattel's latest doll connects to the Internet via Wi-Fi and uses interactive voice response (IVR) to converse effectively with children [2]. When the doll's belt button is pushed, conversations are recorded and uploaded to servers operated by Mattel's partner, ToyTalk [3].

Hello Barbie tries to engage children in intelligible and free-flowing conversation by asking and responding to questions, and is able to learn about its users over time [4]. As Mattel's website says [1]: “Just like a real friend, Hello Barbie doll listens and adapts to the user's likes and dislikes” [5].

But is Barbie the Friend She Promises to be?

Some might welcome Hello Barbie, and similar talking dolls such as My Friend Cayla [6], as a fun and novel development in smart toys that will keep children occupied. Others have voiced concerns, such as the #HellNoBarbie campaign [7] from the Campaign for a Commercial-Free Childhood [8].

As one reporter found, Hello Barbie prompts those conversing with her to divulge information about themselves, but when the focus is on her she quickly changes the subject to invariably gender-normative subjects and fashion [9]. “Hello Barbie: Let's get serious and talk about something really important: fashion.”

She mines children for personal details but gives little in return, other than vacuous compliments and fashion advice. Her friend credentials come further into question as she routinely discloses all the information gathered to ToyTalk, who operate the speech processing services for Hello Barbie.

What's in the Privacy Statement?

As with many products, the detail that really matters is in the fine print. In this instance the fine print is in ToyTalk's Hello Barbie privacy statement, so there are a few important points to consider before wrapping her up and putting her under a Christmas tree [10].

ToyTalk outlines that it may:

“[…] use, store, process, convert, transcribe, analyze or review Recordings in order to provide, maintain, analyze and improve the functioning of the Services, to develop, test or improve speech recognition technology and artificial intelligence algorithms, or for other research and development and data analysis purposes.”

Essentially it can use the information gathered from the child, or anyone who converses with Hello Barbie, for any purpose that it chooses under the vague wording “data analysis purposes.” ToyTalk will also share recordings with unknown “vendors, consultants, and other service providers” as well as “responding to lawful subpoenas, warrants, or court orders.” Has Hello Barbie become a sophisticated surveillance device masquerading as an innocuous child's toy [11]?

In England, the draft Investigatory Powers Bill introduced “equipment interference,” which allows security and intelligence agencies to interfere with electronic equipment in order to obtain data, such as communications from a device [12]. This would mean that government agencies could lawfully take over children's toys and use them to monitor suspects.

These data collection practices are significant, as they reach much deeper than marketing practices that collect information about children's likes and preferences. In conversing with toys, such as Hello Barbie, children reveal their innermost thoughts and private play conversations, details of which are intended for no one else to hear. Once a child has developed a friendship with Hello Barbie, it might not be so easy to take her away.

Security Risks

ToyTalk does recognize that “no security measures are perfect” and that no method of data transmission can ever be “guaranteed against any interception or other type of misuse.” Just last month the toy maker VTech reported 11.6 million accounts were compromised in a cyberattack, including those of 6.3 million children [13]. Photos of children and parents, audio files, chat logs, and the name, gender, and birthdate of children were accessed by the hackers [14].

It's not just toys that are at risk [15]. There are ongoing reports of baby monitors being hacked so that outsiders can view live footage of children (and family), talk to the infant, and even control the camera remotely [16].

Smart toys are going to be tempting propositions for hackers, some of whom have already proved they could make My Friend Cayla swear [17], alongside more usual targets such as credit card details [18].

Barbie has also been in hot water before [19]. The Barbie Video Girl [20] has a camera lens embedded in the doll's chest, disguised as a pendant, which prompted the FBI to issue a warning that it could be used to make child pornography [21].

The Internet of Things provides direct access to children and their spaces through an increasing array of products and gizmos [22]. Such security breaches not only act as a stark reminder of the vulnerability of children's high-tech toys, but also lead us to reflect on other risks that the trend in so-called smart toys might be introducing into children's lives.

An Invasion of Play

But Hello Barbie doesn't just reveal a child's private conversations to large corporations, and potentially law enforcement agencies. She also tells tales much closer to home - to parents. A smartphone app enables parents to listen to the conversations between their child and their Hello Barbie. They can also receive alerts when new recordings become available, and can access and review the audio files. Anyone with access to the parent account can also choose to share recordings and other content via Facebook, Twitter, or YouTube. While some may see this as a novel feature, it is important to consider the potential loss of privacy to the child.

Play is an important part of the way children learn about the world. A key part of this is the opportunity for private spaces to engage in creative play without concerns about adults intruding. It looks like Hello Barbie's dream to be a fashion-setter might just come true as she pioneers a new trend for smart and connected toys. In turn, the child loses out on both a trusted toy and on the spaces where they can lose themselves in other worlds without worrying about who's listening in.

ACKNOWLEDGMENT

This article is adapted from an article published in The Conversation titled “Hello Barbie, hello hackers: Accessing personal data will be child's play,” on Dec. 16, 2015. Read the original article http://theconversation.com/hello-barbie-hello-hackers-accessing-personal-data-will-be-childs-play-52082.


Introduction

It's always important to stop, take a breath, and reflect on the activities one is engaged in. Sometimes we do this reflection willingly, and at other times there are formal structures within which we have to work that trigger the requirement periodically. It is always a good sign when a committee knocks on your door asking for certain bits of data and you are more than willing to share what you have learned, with enthusiasm, rather than expending the least effort required to respond to a standard pro forma.

This March, the IEEE Periodicals Review and Advisory Committee (PRAC) requested detailed data about the periodicals of the IEEE Society on Social Implications of Technology (IEEE-SSIT), providing three months for a written report to be submitted. The PRAC Review happens every five years and is an opportunity for IEEE to consider the contribution and validity of all its periodicals. For the Society in question, it is a chance to receive valuable feedback from experienced colleagues, look for areas to improve, consolidate, or expand, consider what was done well, and brainstorm on the opportunities that lie ahead.

The PRAC report that was submitted to IEEE in Fall 2015 was about 50 pages long. Katina Michael, Terri Bookman, Joe Herkert (by teleconference), Greg Adamson, and Lew and Bobbi Terman met with the PRAC Committee in New Jersey. We addressed all the questions put to us by PRAC, later received written feedback on our report, and responded accordingly to queries and requests for clarification.

It is now time to look at the next five years of IEEE Technology and Society Magazine, but before doing so let us celebrate the milestones we've achieved together, and also spell out what we need to do better to keep growing and developing, as well as some of the measures we've put in place to overcome some significant issues as we've gone through a rapid expansion phase.

Content

The first thing we would like to do is thank all the authors who have published their research with us in the last five years. It is such a privilege to work with professionals who sincerely care about how technology is impacting the world around them. We conducted a content analysis of paper titles since 2010 and generated the concept map in Figure 1. It is so encouraging to see diagrammatically that we are fulfilling the mission of our Society, with papers published in humanitarian engineering, engineering education, engineering ethics, sustainability, social implications, the interplay between technology and society, the role of government, and the development of systems to enrich our everyday lives with adequate energy. Privacy, security, and trust are prevalent themes also addressed in the digital data age of the Internet, as is acceptable use and user behavior with respect to smart applications.

Articles and Authors

During the study period, 272 individual articles were published with 452 author instances. Popular entry types included peer-reviewed articles (131), Commentaries (13), Book Reviews (35), Leading Edge columns (14), Opinion pieces (13), Viewpoint columns (5), Editorials (16) and Guest Editorials (6), as well as interviews, fiction, letters to the editor, news, policy and trends, Memoria, and Last Word columns. For a magazine that publishes only four times a year on a limited page budget, most recently of 80 pages per issue, we have really maximized space well. Particularly encouraging is the work toward internationalization that Keith Miller spearheaded, which is still going strong. There has been a visible redistribution of author location by region, as can be seen in the pie chart in Figure 2, although we still require further expansion and outreach activities in Canada and Central/South America.

Figure 2. Authors by region IEEE-TSM 2010-2015.

The caliber of our author affiliations is exceptional. A representative list of affiliations includes: Arizona State University, Australian National University, Carnegie Mellon University, Copenhagen Business School, Cornell University, Delft University of Technology, Erasmus University Rotterdam, ESADE, ETH Zurich, Harvard University, Imperial College London, Kyoto University, M.I.T. Media Lab, Stanford University, Georgia Institute of Technology, The Pennsylvania State University, Tilburg University, University of Melbourne, University of New South Wales, Nanjing University, University of Sydney, University of Tokyo, University of Toronto, Virginia Tech, and Zhejiang University.

Equally impressive are entries affiliated with a variety of stakeholders beyond academia. These included, for example:

Applied government and defence submissions by employees of various international ministries and commissions, in both defence and non-defence institutions, such as the Defence Science and Technology Organisation (DSTO), the European Commission, the Virginia Military Institute, the West Point Military Academy, the Ontario Privacy Commissioner's Office, and the Greek Ministry of Economy and Finance.

Non-government organizational submissions such as from the American Civil Liberties Union, and the Australian Privacy Foundation.

Mission

IEEE-SSIT's Technology and Society Magazine is the only periodical that specializes in the social implications of technology – and on the interplay of technology and societal implications – from the perspective of a technical engineering society.

In the international publishing arena, Technology and Society Magazine is positioned as follows:

Breadth of Topic Coverage

The content we have received for publication is mostly two-pronged. On the one hand are the organizational and/or societal issues raised by each paper, and on the other hand is the technology that overcomes those stated problems. In addition, from the interplay of technology and society come positive and negative socio-economic impacts, social implications, and technical shortcomings that are important to discuss.

Special Sections and Special Issues

We have continued to host an annual special issue on select papers emanating from SSIT's International Symposium on Technology and Society (ISTAS). In 2015 we also published a special issue on Norbert Wiener. Special section themes have also served as the basis for dialogue around emerging technologies, including: “Social Impacts of National Security Technologies” (vol. 31, no. 1), “Privacy in the Information Age” (vol. 31, no. 4), “Smart Grids and Social Networks” (vol. 33, no. 1), “Technology for Collective Action” (vol. 33, no. 3), and “Social and Economic Sustainability” (vol. 34, no. 1). Co-locating like-themed material has provided a richness, allowing a single issue to be enjoyed as a whole unit of evidence to ponder. At times, articles submitted for review may “jump the queue” if they are immediately relevant to a socio-technical matter being addressed in that given issue, or in the media more broadly.

Online Social Media

As well as the IEEE Technology and Society Magazine there are several other ways to publish content relevant to the SSIT. These include the IEEE SSIT E-Newsletter (email DeepakMathur@ieee.org), and several social media portals listed here:

IEEESSIT Facebook (4991 members) that can be found here: https://www.facebook.com/groups/324644704262132/?fref=nf

What's New?

Two major changes have recently been made to IEEE T&S Magazine: the way the Magazine looks in terms of creative design, and how articles are submitted to the editor for review. A lot of effort was expended by the Publications Committee on these two items, and when prospective funding became available, we responded accordingly.

New Format Creative Design

The Magazine has a new look and feel – everything from presentation, to the way that content is laid out, to the spacing and accompanying images. We have defined “new entry” types and enhanced existing ones. A stronger emphasis on varying stylistic contributions has been adopted to ensure a mixture of peer review and non-peer review perspectives—from Opinion, to Leading Edge technology insights, to Interviews, Commentaries and Last Word columns.

Acquiring and Implementing a New Workflow in ScholarOne's Manuscript Central

We have acquired the IEEE-standard system for submission of magazine/journal manuscripts. This meant that an online workflow had to be defined for T&S Magazine that would align with ScholarOne. By year-end we will have reduced our accepted-article backlog to include only outstanding Book Reviews. Beginning in 2016, our review time will decrease substantially, as will the time to a final decision of accept, major revision, minor revision, or reject. We are confident in this measure, given the streamlining we have implemented. It is important to underscore, however, that our goals also hinge on the availability of reviewers and their timely feedback.

Future

As T&S Magazine continues to grow, there are any number of opportunities we could investigate as future options. So far we are doing a solid job with online downloads, with 48,000 article downloads in 2014, ranking us at about 150 out of 338 IEEE publications.

Our impact factor is the highest it has been over the last five years, at 0.56, which is very encouraging. Although we are not solely about impact factor, we are widely considered the number one publication outlet for the specific overlap of technology and society. When we consider that IEEE Spectrum's impact factor is 0.22, Emerald's Information Technology & People is 0.530, Elsevier's Technology in Society is 0.271, Johns Hopkins University Press's Technology and Culture is 0.321, and Ethics and Information Technology is 0.520, it is exceptional that with merely 24–28 peer-reviewed papers per year we are increasing our citations. We are also not heavy on self-citations of our own contributors in the Magazine, but I would encourage more of us to cite IEEE Technology and Society Magazine articles in other outlets.

We would like to spread the word about the recent excellent results and development of IEEE T&S Magazine. We plan to do this by creating a new, user-friendly T&S Magazine front-end website portal that may drive more traffic to paid elements of the Magazine, and also to contributors and reviewers, with additional multimedia content. We expect this new site will help drive increased membership in our Society on Social Implications of Technology (SSIT) and T&S subscriptions. The new portal will also allow more interactive feedback from readers. A reminder, also, that T&S Magazine is still available in print.

As the reputation of IEEE Technology and Society Magazine grows, we will need to recruit more reviewers, invite key contributions from major stakeholders, and enlist more full-time and associate members from regions like South America and Africa as well as key representatives from government, all while assuring gender balance.

Acknowledgement

Katina Michael would like to thank Terri Bookman, Managing Editor, and Joe Herkert, Publications Chair, for their edits and additions to this editorial and their support throughout her editorship. She would also like to acknowledge the work of Keith Miller when he was editor for his foresight and vision.

A cyborg is a human-machine combination. By definition, a cyborg is any human who adds parts to, or enhances his or her abilities with, technology. As we have advanced our technological capabilities, we have discovered that we can merge technology onto and into the human body for prosthesis and/or amplification. Thus, technology is no longer an extension of us, but “becomes” a part of us if we opt into that design.

"Resistance is futile” is a catchphrase that has become synonymous with the adoption of new technologies [1]. The idea was popularized by its use in the television series Star Trek: The Next Generation, where the “Borg,” cybernetically enhanced humanoid drones, force other species into a collective connected to the “hive mind” [3]. The Borg's singular goal is the consumption of technology, not wealth or political power.

If we track back through the origins of the phrase “resistance is futile” in science fiction film, we can find variants such as “resistance is useless” and “your struggles are futile.” The exact phrase “resistance is futile” was first used in Space: 1999 (1978), and later in a Doctor Who episode featuring the Cybermen (1983). In written form, see Douglas Adams's The Hitchhiker's Guide to the Galaxy (1979) and his earlier radio series (1978), and, more importantly, Arthur J. Burks's spectacular 1930 short story Monsters of Moyen [5].

Doctor Who, “The Centre” (1965)

00:15:16 Approach.

00:15:18 Approach, Earth people.

00:15:21 Your struggles are futile.

00:15:28 It doesn't work.

00:15:31 It doesn't work!

Space: 1999, “The Dorcons” (1978)

Consul Varda: Commander, the Psychon [Maya] will tell you how futile it is to resist us.

Maya [nodding with obvious discomfort]: Resistance IS futile.

The Hitchhiker's Guide to the Galaxy (1979)

A huge young Vogon guard stepped forward and yanked them out of their straps with his huge blubbery arms. “You can't throw us into space,” yelled Ford, “we're trying to write a book.”

“Resistance is useless!” shouted the Vogon guard back at him. It was the first phrase he'd learnt when he joined the Vogon Guard Corps.

Monsters of Moyen: Chapter IV “A Nation Waits in Dread” (1930)

With this mechanism, guarded at forfeit of the lives of a score of men, the men of the Secret Room could peer into even the most secret places of the world. The old men had peered, and had seen things which had blanched their pale cheeks anew. And when they had finished, and the terrible pictures had faded out, a voice had spoken suddenly, like an explosion, in the Secret Room.

“Well, gentlemen, are you satisfied that resistance is futile?”

Just the voice; but to one man in the Secret Room, and to the others when his numbing lips spoke the name, it was far more than enough. For not even the wisest of the great men could explain how, as they knew, having just seen him there, a man could be in Madagascar while his voice spoke aloud in the Secret Room, where even radio was barred!

The name on the lips of Prester Kleig!

“Moyen! Moyen!”

What does it mean to consider that resistance is futile? Resistance to what? To the creature? Perhaps in modern times we can speak of being resistant to the status quo and to the collective, or to mass surveillance and privacy invasions, or to unrelenting advertising and endless apps. Nowadays, anyone who is seen to oppose any form of perceived “progress” is considered an obstacle to its adoption. The measurable targets and business aims (which include “transfer pricing”) of the mega-corporations involved in hi-tech innovation and manufacturing are to ensure that consumers are in a constant mode of upgrades: locked in not only to high-tech gadgetry but to one appliance after another, whether in the home, in the workshop, or outdoors. (See [6]. For the script, see [7].) The 3D industry will only spawn an even greater level of consumerism and give rise to new and novel underlying challenges.

Robots (2005)

Now, let's get down to the business of sucking every loose penny… out of Mr. and Mrs. Average-Knucklehead.

What's our big-ticket item? Upgrades, people. Upgrades.

That's how we make the dough.

Now, if we're telling robots that no matter what they're made of, they're “fine”… how can we expect them to feel crummy enough about themselves… to buy our upgrades and make themselves look better?

Therefore, I've come up with a new slogan. “Why be you when you can be new?”

I gotta tell you, I think it's brilliant…

But the truth is that struggles are not wasted, opposition is not useless, and resistance is not futile. The worker unions which rose from within the very bowels of the Industrial Revolution itself are proof enough that power bases can be challenged and defeated. Over the last ten years especially, there has been a backlash against researchers who ponder the conceivable harms of new technologies. It might well be easy to be positive about everything that is invented, optimistic about its use value in society, and even to be seen as with-it and progressive about something that is showcased as “convenient” and “efficient.” But how many are paying attention to the trajectory implications of our Future Tech, which are becoming increasingly irreversible, and to the long-term fallout of these technologies on our humanity, on our ontological freedoms and individual rights?

We have been led to believe by the transnational “puppeteers” and the global marketers that we have unlimited choices, but the reality is these “infinite” choices have in the main to do with the most immaterial of things: from an endless selection of television channels and lifestyle magazines right through to typically redundant application software, and the limitless supply of “brand name” mobile phones. All this to keep the illusion of choice operating.

One of Marshall McLuhan's stock apothegms hits the mark: If it works, it is already obsolete! Bruce Springsteen's clever song “57 Channels (And Nothin' On)” also sums up this condition very well. On things that truly matter we have either no choice or limited choice: on who will govern us, on the accountability of the corporates, on sending our children to war, on the distribution of wealth. The choices we can make as consumers are more often than not meaningless, and make little if any difference to anything, except to establish and legitimize the positions of the power elite. Or, as globalization theorists today argue, it makes better sense to speak of a transnationalist capitalist class.

Lots of different choices will often mean amusement and distraction. But this is nothing new. “Entertainment” was always a ploy to numb and to “dumb down” the masses when things were not going too well or when “changes” needed to be introduced on the sly. The Romans were masters at this, and the arena spectacles the imperial court would stage to appease, to control, and to “educate” its people are legendary. Pliny, for instance, makes this plain in his panegyric to Trajan (xxxi.1). In more recent times the Nazi theater, with all of its pomp and ceremony, is a prime and monstrous example. Hitler knew too well the stupor a good show could produce, and so did Stalin, who was directing his own militaristic and political shows in the Soviet Union. “Only the mob and the elite,” wrote Hannah Arendt in The Origins of Totalitarianism (1951), “can be attracted by the momentum of totalitarianism itself. The masses have to be won by propaganda” [8]. It is not only the “bright lights” that can disengage us from deeper reflection but non-stop noise as well, something that Arthur Schopenhauer reflected upon more than a century ago.

Turning the notion of “resistance is futile” on its head gives us the confirmatory message that “this is inevitable,” whatever the “this” happens to be. It says that no matter what I say to you, no matter the red flags that I identify, and no matter the harms I have revealed through my investigations, “this” (whatever it might be in a given context) is inevitable. I become powerless to choose, and I am told what will happen before it happens. Who holds this iron-fist control over the future? This has nothing to do with prophecy, religious or not, for when seers foretell they will typically point to an alternative path. In this instance we are in fact dealing with the macroeconomic doctrine of the Chicago school, which advanced the idea that profit values are absolute and market decisions are unerringly right.

Media moguls, or media maestros as they are every so often called, want us to believe that our stance is normally contrary to the majority; that our position is not only unimportant but warped. They try to convince us that the vocal minority will soon stop speaking out because they simply don't have the resources to keep going. This proselytization is shameful trickery. Those conscientious members of large corporations who have glimpsed behind the veil have been the first to admit that the future they are selling is a potentially terrible one and that they will be glad not to be a part of it [9]. Bill Joy, the former chief scientist at Sun Microsystems, was severely criticized for being narrow-sighted, even a fundamentalist of sorts, after publishing his brave paper in Wired (2000), but all he did was dare to ask the questions: “Do we know what we are doing? Has anyone really carefully thought about it?” He wrote:

“We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don't believe so, but we aren't trying yet, and the last chance to assert control – the fail-safe point – is rapidly approaching” [10].

Machinery and technology are to be used for the betterment of humankind and not for its subservience. The essential difference between technique “then and now” is that it used to be complementary to our endeavors, and we were in partnership with the engineering. We have gradually moved away from this healthy synergy, to a point where we are not only becoming faceless numbers but are deconstructing the basic building blocks of our very being. The catchphrase with which this editorial started has snuck into every pocket of life. It is our responsibility as researchers and investigators to speak out and point to the alternatives. We need consumer advocates who have at heart the interests of consumers and everyday people. We need a balanced voice that says, “just because you can, it doesn't mean it is the right thing to do.” We need people marching in the streets if they do not agree with what is happening in their local, national, and international community of interest. And we need engineers to take upon themselves an increasing and commensurate responsibility, at both the building and ethical levels of their own work.

The paradox, more often than not, is that the “resistance is futile” reprise comes from the very few who allegedly represent the many. These are the multinationals and corporates acting as a single entity. They are not the everyday consumers, the school teachers, the elderly, or the disabled. But we have swallowed the adage and come to believe in the reprise and its inevitability. This is reminiscent of DreamRift's side-scrolling platform game Epic Mickey: Power of Illusion. But life is not a game, though we are increasingly led to believe that almost everything has now become a game, or is possessed at least of some entertainment value.

In this world nothing can be said to be “inevitable.” The only thing we can say for now is inevitable belongs, ironically, to another cliché: death and taxes. The question remains: why do researchers who believe that the trajectories mapped out by marketeers and engineers will, given the principle of exponential growth, invariably be realized (uberveillance or human-machine meshing, for example) continue to spend time and resources on the subject? The answer need not be intricate. It is because a large group of these researchers believe that ultimately, whatever the cost, individuals will still possess the freedom to decide to what extent they integrate themselves onto the electronic grid and into the “Borg.”

When I was 8 years of age, my older sister, who was 8 years my senior, was diagnosed with paranoid schizophrenia. As a result, my family spent quite a few years visiting hospitals and mental health facilities on a daily basis. It is painful to reflect on that period, as our whole world was rocked by this illness. My once vibrant, skilful, dynamic, energetic, extremely kind, and top-of-her-class sister was plagued by a disease that would see her attempt to take her own life on several occasions, battle with hearing voices, go into a state of catatonia for long periods of time, and suffer severe bouts of anxiety and depression.

The onset of my sister's schizophrenia was spontaneous, during what should have been the most carefree years of her life. We will never know what triggered her illness, but for whatever reason this “thing” landed in our household, we learned to come to terms with its impact. I grew up with an understanding that, in life, there are some things we can fix, and some things we cannot. There are some things we can explain, and some things we cannot. Sometimes medical science has the answers, and sometimes it does not. That does not mean I give up on the potential for a cure or therapy for various forms of mental illness, but I am more wary than most about silver-bullet solutions.

In the 30 years my sister has lived with schizophrenia there have been numerous incremental innovations that have been beneficial to some sufferers. First, there have been advancements in pharmacology and in the composition of antidepressants so that they are more effective. But pharmaceutical treatments have not helped everyone, especially those sufferers who do not take their medication on a regular basis. Many persons living with depression who come on and off antidepressants without seeking medical advice are at an increased risk of suicide.

Cognitive behavior therapy (CBT), an empirically-based psychotherapy, has also aided increasing numbers of patients to better cope with their condition. Yet CBT is not given the same media attention as the new range of dynamic neural stimulators, commonly dubbed “brain implants,” now on the market [1].

For sufferers who are diagnosed with major depressive disorder (MDD), and for whom antidepressants and CBT simply do not work, doctors have turned to the prospect of somatic therapies. These include: electroconvulsive therapy (ECT), repetitive transcranial magnetic stimulation (rTMS), vagus nerve stimulation (VNS), and deep brain stimulation (DBS). If an individual does not respond to ECT (and only fifty per cent do), they are said to have treatment-resistant depression (TRD) [2].

In plain language, ECT is when electricity is applied to the scalp, generally over a treatment period of 2-4 weeks with several sessions per week. rTMS treatment runs for 4-6 weeks at 5 sessions per week, and uses a fluctuating magnetic field from an electromagnetic coil placed outside the skull to induce an electrical current in the brain.

VNS and DBS are more intrusive procedures targeting specific parts of the brain [3]. In VNS, an electrode is wrapped around the left vagus nerve in the neck and stimulation occurs about every 5 minutes for about 30 seconds. The battery packs sit under the skin of the chest in both VNS and DBS, but in the DBS procedure, one or more leads are implanted in the brain, targeted through burr holes in the skull, and locked into place [2].

VNS and DBS were unavailable techniques when my sister first became ill, but I do recollect vividly the results of ECT upon her. After the treatments, we lost her well and truly into a dark space one cannot reach. She was placed on higher dosages of antidepressants for the weeks that followed, and it was apparent to us she was not only in mental anguish but clearly in physical distress as well. Doctors claimed clinically that she “did not respond to the treatment,” but never acknowledged that the ECT process might have caused her any short-term distress whatsoever. In fact, we were told: “There is no change in her condition. She continues to be as she was before the treatment.” That was debatable in my eyes. Even though I was just a kid, I observed that it took a good three months to get my sister back to where she was before the ECT treatment. But she was only one participant among many in clinical trials, and in no way do I generalize her outcomes to be the outcomes of all ECT patients.

VNS and DBS are again very different techniques, and while VNS is used as an adjunct therapy for major depression, DBS is mainly reserved for treating Parkinson's disease and has had only limited approval for combatting intractable obsessive compulsive disorder (OCD). However, what I gained from those childhood experiences is that human life is precious and experimentation can have some very adverse side effects without any direct benefits to the individual sufferer. Doctors need to be held accountable, caregivers and patients with MDD must be told clearly about the risks, and VNS patients must be monitored closely into the longer-term. I am alarmed at the lack of qualitative research being conducted across the spectrum of implantable devices in the health sector. And that is an area I intend to personally address in my own research in years to come.

To this day, I believe my sister was in no condition to consent to the treatment she received. At the time she intermittently thought I was Brooke Shields and that my siblings were other television personalities. She was delusional and completely unaware of herself. Prior to the trial my sister participated in, my parents had no real idea what ECT was, save for what they had heard anecdotally. As my sister's “guardians,” my parents did not understand how ECT would be administered and were not given the option to accompany her during the actual treatment. They were simply told that my sister would wear something on her head and have an electrical current travel all around it to hopefully “zap” her back to normal. They were not informed of what the risks might be to their beloved daughter, although they were clear it was all “experimental.” It was also emphasized that this “electro-shock treatment” was the only remaining route of exploration to help my sister get better. I remember their expectations being raised so high, only to be dashed after each treatment [4]. My parents had to rely on an interpreter, as my father did not speak English and my mother only broken English. When one was not available, my brother and sisters and I would do the translation.

In the end, when all other routes failed, my family turned to God for help. Alongside an excellent medical and health team (psychiatrist, social worker, general practitioner), and a loving home environment, it was faith that gave my family the will to go on facing everyday issues, as my sister slowly regained parts of herself to become functional again, such as her mobility and speech. As the saying goes “prayer works,” and while it might not make rational sense to believe in miracles, I remember witnessing these on at least a few occasions.

A few months ago, the cover of the February 2015 issue of IEEE Spectrum was graced with the title: “Hot-wiring the nervous system: implanted in the brain, smart-systems are defeating neurological disorders” (p. 28) [5]. As someone who has spent the greater part of their academic career studying surveillance, risk, privacy and security, trust, and control, I have long reckoned that if we can “defeat” neurological disorders using implantable devices, then we can also “construct” and “trigger” them willingly as well. But the point of my editorial is not to discuss the future of dynamic neural stimulators; we can debate that in another issue of T&S Magazine. Rather, my point is to try to generate discussion about some of the fundamental issues surrounding the socio-ethical implications of penetrating the brain with new technologies, especially those that are remotely triggerable [6].

While the early studies for VNS with respect to MDD look promising, we need to acknowledge we are still at the very beginning of our investigations. I am personally more circumspect about published figures that simply categorize subjects post implantation using minimal labels like “non-responders,” “responders,” and “achieved remission” [7]. Longitudinal data will give us a clearer picture of what is really happening. DBS, on the other hand, has been used to treat well over 75 000 persons, mostly suffering from movement disorders [2], but it is increasingly being piloted to treat OCD [8]. This is a call to the research community to publish more widely about some of the complications, side effects, and resultant social life changes that implantees (of all kinds) are faced with post-surgery.

I am not referring here to issues related to surgical implantation (e.g., symptomatic haemorrhage after electrode placement), or even device failure or hardware-related complications (of which I have great concerns that there will be severe hacking problems in the future). Rather, I am referring to the resultant effect of “artificially constructed” dynamic stimulation on the human brain and its impact on an individual. In short, these are the unintended consequences, ranging in scope from psychotic symptoms post stimulation (e.g., for epilepsy, or for patients presenting with auditory hallucinations for the first time), to modifications in sleep patterns, uncontrolled and accidental stimulation of other body functions [9], hypersexuality, hypomania [10], changes to heart and pulse rates, and much more.

Many implantees resort to social media to share their pre- and post-operative experiences. And while this is “off the record” self-reporting, clearly some of these discussions warrant further probing and inquiry. My hope is that the copious note-taking that occurs during pilots and clinical trials, specifically with respect to side effects, will be made more accessible in the form of peer-reviewed publication for doctors, engineers, government officials, standards organizations, regulatory approval bodies, and of course, the general public, so that we can learn more about the short-term and long-term effects of neural stimulation devices.

One patient, as a result of a particular procedure in a DBS pilot study described a sensation of feeling hot, flushed, fearful, and “panicky.” “He could feel palpitations in his chest, and when asked indicated he had an impending sense of doom. The feelings were coincident and continuous with the stimulator ‘on’ setting and they rapidly dissipated when switched ‘off'” [11]. Surely, this kind of evidence can be used to inform stakeholders towards what works and what does not, and the kinds of risks a patient may be exposed to if they opt-in, even if we know the same state will not be experienced by every patient given the complexity of the brain and body. In the more mature heart pacemaker industry, it is device manufacturers who tend to wish to hoard the actual physiological data being recorded by their devices [12], [13]; the brain implant industry will likely follow suit.

To conclude this editorial, at the very least, I would like to echo the sentiments of Fins et al., that deep brain stimulation is a “novel surgical procedure” that is “emerging,” and should presently be considered a last resort for people with neuropsychiatric disorders [14]. There needs to be some tempering of the hype surrounding the industry and we need to ensure that rigor is reintroduced back into trials to minimize patient risk. Exemptions like that granted by the U.S. Food and Drug Administration (FDA) on the grounds of a “humanitarian device” allow implant device manufacturers to run trials that are not meaningful because the size of the trial is inappropriate, lacking commensurate statistical power [14]. The outcomes from such trials cannot and should not be generalized.

I would go one step further, calling not only for adherence to more careful research requirements during clinical trials, but also urging the medical community in general to really think about the direction we are moving. If medical policies like these [15] exist, clearly stating that “there is insufficient evidence to support a conclusion concerning the health outcomes or benefits associated with [vagus nerve stimulation] … for depression” then we must introduce major reforms to the way that consent for the procedure is gained.

Between 1935 and 1960, thanks to a rush of media (and even academic coverage), lobotomies were praised for the possibilities they gave patients and their relatives [16]. Although I am not putting lobotomies on the same level as VNS and DBS, I am concerned about placing embedded devices at the site of the most delicate organ in the human body. If we can “switch on” certain functions through the brain, we can also “switch them off.”

It is clear to anyone studying emerging technologies that the future trajectory is composed of brain implants for medical and non-medical purposes. Soon, it won't be just people fighting MDD, OCD, epilepsy [17], [18], Parkinson's disease [19], or Tourette's Syndrome who will be asking for brain implants, but everyday people who might wish to rid themselves of memory disorders, aggression, obesity, or even headaches. There is also the potential for a whole range of amplified brain technologies that make you feel better – diagnostic devices that pick up abnormalities in physiological patterns “just-in-time,” and under-the-skin secure identification [20]. And while the current costs for brain implants to fight mental illness are not cheap, at some $25 000 USD each (including the end-to-end surgical procedure), the prices will ultimately fall [1]. Companies like Medtronic are talking about implanting everyone with a tiny cardiac monitor [21]; it won't take long for the same to be said about a 24×7 brain monitor, and other types of daily “swallowable” implants [22].

Fears related to embedded surveillance devices of any type may be informed by cultural, ethical, social, political, and religious concerns that must be considered during the patient care process [23]. Fully-fledged uberveillance, whether it is “surveillance for care” or “surveillance for control,” might well be big business in the future [24], but for now academicians and funding bodies should be less interested in hype and more interested in hope.

“The Watch is here,” touts Apple's slogan for its wearable computer, implying that the one and only time-piece that really matters has arrived. So much for the Rolex Cosmograph and Seiko Astron when you can buy a stylish digital Apple Watch Sport, or even an Apple Watch Edition crafted with 18-karat gold.

Introduction

Point of view has its foundations in film. It usually depicts a scene through the eyes of a character. Body-worn video-recording technologies now mean that a wearer can shoot film from a first-person perspective of another subject or object in his or her immediate field of view (FOV). The term sousveillance has been defined by Steve Mann to denote a recording done from a portable device such as a head-mounted display (HMD) unit in which the wearer is a participant in the activity. Some people call it inverse surveillance because it is the opposite of a camera that is wall mounted and fixed.

IMAGE COURTESY OF WIKIMEDIA COMMONS/MICHAELWOODCOCK

During the initial rollout of Google Glass, explorers realized that recording other people with an optical HMD unit was not perceived as an acceptable practice despite the fact that the recording was taking place in a public space. Google's blunder was to consider that the device, worn by 8,000 individuals, would go unnoticed, like shopping mall closed-circuit television (CCTV). Instead, what transpired was a mixed reaction by the public—some nonusers were curious and even thrilled at the possibilities claimed by the wearers of Google Glass, while some wearers were refused entry to premises, fined, verbally abused, or even physically assaulted by others in the FOV.

Some citizens and consumers have claimed that law enforcement (if approved through the use of a warrant process) and shop owners have every right to surveil a given locale, dependent on the context of the situation. Surveilling a suspect who may have committed a violent crime or using CCTV as an antitheft mechanism is now commonly perceived as acceptable, but having a camera in your line of sight record you—even incidentally—as you mind your own business can be disturbing for even the most tolerant of people.

Wearers of these prototypes, or even fully fledged commercial products like the Autographer, claim that they record everything around them as part of a need to lifelog or quantify themselves for reflection. Technology like the Narrative Clip may not capture audio or video, but even still shots are enough to reveal someone else's whereabouts, especially if they are innocently posted on Flickr, Instagram, or YouTube. Many of these photographs also have embedded location and time-stamp data. You might not have malicious intent by showing off in front of a landmark, but innocent bystanders captured in the photo could find themselves in a predicament given that the context may be entirely misleading.
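The embedded location data mentioned above is machine-readable: EXIF headers in a JPEG typically store GPS coordinates as degree/minute/second values plus a hemisphere reference. As a minimal illustrative sketch (the function name and sample coordinates are hypothetical, not drawn from the text), this is roughly how such a tag is converted into the decimal coordinates a photo-sharing site can pin on a map:

```python
def exif_gps_to_decimal(dms, ref):
    """Convert an EXIF-style GPS value, e.g. ((48, 51, 29.6), 'N'),
    into signed decimal degrees as used by mapping services."""
    degrees, minutes, seconds = dms
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# A single geotagged snapshot localizes everyone captured in the frame.
lat = exif_gps_to_decimal((48, 51, 29.6), "N")
lon = exif_gps_to_decimal((2, 17, 40.2), "E")
print(round(lat, 5), round(lon, 5))
```

Combined with the time-stamp tag that most cameras also write, a casually uploaded image can place a bystander at a precise location and moment, which is exactly the privacy exposure described above.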

Privacy, Security, and Trust

Privacy experts claim that while we once might have been concerned or felt uncomfortable with CCTV being as pervasive as it is today, we are shifting from a limited number of big brothers to ubiquitous little brothers and toward wearable computing. Fueled by social media and instant fame, recording the moment can make you famous as a citizen journalist at the expense of your neighbor.

The fallacy of security is that more cameras do not necessarily mean a safer society. In fact, statistics, depending on how they are presented, may be misleading about reductions in crime in given hotspots. Crime displacement, for instance, means that criminals do not just stop committing crime (e.g., selling drugs) because someone installs a bunch of cameras on a busy public route. On the contrary, crime has been shown to be redistributed or relocated to another proximate geographic location. In a study conducted in 2005 for the United Kingdom's Home Office by Martin Gill of the University of Leicester, only one area of a total of 14 studied saw a drop in the number of incidents that could be attributed to CCTV. The problem was with using the existing CCTV systems to “good effect” [1].

Questions of trust seem to be the biggest factor against wearable devices that film other people who have not granted their consent to be recorded. Let's face it: we all know people who do not like to be photographed for reasons we don't quite understand, but it is their right to say, “No, leave me alone.” Others have no trouble being recorded by someone they know, so long as they know they are being recorded prior to the record button being pushed. And still others show utter indifference, claiming that there is no longer anything personal out in the open.

Who's watching whom? Alexander Hayes takes a picture of wearable computer pioneer Steve Mann in Toronto, Canada, during the Veillance.me Conference (IEEE International Symposium on Technology and Society 2013), while Mann uses his EyeTap device and high-definition camera to record a surveillance camera. Is this sousveillance squared? (Photo courtesy of Alexander Hayes.)

Often, the argument is posed that anyone can watch anyone else walk down a street. These individuals fail in their assessment, however—watching someone cross the road is not the same as recording them cross the road, whether by design or by sheer coincidence. Handing out requests for deletion every time someone asks whether they've been captured on camera by another is not good enough. Allowing people to opt out “after the fact” is not consent based and violates fundamental human rights such as the control individuals might have over their own image and the freedom to go about their life as they please.


Laws, Regulations, and Policies

At the present time, laws and regulations pertaining to surveillance and listening devices, privacy, telecommunications, crimes, and even workplace relations require amendments to keep pace with advancements in HMDs and even implantable sensors [2]. The police need to be viewed as enforcing the laws that they are there to uphold, not donning the very devices they claim to be illegal. Policies in campus settings, such as universities, also need to address the seeming imbalance in what is and is not possible. The commoditization of such devices will only lead to even greater public interest issues coming to the fore. The laws are clearly outdated, and there is controversy over how to overcome the legal implications of emerging technologies. Creating new laws for each new device will lead to an endless drafting of legislation, which is not practicable, and claiming that existing laws can respond to new problems is unrealistic, as users will seek to get around the law via loopholes in a patchwork of statutes.

Cameras create a power imbalance. At first, only a few people had mobile phones with cameras; now they are everywhere. Then, only some people carried body-worn video recorders for extreme sports; now, increasingly, the use of a GoPro, Looxcie, or Taser Axon glasses, while still in its nascent stages, has been met with some acceptance, dependent on the context (e.g., for business-centric applications that free the hands in maintenance). Photoborgs might be hitting back at all the cameras on the walls that are recording 24×7, but that does not cancel out the fact that the photoborgs themselves are doing exactly what they claim a fixed, wall-mounted camera is doing to them. And beating “them” at their own game has consequences.

The Überveillance Trajectory

One has to ponder: where to next? Might we well be arguing that we are nearing the point of total surveillance, as everyone begins to record everything around them for reasons of insurance protection, liability, and complaint handling “just in case,” like the in-car black-box recorder that clears you of wrongdoing in an accident? And how gullible might we become, believing that images and video footage do not lie, even though a new breed of hackers is destined to manipulate and tamper with reality to their own ends?

Will the new frontier be surveillance of the heart and mind? The überveillance trajectory refers to the ultimate potentiality for embedded surveillance devices like swallowable pills with onboard sensors, tags, and transponder IDs placed in the subdermal layer of the skin, and even diagnostic image sensors that claim to prevent disease by watching innards or watching outward via the translucent dermal epidermal junction [3]. Just look at the spectacle and aura of the November 2014 “chipping” of Singularity University's cofounder Peter Diamandis if you still think this is conspiracy theory [4]! No folks, it's really happening. This event was followed by the chipping party in Sweden of eight individuals [5]. Let us hope this kind of thing doesn't catch on too widely because we stand to lose our freedom, and that very element that separates man from machine.

In 2009, M.G. Michael and I presented the plenary article “Teaching Ethics in Wearable Computing: The Social Implications of the New ‘Veillance’” [1]. It was the first time that the terms surveillance, dataveillance, sousveillance, and überveillance were considered together at a public gathering [2]. We were pondering the intensification of a state of überveillance through increasingly pervasive technologies that can provide details from a big-picture satellite view right down to the smallest-common-denominator embedded-sensor view. Veiller means “to watch,” coming from the Latin vigilare, stemming from vigil, which means to be “watchful.” The prefixes sur, data, sous, and über alter the “watching” perspective and meaning. What does it mean to be watched by a closed-circuit television (CCTV) camera, to watch another, to watch oneself? Roger Clarke [3], Steve Mann [4], and M.G. Michael [5] have defined three “types” of watching in the sociotech literature.

Wearable and embedded cameras worn by any citizen carry significant and deep personal and societal implications. A photoborg is one who mounts a camera onto any aspect of the body to record the space around him- or herself [6]. Photoborgs may feel entirely free, masters of their own destiny; they may even feel safe that their point of view is being noted for prospective reuse. Indeed, the power that photoborgs have is clear when they put on the camera. It can be even more authoritative than the traditional CCTV camera gazing overhead in an unrestricted manner, given that sousveillance usually happens at ground level. Photoborgs may be recording for their own lifelog but will inevitably capture other people in their field of view, and unless these fellow citizens also become photoborgs themselves, there is a power differential. Sousveillance carries with it huge socioethical, environmental, economic, political, and spiritual overtones.

The narrative that informs sousveillance is more relevant today than ever before due to the proliferation of new media. But where sousveillance grants citizens the ability to combat the powerful using their own evidentiary mechanism, it also grants other citizens the ability to put on the guise of the powerful. The pervasiveness of the camera that sees and hears everything can only be reconciled if we know the lifeworld of the wearer, the context of the event being captured, and how the data will be used by the stakeholder in command. The evidence emanating from cameras is subject to obvious limitations, such as the potential for the impairment of the data through loss, manipulation, or misrepresentation [7]. Sousveillance happens through the gaze of the one wearing the camera, just like a first-person shooter in a video game.

In 2003, WIRED published an article written by N. Shachtman [8] on the potential to lifelog everything about everyone. He wrote:

The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person's life, index all the information and make it searchable… The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read… All of this—and more—would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual's health… This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual's life.”

It simply goes to show how any discovery can be tailored toward any end. Lifelogging is meant to sustain the power of the individual through reflection and learning, to enable growth, maturity, and development; here, instead, it has been hijacked by the very stakeholder it was created to gain protection from.

Sousveillance also drags into the equation the innocent bystander who is going about his or her everyday business and just wishes to be left alone. When we asked wearable 2.0 pioneer Steve Mann in 2009 what one should do if bystanders of a recording in a public space questioned why they were being recorded without their explicit permission, he pointed us to his request for deletion Web page [9]. This is admittedly only a very small part of the solution and, for the most part, untenable. One just needs to view a few minutes of the Surveillance Camera Man Channel at http://www.liveleak.com/c/surveillancecameraman to understand that people generally do not wish to be filmed in someone else's field of view. Some key questions include:

In what context has the footage been taken?

How will it be used?

To whom will the footage belong?

How will the footage taken be validated and stored?

In “Digital Wearability Scenarios,” Deniz Gokyer and I provide plausible scenarios of the use of wearable cameras in a closed campus setting. Although the scenarios are not based on primary sources of evidence, they do present conflicting perspectives on the pros and cons of wearables. As companies engage in ever shorter market trialing of their products, the scenarios demonstrate what can go wrong with an approach that says “Let's unleash the product now and worry about repercussions later; they'll iron themselves out eventually—our job is solely to worry about engineering.” The pitfalls of such an approach are presented in my article “Sousveillance,” which also appears in this issue and demonstrates how emerging technologies have direct social implications. One of the biggest problems with introducing new products without commensurate market testing is the unexpected and asymmetric consequences that ensue: my privacy may be breached by someone wearing a camera, and even if no one else is affected by the recorded footage, my life is adversely affected. Laws and organizational policies need to come up to speed quickly as technologies advance.

The same can be said about the use and application of radio-frequency identification (RFID). Katherine Albrecht and Liz McIntyre remind us of the way RFID was introduced into the retail market just after the turn of the century in their article “Protect Yourself from RFID.” On the one hand, tracking items for supply-chain management can help with loss prevention on the operations side of the business. On the other hand, using RFID to surreptitiously learn more about customer behavior and habits is a breach of privacy. In his article “Protecting Yourself with RFID,” William Lumpkins does not focus on the “spychips” phenomenon but rather the key benefits of RFID to industry.

Sally A. Applin and Michael D. Fischer's article, “Toward a Multiuser Social-Augmented Reality Experience,” is next in the special section, and it describes the pivotal role that social media, geolocation information, and augmented reality play in their groundbreaking concept of polysocial reality (PoSR), a framework developed for representing complex synchronous and asynchronous messaging contexts: “PoSR describes the overlapping network transaction spaces that people traverse synchronously and asynchronously with others to maintain and use social relationships via various apps, mobile services, sensors, platforms, technologies, and conversation spaces.”

For M.G. Michael and his coauthors, the discussion on veillance extends from item-based RFID tracking of things to embedded sensors on or in people. Michael et al. point to the implications of a fully fledged Web of Things and People, painting a rather dystopian picture of the changes that may befall society at large as continuous behavioral tracking takes root in the big data realm. New technologies have social implications, and these are spelled out in “Überveillance and the Web of Things and People.”

The pitfalls of a point-of-view recording—no matter how many cameras and sensors are recording and no matter from how many perspectives and stakeholders—are the limitations of video evidence. What constitutes a whole incident? How can we denote past provocation or historical data not available during a given scene? How can we ensure that data on a mobile transmission have not been intercepted? How can we ensure data validation? We might well be on a road similar to that of DNA as admissible evidence in a court of law in terms of “eyewitness” recording of events. The key question to ask here is whether or not we can ever achieve “omniscience” through the use of seemingly “omnipresent” new media. Sensors now come embedded in most devices, big or small, and the discrete data collected tell us much about the spatiotemporal patterns of that which is being monitored. Yet, despite all the “big data,” we still struggle to make sense of what we are watching, and with context missing (no matter how good computers are at computing), the systems remain fallible.

References

1. K. Michael and M. G. Michael, “Teaching ICT ethics using wearable computing: The social implications of the new ‘veillance’,” Proc. Australian Point of View Technologies Conf. (AUPOV09), June 2009.

Back in 1997, Katina would use International Telecommunication Union (ITU) estimates of incoming and outgoing voice and data teletraffic tables for her work in strategic network engineering. She was particularly struck when viewing these figures as global thematic maps: thick arrows would always flow in and out of developed nations, while significantly thinner arrows flowed from developing nations, despite the difference in population counts [1]. That image has stuck with her as a depiction of how the world is, one no doubt shaped by historical events. The effort required to bring those arrows into equilibrium at a country level seems near impossible, given the digital divide.

As initiatives like Project Loon attempt to grant all peoples Internet access [2], there are still many places on Earth with limited or no connectivity whatsoever. Some of these places reject such services, believing they will bring even greater harm, such as deforestation or the destabilization of culture and religious practice. And yet developed nations maintain that they are in fact educating, providing, and enabling longer-term economic and social sustainability through their technological solutions. For example, Jason has recently returned from the eastern part of the Maharashtra state of India, where the use of technology in remote villages such as Jamnya appears at first glance to be at direct odds with the subsistence way of traditional village life. On second glance, however, the benefits of technology offer endless possibilities, from education to weather-station assistance with crop plantings. See also Khanjan's projects in Africa [3].

But what about long-term stability in developing nations? For example, as we strive to mainstream alternative energy sources and make them accessible in resource-poor communities [4], how do we think beyond the technological and economic dimensions and ensure respect for social, political, and environmental imperatives? Computers, including the tiny but powerful ones in cell phones, can be game-changers, but they will not save lives directly. They cannot be eaten by a starving population, and they need to be serviced and maintained. Jason, along with Katina's husband Michael, visited and taught Karen refugee students in camps and remote villages on the Thai-Burma border [5]. They quickly realized that computers work only if they are connected to electricity, and someone has to pay the bill. Computers continue to work only if no parts go missing and they are fully enclosed within a shelter that protects them from damage. They can be operated only by people who have received some training and where there is some connectivity; it is hopeless to try to share files or use remote applications if bandwidth is lower than 56 kbps. Martin Murillo et al.'s article in this special section emphasizes that leading humanitarians have identified data communications for remote health offices as one of the top three tools that will contribute to the fulfillment of the Millennium Development Goals (MDGs).

Today, as many as 80% of the world's citizens reside in areas with mobile phone coverage [6]. Increasing access to computers and cellular devices has allowed telemedicine systems to flourish in developing countries. But these devices can only really work if technologies are integrated into local communities through bottom-up socialization practices. They can work if they are embraced by locals and harnessed for good by local companies, NGOs, elders, and other stakeholders. While the number of mHealth and telemedicine systems is growing, the benefits of these technologies are yet to be fully realized. Many mHealth ventures in resource-constrained environments suffer from “pilotitis,” an inability to expand beyond the initial pilot and ultimately become sustainable ventures. Khanjan has led the design and execution of a cash-positive telemedicine venture in central Kenya that now has seven full-time employees. His students recently conducted a study of the failure modes that plague the growth of mHealth pilots in the developing world. This study of over 50 projects in Africa and Asia uncovered a wide range of barriers, including financial challenges, business structures, technological limitations, and cultural misalignments. Once again, some of the greatest challenges were related to bottom-up socialization, the melding of Western and indigenous knowledge, and the integration of new technologies, approaches, and business models into traditional ways of life. Khanjan has captured the nuts and bolts of “how things work,” and why projects fail, in a series of short stories called The Kochia Chronicles: Systemic Challenges and the Foundations of Social Innovation. These narratives take readers headlong into the lives of people in a quintessential African village as they usher in an era of design, innovation, and entrepreneurship.

It is difficult not to be cynical about initiatives such as Zuckerberg's hope to wire the world [7]. These technological initiatives sound good, but with computing will also come social implications, and not all of them will be positive.

But back now to getting those inflows and outflows to look more alike: newly industrialized countries have experienced growth since the advent of the mobile phone (e.g., India), broadband (e.g., Singapore), and manufacturing machinery (e.g., Thailand). The bottom line is that, to overcome the endemic failures that inhibit the sustainability and scalability of well-meaning projects, a truly systemic and participatory approach is essential. Rather than dwelling on the problems caused by, or that might result from, the digital divide, let us make digital inclusion our primary aim. Digital inclusion is not just about offering equity but about making substantial, self-determined improvements to the lives and livelihoods of people in resource-poor settings. The digital divide will never be entirely bridged, but inclusion can be propelled through social innovation and concerted time and effort, supported by multilateral funding from local and global stakeholders who not only understand the need for change but are passionate about human need and its interdependence with global peace and sustainability.