In a world where companies and governments can, and increasingly do, track vast amounts of personal information about our habits, preferences, behavior, communications and even our thoughts, where will we draw the line? People evidently enjoy the benefits of social media and big data—Amazon suggesting “you might like”; your phone flagging the proximity of a potential date pre-vetted by your friends; monitoring systems that send alerts when an elderly parent isn’t taking medication—but sometime in the last year we seem to have crossed the “creepy line” that Google’s CEO famously warned against. Now governments are struggling to regulate the use of personal information, companies are weathering backlash for customer surveillance in-store and online, and individuals are exploring a suite of options for covering their digital tracks.

As pointed out in the previous essay in this report, digital data is one of the few resources increased, rather than depleted, by human activity, and plenty of companies are setting out to exploit it. Some of this data is generated by online activities: e-mail, use of social media, shopping and Web browsing are all fair game. In the practice known as tweet-mining, some companies harvest, analyze and sell old tweets to others that want to gauge reactions to their products and services.

The new Graph Search engine on Facebook makes it easier than ever for marketers, or hackers, to assemble detailed profiles of users. We’re even developing technologies that can reach through the screen to gather data on the physical you, using the monitor’s camera to track what you are looking at on the screen and how you react to advertisements.

Just carrying a cell phone makes you a target. Retailers can use smartphone signals to capture unique device identifiers and track users’ locations. A firm hired to install bomb-proof trash bins in London prior to the Olympic Games embedded tracking technology that used phone signals to follow people through the streets.

Being off-line, even sans phone, is no protection. The physical world is saturated with sensors. How many video cameras do you pass in the course of a day? Video is data, and enormous amounts of information can be gleaned from a feed. Stores use facial recognition software tied to video feeds to analyze the demographic profile of shoppers—age, gender and even race—to tweak everything from what merchandise they display to what kind of music they play to appeal to their typical client at a given time of day. Some high-end stores use facial recognition to spot VIPs so staff can be alerted via iPad or smartphone and provided with data on the celeb’s preferences or buying history.

The abilities of these systems border on telepathy, analyzing facial expressions to infer feelings and moods. One retail system specializes in deducing shoppers’ “emotional engagement” with products from video streams in-store or onscreen. The Department of Homeland Security is testing Future Attribute Screening Technology, a “pre-crime” detection program whose sensors secretly collect video, audio, cardiovascular signals, pheromones, electrodermal activity and respiration, and whose algorithms flag suspicious individuals (hopefully distinguishing between the elevated heart rate of a merely nervous traveler and cues denoting a true “unknown terrorist”).

Museums are adapting surveillance technology to their own purposes. They are using the feed from security cameras to create safety perimeters around objects on display. Some are monitoring (and responding to) real-time tweets and location data. (See, for example, how the Tate Modern used Twitter for “sentiment analysis” of its exhibit “The Tanks: Art in Action.”) Indoor GPS systems give museums the ability to tell—to within 3 to 10 feet, depending on the system being used—where a visitor is in the building, and museums are using this ability in conjunction with apps to push location-appropriate content to visitors, tailored to the exhibit they are in. Researchers have even played with measuring the physiological reactions of visitors as they move through an exhibit.
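At its core, the location-aware content push described above amounts to mapping a visitor’s indoor coordinates to a gallery zone and serving that zone’s content. Here is a minimal sketch in Python; the zone names, coordinates and content are invented for illustration, and a real system would resolve the visitor’s position from WiFi or Bluetooth beacons to the 3-to-10-foot accuracy noted above:

```python
# Hypothetical gallery map: zone name -> ((x_min, y_min), (x_max, y_max))
# bounding box in meters. All names and values are made up for this sketch.
GALLERY_ZONES = {
    "impressionism": ((0.0, 0.0), (10.0, 8.0)),
    "modern_sculpture": ((10.0, 0.0), (20.0, 8.0)),
}

# Content to push when a visitor enters each zone (also illustrative).
ZONE_CONTENT = {
    "impressionism": "Audio tour: light and color in the 1870s",
    "modern_sculpture": "Video: the artist's studio process",
}

def locate_zone(x, y):
    """Return the zone containing the point, or None if outside all zones."""
    for name, ((x0, y0), (x1, y1)) in GALLERY_ZONES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def content_for_position(x, y):
    """Pick the content to push for the visitor's current position, if any."""
    return ZONE_CONTENT.get(locate_zone(x, y))

print(content_for_position(3.5, 4.0))  # a point inside the "impressionism" zone
```

In practice the positioning layer would also need to smooth noisy position fixes and debounce zone transitions, so a visitor lingering at a zone boundary isn’t bombarded with alternating content.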

Concerns about privacy have a long history; ever more sophisticated surveillance technologies are now intensifying them. Tidying up your legacy is no longer as simple as burning a pile of letters. People are becoming more conscious and careful about what data they share, and with whom. One recent Pew study reports that over half of app users have uninstalled or declined to install an app due to concerns about personal information, and 19 percent turn off the location-tracking feature on their cell phone. Another Pew study found that 68 percent of the public feels current laws are inadequate to protect people’s privacy online, and that half of Internet users are concerned about the amount of personal information about them that is online.

In particular, people are protesting the collection of data they have not voluntarily provided. Many major retailers now use indoor location tracking software that ties together data from surveillance cameras, sensors and WiFi. Shoppers aren’t always aware this is happening, or happy when they find out. When Nordstrom, in an effort to be transparent in its operations, disclosed how it was monitoring customers, it provoked a “firestorm” of criticism and had to stop. Google decided not to install facial recognition software on its wearable heads-up display, Google Glass, due to concerns expressed by the public and a congressional committee that this feature could be used to call up personal data on anyone a Glass user encounters. With or without facial recognition, the 5 Point Café in Seattle made a point of banning Google Glass before it was even commercially available, and a related establishment actually ejected an early adopter of Glass later in the year.

“Museums have always been sites of intense surveillance—security guards, metal detectors, bag searches and lots of cameras. Now that surveillance is being extended to visitors’ smartphones, social media posts and even their facial expressions, not to protect the collections, but to better understand visitor behaviors.”—Eric Hintz, Historian, Lemelson Center for the Study of Invention and Innovation, National Museum of American History, Smithsonian Institution

In 2013 these privacy concerns came to a head. The leaks by former NSA contractor Edward Snowden of classified security documents showed the massive extent of U.S. government surveillance, and his claim that Google, Facebook, Apple and Microsoft were complicit shook users’ faith in those companies. While the Boston Marathon bombings showed the power of surveillance plus social media to identify perpetrators, they also demonstrated the power of those forces to stigmatize the innocent. Surveillance drones have become such a reflexively scary meme that when trusted vendor Amazon announced it might start delivering packages via drone, it unleashed a torrent of concern.

Not content to wait for legal protection, some people are inventing creative ways to protect themselves from overbearing surveillance. Artist Adam Harvey has introduced “Stealth Wear” clothing that can outfox drones by countering thermal imaging, an “anti-paparazzi clutch” and a stylish privacy case that blocks the signal of mobile phones. Harvey has also come up with makeup patterns and hairstyles that can defeat facial recognition software. Other artists are raising awareness about and provoking debate on privacy issues by creating “surveillance art” that appropriates public data and images from the Web or uses covert tactics to collect data from unsuspecting subjects.

What This Means for Society

Society is struggling to find appropriate boundaries and regulations to rein in digital surveillance. The city of London, for example, made the Renew ad firm yank those WiFi tracking trash bins from the streets, even though it wasn’t clear they were violating any specific regulations. On a broader scale, the European Commission is struggling with how to protect the data of EU citizens, including trying to extend their control to foreign companies that process personal data of EU citizens. In the U.S. the Obama administration has proposed a Consumer Privacy Bill of Rights to give users more control over how their information is handled. Yet even if the government or corporations promise not to collect personal identifying information or to strip it from the data they do collect, data analytics are capable of inferring personal data—creating data that isn’t protected by current measures.

There is a growing awareness that the issue at stake isn’t just privacy, it is also control of personal data—the right to know what data is being collected about you, to set boundaries on how others can use it and to access this data for your own use. At the corporate level, a few businesses are beginning to voluntarily give users access to data they generate through the company’s services. Arguing that data is a form of personal property, some are calling for a Digital Consumer Bill of Rights that recognizes and protects the value of that property, establishing, for example, that people deserve compensation if a company loses or misuses personal data it has “borrowed.”

Children traditionally have fewer privacy rights when it comes to masking their activities from parents, teachers, and other guardians and protectors. The past couple of decades have shortened the leash even further, due to increased concerns about safety and the rise of the helicopter parent. Now parents are deploying surveillance tech as well, using watches and other miniature sensing devices to keep track of their kids. Some have wondered if this may culminate in microchipping children as the ultimate safeguard. Would this be the first step towards an Orwellian society in which everyone is chipped and monitored (for their own good, of course)?

Just as the rise of the Internet gave rise to calls for digital literacy, concerns over surveillance and data mining have highlighted the need for privacy literacy. This may range from promulgating simple rules that eventually will seem like common sense (but currently are far from obvious to many people) to creating privacy education programs for children.

What This Means for Museums

Emerging surveillance technologies hold enormous promise for evaluating and fine-tuning what museums do, and for meeting the rising demand for personalized experiences. Some museums are already installing pervasive free WiFi systems that support the use of indoor GPS and content delivery for visitors. These systems can also be harnessed to track visitors, just as retail stores are doing. In the near future, museums may also have the capability of monitoring how much of a label visitors read, how long they look at a painting or their emotional reaction to an object. This would provide the ultimate in visitor feedback and offer the opportunity to feed visitors content personalized to their actual behavior. Taking a lesson from Nordstrom, however, museums must balance the benefits of using these technologies with the potential backlash.

Museums enjoy high levels of trust, but that may mean people also expect them to maintain higher standards than commercial companies or the government. Museums need to determine how to apply the standards of transparency and accountability which govern their institutional operations to the collection and use of personal data, while striving to communicate more clearly than the average social media privacy policy.

As artists mine digital data and social media for content, or deploy their own surveillance devices, museums need to consider the privacy concerns of people whose posts, images and information may have been co-opted without their permission. The legal issues surrounding use of online data and images appropriated from social media are still being worked out, but even where the legal issues are murky, such concerns may have ethical standing—presenting a whole new area of sensitivity for museums to navigate.

Adam Harvey’s “Anti-Drone” scarf is made of a metallicized fabric that protects against thermal imaging.

Museum Examples

In August 2013 the NEW MUSEUM debuted the Privacy Gift Shop, a pop-up store featuring “stealth wear” by artist Adam Harvey and fashion designer Johanna Bloomfield. The project was designed to promote conversation about domestic and international surveillance and threats to individual privacy.

Stealth wear in the “Privacy Gift Shop” at the New Museum. The OFF Pocket phone case blocks all incoming and outgoing phone signals. Photos courtesy Adam Harvey.

The JEWISH MUSEUM decided to remove photographs from the exhibit “Composed: Identity, Politics, Sex” after receiving complaints from men who appeared in the photos. The artist, Marc Adelman, had appropriated the images from a gay Internet dating site. The museum positioned its decision as a response to “complex issues of privacy, privacy expectations regarding photos made available on social media, personal safety, and the consequences of image appropriation in the digital age,” but Adelman took issue with the decision, saying he felt that the work itself was an appropriate way to explore those very concerns. A number of recent museum exhibits have explored the history of surveillance and its appropriation by artists.

The NORTHERN ILLINOIS UNIVERSITY ART MUSEUM presented “On Watching and Being Seen” in fall 2013. The exhibit and accompanying film series explored the impact of social media and surveillance technology on voyeurism and exhibitionism. Works included dot paintings by Houston artist William Betts derived from public surveillance camera footage, and Chicago-based artist Kathy Halper’s embroidered drawings of intimate Facebook posts by young adults. In 2010 the TATE MODERN originated “Exposed: Voyeurism, Surveillance & the Camera”, exploring the history of covert photography as well as current issues related to individual rights versus security in an age of terrorism. The exhibit subsequently went on tour to the San Francisco Museum of Modern Art and the Walker Art Center in Minneapolis.

Museums Might Want to…

Review internal policies and procedures about data collection to ensure that any personal data they hold is secure and that appropriate disclosures have been made about what is being collected and how it will be used. Consider creating data protection statements for subscriptions to e-newsletters, online interactions and apps. Consider sharing with users the data that the museum collects about them, in a way that is useful to them. Recognize that the data people provide the museum is valuable, and give them something of value in return.

Consider the privacy concerns of employees and staff. HR staff are increasingly using Internet searches to vet prospective employees. In making policies about what is, and is not, fair game, museums should consider whether such New Age background checks disadvantage candidates who are not savvy about tidying their digital footprints, which might further narrow the diversity of museums’ already homogenous applicant pool. Insider theft is a significant source of risk to museums, and reasonable monitoring of staff is prudent, but what constitutes reasonable? Museum policies about background checks, hiring and monitoring of staff’s physical and online activities need to adapt to these new realities, while keeping pace with employees’ expectations and concerns.

Devote time, in their organizational planning, to envisioning the future of surveillance technology and privacy concerns five or 10 years from now, and factor this into their decision making. This will help create a technological, physical and policy infrastructure that can adapt to the rapid changes in these fields.

Further Reading

In November 2013, the National Museum of American History presented a day-long symposium on “Inventing the Surveillance Society.” Content is available via the archived webcast, blog posts, podcast (with designer Adam Harvey) and Storify.

Dave Eggers, The Circle (Knopf, 2013). In this work of dystopian futurist fiction, Eggers explores the total control exerted by a giant tech company over its employees and society as a whole. In an economy based on strip mining personal data, the Circle’s motto is “Privacy is theft, Secrets are lies.”

Seeta Peña Gangadharan, Joining the Surveillance Society? New Internet Users in an Age of Tracking (New America Foundation, 2013, PDF, 18 pp.). An in-depth look at surveillance and privacy problems faced by individuals who turn to digital literacy organizations for training and Internet access.

Wolfgang Sofsky, Privacy: A Manifesto (Princeton University Press, 2008). An exploration of the history and status of privacy in society, and a deconstruction of the social, political and technological forces eroding privacy today. First chapter available free online as HTML or PDF download.

Daniel J. Solove, numerous publications on surveillance and privacy from a legal perspective. Solove is a professor of law at George Washington University Law School in Washington, DC.