For scientific purposes, sharing the raw data (in addition to any interesting conclusions) is the way to go. In sensing situations that raise privacy concerns, which may not apply to hydrologic data, an open source design process might involve not sharing all the raw data. Figuring out which cases are which will be a challenge! Thanks for the pointer, Russ. —Chris Peterson

A new book Nanotechnology for Chemical and Biological Defense, ed. Margaret Kosal (Springer, 2009), includes sensor scenarios for nanotech-based defense against chemical and biological attacks. As is usual with scenario planning, multiple versions are presented, in this case reaching out to the year 2030. Here’s one from the “Radical Game Changers” scenario:

A terrorist organization releases a stealth nanoparticle-encapsulated biochemical agent at eight separate airports outside of the continental US. The initial dissemination of the novel agent is undetected. Passive networks of sensors at two US points of entry, however, recognize an increase in the average elevated temperature of passengers at security checkpoints. Additional sensors show elevated levels of liver enzymes in airport waste streams. Mobile response laboratories, in coordination with National Guard Civil Support teams, are dispatched and identify the causal agent. Intensive forensics reveals that the nanoparticles are engineered to aerosolize easily and then accumulate in the human liver where they slowly release the agent. Countermeasures are administered within 12 hours. In the world of Radical Game Changers, such highly-evolved technologies require equally-evolved detection schemes.

Whew, a scary scenario indeed. Though the defense succeeds in this scenario, it’s clear that the world is a very dangerous place in this vision. One goal for Open Source Sensing would be to head off such scenarios entirely. Meanwhile, take a look at the book for both other long-term scenarios and much nearer-term issues — you can search inside the book at Amazon.com. More on this topic over at Foresight’s main blog Nanodot. —Chris Peterson

Simson Garfinkel gave a talk a while back that examined the “Code of Fair Information Practices”, developed originally by a U.S. government task force and described as follows:

• There must be no personal data record-keeping systems whose very existence is secret.
• There must be a way for a person to find out what information about the person is in a record and how it is used.
• There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.
• There must be a way for a person to correct or amend a record of identifiable information about the person.
• Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.

Is this a useful model for how sensing data should be handled? It certainly is not being followed now. We do need to look at this list and ask whether it infringes on freedom of speech, though — see the third bullet above, for example. Sticky issues! —Chris Peterson

Not everyone realizes that “electronic” surveillance can include not just what we think of as electronic information (email, etc.) but physical data as well. In an EFF article on the UK’s half-million intercepts of communications data in 2008 — intercepts subject to no judicial review — this is explained:

These orders can reveal lists of websites visited, email headers, name and address lookups, and, perhaps most controversially, the real-time location of a particular mobile telephone.

So your cell phone is continually reporting your location, which in the UK sounds like pretty easy info for authorities to get. This is a lousy idea from a civil liberties perspective, to put it mildly. For those of you who trust the authorities in your own country, think of the ones elsewhere that you don’t trust: they could have this technology too. (Credit: Mark Finnern) —Chris Peterson

An ITU paper spells out the main reason to care who gets sensing data about individuals:

From a political standpoint privacy is generally considered to be an indispensable ingredient for democratic societies. This is because it is seen to foster the plurality of ideas and critical debate necessary in such societies…

• Privacy is also a regulating agent in the sense that it can be used to balance and check the power of those capable of collecting data…

Lessig’s list of reasons for protecting privacy belongs to what Colin Bennett and Charles Raab have called the ‘privacy paradigm’—a set of assumptions based on more fundamental political ideas: ‘The modern claim to privacy … rests on the pervasive assumption of a civil society comprised of relatively autonomous individuals who need a modicum of privacy in order to be able to fulfil the various roles of the citizen in a liberal democratic state.’

So the main reason is to protect our political freedom. This is why I hope to find an alternative to the word ‘privacy’ in our discussions. While a useful word, it has connotations of guilt or shame, which are inappropriate in this discussion of how to preserve and strengthen our freedoms. Any ideas on alternative terms? —Chris Peterson

In some cases, concerns about seemingly invasive sensors could be mitigated by changing the length of time that data were retained. While nearly half of the participants were unwilling to use GPS if the raw data (e.g., the latitude and longitude coordinates) were kept, all but one participant were willing to use it if the raw data were kept only for as long as was necessary to calculate the characteristics of detected physical activities (e.g., distance or pace of a run), and then promptly discarded. The exact length of the data window that the participants thought was acceptable varied, but most who wanted data purging thought that retaining one to ten minutes of raw data at a time, unless a physical activity is being detected, was reasonable.

We found similar results for audio. A sliding data window of no more than one minute of raw audio data at a time was acceptable to 29% (7 of 24) of participants, although the majority (71%) found recording of any raw audio too invasive. Filtered audio fared better, however. If only a 10-minute sliding window of filtered audio was being saved, except for times when a physical activity is being detected, 62.5% (15 of 24) of participants were willing to use the microphone to get better activity detection.
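The sliding-window retention scheme the participants preferred can be sketched in code. This is a minimal illustration, not from the paper: the class name, the deque-based buffer, and the flush-on-activity behavior are all assumptions about how such continuous purging might be implemented.

```python
import collections
import time


class SlidingSensorBuffer:
    """Keep at most `window_seconds` of raw samples; anything older is
    purged on every new sample. When an activity is detected, the caller
    flushes the window for feature extraction, after which the raw data
    is discarded immediately."""

    def __init__(self, window_seconds=60.0):
        self.window_seconds = window_seconds
        self._samples = collections.deque()  # (timestamp, value) pairs

    def add(self, value, timestamp=None):
        now = timestamp if timestamp is not None else time.time()
        self._samples.append((now, value))
        self._purge(now)

    def _purge(self, now):
        # Continuously discard raw data older than the retention window.
        while self._samples and now - self._samples[0][0] > self.window_seconds:
            self._samples.popleft()

    def flush_for_activity(self):
        # Called when an activity is detected: hand over the window's raw
        # samples for characteristic calculation, then drop them.
        samples = [value for _, value in self._samples]
        self._samples.clear()
        return samples
```

The same buffer works for GPS coordinates or raw audio frames; only the window length (one to ten minutes for GPS, under a minute for audio, per the findings above) would differ.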

And some recommendations:

Our results suggest at least three ways in which the acceptability of sensing can be increased, while respecting privacy. First, sensor data should be saved only when relevant activities are taking place. Results for both GPS and audio revealed that continuously purging the raw data increased user acceptance of both sensors. Second, whenever possible, a system’s core functionality should be based on minimally invasive sensing. The users can then be given a choice to decide whether to enable additional functionality that might require more invasive sensors. Physical activity detection, much of which can be done with a simple 3-D accelerometer, is a good example of a domain where such graded sensing could be implemented. And third, researchers should explore ways to capture only those features of the sensor data that are truly necessary for a given application. This means, however, that sensor systems might need to have enough computational power to perform onboard processing so that each application that uses a sensor can capture only the information that it needs.

We also note that users can make informed privacy trade-offs only if they understand what the technology is doing, why, and what the potential privacy and security implications are. Building visibility into systems so that users can see and control what data is being recorded and for how long supports informed use. Determining how this can best be done is a difficult, but important, design challenge.
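The third recommendation, onboard feature extraction, might look something like the following sketch. The function name, the particular features, and the activity threshold are all hypothetical; the point is that only a handful of aggregate numbers, rather than the raw samples, would ever leave the device.

```python
import math


def extract_activity_features(accel_samples, threshold=1.5):
    """On-board feature extraction: reduce raw 3-D accelerometer samples
    ((x, y, z) tuples, in g) to the few aggregate features an
    activity-detection application actually needs. The caller discards
    the raw samples once these features are computed."""
    if not accel_samples:
        raise ValueError("no samples in window")
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    n = len(magnitudes)
    mean = sum(magnitudes) / n
    variance = sum((m - mean) ** 2 for m in magnitudes) / n
    # Fraction of samples above the (assumed) movement threshold.
    active_fraction = sum(m > threshold for m in magnitudes) / n
    return {
        "mean_magnitude": mean,
        "variance": variance,
        "active_fraction": active_fraction,
    }
```

An application asking “is the user running?” gets these three numbers and nothing else, which is far less invasive than shipping raw accelerometer traces off the device.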

Gathering data of any kind inevitably leads to privacy concerns. Where should the data be stored, and what boundaries shouldn’t it cross? Who should have access, and who shouldn’t? These questions aren’t new to ubiquitous computing. But the pervasiveness of these sensors adds a new layer of complexity to understanding and managing all the possible data streams. Can one subpoena the data collected by ubiquitous computing systems? As the answer is probably yes, there might be a demand for ubiquitous computing systems where the raw sensor data cannot be accessed at all, but only processed inferences from the data, like “burglar entry,” can.
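One way to picture such an inference-only system: raw readings stay inside the device and are destroyed as soon as a coarse inference has been computed. The sketch below is purely illustrative (Python attribute privacy is advisory, not a security boundary; a real system would keep the raw data inside tamper-resistant hardware), and all names and thresholds are invented.

```python
class InferenceOnlySensor:
    """Sketch of a sensor wrapper that exposes only coarse inferences
    (e.g. "entry detected"), never the raw readings. The readings are
    held internally and overwritten once an inference is made, so there
    is nothing left to subpoena."""

    def __init__(self, entry_threshold=0.8):
        self.__raw = []  # internal raw readings; never exported
        self._threshold = entry_threshold

    def ingest(self, reading):
        self.__raw.append(reading)

    def infer(self):
        # Compute the inference, then destroy the raw data behind it.
        inference = ("entry detected"
                     if any(r > self._threshold for r in self.__raw)
                     else "no event")
        self.__raw = []
        return inference
```

A subpoena served on such a system could only ever yield the inference log, not the underlying sensor stream.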

Quite right, there is such a demand. How do we move forward from the demand to the reality? —Chris Peterson

Principled sensing will often involve getting permission from those being sensed. We can get some ideas about how to think about this process from the paper Affective Sensors, Privacy, and Ethical Contracts by two MIT Media Lab researchers, Carson Reynolds (now at U. Tokyo) and Prof. Rosalind Picard. While not a new paper, it seems like a good place to get started for newcomers to the goal of appropriate sensing. From the abstract:

Sensing affect raises critical privacy concerns, which are examined here using ethical theory, and with a study that illuminates the connection between ethical theory and privacy. We take the perspective that affect sensing systems encode a designer’s ethical and moral decisions: which emotions will be recognized, who can access recognition results, and what use is made of recognized emotions. Previous work on privacy has argued that users want feedback and control over such ethical choices. In response, we develop ethical contracts from the theory of contractualism, which grounds moral decisions on mutual agreement. Current findings indicate that users report significantly more respect for privacy in systems with an ethical contract when compared to a control.

A later quote: “Our theory asserts that ethical decisions are encoded by interaction technology.” Sounds right to me. See the Affective Computing Group for more recent papers. —Chris Peterson

This paper outlines two alternative architectures for ANPR, referred to as the ‘mass surveillance’ and ‘blacklist-in-camera’ approaches. They reflect vastly different approaches to the balance between surveillance and civil liberties.

Basically, the wrong way to do ANPR (automatic number plate recognition) is to collect all vehicle data in a centralized location regardless of whether the vehicle is suspected of anything, and the less-wrong way is to keep a list in the camera of the plate numbers being looked for. About the latter:

Further key requirements of the ‘Blacklist in Camera’ design include: certified non-accessibility and non-recording of any personal data other than that arising under the above circumstances

This requirement is the kind of thing that Open Source Sensing advocates: note the word “certified”.
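Here is a toy sketch of the “blacklist in camera” idea, with hypothetical names throughout: the camera holds only salted hashes of the wanted plate numbers, records a sighting only on a match, and discards every other plate read on the spot, so no record of unsuspected vehicles ever exists to be certified away.

```python
import hashlib


class BlacklistCamera:
    """'Blacklist in camera' sketch: only a list of wanted plates
    (stored as salted hashes) lives in the camera. A sighting is
    retained only on a match; every non-matching plate read is
    discarded immediately and never stored or transmitted."""

    def __init__(self, wanted_plates, salt=b"per-camera-salt"):
        self._salt = salt
        self._wanted = {self._digest(p) for p in wanted_plates}
        self.hits = []  # the only data this camera ever retains

    def _digest(self, plate):
        return hashlib.sha256(self._salt + plate.encode()).hexdigest()

    def observe(self, plate, timestamp):
        if self._digest(plate) in self._wanted:
            self.hits.append((plate, timestamp))
            return True
        return False  # non-matching plate: nothing recorded
```

The “certified non-accessibility” requirement in the quote is the hard part: it demands auditable assurance, in hardware and firmware, that the camera really does behave like this sketch and nothing more.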

Apparently something somewhat similar to the latter method is done in Canada, but Australia is headed in the wrong direction, according to the author. —Chris Peterson

Charles Nevin writes in Intelligent Life, a culture magazine published by The Economist, comparing progress toward the surveillance state in the UK, Germany, and Romania. The Brits are ‘winning’:

Britain had the worst result in Europe, falling into the category of “endemic surveillance societies” alongside Russia and China.

Nevin quotes a policy paper presented by the Portuguese presidency of the EU Council:

Every object the individual uses, every transaction they make and almost everywhere they go will create a detailed digital record. This will generate a wealth of information for public security organisations, and create huge opportunities for more effective and productive public security efforts.