It’s a thought that arose when I was reading a review and came across this passage:

At one point during my time with Glass, we all went out to navigate to a nearby Starbucks — the camera crew I’d brought with me came along. As soon as we got inside however, the employees at Starbucks asked us to stop filming. Sure, no problem. But I kept the Glass’ video recorder going, all the way through my order and getting my coffee. Yes, you can see a light in the prism when the device is recording, but I got the impression that most people had no idea what they were looking at. The cashier seemed to be on the verge of asking me what I was wearing on my face, but the question never came. He certainly never asked me to stop filming.

My first reaction was…not good. They were told to stop filming, and they didn’t? Pretty damn intrusive. But I wondered if I was overreacting, so I let it slide. Then I saw (and Parheliated) this much-linked article. It expresses a slightly different problem, equally valid but rather more generally interesting (and a little less fraught). That one’s no longer “a feature that no one’s talking about”.

It made me revisit my earlier discomfort. The resultant tweet’s been much retweeted, and I’ve had some interesting conversations as a result. A few thoughts arising from them:

It looks like it would be harder to enforce a ban on creepshots than on copyrighted material.

This is partly because we already have the technological infrastructure in place around copy protection: watermarks, automated comparison algorithms, all that gubbins.

The technological infrastructure is in place because it’s been worth people’s money to overcome technical obstacles and set up systems. Smart people have been well-paid to do so.

We also have the cultural and legal infrastructure in place. We’ve made the choice (not me, and not without protest, but we as a society have done) that fair use is acceptable collateral damage in the protection of copyrighted material. Hollywood has lobbyists and legislation.

There’s no money in limiting creepshots.

No money means no history of effort put into dealing with the obstacles, either technical or social.

No history of effort put into overcoming them means the obstacles look insurmountable.

Various solutions floated around my Twitter stream, some serious, some not: facial recognition tied to a Do Not Photograph database; a lapel pin broadcasting a signal that disables the camera; dazzle camo. Giving up and licensing images for a fee. Pretty good notions for a bunch of amateurs spitballing in their spare time. But, as @awa64 pointed out during a really interesting exchange:

@evilrooster @trinandtonic I’m uncomfortable with the idea of privacy as an opt-in, as it seems to lead to victim-blaming.

That’s one of the serious social (rather than technical) problems with limiting the Google Glass’ ability to photograph people in public. There are others. For instance, any solution—technical or legal—that a woman on the subway can use, a misbehaving police officer can use as well. Is that a more or less acceptable sacrifice to make than fair use? I honestly don’t know. I think there’s a hard discussion to be had there.

But I have the weary suspicion that what we will end up doing is nothing, because that’s the way our technological innovations generally work: implement first, dismiss the consequences later. Still, somewhere in this wide-ranging discussion, I caught a glimpse of a world where the safety and respect of real human beings got the same kind of innovative, problem-solving attention that Steamboat Willie does. It was just a glimpse of a possibility, but it intrigued me.

Hello all from the incipient panopticon. I've sort of surrendered to the fact that privacy has become almost completely debased in value. But. I've been to London relatively recently - the omnipresence of CCTV disturbed me on a very basic level, and sent me fleeing back to my nook in Russell Square swiftly. I don't know how the likes of me will cope with 24-hour surveillance. Poorly, I suspect.

This strikes me as an area where science fiction authors have figured out some good model solutions.

I'm waiting for someone to implement a real-world version of the gevulot privacy protocol in Hannu Rajaniemi's The Quantum Thief. Seems like it would sort this right out, if I understood the novel correctly. (I probably didn't.) Or if something like that is technically possible. (I have no idea.)

That or radical, total openness as in David Brin's The Transparent Society: everyone needs to be able to monitor what everyone else is monitoring.

I think one of the key things we have to consider here is that privacy, as we know it today (namely, the presumption that our activities only concern us, rather than the wider public) is a VERY recent conceit. It's another of the things which came about as an outgrowth of the post-World War II consumer boom. It's also a very American concept (albeit one which has been exported to every corner of the globe) and in many ways it's an outgrowth of the Frontier mindset mentioned in the comments for the previous post - the presumption that it can and should be possible to get away from other people and become fully independent of society (and that this is a Good Thing). Prior to the end of WW2, your personal privacy in most communities occurred at the consent of your neighbours - they saw what was going on, or heard it, or could guess from what they saw and heard, but they kept quiet because it wasn't their business (most of the time). Private lives were largely lived in public (the extreme version of this is the conditions which prevailed in London's East End during the tail end of the 19th century, where there was a large population of people who were homeless, who slept in doss-houses if they could afford it, and who otherwise lived pretty much constantly on the street in the full public gaze).

This is not to say I'm looking forward to going back to those days. For one thing, being "abnormal" (in the sense of not complying with a majority of social norms) was a lot harder back then. In a world where privacy is secondary, where everyone's life is constantly uploaded to the Cloud and constantly analysed for profit-seeking opportunities (even more closely targeted advertising; even more closely targeted demographics; kinship and friendship maps built to infer the preferences of one's family and friends from the presents you buy them), there will be even more pressure to conform closely to social norms purely in self-defence. After all, if you're too strange, you might not be able to get a job, or you might not be able to keep one.

John A Arkansawyer @4: Yes, algorithms do have politics too. And this is one of the reasons I get very worried in my Computer Science classes these days, because one of the big gaps I can see is budding computer scientists and technicians aren't being taught to think about the ethical questions involved in their profession. Indeed, there's almost a gleeful discarding of these questions - ethical thinking for computer science or for coders is outsourced to the boss, to the legal department, to HR, to anyone else (and the most any of those will deal with is not so much "right and wrong" as "legal or illegal", or at worst "are we likely to get caught?"). I contrast this with my psychology classes, where every single damn one of them so far has covered the ethics required by the profession, and I can't help but be scared. Computer Science as a profession strikes me as frighteningly amoral.

It really doesn't help that the IT geek mindset doesn't involve much thinking ahead to the consequences of one's actions - there's no call for reflection on "should I do X?", but rather a strict focus on "is X possible?". We're seeing the consequences of this mindset now in things like the Stuxnet virus (designed as a form of computerised warfare) and the various hacking runs from China penetrating US commercial and government entities (and almost certainly it was present in any US-led attempts to penetrate Chinese firewalls, which I suspect have also happened, even if we aren't hearing about them - things don't just happen in a vacuum). Combine this with a corporate mindset, where employees are strongly discouraged from considering the ethical rights and wrongs of their actions if they wish to remain employed, and we wind up with a world in which anything goes provided Legal believes it won't cause an actionable breach of the Eleventh Commandment.

Megpie, I'm not sure privacy is necessarily rooted in the notion of making oneself independent of human society. (It's arguably impossible anyway.) I think of it as having the option of not telling the world things the world doesn't need to know. My conscience and my political beliefs are my own business. So are the books I read, the music I listen to, and the websites I visit.

Other science-fictional sources that touch on some of the same concerns:

- Hellspark by Janet Kagan. Most people wear "spectacles" that appear to do some of the things Google Glass is promising to do. Also noted is that in at least one culture, wearing one's spectacles while talking with another person is considered to be extremely rude because "it implies that one would rather be seeing something else, talking with someone else".

- The Neanderthal Parallax trilogy (Hominids, Humans, Hybrids) by Robert J. Sawyer. The alternate-Earth Neanderthal culture does life-logging as a matter of routine, and for them it mostly works well. But their culture is a LOT different from ours.

- This Alien Shore by C.S. Friedman. This is a pretty close match to some of the things described in the article abi links -- both the good and the bad aspects are explored.

- Mother of Storms by John Barnes. Not quite as close a match, because one has to be "jacked in" to the network to make use of it. But again, some of the potential upsides and downsides are discussed.

Megpie71 @12: I had the impression that privacy pre-WWII was rather unevenly distributed. It was more imposed by the environment and less of a choice, though some people chose their environment with consideration of the privacy it could provide. I'm thinking here of people who "lit out for the frontier" specifically to get away from stifling small towns (or city neighborhoods) where everyone knew them and their entire history.

I suspect that the difference in emphasis on ethics between psychology and CS is more a peculiarity of psychology, and less so of CS. From what I've seen, CS's lack of emphasis on ethics is not that different from other engineering and science fields. My wife's studying psychology at a school that is mainly a medical school, and she claims that the medical students seem to have been given much less emphasis on ethics and confidentiality than the psychology students (even things as simple as teaching the students to knock before entering a room that might be in use for a therapy or counseling session).

Jeremy Leader @ 15 - "Much less" I could cope with. What appals me is the difference between "some" and "none whatsoever". In engineering and the other sciences, there are at least considerations of safety and responsibility involved. But in computer science, there's no consideration of the impact of one's actions whatsoever, and that genuinely frightens me, because the mindset it encourages is one where one's actions are considered to have no impact.

Now in psychology, I can understand the high degree of consideration for ethical behaviour and ethical research - these are, after all, other people's minds and lives we're playing with. It's only reasonable to expect that considerations of doing the right thing are going to be paramount, because unethical psychological research can move quickly into the domain of criminal behaviour such as torture.

What bothers me is that it wouldn't be ethical of me as a psychology student to closely question my research subjects regarding their personal lives, their purchasing habits, their recreational practices, and such, and correlate all of this information into a single database to be used as a predictive matrix for criminal behaviour (for example). But as a computer science student, it's perfectly ethical and acceptable for me to be asked to devise an algorithm and code a program to take information regarding people's personal lives, recreational practices, purchasing habits, websites visited, email contacts and so on, and create a single database and predictive matrix to predict criminal behaviour (or to enable it, as per Texanne's comment @16).

I'm one of the people saying "copyright is more amenable to algorithmic regulation than creepshots." Let me unpack what I mean by that, as it arguably falls into the category of Stuff I Do For a Living.

First, let's set aside value judgments -- I agree that regulating creepshots is more important to society than regulating copyright violation.

Second, let's imagine we have none of the dedicated infrastructure for copyright policing, none of the existing agreements, databases, watermarks, etc. Sort of a technological veil of ignorance.

Because we're talking about Glass, and not video/picture hosting or broadcast modalities in general, let's define "creepshot" as "a photograph or video taken without the subject's knowledge and/or consent, for a use to which the subject would not consent, where the subject is not a person of public interest" and "copyright violation" as "a photograph or video reproducing an image or video subject to copyright, where the recorder is not the owner or licensee of the copyright in question."

The first option that solves both, and is amenable to an algorithmic solution, is "do not allow users to record anything that could be a [creepshot|copyright violation]." For creepshots, this means anything that includes a person; for copyright violation, that means anything that is a still image or video. This solution is clearly a non-starter.

All the other solutions require identifying creepshots from among headshots, bodyshots, clothing shots, etc. etc. A "do not permit anyone not on my whitelist to photograph or film me" registry could keep faces out of creepshots, but barring a solution that includes pervasive tracking of a person on a registry by Google, it wouldn't protect against faceless creepshots, which are plenty common and plenty disgusting.

On the other hand, a registry, as in the solution used on YouTube right now, of "this item is copyrighted", allows much better identification of copyright violations. Because the violation inheres in the identifying information, it's hard, and getting harder, to effectively violate copyright while concealing such violation from a screening algorithm.
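The registry idea can be sketched with a toy perceptual "average hash": reduce each image to a small fingerprint, then treat a small Hamming distance from a registered fingerprint as a probable match. This is a hypothetical, greatly simplified illustration of the principle only; real screening systems (YouTube's among them) use far more robust fingerprinting, and the function names and threshold here are my own inventions:

```python
# Toy sketch of registry-based screening via an "average hash" (aHash).
# Images are plain lists of rows of 0-255 grayscale ints, so this runs
# with no external libraries. Purely illustrative; not a real system.

def average_hash(pixels, size=8):
    """Reduce a grayscale image to a 64-bit fingerprint: downscale to
    size x size by block averaging, then set one bit per cell that is
    brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # Average the block of source pixels that maps to cell (r, c)
            r0, r1 = r * h // size, (r + 1) * h // size
            c0, c1 = c * w // size, (c + 1) * w // size
            block = [pixels[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for value in cells:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming(a, b):
    """Number of bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def matches_registry(image, registry, threshold=10):
    """A small Hamming distance means 'probably the same image', even
    after re-encoding or mild editing."""
    fingerprint = average_hash(image)
    return any(hamming(fingerprint, known) <= threshold for known in registry)
```

Because the fingerprint survives small edits (recompression, a brightness tweak), a near-duplicate of a registered work still lands within the distance threshold. The asymmetry with creepshots falls out immediately: there is no registry entry a creepshot could ever be matched against, because the violation isn't a property of the pixels.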

On the other, other hand: as effective as algorithmic screening is against copyright violation, and as ineffective as it is against creepshots, social, post-hoc screening is the exact opposite. No one but content owners cares about copyright enforcement, and except for well-lobbied legislators (and attorneys like myself), no one is going to help the owners run the violators out of town. On the other hand, public, grassroots campaigns have made substantial inroads against creepshots on, say, Reddit. I suspect this is largely because while no one cares much about copyright as a moral issue or thinks violators are particularly bad people, everyone immediately gets that the people trading upskirt shots of tweens taken on mall stairways are scumbags, and everyone is willing to help run them out of town, no litigation or legislation necessary.

So, yes, I expect Google to prescreen for copyright violations but not creepshots, but I also expect creepshotters to get outed with some regularity.

From memory here.... It seems to me that I've read about Spanish/Arab styles of housing where the house presents a blank front to the street, and has a garden inside the wall. There still might be a lack of privacy inside the house, but what people are doing in the house isn't obvious to the public.

And I'm more sure of remembering reading (in Vance Packard?) of how remarkable the American suburb was, with its unfenced lots and picture windows.

Both would be for those who could afford such housing, but it would still be quite a lot of people.

Re #6: Good luck with that in a world which, post-Anonymous and post-Occupy, is increasingly banning the wearing of masks.

Privacy is already an unequally distributed good. One thing I came completely unprepared for when I read Sudhir Venkatesh's must-read Off the Books was how much of ghetto street culture is driven by the fact that people are driven, economically, to live multiple people per bedroom, multiple families per housing unit. In middle class culture (and up), you go inside for privacy; in poor culture, you go outdoors for privacy. Of course that breaks down; if somebody chooses the same bus shelter or back alley couch or park bench to go for privacy, they're going to walk in on yours; how that's dealt with, socially, is a powerful cultural prohibition on intrusion and on revealing what you discover by accidental intrusion; as with all cultural prohibitions, it's not failure-proof but it's better than nothing.

I was talking about this with a friend of mine a week or so ago, and he put it the most succinctly I've heard it said yet: "This is going to be a real problem during the transition time between when everybody finds out what everybody is doing and when everybody stops caring."

What bothers me is that it wouldn't be ethical of me as a psychology student to closely question my research subjects regarding their personal lives, their purchasing habits, their recreational practices, and such, and correlate all of this information into a single database to be used as a predictive matrix for criminal behaviour (for example). But as a computer science student, it's perfectly ethical and acceptable for me to be asked to devise an algorithm and code a program to take information regarding people's personal lives, recreational practices, purchasing habits, websites visited, email contacts and so on, and create a single database and predictive matrix to predict criminal behaviour (or to enable it, as per Texanne's comment @16).

Some years back, I started to write, and then abandoned, a long blog post on the uses of data warehousing which made almost exactly that point, throwing in the idea that social scientists working with those same tools in the corporate world weren't ethically bound as academic researchers are, and that our understanding of social science as ethically bound was mistaken.

When I read Poul Anderson's Shield, that was the first time it'd ever occurred to me that someone might get an MS in physics, and then willingly work for a gangster. I don't know why it shocked me so, as by then I'd gotten a fuller view of my early heroes, the bomb-builders of the Manhattan Project, and a more cynical take on their too-late qualms, but it did.

And now I'm cynical about the social sciences as well. If this keeps up, I'm eventually going to believe the paranoid bullshit about the medical industry hiding a cure for cancer.

Anne McCaffrey's "Dull Drums" is set in a computer science class in a computerized 21st-century society (it was written in the 1970s), and considers the issue of IT geeks not being inclined to give due consideration to the real people behind their technical challenges and data flows. (McCaffrey's lecturer does try to teach his class better, though.)

4. Every person has a right to the safeguard of his dignity, honour and reputation.

5. Every person has a right to respect for his private life.

The Supreme Court of Canada has upheld the rights of individuals to not have their photos published without their permission, with exceptions: newsworthy events, people who are in the public eye, like politicians or celebrities, or if the person was incidental to the photo, and not the main subject(s).*

Les Quebecois likely wouldn't have much problem getting traction with their objections; the problem would then be a jurisdictional issue, if the photos were published on Google servers outside of Canada.

It's also significant here that privacy vs preventing movie piracy is not an apples-to-apples comparison.

Privacy vs. human creativity is the apples-to-apples version, at least as far as the creators' rights to attribution, et al.

Commercial content providers evolved in an environment of copyright law which partially protects creative work, and which is (as a practical matter) only partially enforceable. They evolved to profit from what was enforceable.

So the fact that these companies, which squeezed through a series of oddly-shaped holes to get where they are, are in almost the right shape to squeeze through this new-but-similar hole is not especially surprising. The fact that a basic human value is less well-shaped is not surprising either.

To put it another way: your right to not be photographed when you don't want to is similar to your right to hum a little tune to yourself and not have anyone record your original work. Glass won't (and maybe can't) enforce either of those rights. The kind of "copyright enforcement" we're comparing here (the right of multimillion-dollar companies to not have unauthorized copies made of their widely-distributed works) is maybe more like your right not to have anyone take pictures in your bedroom, a special case which Glass maybe could enforce.

Doesn't affect the fundamental human rights involved: I'm much more sympathetic to humans wishing not to be photographed, and to tune-hummers, than I am to the sorts of companies that are going to actually be given the tools to protect themselves. But I do think it's useful to remember that we are comparing a general human right to a very special subset of a different human right, particularly since the special subset in question was selected in large part for its ease of enforcement.

Go look at the TOS for most internet services. It's often technology passed through lawyers—it's difficult to delete your data from the back-ups when you stop using a service—but there is so much I see which is explainable by ethical inadequacy.

I have been known to be impolite on the subject, even vulgar. It seems that in some places, such language is seen as being worse than being a fucking crook, which strikes me as ethically sub-optimal.

It occurs to me that much of the grief of the modern world comes from a lack of political agency. We had a century or so of people with some power seeing it as a good thing that everyone had some power. It gave us electoral reform in Victorian England, and universal education, and groups of working class people intensely debating politics. It was a period that gave us Marxism, and the idea that political agency was possible.

These things were perverted by top-down power structures.

The Victorians at least had some ethics, even if driven by pragmatic self-preservation, but we mostly hear about the ones who acted ethically. And if you push them, suddenly it's out with the machine guns, whether you're a brown-skinned native or a coal-miner on strike.

The political machines have changed. Fox News has taken the place of Tammany Hall. But, however the system is rigged, what chance do we have of changing things?

I am full of reactions today, but I will filter them down to one: the very term "privacy" is being redefined (rather, refined) for this situation.

I might, twenty years ago, have defined nested layers of information: I don't want anybody to see *this*; I want only my close family to see *that*; I want only my family and friends to see *this*; but I don't care if a stranger sees *that*. We still think in these terms: Facebook's access controls, for an obvious example.

Now we're tripping over an unstated assumption: I don't care if a stranger sees *that*, but I *do* care if she *remembers* it! Or (different axis) if she shows it to somebody else! This is what we're talking about, right? Not a secret camera inside my bedroom (although that's worth discussion too), but a secret camera inside the coffee shop full of strangers that I walked into.

Worse: technological barriers to information flow are brittle. (This is my response to MarkG@2's early comment about mathematical privacy protocols -- security theory is full of perfect theoretical solutions. In real life, good luck with that. See also Monday's xkcd comic.) Contrariwise, legal barriers are permeable; stuff leaks wherever the law is inattentive or not yet written. Conclusion: we *can't* think in terms of "keeping information from leaking." We have to think in terms of flood control; friction and drainage, not walls. "Hard enough", not "impossible". But this means inventing a sense of "good enough" at Internet scale, and is that possible? Are you going to be satisfied if your muffin purchase is noticed by less than a thousand people, rather than more than a million? Social conventions don't scale.

ObRedMars: "Young men and women, educated very carefully to be apolitical, to be technicians who thought they disliked politics, making them putty in the hands of their rulers, just like always."

I think part of the difference, at least from what I remember of my glancing encounters with hard-science ethics* instruction, is that chemists and physicists carry the scars of seeing their work misused in a way that programmers, so far, don't. That's not to say that it hasn't happened, just that we don't remember it the same way.

I also think that as a programmer, I often encounter people who are, from my perspective, a little bit paranoid. They're scared of giving me information that I could get anyway, or scared that giving me information will result in its release to nefarious third parties. So I've spent some time soothing and/or tricking people into giving me information.

That puts me in a kind of paternal position: trust me, I'll take care of your stuff. "Don't be evil," in other words, rather than "don't create and give away a tool capable of evil uses." That's a lesson chemists and physicists learned a long time ago, to think about what a king or general might do with what you've made.

You'd think, after IBM's early history as well as our field's founding story about Bletchley Park, we'd think a bit more about what a Fuhrer might do with what we've made. But it seems like our ethics are still stuck at the "making a gun is okay as long as you don't shoot anyone" stage.

*Granted, I was taught by a guy who quit Sandia because he didn't want to make nukes anymore. But his teaching, while more vivid, was in the same vein as what I saw even in high school.

We talk about what's possible and impossible, technically and socially, from a perspective formed by a set of values. And by values, I don't just mean "what is good and what is not?", but also, "who matters and who does not?"

I'm a software tester (well, test manager, these days), and I know rather a lot about how your starting assumptions form your eventual conclusions. I can't think how many crashing bugs I've found in "working" code, because I rejected the developer's starting assumptions.

And I used to work with a front-end developer whose task it was to make our product render the same (or as close as possible) on multiple browsers, including IE6. I can't recall how many times he declared that this time, it was going to be genuinely impossible, technically simply impossible, to get IE to do the thing he needed to do. But his paycheck, and our success as a company, depended on him finding a way. You know what proportion of times the Declaration of Total Impossibility and Subsequent Stewing was not followed by a solution? 0%

When I look at the world we live in, and the world my children will inherit, I see a large category of problems that straddle the messy borderlands between the various elements of our society: technology, psychology, custom, law, economics, ethics, practice. Problems like privacy, harassment, contraception, changing work cultures, urban blight, and determining who made the best film last year. And what's interesting about those messy, insoluble problems is that it's so rarely the wealthy white men who happen to get the short ends of the messy, insoluble sticks.

That's because these guys have the power to say, "I reject the insolubility of the problem. Here is a large stack of money. Who wants to join me in rejecting its insolubility and, with that new mindset, find a solution?"

And the solutions they come up with aren't always fair, and they often externalize yet more costs onto the people with too much melanin, or too many X chromosomes, or too few commas on their bank statements. And many of them shouldn't be adopted, because they come with unacceptable collateral damage. But the solutions exist, and I don't think that's particularly a coincidence. I don't think that immunity from insoluble problems is actually an intrinsic quality exclusive to the ruling class.

Adding to the list of science-fictional treatments of the subject: Bob Shaw's Other Days, Other Eyes, which I riffed off of when I Parheliated the original "issue no one is talking about" article.

The mechanics of the problem were slightly different: the images in the slow glass still need to be recorded more permanently, and any speech had to be added by lipreaders. But Shaw saw the consequences of pervasive, ubiquitous recording clearly. (And yes, it is ironic that it was a character eyeing up a girl that brought the matter home in the story.)

That’s one of the serious social (rather than technical) problems with limiting the Google Glass’ ability to photograph people in public.

I would expect that the most likely outcome is simply that people will converge around the belief: "you don't have an expectation of privacy in public; if you're out on the street, then people are maybe going to look at you, people are maybe going to photograph you, and people are maybe going to film you."

That's already the legal position in the UK, for example: you can use the public street for any reasonable purpose, as long as you don't get in people's way, harass or threaten them. That includes taking photos of people without their consent and selling those photos for money. The key phrase is "reasonable expectation of privacy" and, in the street, you don't have one.

See here, for example: http://www.urban75.org/photos/photographers-rights-street-shooting.html

BSD @ 18: let's define "creepshot" as "a photograph or video taken without the subject's knowledge and/or consent, for a use to which the subject would not consent, where the subject is not a person of public interest"

It's unclear to me how that wouldn't amount to a generic ban on (or at least condemnation of) most forms of street photography (e.g., Walker Evans, Henri Cartier-Bresson, Diane Arbus, etc.) -- not to mention a great deal of photojournalism.

The end of street photography and most photojournalism might conceivably be a price people are willing to pay in the name of very strong forms of privacy, but it's at least worth considering some of the side effects.

Peter Erwin @38:
You're right about photojournalism, so let's expand that definition to include general matters of public interest.

As to "street photography", I suspect most people would consent, in the abstract at least, to being peripherally in a crowd scene or street shot. Anything someone would find it objectionable to be included in should require consent.

I think part of the difference, at least from what I remember of my glancing encounters with hard-science ethics* instruction, is that chemists and physicists carry the scars of seeing their work misused in a way that programmers, so far, don't. That's not to say that it hasn't happened, just that we don't remember it the same way.

And yet, it's happening on a low level every day. And there are people calling it to their attention. But many of these violations can be justified on the glibertarian principles which so infest tech politics, and that provides a very comfortable pot for the frog to boil in. Plus there's that thing about your living depending on not noticing the facts of what you're doing.

(And you've got very odd cultural drivers helping it along. Would anyone who started Doonesbury in the early seventies as I did have expected Trudeau to place heroes in DARPA and the CIA?)

If a photo of a public street is only legal if everyone in it has signed a waiver, that's the end of street photography, and the end of most holiday snaps, too. Try getting everyone at the castle in Disneyland to sign waivers for everyone else trying to take a pic.

I also think that Google Glass will be a bad tool for trying to take creepshots. The camera will always have fewer megapixels than a phone, and surreptitiously getting your Glasses close to a pair of yoga pants is harder than doing the same with a phone.

I think it depends on the type of creepshot. Walking behind a woman and videoing her ass as she walks gets easier. Until people know how to tell if the camera is on, I expect an upsurge in cleavage videos. But it's true that upskirt photography is not likely to be revolutionized.

Also, how long do you think that the cameras will be lower resolution than a phone? And is that too low-res for people to share them, comment on them, and drool over them? How low-resolution does a stalkeriffic photo have to be to be safe?

I do kinda get the feeling that you don't think creepshots are really a problem in themselves. You keep minimizing the issue, bringing up other things that would be better talked about, etc. Is that the case? If it is, can we recommend some resources to explain why they, as a specific category, matter?

Here's one. How long before there's an app that allows you to see what a phone camera sees in the HUD of the Glass? Then you can position the camera better for shots of places where you can get your hand, but not your head. Maybe you still have to tap the shutter to take the picture; maybe you can voice-activate the camera (and maybe you can choose a keyword, so you don't do it by saying, 'Take picture').

I can think of dozens of useful applications, from car repair to checking the hair on the back of your head. But it would also make upskirt photography a hell of a lot less hit and miss than I suspect it currently is.

As to "street photography", I suspect most people would consent, in the abstract at least, to being peripherally in a crowd scene or street-shot. Anything that someone would find objectionable to include them in should require consent.

That opens up another problem, though. Say you're on a beach. Do you mind being in the background of someone else's photo without being asked? Of course not. Do you mind someone drooling over a photo of just you, in your swimsuit, taken without your permission? Quite possibly.
But the difference between those two photos might be, basically, cropping.

So to fight that you'd have to insert controls not just at the image-capture stage but throughout the handling chain - to fight the threat of Alice innocently posting her holiday snaps online, and Bob looking at them and cropping out Eve in the background - which then becomes a creepshot.

Things like that are why I posted 36: that I think the most likely outcome is just general acceptance that this sort of thing will happen.

BSD @ 39:
Any sufficiently large collection of people is going to include some people who would object, no matter how "unreasonable" you or I might find their objections. ("But I've always said I wouldn't be caught dead at that place!" "Not while I'm wearing something like this, and haven't shaved for a day!" "Not in the same picture with those people!") And I happen to think the line between "street photography" and "general matters of public interest" is a terribly fuzzy one.

Consider, for example, Ruth Orkin's famous "American Girl in Italy". As it happens, the titular "American girl" was a friend of Orkin's who was participating in the series, so her consent was there from the beginning. But the various Italian men in the photo -- which of them are subjects whose consent must be obtained, and which of them are just "peripheral"?

Not to mention the problems inherent in defining "objectionable" -- when? according to whose standards? I can imagine that some of those Italian men might -- at the time -- have had no objection at all to being included in the photo. (And maybe some would have objected -- "What if my wife sees me looking at that young woman?") Years later, when the photograph has become, for some people at least, an emblematic image of sexism and harassment, might they change their minds?

Try getting everyone at the castle in Disneyland to sign waivers for everyone else trying to take a pic.

Actually, that's the problem that's easiest to solve. Just have a clickthrough when you're buying the tickets: "By clicking CONTINUE you accept that your photograph may be taken by Disney employees or third parties at any time during your visit, and anywhere in the Park with the exception of [hotel rooms, changing rooms, bathrooms and so on], and you explicitly waive your right to sue any such parties for taking, processing, selling, displaying or using such photographs".

I think Google Glass has the potential to be an accelerant for creepshots, because people can't (haven't learned to, can't see clearly enough to, whatever) tell when they're being photographed. That was the problem in that Starbucks in the article I quoted. Although you can hold an iPhone in something other than "photography position" and take a shot, it's awkward, and you can't generally see to frame the picture well. The counter staff would have noticed that they were being filmed.

Are creepshots the only way that Glass' always-available-to-stealth-film functionality will be abused? Nope. Wait till we can build files for 3D printers based on Glass shots; then you'll see some interesting times.

But this was a half-technical, half-sociological post, and the second half was the question: "why don't we care about the people getting creepshotted enough to even consider bending the world out of shape the way we do for Disney?" I know there isn't an answer that doesn't boil down to "this is just the way it is; suck it up", and I know we shouldn't even ask questions if we can't face the answers we're likely to get back, but sometimes I do get weary of this short-ended stick that my gender has drawn.

On the matter of consent to being photographed:
I worked in downtown Los Angeles, and (as you might expect) there's a certain amount of movie and TV work done there. You see signs that say that (paraphrasing) past this point, you implicitly consent to being filmed and no, you won't be paid for it.

re 41: The lack of freedom of panorama already makes the legality of almost any outdoor picture showing a recently enough erected building questionable.

A few not entirely tied together thoughts:

It could be argued that there is a default principle of "where there is a door, there is an expectation of some degree of privacy." That is, after all, part of the abstract function of doors: to shut things, and people, out.

It also seems to me that part of the problem is that it is increasingly difficult to tell if one is being photographed.

The lack of freedom of panorama already makes the legality of almost any outdoor picture showing a recently enough erected building questionable.

Again, this depends on which jurisdiction you're in. In the US and the UK, as long as you are standing in a public place, you can photograph a building and sell the photos without needing permission.

"The copyright in an architectural work that has been constructed does not include the right to prevent the making, distributing, or public display of pictures, paintings, photographs, or other pictorial representations of the work, if the building in which the work is embodied is located in or ordinarily visible from a public place." (U.S. Copyright Office, Title 17 USC Section 120)

The owner's at liberty to say "no photography inside this building", of course. And the law will be on his side. But if you're not on his property, he can't stop you.

On the original post, I think any kind of action is going to have to wait on a general acceptance in law that you suffer actual harm from someone taking your photo in public under certain circumstances. That isn't there at present.

I can't remember which science fiction story caused the realization (possibly Niven's bit about pickpocketing not being a crime on a very crowded future earth) that what counts as a crime is what people agree is a crime. The effects of surveillance are not easily predicted.

I can't remember which science fiction story caused the realization (possibly Niven's bit about pickpocketing not being a crime on a very crowded future earth) that what counts as a crime is what people agree is a crime

Which, I think, also makes the distinction between things that are against the law (double parking), things that are wrong (cheating on your partner), and things that are both against the law and wrong (armed robbery).
The Court of Public Opinion can determine that creepshots are wrong, to the extent that it hasn't already done so. But for the law to determine that creepshots shouldn't be allowed, I think you'd have to produce some good arguments that they cause actual harm.

But this was a half-technical, half-sociological post, and the second half was the question: "why don't we care about the people getting creepshotted enough to even consider bending the world out of shape the way we do for Disney?" I know there isn't an answer that doesn't boil down to "this is just the way it is; suck it up", and I know we shouldn't even ask questions if we can't face the answers we're likely to get back, but sometimes I do get weary of this short-ended stick that my gender has drawn.

I think there are at least partial answers, and ajay points to one here:

ajay @ 54:

There is, by contrast, a general acceptance in law that Disney (or whoever) suffers actual harm from people copying its stuff and selling it without permission.

The harm Disney suffers is financial and the power Disney wields is considerable. The harm women and others suffer is not financial and the power they wield is not considerable.

In each case, there is a connection between the first and second clauses, but the connection is neither direct nor simple, especially in the second case.

If there's an answer--and I think there is--it will require both rough parity of power among genders and some destruction of the financial power of capital.

Also: Is creepshotting in and of itself a problem? Or is the problem the physical behavior it can enable? And are the two discrete items or part of a continuous spectrum?

It seems to me (as someone who hasn't directly experienced it) that if the problem is less the information flow from the creepshot--does it really matter if someone out there finds a picture wankworthy and leaves it at that?--than the behavioral backwash from it, from verbal harassment on up the scale to physical violence, then there are good, achievable solutions.

(I have the multiple-mindedness on this issue one would expect from a person who discovered both Robert Heinlein and Susan Brownmiller in high school and ate them both up.)

I am wondering about one issue. A creepshot is a surreptitiously taken photograph of a woman's buttocks. One taken in a public place.

There is no giant conceptual (as opposed to technological) leap from Google Glass® to a prosthetic eye that is also a camera, albeit one that is linked both to the internet and to the optic nerve. I can foresee that being a technology available within the next quarter century, or even less. How shall we know when someone is recording and when they are not?

Also: this is going to make HIPAA impossible to enforce. How do you protect your patients from being filmed going in and out of your office, or sitting in your waiting room?

(True fact: back in the '90s my boss's wife heard from a friend that I'd been to the OB-GYN, and confronted me with "are you pregnant?" I wasn't, as it happened. But when I later became pregnant, I got fired. These issues aren't just about squeamishness.)

So many sfnal discussions of the panopticon problem: "Private Eye", by Lewis Padgett, "The Dead Past", by Isaac Asimov, and "E for Effort", by T. L. Sherred, just for openers.

By the way, there may be a partial technical solution to the creepshot problem. Wear a superbright near-IR light source; it'll overwhelm the camera sensors without interfering with ordinary human vision.

Sometimes, yes. Hence anti-harassment laws. But sometimes it doesn't, hence the current state of the law on photographing people in public. At present, whenever you walk down a street or sit in a park, you could be photographed without your knowledge.

But the law at present, in the UK and the US, implicitly takes the view that a society where you couldn't take any photos anywhere without the permission of everyone in the shot would be worse than a society where some people are made uncomfortable at some times by the knowledge that they could be photographed without their consent.

Now, I'm not saying that's the right answer, but it is the basis of current law.

How do you protect your patients from being filmed going in and out of your office, or sitting in your waiting room?

In the waiting room they would a) be on private property (where the owner or tenant would be able to forbid photography) and b) have a reasonable expectation of privacy.

And I'm showing my orientoatlantic privilege here, but if you're worried about being fired because you can be photographed going into your OB-GYN, and your boss is then going to fire you for being pregnant, then the problem with your situation is not really one of lax regulation of photography in public places; the problem is that your boss can legally fire you for being pregnant!

Wow, that is horrifying! If Glass becomes commonplace, I may just have to become a hermit.

I must admit to being amused at one of the advertising vids, though. "We want technology to be out of the way, but still be there for when we need it." Um....what about this nifty invention called the POCKET, folks? Are pockets not "cool" enough anymore?

About privacy as not only a social construct but a recent one: Nancy @19 has already mentioned what domestic architecture can signal, and Magpie @12 and Brad @22 have pointed out the economic/class constraints on privacy. I thought of Fernand Braudel's description of the layout of 17th-century houses (including aristocratic homes) in The Structures of Everyday Life and what it implied about expectations of (and locations for) privacy. (Then there's the royal lever and coucher, but that is about political theatre rather than ordinary domestic life.)

I would add that on our stays in the UK, we noted how much higher what might be called the intimacy barrier is--that is, how much less common it was to be invited into the home of a colleague or acquaintance. We would meet at a pub or tea-shop or restaurant with some frequency, but only once in a stay of several months were we asked to dinner in a home. And it was clear from our other UK friends that this signalled a higher level of intimacy, or at least acceptance, than the same invitation would have in Minnesota. It reminded me of the public/private decorum depicted in UK literature at least from the 19th century into the post-war years of the 20th. (It seems to be of a piece with the practice of not addressing mere acquaintances by their first names, not knowing anything about their familial circumstances--and to be class-bound.)

None of which addresses the challenge of a panopticon enabled by cheap, widely distributed technology in the hands of perennial adolescents, let alone government agencies or corporations.

The harm Disney suffers is financial and the power Disney wields is considerable. The harm women and others suffer is not financial and the power they wield is not considerable.

I think the first part of that second sentence is an assertion rather than a self-evident fact. The differential harm that women suffer from the asymmetric uses and consequences of this type of technology has far-reaching, if not always blatantly obvious or direct, financial consequences.

There is financial harm in being differentially socially penalized for certain behaviors or attributes. There is financial harm in being socially penalized for attributes unique to one's perceived social sub-group. That harm may come in the form of reduced productivity. It may come in the form of cost-benefit analyses for participating in certain types of remunerative activities, or for being present in certain kinds of contexts, that affect one's current or potential financial status.

One needn't even necessarily bring in the gender asymmetry of how these types of surveillance are employed or in the social consequences of their results to note that there can be financial harm to the individuals targeted. But the gender asymmetry does affect how that financial harm is perceived and evaluated.

On the plus side, the inability to tell whether you are being filmed will probably mean many more films of bad behavior on the part of the police. While it will not eliminate bad actors in the system, I expect this technology will greatly reduce instances of police brutality in the future. I also think this will result in people assuming that the police are far worse now than in the past, just as other reporting on violent crime has made people assume crime rates are up when in fact they are down.

I think the first part of that second sentence is an assertion rather than a self-evident fact.

Not only that, it's wrong as stated, and I thank you for the corrective action.

It'd be better to have prefaced it all with, "In this specific instance," and to have said something about the primary harm in this instance.

I'm more comfortable with that, even though it's not perfect.

I still kind of like the original, though.

It's the sort of oversimplified, not-quite-right analysis that's potentially actionable. My primary problem with so much current thought about privilege and oppression is the extent to which it assumes the problems are not solvable and that all we can do is make things better. The complexity of a full, accurate analysis of most problems can be a road to depression and inaction.

ajay @ #66, I don't disagree with you (had my employer had more than 50 employees, it WOULD have been illegal--but I wouldn't have been able to do anything about it, because I couldn't afford a lawyer). The situation is actually somewhat worse now, as I live in a so-called "right to work" state.

However, even on private property it's difficult to forbid photography when in order to be sure your rules are being followed, you have to frisk/search your customers. Technically that's already true, but not so many people currently have concealed/concealable cameras. That appears to be changing rapidly.

How well will Google Glass capture the content of a book that I leaf through, rapidly but systematically, in the bookstore? Asking for a friend.

Does Google have a plausible business model from being a data-warehouse for everybody in the world's CCTV cameras (not just Glass images)? Stream your content to them for a small monthly fee, and in return they'll... what, run face-recognition on suspects? Look algorithmically for suspicious behaviour?

Helmet-mounted cameras on cyclists are something I'm noticing often enough now that they're clearly gonna be ubiquitous in a year or two.

"Like any responsible father, Hugh Morrison had installed cameras in every room in the flat. You bang them in like nails, the work experience had told him, and bang them in Hugh did. The internet said they transmitted to the police station. The bubble pack said they recorded. Hugh knew which to believe, and banged them in without a worry. You could only pick them up on the house wifi. The bubble pack said that too."—opening paragraph of Ken MacLeod's Intrusion, which you should all read.

How much of the damage likely to be done here is from taking the picture surreptitiously, and how much is from uploading the picture and making it public? Because there's a pretty straightforward technical fix to not wanting pictures taken and uploaded, though I don't know how well it would work in practice. Develop a standard, machine-readable "don't upload photos of me" glyph, and sell clothing and pins with it. Get major sites that do photo sharing and search engines to agree to follow the expressed request by default, and put that in the algorithms they use.

You could imagine making cameras refuse to record pictures with that glyph in it, but that seems more likely both to be bypassed and to be misused by people who don't want anyone taking pictures of their bad behavior, or stores who don't want anyone able to take a picture of a pricetag or product description for comparison shopping. But it seems to me that a fairly straightforward opt-out would address at least a large part of the problem. You'd want to limit it to pictures of humans, and I'm assuming it's not intractable to get an algorithm to determine pretty well that a given picture is or isn't a close-up of someone's cleavage, tight yoga pants, beer gut sticking out from under his T-shirt, etc.
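That upload-side opt-out could be sketched as a simple filter. Everything below is hypothetical: the glyph name, the tag-based stand-ins for a real glyph decoder and a real close-up classifier, and the policy itself are illustrations of the idea, not anyone's actual system.

```python
# Sketch of the opt-out policy described above. Both "detector"
# functions are stand-ins: a real photo-sharing site would run a
# barcode/glyph decoder and an image classifier in their place.

def contains_optout_glyph(image_tags):
    # Hypothetical: assume a decoder has already extracted any
    # machine-readable symbols from the image into a set of tags,
    # and "NO-UPLOAD" is the agreed-on opt-out glyph.
    return "NO-UPLOAD" in image_tags

def is_closeup_of_person(image_tags):
    # Hypothetical classifier verdict, precomputed as a tag here.
    return "PERSON-CLOSEUP" in image_tags

def may_upload(image_tags):
    """Refuse by default only when the opt-out glyph appears in a
    close-up of a person; wide crowd scenes still go through."""
    return not (contains_optout_glyph(image_tags)
                and is_closeup_of_person(image_tags))

# A crowd scene containing someone wearing the glyph is still
# uploadable; a close-up of that same person is not.
print(may_upload({"NO-UPLOAD"}))
print(may_upload({"NO-UPLOAD", "PERSON-CLOSEUP"}))
```

The interesting design question is exactly the one raised above: the policy only works if the "is this a close-up of a human" step is reliable, and if the big sharing sites agree to run it by default.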

I suspect one danger with trying to do this is that powerful people will likely want that "no uploading photos of this" glyph for all sorts of other things, like classified or proprietary documents, prices, police uniforms, various buildings, etc.

You'd definitely want some kind of way of getting human judgement on whether or not a picture could be posted built into this, but I haven't tried to think it through far enough to be sure how that ought to work yet. (No human intervention is the way to make it an easy sell to Google and Facebook and such.)

Consumers of your information mostly live in the future, at least if you're not 99 years old or something. Their uses of that information will be what is being done now, plus a whole bunch of stuff that nobody has thought of yet.

Technical barriers to discovering an identity behind a pseudonym, or linking all a person's online writings, will likely go away over time. (Automated textual analysis plus logs plus interest and background and location databases probably make it almost impossible for pseudonyms to work for a large-scale network presence, either against a determined attacker now or against a casual one in a decade.)
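As a toy illustration of the kind of automated textual analysis meant here (and nothing like a real deanonymization system), even crude character n-gram profiles compared by cosine similarity will match a short unknown text to the stylistically closest text in a tiny corpus. The texts and author names below are invented for the example.

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Character n-gram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two frequency profiles (0.0 to 1.0)."""
    dot = sum(p[k] * q[k] for k in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Invented corpus: which known writer's profile is closest to the
# pseudonymous text?
known = {
    "alice": "the quick brown fox jumps over the lazy dog again and again",
    "bob":   "colourless green ideas sleep furiously in the old garden",
}
unknown = "the lazy dog watched the quick fox jump over the fence"

scores = {name: cosine(ngram_profile(t), ngram_profile(unknown))
          for name, t in known.items()}
best = max(scores, key=scores.get)
```

Real stylometry uses far richer features and far more data, which is exactly why the comment's pessimism about large-scale pseudonymity seems warranted: the attack only gets easier as the corpus grows.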

Commercial barriers involving companies making promises or trying not to be evil or not wanting to scare off their customers will mostly go away over time--the incentives are for the "don't be evil" guys to turn into an evil empire, the unsinkable massive social network company of today is likely tomorrow's AOL ("Wow, are those guys still in business?"), and quite possibly they will be bought by someone hoping to mine their data. Data doesn't go away unless you take a lot of trouble to make it go away, and almost nobody does. (The best way to make it go away is not to collect it in the first place.)

One sort of odd cross-thread linking. Most of the footage from that spectacular meteor explosion over the Russian city came from car dash cameras, widely used in Russia, apparently, for insurance reasons. (I gather there's a lot of problem with staged or claimed crashes, so having your own footage of the accident is helpful.)

It's hard for me to even imagine what would stop the proliferation of cameras and microphones and other sensors in the world. They get cheaper and more capable every year (along with network connections and memory for them), there are all kinds of good and bad incentives for putting them everywhere. What would reverse the trend?

How to lay claim to a modicum of privacy in the ever-more-networked and ever-less-forgetful future is an interesting problem, for values of interesting on the level of "old Chinese proverb".

I had a thought about using QR codes as a sort of photographable "robots.txt", but I don't like the implications of having to opt-in for privacy, nor the possible/likely misuse of them as "target indicators".

Power imbalance between different sorts of people (and groups thereof) seems to be one of the bigger levers that enable/compound harm in these areas. Working on better balance might help a lot of things, really.

(Some) Bicyclists are starting to wear cameras constantly, in the US, specifically because of the he said/she said nature of car-versus-bike collisions, especially doorings. If there don't happen to be helpful exterior witnesses, it can be extremely difficult for a bicyclist to convince a court that the motorist did something unreasonable -- so, cameras.

Elliott Mason @85: I haven't seen that many cameras on bicyclists yet, but I'd estimate that more than half the motorcyclists I see on the streets and freeways of Los Angeles have at least one camera, sometimes two, on their helmet or their rear fender. There may be more who mount them less visibly, but I think there's a perceived deterrent effect to "I've got a camera, don't crowd me".

I've also overheard several people comment nostalgically "remember when concert venues used to prohibit cameras?"

#86: I'll second the recommendation. I do think DB's solution / thesis -- that people must be allowed to "look back" at the authorities looking at them -- underestimates the unwillingness of those who own the cameras to give up control, and of companies and governments to allow themselves to be scrutinized.

I read a draft of a Brin story whose title (and place of publication) I forget. It is set in a near-future where Google-glass like augmented reality spex are ubiquitous, and shows how the system deals with a guy who abuses the technology.

Niall, #33: I wouldn't have thought that creepshots are the biggest issue.

That would be because white men aren't generally the subject of creepshots. Therefore that issue doesn't loom particularly large for them.

Hmmm... which makes me wonder if one effective method of changing public opinion about that issue might be websites featuring sneakily-taken crotch-shots of random men out in public?

ajay, #36: There's already been one case in Tulsa where some asshole did a creepshot up a young woman's skirt in a department store, and the judge ruled that if she wasn't in the dressing room she had no reasonable expectation of privacy even under her dress. I thought I had the link somewhere, but now I can't find it. I know I posted about it here at the time, but apparently not on my LJ.

Russell, #69: This is reminding me of the disconnect between myself and my parents when I was in college and living in the dorm. To my parents, that dorm room was a bedroom and the idea that I might invite a male friend up to it was scandalous! To me, that dorm room was my home (despite the fact that there was a bed in it), and was the natural place for me to invite friends to socialize.

Mishalak, #72: Good point. This technology is definitely a two-edged blade, and one edge is aimed directly at those who abuse power.

Jeremy @87, I've also overheard several people comment nostalgically "remember when concert venues used to prohibit cameras?"

I was at a gallery show opening a few weeks ago (my husband had a photograph juried into the show) and I was discomfited by the number of people with video cameras and (of course) the ubiquitous iPhone taking video of the artwork. It was a crowded show (opening night, and the local mayor attended), and at first I tried to be polite and not duck into people's shots (I figured, at first, they were friends-of-the-artist shots), but after a while the sheer number of people doing it, and the rudeness of it all, got to me: I'm expected to stop looking at a painting or a sculpture so someone can run a video camera across it. My husband is a visual artist and he's not wildly enthusiastic about people photographing his photographs; I can't imagine painters (as a class) are any less discomfited.

abi @44, I don't think anybody will need to use a hand or a voice command to take a photo. Ever done the mind reading act where you put your hands on a confederate's temples and they send a number to you? It's done by clenching your teeth silently the right number of times — if your teeth are already shut, it's not perceptible to anybody watching. Screen out false positives by setting it to click when you clench three times in a half second or whatever. There'll be ways.
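The screening rule described there (fire only when three clenches land inside half a second) is just a sliding-window count over event timestamps. A minimal sketch, with the threshold and window taken from the comment itself; how a real device would sense the clenches is of course assumed away:

```python
def detect_triggers(timestamps, count=3, window=0.5):
    """Return the times at which `count` clench events have occurred
    within `window` seconds -- the "three clenches in half a second"
    rule. `timestamps` must be sorted, in seconds."""
    triggers = []
    recent = []
    for t in timestamps:
        recent.append(t)
        # Drop events that have fallen out of the window.
        recent = [r for r in recent if t - r <= window]
        if len(recent) >= count:
            triggers.append(t)
            recent = []  # reset so one burst fires only once
    return triggers

# Two stray clenches don't fire; a quick triple does, once.
events = [0.0, 1.0, 2.0, 2.15, 2.3]
print(detect_triggers(events))
```

The false-positive screening is exactly the window: ordinary chewing or talking produces clenches, but rarely three inside half a second.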

Still, I have to wonder if the police will behave any better if everybody seems to be wearing Glasses. Even if nine tenths of them are inactive plastic knockoffs.

Lee @90: I hadn't heard about that case in Tulsa, but I'd been thinking that one response to creepshots would be to blame the victim: "if you don't want it photographed, cover it (from all directions, with material opaque to all wavelengths of light, and all other forms of radiation, or ultrasound, or gas chromatography, or ...)". The end result being the air-tight lead-foil burqa.

I don't have any answer to where the correct balance is between the right of people to innocently take photographs in public versus the right of people not to be photographed in ways that make them uncomfortable.

I wonder if the problem is all the other issues around gender, and power imbalance, and so forth? In other words, maybe it's impossible to solve the creepshot problem in isolation, because it's a symptom of a deeper problem? Although I'm also uncomfortable saying "don't worry about that problem, we need to solve this other, much bigger problem first", especially when they're problems that other people suffer from much more than I do.

It wasn't URLs, presumably it was Words of Power. I just wish my words actually had power to do something other than provoke the gnomes! About all I have to offer them here at my desk is a fruit and nut bar, or some nasty cold over-steeped tea.

I've almost read David Brin's The Transparent Society since it came out, but I've long been uncomfortable with his apparent sexism.

Over the last few years, my interest in the book has grown as the technological capability for personal "lifebox" video has come close to reality; but that's been balanced by my increasing awareness of how the societal privilege I share with Brin is going to make it hard for me to notice and repudiate any bullshit he might promulgate.

Cross-threading a little, the question of whether "creepshots" and such practices are really harmful to people who are less privileged than those asking the question resonates, for me, with Lee's comment at #314 in Open Thread 181.

That every change in technology is highly likely to widen the gap between the more privileged and the less privileged seems like a structural issue: it's the nature of privilege. The only way to counter it is for the less privileged to struggle successfully to reduce the privilege differentials that hold them down, which takes great courage and determination -- or I suppose just being beyond caring about the consequences of fighting back.

It's a beautiful thing when people find a way to use a new technological capability to fight back against privilege. I can only hope that someone will find a way to do so with ubiquitous video recording.

Jeremy, #94: You are absolutely correct about the creepshot issue being intertwined with a lot of other problems in the way our society views gender. Which doesn't necessarily mean we can't do anything about it in isolation, but does make it significantly harder. OTOH, it also means that any pushback we can make in those other interlinked areas will have ramifications on the creepshot issue as well. Mostly, it's going to be about changing the attitude that women (especially young women) are fair prey rather than people.

I don't really agree that technological changes inevitably widen gaps between the powerful and powerless, or privileged/non-privileged[1]. It depends on the technology and the particular groups in question. Cellphone video has taken power away from policemen and transferred a bit of it to the folks who used to just get a consequenceless ass-kicking for contempt of cop, for example.

[1] Though that's a messed up way of phrasing it--it's not like this is a single value that can be put in an unambiguous order.

The omnipresent scanners recorded every act and cut through the sham of straw men and proxy voting. Even a man of Follard's power could not avoid them, but he could arrange that availability of the data be sharply restricted. There would always be friends, contacts, favors...When everything became open to a few, much could be forbidden to the generality.

I, too, worry that people will be effectively visible and trackable, for a price, through the omnipresent database fed by randomly-placed cameras. Also, if the government doesn't have a large stake/backdoor in Google yet...well, the only reason it won't go down the Lacey way is that Google will get people to *pay* for the privilege of being a scanner.

There is a reason I keep pseudonymity; the good news is that the potentially fatal reason is of infinitesimal (but not zero) probability. The near-certain one means I might not get another job in my field, which is bad enough - but likely not important enough to a prospective employer to attempt to break my pseudonym. Even so, although they may not be able to figure out who I am online, they can work out from Glassview that I show up here every Tuesday night, and there four times a year, and what goes on here then and there then, and ...

Never mind the people who see me as someone worth burgling, and go to check where else they can find me. Oh, he's here every Tuesday? Well, then he won't be home, will he?

And I'm a straight white male - which means that what are mere inconveniences and potentialities for me are far less trivial for others.

71 et seq: it might be useful, though, to look at the stages in the chain of events and work out at what point the harm actually occurs (and thus what point to act at). As we've thrashed out in this thread, it's going to be pretty tricky to act at the image capture stage.

I agree that cellphone video has sometimes documented police abuse of power, and that is a good thing. Do you happen to know the relative proportion of those incidents where that kind of documentation has been useful to, say, a person of color vs. a Caucasian?

I ask because it seems pretty clear that in the US and Europe, people of color are abused far more often than Caucasians, that it's a fairly widely known problem, and that it's widely excused or accepted as business as usual or justified. If cellphone video has mostly helped document abuse of people already enjoying a rather high degree of privilege, it's not so germane to my point.

On the other side, cellphone photography and video can be used abusively, for bullying, creepshots, child pornography, humiliation, shaming, selective depiction of members of less-privileged groups in ways that confirm the prejudices used to justify keeping them in their less-privileged status, and so on.

I'm prepared to believe that cellphone video has been more helpful than harmful, or the reverse, but I'd need to see some evidence. Not scientific/legal quality evidence, but more than I've seen in casually looking at the world.

By the way, I agree that "privileged/non-privileged" is a messed-up way of describing the situation. Which is why I try hard not to use it. In the writing you're responding to, I succeeded in not using it.

Degree of privilege is contextual. In the context of society at this time, currently able-bodied heterosexual white males from long-time wealthy families whose generations have lived in the same nation, spoken its majority language, and been members of its majority religion seem to be pretty invariably at the top.

Mycroft W @100: regarding burglary, I've noticed lately that burglary reports in my local newspaper's police blotter commonly mention that no images of the burglars were obtained from security cameras. To me, that implies that a standard item on the police checklist for burglary investigations (of middle-class suburban homes) is now "do you have security cameras?" A multi-channel digital DVR complete with a handful of cameras is cheaper than Google Glass. So I don't know whether the proliferation of cameras will increase or decrease crime. Or more likely, change the relative frequency of different kinds of crime in complicated ways.

Low level crime evolves around defenses where possible--the criminals will quickly learn that CCTV footage got them busted, and so burglars and such will take precautions. Just going masked while breaking into a house may be enough, depending on how much surveillance data is available.

There is something really valuable about being able to be in public without being on display to the whole world forever. That's doubly true in places where you might really want to worry about other stuff than your appearance and public reputation--at the hospital or unemployment office or your kids' school or the gym or whatever. That is potentially destroyed by ubiquitous cameras plus social media that encourages us to upload everything by default.

ISTM that this is much of the ugliness of creepshots and shaming pictures (People of Wal-Mart) and all sorts of related stuff. You go from having your normal activities be normal activities, to having every second you're outside your house on display to everyone forever. I imagine that's something minor celebrities understand pretty well now (look, here's a picture of some pregnant starlet on the beach with her husband, zoomed in to see how much weight she's gained and how she doesn't have any makeup on).

I don't know where you'd find hard numbers, but the targets of police brutality have typically been people whose word would not be given much weight against that of the police. Video footage changes that in a way that goes against privilege--people whose word would have been given weight in a police brutality complaint gain less than people whose word would not have been given any weight.

More broadly, technological changes have unpredictable effects on power dynamics, often unforeseen even by their designers. It would be an amazing coincidence if that always worked out in favor of those already favored. Certainly, that hasn't happened in business--go ask Kodak and Xerox and all the city newspapers how that worked out. Now, powerful people can push more easily to get the law and commercial practice to respond to their concerns, but even there, often there isn't much of a solution. For example, the record companies have gotten some of what they wanted imposed by law and lawsuit thanks to their political connections and deep pockets, but it's not like they've gotten what they really wanted most of the time. Most downloaded music now is not copy protected, for example.

Peter Erwin @ 102: Is that equivalent, in your eyes, to upskirt shots? I don't want to get too far into this line, because there are plenty of creepshots which aren't as far down the line as upskirts, and your example definitely qualifies.

But to me, the chief thing about the Google Glass's potential use for creepshots isn't so much in the picture itself; it's in tagging it with identifying information.

At this point in time, the number of online photos explicitly tagged with my name or some other identifying information for me is fairly small, and I can pretty safely assume I know where all of them are. (And frankly, I use one as a userpic on LJ.)

I may feature in other peoples' photos in the background, or even semi-foregrounded as an unknown unnamed object of curiosity. I accept this as reality. But if nobody could tell it was me, if nothing connected that information to my name or some other aspect of my actual life, I might not be too upset.

(I have some pictures that feature strangers and would not be the same if they did not. But they lack any means to identify the person. Also, I happen to know my house is *really obvious* in the background of a police/newspaper photo related to a murder.)

I would be upset - plausibly actionably so - if I learned of cleavage shots, upskirts or other wankshots wherein I could be identified.

I also understand where others might make their choices in the direction of preferring more privacy or more guarantees of not being uploaded. I don't say they're wrong. On the contrary, I made my choices so far deliberately, and I know others need more privacy. (And *need* is used advisedly, but *want* should be enough.)

But if Google starts doing its usual attempts to gather and make explicit information, then I may lose control of where and when my identifying information appears. And that scares me.

Lee @ 90 - The nature of privilege means that creepshots of men would most often not seem creepy to those men. More likely flattering. If there were a site of groin shots of men, I would not be surprised if some were self-uploads.

That also means that simply asking your average SWM to look at it from the target's viewpoint doesn't work, because the inversion of "what if someone were taking pictures of you" doesn't seem to them a problem.

ajay #104: As Lenora Rose #110 alludes to, there are actually a bunch of littler harms involved, that accumulate with various actions: peeping under the skirt itself, distributing and/or publicizing the photo, identifying the photo's subject, and so on. A lot of these are actually harms against privilege in a technical sense, but these are privileges that are heavily backed by our current society.

At the same time, even that nosy village wasn't on this scale -- most people had work to do, and their investigations were limited by free time. But when you can image-search from someone's name, without putting your drink down? And the index includes incidental shots from everyone in the vicinity, with comments from an unseen army of perverts, idlers and smartasses, most of whom didn't have to put down their bags of chips? It's like living in a nosy city of clairvoyants, with a side of telepathy.

It's a beautiful thing when people find a way to use a new technological capability to fight back against privilege. I can only hope that someone will find a way to do so with ubiquitous video recording.

It's very easy to make a case that people are already doing this.

The increasing availability of video recorders has had the effect of making it easier to document and expose incidents of police abuse and brutality, starting (at least) with the Rodney King beating back in 1991.

While there is clearly government (and corporate) interest in widespread surveillance which is under their control, it's very much in the interests of governments and corporations to restrict photography and recording by ordinary citizens. See here, for example:
http://reason.com/archives/2010/12/07/the-war-on-cameras/singlepage

An example of a UK corporation using a court order to prohibit photography and filming of their controversial waste-dumping activities:
http://www.epuk.org/News/472/npower-injunction-on-epuk-member

Pictures of your house, with a big red arrow added in to show where you keep your Hide-A-Key. Plus window-shots, showing all your lovely possessions.

Maybe people won't know where your house is. Maybe they will, but are too honest to break in.

So maybe they'll just say what a great wide-screen TV you have, and how much they'd love to take that computer off your hands. Sometimes in passing, sometimes in intricate detail about where they'd put it in their own house, how they'd use your treasured possessions, how nice Granny's ashes urn would do as a planter.

But you'd always wonder if any of them recognized the house, and might actually stop by...

In any case, someone was by your house when they took those pictures, walking in your flowerbeds and breathing on your windows. The whole thing started with someone right there, on your territory, in real life, coveting your books and eyeing up your stereo.

Wait, did you just hear something? Was that someone at the door, late at night, in the dark?

I thought about something like this just the other day when I heard that story from the UK about a man dressed as Batman bringing in a robbery suspect to the local police station. They had him on camera, but didn't know who he was (though someone eventually spilled the beans).

This made me wonder whether cosplay will be used as a way to maintain a semblance of privacy in the public sphere.

Steve Downey #111: There's a site that could qualify as a variant of creepshots, with men as the objects, but it's intended as humiliation comedy, not as titillation: Awkward Boners.

The contrast between the purposes of titillation and mockery probably says a lot about the male gaze, and how both varieties assume they're playing to it.

I'm not sure the threat level of the "awkward" shots, if you added in identifiers, would rise to the same level as creepshots of women with the same information added. And that says a lot about rape culture.

The potential to link identifiers to Hollaback photos of street harassers probably wouldn't be enough to counteract the ill effects of identifiable creepshots.

Rikibeth, 117: Because I'm feeling cynical tonight, here goes. A man who gets linked on Hollaback, and doesn't like it, can potentially go on Google Glass and find out who put him up there. The rest is left to our "oh calm DOWN it's a COMPLIMENT!" imaginations.

Paul, #103: Thanks for the link -- I always forget that I can do something like that. The salient point, in the discussion downthread from it, was that if the creep had put his face in that position it would have been actionable, so why was the camera okay? (Also that the charge should have been sexual assault.)

Steve D., #111: Good point. Think about all those guys who send penis photos to women on dating sites, for example.

But to me, the chief thing about the Google Glass's potential use for creepshots isn't so much in the picture itself; it's in tagging it with identifying information.

That makes a lot of sense - thanks. In which case it's not even, really, a problem of Google Glass itself, except inasmuch as they might make it easier to capture images surreptitiously; it's a problem of widely available image tagging. More of a Facebook problem, really. (And an image recognition and search software problem as well.)

albatross @79: Re: glyphs — while not exactly what you're describing, it reminded me of a theme in the first season of the anime series Ghost In The Shell: Stand Alone Complex. This was an SF series set in a future Tokyo where even average people had cybernetic brain augmentation. An ongoing storyline involved the search for 'The Laughing Man', a person involved in a hostage-taking situation whose face no one knew — it was covered over by a 'Laughing Man' logo in all surveillance footage and eyewitness memory recordings. Implausibly, even un-augmented individuals working with police sketch artists described the logo.

I saw this series a few years back when it was playing on The Cartoon Network. It is currently available among Hulu's free/ad-supported offerings.

The man dressed as Batman, from Bradford, there's something odd about the original reports and the later details. The guy he "brought in" was a friend who was a suspect for several crimes, and was surrendering himself to the Police after both had been to a major football match.

I would be very surprised if the Police didn't know who he was, but how did the picture get out? I suppose it could have been passed around amongst the Police as a bit of a funny, but how did it get picked up by the media and turned into the masked vigilante story?

I am currently finishing up class readings on COPPA, and this whole conversation has me wondering how COPPA and Google Glass are going to interact. How do you prevent Google Glass from collecting information on children under 13? (Keep in mind a new version is coming which addresses geolocation and photographic info.) What is going to be considered commercial when it comes to Google Glass - and apps that use it?

(and why do I get the feeling Google hasn't even thought of this?)

And how depressed and pissed off I will be that the courts are likely to decide in Google Glass's favor, while they decided against the ALA's argument that CIPA (as written, as it could be enforced at the time) violated their patrons' First Amendment rights.

One thing that's hard to wrap your brain around is the fraction of people who will go out of their way to hurt or upset or frighten a stranger. At the extreme end, you get people who will make it their main hobby to troll and grief and otherwise make the world a worse place, often spending their own money and lots of their own spare time and taking significant risks to do so.

I suspect that's the same broad phenomenon whether it's vandalism or trolling on the internet or beating people up in bars or schoolyards or streetcorners, or various kinds of online harassment.

At some level, this shouldn't be any more surprising than the set of people who are willing to do all those things to make the world a better place--the guys writing high-quality amateur blogs, doing podcasts, editing and writing Wikipedia entries, etc. But somehow, it always seems harder to get my brain around.

Any social system that doesn't somehow account for a certain amount of that kind of behavior falls apart.

No shirt. No shoes. No augmented reality glasses. No service. Earlier this month, human cyborg and University of Toronto professor Steve Mann claims he was brutalized and kicked out of a Paris McDonald's after employees objected to his headset and its ability to record photos and videos of his experiences.

"I'm not sure why the perpetrators attacked, but 'Perp. 1′ [Mann's name for one of his assailants] did mention about cameras not being allowed," he told us in an exclusive email interview. Mann was unavailable for a phone call because his iPhone was also damaged in the alleged attack.

jennygadget @ 125: Or maybe they have considered it and decided to make a brute force attack on the law. Consider how Google changed the practice of posting copyrighted material when they bought Youtube. Until there was enough force of money and market power behind Youtube, what it did was clearly illegal. Once Google put their power behind it, it became standard practice.

Were the copyright owners and their interests considered in the process? Don't be silly.

So perhaps Google finds COPPA to be an annoyance and a barrier to profit and is thus planning to make such a brute force attack on the law in practice that the law becomes unenforceable.

I fear you may be right. And while I am, in part, sympathetic to their desire to bring more coolness into the world the majority of me is going: kids! not old enough to consent to anything! wtf google!

In class tonight we are having a guest speaker from the FTC; I think I definitely need to ask her what plans they have regarding Google Glass and related apps.

Why wasn't Weev charged in relation to threats against Kathy Sierra or his attacks on LJ? Because we've made embarrassing the powerful an implicit crime through selective enforcement of laws such as the Computer Fraud and Abuse Act, and our culture continues to ignore threats against women and communities of women.

Bill, #135: Whoa. When a Bushite Republican, who was involved in building the current surveillance structures, says there's a privacy problem... you'd better believe there's a HUGE problem.
