As expected, the organizers did an excellent job providing attendees with provocative panels, presentations, and keynote talks, notably a standout presentation from my former UC Berkeley colleague Marc Davis, who has just joined Microsoft.

There were smart ideas from several entrepreneurs working on privacy-related startups, and deep thinking from academics, lawyers and policy analysts.

There were deep dives into new products from Intel, European history and the metaphysics of identity.

But what interested me most was just how emotional everyone gets at the mere mention of private information, or what is known in the legal trade as "personally identifiable" information. People get worked up just thinking about how it is being generated, collected, distributed, and monetized as part of the evolution of digital life. And pointing out that someone is having an emotional reaction often generates one that is even more primal.

Privacy, like the related problems of copyright, security, and net neutrality, is often seen as a binary issue. Either you believe governments and corporations are evil entities determined to strip citizens and consumers of all human dignity or you think, as leading tech CEOs have the unfortunate habit of repeating, that privacy is long gone, get over it.

But many of the individual problems that come up are much more subtle than that. Think of Google Street View, which has generated investigations and litigation around the world, particularly in Germany, where, as Jeff Jarvis pointed out, Germans nonetheless think nothing of naked co-ed saunas.

Or how about targeted or personalized or, depending on your conclusion about it, “behavioral” advertising? Without it, whether on broadcast TV or the web, we don’t get great free content. And besides, the more targeted advertising is, the less we have to look at ads for stuff we aren’t the least bit interested in and the more likely that an ad isn’t just an annoyance but is actually helpful.

On the other hand, ads that suggest products and services I am specifically interested in are “creepy.” (I find them creepy, but I expect I’ll get used to it, especially when they work.)

And what about governments? Governments shouldn’t be spying on their citizens, but at the same time we’re furious when bad guys aren’t immediately caught using every ounce of surveillance technology in the arsenal.

Search engines, mobile phone carriers and others are berated for retaining data (most of it not even linked to individuals, or at least not directly) and at the same time are required to retain it for law enforcement purposes. The only difference is the proposed use of the information (spying vs. public safety), which can only be known after data collection.

As comments from Jeff Jarvis and Andrew Keen in particular got the audience riled up, I found myself having an increasingly familiar but strange response. The more contentious and emotional the discussion became, the more I found myself agreeing with everything everyone was saying, including those who appeared to be violently disagreeing.

We should divulge absolutely everything about ourselves! No one should have any information about us without our permission, which governments should oversee because we’re too stupid to know when not to give it! We need regulators to protect us from corporations; we need civil rights to protect us from regulators.

Logical Systems and Non-Rational Responses

I can think of at least two important explanations for this paradox. The first is a mismatch of thought systems. Conferences, panel discussions, essays and regulation are all premised on rational thinking, logic, and reason. But the more the subject of these conversations turns to information that describes our behavior, our thoughts, and our preferences, the more the natural response is not rational but emotional.

Try having a logical conversation with an infant (or a dog, or a significant other who is upset) about its immediate needs. Try convincing someone that their religion is wrong. Try reasoning your way out of or into a sexual preference. It just doesn’t work.

Which raises at least one interesting problem. Privacy is not only an emotional subject, it’s also increasingly a profitable one. According to a recent Wall Street Journal article, venture capitalists are now pouring millions into privacy-related startups. Intel just offered nearly $8 billion for security service provider McAfee. Every time Facebook blinks, the blogosphere lights up.

So the mismatch of thought systems will lead to more, not fewer, collisions over time.

Given that, how does a company develop a strategic plan in the face of unpredictable and emotional responses from potential users, the media, and regulators? Strategic planning, to the extent anyone really does it seriously, is based on cold, hard facts—as far from emotion as its practitioners can possibly get. The patron saint of management science, after all, is Frederick Winslow Taylor, who, among other things, invented time-and-motion studies to achieve maximum efficiency of human “machines.”

But the rational vehicle of planning simply crumples against the brick wall of emotion.

As I wrote in an early chapter of “The Laws of Disruption,” for example, companies experimenting with early prototypes of radio frequency ID tags (still not ready for mass deployment ten years later) could never have predicted the violent protests that accompanied tests of the tags in warehouses and factories.

Much of that protest was led by a woman who believes that RFID tags are literally the technology prophesied by the Book of Revelation as the sign of the Antichrist. Assuming one is not an agent of the devil, or at least isn’t aware that one is, how does one plan for that response?

The more that intimacy becomes a feature of products and services, including products and services aimed at managing intimate information, the more the logical religion of management science will need to incorporate non-rational approaches to management, scenario planning and economics.

It won’t be easy—the science of management science isn’t very scientific in the first place and, as I just said, changing someone’s religion doesn’t happen through rational arguments—the kind I’m making right now.

The Bankruptcy of the Property Metaphor for Information

The second problem that kept hitting me over the head during PII 2010 was one of linguistics: the language everyone uses to talk about (or around) privacy. We speak of ownership, stealing, tracking, hijacking, and controlling. This is the language of personal property, and it’s an even worse fit for the privacy conversation than is the mental discipline of logic.

In discussions about information of any kind, including creative works as well as privacy and security, the prevailing metaphor is to talk about information as a kind of possession. What kind? That’s part of the problem. Given the youth of digital life and the early evolution of our information economy, most of us really only understand one kind of property, and that is where our minds inevitably and often unintentionally go.

We think of property as the moveable, tangible variety (cattle, collectibles, commodities) that in legal terminology goes by the name “chattels.”

Only now has that metaphor become a serious obstacle. While there has been a market for information for centuries, the revolutionary feature of digital life is that it has, for the first time in human history, separated information from the physical containers in which it has traditionally been encapsulated, packaged, transported, retailed, and consumed.

A book is not the ideas in the book, but a book can be bought, sold, controlled, and destroyed. A computer tape containing credit card transactions is not the decision-making process of the buyers and sellers of those transactions, but a tape can be lost, stolen, or sold.

When information could only be used by first reducing it to physical artifacts, the property metaphor more or less worked. Control the means of production, and you controlled the flow of information. When Gutenberg perfected movable type, the first thing he printed was the Bible, still in Latin; vernacular translations soon followed. Hand-made manuscripts and a dead language had given the medieval Catholic Church a monopoly on the mystical. Turn the means of production over to the people and you get the Protestant Reformation and the beginning of censorship, a legal control on information.

The digital revolution makes the liberation of information all the more potent. Yet in all conversations about information value, most of us move seamlessly and dangerously between the medium—the artifact—and the message—the information.

But now that information can be used in a variety of productive and destructive ways without ever taking a tangible form, the property metaphor has become bankrupt. Information is not property the way a barrel of oil is property. A barrel of oil can be possessed by only one person at a time. It can be refined, just once, into lubricants or gasoline, or left in crude form. Once the oil is burned, the property is gone. In the meantime, the barrel of oil can be stolen, tracked, and moved from one jurisdiction to another.

Digital information isn’t like that. Everyone can use it at the same time. It exists everywhere and nowhere. Once it’s used, it’s still there, and often more valuable for having been used. It can be remixed, modified, and adapted in ways that create new uses, even as the original information remains intact and usable in the original form.

Tangible property obeys the law of supply and demand, as does information forced into tangible containers. But information set free from the mortal coil obeys only the law of networks, where value is a function of use and not of scarcity.

But once the privacy conversation (as well as the copyright conversation) enters the realm of the property metaphor, the cognitive dissonance of thinking everyone is right (or wrong) begins. Are users of copyrighted content “pirates”? Or are copyright holders “hoarders”? Yes.

(“Intellectual property,” as I’ve come to accept, is an oxymoron. That’s hard for an IP lawyer to admit!)

It’s true that there are other kinds of property that might better fit our emerging information markets. Real estate (land) is tangible but immovable. Use rights (e.g., a ticket to a movie theater, the right to drill under someone’s land or to block their view) are also long established.

But both the legal framework and the economic theory describing these kinds of property are underdeveloped at the very least. Convincing everyone to shift their property paradigm would be hard when the new location is so barren.

Here are a few examples of the problem from the conference. One speaker asked the audience what term would make consumers most comfortable with a product that helps them protect their privacy. Do we prefer “bank,” “vault,” “dossier,” or “account”?

“Shouldn’t consumers own their own information?” an attendee asked, a double misuse of the word “own.” Do you mean the media on which information may be stored or transferred, or do you mean the inherent value of the bits (which is nothing)? In what sense is information that describes characteristics or behaviors of an individual that person’s “own” information?

And what does it mean to “own” that information? Does ownership bring with it the related concepts of being bought, sold, transferred, shared, waived? What about information that is created by combining information—whether we are talking about Wikipedia or targeted advertising? Does everyone or no one own it?

And by ownership, do we mean the rights to derive all value from it, even when what makes information valuable is the combining, processing, analyzing and repurposing done by others? Doesn’t that part of the value generation count for something in divvying up the monetization of the resulting information products and services? Or perhaps everything?

Human beings need metaphors to discuss intangible concepts like immortality, depression, and information. But increasingly I believe that the property metaphor applied to information is doing more harm than good. It makes every conversation about privacy a conversation of generalizations, and generalizations encourage the visceral responses that make it impossible to make any progress.

Perhaps that’s why survey after survey reveals both that consumers care very much about the erosion of a zone of privacy in their increasingly digital lives and that, at the same time, they give up intimate information the moment a website asks for it. (I agree with everything and its opposite.)

There’s also a more insidious use of language and metaphor to steer the conversation toward one view of property or another—privacy as personal property or privacy as community property. Consider, for example, how the question is asked, e.g.:

“My cell phone tracks where I go”

or

“My cell phone can tell me where I am.”

A recent series of articles in The Wall Street Journal dealing with privacy (I won’t bother linking to it, because the Journal believes the information in those articles is private and property and won’t share it unless you pay for a subscription, but here is a “free” transcript of a conversation with the author of the articles on NPR’s “Fresh Air”) made many factual errors in describing current practices in on-line advertising. But those aside, what made the articles sensational was not so much what they reported but the adjectives and pronouns that went with the facts.

Companies know a lot “about you,” for example, from your web surfing habits (in fact they know nothing about “you,” but rather about your computer, whoever may be using it), cookies are a kind of “surveillance technology” that “track” where “you” go and what “you do,” and often “spawn” themselves without “your” knowledge.
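The point that tracking cookies identify browsers rather than people can be made concrete with a toy sketch (hypothetical code for illustration, not any real ad network’s implementation): the server hands each new browser an opaque random identifier, and everything it “knows” is keyed to that identifier, not to a named person.

```python
import secrets

# Hypothetical sketch: a "tracking cookie" is just an opaque random ID
# assigned to a browser. The profile accumulates against the ID, so two
# people sharing a computer via separate browsers look like two strangers.

profiles = {}  # cookie_id -> list of pages that browser has visited

def handle_request(page, cookie_id=None):
    """Simulate one page view; return the cookie the browser should keep."""
    if cookie_id is None or cookie_id not in profiles:
        cookie_id = secrets.token_hex(8)  # fresh opaque ID for a new browser
        profiles[cookie_id] = []
    profiles[cookie_id].append(page)
    return cookie_id

# One browser visits two pages; a second browser visits a third.
browser_a = handle_request("/golf-clubs")
browser_a = handle_request("/golf-shoes", browser_a)
browser_b = handle_request("/knitting")

# Nothing above names a person; the server knows only what each cookie did.
```

Whether the resulting profile is “about you” depends entirely on whether the cookie can later be joined to identifying data, which is exactly the distinction the pronouns in those articles blur.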

Assumptions about the meaning of loaded terms such as ownership, identity and what it means for information to be private poison the conversation. But anyone raising that point is immediately accused of shilling for corporations or law enforcement agencies who don’t want the conversation to happen at all.

A User- and Use-Based Model: Productive and Destructive Uses

So if the property metaphor is failing to advance an important conversation—both of a business and policy nature—what metaphor works better?

As I wrote in “Laws of Disruption,” I think a better way to talk about information as an economic good is to focus on information users and information uses. “Private” information, for starters, is private only depending on the potential user. Whether that user is my spouse, my employer, an advertiser, or a law-enforcement agent, in other words, can make all the difference in whether I consider some information private. Context is nearly everything.

Example: Is location tracking software on cell phones or embedded chips an invasion of privacy? It is if a government agency is intercepting the signals and using them to (fill in the blank). But ask a parent who is trying to find a missing child, or an adult child trying to find a missing parent with dementia. It’s not the technology; it’s the user and the use.

Use, likewise, often drains much of the emotional baggage that goes with conversations about privacy in the abstract. A website asks for my credit card number—is that an invasion of my privacy? Well, not if I’m trying to pay for my new television set from Amazon with a credit card. On the other hand, if I’m signing up for a free email newsletter, there’s certainly something suspicious about the question.

To simplify a long discussion, I prefer to talk about information of all varieties through a lens of “productive” (uses that add value to information, e.g., collaboration) and “destructive” (uses that reduce the value of information, e.g., “identity” “theft”). Though it may not be a perfect metaphor (many uses can be both productive and destructive, and the metrics for weighing both are undeveloped at best), I find it works much better in conversations about the business and policy of information.

That is, assuming one isn’t simply in the mood to vent and rant, which can also be fun, if not productive.

Interesting thoughts, Larry, though here's a concern: I don't recognize the metaphor you're proposing to use in place of property. “[T]o focus on information users and information uses” is not a framework that I can figure out how to apply.

I personally favor the property metaphor, with the caveat that you have to understand the concept of abandonment. When people say, “We should own our information,” I'll say, “You do, right up until you give it away.” Given the properties of information, it is often abandoned, meaning it is available to anyone for any use or recombination.

Here's a piece on how Lockean property theory interacts with “intellectual property,” by which I mean cognitive and volitional product — not the narrow class of stuff protected by certain federal statutes.

I do think that information has the hallmarks of property. Its volatility (in the chemical sense) causes many people to think that it requires all new ways of thinking about it. I'd rather take people through new glosses on the property metaphor than work with all new frames of reference. That's why I'm interested in the new metaphor you say you're proposing, which I'm sorry to say I don't recognize as a metaphor.

Jim Harper

Annnd, I put the wrong link in the second slot above. Meant to put this:

(I'm an admin here, of course, but I'm boxed out of Disqus, which I hate, so I couldn't just go fix it.)

God I hate Disqus.

Steve R. (http://srynas.blogspot.com/)

Fundamentally, privacy is lost, so get over it. Nevertheless, I believe the privacy debate falls short in one important regard: the responsibility of those who collect the data not to abuse it. Forbes, a while back, had an article along the lines of “privacy for sale.” Essentially, the article was about corporations selling products/software to protect one’s privacy. My immediate reaction was that this is absurd: why should anyone have to pay “protection money”? While privacy is still ultimately the responsibility of the individual, those who collect information need to be responsible too. That is what is missing from the discussion.

If you go naked into the sauna and a satellite takes a photograph, too bad. Targeted advertising, cell phone tracking, RFID tags? I don’t have an issue with those concepts. But when companies acquire personal information (such as a phone number, your GPS locations, where you used your credit card), they should not convert that information into marketing opportunities by sharing it with every marketing firm in existence. To immediately clarify: companies should not contact you unless you specifically opt in; it’s OK for a company to use the data it collects, but it should not share it. Like the saying, “What happens in Vegas stays in Vegas.”

Larry

Thanks, Jim, for the thoughtful feedback. As I say, there are forms of property that could be metaphors for information (especially licenses), but both their economics and law are so poorly developed that it doesn't seem productive to promote them. Americans in particular are going to have a hard time in any case ingesting the idea of abandonment! Just watch a few episodes of “Hoarders”!

Fair enough that I don't really lay out my alternative in much detail or clarity in the post. Once I get close to 3,000 words, I figure no one (including me) is still reading. But I do make the case in detail, for better or worse, in the privacy chapter of “Laws of Disruption.” Though I'm sure that was only the outline of something much longer requiring much more work I haven't done.

And we do agree that all forms of information, whatever we call them, share the same properties and therefore ought to be talked about in similar terms. See Adam's post today on that point and the odd dissonance of those who want strong regulation of “private” information but weak regulation of “intellectual” information. It's a deep misunderstanding, as you know better than I do.