Facebook recently beat a humiliating retreat from Beacon, its new system for peer-based advertising, in the face of users’ outrage about the system’s privacy implications. (When you bought or browsed products on certain third-party sites, Beacon would show your Facebook friends what you had done.)

Beacon was a clever use of technology and might have brought Facebook significant ad revenue, but it seemed a pretty obvious nonstarter from users’ point of view. Trying to deploy it, especially without a strong opt-out capability, was a mistake. On the theory that mistakes are often instructive, let’s take a few minutes to work through possible lessons from the Beacon incident.

To start, note that this wasn’t a privacy accident, where user data is leaked because of a bug, procedural breakdown, or treacherous employee. Facebook knew exactly what it was doing, and thought it was making a good business decision. Facebook obviously didn’t foresee its users’ response to Beacon. Though the money – not to mention the chance to demonstrate business model innovation – must have been a powerful enticement, the decision to proceed with Beacon could only have made sense if the company thought a strong user backlash was unlikely.

Organizations often have trouble predicting what will cause privacy outrage. The classic example is the U.S. government’s now-infamous Total Information Awareness program. TIA’s advocates in the government were honestly surprised when the program’s revelation caused a public furor. This wasn’t just public posturing. I still remember a private conversation I had with a TIA official who ridiculed my suggestion that the program might turn out to be controversial. This blindness contributed to the program’s counterproductive branding, such as the creepy all-seeing-eye logo. Facebook’s error was similar, though of much smaller magnitude.

Of course, privacy is not the only area where organizations misjudge their clients’ preferences. But there does seem to be something about privacy that makes these sorts of errors more common.

What makes privacy different? I’m not entirely certain, but since I owe you at least a strawman answer, let me suggest some possibilities.

(1) Overlawyerization: Organizations see privacy as a legal compliance problem. They’re happy as long as what they’re doing doesn’t break the law; so they do something that is lawful but foolish.

(2) Institutional structure: Privacy is spun off to a special office or officer so the rest of the organization doesn’t have to worry about it; and the privacy office doesn’t have the power to head off mistakes.

(3) Treating privacy as only a PR problem: Rather than asking whether its practices are really acceptable to clients, the organization does what it wants and then tries to sell its actions to clients. The strategy works, until angry clients seize control of the conversation.

(4) Undervaluing emotional factors: The organization sees a potential privacy backlash as “only” an emotional response, which must take a backseat to more important business factors. But clients might be angry for a reason; and in any case they will act on their anger.

(5) Irrational desire for control: Decisionmakers like to feel that they’re in control of client interactions. Sometimes they insist on control even when it would be rational to follow the client’s lead. Where privacy is concerned, they want to decide what clients should want, rather than listening to what clients actually do want.

Perhaps the underlying cause is the complex and subtle nature of privacy. We agree that privacy matters, but we don’t all agree on its contours. It’s hard to offer precise rules for recognizing a privacy problem, but we know one when we see it. Or at least we know it after we’ve seen it.

Techies have been chortling all week about comments made by Universal Music CEO Doug Morris to Wired’s Seth Mnookin. Morris, despite being in what is now a technology-based industry, professed extreme ignorance about the digital world. Here’s the money quote:

Morris insists there wasn’t a thing he or anyone else could have done differently. “There’s no one in the record company that’s a technologist,” Morris explains. “That’s a misconception writers make all the time, that the record industry missed this. They didn’t. They just didn’t know what to do. It’s like if you were suddenly asked to operate on your dog to remove his kidney. What would you do?”

Personally, I would hire a vet. But to Morris, even that wasn’t an option. “We didn’t know who to hire,” he says, becoming more agitated. “I wouldn’t be able to recognize a good technology person — anyone with a good bullshit story would have gotten past me.” Morris’ almost willful cluelessness is telling. “He wasn’t prepared for a business that was going to be so totally disrupted by technology,” says a longtime industry insider who has worked with Morris. “He just doesn’t have that kind of mind.”

Morris’s explanation isn’t just pathetic, it’s also wrong. The problem wasn’t that the company had no digital strategy. They had a strategy, and they had technologists on the payroll who were supposed to implement it. But their strategy was a bad one, combining impractical copy-protection schemes with locked-down subscription services that would appeal to few if any customers.

The most interesting part of the story is that Universal’s strategy is improving now – they’re selling unencumbered MP3s, for example – even though the same proud technophobe is still in charge.

Why the change?

The best explanation, I think, is a fear that Apple would use its iPod/iTunes technologies to grab control of digital music distribution. If Universal couldn’t quite understand the digital transition, it could at least recognize a threat to its distribution channel. So it responded by competing – that is, trying to give customers what they wanted.

Still, if I were a Universal shareholder I wouldn’t let Morris off the hook. What kind of manager, in an industry facing historic disruption, is uninterested in learning about the source of that disruption? A CEO can’t be an expert on everything. But can’t the guy learn just a little bit about technology?

Freedom to Tinker is hosted by Princeton's Center for Information Technology Policy, a research center that studies digital technologies in public life. Here you'll find comment and analysis from the digital frontier, written by the Center's faculty, students, and friends.