GIN, TELEVISION, AND COGNITIVE SURPLUS

Retreating to the luxury of Sonoma to discuss economic theory in mid-2008 conveys images of fiddling while Rome burns. Do the architects of Microsoft, Amazon, Google, PayPal, and Facebook have anything to teach the behavioral economists—and anything to learn? So what? What's new? As it turns out, all kinds of things are new.

"All kinds of things are new", and something very big is in the air. According to Sean Parker, the cofounder of Napster, Plaxo, and Facebook (as well as Facebook's founding president) who was present in Sonoma. "If you're not on Facebook, you don't exist".

Social software has arrived, and if you don't pay attention and take on board the developments at Google, Twitter, Facebook, Wikipedia, etc., you are opting out of being a serious player in the realm of 21st-century ideas.

One of the more interesting contributions to the 2008 Edge World Question Center was by Tim O'Reilly, the always-innovative guru, entrepreneur, publisher, and evangelist of the Web 2.0 social software revolution. In his piece (below), O'Reilly writes about his initial skepticism regarding Clay Shirky's 2002 vision of "social software". These comments are an informative preamble to a recent talk in which Shirky coins the phrase "cognitive surplus".

According to Shirky:

Starting after the Second World War, a whole host of factors, like rising GDP, rising educational attainment, and rising life-span, forced the industrialized world to grapple with something new: free time. Lots and lots of free time. The amount of unstructured time among the educated population ballooned, accounting for billions of hours a year. And what did we do with that time? Mostly, we watched TV.

Society never really knows what to do with any surplus at first. (That's what makes it a surplus.) In this case, we had to find something to do with the sudden spike in surplus hours. The sitcom was our gin, a ready-made response to the crisis of free time. TV has become a half-time job for most citizens of the industrialized world, at an average of 20 hours a week, every week, for decades.

Now, though, for the first time in its history, young people are watching less TV than their elders, and the cause of the decline is competition for their free time from media that allow for active and social participation, not just passive and individual consumption.

The value in media is no longer in sources but in flows; when we pool our cognitive surplus, it creates value that doesn't exist when we operate in isolation. The displacement of TV watching is coming among people who are using more of their time to make things and do things, sometimes alone and sometimes together, and to share those things with others.

When Shirky first made this assertion at a tech conference, he was astonished to see the video of the speech rocket around the web faster and more broadly than anything else he had ever said or done.

Shirky believes that "we can take advantage of our cognitive surplus, but only if we start regarding pure consumption as an anomaly, and broad participation as the norm. This is not a dispassionate argument, because the stakes are so high. We don't get to decide whether we want a new society. The changes we are undergoing can't be rolled back, nor contained in the present institutional frameworks. What we might get to decide is how we want this change to turn out."

"To call the current opportunity 'once in a lifetime'", he continues, "understates its enormity; the change in the social landscape is altering institutions that have been stable for generations, and making possible new kinds of human engagement that have never existed before. The results could be a marvel, or a catastrophe, depending on how seriously we try to shape what's possible."

If you want new and original thinking, look no further.

Edge is pleased to present the video and transcript of Shirky's talk below with the hope that an ensuing Reality Club discussion will further sharpen the argument.

CLAY SHIRKY is an adjunct professor in NYU's graduate Interactive Telecommunications Program (ITP), where he teaches courses on the interrelated effects of social and technological network topology—how our networks shape culture and vice versa. He is the author of Here Comes Everybody.

In November 2002, Clay Shirky organized a "social software summit," based on the premise that we were entering a "golden age of social software... greatly extending the ability of groups to self-organize."

I was skeptical of the term "social software" at the time. The explicit social software of the day, applications like Friendster and Meetup, were interesting, but didn't seem likely to be the seed of the next big Silicon Valley revolution.

I preferred to focus instead on the related ideas that I eventually formulated as "Web 2.0," namely that the internet is displacing Microsoft Windows as the dominant software development platform, and that the competitive edge on that platform comes from aggregating the collective intelligence of everyone who uses the platform. The common thread that linked Google's PageRank, eBay's marketplace, Amazon's user reviews, Wikipedia's user-generated encyclopedia, and Craigslist's self-service classified advertising seemed too broad a phenomenon to be successfully captured by the term "social software." (This is also my complaint about the term "user generated content.") By framing the phenomenon too narrowly, you can exclude the exemplars that help to understand its true nature. I was looking for a bigger metaphor, one that would tie together everything from open source software to the rise of web applications.

You wouldn't think to describe Google as social software, yet Google's search results are profoundly shaped by the collective interactions of its users: every time someone makes a link on the web, Google follows that link to find the new site. It weights the value of the link based on a kind of implicit social graph (a link from site A is more authoritative than one from site B, based in part on the size and quality of the network that in turn references either A or B). When someone makes a search, they also benefit from the data Google has mined from the choices millions of other people have made when following links provided as the result of previous searches.
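The link-weighting idea described above is the core of PageRank. A minimal sketch on a hypothetical three-page web follows; the toy power iteration below omits the many refinements a real search engine adds, and the graph is an invented example, not real data:

```python
# Sketch of the idea that a page's authority flows from the authority of the
# pages linking to it. `links` maps each page to the pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    """Simple power iteration over an adjacency dict."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline of rank...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # ...and a dangling page spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                # A page passes its rank to the pages it links to.
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical three-page web: B and C both link to A, so A ends up
# with the highest rank, just as O'Reilly describes.
web = {"A": ["B"], "B": ["A"], "C": ["A"]}
ranks = pagerank(web)
```

The point of the sketch is only that the ranking emerges from the link structure itself: nobody votes explicitly, yet the "social graph" of links produces an ordering.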

You wouldn't describe eBay or Craigslist or Wikipedia as social software either, yet each of them is the product of a passionate community, without which none of those sites would exist, and from which they draw their strength, like Antaeus touching Mother Earth. Photo-sharing site Flickr and bookmark-sharing site del.icio.us (both now owned by Yahoo!) also exploit the power of an internet community to build a collective work that is more valuable than could be provided by an individual contributor. But again, the social aspect is implicit — harnessed and applied, but never the featured act.

Now, five years after Clay's social software summit, Facebook, an application that explicitly explores the notion of the social network, has captured the imagination of those looking for the next internet frontier. I find myself ruefully remembering my skeptical comments to Clay after the summit, and wondering if he's saying "I told you so."

Mark Zuckerberg, Facebook's young founder and CEO, woke up the industry when he began speaking of "the social graph" — that's computer-science-speak for the mathematical structure that maps the relationships between people participating in Facebook — as the core of his platform. There is real power in thinking of today's leading internet applications explicitly as social software.

Mark's insight that the opportunity is not just about building a "social networking site" but rather building a platform based on the social graph itself provides a lens through which to rethink countless other applications. Products like xobni (inbox spelled backwards) and MarkLogic's MarkMail explore the social graph hidden in our email communications; Google and Yahoo! have both announced projects around this same idea. Google also acquired Jaiku, a pioneer in building a social-graph-enabled address book for the phone.

This is not to say that the idea of the social graph as the next big thing invalidates the other insights I was working with. Instead, it clarifies and expands them:

Massive collections of data and the software that manipulates those collections, not software alone, are the heart of the next generation of applications.

The social graph is only one instance of a class of data structure that will prove increasingly important as we build applications powered by data at internet scale. You can think of the mapping of people, businesses, and events to places as the "location graph", or the relationship of search queries to results and advertisements as the "question-answer graph."

The graph exists outside of any particular application; multiple applications may explore and expose parts of it, gradually building a model of relationships that exist in the real world.

As these various data graphs become the indispensable foundation of the next generation "internet operating system," we face one of two outcomes: either the data will be shared by interoperable applications, or the company that first gets to a critical mass of useful data will become the supplier to other applications, and ultimately the master of that domain.
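One way to picture this class of data structure is a single generic edge store whose relation labels distinguish a social graph from a location graph from a question-answer graph. The class and the sample edges below are hypothetical illustrations, not any real product's API:

```python
from collections import defaultdict

class Graph:
    """A generic typed-edge store: one structure, many kinds of graph."""

    def __init__(self):
        # (source node, relation label) -> set of target nodes
        self._edges = defaultdict(set)

    def add(self, source, relation, target):
        self._edges[(source, relation)].add(target)

    def neighbors(self, source, relation):
        return self._edges[(source, relation)]

g = Graph()
g.add("alice", "friend_of", "bob")           # a social-graph edge
g.add("cafe_x", "located_in", "sonoma")      # a location-graph edge
g.add("best pizza?", "answered_by", "site_y")  # a question-answer edge
```

The design point O'Reilly is making maps directly onto this sketch: the edges exist independently of any one application, and different applications simply query different relation labels over the same underlying structure.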

So have I really changed my mind? As you can see, I'm incorporating "social software" into my own ongoing explanations of the future of computer applications.

It's curious to look back at the notes from that first Social Software summit. Many core insights are there, but the details are all wrong. Many of the projects and companies mentioned have disappeared, while the ideas have moved beyond that small group of 30 or so people, and in the process have become clearer and more focused, imperceptibly shifting from what we thought then to what we think now.

Both Clay, who thought then that "social software" was a meaningful metaphor, and I, who found it less useful then than I do today, have changed our minds. A concept is a frame, an organizing principle, a tool that helps us see. It seems to me that we all change our minds every day through the accretion of new facts, new ideas, new circumstances. We constantly retell the story of the past as seen through the lens of the present, and only sometimes are the changes profound enough to require a complete repudiation of what went before.

Ideas themselves are perhaps the ultimate social software, evolving via the conversations we have with each other, the artifacts we create, and the stories we tell to explain them.

Yes, if facts change our mind, that's science. But when ideas change our minds, we see those facts afresh, and that's history, culture, science, and philosophy all in one.

TIM O'REILLY is the founder and CEO of O'Reilly Media, Inc., one of the leading computer book publishers in the world. O'Reilly Media also hosts conferences on technology topics, including the Web 2.0 Summit, the Web 2.0 Expo, the O'Reilly Open Source Convention, and the O'Reilly Emerging Technology Conference. O'Reilly's blog, the O'Reilly Radar, "watches the alpha geeks".

I was recently reminded of some reading I did in college, way back in the last century, by a British historian arguing that the critical technology, for the early phase of the industrial revolution, was gin.

The transformation from rural to urban life was so sudden, and so wrenching, that the only thing society could do to manage was to drink itself into a stupor for a generation. The stories from that era are amazing—there were gin pushcarts working their way through the streets of London.

And it wasn't until society woke up from that collective bender that we actually started to get the institutional structures that we associate with the industrial revolution today. Things like public libraries and museums, increasingly broad education for children, elected leaders—a lot of things we like—didn't happen until having all of those people together stopped seeming like a crisis and started seeming like an asset.

It wasn't until people started thinking of this as a vast civic surplus, one they could design for rather than just dissipate, that we started to get what we think of now as an industrial society.

If I had to pick the critical technology for the 20th century, the bit of social lubricant without which the wheels would've come off the whole enterprise, I'd say it was the sitcom. Starting with the Second World War, a whole series of things happened—rising GDP per capita, rising educational attainment, rising life expectancy and, critically, a rising number of people who were working five-day work weeks. For the first time, society forced onto an enormous number of its citizens the requirement to manage something they had never had to manage before—free time.

And what did we do with that free time? Well, mostly we spent it watching TV.

We did that for decades. We watched I Love Lucy. We watched Gilligan's Island. We watch Malcolm in the Middle. We watch Desperate Housewives. Desperate Housewives essentially functioned as a kind of cognitive heat sink, dissipating thinking that might otherwise have built up and caused society to overheat.

And it's only now, as we're waking up from that collective bender, that we're starting to see the cognitive surplus as an asset rather than as a crisis. We're seeing things being designed to take advantage of that surplus, to deploy it in ways more engaging than just having a TV in everybody's basement.

This hit me in a conversation I had about two months ago. I've finished a book called Here Comes Everybody, which has recently come out, and this recognition came out of a conversation I had about the book. I was being interviewed by a TV producer to see whether I should be on their show, and she asked me, "What are you seeing out there that's interesting?"

I started telling her about the Wikipedia article on Pluto. You may remember that Pluto got kicked out of the planet club a couple of years ago, so all of a sudden there was all of this activity on Wikipedia. The talk pages light up, people are editing the article like mad, and the whole community is in a ruckus—"How should we characterize this change in Pluto's status?" And a little bit at a time they move the article—fighting offstage all the while—from, "Pluto is the ninth planet," to "Pluto is an odd-shaped rock with an odd-shaped orbit at the edge of the solar system."

So I tell her all this stuff, and I think, "Okay, we're going to have a conversation about authority or social construction or whatever." That wasn't her question. She heard this story and she shook her head and said, "Where do people find the time?" That was her question. And I just kind of snapped. And I said, "No one who works in TV gets to ask that question. You know where the time comes from. It comes from the cognitive surplus you've been masking for 50 years."

So how big is that surplus? If you take Wikipedia as a kind of unit, all of Wikipedia, the whole project—every page, every edit, every line of code, in every language Wikipedia exists in—that represents something like the cumulation of 98 million hours of human thought. I worked this out with Martin Wattenberg at IBM; it's a back-of-the-envelope calculation, but it's the right order of magnitude, about 98 million hours of thought.

And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that's 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 98 million hours every weekend, just watching the ads. This is a pretty big surplus. People asking, "Where do they find the time?" when they're looking at things like Wikipedia don't understand how tiny that entire project is, as a carve-out of the cognitive surplus that's finally being dragged into what Tim O'Reilly calls an architecture of participation.
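The arithmetic behind these figures is easy to check. A quick sketch using only the numbers stated above (the variable names are mine):

```python
# Back-of-the-envelope check of Shirky's figures.
WIKIPEDIA_HOURS = 98e6         # one "Wikipedia unit": ~98 million hours of thought
US_TV_HOURS_PER_YEAR = 200e9   # U.S. television watching: ~200 billion hours a year

wikipedia_units_per_year = US_TV_HOURS_PER_YEAR / WIKIPEDIA_HOURS
print(round(wikipedia_units_per_year))  # 2041 -- Shirky's "2,000 Wikipedia projects a year"
```

The result is an order-of-magnitude claim, not a precise one: both inputs are estimates, but even generous error bars leave the surplus in the thousands of Wikipedia-sized projects per year.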

Now, the interesting thing about a surplus like that is that society doesn't know what to do with it at first—hence the gin, hence the sitcoms. Because if people knew what to do with a surplus with reference to the existing social institutions, it wouldn't be a surplus, would it? It's precisely when no one has any idea how to deploy something that people have to start experimenting with it, in order for the surplus to get integrated, and the course of that integration can transform society.

The early phase for taking advantage of this cognitive surplus, the phase I think we're still in, is all special cases. The physics of participation is much more like the physics of weather than it is like the physics of gravity. We know all the forces that combine to make these kinds of things work: there's an interesting community over here, there's an interesting sharing model over there, those people are collaborating on open source software. But despite knowing the inputs, we can't predict the outputs yet because there's so much complexity.

The way you explore complex ecosystems is you just try lots and lots and lots of things, and you hope that everybody who fails fails informatively so that you can at least find a skull on a pikestaff near where you're going. That's the phase we're in now.

Just to pick one example, one I'm in love with, but it's tiny. A couple of weeks ago, one of my students at ITP forwarded me a project started by a professor in Brazil, in Fortaleza, named Vasco Furtado. It's a Wiki Map for crime in Brazil. If there's an assault, if there's a burglary, if there's a mugging, a robbery, a rape, a murder, you can go and put a push-pin on a Google Map, and you can characterize the assault, and you start to see a map of where these crimes are occurring.

Now, this already exists as tacit information. Anybody who knows a town has some sense of, "Don't go there. That street corner is dangerous. Don't go in this neighborhood. Be careful there after dark." But it's something society knows without society really knowing it, which is to say there's no public source where you can take advantage of it. And the cops, if they have that information, they're certainly not sharing. In fact, one of the things Furtado says in starting the Wiki crime map was, "This information may or may not exist some place in society, but it's actually easier for me to try to rebuild it from scratch than to try and get it from the authorities who might have it now."

Maybe this will succeed or maybe it will fail. The normal case of social software is still failure; most of these experiments don't pan out. But the ones that do are quite incredible, and I hope that this one succeeds, obviously. But even if it doesn't, it's illustrated the point already, which is that someone working alone, with really cheap tools, has a reasonable hope of carving out enough of the cognitive surplus, enough of the desire to participate, enough of the collective goodwill of the citizens, to create a resource you couldn't have imagined existing even five years ago.

So that's the answer to the question, "Where do they find the time?" Or, rather, that's the numerical answer. But beneath that question was another thought, this one not a question but an observation. In this same conversation with the TV producer I was talking about World of Warcraft guilds, and as I was talking, I could sort of see what she was thinking: "Losers. Grown men sitting in their basement pretending to be elves."

At least they're doing something.

Did you ever see that episode of Gilligan's Island where they almost get off the island and then Gilligan messes up and then they don't? I saw that one. I saw that one a lot when I was growing up. And every half-hour that I watched that was a half an hour I wasn't posting at my blog or editing Wikipedia or contributing to a mailing list. Now I had an ironclad excuse for not doing those things, which is that none of those things existed then. I was forced into the channel of media the way it was because it was the only option. Now it's not, and that's the big surprise. However lousy it is to sit in your basement and pretend to be an elf, I can tell you from personal experience it's worse to sit in your basement and try to figure out if Ginger or Mary Ann is cuter.

And I'm willing to raise that to a general principle. It's better to do something than to do nothing. Even lolcats, even cute pictures of kittens made even cuter with the addition of cute captions, hold out an invitation to participation. When you see a lolcat, one of the things it says to the viewer is, "If you have some sans-serif fonts on your computer, you can play this game, too." And that message—I can do that, too—is a big change.

This is something that people in the media world don't understand. Media in the 20th century was run as a single race—consumption. How much can we produce? How much can you consume? Can we produce more and you'll consume more? And the answer to that question has generally been yes. But media is actually a triathlon, it's three different events. People like to consume, but they also like to produce, and they like to share.

And what's astonished people who were committed to the structure of the previous society, prior to trying to take this surplus and do something interesting, is that they're discovering that when you offer people the opportunity to produce and to share, they'll take you up on that offer. It doesn't mean that we'll never sit around mindlessly watching Scrubs on the couch. It just means we'll do it less.

And this is the other thing about the size of the cognitive surplus we're talking about. It's so large that even a small change could have huge ramifications. Let's say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year. That's about five times the size of the annual U.S. consumption. One percent of that is roughly 100 Wikipedia projects per year worth of participation.

I think that's going to be a big deal. Don't you?

Well, the TV producer did not think this was going to be a big deal; she was not digging this line of thought. And her final question to me was essentially, "Isn't this all just a fad?" You know, sort of the flagpole-sitting of the early 21st century? It's fun to go out and produce and share a little bit, but then people are going to eventually realize, "This isn't as good as doing what I was doing before," and settle down. And I made a spirited argument that no, this wasn't the case, that this was in fact a big one-time shift, more analogous to the industrial revolution than to flagpole-sitting.

I was arguing that this isn't the sort of thing society grows out of. It's the sort of thing that society grows into. But I'm not sure she believed me, in part because she didn't want to believe me, but also in part because I didn't have the right story yet. And now I do.

I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos of nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What are you doing?" And she stuck her head out from behind the screen and said, "Looking for the mouse."

Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for. Those are things that make me believe that this is a one-way change. Because four-year-olds, the people who are soaking most deeply in the current environment, who won't have to go through the trauma that I have to go through of trying to unlearn a childhood spent watching Gilligan's Island, they just assume that media includes consuming, producing and sharing.

It's also become my motto, when people ask me what we're doing—and when I say "we" I mean the larger society trying to figure out how to deploy this cognitive surplus, but I also mean we, especially, the people in this room, the people who are working hammer and tongs at figuring out the next good idea. From now on, that's what I'm going to tell them: We're looking for the mouse.

We're going to look at every place that a reader or a listener or a viewer or a user has been locked out, has been served up a passive or a fixed or a canned experience, and ask ourselves, "If we carve out a little bit of the cognitive surplus and deploy it here, could we make a good thing happen?" And I'm betting the answer is yes.

Reality Club Discussion

Edge has latterly published two provocative pieces, Jon Haidt's essay on why people vote Republican and Clay Shirky's ruminations and calculations on the cognitive surplus we have at our disposal. To a historian, these pieces dovetail and underscore a fundamental landslip that's taking place around us. ... no Edge visitor should miss either. Roughly speaking, we are discovering that words don't matter.

Or they don't matter as much as we thought. ...

Shirky's piece gives more context for our transition away from words that matter. I don't mean we don't speak and write and that words aren't highly functional tools, but the exact framing of sentences and the precise structure of the verbal argument are less and less important. Bullet points on a PowerPoint get the conversation going, and the group working together gets to the result that matters. The "writer" is less important than he has been since, oh, Herodotus. (Example? Obama's speech on race earlier this summer. Good work, well-written, seen by almost no one, read by a few, and then blown off the screens by his preacher's TV appearances. Net result, the image and the illogic prevail.)

Shirky is one of many voices confirming that this fading of the power of the specific written word is not all bad news and even has good news to it, but the old classics professor in me at least needs to slow down long enough to observe that the humanistic culture of the orator from Demosthenes to Martin Luther King Jr. is decisively gone. We don't fully understand what's replacing it, but it's happening all around us—you might even call it a third culture...

When Clay Shirky says "here comes everybody" and foresees a rapidly-exponentiating realm for assertive human creativity, I am a fellow-traveler—although with some worries and dour reservations.

Twenty years ago I wrote about a near future when online communications, agile vision and instant knowledge would unleash individual self-expression in a profusion of hobbies, avocations, side-vocations and ad-hoc interest groups, shattering rigid categories and guild boundaries of the past. The coming of an "Age of Amateurs" seemed obvious then, for a number of reasons.

And yes, I appreciate Clay Shirky's historical narrative about "getting accustomed to surplus." These revolutions go way back and often require adjustment. I loved his reference to everybody getting stoned in the Age of Gin, and comparing this (simplistically but amusingly) to the way folks became couch potatoes in the era of TV. And yet, Nicholas Carr is right to take umbrage, pointing out that those decades contained plenty of people bent on being more than mere passive content-consumers. The great work of improving society took gumption. Indeed, the personal computer arose out of hobbyists who saw, in the CRT screen, something potentially far greater than a mere glass teat.

Still, I balked when Carr brought up 1968 as a year of shining involvement. I shuddered. But more on that, anon.

As usual, I find myself pointing out the obvious—that things have been a lot more complex than either Shirky or Carr would have us perceive. Both of them are very right and both tragically wrong. For example, even in the 1960s, Marshall McLuhan sensed somehow—without actually foreseeing the Internet—that new media would foster a more active way of viewing the world. He had a vague notion of what he wanted, what was needed, but only vaporous hopes for how it might come about.

Recall how some folks fantasized such a role for Public Access cable TV? Again, desire far exceeded reification. And yet, desire sometimes refuses to be thwarted! Think about the lowly VCR—a Rube Goldberg contraption of such astounding complexity that it should never have worked, let alone been mass-produced so cheaply and reliably that, soon, nobody even bothered to repair them. Long before the arrival of digital media, people somehow got what they wanted most, an ability to control what they would watch, the purest case of mass desire overcoming the limitations of practical technology.

Key point: Yes, new tools can propel new ways of thinking. But just as often, vision precedes the tools. A theme that I'll return to.

Getting back to the notion of specialization, I've asserted that the one and only truly monotonic trend of the 20th Century was the professionalization of everything—continuing down a road that began in early farming towns of the Zagros mountains. When agriculture provided a predictable excess of food production, a top layer of specialists could be supported. At first, specialist thieves and bullies. Then—after some adjustment—a layer of specialists who gave value back with literacy (expanded memory) and the perspective from atop a ziggurat (expanded vision). These vision/knowledge revolutions have happened many times since, and Shirky is right that adjustment is never easy.

And, yes, he can sense that the most recent shift is new and different from all of those that came before. After six thousand years, that trend toward ever-greater specialization has reached both its culmination (in a society filled with highly-trained college graduates) and its ultimate limit.

Demographically, the number of professionals can at most go through two more doublings before you simply run out of plausible people on Earth to professionalize! Even if the Age of Amateurs had not already been spawned by leisure time and the internet, we'd soon be forced to invent it.

Indeed, given the range of proliferating problems that lie ahead, only such a civilization will have the agility to respond quickly to rapidly varying demands for attention and expertise and critical exchanges of accountability. So, yes, so far I agree with Shirky. But where we part company is over how natural or easy the next step on this path will be, or whether all that eager "involvement" will actually accomplish much.

In fact, I am far less satisfied than he is with the enabling systems that exist, or seem to be on the drawing boards. A world filled with assertive amateurs will be better than one of bland consumer-drones, sure. But it will still fall far short of its potential, if those amateurs are effectively lobotomized by software and interfaces and tools that limit what they can ponder, communicate or achieve.

Indeed, there are some failure modes—e.g. the creation of a myriad super-empowered angry young fanatics—that are likely to be fostered by a primitive fiesta of self-expression. We already see a grand vista, not of discourse but of miniature Nuremberg rallies, with millions coalescing to heil their group totems. And when this goes sour, there will be only two possible solutions. Either a retreat into hierarchical control, or a true continuation down the path of empowered citizenship, to a world where reciprocal accountability and mass/individualist creativity take us to another level.

Clay Shirky's essay prompts a question: "If those past 'revolutions' were so chaotic and painful, why are you so blithe about the present one going well?" I look at all the crude socialnet sites, at Second Life, at the blogosphere, and perceive something halfway between his wondrous, self-organizing realm of free citizenship and the cesspool of rancid opinion perceived by Nicholas Carr and the cybergrouches. A lot of good has come out of the new trends... the web and wikis and blogosphere have been (variously) useful and empowering, and a lot more potential is there. But overall, if this is all we can hope for—a Force 5 gale of raw opinion—then the grouches win on points.

Tell me about the sites where really bad assertions go to die—the way phlogiston and witch-burnings died—a well-deserved death that ought to follow the most noxious assertions across our culture, so that truly disproved nonsense can actually go away, making way for new ideas. If you dismiss this as impossible, then I think your hopes for the web are far too timid, since the allegory should be a vivid human mind—and complex human beings, sane ones, can actually drop a bad idea, from time to time.

Show me the synchronous virtual realms where people communicate in units larger than a cutoff sentence. Yes, there are asynchronous realms, like this one, where bright adults do express ideas more complex than a sentence. Terrific. But does anything actually happen? Show me the software that helps really smart mobs to coalesce. To those who say such things already exist, I have to reply "Guys, your standards and expectations are really low! And unworthy of your dreams."

Recall Nicholas Carr's evocation of that dire year, 1968, one that was more exhausting than any decade. A majority of Americans did sit at home, across that awful, compact epoch, suckling their boob tubes and nursing resentment toward those who had chosen to get involved. Shirky is right that the post-Web world would have overcome some of that passivity and provided more varieties of involvement. Still, Carr is also right to suggest plus ça change...

To me, the allegory of that year is far more disturbing. My father was twenty feet from RFK when he was shot. I saw the roiling maelstrom of sanctimony and delusion that drenched all sides, in an era when people thought that they were fantastically well-informed by new media and when oversimplifications made caricatures of every good intention. And I see a chilling reflection of today.

In November 2002, Clay Shirky organized a "social software summit," based on the premise that we were entering a "golden age of social software... greatly extending the ability of groups to self-organize."

I was skeptical of the term "social software" at the time. The explicit social software of the day, applications like Friendster and Meetup, were interesting, but didn't seem likely to be the seed of the next big Silicon Valley revolution.

I preferred to focus instead on the related ideas that I eventually formulated as "Web 2.0," namely that the internet is displacing Microsoft Windows as the dominant software development platform, and that the competitive edge on that platform comes from aggregating the collective intelligence of everyone who uses the platform. The common thread that linked Google's PageRank, eBay's marketplace, Amazon's user reviews, Wikipedia's user-generated encyclopedia, and Craigslist's self-service classified advertising seemed too broad a phenomenon to be successfully captured by the term "social software." (This is also my complaint about the term "user generated content.") By framing the phenomenon too narrowly, you can exclude the exemplars that help us understand its true nature. I was looking for a bigger metaphor, one that would tie together everything from open source software to the rise of web applications.

You wouldn't think to describe Google as social software, yet Google's search results are profoundly shaped by its collective interactions with its users: every time someone makes a link on the web, Google follows that link to find the new site. It weights the value of the link based on a kind of implicit social graph (a link from site A is more authoritative than one from site B, based in part on the size and quality of the network that in turn references either A or B). When someone makes a search, they also benefit from the data Google has mined from the choices millions of other people have made when following links provided as the result of previous searches.
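O'Reilly's description of link-weighted authority can be illustrated with a toy power-iteration in the spirit of PageRank. This is only a sketch: the three-site link graph, the damping factor, and all the names here are invented for illustration, not drawn from Google's actual system.

```python
# Toy sketch of link-based authority: a link from a well-referenced
# site counts for more than a link from an obscure one. The link
# graph below is entirely made up.
links = {
    "A": ["B", "C"],  # site A links to sites B and C
    "B": ["C"],
    "C": ["A"],
}
sites = list(links)
damping = 0.85  # conventional damping factor in PageRank-style models

# Start every site with equal authority.
rank = {s: 1.0 / len(sites) for s in sites}

# Power iteration: each site passes its authority along its outlinks.
for _ in range(50):
    new = {s: (1 - damping) / len(sites) for s in sites}
    for src, outs in links.items():
        share = damping * rank[src] / len(outs)
        for dst in outs:
            new[dst] += share
    rank = new

# Site C is linked by both A and B, so it ends up most authoritative.
best = max(rank, key=rank.get)
```

The point of the sketch is the implicit social graph O'Reilly describes: no site declared "C matters," yet the pattern of who references whom produces that judgment.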

You wouldn't describe eBay or Craigslist or Wikipedia as social software either, yet each of them is the product of a passionate community, without which none of those sites would exist, and from which they draw their strength, like Antaeus touching mother earth. Photo sharing site Flickr or bookmark sharing site del.icio.us (both now owned by Yahoo!) also exploit the power of an internet community to build a collective work that is more valuable than could be provided by an individual contributor. But again, the social aspect is implicit — harnessed and applied, but never the featured act.

Now, five years after Clay's social software summit, Facebook, an application that explicitly explores the notion of the social network, has captured the imagination of those looking for the next internet frontier. I find myself ruefully remembering my skeptical comments to Clay after the summit, and wondering if he's saying "I told you so."

Mark Zuckerberg, Facebook's young founder and CEO, woke up the industry when he began speaking of "the social graph" — that's computer-science-speak for the mathematical structure that maps the relationships between people participating in Facebook — as the core of his platform. There is real power in thinking of today's leading internet applications explicitly as social software.

Mark's insight that the opportunity is not just about building a "social networking site" but rather building a platform based on the social graph itself provides a lens through which to re-think countless other applications. Products like xobni (inbox spelled backwards) and MarkLogic's MarkMail explore the social graph hidden in our email communications; Google and Yahoo! have both announced projects around this same idea. Google also acquired Jaiku, a pioneer in building a social-graph-enabled address book for the phone.

This is not to say that the idea of the social graph as the next big thing invalidates the other insights I was working with. Instead, it clarifies and expands them:

Massive collections of data and the software that manipulates those collections, not software alone, are the heart of the next generation of applications.

The social graph is only one instance of a class of data structure that will prove increasingly important as we build applications powered by data at internet scale. You can think of the mapping of people, businesses, and events to places as the "location graph," or the relationship of search queries to results and advertisements as the "question-answer graph."

The graph exists outside of any particular application; multiple applications may explore and expose parts of it, gradually building a model of relationships that exist in the real world.

As these various data graphs become the indispensable foundation of the next generation "internet operating system," we face one of two outcomes: either the data will be shared by interoperable applications, or the company that first gets to a critical mass of useful data will become the supplier to other applications, and ultimately the master of that domain.
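The idea that one underlying graph "exists outside of any particular application," with different applications exposing different slices of it, can be sketched as a minimal labeled-edge structure. Everything below — the class, the node names, the edge labels — is a hypothetical illustration, not any real platform's API.

```python
from collections import defaultdict

# A minimal "graph of real-world relationships" that several
# applications could each expose a slice of. Names and edge
# labels are invented for illustration.
class Graph:
    def __init__(self):
        # node -> set of (label, neighbor) pairs
        self.edges = defaultdict(set)

    def add(self, a, label, b):
        self.edges[a].add((label, b))

    def neighbors(self, node, label=None):
        """All neighbors of `node`, optionally filtered by edge label."""
        return {b for (l, b) in self.edges[node]
                if label is None or l == label}

g = Graph()
g.add("alice", "friend", "bob")          # a social-graph edge
g.add("alice", "checked_in", "cafe_9")   # a location-graph edge
g.add("query:jaguar", "answered_by", "wiki/Jaguar")  # question-answer edge

# A social app reads only "friend" edges; a mapping app reads only
# "checked_in" edges — both are views onto the same shared graph.
friends = g.neighbors("alice", "friend")
```

The design point matches O'Reilly's: the graph is one structure, and the "social graph," "location graph," and "question-answer graph" are just label-filtered views of it.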

So have I really changed my mind? As you can see, I'm incorporating "social software" into my own ongoing explanations of the future of computer applications.

It's curious to look back at the notes from that first Social Software summit. Many core insights are there, but the details are all wrong. Many of the projects and companies mentioned have disappeared, while the ideas have moved beyond that small group of 30 or so people, and in the process have become clearer and more focused, imperceptibly shifting from what we thought then to what we think now.

Both Clay, who thought then that "social software" was a meaningful metaphor, and I, who found it less useful then than I do today, have changed our minds. A concept is a frame, an organizing principle, a tool that helps us see. It seems to me that we all change our minds every day through the accretion of new facts, new ideas, new circumstances. We constantly retell the story of the past as seen through the lens of the present, and only sometimes are the changes profound enough to require a complete repudiation of what went before.

Ideas themselves are perhaps the ultimate social software, evolving via the conversations we have with each other, the artifacts we create, and the stories we tell to explain them.

Yes, if facts change our mind, that's science. But when ideas change our minds, we see those facts afresh, and that's history, culture, science, and philosophy all in one.

The true glory of Wikipedia continues to lie in the obscure, the arcane, and the ephemeral. Nowhere else will you find such lovingly detailed descriptions of TV shows, video games, cartoons, obsolete software languages, Canadian train stations, and the workings of machines that exist only in science fiction. Whatever else it may be, Wikipedia is a monument to the obsessive-compulsive fact-mongering of the adolescent male. Never has sexual sublimation been quite so wordy.

One of my favorite examples is Wikipedia’s wonderfully panoramic coverage of the popular sixties sitcom Gilligan's Island. Not only is there an entry for the show itself, but there are separate articles for each of the seven quirky castaways—Gilligan, the Skipper, the Professor, Mary Ann, Ginger, Thurston Howell III, and Eunice "Lovey" Howell—as well as the actors who played the roles, the ill-fated SS Minnow, and even the subsequent TV movies that were based on the show, including 1981’s The Harlem Globetrotters on Gilligan's Island. Best of all is the annotated list of all 98 episodes in the series, which includes a color-coded guide to "visitors, animals, dreams, and bamboo inventions."

It goes deeper than Wikipedia, though. Gilligan's Island has been a great motivator of user-generated content across the breadth of the web. Check out this YouTube take on the eternal question "Mary Ann or Ginger?":

In fact, if I were called in to rename Web 2.0, I think I'd call it Gilligan's Web, if only to underscore the symbiosis between the pop-culture artifacts of the mass media and so much of the user-generated content found online.

So imagine my bewilderment as I listened to Clay Shirky argue, in his speech "Gin, Television, and Cognitive Surplus," that Gilligan's Island and Web 2.0 are actually opposing forces in the grand sweep of human history. Whoa. Is Professor Shirky surfing a different web than the rest of us?

To Shirky, the TV sitcom, as exemplified by Gilligan's Island, was "the critical technology for the 20th century." Why? Because it sucked up all the spare time that people suddenly had on their hands in the decades after the second world war. The sitcom "essentially functioned as a kind of cognitive heat sink, dissipating thinking that might otherwise have built up and caused society to overheat." I'm not exactly sure what Shirky means when he speaks of society overheating, but, anyway, it wasn't until the arrival of the World Wide Web and its "architecture of participation" that we suddenly gained the capacity to do something productive with our "cognitive surplus," like edit Wikipedia articles or play the character of an elf in a World of Warcraft clan. Writes Shirky:

Did you ever see that episode of Gilligan's Island where they almost get off the island and then Gilligan messes up and then they don't? I saw that one. I saw that one a lot when I was growing up. And every half-hour that I watched that was a half an hour I wasn't posting at my blog or editing Wikipedia or contributing to a mailing list. Now I had an ironclad excuse for not doing those things, which is none of those things existed then. I was forced into the channel of media the way it was because it was the only option. Now it's not, and that's the big surprise. However lousy it is to sit in your basement and pretend to be an elf, I can tell you from personal experience it's worse to sit in your basement and try to figure out if Ginger or Mary Ann is cuter.

Shirky's calculus seems to go something like this:

Spending a lot of time watching Gilligan's Island episodes: bad.
Spending a lot of time watching Gilligan's Island episodes and then spending a lot more time writing about the contents of those episodes on Wikipedia: good.

But that's not quite fair, because Shirky is making a larger argument about society and its development. He's got bigger fish to fry than Gilligan and his mates. The journalist and blogger Scott Rosenberg does a nice job of summing up Shirky's argument:

In brief, he suggests that [during the early years of the Industrial Revolution] the English were so stunned and disoriented by the displacement of their lives from the country to the city that they anesthetized themselves with alcohol until enough time had passed for society to begin to figure out what to do with these new vast human agglomerations — how to organize cities and industrial life such that they were not only more tolerable but actually employed the surpluses they created in socially valuable ways.

This is almost certainly an oversimplification, but a provocative and fun one. It sets up a latter-day parallel in the postwar U.S., where a new level of affluence created a society in which people actually had free time. What could one possibly do with that? Enter television — the gin of the 20th century! We let it sop up all our free time for several decades until new opportunities arose to make better use of our spare brain-cycles — Shirky calls this "the cognitive surplus." And what we’re finally doing with it, or at least a little bit of it, is making new stuff on the Web.

What Shirky is doing here, in essence, is repackaging the liberation mythology that has long characterized the more utopian writings about the Web. That mythology draws a sharp distinction between our lives before the coming of the Web (BW) and our lives after the Web's blessed birth (AW). In the dark BW years, we were passive couch potatoes who were, in Shirky's words, "forced into the channel of media the way it was because it was the only option." We were driftwood, going with whatever flow "the media" imposed on us. We were all trapped in Shirky's musty cellar.

The Web, the myth continues, emancipated us. We no longer were forced into the channel of passive consumption. We could "participate." We could "share." We could "produce." When we turned our necks from the TV screen to the computer screen, we were liberated:

Media in the 20th century was run as a single race—consumption. How much can we produce? How much can you consume? Can we produce more and you'll consume more? And the answer to that question has generally been yes. But media is actually a triathlon, it's three different events. People like to consume, but they also like to produce, and they like to share. And what's astonished people who were committed to the structure of the previous society, prior to trying to take this [cognitive] surplus and do something interesting, is that they're discovering that when you offer people the opportunity to produce and to share, they'll take you up on that offer.

I think we'd all agree that the Web is changing the structure of media, and that's going to have many important ramifications. Some will be good, and some will be bad, and the way they will all shake out remains unknown. But what about Shirky's idea that in the BW years we were unable to do anything "interesting" with our "cognitive surplus"—that the "only option" was watching TV? That, frankly, is bull. It may well be that Clay Shirky spent all his time pre-1990 watching sitcoms in his cellar (though I very much doubt it) but I was also alive in those benighted years, and I seem to remember a whole lot more going on.

Did my friends and I watch Gilligan's Island? You bet we did—and thoroughly enjoyed it (though with a bit more ironic distance than Shirky allows). Watching sitcoms and the other junk served up by the boob tube was certainly part of our lives. But it was not the center of our lives. Most of the people I knew were doing a whole lot of "participating," "producing," and "sharing," and, to boot, they were doing it not only in the symbolic sphere of the media but in the actual physical world as well. They were making 8-millimeter films, playing drums and guitars and saxophones in bands, composing songs, writing poems and stories, painting pictures, making woodblock prints, taking and developing photographs, drawing comics, souping up cars, constructing elaborate model railroads, reading great books and watching great movies and discussing them passionately well into the night, volunteering in political campaigns, protesting for various causes, and on and on and on.

People were, in other words, every bit as capable of living rich, multidimensional, interesting, creative, and "participative" lives before the web came along as they are today—and a lot of people did live such lives. And they often lived them even while spending considerable portions of their time watching TV or drinking gin or sitting in a lotus position intentionally frittering away their "cognitive surplus." (There’s a creepy kind of neo-Puritanism at work in Shirky’s calculations of how productively we’re "deploying" our "cognitive surplus," but that’s a different story.)

It's worth remembering that Gilligan's Island originally ran on television from late 1964 to late 1967, a period noteworthy not for its social passivity but for its social activism. These were years not only of great cultural and artistic exploration and inventiveness (it was the first great age of the garage band, for one thing) but also of widespread protest, when people organized into very large—and very real—groups within the civil rights movement, the antiwar movement, the feminist movement, the folk movement, the psychedelic movement, and all sorts of other movements. People weren't in their basements; they were in the streets.

If everyone was so enervated by Gilligan's Island, how exactly do you explain 1968? The answer is: you don't, and you can't.

Indeed, once you begin contrasting 1968 with 2008, you might even find yourself thinking that, on balance, the Web is not an engine for social activism but an engine for social passivity. You might even suggest that the Web funnels our urges for "participation" and "sharing" into politically and commercially acceptable channels—that it turns us into play-actors, make-believe elves in make-believe clans.

To use a computer-science rather than an economic analogy, what Shirky is talking about is what I call the "awesome power of spare cycles"—the human potential that isn't tapped by our jobs, which for most of us is a lot of it. People wonder how Wikipedia magically arose from nothing, and how 50 million bloggers suddenly appeared, almost all of them writing for free. Who knew there was so much untapped energy all around us, just waiting for a catalyst to become productive? But of course there was. People are bored, and they'd rather not be. The guy playing Solitaire on his laptop at the airport? Spare cycles. Multiply it times a million.

I am at this moment, somewhat randomly, in a regional airport. It is a tiny airport, like thousands of others across the country. But, like all the others, it has to meet the standard TSA security requirements. There is a flight (which I am on) at 2:30 pm. It is the only flight out of this airport for the past hour. There will not be another flight out of this airport for another hour. Yet we need our full TSA apparatus. That includes the local police, who are represented by a sheriff.

I'm watching him right now. He's in his room, labeled "Sheriff". Young guy. He's watching a movie on a portable DVD player. That's fine—he won't be needed for another half hour. But of course "needed" isn't quite the right word. "Required" is closer to it. He will be required by policy to stand by, gun in holster, while I take my laptop out of my nerd backpack. He may, fingers crossed, go his entire career without a terrorist going through that security checkpoint. He may indeed never unholster that gun in the line of duty.

That sheriff is watching a movie because he has spare cycles. Spare cycles are the most powerful fuel on the planet. It's what Web 2.0 is made up of. User generated content? Spare cycles. Open source? Spare cycles. MySpace, YouTube, Facebook, Second Life? Spare cycles. They're the Soylent Green of the web.

In Wired we've got a great story about a woman who cyberstalked the lead singer of Linkin Park. She correctly guessed the password to his cellphone account. The rest was easy. She was a technician at a secure military facility, the Sandia National Labs. When eventually confronted, she explained that her job only took her half an hour a day. The rest was spare cycles. She used them to stalk the lead singer of Linkin Park.

Web 2.0 is such a phenomenon because we're underused elsewhere. Bored at work, bored at home. We've got spare cycles and they're finally finding an outlet. Tap that and you've tapped an energy source that rivals anything in human history. Solitaire Players of the World Unite!