WHO SAYS WE KNOW: ON THE NEW POLITICS OF KNOWLEDGE

LARRY SANGER, a co-founder of Wikipedia, recently started a new competitor, the Citizendium, or the Citizens' Compendium.

There are a lot of things that "everybody knows." Everybody knows that Everest is the tallest mountain on Earth, that 2+2=4, that most people have two eyes—and a lot of other things. If I were to go on, it would get tedious very fast, because, after all, these are things that everybody knows.

But there are also a lot of other things that "everybody knows," except that not everybody agrees that everybody knows them. For example, everybody knows not only that there has been significant global warming recently, but also that human beings caused this by burning fossil fuels. We know that evolution is as solidly proven as most of the rest of science, and that intelligent design isn't science at all; that Iraq never had any weapons of mass destruction (after they destroyed them); and that the U.S. government had nothing to do with the destruction of the World Trade Center. Except that, for each of these things "we all know," significant minorities insist that they're false.

Those dissenters, however, don't matter much when it comes to most journalism, reference, and education. Society forges ahead, reporting and teaching things without usually mentioning the dissenters, or only in a disparaging light. As a result, certain claims that some of us don't accept end up being background knowledge, as I'll call it. If you question such background knowledge, or even express some doubt about it, you'll look stupid, crazy, or immoral. Maybe all three.

To be able to determine society's background knowledge—to establish what "we all know"—is an awesome sort of power. This power can shape legislative agendas, steer the passions of crowds, educate whole generations, direct reading habits, and tar as radical or nutty whole groups of people who otherwise might seem perfectly normal. Exactly how this power is wielded and who wields it constitutes what we might call "the politics of knowledge." The politics of knowledge has changed tremendously over the years. In the Middle Ages, we were told what we knew by the Church; after the printing press and the Reformation, by state censors and the licensers of publishers; with the rise of liberalism in the 19th and 20th centuries, by publishers themselves, and later by broadcast media—in any case, by a small, elite group of professionals.

But we are now confronting a new politics of knowledge, with the rise of the Internet and particularly of the collaborative Web—the Blogosphere, Wikipedia, Digg, YouTube, and in short every website and type of aggregation that invites all comers to offer their knowledge and their opinions, and to rate content, products, places, and people. It is particularly the aggregation of public opinion that instituted this new politics of knowledge. In the 90s, lots of people posted essays on their personal home pages, put up fan websites, and otherwise "broadcasted themselves." But what might have been merely vain and silly a decade ago is now, thanks to aggregation of various sorts, a contribution to an online mass movement. The collected content and ratings resulting from our individual efforts give us a sort of collective authority that we did not have ten years ago.

So today, if you want to find out what "everybody knows," you aren't limited to looking at what The New York Times and Encyclopedia Britannica are taking for granted. You can turn to online sources that reflect a far broader spectrum of opinion than that of the aforementioned "small, elite group of professionals." Professionals are no longer needed for the bare purpose of the mass distribution of information and the shaping of opinion. The hegemony of the professional in determining our background knowledge is disappearing—a profound truth that not everyone has fully absorbed.

The votaries of Web 2.0, and especially the devout defenders of Wikipedia, know this truth very well indeed. In their view, Wikipedia represents the democratization of knowledge itself, on a global scale, something possible for the first time in human history. Wikipedia allows everyone equal authority in stating what is known about any given topic. Their new politics of knowledge is deeply, passionately egalitarian.

Today's Establishment is nervous about Web 2.0 and Establishment-bashers love it, and for the same reason: its egalitarianism about knowledge means that, with the chorus (or cacophony) of voices out there, there is so much dissent, about everything, that there is a lot less of what "we all know." Insofar as the unity of our culture depends on a large body of background knowledge, handing a megaphone to everyone has the effect of fracturing our culture.

I, at least, think it is wonderful that the power to declare what we all know is no longer exclusively in the hands of a professional elite. A giant, open, global conversation has just begun—one that will live on for the rest of human history—and its potential for good is tremendous. Perhaps our culture is fracturing, but we may choose to interpret that as the sign of a healthy liberal society, precisely because knowledge egalitarianism gives a voice to those minorities who think that what "we all know" is actually false. And—as one of the fathers of modern liberalism, John Stuart Mill, argued—an unfettered, vigorous exchange of opinion ought to improve our grasp of the truth.

This makes a nice story; but it's not the whole story.

As it turns out, our many Web 2.0 revolutionaries have been so thoroughly seized with the successes of strong collaboration that they are resistant to recognizing some hard truths. As wonderful as it might be that the hegemony of professionals over knowledge is lessening, there is a downside: our grasp of and respect for reliable information suffers. With the rejection of professionalism has come a widespread rejection of expertise—of the proper role in society of people who make it their life's work to know stuff. This, I maintain, is not a positive development; but it is also not a necessary one. We can imagine a Web 2.0 with experts. We can imagine an Internet that is still egalitarian, but which is more open and welcoming to specialists. The new politics of knowledge that I advocate would place experts at the head of the table, but—unlike the old order—gives the general public a place at the table as well.

II

We want our encyclopedias to be as reliable as possible. There's a good reason for this. Ideally, we'd like to be able to read an encyclopedia, believe what it says, and arrive at knowledge, not error. Now, according to one leading account of knowledge called "reliabilism," associated with philosophers like Alvin Goldman and Marshall Swain, knowledge is true belief that has been arrived at by a "reliable process" (say, getting a good look at something in good light) or through a "reliable indicator of truth" (say, proper use of a calculator).

Reliability is a comparative quality; something doesn't have to be perfectly reliable in order to be reliable. So, to say that an encyclopedia is reliable is to say that it contains an unusually high proportion of truth versus error, compared to various other publications. But it can still contain some error, and perhaps a high enough proportion of error that—as many have said recently—you should never use just one reference work if you want to be sure of something. Perhaps, if one could know that an encyclopedia were perfectly reliable, one could get knowledge just by reading, understanding, and believing it. What a wonderful world that would be. But I doubt both that there is a way of knowing that about an encyclopedia, and also that humanity will ever be blessed with such a reference work. Call such a thing a perfect encyclopedia. Well, there is no such thing as a perfect encyclopedia, and if there were, we'd never know if we were holding one.

Why not? Well, when we say that encyclopedias should state the truth, do we mean the truth itself, or what the best-informed people take to be the truth—or perhaps even what the general public takes to be the truth? I'd like to say "the truth itself," but we can't simply point to the truth in the way we can point to the North Star. Some philosophers, called pragmatists, have said there's no such thing as "the truth itself," and that we should just consider the truth to be whatever the experts opine in "the ideal limit of inquiry" (in the phrase of C. S. Peirce). While I am not a pragmatist in this philosophical sense, I do think that it is misleading to say simply that encyclopedias aim at the truth. We can't just leave it at that. Unfortunately, statements do not wear little labels reading "True!" and "False!" We need a criterion of encyclopedic truth—a method whereby we can determine whether a statement in an encyclopedia is true.

Let's suppose our criterion of encyclopedic truth is encoded in how encyclopedists decide whether to publish a statement. The method no doubt used by Encyclopedia Britannica and many other reference works goes something like this. If an expert article-writer states that p is true, and the editors find p plausible, and p gets past the fact-checkers (who consult other experts and expert-compiled resources), then p is true, at least as far as this encyclopedia is concerned.
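The editorial criterion just described is simply a conjunction of fallible filters, and can be sketched in a few lines. This is a purely illustrative toy: the function and predicate names are mine, and no actual encyclopedia works from code like this.

```python
# Illustrative sketch of the editorial criterion: a claim p counts as "true,
# as far as this encyclopedia is concerned" only if it clears every filter.
# Each filter stands in for one fallible human judgment.

def passes_editorial_criterion(p, expert_asserts, editors_find_plausible,
                               fact_checkers_confirm):
    """Return True only if the claim p clears all three (fallible) filters."""
    return (expert_asserts(p)
            and editors_find_plausible(p)
            and fact_checkers_confirm(p))

# Hypothetical claim that clears the first two filters but fails fact-checking.
claim = "Everest is the second-tallest mountain on Earth"
published = passes_editorial_criterion(
    claim,
    expert_asserts=lambda p: True,          # the article-writer states it
    editors_find_plausible=lambda p: True,  # the editors let it through
    fact_checkers_confirm=lambda p: False,  # consulted experts object
)
# published is False: the claim does not enter the encyclopedia.
```

Note that a conjunction of fallible tests is itself fallible: if any one filter errs, a falsehood can pass (or a truth be rejected), which is exactly the problem taken up below.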

The problem is that this is a highly fallible process. Sometimes, we discover that p is false. Sometimes, it's false because somebody made a typo or misinterpreted expert opinion; but sometimes it's false because, though faithful to expert opinion, expert opinion itself turned out to be false. Even if there were a beautifully reliable method of capturing expert opinion, that wouldn't be an infallible criterion of encyclopedic truth, because expert opinion is frequently wrong. Unfortunately, as a society, we usually can't do any better: if we learn that expert opinion is wrong, the corrected view becomes the new expert opinion. Besides, experts disagree about a lot of things. It is presumptuous, and a great disservice to readers, for editors to choose one expert to believe over another.

So we shouldn't say that encyclopedias aim to capture either the truth itself or any perfectly reliable indicator of truth. That is too much to hope for from an encyclopedia. Instead, consider: what do we most want, as responsible, independent-minded researchers, out of an encyclopedia? Primarily, I think most of us want mainstream expert opinion stated clearly and accurately; but we don't want to ignore minority and popular views, either, precisely because we know that experts are sometimes wrong, even systematically wrong. We want well-agreed facts to be stated as such, but beyond that, we want to be able to consider the whole dialectical enchilada, so that we can make up our own minds for ourselves.

Notice that the word "expert" is used in various ways. For instance, journalists, interviewers, and conference organizers—people trying to gather an audience, in other words—use "expert" to mean "a person we can pass off as someone who can speak with some authority on a subject." Also, we say the "local expert" on a subject is the person who knows most, among those in a group, about the subject. Neither of these is the particularly interesting sense of "expert."

We also speak of experts in the credentials sense, that is, any person who meets a (vague) standard of credentials, or evidence of having studied (or practiced) some matter, to whatever extent is thought needed for expertise—for example, as defined by professional organizations. And finally, surely we also speak of experts in a more objective sense: someone who really does have expert knowledge of a subject, whatever that amounts to. On my view, objective expertise amounts to something like this: if we rely on the expert's opinion in matters of their expertise, that really does increase the probability we have the truth.

The hope is that expertise in the credentials sense is a good but imperfect sign of expertise in the objective sense. Personally, I am not so cynical as to deny this. So, I believe that if someone meets a certain standard of credentials about some topic, then that person is probably more reliable on that topic than someone picked at random. Bear in mind, however, that "credentials" should be construed very broadly, and can mean much more than simply degrees and certifications.

Encyclopedias should represent expert opinion first and foremost, but also minority and popular views. Here, surely we are stuck with the credentials sense of "expert opinion." Just as statements do not bear labels announcing their truth values, people do not bear labels announcing their objective expertise. When decision makers have to decide whether a person really is, objectively, an expert, they have to use evidence that they can agree upon. But any such evidence can count as a "credential" in a broad sense. No doubt some wholly uncredentialed people have expertise in the objective sense—some autodidacts must fit the bill. Moreover, it's surely possible for other people to come to recognize such hidden expertise. But when groups must make decisions about who is an expert, they must have evidence; if evidence of expertise, or credentials, is lacking, the decision makers cannot be expected to acquaint themselves deeply with each person individually. And what if someone who is unquestionably an expert does interview, and declare to be an expert, a wholly uncredentialed autodidact? Then that opinion is the autodidact's first credential.

Even given this goal, why not simply grant the authority to articulate what we know to experts, as Britannica does? Can't experts do a good job of representing mainstream and minority expert views as well as popular views? Or, on the other hand, why not give this authority to the general public, as Wikipedia does? Can't the general public in time get expert opinion right?

First, why open up encyclopedia projects to the general public? While the whole body of people called "experts" (in any very restrictive sense) are probably capable of writing about and representing the interests and views of the larger public, the trouble is that they won't actually want to do so, or they lack the time to do so, in as much detail as the public itself is capable of. It is difficult and tedious enough for experts to cover their own areas. While there are people with expertise about popular culture—from celebrity journalists, to video game designers, to master craft workers—there are far more people who can do a good job summarizing information about "popular" topics than there are experts about them. Similarly, there are usually a number of experts about theories that are far out of the mainstream—one thinks of people who have expert knowledge of astrology, or some kinds of alternative medicine—but again, the quantity of non-expert people able to write reasonably well about such theories is much greater.

I'll have no truck with the view that simply because something is out of the mainstream—unscientific, irrational, speculative, or politically incorrect—it therefore does not belong in an encyclopedia. Non-mainstream views need a full airing in an encyclopedia, despite the fact that "the best expert opinion" often holds them in contempt, if for no other reason than that we will then have better grounds on which to reject them. Moreover, as we are responsible for our own beliefs, and as the freedom to believe as we wish is essential to our dignity as human beings, encyclopedias do not have any business making decisions for us that we, who wish to remain as intellectually free as possible, would prefer to make ourselves.

There is another reason to engage the public: due to its sheer size, the public can also contribute enormous breadth and extra eyeballs for all sorts of the more usually "expert" topics, too. The general public may add a far greater assortment of topics and perspectives than one would get if one assigned only experts to write about only their areas of expertise. Moreover, the sheer quantity of eyeballs gazing at obvious mistakes means that such mistakes will be fixed more quickly and reliably than if one engages only experts working only on their areas of expertise. Finally, and perhaps most importantly, the inclusion of the general public in an encyclopedia project, and ensuring that all subjects are treated at once, will tend to reduce the insularity common to many specialized fields: the result is that the encyclopedia's readers will be subjected less to dogmatic presentations of wrongheaded intellectual fads.

Therefore, the assistance of the general public is needed in encyclopedia projects. Now let's turn to the other group: why are experts needed? Or perhaps a better question is: why is it important to ensure that experts are involved?

Experts, or specialists, possess unusual amounts of knowledge about particular topics. Because of their knowledge, they can often sum up what is known on a topic much more efficiently than a non-specialist can. Also, they often know things that virtually no non-specialist knows; and, due to their personal connections and their knowledge of the literature, they often can lay their hands on resources that extend their knowledge even further.

Another thing that experts can do, that few non-experts can, is write about their specializations in a style that is credible and professional-sounding. Frequently, students and dabblers possess an adequate understanding of a topic, but they are wholly incapable of saying much about it without revealing their inexpert knowledge, in one way or another—even if they are superb writers and even if what they say is correct, strictly speaking. This is a common problem on Wikipedia. Furthermore, while a great many specialists are terrible writers, some of the very best writers on any given topic are specialists about that topic. Many experts take great pride in their ability to write about their own fields for non-experts.

Finally, experts are—albeit fallibly—the best-suited to articulate what expert opinion is. It is for the most part experts who create the resources that fact-checkers use to check facts. This makes their direct input in an encyclopedia invaluable.

For these reasons, I believe experts should share the main responsibility of articulating what "we know" in encyclopedia projects; but they should share this responsibility with the general public. Involving both groups in a content production system has the best chance of faithfully representing the full spectrum of expression. To exclude the public is to put readers at the mercy of wrongheaded intellectual fads; and to exclude experts, or to fail to give them a special role in an encyclopedia project, is to risk getting expert opinion wrong.

III

The most massive encyclopedia in history—well, the most massive thing often called an encyclopedia—is Wikipedia. But Wikipedia has no special role for experts in its content production system. So, can it be relied upon to get mainstream expert opinion right?

Wikipedia's defenders are capable of arguing at great length that expert involvement is not necessary. They are entirely committed to what I call dabblerism, by which I mean the view that no one should have any special role or authority in a content creation system simply on account of their expertise. I apologize for the neologism, but there is no word meaning precisely this view. I did not want to use "amateurism," since that word is opposed to "professionalism," and the view I want to discuss attacks not the privileges of professionals, per se, but of experts. The issue here is not whether people should make money from their work, but whether their special knowledge should give them some special authority. To the latter, dabblerism says no.

Wikipedia's defenders have a great many arguments for dabblerism: non-experts can create great things; the "wisdom of crowds" makes deference to experts unnecessary; studies appear to confirm this in the case of Wikipedia; there is no prima facie reason to give experts any special role; it is only fair to judge people by what they do, and not by their credentials; and making a role for experts will actually ruin the collaborative process.

Not one of these arguments is any good.

First, it is absolutely true that dabbleristic (if you will), expert-spurning content creation systems can create amazing things. That's what Web 2.0 is all about. While many might sneer at these productions generally, Web 2.0 has created some quite useful and interesting websites. Wikipedia and YouTube aren't popular for nothing, and for many people they are endlessly fascinating.

This does not go the slightest way toward showing, however, that expert guidance is not needed in, nor would be a positive addition to, content creation systems, and particularly encyclopedia projects. Many people have looked at Wikipedia's articles and concluded that they sure could use work by experts and real editors. It's one thing to say that Wikipedia is amazing and useful; it is quite another to say that we couldn't do better by adding a role for experts.

At this point, my opponent might pull out a very interesting and popular book called The Wisdom of Crowds by James Surowiecki, and say that it shows that Wikipedia has no need of expert reviewers. Surowiecki explains some fascinating phenomena, but nowhere does he say that Wikipedia doesn't need experts. And no surprise: by Surowiecki's own criteria, there's no reason to think that Wikipedia displays "the wisdom of crowds." Let me explain.

In the introduction of the book, Surowiecki describes an agricultural fair in England in 1906, at which all manner of people competed to guess the weight of an ox. There were many non-experts in the crowd, so the average of the guesses should have been ridiculously off; but in fact, while the ox actually weighed in at 1,198 pounds, the average of the guesses was 1,197 pounds. This, Surowiecki says, illustrates a widely-recurring phenomenon, in which ordinary folks in great numbers acting independently can display behavior that, in aggregate, is more "wise," or accurate, than the greatest expert among them.
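The aggregation effect at work here is easy to see with a little made-up arithmetic. The following is a hypothetical simulation, not the fair's actual data (only the ox's weight is from the story above); every parameter is an assumption chosen for illustration.

```python
import random
import statistics

random.seed(1906)  # fixed seed, so the illustration is reproducible

TRUE_WEIGHT = 1198  # pounds, as in the ox-weighing story above

# Hypothetical crowd of 800 guessers: each is individually quite noisy
# (guesses scattered with a standard deviation of ~150 lb around the true
# weight) but unbiased, and each guesses independently of the others.
guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(800)]

# The crowd's collective estimate is the plain average of the guesses.
crowd_error = abs(statistics.mean(guesses) - TRUE_WEIGHT)

# A typical individual, by contrast, is off by roughly a hundred pounds.
typical_individual_error = statistics.median(abs(g - TRUE_WEIGHT)
                                             for g in guesses)
```

The average lands far closer to the truth than almost any individual guess, because independent errors in opposite directions cancel out. The cancellation depends on independence: if the guessers copied or negotiated with one another, their errors would correlate and the effect would largely vanish—a point that matters shortly.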

Of course, Surowiecki is no fool. His claim isn't that whatever data "crowds" produce are reliable, regardless of circumstances. Among other things, each member of a "crowd" needs to make decisions independently of the others. But this is precisely how Wikipedia doesn't work. As he writes:

Diversity and independence are important because the best collective decisions are the product of disagreement and contest, not consensus or compromise. An intelligent group, especially when confronted with cognition problems, does not ask its members to modify their positions in order to let the group reach a decision everyone can be happy with.

But that's exactly what happens on wikis, and on Wikipedia. To be able to work together at all, consensus and compromise are the name of the game. As a result, the Wikipedian "crowd" can often agree upon some pretty ridiculous claims, which are very far from both expert opinion and from anything like an "average" of public opinion on a subject. I don't mean to say that the Wikipedia process is not robust and does not produce a lot of correct answers. It is and it does. But the process does not closely resemble the "wise crowd" phenomena that Surowiecki is explaining.

Besides, the standard examples demonstrating the strength of group guessing—say, that a classroom's average guess of the number of jelly beans in a jar is better than all individual guesses, or that experts cannot outperform financial markets—do not lend the slightest bit of support to the notion that experts and editors are not needed for publishing or content creation. There are objective facts about the number of jelly beans, or about market prices, that experts can be right or wrong about. But what facts are Wikipedians attempting to describe? Objective facts that you can point to like a stock price in a newspaper? Only rarely. The facts they want to amass are facts contained in the books and articles that, it so happens, they are so keen on citing. Who writes those books and articles? Experts, mostly. To say that expert guidance is not really needed in encyclopedia construction is like saying the opinion of the person who counted out the jelly beans before putting them in the jar is not really useful.

It's easy to be impressed with the apparent quality of Wikipedia articles. One must admit that some of the articles look very impressive, replete with multiple sections, surprising length, pictures, tables, a dry, authoritative-sounding style, and so forth. These are all good things (except for the style). But these same impressive-looking articles are all too frequently full of errors or half-truths, and—just as bad—poor writing and incoherent organization. (Jaron Lanier was eloquent on the latter points in his interesting Edge essay, "Digital Maoism.") In short, Wikipedia's dabblerism, unsurprisingly, often leads to amateurish results.

Some might point to Nature's December 2005 investigative report—often billed as a scientific study, though it was not peer-reviewed—that purported to show, of a set of 42 articles, that whereas the Britannica articles averaged around three errors or omissions, Wikipedia averaged around four. Wikipedia did remarkably well. But the article proved very little, as Britannica staff pointed out a few months later. There were many problems: the tiny sample size, the poor way the comparison articles were chosen and constructed, and the failure to quantify the degree of errors or the quality of writing. But the most significant problem, as I see it, was that the comparison articles were all chosen from scientific topics. Wikipedia can be expected to excel in scientific and technical topics, simply because there is relatively little disagreement about the facts in these disciplines. (Also because contributors to wikis tend to be technically-minded, but this probably matters less than that it's hard to get scientific facts wrong when you're simply copying them out of a book.) Other studies have appeared, but they provide nothing remotely resembling statistical confirmation that Wikipedia has anything like Britannica levels of quality. One has to wonder what the results would have been if Nature had chosen 1,000 Britannica articles randomly, and then matched Wikipedia articles up with those.

Let's set aside the question whether Wikipedia's quality does, at present, rival Britannica's. One might argue that, even if it doesn't, there is still no prima facie reason to give experts any special role in the project. To give authority to people simply on the basis of their expertise is—as Wikipedians often say—simply "credentialism," and no more rational than rejecting an application from a stellar programmer simply because he lacks a B.S. in Computer Science. People should be judged based on their demonstrated abilities, not degrees.

But I can agree with that. There is no reason whatsoever to insist on any simpleminded approach to identifying experts. Some of the finest programmers in the world lack any computer science degrees, and it would be silly to fail to recognize that fact. But there is no reason why a content creation system could not recognize as a "credential," or as proof of expertise, all manner of evidence, not just degrees.

Similarly, Wikipedians have a sort of moral argument for their dabblerism: they say, sometimes, that it is only fair to judge people based on what they do, not who they are. Meritocracy is the only fair way to justify differing levels of editorial authority in open projects; and a genuine meritocracy would assign authority not based on "credentials," but only based on what people have demonstrated they can do for the project. It is wrong and unfair to hand out authority based on credentials.

But, interestingly, Wikipedians can't help themselves to this argument. If they are fully committed to dabblerism, then they cannot justify different levels of editorial authority on any grounds. Dabblerism, as I said, is the view that no one should have any special role or authority in a content creation system simply on account of their expertise. But we can easily identify, as a kind of expertise, the proven ability to do excellent work. So dabblerism, as I defined it, is incompatible with meritocracy itself. There's another way to state this line of thought. Define "credential" as "evidence of expertise." If we reject the use of credentials, we reject all evidence of expertise; ergo, lacking any means of establishing who is an expert, we reject expertise itself. Meritocrats are necessarily expert-lovers.

I find the moral argument annoying for another reason, however. It implies that degrees, certificates, licenses, association memberships, papers, books, presentations, awards, and all other possible evidence of expertise—the whole gamut of "credentials"—just don't matter. They don't constitute good evidence of anything. But if they don't count as good evidence of expertise, why should the ability to do something on behalf of a mere Internet project count as good evidence? There is a bizarre reversal in the insular world of Wikipedia: mere quantity of work is a credential there, but not for academic tenure and advancement committees; meanwhile, degrees and peer-reviewed papers are credentials for tenure and advancement committees, but not for Wikipedia and its ilk. (Wikipedians will protest that quantity of work doesn't really matter. But, of course, it very much does.)

The last hope for rescuing dabblerism might come in the form of an argument that the use of experts will render the project less collaborative; it will "kill the goose that lays the golden eggs." Wiki-style collaboration requires that there be no differences in authority. According to this argument, we are committed to dabblerism if we want to enjoy the fruits of bottom-up collaboration.

But this is little better than an untested prejudice. The notion that experts cannot play a gentle guiding role in a genuinely bottom-up collaborative project seems to be plain old bigotry. No doubt this prejudice stems from a fear that experts will twist what should be an efficient process into the sort of slow, top-down, bureaucratic drudgery that they are used to. But this needn't be the case. Surely it isn't impossible for professors to exit the cathedral—to borrow Eric Raymond's metaphor in his essay "The Cathedral and the Bazaar"—and wander the bazaar, offering guidance and highlighting what is excellent. Will that necessarily make the bazaar less of a bazaar?

None of these arguments, dismissing special roles for experts in encyclopedia projects, is any good. The support for dabblerism—as I've defined the term—would appear irrational. Is it really?

IV

Here's a little dilemma. Wikipedia pooh-poohs the need for expert guidance; but how, then, does it propose to establish its own reliability? It can do so either by reference to something external to itself or else something internal, such as a poll of its own contributors. If it chooses something external to itself—such as the oft-cited Nature report—then it is conceding the authority of experts. In that case, who is it who says "we know"? Experts, at least partially: their view is still treated as the touchstone of Wikipedia's reliability. And if it concedes the authority of experts that far, why not bring those experts on board in an official capacity, and do a better job?

If, on the other hand, Wikipedia proposes to establish its own reliability "internally," for example through polls of its contributors, or through sheer quantity of edits, it holds a ridiculously untenable position. The position entails that the word of an enormous, undifferentiated, and largely anonymous crowd of people is to be trusted, or held reliable, for no other reason than that it is such a crowd. It is one thing to argue for "the wisdom of crowds" by reference to an objective benchmark. It is quite another thing to maintain that crowds are wise simply because they are crowds. That is a philosophical view, a variety of relativism, according to which the only truth there is, the only facts there are, are literally "socially constructed" by crowds like the contributors to Wikipedia.

It's this view that Stephen Colbert was able to mock so effectively and hilariously as "wikiality": reality is what the wiki says it is. Colbert has in effect added to what "we all know." By brilliantly skewering the notion that facts are whatever Wikipedians want them to be, Colbert has added to our culture's modest stock of background knowledge—about philosophy. Thanks to Colbert, we all know now that reality isn't created by a wiki. That's no mean feat for a humorist.

But nobody really believes that reality is constructed by Wikipedia. Instead, Wikipedians attempt to take my dilemma by the horns, supporting the credibility of Wikipedia's content through a combination of both external and internal means. They insist that footnotes suffice to support an article. If a fact has been supported by a footnote, then, apparently, it is credible. This, we might say, is an external means of fact-checking; but it is up to rank-and-file Wikipedians, not any fancy experts, to add and edit the footnotes, and so it's also an internal means of fact-checking. So, where's the dilemma?

The dilemma is easy to apply here, too. If Wikipedians actually believe that the credibility of articles is improved by citing things written by experts, will it not improve them even more if people like the experts cited are given a modest role in the project? And, on the other hand, if (somehow) it is not the fact that the cited references were created by experts that lends them their weight, one has to wonder what the references are for. They have a mysterious, talismanic value, apparently. It seems that we all know that footnotes make articles much more credible—but why? Whatever the reason, Wikipedians wouldn't want to say that it's because the people cited are credible authorities on their subjects.

The dilemma Wikipedia finds itself in, then, is that if it wants to establish its credibility by reference to expert opinion, then it has no reason not to invite experts to join in some advisory capacity. But this is completely intolerable for Wikipedians. Now, why is that?

Wikipedia is deeply egalitarian. One of its guiding principles is epistemic (knowledge) egalitarianism. According to epistemic egalitarianism, we are all fundamentally equal in our authority or rights to articulate what should pass for knowledge; the only grounds on which a claim can compete against other claims are to be found in the content of the claim itself, never in who makes it.

Notice that (on my account) this is a doctrine about rights or authority, not about ability; it would be simply absurd to say that we are equal in ability to declare what should pass for knowledge. Someone who has never had a course in physics is unlikely to be equal to a Nobel laureate in physics in his ability to declare what is known about physics. But epistemic egalitarianism would hold them equal in rights—for example, in the right to change a wiki page about a topic in physics—nonetheless.

Note also that epistemic egalitarianism doesn't declare we have the right to say what really is known—that too would be absurd—but only what passes for knowledge, or what is presented as known, for example through Wikipedia's mechanisms, or through a Blogosphere that operates much like a democratic popularity contest. In fact, Wikipedia is the perfect vehicle for epistemic egalitarianism, since it allows virtually everyone to edit virtually any page. Granted, Wikipedia's "administrators" have rights that others do not have; but it is perhaps as egalitarian as it's possible for a project of its scale to be.

It is precisely the fact that it speaks about our rights to declare what passes for knowledge that makes epistemic egalitarianism a doctrine about the politics of knowledge. So, who says "we know"? We all do.

Put that way, perhaps the appeal of the doctrine should be plain. I began this essay by saying that the power to declare society's background knowledge is awesome, and that many consequential decisions, including political decisions, are deeply influenced by that background knowledge. If the Internet now makes it possible for society's background knowledge to be shaped by a far broader, more open and inclusive group of people, that would seem to be a good thing. Indeed, perhaps it is only an accident of history, not any good reason, that placed the epistemic leadership of society almost exclusively in the hands of a fairly small class of professionals. But now, through another accident of history—the rise of the Internet—the general public may partake in the conversations that determine what "everybody knows." I think this is mostly a positive development.

No doubt the main philosophical reason for epistemic egalitarianism is, like the reason for egalitarianism generally, the now-common and overarching desire for fairness. The desire for fairness creates hostility toward any authority—and not just when authority uses its power to gain an unfair advantage, but toward authority as such. That is, the most radical egalitarians advocate that our situations be made as equal as possible, including in terms of authority. But, in our specialist-friendly modern society, expertise can confer much authority not available to non-experts. Perhaps the most important and fundamental authority experts have is the authority to declare what is known. This authority, then, should be placed in the hands of everyone equally, according to a thoroughgoing egalitarianism.

I support meritocracy: I think experts deserve a prominent voice in declaring what is known, because knowledge is their life. As fallible as they are, experts, as society has traditionally identified them, are more likely to be correct than non-experts, particularly when a large majority of independent experts about an issue are in broad agreement about it. In saying this, I am merely giving voice to an assumption that underlies many of our institutions and practices. Experts know particular topics particularly well. By paying closer attention to experts, we improve our chances of getting the truth; by ignoring them, we throw our chances to the wind. Thus, if we reduce experts to the level of the rest of us, even when they speak about their areas of knowledge, we reduce society's collective grasp of the truth.

It is no exaggeration to say that epistemic egalitarianism, as illustrated especially by Wikipedia, places Truth in the service of Equality. Ultimately, at the bottom of the debate, the deep modern commitment to specialization is in an epic struggle with an equally deep modern commitment to egalitarianism. It's Truth versus Equality, and as much as I love Equality, if it comes down to choosing, I'm on the side of Truth.

Reality Club Discussion

"The day when an energetic journalist could gather together a few star contributors and a miscellany of compilers of very uneven quality to scribble him special articles, often tainted with propaganda and advertisement, and call it an Encyclopaedia, is past."

So proclaimed H. G. Wells, to the Royal Institution of Great Britain, on November 20th, 1936. With darkness descending across Europe, Wells called for "a World Encyclopaedia... carefully assembled with the approval of outstanding authorities in each subject... alive and growing and changing continually under revision, extension and replacement from the original thinkers in the world."

"How did I come to know what I know about the world and myself?" asked Wells. "What ought I to know? What would I like to know that I don't know? If I want to know about this or that, where can I get the clearest, best and latest information? And where did these other people about me get their ideas about things, which are sometimes so different from mine?"

Wells foresaw (70 years ahead of Web 2.0) that "the whole human memory can be, and probably in a short time will be, made accessible to every individual" and urged us to build a "universal organization and clarification of knowledge and ideas... a World Brain which will... have at once the concentration of a craniate animal and the diffused vitality of an amoeba."

But the wisdom of crowds did not look promising in 1936. Wells's World Encyclopedia—more Citizendium than Wikipedia—would be governed by an editorial board, with experts at the helm. Non-specialists need not apply. "In a burning hotel or cast away on a desert island they [non-specialists] would probably do quite as well. And yet collectively they would be ill-informed."

Gresham's Law (see Wikipedia) holds that bad money drives out good. Wales's Law (that's Jimmy Wales, founder—or is it co-founder—of Wikipedia) holds that bad information will be driven out by good. But we have ample evidence of the reverse. Wikipedia or Citizendium? Wells or Wales? The difference is in the metadata—and the meta-metadata that determines who has expertise over expertise. Endless regress, or an asymptotic approach to the limits of truth?

"Ten years ago I started a company based on the assumption that people are basically good," announced E-Bay founder Pierre Omidyar in 2004. "And now I have the data to prove it." Anyone who has been defrauded on E-Bay is entitled to disagree. But, overall, the evidence from E-Bay is that most people will deal fairly. Yet a few will always cheat. So it is with Truth.

The Wikipedia community seeks equality and truth. Can they make this work? Many see evidence of failure. I see evidence that it could work. The current ailments are ones that better layers of metadata—attributions, references, and differentiation of facts from beliefs—could largely cure.

It is not just what we know that's important, it is what we don't. We not only need an encyclopedia of knowledge, we need an encyclopedia of ignorance, too. If our ignorance is not mislabeled, cataloging it in one place can be a useful tool.

"We are still too close to the beginning of the universe to be certain about its death," wrote J. D. Bernal in 1929. And we are too close to the beginning of Wikipedia (and Citizendium) to determine which—if either—is the path to truth.

He's charitable in characterizing his opponents as "egalitarian." By way of analogy, a clunky communist economy that makes everyone equally poor is not egalitarian in any admirable way, and neither is a sloppy information architecture that gives everyone equal access to creating and receiving mediocre information.

My problem with the Wikipedia was not primarily with the questions of expertise or accuracy when I wrote "Digital Maoism." Instead I was worried about the reduced expectations people seemed to have of themselves in the context of "Web 2.0." Why tweak a wiki or add data to some other conglomerate site when you now have the ability to really write and be read? Why choose to become part of an anonymous mush when you can finally be known?

Since I wrote the essay, I have paid more attention to the question of quality on the Wikipedia, and I must say, it is worse than I thought based on my earlier experience. In the areas where I have detailed knowledge, such as regarding certain obscure musical instruments, the Wikipedia is not just unreliable but unreliable in an insidious way.

Entries are often just askew enough to screw someone up who might be trying to appreciate a recording better, or trying to get the background on an exotic instrument seen on stage. The Wikipedia has found a way to efficiently enable the fallacy of specious accuracy in text. (This fallacy used to be more familiar in the domain of numbers.) Numerous mistakes occur below the threshold of detail found in conventional encyclopedias or online sources, so are hard to check, making "edit wars" excruciating. But at the same time the Wikipedia is made to appear vast and authoritative.

Another way to make the same point: The value of a good summary article is in the choice of what details to leave out. The Wikipedia is useless in this regard.

I know, I know, why don't I just go in to try to fix the problem entries I come upon? Because when you do that you have to engage in the aforementioned edit wars with anonymous people who are typically headstrong and have more time than I do to fight (but not enough time to do sufficiently thorough independent research, it seems).

Well, OK, I just looked up one instrument (chosen at random by spinning a bottle in my instrument room): the not-at-all-obscure Chinese mouth organ, the sheng. As of this evening, the entry is typical for the Wikipedia. There is plenty of circumstantially selected, impressively detailed information, including names of some Europeans who brought shengs to the West in the 1700s and so on. But the overall effect is misleading. The emphasis is random.

For instance, the models and tunings of shengs listed are relevant in some recent contexts (when there have been Chinese instrument factories innovating to serve a modern and somewhat Western-influenced movement of music education and performance) but even within that framework, the details are hardly complete. The hot news among my Sheng-playing friends in the last few years has been the amazing innovation in models like the Hong Liang Zhao 38 key gaoyin, which are changing ideas about what can be played on the instrument.

An online exposition of modern keyed shengs ought to at least mention that the sheng world is caught up right now in a period of rapid transformation. Much more importantly, the very long history of the sheng, which includes many forms, tunings, and earlier influences on the West (going back to classical times) is not even suggested.

Of course once a Wikipedia inadequacy gets publicized, like the charming but incorrect claim that I'm a filmmaker, it gets fixed right away. It's like when a politician publicly helps a needy family now and then in front of the cameras, while leaving millions of other invisible families without health insurance. At some point you want to stop feeding such a politician families to use for publicity—and I feel the same way about trying to get faulty Wikipedia entries fixed by publicizing them.

For me, though, there is a more profound problem, and in this case my concerns are not entirely addressed by Sanger's project: Why recreate something which already exists, like an encyclopedia, when there are opportunities to create profoundly new things, like virtual simulations of the world, such as the Mirror Worlds proposed by David Gelernter? The same question can be asked about the open software community's obsessions with such things as UNIX and browsers. Maybe the human spirit isn't quite expansive enough to be revolutionary in the creative sense and the economic sense at the same time.

If there is a choice to be made, I am with Sanger. Economics and politics are only means to an end, so they shouldn't be prioritized over deeper, more beautiful stuff.

If social networking and wiki media are the new religion, we need dissenters and atheists to challenge the new faith. Larry Sanger is making a macro argument about how society establishes "background knowledge" and a much more detailed critique of how Wikipedia works. I am not convinced by either argument, but I am grateful to Sanger for making the challenge.

Take Sanger's macro argument first.

Society has "background" knowledge which is well established and provides the framework for how we understand the world. In the past background knowledge was established by an elite, from priests to publishers. We are entering a new era in which background knowledge will be created through a more open, egalitarian and democratic process enabled by Web 2.0 and its successors. One risk Sanger raises in passing—echoing Cass Sunstein inInfotopia is that Web 2.0 might fragment common platform of background knowledge. But his main focus is on the current best example of "democratic" background knowledge creation—Wikipedia—which he says is deeply flawed because it treats all contributors as equals and fails to accord a proper role to experts. So the risk is that as a society we may become dependent on a way of establishing background knowledge that is more egalitarian but less accurate.

I am not convinced by this argument. Leave aside whether this claim is true for all societies—India, China, Iran—or just the developed liberal democracies. And leave aside whether the account of history is correct: many would argue that elite control over society's background knowledge has been subject to growing contest for at least the last two centuries. That contest is now taking on new forms thanks to the Web.

For Sanger's apocalyptic scenario to be correct, there would have to be a new way of establishing society's background knowledge that will displace competing methods, leaving us in the grip of a new, flawed, monopoly provider of background knowledge—Wikipedia writ large.

But that is not what's happening. Instead of displacing other sources, Web 2.0 seems to be adding to them, complementing them. As readers and researchers we now have a wider array of sources to choose from and compare. And by comparing them we may become more discerning, critical and engaged readers, learning to distinguish what can be trusted from which source. Wider information sources could make us more critically engaged citizens, more used to thinking for ourselves, a point Yochai Benkler makes powerfully in The Wealth of Networks.

Let me give you a very trivial example. Every morning I scavenge for news about Arsenal football club (soccer to American readers) which has its home round the corner from mine in north London. Ten years ago my sources were confined to the two newspapers I got delivered at home which carried about one report on Arsenal every two days, written by an "expert" football reporter. When the web came along the official Arsenal.com site started to provide lots of useful additional information about upcoming fixtures accompanied by bland match reports and player interviews.

Then five years ago a slightly crazed, sometimes drunk, often witty and very passionate Dublin-based Arsenal fan started Arseblog, which provides a daily round-up of the news in all the newspapers, print and online editions, including papers in France and Spain, where many Arsenal players come from, as well as linking to all the other—fifteen plus—decent blogs about Arsenal.

In Sanger's nightmare scenario, Arseblog would become a monopoly, displacing all other sources of news and comment about the club. That would clearly not be ideal. Sometimes the blogger in chief goes AWOL. Arseblog works only by drawing on and aggregating other sources, from the expert to the amateur.

But Arseblog is not going to become a monopoly provider of news about Arsenal. Instead what we have is a much richer information ecology, in which there is a good deal of collaboration—Arseblog feeds on experts in the newspapers but also directs readers to them—as well as competition.

As Sanger puts it: "I think most of us want mainstream expert opinion stated clearly and accurately; but we don't want to ignore minority and popular views, either, precisely because we know that experts are sometimes wrong, even systematically wrong. We want well-agreed facts to be stated as such, but beyond that, we want to be able to consider the whole dialectical enchilada, so that we can make up our own minds for ourselves." Well, that seems to be exactly what the emerging, richer media ecology provides.

So Sanger's macro argument fails because Wikipedia is not displacing but diversifying our sources of information. That leaves his much more detailed, micro critique of how Wikipedia functions.

I am no expert on Wikipedia but I did not find this convincing either. Sanger does not clearly establish that Wikipedia regularly makes serious mistakes that experts would have avoided. He says Wikipedia would be better if experts had a special role but does not specify how this might work. At one point he seems to suggest the real problem with Wikipedia is not lack of expertise but a lack of independence and diversity among contributors.

Even if Sanger is right that Wikipedia is flawed, reforming Wikipedia is not the only option. The richer information ecology created by Web 2.0 should allow a variety of alternatives to Wikipedia, such as Citizendium, to emerge which mix experts and amateurs in different ways on different topics.

Wikipedia—and its current process—does not represent a new monopoly provider of society's background knowledge. Wikipedia is part of the developing "dialectical enchilada" that Sanger says we all want.

Philosopher and Researcher, Centre National de la Recherche Scientifique, Paris; Author, Reputation: What it is and Why it Matters

I like the idea of epistemic egalitarianism that underlies the Wikipedia project. But, as an epistemologist interested in the impact of the Internet on knowledge, I won't bet on epistemic egalitarianism as a stable outcome of Web 2.0. So I share Larry Sanger's scepticism about the equation Equality = Truth. The Web is not only a powerful reservoir of all sorts of labelled and unlabelled information; it is also a powerful reputational tool that introduces ranks, rating systems, weights and biases into the landscape of knowledge. Systems as different as Google's PageRank algorithm (based on the idea that a link from page A to page B is a vote from A for B, and that the weight of this vote depends on who A is) and the reputational system that underlies eBay are powerful epistemic tools insofar as they not only provide information and connect people, but sort people and information according to scales of value. Even in this information-dense world, knowledge without evaluation would be a sad desert landscape in which people would stand stunned before an enormous and mute mass of information, like Bouvard and Pécuchet, the two heroes of Flaubert's famous novel, who decided to retire and work through every known discipline without, in the end, being able to learn anything.
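The voting idea behind PageRank can be made concrete with a toy sketch. The following is a minimal illustration of the principle described above—a link is a vote whose weight depends on the voter's own standing—not Google's actual implementation; the damping factor, iteration count, and example link graph are all illustrative assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of scores summing to 1. A link from A to B passes
    A's own score to B, so votes from authoritative pages count more.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal authority
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}  # random-surfer share
        for page, outgoing in links.items():
            if not outgoing:
                # dangling page: spread its vote evenly over all pages
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                # each outgoing link carries an equal share of the page's score
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# Page C is linked to by both A and B, so it accumulates the most authority.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
```

Here C ends up ranked highest even though every page started equal: the scores reflect the structure of who vouches for whom, which is exactly the kind of evaluative layer the paragraph above describes.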

Here is my modest epistemological prediction: the more knowledge grows on Wikipedia and similar tools on the Web, the more crucial the mastery of reputational cues about the quality of information will become. The introduction of tools for measuring "credentials" thus seems the most natural development of such a system. But of course we may disagree on what counts as a "credential" for expertise. And here I would like to invoke a notion parallel to the epistemic egalitarianism so cherished by the Wikipedia community: the notion of epistemic responsibility. What counts as a credential, and how credible the reputation of an expert is, are things we may be able to measure rationally by handling in an appropriate way the huge array of indirect criteria, reputational mechanisms and recommendation tools available today inside and outside the Web. An epistemically responsible subject is someone who is able to navigate the immense corpus of knowledge made available by the Web by using the appropriate reputational tools, just as a competent connoisseur of French wine is not the one who has drunk the largest number of different bottles, but the one who is able to make sense of labels, appellations, regions and grape varieties, and to distinguish the advice of experts from that of charlatans. So, if credentials are academic titles, a responsible epistemic subject should check the institutions that delivered those titles, or check whether the holder of 15 different degrees and awards has a citation rate higher than 5 in the ISI Web of Knowledge. We can also compare past records with credibility gained "on the spot": someone holding three PhDs from the best American universities who writes inconsistencies tells me something about those three prestigious institutions.

An efficient knowledge system like Wikipedia will inevitably grow by generating a variety of evaluative tools: that is how culture grows, how traditions are created. What is a cultural tradition? A labelling system of insiders and outsiders, of who stays on and who is lost in the magma of the past. The good news is that in the Web era this inevitable evaluation is made through new, collective tools that challenge received views and develop and improve an innovative and democratic way of selecting knowledge. But there is no escape from the creation of a "canonical" corpus of knowledge, even a tentative and rapidly evolving one.