The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, published in 2004, is a book written by James Surowiecki about the aggregation of information in groups, resulting in decisions that, he argues, are often better than could have been made by any single member of the group. The book presents numerous case studies and anecdotes to illustrate its argument, and touches on several fields, primarily economics and psychology.

The opening anecdote relates Francis Galton's surprise that the crowd at a county fair accurately guessed the weight of an ox when their individual guesses were averaged (the average was closer to the ox's true butchered weight than the estimates of most crowd members, and also closer than any of the separate estimates made by cattle experts).[1]
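The averaging mechanism can be made concrete with a small simulation in the spirit of Galton's experiment. The guesses below are synthetic (random noise around the true weight, with an invented spread), not Galton's data; only the true dressed weight of 1,198 lb and the 787 entries come from his paper.

```python
import random
import statistics

random.seed(42)
true_weight = 1198  # pounds: the ox's dressed weight in Galton's account

# 787 entries, as in Galton's paper; each guess is the truth plus
# independent noise (the spread of 75 lb is an invented assumption)
guesses = [true_weight + random.gauss(0, 75) for _ in range(787)]

mean_estimate = statistics.mean(guesses)
median_estimate = statistics.median(guesses)  # Galton himself reported the median
typical_individual_error = statistics.median(abs(g - true_weight) for g in guesses)

print(f"mean estimate: {mean_estimate:.0f} lb")
print(f"median estimate: {median_estimate:.0f} lb")
print(f"typical individual error: {typical_individual_error:.1f} lb")
```

Because the individual errors are independent, they largely cancel in the aggregate: the crowd's mean lands far closer to the truth than the typical individual guess does.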

The book deals with diverse collections of independently deciding individuals, rather than crowd psychology as traditionally understood. Its central thesis, that a diverse collection of independently deciding individuals is likely to make certain types of decisions and predictions better than individuals or even experts, draws many parallels with statistical sampling; however, there is little overt discussion of statistics in the book.

Surowiecki breaks down the advantages he sees in disorganized decisions into three main types, which he classifies as

Cognition

Thinking and information processing, such as market judgment, which he argues can be much faster, more reliable, and less subject to political forces than the deliberations of experts or expert committees.

Coordination

Coordinating behavior includes optimizing the use of a popular bar and avoiding collisions in moving traffic flows. The book is replete with examples from experimental economics, but this section relies more on naturally occurring experiments, such as pedestrians optimizing pavement flow or the extent of crowding in popular restaurants. He examines how common understanding within a culture allows remarkably accurate judgments about the specific reactions of other members of that culture.

Cooperation

How groups of people can form networks of trust without a central system controlling their behavior or directly enforcing their compliance. This section is especially pro-free-market.

Surowiecki studies situations (such as rational bubbles) in which the crowd produces very bad judgment, and argues that in these types of situations their cognition or cooperation failed because (in one way or another) the members of the crowd were too conscious of the opinions of others and began to emulate each other and conform rather than think differently. Although he gives experimental details of crowds collectively swayed by a persuasive speaker, he says that the main reason that groups of people intellectually conform is that the system for making decisions has a systematic flaw.

Surowiecki asserts that when the decision-making environment is not set up to accept the crowd, the benefits of individual judgments and private information are lost, and the crowd can only do as well as its smartest member rather than perform better (as he shows is otherwise possible).[4] Detailed case histories of such failures include:


Homogeneity

Surowiecki stresses the need for diversity within a crowd to ensure enough variance in approach, thought process, and private information.

The United States Intelligence Community, the 9/11 Commission Report claims, failed to prevent the 11 September 2001 attacks partly because information held by one subdivision was not accessible by another. Surowiecki's argument is that crowds (of intelligence analysts in this case) work best when they choose for themselves what to work on and what information they need. (He cites the isolation of the SARS virus as an example in which the free flow of data enabled laboratories around the world to coordinate research without a central point of control.)

Where choices are visible and made in sequence, an "information cascade"[5] can form in which only the first few decision makers gain anything by contemplating the choices available: once past decisions have become sufficiently informative, it pays for later decision makers to simply copy those around them. This can lead to fragile social outcomes.
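The sequential-copying dynamic can be sketched in a toy simulation, in the spirit of the classic cascade model of Bikhchandani, Hirshleifer and Welch; the parameters and counting rule here are illustrative assumptions, not the book's own formulation.

```python
import random

def run_cascade(n_agents=100, signal_accuracy=0.7, seed=0):
    """Toy information-cascade model: each agent privately observes a
    signal about the true state ("A" or "B") that is correct with
    probability `signal_accuracy`, sees all earlier public choices,
    and copies the majority of prior choices once that majority
    outweighs its single private signal."""
    rng = random.Random(seed)
    true_state = "A"
    choices = []
    for _ in range(n_agents):
        correct = rng.random() < signal_accuracy
        signal = true_state if correct else ("B" if true_state == "A" else "A")
        lead = choices.count("A") - choices.count("B")
        if lead > 1:         # a lead of 2+ for A outweighs any one signal
            choices.append("A")
        elif lead < -1:      # likewise for B: copy the crowd
            choices.append("B")
        else:                # otherwise the private signal decides
            choices.append(signal)
    return choices
```

Once the lead of earlier choices reaches two, every later agent rationally ignores its own signal, so only the first few signals ever enter the public record; if those early signals happen to be wrong, the entire crowd locks onto the wrong answer, which is why such outcomes are fragile.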

Surowiecki is a very strong advocate of the benefits of decision markets and regrets the failure of DARPA's controversial Policy Analysis Market to get off the ground. He points to the success of public and internal corporate markets as evidence that a collection of people with varying points of view but the same motivation (to make a good guess) can produce an accurate aggregate prediction. According to Surowiecki, the aggregate predictions have been shown to be more reliable than the output of any think tank. He advocates extensions of the existing futures markets even into areas such as terrorist activity and prediction markets within companies.

To illustrate this thesis, he says that his publisher is able to publish a more compelling output by relying on individual authors under one-off contracts bringing book ideas to them. In this way they are able to tap into the wisdom of a much larger crowd than would be possible with an in-house writing team.

Will Hutton has argued that Surowiecki's analysis applies to value judgments as well as factual issues, with crowd decisions that "emerge of our own aggregated free will [being] astonishingly... decent". He concludes that "There's no better case for pluralism, diversity and democracy, along with a genuinely independent press."[8]

The most common application is the prediction market, a speculative or betting market created to make verifiable predictions. Surowiecki discusses the success of prediction markets. Similar to Delphi methods but unlike opinion polls, prediction (information) markets ask questions like, “Who do you think will win the election?” and predict outcomes rather well. Answers to the question, "Who will you vote for?" are not as predictive.

Assets are cash values tied to specific outcomes (e.g., Candidate X will win the election) or parameters (e.g., next quarter's revenue). The current market prices are interpreted as predictions of the probability of the event or the expected value of the parameter. Betfair is the world's biggest prediction exchange, with around $28 billion traded in 2007. NewsFutures is an international prediction market that generates consensus probabilities for news events. Intrade.com, which operated a person-to-person prediction market based in Dublin, Ireland, attracted very high media attention in 2012 in connection with the US presidential election, with more than 1.5 million search references to Intrade and Intrade data. Several companies now offer enterprise-class prediction marketplaces to predict project completion dates, sales, or the market potential for new ideas.[citation needed] A number of web-based quasi-prediction marketplace companies have sprung up to offer predictions primarily on sporting events and stock markets but also on other topics. Those companies include Piqqem, Cake Financial, Covestor, Predictify, and the Motley Fool (with its Fool CAPS product). The principle of the prediction market is also used in project management software such as Yanomo to let team members predict a project's "real" deadline and budget.
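The price-as-probability reading can be shown with a minimal sketch; the function name and numbers are illustrative, not any exchange's API.

```python
def implied_probability(price: float, payout: float = 1.0) -> float:
    """A binary prediction-market contract pays `payout` if the event
    occurs and nothing otherwise, so a risk-neutral trader values it
    at payout * P(event). Reading the market price back out gives the
    crowd's implied probability for the event."""
    if not 0 <= price <= payout:
        raise ValueError("price must lie between 0 and the payout")
    return price / payout

# e.g. a hypothetical "Candidate X wins" contract trading at $0.62
# on a $1 payout implies the market assigns a 62% chance to the win
print(implied_probability(0.62))
```

The same logic extends to parameter contracts: if a contract pays $0.01 per unit of next quarter's revenue, its price divided by $0.01 is the market's expected value of that revenue.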

The Delphi method is a systematic, interactive forecasting method which relies on a panel of independent experts. The carefully selected experts answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the experts’ forecasts from the previous round as well as the reasons they provided for their judgments. Thus, participants are encouraged to revise their earlier answers in light of the replies of other members of the group. It is believed that during this process the range of the answers will decrease and the group will converge towards the "correct" answer. Many of the consensus forecasts have proven to be more accurate than forecasts made by individuals.
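The round-by-round convergence can be illustrated with a toy model; the revision rule and the panel's numbers below are invented assumptions, not part of the Delphi method's definition.

```python
import statistics

def delphi_round(estimates, pull=0.5):
    """One Delphi round (toy model): each expert sees the anonymous
    group median and revises partway toward it. The `pull` strength
    is an illustrative assumption about how much the summary sways
    each expert."""
    m = statistics.median(estimates)
    return [e + pull * (m - e) for e in estimates]

# Hypothetical panel forecasts (e.g. units sold next quarter, in thousands)
estimates = [120.0, 95.0, 150.0, 110.0, 100.0]
for _ in range(3):
    estimates = delphi_round(estimates)
    # With this rule the spread (max - min) halves each round, since
    # every estimate moves the same fraction of the way to the median.
```

In this simple model the group median itself never moves, so the panel converges on it: the facilitator's anonymous summaries shrink the range of answers without any single expert dictating the result.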

Illusionist Derren Brown claimed to use the 'Wisdom of Crowds' concept to explain how he correctly predicted the UK National Lottery results in September 2009. His explanation was met with criticism online by people who argued that the concept was misapplied.[9] The methodology employed was also flawed: the sample of people could not have been fully objective and independent in thought, because they were gathered multiple times and socialised with each other too much, a condition Surowiecki warns is corrosive to the pure independence and diversity of mind required (Surowiecki 2004:38). Such groups fall into groupthink, increasingly basing decisions on one another's influence, and thus become less accurate. However, other commentators have suggested that, given the entertainment nature of the show, Brown's misapplication of the theory may have been a deliberate smokescreen to conceal his true method.[10][11]

This was also shown in the television series East of Eden, where a social network of roughly 10,000 individuals came up with ideas to stop missiles in a very short span of time.

In his book Embracing the Wide Sky, Daniel Tammet finds fault with this notion. He explains that it may work in the Who Wants to Be a Millionaire scenario because audience members have various levels of knowledge that can be combined to produce a correct answer in aggregate: some will know the correct answer, others will know which answers are wrong, and some will have no clue. Those who know the right answer will choose it, and the others will choose among what seem to be the possible answers. The result is a slight edge for the correct answer, even if only a few actually know it.
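Tammet's breakdown of the audience lends itself to a toy simulation; the fractions, option labels, and the assumption about which options get eliminated are all invented for illustration.

```python
import random

def audience_vote(n=1000, know=0.15, eliminate=0.5, seed=7):
    """Toy model of the Millionaire audience: option "A" is correct.
    A fraction `know` of voters know the answer, a fraction `eliminate`
    can rule out two wrong options and guess between the remaining two,
    and the rest guess uniformly among all four options."""
    rng = random.Random(seed)
    options = ["A", "B", "C", "D"]  # "A" is the correct answer
    votes = {o: 0 for o in options}
    for _ in range(n):
        r = rng.random()
        if r < know:
            votes["A"] += 1                      # knows the answer outright
        elif r < know + eliminate:
            votes[rng.choice(["A", "B"])] += 1   # ruled out C and D, guesses
        else:
            votes[rng.choice(options)] += 1      # no clue, guesses among four
    return votes
```

Even though only 15% of this hypothetical audience knows the answer, every group's votes tilt at least weakly toward it, so the correct option reliably wins the plurality, which is the aggregation effect Tammet describes.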

However, Tammet points out the potential for problems in systems with less well-defined means of pooling knowledge: subject-matter experts can be overruled and even wrongly punished by less knowledgeable persons, and he cites a case of this on Wikipedia. Furthermore, Tammet mentions the assessment of the accuracy of Wikipedia described in a 2005 study in Nature, outlining several flaws in the study's methodology, including that it made no distinction between minor and major errors.

In his book You Are Not a Gadget, Jaron Lanier argues that crowd wisdom is best suited for problems that involve optimization, but ill-suited for problems that require creativity or innovation. In the online article Digital Maoism, Lanier argues that the collective is more likely to be smart only when

1. it isn't defining its own questions,

2. the goodness of an answer can be evaluated by a simple result (such as a single numeric value), and

3. the information system which informs the collective is filtered by a quality control mechanism that relies on individuals to a high degree.

Lanier argues that only under those circumstances can a collective be smarter than a person. If any of these conditions are broken, the collective becomes unreliable or worse.

^ Introduction (page XII): Although Surowiecki's description of the "averaging" calculation (page XIII) implies that Galton first calculated the mean, inspection of the original 1907 paper indicates that Galton considered the median the best reflection of the crowd's estimate (Galton, Francis (1907-03-07). "Vox Populi". Nature. doi:10.1038/075450a0. "the middlemost estimate expresses the vox populi"). Galton's quotation from the end of this paper (given by Surowiecki on page XIII) actually refers to the surprising proximity of the median and the measurement, and not to the (much closer) agreement of mean and measurement (which is the context Surowiecki gives it in). The mean (only 1 pound, rather than 9, from the ox's weight) was only calculated in Galton's subsequent reply to a letter from a reader, though he still advocated use of the median over any of the "several kinds" of mean (Galton, Francis (1907-03-28). "Letters to the Editor: The Ballot-Box". Nature 75 (1952). doi:10.1038/075509e0. "my proposal that juries should openly adopt the median when estimating damages, and councils when estimating money grants, has independent merits of its own"); he thought the median, which is analogous to the 50% + 1 vote, particularly democratic.

Ivanov, Kristo (1972). Quality-control of information: On the concept of accuracy of information in data banks and in management information systems. Doctoral dissertation, The University of Stockholm and The Royal Institute of Technology. Dissertation Abstracts International 1974, Vol. 35A, 3, p. 1611-A. National Technical Information Service (NTIS) order No. PB-219297.