The Dos and Don'ts of Crowd Wisdom

Crowd wisdom, or the wisdom of crowds, has been getting a lot of attention in recent years. Writers like James Surowiecki have published widely read books that got everybody talking about this new way of answering questions and making decisions. Perhaps we like the idea that a group of ordinary people can make decisions just as good as, or better than, an esteemed expert. It appeals to the democratic, egalitarian side of us, the part of our ethos that wants to believe humble teamwork is mightier than flashy celebrity superstars. The naysayers maintain that there is no wisdom in crowds; rather, the "masses are asses".

Before you choose a camp, let's take a closer look at the outcomes produced by crowd wisdom applications across several different verticals. Some real-world scenarios have proven strikingly successful. Others did not turn out so well.

2. Wikis and shared information resources

Wikipedia is the free-access, free-content online encyclopedia built entirely by a distributed network of contributors and editors who write, review, correct and update each other's entries freely via the internet. Wikipedia has tens of millions of articles across its language editions, and they are generally remarkably complete and accurate. In fact, the journal Nature designed a study to compare the quality of Wikipedia with the traditional Encyclopaedia Britannica. They randomly selected entries from both sources on a range of science topics and sent them to leading subject experts for review. Britannica entries averaged 2.92 mistakes per article; Wikipedia averaged 3.86.

3. Betting Spread

In the world of sports betting, the bookie has the important job of setting the point spread for any given match, which lets the betting crowd know which team is expected to win and by how much, or what the odds of each outcome are. If the bookie does his job right, the spread will be perfectly balanced and bettors will distribute the total cash amount of their bets equally across both outcomes. The spread is published and the crowd starts placing bets. If the incoming bets are not distributed equally, the bookie knows he needs to alter the spread to balance it out. A handful of scientists have studied betting spreads to see how well they predict outcomes, and it turns out they do a pretty good job. Results consistently show that for most major professional sports, the spread is a very strong predictor of outcomes. The crowd is especially good in horse racing, where the final odds accurately predict the order in which the horses will finish the race.
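The balancing loop described above can be sketched in a few lines of Python. This is a deliberately simplified model with made-up thresholds, not how any real sportsbook sets its lines:

```python
def adjust_spread(spread, bets_favorite, bets_underdog, step=0.5, tolerance=0.05):
    """Nudge the point spread toward whichever side is attracting less money.

    If too much money lands on the favorite, make the favorite's handicap
    larger (less attractive to back); if too much lands on the underdog,
    shrink it. A balanced book means the bookie profits from the vig
    regardless of the game's outcome.
    """
    total = bets_favorite + bets_underdog
    if total == 0:
        return spread
    imbalance = (bets_favorite - bets_underdog) / total
    if abs(imbalance) <= tolerance:
        return spread  # book is balanced; leave the line alone
    return spread + step if imbalance > 0 else spread - step

# Too much money on the favorite at -6.5, so the line moves to -7.0.
print(adjust_spread(6.5, bets_favorite=120_000, bets_underdog=80_000))  # 7.0
```

The crowd's money, filtered through this feedback loop, is what makes the final spread such a good predictor: the line keeps moving until no one thinks either side is a bargain.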

4. Interpreting chaotic data

Faced with a growing army of malicious bots, researchers developed the famous CAPTCHA (and later reCAPTCHA, acquired by Google) to sniff out which users were bots and which were real, genuine humans. The online security check showed a somewhat distorted image of text and numbers and prompted users to type what they saw into a text field. The assumption was that, unlike digitized text that could be deciphered by bots, the blurry images could only be read accurately by real humans. But CAPTCHAs served another purpose. Every time a human correctly enters a CAPTCHA value, it helps digitize text, annotate images, and build machine learning datasets. This in turn helps preserve books, improve maps, and solve hard AI problems. Each human contributes a tiny little effort; together they solve a huge problem.

5. Mechanical Turking

Mechanical Turk is an Amazon program that uses a massive, distributed human workforce to do jobs that demand the speed, accuracy and consistency of computers, but that are nevertheless better performed by actual humans. These tasks are called HITs (Human Intelligence Tasks), and they mostly have to do with deciphering human language: finding information in documents, translating and transcribing. By breaking a large, complex task into endless little pieces and unleashing it on a crowd, Amazon Mechanical Turk workers are able to do a job faster and better than computers or professional human experts.
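The divide-and-conquer step is the heart of the approach. A minimal sketch of chunking a long transcription job into bite-sized HITs (the function name and chunk size here are illustrative, not part of Amazon's API):

```python
def make_hits(document_lines, lines_per_hit=5):
    """Split a long transcription job into small, independent HITs.

    Each chunk can be handed to a different worker in parallel, and
    no worker needs context beyond their own few lines.
    """
    return [document_lines[i:i + lines_per_hit]
            for i in range(0, len(document_lines), lines_per_hit)]

doc = [f"scanned line {i}" for i in range(12)]
hits = make_hits(doc)
print(len(hits))  # 3 HITs: 5 + 5 + 2 lines
```

In a real pipeline the same chunk is often sent to several workers and their answers compared, so the crowd also checks its own work.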

6. Predicting Elections

Here it is important to point out the difference between a political survey and an election prediction. The difference lies in the type of question asked. Typically, citizens of voting age are asked, "Who will you vote for?" The answer to this question assumes that each respondent is actually going to participate in the election, has made a decision about which candidate to support, and won't change his mind before election day. This method also assumes that by asking enough people, you can get a "representative sample" of the entire voting population and extrapolate the percentages onto the rest of the population. Unfortunately, these types of voter surveys are best at providing interesting news content, but not so good at predicting the actual outcomes of elections. If we want a more efficient predictor of actual outcomes, the question should be "Who is going to win?" This simple nuance in framing helps elicit the predictive powers of the crowd. When groups of individuals are asked to predict what will happen, they are pretty good at predicting the future. New Zealand-based PredictIt is an online prediction market for political and financial events. At the moment, the lead question the PredictIt crowd is asked to answer is "Who will be the Democratic nominee?", and the crowd is predicting Hillary Clinton at 79%.
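Where does a number like 79% come from? In a prediction market, a contract on an event typically pays out $1 (100 cents) if the event happens and nothing otherwise, so the price the crowd trades it at is commonly read as the crowd's probability estimate. A minimal sketch of that reading:

```python
def implied_probability(price_cents):
    """Read a prediction-market contract price as a probability.

    Assumes a binary contract that pays 100 cents if the event occurs
    and 0 otherwise; the market price then approximates the crowd's
    estimate of the chance the event occurs.
    """
    return price_cents / 100.0

# A nominee contract trading at 79 cents implies a ~79% crowd estimate.
print(f"{implied_probability(79):.0%}")  # 79%
```

Real markets complicate this slightly (fees, and prices across all candidates can sum to more than 100%), but the price-as-probability reading is the core idea.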

It seems that crowd wisdom works best for questions where:

+ There is transparent, free access to all relevant information
+ The question is one of popular taste or opinion
+ There is an open, efficient feedback loop
+ The question is cross-disciplinary in nature

Promising New Directions

1. Predicting global trends

It is still too early to know, but academics and financial enterprises have begun to ask whether crowd wisdom can predict global trends. Some preliminary research on comments from the social media network Twitter suggests that its crowd could predict the path of the recent Ebola outbreak with startling accuracy. Could sentiment analysis of social media chatter predict the direction of the stock market? Can we ask the crowd what will happen to currency markets, or whether we are in a tech bubble?
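To make "sentiment analysis of social media chatter" concrete, here is a toy illustration: score each post against a tiny hand-made sentiment lexicon and aggregate the scores into a single crowd signal. The word lists are invented for the example; real systems use far larger lexicons or trained models:

```python
# Tiny illustrative lexicon (not a real sentiment resource).
POSITIVE = {"bullish", "rally", "growth", "record", "beat"}
NEGATIVE = {"bearish", "crash", "bubble", "selloff", "miss"}

def crowd_sentiment(posts):
    """Aggregate a crude per-post sentiment score across many posts.

    Each post scores +1 per positive keyword and -1 per negative
    keyword it contains; the sum over the crowd is the signal.
    """
    score = 0
    for post in posts:
        words = set(post.lower().split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
    return score

posts = [
    "Earnings beat expectations, feeling bullish",
    "This looks like a bubble to me",
    "Huge rally today, record close",
]
print(crowd_sentiment(posts))  # 3: net-positive chatter
```

The open research question is whether such an aggregate signal, computed over millions of posts, actually leads market moves or merely echoes them.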

2. Sales forecasting

If the right crowd is asked the right questions, we may be able to predict important business markers like future sales. But who should be asked? Existing customers? Potential customers? What about employees and shareholders? What combination can accurately predict sales? Will we discover a new, better way to forecast than the existing econometric methods? Lumenogic is a suite of commercial platforms optimized for prediction markets, competitive forecasting and crowdsourced innovation.

3. Replacing governments in complicated situations

On the one hand, crowd wisdom seems to be the genius behind democratic electoral governments: let each citizen place his or her vote, and the best possible collective good will be determined. There is no objective way of determining whether contemporary democracies have succeeded in optimizing the public good, but research and common sense suggest they have fallen far short. Powerful corporations manage to secure huge bailouts and tax breaks, while truly downtrodden individuals have little or no means to lobby for their own basic needs. Questions of governance and public policy often require complex analysis of a multitude of factors, many of which are largely unknown. It's a sophisticated exercise in economic, social and legal scenario testing. While the cynics among us may say that "any idiot could make better decisions than the government", it is questionable whether crowds have the ability to sufficiently estimate all the factors at play in governing. This week Greece voted "no" to an EU bailout package via a referendum. Only time will tell how that decision plays out for Greece. And no matter what the outcome, we won't be able to know for sure whether a "yes" vote would have been better or worse. Social and economic forces are just too complex.

4. Corporate governance

While corporate governance does involve voting by the board of directors, this is too small a group of people to tap into real crowd wisdom. The biases, backgrounds, interests and interpersonal relations among board members are more likely to evoke groupthink than crowd wisdom. But what if those votes were opened up to a wider group of interested individuals, like all employees? Biases and interests might still be a problem, since employees also have a very distinct vantage point.

5. Medical advice

An exciting yet controversial area of crowd wisdom research is medical advice. This goes against our traditional understanding of who has the expertise and legal standing to give medical advice. Physicians are famous for resenting patients who come into their office armed with medical ideas they learned from Googling their symptoms or condition. This is understandable, since simply searching for keywords on the internet doesn't return insightful information that is relevant to each individual case. One company, Treato, has built technology that sifts through the health information on the internet and organizes it so that it can be easily personalized and consumed. Patients, physicians and pharmaceutical companies can all use this crowd insight to better understand conditions, treatments and trends in disease management. What Treato's technology doesn't do is play "crowd doctor" and encourage people to share symptoms and be diagnosed by the crowd. Though in the future, physicians may get new tools that allow them to ask a crowd of other physicians for a diagnosis or a prescribed course of treatment.

Don'ts

1. Inventing

They say "necessity is the mother of invention", but can the crowd give birth to innovative new ideas? Can the crowd combine its collective brainpower and ingenuity to come up with a groundbreaking new scientific theory or technological invention? I am afraid not: crowd wisdom doesn't allow brains to actually join forces to think better. The sheer intelligence and creativity needed to sprout new ideas will likely continue to come from individuals and small teams. Scientists can certainly pool data, but they cannot think collectively. Nor can any crowd "guess" new discoveries or inventions.

2. Needle in Haystack problems

Crowd wisdom works well for estimating large, continuous quantities, such as the classic case of guessing the weight of an ox. It cannot work when the task is inverted, such as guessing which number will be drawn in a lottery. In that case it's not an estimate of a continuous quantity, and the members of the crowd have no information to judge by; it's a pure guess. Likewise, the crowd is not good at picking one perfect solution out of many possibilities, since the options are distinct and therefore cannot be estimated. Some examples include picking one winning stock, or picking who will win the World Series at the beginning of the season.
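The contrast between the two kinds of task can be simulated directly. In the sketch below (parameters invented for illustration, with the ox's true weight borrowed from Galton's famous contest), averaging noisy guesses of a continuous quantity cancels the noise, while averaging pure lottery guesses just converges on the middle of the range, telling us nothing about the winning number:

```python
import random

random.seed(42)

# Continuous estimation: each guess is the true value plus noise,
# so the crowd's mean homes in on the truth.
true_weight = 1198  # pounds, as in Galton's ox-weighing contest
guesses = [random.gauss(true_weight, 150) for _ in range(800)]
crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # lands very close to 1198

# Needle in a haystack: guesses carry no information about the answer,
# so their average is just the midpoint of the range, win or lose.
winning_number = random.randrange(1, 50)
picks = [random.randrange(1, 50) for _ in range(800)]
crowd_pick = round(sum(picks) / len(picks))
print(crowd_pick)  # hovers near 25 regardless of winning_number
```

The first average improves as the crowd grows (the noise shrinks like one over the square root of the crowd size); the second never improves, no matter how many people guess.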

3. Determining personal taste or belief

Similar to needle-in-a-haystack problems, determining personal taste or belief is not an estimation task, and the information required to make the determination is too rich for the crowd to process. The determination of personal taste requires a human expert, or a machine that can imitate a human expert. A crowd of humans will perform this task poorly. Questions of faith or "ultimate truths" will also not benefit from crowd wisdom. It doesn't matter how many people we poll; we will never get a definitive answer to "Does God exist?" or "Who is the greatest electric guitarist of all time?"

Conclusion

Looking at what we know today, it is clear that crowd wisdom has the potential to be a highly efficient predictor of future outcomes in situations where the question at hand is one of estimation, so long as there is transparent, free access to all information and it can be digested and understood by the members of the crowd. Perhaps if we can find new ways to usefully share more information with more people, we will be able to harness the good side of crowd wisdom with greater skill and accuracy in important areas of science and industry.

Is crowd wisdom a passing trend, or will it become more widely implemented?