Crowd-talk yields great answers, says university team

The Chorus system. User requests are forwarded to crowd workers, who then submit and vote on responses. Once sufficient agreement is reached, responses are made visible to users. The crowd's working memory is updated by workers selecting lines from the conversation or summarizing important facts. Credit: Walter S. Lasecki et al.

(Phys.org)—Move over, Siri. Researchers from the University of Rochester, in collaboration with a University of California, Berkeley, mathematician and crowdsourcing entrepreneur, have come up with a killer approach to the personal assistant. "We introduce Chorus, a system that enables realtime, two-way natural language conversation between an end user and a crowd acting as a single agent." So begins their paper, "Speaking with the Crowd," which suggests that the ideal artificial chat partner may in fact be the combined contributions of many crowdsourced workers. The researchers propose a crowd-powered chat system that behaves as an online collaborative interface, and they argue it one-ups existing systems because it can take on more complex tasks.

Walter S. Lasecki, Rachel Wesley, and Jeffrey P. Bigham from the University of Rochester worked with Anand Kulkarni, the cofounder of the crowdsourcing company MobileWorks, to create Chorus. They sought to demonstrate that the power of crowdsourcing might extend beyond simple tasks to complex ones. "What we're really interested in is when a crowd as a collective can do better than even a high-quality individual," said co-author Bigham.

How the Chorus system works: People talk to Chorus with an instant messaging window. User requests are forwarded to crowd workers, who submit and vote on responses. When agreement is reached, responses are made visible to users. The crowd's working memory is updated by workers selecting lines from the conversation or summarizing important facts. According to the co-authors of the paper, "Chorus is capable of maintaining a consistent, on-topic conversation with end users across multiple sessions, despite constituent individuals perpetually joining and leaving the crowd. This is enabled by using a curated shared dialogue history."
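The submit-and-vote step described above can be sketched in a few lines of Python. This is only an illustration of the general mechanism: the function name, the simple vote-share threshold, and the example requests are all assumptions, not the authors' actual implementation.

```python
from collections import Counter

def accept_response(votes, threshold=0.5):
    """Return the proposed response whose share of worker votes
    meets the agreement threshold, or None if there is no
    consensus yet.

    votes: one proposed response string per worker vote
    threshold: fraction of votes required before the response
               is shown to the user
    (the majority-share rule is an illustrative assumption,
    not the paper's exact agreement criterion)
    """
    if not votes:
        return None
    counts = Counter(votes)
    response, n = counts.most_common(1)[0]
    if n / len(votes) >= threshold:
        return response
    return None

# Workers vote on candidate answers to a user's request
votes = ["Try Hana Sushi on Main St.",
         "Try Hana Sushi on Main St.",
         "Sushi Bar X downtown"]
print(accept_response(votes))  # the majority answer is released
```

In the real system the crowd also curates a shared dialogue history (the "working memory" in the figure), which is what lets the conversation survive workers joining and leaving.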

Put to the test, Chorus was able to sustain consistent conversations between a single user and large numbers of crowd participants, and the researchers said the conversation stayed on topic. Chorus was also capable of retaining meaningful long-term conversational memory across multiple sessions, even as individual crowd workers came and went. As for accuracy, workers answered 84.6 percent of user queries correctly.

In all, say the authors, "These findings suggest that Chorus is a robust interface for allowing disparate members of the crowd to represent a single individual during natural language conversations as an alternative to software agents."

How robust these systems become as technologies evolve remains to be seen. What is known is that personal assistants such as Apple's Siri are very useful but do not come up to par with the conversational skills of a real person. As the researchers note, robust two-way conversations with software agents remain a challenge. Existing dialogue-based systems generally rely on fixed input vocabularies or restricted phrasings, have a limited memory of past interactions, and use a fixed output vocabulary.

As for where Chorus can fit in the real world, the authors say that "In the future, we expect Chorus will have utility as a conversational partner and as a natural-language dialogue interface to existing systems."



7 comments

The problem with crowdsourcing is information sharing. People are overwhelmingly imitators rather than innovators, so if information is shared, most people begin to follow the popular opinion. Consider CNN et al. and public decisions such as whether or not to go to war. If the information is all public, the most common answer is followed.

The same is true of the famous guess-the-number-of-jelly-beans-in-the-jar contest. If all the votes are kept hidden, the correct answer emerges. On the other hand, if the votes are displayed in real time (like the stock market or real-estate prices), the most common answer is followed.

Any such crowd sourcing system needs information to be temporarily hidden to force individuals to "think for themselves", revealing the consensus after the fact.

I am reminded of the popular tune "50 Million Commies Can't Be Wrong," by Sherman and Larsen. The robustness of the scientific method suggests 'consensus' decisions are not as reliable as empirical testing.

While the crowd got 85% correct (higher than my experience, except in extremely pre-qualified groups), a single individual could at least theoretically have gotten 100. Therein lies the limitation of crowds: they will always be averaging their answers and always be asymptotic to 100%. Ultimately, this is computer-augmented decision by committee. Somehow I can't help but think of how efficient Congress is when I think of crowdsourcing decisions.

Why should there automatically be no bias in crowd-derived estimates? Is there any proof?

Crowdsourcing is not a new concept. It has been "played around" with for years, usually in social sciences departments.

My impression from what I remember, is that the most extreme outliers (biased opinions) will mostly cancel each other. Within a given range about a possible "average" there is always the possibility of some bias, which may not necessarily be a bad thing. It all depends on your definition of "bias."

a single individual could at least theoretically have gotten 100. Therein lies the limitation of crowds: they will always be averaging their answers and always be asymptotic to 100%.

It's not clear what 84.6% accuracy means without comparing to the average and best/worst accuracy of the individuals in the same pool. Also, a crowd can theoretically get 100% in the same scenario where any individual gets 100%. Just give that person 100% of the influence. (You'd have to identify that person a priori in either case.)

There is nothing inherent in the concept that requires "averaging" of answers in any particular way. You could have a reputation system to identify the strengths/weaknesses of active crowd members and adjust the weight of their opinions, use NLP to group questions with likely subject matter experts, try out different voting schemes, combine an ensemble of small crowds instead of one big crowd, etc.
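The reputation-weighting idea in the comment above can be sketched concretely. This is a hypothetical scheme, not part of Chorus: the function name and the weights are made up for illustration.

```python
def weighted_vote(answers, weights):
    """Pick the answer with the largest total reputation weight.

    answers: one proposed answer per crowd member
    weights: a reputation weight per crowd member
    (a hypothetical reputation system; not the paper's mechanism)
    """
    totals = {}
    for ans, w in zip(answers, weights):
        totals[ans] = totals.get(ans, 0.0) + w
    return max(totals, key=totals.get)

# Two low-reputation members vs. one trusted expert:
answers = ["A", "A", "B"]
weights = [0.2, 0.2, 1.0]
print(weighted_vote(answers, weights))  # "B": the expert outweighs the pair
```

With equal weights this reduces to the simple majority the earlier comments criticize; the point is that the aggregation rule, not the crowd itself, determines how close the system can get to the best individual's answer.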
