Five Cautions for Crowdsourcing

There is a small academic niche in market research that relies heavily on extremely cheap crowdsourcing for data. But before getting up in arms about the reliability and validity of such data, you should know this: researchers in this niche used to rely heavily on students enrolled in their college classes for data. Remember having to work six hours as a “laboratory subject” for Psych 101? Turns out it is cheaper, faster, and easier to find volunteers for these experiments on Amazon’s Mechanical Turk. The Journal of Consumer Research reports that 43% of the behavioral research it published last year was “conducted on the crowdsourcing website Amazon Mechanical Turk (MTurk).”

Market researchers are always looking for new ways to make work cheaper, faster, and easier. So if you are considering crowdsourcing as an option, take a look at JCR’s most recent tutorial, “Crowdsourcing Consumer Research.” It assesses “the reliability of crowdsourced populations and the conditions under which crowdsourcing is a valid strategy for data collection,” and then proposes “specific guidelines for researchers to conduct high-quality research via crowdsourcing.”

Here are five important guidelines the authors offer, highlighted here because they have clear relevance for all types of sampling, not just crowdsourcing:

Minimize the risk of self-selection. When recruiting respondents, make truthful statements about your survey’s general focus, but avoid revealing exactly what qualification criteria are needed. Instead, use multi-tiered sets of screening questions.

Avoid attrition. Long, tedious, and repetitious survey tasks will make some respondents quit. If you find that many are quitting, carefully analyze who is quitting and where. This is especially important for experimental designs that may put unequal burdens on respondents, confounding the main experimental effect in your data.
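For readers who work with raw response files, the quit-analysis described above can be sketched in a few lines. This is a minimal illustration, not anything from the JCR tutorial, and the records here are entirely hypothetical: each one pairs an experimental condition with the last question a respondent answered before quitting.

```python
# Minimal sketch of an attrition check on hypothetical dropout records.
# Each record: (experimental_condition, last_question_answered_before_quitting)
from collections import Counter

dropouts = [
    ("control", "Q4"), ("control", "Q12"),
    ("treatment", "Q4"), ("treatment", "Q4"), ("treatment", "Q5"),
]

# Quits by condition: a lopsided count suggests one cell carries an
# unequal burden, which can confound the main experimental effect.
by_condition = Counter(cond for cond, _ in dropouts)

# Quits by question: a spike at one item flags a tedious or confusing question.
by_question = Counter(q for _, q in dropouts)

print(by_condition)
print(by_question)
```

Tallies like these will not fix attrition, but they show quickly whether quitting is concentrated in one condition or at one point in the instrument.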

Manage the pool. Be responsive to MTurk workers who have signed up for the survey. Provide any information they need; answer their questions promptly. Remember they are not anonymous subjects who magically provide data points. If you want thoughtful information from them, do not take them for granted.

Diversify samples. Crowdsourced samples are probably more diverse than a typical group of college students. But they do not represent the full population—not even close. So you need to test your ideas with various groups to differentiate true “universals” from effects driven by demographics or cultural background.

Report fully. As with any research, you should provide full details about the sample and how it was recruited. Describe how problems of self-selection and attrition were managed. Offer a profile of sample demographics, and lay out how all your cleaning and weighting adjustments were made.

Versta Research has never used crowdsourcing for research, as we noted last year in Using MTurk for Market Research. This is because crowdsourced sampling is not a good option for most of the work we do. But even for researchers like us, it can be excellent for early testing and pre-testing of survey instruments and research protocols. It is an attractive option to consider for the future.