DIY survey platforms make constructing questionnaires easy, but the results could be biased, contradictory, or deeply misleading.

Online surveys often have to compete for attention against the backdrop of Netflix, Gmail alerts, and 25 open browser tabs. The minimal cognitive effort given to answering questions may exacerbate all the problems that lead to biased or outright distorted results.

Perhaps the most common rule that surveys break is careful wording. Minor adjustments to a question can produce enormous differences in the answers. For example, one study found that for the question
“Should divorce in this country be easier to obtain, more difficult to obtain, or stay as it is now?” placing “more difficult” at the end of the question caused an 11% difference in responses.


In another study, for the question “Do you think the United States should
forbid public speeches against democracy?” replacing the word “forbid” with “allow” caused a 26% increase in respondents’ support for free speech (because individuals, on average, have an aversion to forbidding rights). In other words, because respondents don’t take the time to think about the substance of a question, wording matters.
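Wording effects like these are typically measured with split-ballot experiments: randomly give half the respondents one wording and half the other, then test whether the gap in responses is larger than chance. Here is a minimal sketch of that analysis using a standard two-proportion z-test; the sample sizes and counts below are hypothetical, not the figures from the studies cited above.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: is the difference in agreement rates
    between two question wordings larger than chance?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no wording effect.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical split-ballot: 500 respondents per wording.
# "forbid" ballot: 230 support free speech; "allow" ballot: 360.
z = two_proportion_z(230, 500, 360, 500)
print(round(z, 2))  # |z| > 1.96 suggests the wording effect is real
```

A |z| above roughly 1.96 corresponds to statistical significance at the conventional 5% level, which is why pollsters trust that shifts this large reflect the wording rather than sampling noise.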

One way to get around bias, says Traugott, is to use balanced wording: “Some people think A, while other people think B. How about you?” Be as explicit as possible about the wide range of beliefs that exist; otherwise, an individual’s sheep-like proclivities kick in.

Second, try not to attach an authority’s name to a question, such as, “The Supreme Court recently decided X. Do you agree?” Individuals, especially lazy ones happy to pass off the heavy thinking to someone else, will give extra weight to authority figures who may know more than they do.

Pre-test, Pre-test, Pre-test

Traugott says it’s a mistake for people to believe “that they can write these questions and get them correct the first time.” Testing a survey on a few close friends may reveal enormous gaps in understanding. More sophisticated pre-testing may require iteratively refining a question with different people until a string of unique testers gives the same interpretation to it. If that’s too cumbersome, asking someone in a nearby cubicle or over Facebook chat may still lead to big improvements.

In pre-testing, one of the red flags to look out for is response categories that don’t allow respondents to answer how they truly feel. For instance, Traugott recommends adding a “Don’t Know” option if the question relates to an issue on which a concrete opinion may not exist. When a respondent agrees to an interview, they may feel a sort of “social contract” to answer questions, “even if they haven’t thought very much about it.” If a pre-test is done correctly, a respondent who does not have a solid opinion will tell you so, and the questionnaire can be adapted accordingly.

Ultimately, all questions begin with a hypothesis about the world. Pollsters ask about President Obama’s approval ratings after the State of the Union because they suspect an eloquent speech might boost his likability among conservatives. A manager may ask workers if they enjoy their job because he or she fears low organizational morale.

Therefore, Traugott recommends adding questions that unearth the cause of an answer. Pollsters should ask which party a respondent is affiliated with; a manager might ask a worker how long they’ve been at their job. Without these additional variables, we’re left in the dark, unable to explain why the results turned out a certain way.
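In practice, those extra variables let you cross-tabulate: break the headline number down by subgroup to see who is driving it. A toy sketch (all respondents and categories below are hypothetical) of a cross-tab in Python:

```python
from collections import Counter

# Hypothetical responses: (party_affiliation, approves)
responses = [
    ("Dem", True), ("Dem", True), ("Dem", False),
    ("Rep", True), ("Rep", False), ("Rep", False),
    ("Ind", True), ("Ind", True),
]

# Cross-tabulate approval by party to see which group drives the total.
tab = Counter(responses)
for party in ("Dem", "Rep", "Ind"):
    yes, no = tab[(party, True)], tab[(party, False)]
    rate = yes / (yes + no)
    print(f"{party}: {rate:.0%} approve ({yes + no} respondents)")
```

The overall approval rate alone would hide the fact that the subgroups answer very differently, which is exactly the explanatory gap the added questions are meant to close.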


SurveyMonkey, Facebook, and other DIY survey platforms are digital siren songs, tempting us to bang out a quick survey over a lunch break simply because we can. But, as the old statistics adage goes, “garbage in, garbage out.” Collecting accurate data takes revision, investigation, and forethought.