Survey? That’s so… ’90s.

The simple answer: none of the above offer the data you need to make critical decisions.

Survey questions focus the human mind on particular contexts, parameters, and frames of reference. The information yielded can then be compared to other people’s answers again and again. Some forms of new media are purposefully ambiguous, rendering comparisons impossible.

Comparisons are the essence of choice, correlation, and cause and effect. There’s a popular saying among researchers that what you really need to understand are “differences that make a difference, and the difference made.” In other words, differences in conditions, demographics, groups, and more can have an effect on something else. Understanding that effect is critical if businesses, organizations, schools, and even families are to make good decisions.

Of course, the question itself has to be constructed with precision in order for this process to be effective.

Constructing a good question is a science. Really.

We’ve been asking questions nearly our whole lives. Most of them are good enough for picking a restaurant with friends, but not for making critical business decisions.

To get decision-making data, you need to ask a question in a way that everyone will interpret it the same way. Equally important, response sets need to be constructed so that they have the same meaning every time for each respondent. Questions need to be focused and simple, yet contain all of the information needed to accomplish your goals.

For example, a question such as “How happy were you with the quality of service at our restaurant?” seems reasonable. But does the researcher mean happy or satisfied? Service in terms of what? Why not ask several questions, one for each dimension of service quality? For instance:

“How clean was our restaurant?”

“How prompt was the server in refilling drinks?”

See how “clean” and “prompt” are two different, specific ideas that help you focus on where your business needs improvement?

Once we have identified a specific idea of interest, we can add scale points. Let’s say I asked you to rate how helpful this article is. It may fall between being the most helpful article you’ve ever read and the least helpful article ever. We can safely assume there is something in between—like “sort of” helpful. So far, that’s three response options we can easily wrap our minds around. After that, a response option between the midpoint and each anchor is about all the brain can handle. Extremely helpful, Very helpful, Somewhat helpful, Slightly helpful, Not at all helpful—voilà! Fully labeled scales add a measure of stability that helps your respondents focus and thus helps you make better decisions from the feedback they provide.
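To make the idea concrete, here is a minimal sketch of how such a fully labeled five-point scale might be coded for analysis. The labels come from the paragraph above; the numeric coding scheme and the function name are illustrative assumptions, not anything prescribed by SurveyMonkey.

```python
# Hypothetical coding of the article's fully labeled five-point scale.
# The 5-to-1 numeric assignment is an assumption for illustration.
HELPFULNESS_SCALE = {
    "Extremely helpful": 5,
    "Very helpful": 4,
    "Somewhat helpful": 3,  # the midpoint ("sort of" helpful)
    "Slightly helpful": 2,
    "Not at all helpful": 1,
}

def code_response(label: str) -> int:
    """Map a respondent's chosen label to its numeric code."""
    return HELPFULNESS_SCALE[label]
```

Because every point carries a label, each respondent interprets the options the same way, and the numeric codes can be compared across respondents.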

See? There is methodology to the SurveyMonkey madness. Keep an eye on this space to learn more about how methodology can help you get the information you need. And definitely let me know how helpful you found this post…


Hello Dr. Philip Garland!
I just found my way to your series of articles on survey methodology here on the SurveyMonkey blog. “The Hazards of Satisficing” is very good reading.

You encouraged requests from readers about areas of interest for your future posts. I have two:

1) Might you discuss the use of flow charts in survey design?
2) Can you make any comments about separating valid survey respondents from responses that should be discarded? Obvious criteria include incomplete surveys. Any absolute rules for inclusion or exclusion?

Sometimes there’s a good reason to include responses from incomplete surveys. If a survey is too long, respondents might burn out and quit partway through. Their answers at the beginning might still be valuable, especially if they cover the most important questions, which is a good reason to place your critical questions up front.

Ellie and Wendy, thought you both might enjoy our latest blog post on the very topic of completion rates based on the number of questions in a survey: https://www.surveymonkey.com/2010/12/survey_questions_and_completion_rates/ Our business intelligence team did a pretty interesting analysis across 100,000 surveys to better understand how adding questions affects completion rates.