Gamification in Market Research – Is Respondent Engagement More Important than Bias?

I’ve just finished reading an article on Gamification in the May 2012 issue of Admap by Deborah Sleep of Engage Research (http://bit.ly/KVuJ97). It describes the findings of research carried out jointly by GMI and Engage into the impact of gamification on respondent engagement in online surveys.

The gist of the article is simple: richer responses can be generated by integrating gaming mechanics – feedback loops, tapping into competitive spirit, and framing questions as tasks, to name a few. Respondents spend more time giving answers, enjoy the experience more, and give richer responses.

Great stuff – especially for online panel providers, and potentially for all involved, including client-side researchers such as myself.

Yet it left me wondering: does Gamification actually move the perceptual goalposts with its brain-engaging techniques?

Or more critically: does Gamification introduce a new element of bias? Here’s my take:

1. Gamifying is a powerful tool, one that needs treating with care. A gamified question can easily elicit a different response from a straightforward question.

Take an example from the article: a question that originally asked respondents to “name the clothing they liked to wear” was gamified to: “what clothing would you wear for a first date?”

That to me is a different question – I’d expect answers to both questions to be different.

2. The use of gamification should always be informed by your actual knowledge needs.

If – as this research shows – you get richer responses on, say, brand mentions, then the design needs to consider how important accuracy of measurement is versus richness of mentions. Richness may not be a relevant criterion, depending on the goal of the survey at hand.

3. Care needs to be taken with rating scales that are part of a gamified survey.

More engaged respondents are likely to give more positive evaluations on rating scales. Where normative data is a key aspect, this effect needs to be understood – especially if we need to look at purchase propensity and top-box scores.

4. Authenticity of response should matter as much as engagement in any survey. The use of gamification needs to take that into account.

It’s potentially misleading to introduce heightened awareness into a survey context if the real-life experience isn’t likely to match it. Not all things in life are a game, or fun – if reality is monochrome, then research needs to replicate that.

My overall take on Gamification is that it looks like a potentially important methodological innovation that needs proper validation – the R&D work needs to be done before broad adoption can be advocated.

More particularly, I think we need to understand what biases we’re introducing, so we can understand what impact different types of gamification mechanics have on response patterns, and build that into the analysis.

The upsides in engagement may well be there for MR – but gaining a granular understanding of the impact of any change is important.

18 responses to “Gamification in Market Research – Is Respondent Engagement More Important than Bias?”

My question is: what if the bias is actually due to the way we structure questions and surveys now?

Bear with me here: what if we have simply adjusted for the cognitive biases we introduce via the artificial constructs of traditional research models, and gamified models are actually more naturalistic, honest, and representative of how humans actually think and make decisions? There is much evidence that this is the case, so our reliance upon norms and benchmarks from traditional models may actually be limiting our ability to deliver better data, insights, and user experiences.

In short, what if the gamified model is the right way to conduct research and the current model is the wrong way?

I’m a big fan of reality – the more real we make our research situations, the better, I believe, the responses we get, where better means more isomorphic with reality. That said, we need to be cautious about gamification as a goal in and of itself. Social psychology faced the same issues back in the ’60s, leading to Ken Ring’s seminal article about “fun and games” taking over research (J Exp Soc Psych, 1967). Researchers were trying to outdo each other with elaborate ruses designed to elicit behavior, eventually going beyond the bounds of good taste and good research, claimed Ring. There is potential for the same thing to happen here.

Edward, great points – and I concur that change for the sake of change is not the goal. Purpose-based innovation that demonstrates better data/decisions, however the client defines it, is what will lead to greater adoption.

Lenny, this is a perfect example of how many big clients are struggling with legacy systems, be they Brand Trackers, Ad Trackers, Ad Tests, Concept Tests, etc. What is “right or wrong” is (unfortunately) secondary to the thousands of normative case studies that, from a client POV, aren’t broken – so why fix them? Until Gamification can break through this glass ceiling, we’ll continue to see it play a niche role in the broader mix.

My observation from this work is that yes, these techniques can introduce bias (though I would challenge thinking of them as biases, since that assumes the existing data is correct; I would rather describe them as different responses), and it is critically important to conduct test-and-control experiments when implementing them to understand what impact they can have. But in my experience these differences can be overcome through thoughtful question design, and these more engaging questioning techniques generally appear to deliver more cross-market-consistent data.
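The test-and-control comparison described here can be sketched in a few lines. The cell data below is invented for illustration, and the standard-error formula is the usual Welch approximation – none of it comes from the GMI/Engage study itself:

```python
import statistics as st

def mean_shift(control, treatment):
    """Mean difference between two cells, with a Welch-style standard error."""
    diff = st.mean(treatment) - st.mean(control)
    se = (st.variance(control) / len(control)
          + st.variance(treatment) / len(treatment)) ** 0.5
    return diff, se

# Hypothetical 5-point ratings from matched respondent cells.
control = [3, 4, 3, 2, 4, 3, 5, 3, 4, 3]   # standard question wording
gamified = [4, 4, 5, 3, 5, 4, 5, 4, 4, 4]  # gamified question wording

diff, se = mean_shift(control, gamified)
print(f"shift = {diff:.2f} scale points (SE {se:.2f})")  # shift = 0.80 scale points (SE 0.33)
```

With an effect more than twice its standard error, the shift would be hard to dismiss as noise – which is exactly the kind of evidence needed before swapping instruments under an existing norm.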

@jon – would be interested in the work you’ve done; is there an open-access version of it you can share? Particularly interested in what you say about cross-market consistency – do you mean across countries?

@steve – same as with jon, would be interested in reading the article you reference, do you have a link?

@lenny – I don’t think it’s about right or wrong. Clearly a badly written, overlong non-gamified survey is going to contain many unquantifiable biases, including a new one that one might call “boredom bias” (or disengagement). However, I’m with Steve – research has to be as realistic, authentic and contextually attuned as possible within given constraints. So Gamification needs to be used carefully, and always with the objective in mind. Data norms are powerful, often as benchmarks – their power is also about relative relationships, not absolutes. So shifting things is like changing a measurement instrument – you have to know how to recalibrate with a new tool, Six Sigma stuff. Can we do that yet with Gamification – for what categories, countries, target audiences? Curious as to Jon’s study.

I am afraid there is not an open-access version of the paper. But I think it can be purchased for a very modest price from the ESOMAR website. There are two other gamification papers available on that site that I have had a hand in, for anyone who is interested in the gamification of research: one by Bernie Malinoff called “How far is too far”, presented at last year’s 3D conference, and the original gamification paper I wrote with Deborah Sleep of Engage Research, which we delivered at last year’s Congress (and on which the article you mentioned was based).

Jon – just to share the thought more broadly, perhaps there is a role for Gamification insertion later on in a survey. The piece doesn’t touch on the extent to which respondent fatigue can be addressed by some gaming mechanics – eg after 10 minutes or so. Do you have any learnings on that you can share? Bernie?

Nice discussion going on here. Let me add some thoughts on to the 4 points raised by @edward04:

1 – I don’t think that is a good example, since they are indeed different questions. But the main point here is that wording, whether it gamifies a question or not, will always introduce bias, or drive respondents to answer incorrectly, if it is not done well. Is there any worse source of bias than a bad translation? And how about a complex phrasing that respondents can’t understand?

Wording is an essential part of any self-administered survey, and using gamification, and other visual communication tools, is extremely important for giving respondents a better experience and therefore positively impacting the final results. As shown by @Jon’s experiments (we have conducted some as well in Latin America, with similar results), examples of gamified wording that can positively impact survey results include:

a) Direct and clear instructions = transparency (eg: accurate survey length, stages, how far is the respondent to finish, technical requirements (such as need to watch videos), etc.)
b) Challenges (respondents can provide richer responses and be more motivated to continue taking the survey when faced with a challenge)
c) Daily situations (appropriate wording used to bring respondents closer to a situation of their daily lives, generating therefore more accurate responses)
d) Entertainment (usage of specific wording + visual resources to provide a more fun and cool experience to participants)

2. And how about getting poorer responses on brand mentions due to a bad survey experience?
A richer response in this case for me means that an engaged respondent is more likely to spend more time taking the survey, putting more effort into providing accurate responses, writing more on open questions, and sharing more of their thoughts and knowledge. It’s about reciprocity: the more engaged and satisfied with the experience, the more he/she will give in exchange. So should you consider that a bias, or consider it your new reference point, one more accurate to reality?

3. Again… and how about “non-engaged” respondents, or respondents frustrated with their survey experience? Are they likely to give more negative evaluations on rating scales?
In our experience, engaged respondents tend to be more honest, more committed, and willing to share more of their time and knowledge. But of course, since there is a lot of historical data already available, it’s necessary to measure the impact of introducing new ways of asking questions and engaging people in surveys, even if the goal is more quality and accuracy.

4. The reality is that people spend more and more time connected, engaged in social networks, applications, interacting with friends, working or playing.
The reality is that we’ve learned to communicate in 140 characters, perform activities with the touch of our fingers, multi-task and broadcast messages to the world from our mobile devices.
The reality is that online surveys as they have been conducted in the last 10-15 years are not the reality of today’s consumers.
The reality for me is that the market research industry needs to change a lot, and the appropriate usage of “gamification” techniques can be a way to bring researchers and consumers closer together.
So is respondent engagement important? Yes, actually in my opinion a survival condition for survey research.

Adriana – I would have much to respond to in your lengthy comment. I’ll leave it at one – your statement “The reality is that online surveys as they have been conducted in the last 10-15 years are not the reality of today’s consumers” isn’t going to move the debate on, certainly not from my client perspective. It is an unqualified generalisation, and as such something I am surprised at in a research debate. Empiricism is surely the basis of all we do – and that within diverse categories, audiences, contexts, objectives and cultures. As a final note – and contradicting myself, having said I would only respond with one comment – if you read my original post, you will see that, to your point 1, it refers to the example given in the article by Jon/Deborah.

Edward,
Thanks for your comments. Yes, I agree it is a generalization when I say “The reality is that online surveys as they have been conducted in the last 10-15 years are not the reality of today’s consumers”, and I mean that nowadays I still see most online surveys being conducted as simple replications of their offline counterparts, with long questionnaires, poor design, and inadequate wording for engaging respondents. That is not the type of experience (eg spending 30-40 minutes responding to a poorly structured online questionnaire) people expect when performing online/digital activities. So in my opinion, yes, engagement is necessary, and gamification techniques, when used appropriately, positively impact respondent experience and study results (more qualified and richer responses). And I am saying this, of course, based on my own perspective and experience of 12 years working daily on online research projects from around the globe, with market research companies, panel firms and end clients.
As for point 1, I understand that was an example extracted from the article by Jon/Deborah. I just think it is not a good example of appropriate usage of gamification, since they really modified the question and we would expect the answers to the two questions to differ. And I totally agree with you that gamifying is a powerful tool, but it needs to be treated with care and validation.

Adriana – have you carried out any test and control projects that you can/will share? This helps the validation process greatly. Also – any hard data on actual lack of engagement in non-gamified online quant. surveys, however you may choose to define engagement, welcome.

Edward, yes, during the last 6-7 years we’ve conducted a lot of R&D and experiments applying gamification techniques to online communities, survey invitations, online questionnaires and even research games. We haven’t written a specific paper, but we have plenty of data and accumulated knowledge in that area. I will see if we can put together some info to share with you. Thanks.

A good article, but how quickly we forget interpretation bias. In your example, how do we interpret the clothes a respondent might pick out – and how do we interpret the fact that they’re picking out said clothes in a game environment rather than in a real dating scenario, a survey, or a qualitative interview? This is an example of the desperation of market research: we’re throwing out the only reason people take MR data seriously – we worked hard in the early years to account for biases like these. I’m not saying gamification is bad. I’m agreeing that we need to understand it, but I’m surprised (and disappointed) that the people with MR mindshare who commented here aren’t thinking about the complete picture. Where are the academics when you need them?

The article is more of food for thought than a thought in itself.
James: Almost everyone in the discussion, including the author himself, expressed their views on the use of gaming with cautionary tones. We all realize the ‘bias’ element of gamification, but we are not underestimating the virtue of ‘engaging’ consumers for better quality of data, and gamification’s potential to do that.
We’re beginning to use a gaming approach in qualitative studies on youth, and it’s giving us greater, richer responses. As you know, one of the biggest challenges with youth studies is the engagement issue, and we’ve addressed that here with gamification. At the same time, we apply our mind/consumer/brand understanding when doing the analysis so that we don’t end up with superficial consumer learnings. Good judgement (while designing the Q’re/DG and during reporting) helps us balance the use.
The discussion has been quite useful for me! Thanks Adriana, Edward and Leonard.

This makes for interesting reading. I work for a not-for-profit organisation supporting kids who live in the care of the government – affected by trauma and neglect. They are a tough bunch to get feedback from.

We are looking at using an online survey to gather some additional feedback. We are looking at Viewpoint. It has some tacky gamification and background designs. I’m trying to find out what the research says about these kind of surveys and the most important features of a survey – before we sign up with this Viewpoint group.

Could any of you point me to articles or evaluation reports that would help me?