I’m going to scream if you mention panel quality one more time #MRX

I received my research organization’s magazine today. Inside were many lovely articles and beautiful charts and tables. I quickly noticed one particular article because of all the charts it had, but the charts are not what caused my fury.

The article was YET ANOTHER one on panel quality. Yes, random responding, straightlining, red herrings. The same topic we’ve been talking about for years and years and years.

Now, I love panel quality as much as the next person and it is an absolutely essential component for every research panel. We know what the features of low quality are and how to spot them and how to remove their effects. We even know the demographics of low quality responders (Ha! Really? We know the demographics of people who aren’t reading the question they’re answering?) But this isn’t the point.

Why do we measure panel quality? Because the surveys we write are so bad, we turn our valuable participants into zombies. They want to answer honestly but we forget to include all the options. They want to share their opinions but we throw wide and long grids at them. They want to help bring better products to market but we write questions about “purchase experience” and “marketing concepts.”

I don’t want to hear about panel quality anymore. It’s been done to death. Low panel quality is OUR fault.

Tell me instead how you’re improving survey quality. How have you convinced clients that shorter is better and simpler is more meaningful? What specific techniques have you used to improve surveys and still generate useful results? Tell me this and I’ll gaze at you with supreme admiration.

8 responses

OK so I conducted 3 focus groups and my groups predicted the Canadian 2011 election 100%. I did the same for the last 4 elections. My track record is unblemished. I did it online so it was much cheaper. This is called quality???? Where is my dartboard? Maybe I can do the same thing.

In my opinion, issues with panel quality are due both to poor survey design and poor panel management:

– poor survey design: there is a disconnect between what end clients and MR agencies demand in terms of respondent quality and the effort they put into making a survey attractive. Responsibility is always placed on the panelists or the panel provider, yet hardly any co-creation process exists; MR agencies tend not to listen to panel providers’ advice, usually believing they are the only experts in questionnaire design (and most of the time they just copy-paste a CATI survey into a CAWI one).

– poor panel management: I admit a lot more could be done at the panel providers’ end. But as long as fieldwork windows get shorter and shorter, and the targets needed get more and more complicated, panelists will always struggle to complete a questionnaire, no matter how the sampling is done or which recruitment methods are used.
Incentives also matter. As CPIs get cheaper and cheaper, how can you ask someone to spend 35 minutes of their time on a boring and sometimes silly questionnaire and reward them with only a miserable euro?
For qual groups, aren’t we offering at least 5 euros per hour for their time?

Online surveys built their appeal on low costs and short fieldwork; we are now paying the price of that philosophy.
To reverse this trend, cooperation and partnerships between online panel providers and mainstream MR agencies are now mandatory, so that both parties can still benefit (happy panelists = happy respondents = quality responses).

I disagree with Glenn. Good panels exist, and not just from us. The largest players in the online panel space are taking data quality very seriously. We took the discussion of quality one step further and provide “surveyscore” data for surveys both after they have been fielded and BEFORE they are fielded. We ran over 2 million surveys through our model, and now have a predictive model for which surveys will cause people to abandon – and where they will. Any of our clients can enter their survey info into a simple interface and get both a score and concrete suggestions to improve it.

One thing that we thought would be predictive of engagement – the topic – turned out to be much less important than the construction of the survey itself (over 20 variables: things like questions per page, number of matrix questions, and others).

That’s good to hear. I’ve done similar work in the past. It was really quite fun to see exactly which variables were predictive of poor quality and drop-outs. Database mining for this purpose is probably underused.

Disagree somewhat. There is a distressing reliance within the industry on online panels — panels that have no proper frame, panels that aren’t projectable to any population, panels that produce widely varying results over time. To get good samples, your first thought can’t be to get an answer cheaply. Good research is inescapably expensive, more so today than ever. Then, of course, you have to have the right instrument, the right flow, you have to put yourself in the respondent’s shoes and consider how the survey will unfold for them. If you want useful answers, you have to ask the right questions…

I too believe there is an over-reliance on panels. BUT, my philosophy is I don’t care how unrepresentative a panel or focus group or co-creation or neuroscience or social media research project is. If you have a proven track record of accurately predicting the marketplace, then you can do whatever you want. And nope, this kind of accuracy and validity does not come cheap. Quality costs time and money.

