Posts Tagged ‘Focus groups’

Listening to the pundits speculate about why the polls failed to predict the Clinton win in New Hampshire, why they had prematurely counted out McCain, Huckabee and Obama, why they had paid so much attention to Thompson, and why they can't call the nominees yet has caused me to think about experts and expertise, particularly in marketing.

The presidential elections are intensely competitive, often interactive, marketing campaigns. As such, they fascinate me.

For once, the political pundits were talking about something that I know a lot more about than they do. They were talking about marketing research, even though they talked in terms of polls and elections, rather than surveys and purchases. Polls = surveys and elections = a purchase, or product selection. What’s different is that in elections, the products talk (I know, the analogy breaks down with Elmo).

What was amazing to me was that the pundits were doing exactly the same thing as the marketing executives, agency executives and consultants I have been observing for decades. They blindly project past behavior and intentional data (what people say they will do) into the future. They are very, very often wrong. But they are highly paid, so they have to look right. So what do they do? They make up plausible stories.

Sometimes, they even go out and find the data that will back up their stories. What’s going on here?

The network's producers call up an expert whom they are about to book for a show. They ask that expert what he or she thinks. Producers are not going to provide an airplane and limousine to the studio for an expert who is simply going to say, "We just don't know." They want people with definite opinions, strongly held. They want controversy. They want plausibility. The pundits are all too willing to provide that. They spin out plausible explanations with great certainty.

The same is true with the marketing pundits, when they are making predictions.

I’m standing up, like that little boy in the Emperor’s New Clothes, and telling you there’s nothing there. They don’t have a clue.

Not when they're explaining the past, predicting the future or describing the present. Why? Because there is much more that they don't know than they do know. Furthermore, they don't realize it, because you don't know what you don't know. So, while you do know that you don't know some things, you severely underestimate how much you don't know.

So, they are saying things like the Clinton voters showed up in greater numbers than expected because they were angered by the other candidates ganging up on Hillary, or because Hillary became choked up. Never mind that this was not reflected in the exit polls. They quote their mothers, their friends, or passersby in the streets.

The actual experts in polling, who are, for the most part, pretty dull and therefore don't make it to television interviews, are saying that it was probably the Bradley effect (black candidates do worse in the actual elections than they do in the polls), or the fact that lower-income people do not like to be interviewed, have a higher refusal rate, and favor Clinton. They would be likely to refuse both the pre-election polls and the exit polls. These, to me, are the most likely explanations, but they are not as politically correct as the others. Notice that I said "most likely." The so-called experts rarely use this or equivalent phrases. The fact is that we don't know, and may never know.

So, what are we to do? In politics, we have to make predictions. In business, we have to make forecasts. One very successful marketing vice president, who came up from marketing research, said when I once asked him about forecasting, "Give them a date, and give them a number, but never, ever at the same time."

I used to be asked, “Based on the focus groups, will the product be successful?” I suspect that my answers were rather disappointing. Now, I am never asked that question because I make it clear beforehand that focus groups and surveys can’t predict a product’s success, although they can sometimes predict a product’s failure when there is a fundamental flaw in the product. There are just too many things that have to go right for success. For instance, there is no way to predict what competitors will do. What if an iPod or an iPhone comes along? What if you are Alta Vista, doing a great job, and a Google comes along?

There are last-moment, decisive factors that hit people when they are in the privacy of the voting booth, are about to click their product choice, or are standing at the shelf in the store.

People do not know what they are going to do. They don't know what they will buy or not buy (or how they will vote). They do not know how strongly held their preferences are. They do not know "what it would take to get you to buy the product," a favorite, stupid question that marketers like to ask. Or, "On a scale of 1 to 10, 1 being least likely and 10 being certain, how likely are you to vote for your previous choice?" (Who says there are no stupid questions?)

If people (including pollsters) can’t predict their own behavior, how do you think they’re going to predict others’?

So, what can focus groups, polls and surveys tell us? They can tell us about many obvious and hidden attitudes, opinions, beliefs, wishes, fears, etc. that may need to be addressed. They can tell us, for instance, that people are frustrated because their music libraries are a mess. They can tell us that the iTunes/iPod system of keeping them organized addresses that frustration. They can’t tell you that these will displace the ubiquitous Walkmans and CD players. They can’t tell you that these will take over the music industry.

They can't tell you that an obscure Arkansas governor (Bill Clinton), who had lost the first primaries, could go up against a wildly popular president who had just won the first Gulf War, a president the Democrats were despairing of running against, and go on to win the presidency.

The Taurus, wildly popular in its time, was ridiculed as a "jelly bean" in focus groups. The VW bug, as well as its revived version decades later, was also ridiculed, but found its niche among buyers who probably weren't well represented in the surveys and focus groups. Respondents loved the Edsel and New Coke.

The Oracles are frauds. Predicting is a con game. Historians are fiction writers. Stock pickers are just racetrack touts. Forecasting is only on target by chance. Get it?

Sometimes, you just have to refine your guesses by marketing research, then put them out into the marketplace and let reality decide.

The main lesson: Clues are clues. Reality is reality. Sometimes they coincide. Sometimes… You get the picture.

In an article last Monday in the Wall Street Journal, reporter Emily Steel described the growing trend of using online social networks — both existing and company-encouraged — for marketing research. It’s a very dangerous trend, as I point out in my letter to her. Many companies are headed for disaster if they give undue weight to the opinions expressed on their online networks.

The article, however, didn't cover the major pitfalls, of which there are many. (Full disclosure: I am a marketing consultant who runs face-to-face focus groups and telephone focus groups. I'm a founder of the Qualitative Research Consultants Association and a member of its Professionalism Committee, although I am speaking officially for neither.)

I have rejected the methodology of online groups for reasons enumerated in detail in the following article:

In brief, the written word does not allow for the reading of emotions that live focus groups do, and the reading of these emotions is absolutely necessary for the interpretation of the results. (E.g., How enthusiastic are they? Are they hesitant? Are their remarks ironic and sarcastic? Are they coming from their heads or hearts? Are they mildly annoyed or royally pissed off?)

However, these kinds of standing panels have a problem that I didn't discuss in the article: sample bias. As you describe, people are continually dropping out of the group and being replaced. This severely skews the kind of people who remain in the panel, in ways that are virtually impossible to account for when interpreting the findings. It is well known that participators are radically different from non-participators and ex-participators. For instance, the more enthusiastic and/or lonelier people (including social misfits) probably tend to stay in. So, you keep the enthusiasts, for whom the panel becomes a part of their social life (as mentioned in your article). They are sometimes less prone to criticize, but sometimes more prone to criticize. The point is, one never knows. But as this sample becomes more and more distorted and unrepresentative of real customers, you have a disaster waiting to happen.

On the other hand, these panels are a wonderful source of ideas and a way to make sure that certain actions and wording do not antagonize loyal customers. But they are notoriously unpredictive of success in the marketplace. The Edsel automobile and New Coke are but two of many examples of going to the wrong people and asking the wrong questions. Many of the dot-com failures were guided by discussions on company forums, forgetting that most real people do not hang out on forums, especially for prolonged periods of time.

I hope that, as a reporter, you will follow this phenomenon. You will have many juicy disasters to cover. When you ask, “What were they thinking and why were they thinking it?” I hope that you will keep this letter in mind when they tell you “That’s what our customers told us they wanted.”