Marketing Strategy and Pricing



This is a long quote from a 1967 article published in the Journal of Industrial Economics*. The paper was written as a response to Galbraith's theory of consumer sovereignty.

The sensible manufacturer works with the environment, not against it. He tries to satisfy desires, latent and patent, the consumer already has; it is much cheaper than creating new ones.

First, he tries to identify these desires. To do this he now has all the aids of marketing research. If he only researches into which detergent the consumer considers to wash cleanest, he may miss the fact that the consumer now also wants her detergent to be pleasantly perfumed.

That is why so many of the new products even of the biggest firms fail miserably in test market. It is rarely because they are poor products technically. It is because there is something in their mix of qualities that fails to appeal to the consumer.

Once the manufacturer has found out what he thinks the public wants, he has to embody it in a product.

When the manufacturer does find an answer at a reasonable price, he still has to sell it to the public. He may think the answer will work; he may feel the price to be reasonable. He does not know whether the public will see it as he does.

If you go further back, you will most likely find yet another article saying the same thing in more arcane language.

Fast forward to the present day and you will find exactly the same concepts packaged in many different ways. Every guru has a name for them and wants us to believe none of the existing methods work.

Unfortunately, when the audience suspends its skepticism or the guru is popular enough, these re-packaged ideas take root.

There really is nothing new in marketing. Only new catch-phrases that fit the language of the time.

*You can find a copy of the paper through your local library's EBSCOhost access.

Quick! Write an inequality using two '>' (greater than) signs and these three terms:

Product

Strategy

Business Model

Depending on where you stand and which articles you read recently, there are six possible permutations. If you recently read what Fred Wilson, a venture capitalist, wrote, you are most likely to write down

Product > Strategy > Business Model

Is that all there is to it? According to research done by four business schools, this permutation defines only one of two classes of VCs. More precisely, there are two schools of thought on how VCs make investing decisions. The second class of VCs believes the right permutation is,

Strategy > Business Model > Product

While Fred Wilson makes a compelling case for getting product-market fit right first, then defining your strategy, and only then worrying about making money, a VC in the second category will argue, equally eloquently, that strategy (making choices about which segments and needs to serve) comes first, finding how you add and capture value (the business model) comes next, and defining the offering (the product) comes last.
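
As an aside, the six orderings mentioned earlier are simply the permutations of the three terms, which a few lines of Python can enumerate:

```python
from itertools import permutations

terms = ["Product", "Strategy", "Business Model"]

# 3! = 6 possible orderings of the three terms
orderings = [" > ".join(p) for p in permutations(terms)]
for ordering in orderings:
    print(ordering)
```

The two orderings discussed here, "Product > Strategy > Business Model" and "Strategy > Business Model > Product", are just two of the six.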

Effectual: Instead of doing market research, competitive analysis, value analysis, etc., go build something and keep iterating on it while building a growing customer base. Then worry about strategy and business model.

Causal: Start with customer segmentations and their unmet needs (or jobs to be done). Make choices about the right segment to target first and understand its value perception, alternatives and willingness to pay. Define a product version that serves that segment and offer it at a price they are willing to pay.

There exists a class of VCs who apply effectual reasoning and there exists another that applies causal reasoning. You can see Fred Wilson falls in the effectual bucket.

So when you have two classes of entrepreneurs and two classes of VCs, the next obvious question is which pairs would work well together. The aforementioned research suggests that cognitive similarity ("I like how you think") is a decisive factor in how VCs choose to invest in startups.

Their study was conducted with 49 partners from different VC firms, by presenting them with 16 different hypothetical investment opportunities and asking them to rate how likely they were to fund these ventures. From these 784 data points, the researchers employed conjoint analysis to tease out the influence of individual factors on the VCs' decisions. This approach is far better than stated-preference studies that simply ask VCs for their ratings, and than data-mining studies that succumb to data errors.
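
The mechanics of rating-based conjoint analysis can be sketched in a few lines. This is a minimal illustration with made-up profiles and ratings (not the study's data, and hypothetical attribute names): each venture profile is coded by binary attributes, and ordinary least squares recovers the "part-worth" each attribute contributes to the rating.

```python
import numpy as np

# Hypothetical profiles coded on three binary attributes:
#   [cognitively_similar, experienced_team, large_market]
profiles = np.array([
    [1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0],
    [0, 1, 1], [0, 1, 0], [0, 0, 1], [0, 0, 0],
])
# Illustrative funding-likelihood ratings (1-7 scale) from one rater
ratings = np.array([7, 6, 6, 5, 4, 3, 3, 1])

# Add an intercept column and fit by ordinary least squares;
# the coefficients ("part-worths") show each attribute's pull on the rating.
X = np.column_stack([np.ones(len(profiles)), profiles])
partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)

labels = ["baseline", "cognitive_similarity", "team_experience", "market_size"]
print(dict(zip(labels, partworths.round(2))))
```

With these made-up numbers, the cognitive-similarity coefficient dwarfs the others, which is the kind of pattern the researchers teased out of the real ratings.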

The number one deciding factor? How similar the thought processes of the VC and the founder are. The researchers call this cognitive similarity, which has nothing to do with race, nationality, education, gender or other demographic characteristics. It is about how a founder thinks and how similar that is to the VC's thought process. The higher the similarity, the greater the chances of getting funded.

Everything else, including the perception of the team, its experience and commitment (human capital), is influenced by the VC's reading of the founder's thought process.

“A founder who demonstrates cognitive similarity with a VC is more likely to be perceived in a positive light, and viewed as better positioned to make effective use of his or her human capital”

All the other positive attributes we hear about, the product's competitive advantage, scalability, the founding team's ability to hustle, their focus, etc., seem to be bestowed after the fact.

What does this mean to you as a startup founder seeking venture funding?
You are better off seeking out VCs who think like you do about product, strategy and business model. If you think market demand and opportunity size first and pitch to Fred Wilson, you are most likely going to come back empty-handed. On the other hand, you at least get to play if you think product-market fit first. Knowing how you reason, and seeking as venture partners only those who think like you, saves a lot of wasted time and agony.

Will Fred Wilson and other VCs admit to this influence of cognitive similarity in their investment decisions? More broadly, do VCs know and admit to the influence of cognitive similarity on their funding decisions?

No, they do not recognize this hidden factor, and I expect comments from a few saying so. In the same study that teased out this hidden factor, the researchers asked an explicit question about how much weight VCs place on cognitive similarity with founders. VCs rated it the least important factor, but when they had to place a bet given a profile of a venture and its founders, the hidden influence of cognitive similarity came out loud and clear.

Finally, is Fred Wilson right? Is effectual better than causal? The proponent of this classification, Professor Saras Sarasvathy, goes one step beyond this mere classification. She argues great entrepreneurs are ‘effectual’. They opt for doing things vs. analyzing things. I do not subscribe to this latter part of her theory regarding what defines entrepreneurial greatness.

No sooner do you let it be known, mostly inadvertently, that you are about to send out a survey to customers than the incessant requests (and commands) start coming from your co-workers (and bosses) to add just one more question to it. Just one more question they have been dying to find the answer to but have never gotten around to running a survey, or doing anything else, to answer.

Just one question, right? What harm can it do? Surely you are not opening the floodgates and adding everyone's question, just one question to satisfy the HiPPO?

Maybe I am being unfair to our colleagues. It is possible it is not them asking to add one more question; it is usually we ourselves who are tempted to add just one more question to the survey we are about to send out. If survey takers are already answering a few, it can't be that bad for them to answer one more?

The answer is: yes, of course it can be really bad. Resist any arm-twisting, bribing and your own temptation to add that one extra question to a carefully constructed survey. That is, I am assuming you did carefully construct the survey; if not, sure, add them all, since the answers are meaningless and unactionable anyway.

To define what a carefully constructed survey means, we need to ask, "What decision are you trying to make with the data you will collect?"

If you do not have decisions to make, if you won't do anything different based on the data collected, or if you are committed to whatever you are doing now and are only collecting data to satisfy an itch, then you are doing it absolutely wrong. In that case, yes, please add that extra question from your boss for some brownie points.

So you do have decisions to make, and you have made sure the data you seek is not available through any other channel. Then you need to develop a few hypotheses about the decision. You do that through background exploratory research, including customer one-on-one interviews, social media analysis and, if possible, focus groups. Yes, we are actually paid to make better hypotheses, so you should take this step seriously.

For example, your decision is how to price a software offering, and your hypotheses are about the value perception of certain key features and consumption models.

Once you develop a minimal set of well-defined hypotheses to test, you design the survey to collect data to test them. Every question in your survey must serve to test one or more of the hypotheses. On the flip side, you may not be able to test all your hypotheses in one survey, and that is okay. But if a question does not serve to test any of the hypotheses, it does not belong in that survey.
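
The rule above is mechanical enough to encode. A minimal sketch, with hypothetical survey content: every question declares which hypotheses it tests, and anything that tests none gets flagged for removal.

```python
# Hypothetical hypotheses for a software pricing decision
hypotheses = {
    "H1": "Buyers value feature X enough to pay a premium",
    "H2": "A subscription model is preferred over perpetual licenses",
}

# Each question lists the hypotheses it serves to test
questions = [
    {"text": "How much more would you pay for feature X?", "tests": ["H1"]},
    {"text": "Would you rather subscribe monthly or buy outright?", "tests": ["H2"]},
    {"text": "What is your favorite color?", "tests": []},  # the boss's extra question
]

# Questions testing no hypothesis do not belong in this survey
orphans = [q["text"] for q in questions if not q["tests"]]
# Hypotheses with no question are fine; they wait for a later survey
untested = [h for h in hypotheses
            if not any(h in q["tests"] for q in questions)]

print("Questions to drop:", orphans)
print("Hypotheses deferred:", untested)
```

The orphaned question is exactly the "just one more question" this post warns about.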

The last step is deciding the relevant target mailing list for the survey. After all, there is no point in asking the right questions of the wrong people.

Now you can see what adding that one extra question from your colleague does to your survey. It did not come from your decision process, it does not help test your hypotheses, and it is most likely not relevant to the sample you are using.

In finding the first customer within their immediate vicinity, whether within their geographic vicinity, within their social network, or within their area of professional expertise, entrepreneurs do not tie themselves to any theorized or pre-conceived "market" or strategic universe for their idea. Instead, they open themselves to surprises as to which market or markets they will eventually end up building their business in or even which new markets they will end up creating.

While a traditional (read: established) business starts with well-defined markets, segments, targeting and product positioning to reach end customers, entrepreneurial ventures start with the single customers they have and move up to market definitions, sometimes creating new markets in the process. Of course, once they reach a market definition, the ventures become established enterprises and revert to the first flow for their decision making.

The problem is the risk involved and how few startups actually move past each stage. The fact that some have made it does not mean any startup can succeed by starting with a few available customers, identifying more and moving on to define an entire market. What it really means is that, as many startups test their hypotheses on different customers, a few will eventually traverse the path to defining the market.


Today there was a survey from Cowan and Co that was written about in almost every blog. It is about the effect of the iPad mini on iPad sales. Most quickly jumped to the obvious conclusion (in articles with catchy headlines too) that the iPad mini will add significantly to Apple's profit with "inconsequential" cannibalization of the iPad.

Here is my reading of the survey results, using the data from Cowan and Co, and the problems with this survey and hence with its conclusions.

First, this is a stated-preference study: it measures the attitude of the customer, not actual behavior. It is well established in the marketing research literature that stated-preference surveys overestimate behavior at the point of purchase.

Second, the survey question was specific to the iPad mini, asking only whether respondents would buy an iPad mini in the next 18 months. It did not ask about other tablets in their consideration set, nor whether they planned to purchase a Kindle Fire, Nexus or Nook. That is too narrow; it anchors respondents on a single choice and ignores other possibilities. Had they asked, "Which tablet will you buy?" and reported the percentage distribution across different tablets, it would have been much better.

Third, they slice and dice the 24 respondents who reported switching from another device to report cannibalization and conversion from other tablets. Twenty-four is too small a sample to make any meaningful estimate, especially when 8 report switching from the iPad and 3 from the Fire. If these were accurate estimates, they would also point to an even smaller impact on the Kindle and other tablets.
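
To see just how little 24 respondents tell us, here is a quick sketch of the 95% confidence interval around "8 of 24 switched from the iPad", using the standard normal approximation for a proportion:

```python
import math

n, k = 24, 8          # sample size, reported iPad switchers
p = k / n             # point estimate: one third
se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se   # 95% confidence interval

print(f"Estimate {p:.0%}, 95% CI roughly {lo:.0%} to {hi:.0%}")
```

The interval runs from roughly 14% to 52%, far too wide to support any confident claim about cannibalization.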

Fourth, a more significant problem with the question "What device will the iPad mini replace?" is that it is just plain wrong: respondents were not primed to compare the price and value of each option. One right way is to run a (choice-based) conjoint analysis to find the respective shares of the different tablets: iPad, iPad mini, Fire, Nexus, etc.

Fifth, let us take the 6.1% new-buyers number at face value. You can interpret it as: 6.1% of all those who would buy a tablet would choose the iPad mini. That is, the iPad mini is not bringing many new buyers into the market. Had they asked what tablet respondents would buy, this number would likely pale in comparison to the others. So it is overreaching to say, "its low price will bring in new customers".

Netting it out, there is not enough validity in the data to make bold predictions about the iPad mini. There are indeed many uncertainties, and they are not considered, let alone quantified, by this study. What you have is someone's wishful thinking supported by non-scientific sampling and analysis.


As I previously wrote, Google Customer Surveys is a true business model innovation. It helps publishers unlock value from their digital assets and enables market researchers to reach new audiences they otherwise would not have found. I expressed my reservations about its positioning in my previous article.

But I do not get what they mean by "look for correlations between questions", and I definitely don't get "pull out hypotheses". It is we, the decision makers, who make the hypothesis in hypothesis testing. We are paid to make better hypotheses that are worthy of testing.

Since I wrote that article, their product manager emailed to say they removed the statement about "pull out hypotheses".

This is a limited tool, with the ability to ask just one question and no way to ensure that the same user will answer multiple questions for customer-level analysis.

One more limitation is the minimum sample size: you cannot order fewer than 1,000 samples.

Despite these reservations I see Google Customer Surveys as an effective tool for product/brand managers, researchers and small businesses for these purposes:

1. Aided Recall: Present a choice of different brands and ask respondents how many of them they recognize.
When you are trying to get very quick, high-level data on customer awareness of or preference for your brand, this is a great tool. The results are especially actionable when you get extreme results, like no one knowing about you.
If you are trying to find which brand they recognize the most, you can do that as well with a different question type. However, due to its question-format limitation, Google Customer Surveys cannot help with unaided recall.

2. Finding the Consideration Set: Present a choice of different brands and ask respondents which of them they would consider buying to solve a particular need. This is similar to aided recall, but the question is more focused: you are not simply asking about awareness but whether your brand makes it into their consideration set.

3. Brand Association: Present an image or a statement and ask respondents to pick the tag-line or brand they believe goes with it. Another variation is asking them to associate your brand with an unrelated field. A typical example: "If our brand were a movie actor, who would it be?"

The ability to use images is very powerful and creates many opportunities, for example testing your advertising copy or the images in your collateral. It is better to poll your audience on whether the image you used looks more like a bean bag or a boxing glove before you launch your expensive advertising campaign.

4. Consumer Behavior Research: This is a whole class of hypothesis testing you can do with Google Customer Surveys. While it is not a tool for A/B split testing, you can use it to test hypotheses about customer preferences or their susceptibility to anchors and other nudges. Before collecting results, you need to specify a reasonable hypothesis that is worth testing. Once you collect data, you can run a chi-square test for statistical significance. Do keep in mind that sometimes the data can fit more than one hypothesis.
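
The chi-square check is simple enough to do by hand. A minimal sketch with made-up counts from a hypothetical two-option anchoring question, where the null hypothesis is a 50/50 split (no preference):

```python
# Hypothetical responses: option A vs option B
observed = [130, 70]
total = sum(observed)
expected = [total / 2, total / 2]   # 50/50 under the null hypothesis

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for 1 degree of freedom at the 5% level is 3.84;
# a larger statistic means we reject "no preference".
print(f"chi2 = {chi2:.2f}, reject null at 5%: {chi2 > 3.84}")
```

With these counts the statistic is 18.0, comfortably past the 3.84 cutoff, so the made-up data would let us reject the no-preference null.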

There is, however, a big limitation in the length of the questions you can ask.

There you have it: a tool with limitations, but effective for specific uses. It opens up new ways to collect data and test hypotheses where none existed before.

A corollary to this post would be the cases where you should not use this tool, including finding the price customers are willing to pay, or asking them how important a single feature is. You will have to wait for another post for the reasons.