Technology is simply brilliant! If I didn’t already embrace that fact, “60 Minutes” reinforced it for me with their recent segment about Kai-Fu Lee, the “Oracle of Artificial Intelligence.”

One of Mr. Lee’s companies, Face++, has deep-learning facial recognition software that can tell educators which students are engaged, bored or confused during classroom lectures. Teachers can see at what point during the lecture these responses occurred, and can follow up with the individual students.

And modern kitchens can re-order supplies for you: your fridge can note that you are out of milk or eggs, or that you have used up food you purchased, and automatically contact your supplier so that new food arrives before you even knew it was needed.

But the practical success of technology has its limitations. Teachers can be alerted as to which students were excited or confused by the lecture, but the software does not know “why” it happened; it knows nothing beyond the facial response it identified. “60 Minutes” notes that “a typical AI system can do one thing well, but can’t adapt what it knows to any other task.”

The Girl Scouts’ best sellers year after year – Thin Mints, Peanut Butter Patties/Tagalongs and Caramel deLites/Samoas – all contain chocolate. But I don’t eat chocolate (doctor’s orders). Which means every year I hold my breath until I learn whether my favorite Girl Scout Cookie (GSC) – the Lemonade – is still on the list. And thankfully, it is being offered once again in 2019. I ordered 5 boxes from my co-worker’s daughter.

My no-chocolate policy makes me a “niche” consumer of not only GSCs but of snacks and candies in general. When Hershey came out with Hershey Gold in 2017 (essentially a chocolate bar without the chocolate), I thought they had developed it just for me! I hadn’t eaten a candy bar in years until I tried that one. Now it’s my go-to when I need a little something sweet.

But the problem with niche markets is they are small by definition. I worry that with a limited market, eventually Lemonades and Hershey Gold will be dropped in favor of more popular products.

I was reading Russell Perkins’ blog (Russell and his firm InfoCommerce Group help clients with product strategy and new product development) about the use of Customer Lifetime Value (CLV) in Relationship Scoring. He notes a new trend of applying CLV across all customer touchpoints.

As researchers, we are quite familiar with the concept of CLV. It seeks to take into account everything known about a customer (factors ranging from past purchase and payment behavior to credit score, income or level of education) in order to determine the value that customer is likely to bring over their lifetime.

Relationship scoring then uses this value to determine how the customer should be treated. High-value customers are given perks ranging from better customer service to special offers. Is this idea really all that new?

In many respects it is not. Long before advanced algorithms, firms recognized that some customers were more valuable than others. For example, a good butcher knew how important each customer was and provided perks accordingly (like setting aside the best cuts of meat).

I spent a very long time on Amazon and Staples.com recently trying to find a replacement for the little hand-held gadget that I use to clip crossword puzzles out of my newspapers. Typing “Paper cutter” in the search box didn’t get me there. And neither did “Scissors”. I eventually gave up. It wasn’t until I saw it in a good old-fashioned mail-order catalog that I learned it is called a “Gift wrap cutter.” And so I was finally able to go back online and order one.

There are two important lessons here for researchers.

The first is never assume that the way you refer to something is universally understood.

Our clients use a lot of acronyms and short-hand to describe their products, much of which is insider-speak. Testing that terminology with an uninitiated audience helps to overcome this problem. At TRC we have a group called the Questionnaire Review Committee: a member who is not involved in the project reviews the survey instrument prior to fielding, and anything that isn’t understood is flagged for further review.

The second is that potential customers may not consider themselves as being in the market for a given product, even if they are.

I didn’t realize I wanted a gift wrap cutter, but it turns out that’s exactly what I needed. A potential research client who isn’t aware of the pricing research options available to him can’t search on “conjoint study providers” to find a suitable research partner. But he can search on his business objective: “how to price a product” or “pricing research”. And when that search sends him to us, we can tell him that a conjoint study is an appropriate approach.

The way products and services are presented in the marketplace - their names, labels, tags and descriptions - is important. But if potential customers don’t know of your product, or don’t know how to describe it, we can still reach them based on their need, the job to be done, or a solution to a problem that the product offers. In a research questionnaire, screening for “likelihood to purchase X product” may not capture the same range of potential customers as “likelihood to purchase a product that does Y” would. We need to keep this in mind when deciding who does and doesn’t qualify as a prospective customer in our research questionnaires. And also in marketing our own services.

As researchers, we are always interested in understanding consumer choices. We ask respondents to rate importance, rank priorities, and trade-off among complex configuration scenarios. We discuss and make our own trade-offs in terms of design simplicity, project cost and informational objectives. And sometimes we try new approaches.

Anyone who reads the news is aware that there is a whole new consumer product category on the horizon: marijuana is now legal for medical purposes in over 30 states and for recreational use in at least 9 of those states (as of this writing). In partnership with NJ Cannabis Media (www.njcannabismedia.com), we took the opportunity presented by this new consideration dynamic to test a new choice evaluation strategy.

Essentially, we were interested in understanding the roles of 4 factors in adult recreational-use marijuana purchase decisions. A constant sum exercise (allocation of 100 “importance points”) among self-reported current and potential users yielded the following priority distribution:

I grew up in a family that always seemed to be on the hunt for a good sale. Whether it was clothing, electronics or home goods, we never passed up an opportunity to buy an item for MUCH less than it cost two days ago. Now, don’t get the wrong impression, I’m not an extreme couponer by any means. Nor do I sit outside in the cold for hours waiting for the doors to open on Black Friday. But a good sale has my name written all over it—especially clothing. Food and beverages, however, were never really something that we compromised on in terms of price, especially since I grew up in a household that favored organics. We were 100% guilty of buying a product we knew of and trusted rather than a cheaper, lesser-known alternative. I never even thought to buy “Toasted Whole Grain Oats” on the bottom shelf when “Cheerios” was right at eye level, from a brand I knew and trusted. And let’s be honest—“Cheerios” just has a better ring to it.

Best Way to Measure Brand Value, Price and Size of a Product

I always wondered how companies knew exactly what consumers wanted. It never occurred to me that there were survey techniques that could be used to identify needs as well as the best way to price, size and package a product. That is, until I started at TRC. I just assumed that companies use the standard question and answer format in order to get the information they desire. Once I began working on surveys, I learned that some companies were already utilizing conjoint analysis to better understand their consumers’ needs, interests and perceptions. By showing several variations of the same product and only changing minor features across each group (such as the size, cost, packaging and other attributes), these surveys seek to identify the characteristics most appealing to respondents. This, in turn, provides companies with data they can use to assess their brand equity, while also seeing what they can do to modify their product and/or packaging, ultimately to increase sales. But is this enough to retain loyal customers?

What about Off-brand Products?

It wasn’t until I graduated from college and started working full time that I learned there are stores essentially devoted to off-brand products – and that they are actually thriving. Imagine a store filled not only with “Toasted Whole Grain Oats,” but an off-brand product for everything you use. Yeah, that’s Brandless.com and stores like Aldi. Since realizing that I can get fundamentally the same item for much less, these types of stores have easily become some of my favorite places to shop. They have very minimal, if any, advertising throughout the store, so there is really no pressure when it comes to buying anything. Brandless.com is just as it sounds: it takes the brand name off of every single item and shows you exactly what you are paying for. And to make it even better, both of these retailers have organic products for a fraction of the cost. But are these types of stores currently being seen as a major threat to well-known companies?

Can Optimization Conjoint Research Be Further Optimized?

Typically, when doing research, we tend to include only brand-name products and services, which makes sense seeing as those are the ones that are most popular and come to mind first. However, now that there’s this new segment of stores totally devoted to off-brand products, it may be time to break the mold and include generic and “brandless” companies when conducting choice research to measure brand equity. When it comes down to it, it’s only a matter of time before products like “Toasted Whole Grain Oats” begin to take over. How prepared is your company?

Last week I was watching the CBS morning news and they had a story about a new study that indicated that your attitude toward gym class as a child shaped your attitude toward exercise your entire life. After watching the story I am convinced that it is another case of correlation being confused for causation.

The basics were that kids who reported loving gym class were far more active decades later than kids who reported finding it stressful (“I was always picked last”). I don’t doubt this correlation. My problem is that they spoke of a need to make gym class more inclusive so that all kids grow up to exercise more. In other words, if we can take the stress out of gym for kids who are not good at sports, we can get them to love exercise more. I’m not sure achieving the first part of this is possible, and I’m even more certain that even if we do, it will not alter future behavior.

I don’t see how you can eliminate the anxiety about gym class without eliminating the physical activity. Sure you could eliminate picking teams and save that humiliation, but once the games begin the kids who are poor at sports will continue to feel anxiety. Even if you simply make it an exercise class, the kids who are out of shape will stand out. Short of one-on-one classes I don’t see how you can fix the problem.

Doing so is also not likely to make us more active as adults. The kids who were not good at gym were not good for a variety of reasons, but likely they either lacked the natural talent OR, more likely, the interest in sports that the athletes had. Gym class wasn’t the cause of this! If there had been no gym class, I would bet that the kids who didn’t like gym class would still be less active than those who did (those who loved it and/or were good at it). I’d point the “causation arrow” backwards…if you love sports as an adult, you probably liked gym class.

Statistical Principles Should Be Explained

It is easy to forget that we have to play a role in explaining statistical principles in our reporting and not just when doing sophisticated work like Discrete Choice Conjoint, Max-Diff, Segmentations and Regressions. Our direct clients likely understand causation and correlation issues, but it is important to know that their internal clients may not. Clear justification for pointing the “causation arrow” must be provided in reports and presentations. Just as important is knocking down attempts to point the arrow based solely on correlation. Otherwise they may walk away with a completely false assumption and not double back with researchers to validate it.

Can You Project Results to the Population?

This study is also useful in highlighting another common mistake made by internal clients. I was telling an old friend about the study and he said “that can’t be right, you hated gym class and you are far more active now than when we were kids”. Imagine me in a focus group telling my story of how much I hated waiting to be picked for a team and how my memory of that humiliation caused me to exercise more and more as an adult. The internal client stands up and says “That’s how we make people healthier…more humiliation in gym class!” In that case, someone will be in the room to point out that one person’s story is not projectable to the population…but that’s another blog.

It’s well known that humans respond to personalization. But as consumers, do we respond more when our name is used when we are being sold to, and if so, why? Specifically, are we more likely to react positively to marketing emails that include our name? It turns out that we do indeed, as revealed by an interesting new study to be published in the INFORMS journal Marketing Science (authored by Navdeep Sahni and Christian Wheeler, both from Stanford, and Pradeep Chintagunta of the University of Chicago).

The researchers were specifically interested in understanding whether including a consumer’s name in the subject line of an email had a positive effect – in terms of the number of emails opened as well as subsequent conversion into sales leads. They ran a classic A/B test where everything was controlled to be the same except the inclusion of the consumer’s name in the subject line. This one tweak was sufficient to increase the probability of opening the email by 20%, which then translated to a 31% increase in sales leads and a 17% reduction in those who wanted to unsubscribe.
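Differences like these are straightforward to sanity-check statistically. Below is a minimal sketch of the two-proportion z-test behind a classic A/B comparison; the counts are hypothetical (the study’s raw sample sizes aren’t reproduced here), chosen only to mirror a 20% lift in open rates:

```python
import math

def two_proportion_ztest(opens_a, n_a, opens_b, n_b):
    """Two-sided z-test for a difference in proportions (e.g. email open rates)."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    p_pool = (opens_a + opens_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts mirroring a 20% lift: 1,000/10,000 opens without the
# name in the subject line vs. 1,200/10,000 with it.
z, p = two_proportion_ztest(1000, 10_000, 1200, 10_000)
```

With these made-up counts the lift sits several standard errors away from zero, which is the kind of result that lets researchers call a 20% difference real rather than noise.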

What is interesting here is the nature of the manipulated content. It is non-informative about the product and its benefits, yet still has a significant impact on the consumer’s behavior. This would seem to imply that the effect should be generalizable to other products and contexts as well. To test this they ran two more studies where the products differed as well as the relationship of the consumers to the particular companies. The results were consistent with the first study, establishing the generalizability of the results. “Aspects of the advertising message that are seemingly unrelated to the product can affect how consumers process the message, and significantly change outcomes,” said lead author Navdeep Sahni.

There is then the question of why this occurs. While there are competing theories, the best one (message elaboration) holds that once their attention is drawn by their own name, consumers process the information more carefully. This, of course, has a potential downside: if the message is not relevant to the consumer, then the more careful processing could translate into fewer sales leads and more people unsubscribing.

A rather clever 2x2 design was used to tease out this effect – the recipient’s name was included in the body of the email (or not), and a relevant piece of information in the form of a product discount was included in the email (or not). Including the name in the body increases the chances of the recipient processing the message. Including the discount makes the message itself more relevant. So if the psychological mechanism at play is message elaboration, then the condition where attention is drawn and a relevant message is presented should produce the most leads – and that is precisely what they find.

Additional (regression) analysis showed how the pieces fit together. Seeing the name increases the likelihood of the message being read and processed, and increases the chance of a positive outcome – if the message is compelling. By itself, the personalization still has an effect but not as much as it otherwise could with a relevant message.
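The logic of that 2x2 read-out can be sketched with simple cell-mean arithmetic. The rates below are invented for illustration, not the paper’s data; under message elaboration, the name should help more when a relevant discount is present, which shows up as a positive interaction:

```python
# Illustrative lead-conversion rates for the 2x2 test (name in body x discount).
# These numbers are made up for the sketch; the paper's data are not shown here.
rates = {
    ("no_name", "no_disc"): 0.020,
    ("name",    "no_disc"): 0.024,
    ("no_name", "disc"):    0.030,
    ("name",    "disc"):    0.042,
}

# Effect of the name when the body carries no relevant offer...
name_effect_no_disc = rates[("name", "no_disc")] - rates[("no_name", "no_disc")]
# ...and when it carries a discount.
name_effect_disc = rates[("name", "disc")] - rates[("no_name", "disc")]

# Message elaboration predicts the name helps MORE when the message is
# relevant, i.e. a positive interaction term.
interaction = name_effect_disc - name_effect_no_disc
```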

This research does not tell us what happens when more and more marketers start using email personalization. Will consumers get desensitized to the effect? What if the domain is sensitive? Would consumers get offended resulting in a backlash? The answers are not available in this research as the datasets examined here do not fall into these categories.

But, for now, we can say that email marketers could benefit from including the recipient’s name, and can enhance the effect by having a relevant message in the body of the email.

At TRC, the most popular spot in the office is our snack shelf. It features an array of sugary, salty and carb heavy treats. The contents vary and are determined by one person (Ruth, who stocks the shelf) with influence from the rest of us (based on past usage and suggestions). Sometimes the shelf has exactly what you’re looking for. Other times, not so much. But what if instead of relying on Ruth’s powers of deduction we were to use research to figure out the optimal shelf configuration? We’re researchers, after all.

Start out with Incentive Alignment

We would start out by using our Idea Mill™ product to generate ideas on which snacks people want to have. It uses incentive alignment and gamification to bring out the most creative ideas and provide direction on the favorites. It is likely that this will create too long a list of ideas (the candy shelf is only so large) and while we can toss out ideas that are not feasible, we believe it is best not to toss out ideas just because you personally don’t like them (I’m looking at you Mr. Goodbar). Far better to get more consumer input…this time to narrow the list.

Go beyond Simple Ratings, Employ a Choice Method

We could ask our folks to rate all the suggested snacks and then use that to figure out which ones should make the cut. Ratings might be good enough to eliminate some things (my guess is that despite what people claim, healthy snacks would bite the dust), but among popular snacks (like different types of pretzels) we are not likely to see clear differentiation.

A choice method like Max-Diff could help but if the list was long it would require a lot of work on the part of our employee respondents. A method like our proprietary Bracket™ would do the job in a faster and more engaging fashion while still finding clear winners and losers.
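Bracket™ is proprietary, but the counting logic behind a basic Max-Diff is easy to illustrate. A minimal sketch with made-up snack data: each screen shows a few items, the respondent picks the best and the worst, and an item’s score is (times picked best minus times picked worst) divided by times shown:

```python
from collections import defaultdict

def maxdiff_scores(tasks):
    """Naive best-worst counting for a Max-Diff exercise.

    Each task is (items_shown, best_pick, worst_pick). An item's score is
    (times picked best - times picked worst) / times shown.
    """
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for items, b, w in tasks:
        for item in items:
            shown[item] += 1
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Made-up responses: three screens, three snacks per screen
tasks = [
    (("pretzels", "m&ms", "kale chips"), "m&ms", "kale chips"),
    (("pretzels", "m&ms", "granola"), "pretzels", "granola"),
    (("m&ms", "granola", "kale chips"), "m&ms", "kale chips"),
]
scores = maxdiff_scores(tasks)
```

Real Max-Diff estimation uses hierarchical Bayes rather than raw counts, but even this toy version separates clear winners from clear losers.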

Find a Combination of Flavors that Would Please the Most People

Stocking the winners would therefore make the most sense…but would it please the most people?

Currently the shelf features five types of M&M’s (original, almond, caramel, dark and strawberry nut). If dark chocolate were the least preferred, it might get cut. But what if those who like almond, caramel and strawberry nut also like original, while those who like dark like only dark? For situations like this we can take the results of the Bracket™ (or Max-Diff) and use TURF to find the combination that would please the most people.
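The TURF logic is simple enough to sketch in a few lines: for every combination of k snacks, count the unique people reached by at least one of them, and keep the combination with the widest reach. The preference sets below are invented to mirror the example – fans of almond, caramel and strawberry also like original, while dark-chocolate fans like only dark:

```python
from itertools import combinations

def turf(reach_sets, k):
    """Brute-force TURF: find the k-item combination with the widest reach.

    reach_sets maps each item to the set of respondent ids it would satisfy
    (e.g. everyone who put the item among their top picks).
    """
    def reach(combo):
        return len(set().union(*(reach_sets[item] for item in combo)))
    best = max(combinations(reach_sets, k), key=reach)
    return best, reach(best)

# Invented preferences mirroring the example: almond/caramel/strawberry fans
# also like original, while dark fans like only dark.
prefs = {
    "original":   {1, 2, 3, 4, 5},
    "almond":     {2, 3, 6},
    "caramel":    {3, 4, 7},
    "dark":       {8, 9},
    "strawberry": {4, 5},
}
combo, reached = turf(prefs, 2)
```

Note that dark chocolate makes the optimal two-item set precisely because its fans are reached by nothing else – the counterintuitive result TURF exists to surface.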

Find the Best Position on the Shelf with Discrete Choice Conjoint

Of course, another factor is positioning. The shelf is only so large. M&M’s can be dispensed from any size canister (in fact Ruth has one that spins so that it can dispense three types), while pretzels tend to come in large bins that take up a lot of room. In addition, not all of the snacks cost the same. In an effort to keep our expenses and waistlines under control we follow a strict budget. Might I trade off having a greater quantity of a lesser snack in exchange for an expensive favorite?

For these kinds of questions a discrete choice conjoint is the answer. We can include a variety of candy types and constraints related to the room they take up as well as cost. Simulations can then optimize how to spend our candy budget.

Despite our love of research and wide array of tools though, I think in this case they would be overkill (we have a very small population of around 40 employees). So I think we’ll stick with Ruth’s instincts. I never go wanting….

I heard a great episode of the “You Are Not So Smart” podcast in which Sam Arbesman talked about his book “The Half-Life of Facts”. This book has nothing to do with “truthiness”, “fake news” or any accusation that someone is or is not a liar, but it does provide some context for the world we live in.

The book’s title is taken from a scientific term (the time it takes an isotope to lose half of its radioactivity) and the notion that as we learn more, some things we took as “fact” will turn out to be wrong. Newton’s laws, for example, were supplanted by Einstein’s. The point of the book is not that we shouldn’t bother learning facts, but rather that we should be open to the possibility that they might be wrong. Modern medicine acknowledges that it doesn’t know everything and that some things it “knows” will prove to be false. At the same time, doctors must treat patients based on what is known or thought to be known.

It got me thinking about our business. What is the half-life of facts here? You might be tempted to take comfort in the fact that things like margin of error have not changed. While technically true, this ignores that academia is facing a crisis of confidence over statistically significant findings that don’t hold up in subsequent studies. One cause is that researchers run lots of cuts of the data, look for anything statistically significant, and then build a rationale for that finding. They ignore that with so many cuts of the data they are likely to find some statistical noise. Don’t we run the same risk with each additional banner we run?
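That risk is easy to demonstrate with a quick simulation: compare two groups drawn from identical populations over and over (the equivalent of running banner cut after banner cut on pure noise), and roughly 5% of the comparisons will clear the 95% significance bar by chance alone:

```python
import math
import random

random.seed(42)

def noise_cut_pvalue(n=200):
    """One 'banner cut' comparing two groups that are truly identical:
    both answer yes with probability 0.5, so any significant difference
    is pure noise."""
    a = sum(random.random() < 0.5 for _ in range(n))
    b = sum(random.random() < 0.5 for _ in range(n))
    p_a, p_b = a / n, b / n
    pool = (a + b) / (2 * n)
    se = max(math.sqrt(pool * (1 - pool) * (2 / n)), 1e-12)
    z = abs(p_a - p_b) / se
    # Two-sided normal-approximation p-value
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Run 500 cuts of pure noise at the 95% confidence level: about 5% of them
# will look "statistically significant" anyway.
hits = sum(noise_cut_pvalue() < 0.05 for _ in range(500))
```

Every one of those “hits” is a finding someone could build a story around, which is exactly the trap the replication crisis keeps exposing.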

There is a known problem with Discrete Choice Conjoint that is often ignored: the number-of-levels effect. If you have a product made up of, say, 8 features each with three levels and one feature with 150 levels, the importance of the feature with 150 levels will be overstated by the model. Still, the model will run, utilities will be calculated and a simulator can be constructed…all of which provide a sense of precision that is not warranted. A researcher who knows about this will guide the client either by changing the design OR by putting the results into their proper perspective. There are many other ways that a complex model like this can produce skewed results, and I have little doubt more will be found in the future.

This is not to say that we can’t trust results. Doctors have to treat patients based on what is known today and we must do the same for our clients. The important thing is that we have to acknowledge we have things to learn. As researchers that should be easy for us…

In my previous blog about HQ Trivia I pondered how the creators of HQ were planning to make money. Right now there is no advertising; venture capital funds the app and the jackpots. Apart from occasional sponsorships, there appears to be no immediate source of additional funding.

HQ could do many different things to achieve financial success – content sponsorships, jackpot sponsorships, advertising, product placement, buying ‘lives’ by watching a 15-second spot – even sponsor logos on host apparel. In fact, there are probably different ways to monetize HQ Trivia that we haven’t even thought of yet – making this a perfect research case for TRC’s Idea Mill™.

Idea Mill™ is our method that employs Smart Incentives™ – harnessing the principles of crowd-sourcing to ask respondents for their best idea, and the ideas are then voted on by other respondents within the same research survey. The respondents with the best ideas as judged by their peers are rewarded with prizes. This is a great technique to use when you’re in the idea generation phase of product development.

Once we get a list of potential ways to monetize HQ, we could then winnow the list to the ones that would be feasible to implement, and narrow the list using a prioritization-based research method such as Idea Magnet™. Results can be generated quickly.

Before implementing the winning ideas, we could further explore options by building various scenarios of the sponsored game, and asking HQers to weigh in on which one would be most acceptable to them. Through a choice-based research tool such as discrete choice conjoint, we could vary HQ’s potential features, such as:

• Number of ads or sponsorships per game

• Where the ads appear (between rounds, upon game entry)

• Prize pool

• Having sponsor-related questions

• Getting bonus ‘lives’ for watching sponsor videos

All of these techniques employ strategies we use in pricing and product development research to include the consumer in the decision-making process. HQ’s creators are good at asking questions – I hope they do the same in further developing their product.

I appreciate that we are once again in the GRIT 50 Most Innovative Research Agencies. Innovation has always been important to me and so I am quite gratified when I see our efforts being recognized. What I don't know is how people are defining innovation.

I think as an industry we sometimes label things as innovative that are not while failing to recognize some things that are genuinely innovative. In my view, innovation requires that we provide something of value that wasn't available before. Anything short of that may be 'interesting' but not 'innovative'.

I would put things like neuroscience or most AI into the "interesting" category. There is a lot of potential but so far little to show in terms of tangible benefits. Over the years at TRC we've had many ideas that showed promise, but ultimately didn't prove out (my favorite being "Conjoint Poker"). Ultimately it is the nature of innovation that some things will never leave the drawing board or 'laboratory', but without them there would be no innovation.

On the other side, I think ideas that save time and money are often not viewed as innovative unless they involve something totally new. I disagree. If I can figure out a way to do the same process faster and/or cheaper then I'm innovating. It may not look flashy, but if it allows clients to do something they couldn't otherwise do it is innovation.

A bunch of us here at TRC enjoy trivia, so we’ve been playing HQ Trivia using their online app for the past few months. HQ is a 12-question multiple choice quiz that requires a correct answer to move on to the next question. As a group, we have yet to get through all 12 questions and win our share of the prize pool. But it’s a nice team-building exercise and we like learning new things (who knew that 2 US Presidents were born in Vermont?).

Given the fun we have playing it, I can understand HQ’s success from the player perspective. Where I am a bit confused is the value proposition for its creators. Venture capital funding provides the prize money. But there are no ads, so I’m not sure how anybody’s actually making money. There are occasional tie-in partnerships (The awesome Dwayne Johnson hosted one of the gaming sessions to promote his newest movie release, “Rampage”.) But I suppose the biggest question is, will interest in HQ still be there when they’ve finally signed on enough sponsors to be profitable?

We do a lot of pricing research at TRC, and can model on a variety of variables. But predicting the direction of demand is nearly impossible for certain products. For consumables and many services, product demand is predictable. How your product fares compared to the competition may have its ups and downs, but you can assume that people who bought toilet paper 2 weeks ago will be in the market for toilet paper again soon.

But with something like HQ Trivia, product demand is much more difficult to determine in advance, especially more than a few weeks from now. Right now it’s still hot – routinely attracting 700,000 – 1,000,000+ players (HQers) in a given game. How do the creators – and investors and potential sponsors – know whether it’s a good investment? What if interest suddenly declines, either because the novelty has worn off or because something better comes along?

One way to find out is through longitudinal research. Routinely check in with HQers over time to determine their likelihood to play the next week, their likelihood to recommend to their friends, and their attitudes toward the game itself. This information can be overlaid with the raw data HQ collects through game play every day – number of players, number of referrals, and number of first-time players. This information can not only help shed light on player interest, but players could also weigh in on changes the creators are considering to keep the game fresh.

HQers are engaging in a free activity which gives them the opportunity to win cash prizes. But just because it’s free to play doesn’t mean the HQ powers-that-be couldn’t do pricing research (more on that in a future blog).

For now, I’ll keep on playing HQ hoping I can answer all the questions, not the least of which is: when will I – and the other million HQers – no longer care?

I’ve written many times about the importance of “knowing where your data has been”. The most advanced discrete choice conjoint, segmentation or regression is only as good as the data it relies on. In the past I’ve written about many ways that we can bias respondents from question ordering to badly worded questions and even to push polling techniques. A new study published in Psychological Science would seem to indicate that bias can be created much more subtly than that.

Dr. Michal Reifen-Tagar and Dr. Orly Idan determined that you can reduce tension by relying on nouns rather than verbs. They are from Israel, so they were not lacking in “high tension” things to ask. For example, half of respondents were asked their level of agreement (on a six-point scale) with the “noun focused” statement “I support the division of Jerusalem” and the other half with the “verb focused” statement “I support dividing Jerusalem”.

Consistent and statistically significant differences were found with the verb form garnering less support than the noun form. Follow-up questions also indicated that those who saw the verb form were angrier and showed less support for concessions toward the Palestinians.

Is this a potential problem for researchers? My answer would be “potentially”.

The obvious example might be in published opinion polls. One can imagine a crafty person creating a questionnaire in which issues they agree with are presented in noun form (thus garnering higher agreement from the general public) and ones they disagree with in verb forms (thus garnering lower agreement). It is unlikely that anyone would challenge those results (except for those of you clever enough to read my blog).

It might also be the case on more consumer-oriented studies, though it is unclear whether the same effect would hold in situations where tension levels are not so high. In our clients’ best interest, however, it makes sense to be consistent and thereby eliminate another form of bias.

I work in a business that depends heavily on email. We use it to ask and answer questions, share work product, and engage our clients, vendors, co-workers and peers on a daily basis. When email goes down – and thankfully it doesn't happen that often – we feel anything from mildly annoyed to downright panic-stricken.

So business email is ubiquitous. But not everyone follows the same rules of engagement – which can make for some very frustrating exchanges.

We assembled a list of 21 "violations" we experienced (or committed) and set out to find out which ones are considered the most bothersome.

Research panelists who say they use email for business purposes were administered our Bracket™ prioritization exercise to determine which email scenario is the "most irritating".

You are planning to take a trip to the City of Brotherly Love to visit the world-famous Philadelphia Flower Show, and would like to book a hotel near the Convention Center venue. If you’re like most people, you go online, perhaps to TripAdvisor or Expedia, and look for a hotel. In a few clicks you find a list of hotels with star ratings, prices, amenities, distance to destination – everything you need to make a decision. Quickly you narrow your choice down to two hotels within walking distance of the Flower Show, both conveniently located near the historic Reading Terminal Market.

But how do you choose between two hotels that seem so evenly matched? Perhaps some review comments will provide more depth. There are hundreds of comments, which is more than you have time for, but you quickly read a few on the first page. You are about to close the browser when you notice something. One of the hotels has responses to some of the negative comments. Hmmm…interesting. You decide to read the responses, and see some apologies, a few explanations and general earnestness. There is no such response for the other hotel, which now begins to seem colder and more distant. What do you do?

In effect, that’s the question Davide Proserpio and Georgios Zervas seek to answer in a recent article in the INFORMS journal Marketing Science. And it’s not hard to see why it’s an important question. Online reviews can have a significant impact on a business, and unlike word of mouth, they tend to stick around for years (just take a look at the dates on some reviews). Companies can’t do much to stop reviews (especially negative ones), so they often try to co-opt them by providing responses to selected reviews. It is a manual task, but the idea seems sound. By responding, perhaps they can take the sting out of negative reviews, appear contrite, promise to do better, or just thank the reviewer for the time they took to write the feedback – all with the objective of getting prospective customers to give them a fair chance. The question then is whether such efforts are useful or just more online clutter.

It turns out that’s not an easy question to answer, and as Proserpio and Zervas document in the article, there are several factors that first need to be controlled. But their basic approach is easy enough to understand – they examine whether TripAdvisor ratings for hotels tend to go up after management responds to online reviews. An immediate problem to overcome, ironically enough, is management response. That is, in reaction to bad reviews a hotel may actually make changes that then increase future ratings. That’s great for the hotel, but not so much for the researcher, who is trying to study whether the response to the online review had an impact, not whether the hotel is willing to make changes in response to the review. So, that’s an important factor that needs to be controlled. How to do that?

Enter Expedia. As it happens, hotels frequently respond to TripAdvisor reviews while they almost never do so on Expedia. So, the authors use Expedia as a control cell and compare the before-after difference in ratings on TripAdvisor and Expedia (the difference-in-difference approach). Hence they are able to tease out whether the improvement in ratings was due to responding to reviews or to real changes at the hotel. Another check they use is to compare the ratings of guests who left a review shortly before a hotel began responding with those who did so shortly after the hotel began responding. Much of the article is actually devoted to several more clever and increasingly complex maneuvers they use to finally tease out just the impact of management responses. What do they find?
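The core difference-in-differences arithmetic is simple. Here is a minimal sketch with made-up ratings (the numbers are illustrative, not taken from the article):

```python
# Hypothetical average ratings (1-5 stars) before and after a hotel
# begins responding to reviews. The figures are invented for
# illustration only.
tripadvisor = {"before": 3.6, "after": 3.9}   # hotel responds here
expedia     = {"before": 3.5, "after": 3.6}   # hotel does not respond here

# Difference-in-differences: the change on the platform with responses,
# net of the change on the control platform. Real quality improvements
# at the hotel should move ratings on BOTH platforms, so they cancel
# out of the estimate, leaving just the effect of responding.
did = ((tripadvisor["after"] - tripadvisor["before"])
       - (expedia["after"] - expedia["before"]))
print(f"DiD estimate of the response effect: {did:+.2f} stars")
```

With these invented numbers, the naive before-after gain of 0.3 stars shrinks to a 0.2-star effect once the control platform’s 0.1-star drift is netted out.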

For many years the answer would have been telephone interviewing. We continued to use telephone interviewing long after it became clear that web was a better answer. The common defense was “it is not representative”, which was true, but telephone data collection was no longer representative either. I’m not saying that we should abandon telephone interviewing…there are certainly times when it is a better option (for example, when talking to your clients’ customers and you don’t have email addresses). I’m just saying that the notion that we need to have a phone sample to make it representative is unfounded.

I think, though, that we need to go further. We still routinely use cross tabs to ferret out interesting information. The fact that these interesting tidbits might be nothing more than noise doesn’t stop us from doing so. Further, the many “significant differences” we uncover are often not significant at all…they are statistically discernible, but not significant from a business decision-making standpoint. Still, the automatic sig testing makes us pause to think about them.
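A quick illustration of “statistically discernible but not meaningful”: with hypothetical subgroups of 20,000 respondents each, a 1.5-point gap in agreement sails past the conventional significance cutoff even though a gap that small may be irrelevant to any business decision. All numbers below are invented.

```python
import math

# Hypothetical crosstab cell: 50.0% vs 51.5% agreement in two
# subgroups of n = 20,000 each. The numbers are made up.
n1, p1 = 20_000, 0.500
n2, p2 = 20_000, 0.515

# Two-proportion z-test with a pooled proportion.
p = (n1 * p1 + n2 * p2) / (n1 + n2)
se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

# z comes out around 3.0, comfortably past the 1.96 cutoff for
# p < .05, yet the underlying 1.5-point gap may mean nothing
# for the business decision at hand.
print(f"z = {z:.2f}")
```

The same 1.5-point gap with a few hundred respondents per cell would not be flagged at all, which is exactly why automatic sig testing across a deck of tables invites over-reading.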

Wouldn’t it be better to dig into the data and see what it tells us about our starting hypothesis? Good design means we thought about the hypothesis and the direction we needed during the questionnaire development process, so we know what questions to start with and then we can follow the data wherever it leads. While in the past this was impractical, we now live in a world where analysis packages are easy to use. So why are we wasting time looking through decks of tables?

There are of course times when having a deck of tables could be a time saver, but like telephone interviewing, I would argue we should limit their use to those times and not simply produce tables because “that’s the way we have always done it”.

Conventional internal combustion engine cars need a grille because the engine needs air to flow over the radiator which cools the engine. No grille would mean the car would eventually overheat and stop working. Electric cars, however, don’t have a conventional radiator and don’t need the air flow. The grille is there because designers fear that the car would look too weird without it. It is not clear from the article if that is just a hunch or if it has been tested.

It would be easy enough to test this out. We could simply show some pictures of cars and ask people which design they like best. A Max-Diff approach or an agile product like Idea Magnet™ (which uses our proprietary Bracket™ prioritization tool) could handle such a task. If the top choices were all pictures that did not include a grille, we might conclude that this is the design we should use. But there is a risk in this conclusion.

To really understand preference, we need to use a discrete choice conjoint. The exercise I envision would combine the pictures with other key features of the car (price, gas mileage, color…). We might include several pictures taken from different angles that highlight other design features (being careful to not have pictures that contradict each other…for example, one showing a spoiler on the back and another not). By mixing up these features we can determine how important each is to the purchase decision.

It is possible that the results of the conjoint would indicate that people prefer not having a grille AND that the most popular models always include a grille. How?

Imagine a situation in which 80% of people prefer “no grille” and 20% prefer “grille”. The “no grille” people prefer it, but it is not the most important thing in their decision. They are more interested in gas mileage and car color than anything else. The “grille” folks, however, are very strong in their belief. They simply won’t buy a car if it doesn’t have one. As such, cars without a grille start with 20% of the market off limits. Cars with a grille, however, attract a good number of “no grille” consumers as well as those for whom it is non-negotiable.
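The arithmetic behind that scenario is easy to sketch. The numbers below are hypothetical, including the assumed 55/45 split of the mild-preference group when other features (price, mileage, color) drive the decision:

```python
# Hypothetical illustration of the grille scenario described above.
no_grille_fans = 0.80   # mildly prefer "no grille"; not decisive for them
grille_diehards = 0.20  # will not buy a car without a grille

# A car WITHOUT a grille: the diehards are off limits entirely, and
# it wins, say, 55% of the mild-preference group head-to-head.
share_no_grille = no_grille_fans * 0.55

# A car WITH a grille: it keeps every diehard plus the remaining
# 45% of the mild-preference group.
share_grille = grille_diehards + no_grille_fans * 0.45

print(f"no-grille model: {share_no_grille:.0%}")  # 44%
print(f"grille model:    {share_grille:.0%}")     # 56%
```

So even though 80% of consumers say they prefer no grille, the grille model wins the head-to-head under these assumptions, because a weakly held majority preference loses to a strongly held minority one.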

Conjoint might also find that the size of the grille, or alternatives to it, can win over even hard-core “grille” loving consumers. It is also worth considering that preferences change over time. For example, it isn’t hard to imagine that early automobiles (horseless carriages, as they were originally called) had a place to hold a buggy whip (common on horse-drawn carriages), but over time, consumers determined they were not necessary (or perhaps that is how the cup holder was born :)).

In short, conjoint is a critical tool to ensure that new technologies have a chance to take hold.

The Economist did an analysis of political book sales on Amazon to see if there were any patterns. Anyone who uses social media will not be surprised that readers tended to buy books from either the left or the right...not both. This follows an increasing pattern of people looking for validation rather than education, and of course it adds to the growing divide in our country. A few books managed a good mix of readers from both sides, though often these were books where the author found fault with his or her own side (meaning a conservative trashing conservatives or a liberal trashing liberals).

I love this use of big data and hopefully it will lead some to seek out facts and opinions that differ from their own. These facts and opinions need not completely change an individual's own thinking, but at the very least they should give one a deeper understanding of the issue, including an understanding of what drives others' thinking.

In other words, hopefully the public will start thinking more like effective market researchers.

We could easily design research that validates the conventional wisdom of our clients.

• We can frame opinions by the way we ask questions or by the questions we asked before.
• We can omit ideas from a max-diff exercise simply because our "gut" tells us they are not viable.
• We can design a discrete choice study with features and levels that play to our client's strengths.
• We can focus exclusively on results that validate our hypothesis.

Do people buy green products? Yes, of course. The real question for green marketers is whether they buy enough. In other words, are green sales in line with pro-green attitudes? Not really, as huge majorities of consumers show at least some green tendencies while purchases lag far behind. Why is that? Economics tells us that consumers buy based on value (trading off cost and benefits). Since eco-friendly products are seen as being more expensive, higher prices can lower the value of a green product enough to make a conventional alternative more attractive.

While the cost trade-off is clear, it is not the only one. The benefit side has at least two major components. One is the environmental benefit, which may or may not seem tangible enough to make a difference. For instance, a dozen eggs at Acme goes for less than a dollar, while some cage-free varieties can run north of $4 at Whole Foods. So, an environmentally conscious consumer has to make a trade-off at the time of purchase – is the product worth the additional cost? For items like food, the benefits may seem small enough, and far enough out, that many may decide the value proposition does not work for them. In other product categories (say, green laundry detergent), the benefits may seem both long term and impersonal, making the trade-off even harder.

The second major component is the effectiveness of the product in performing its basic function. If consumers perceive green products as inherently inferior (in terms of conventional attributes like performance), they are less likely to buy them. So a green laundry detergent (that uses less harsh chemicals) could be seen as more expensive and less effective in cleaning clothes, further dropping its overall value. (A complicating issue is that the lack of effectiveness itself could be a perceptual rather than real problem). Unless the company is able to offset these disadvantages, the product is unlikely to succeed.
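One way to picture how the two benefit components interact with price is a toy value model. Everything here is an assumption for illustration: the additive form, the effectiveness discount, and all of the numbers.

```python
# Toy value model: value = functional benefit (discounted by perceived
# effectiveness) + environmental benefit - price. Purely illustrative.
def value(functional_benefit, effectiveness, green_benefit, price):
    return functional_benefit * effectiveness + green_benefit - price

# Hypothetical conventional vs. green detergent, in arbitrary units.
conventional = value(functional_benefit=10, effectiveness=1.00,
                     green_benefit=0.0, price=5)
green = value(functional_benefit=10, effectiveness=0.85,
              green_benefit=1.5, price=7)

print(f"conventional: {conventional:.1f}")  # 5.0
print(f"green:        {green:.1f}")         # 3.0
```

In this sketch the green product loses on value even with its environmental benefit counted, because the higher price and the perceived effectiveness penalty both cut against it; closing either gap flips the comparison.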

A direct way to increase demand is to offer higher performance on a compensatory attribute. In the case of LED TVs, for example, newer technology consumes less power and provides better picture quality. (Paradoxically, this can sometimes lead to the Rebound Effect, whereby greener technologies encourage higher use, thus clawing back some of the benefits). But in reality, most products are not in a position where green attributes offer performance boosts.

And of course, as it is with every other market, there are segments in this market as well. Consumers who are highly committed (dark green) are willing to buy, as the value they place on the longer term environmental benefits is high enough. And, often they are affluent enough to afford the price. But a product looking for mainstream success cannot succeed only with dark green consumers (who rarely account for more than 20% of the market). Other shades of green will also need to buy. Short of government subsidies and mandates, green marketers have to find ways to balance out the components of the value proposition for the bulk of the market.