A few years ago, a marketing team from a major consumer goods company came to my lab eager to test some new pricing mechanisms using principles of behavioral economics. We decided to start by testing the allure of “free,” a subject my students and I had been studying. I was excited: The company would gain insights into its customers’ decision making, and we’d get useful data for our academic work. The team agreed to create multiple websites with different offers and pricing and then observe how each worked out in terms of appeal, orders, and revenue.

Several months later, right before we were due to go live, we had a meeting about the final details of the experiment—this time with a bigger entourage from marketing. One of the new members noted that because we were extending differing offers, some customers might buy a product that was not ideal for them, spend too much money, or get a worse deal overall than others. He was correct, of course. In any experiment, someone gets the short end of the stick. Take clinical medical trials, I said to the team. When testing chemotherapy treatments, some patients suffer more so that, down the road, others might suffer less. I hoped this put it in perspective. Fortunately, I said, price testing household products requires far less suffering than chemo trials.

But I could tell I was losing them. In a sense, I was impressed. It was a beautiful human sentiment they were conveying: We care about all customers and don’t want to treat any one of them unfairly. A debate ensued among the group: Are we willing to sacrifice some customers “just” to learn how the new pricing approaches work?

They hedged. They asked me what I thought the best approach was. I told them that I was willing to share my intuition but that intuition is a remarkably bad thing to rely on. Only an experiment gives you the evidence you need. In the end, it wasn’t enough to convince them, and they called off the project.

This is a typical case, I’ve found. I’ve often tried to help companies do experiments, and usually I fail spectacularly. I remember one company that was having trouble getting its bonuses right. I suggested they do some experiments, or at least a survey. The HR staff said no, it was a miserable time in the company. Everyone was unhappy, and management didn’t want to add to the trouble by messing with people’s bonuses merely for the sake of learning. But the employees are already unhappy, I thought, and the experiments would have provided evidence for how to make them less so in the years to come. How is that a bad idea?

Companies pay amazing amounts of money to get answers from consultants with overdeveloped confidence in their own intuition. Managers rely on focus groups—a dozen people riffing on something they know little about—to set strategies. And yet, companies won’t experiment to find evidence of the right way forward.

I think this irrational behavior stems from two sources. One is the nature of experiments themselves. As the people at the consumer goods firm pointed out, experiments require short-term losses for long-term gains. Companies (and people) are notoriously bad at making those trade-offs. Second, there’s the false sense of security that heeding experts provides. When we pay consultants, we get an answer from them and not a list of experiments to conduct. We tend to value answers over questions because answers allow us to take action, while questions mean that we need to keep thinking. Never mind that asking good questions and gathering evidence usually guides us to better answers.

Despite the fact that it goes against how business works, experimentation is making headway at some companies. Scott Cook, the founder of Intuit, tells me he’s trying to create a culture of experimentation in which failing is perfectly fine. Whatever happens, he tells his staff, you’re doing right because you’ve created evidence, which is better than anyone’s intuition. He says the organization is buzzing with experiments.

And so is that consumer goods company. A group there is studying consumer psychology and behavioral economics and is amassing evidence that’s impressive by any academic standard. Years after our false start, they’re recognizing the dangers of relying on intuition.

Comments

That’s odd… in advertising, “split runs” are done all the time, at least in my experience doing direct marketing. When I was running the DM department for Verizon in the Midwest, we often used initial, small-quantity split runs to test both the best offer and the best creative. If we had a final universe of around 1.5 million addresses, we might test 3 creative executions with 2 or 3 offers each, using around 2,000 households per test cell. That eats up only about 20,000 potential customers out of the 1.5 million total database, around 1.3%. In some cases, the best creative/offer execution pulled harder than the worst by up to 0.5 percentage points. Applied to the final campaign, that’s around 7,500 customers who wouldn’t have purchased had we gone with the least effective version. Many, many more people got to see an ad/offer that was more appealing because we did some testing up front on what worked (and was most attractive).
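The arithmetic in this comment can be sketched in a few lines. The figures (a 1.5M-address universe, roughly 10 test cells of 2,000 households, a 0.5-percentage-point lift between best and worst) are the commenter’s; the function name and rounding are mine.

```python
# A sketch of the "split run" sizing math from the comment above.
# All figures are the commenter's; the helper name is hypothetical.

def split_run_summary(universe, n_cells, per_cell, lift_pct_points):
    """Return (test_size, share_of_universe, extra_customers).

    extra_customers = customers gained at rollout by mailing the winning
    creative/offer instead of the worst one, given a response-rate lift
    of `lift_pct_points` percentage points.
    """
    test_size = n_cells * per_cell
    share = test_size / universe
    rollout = universe - test_size  # addresses left for the final campaign
    extra = rollout * (lift_pct_points / 100)
    return test_size, share, extra

size, share, extra = split_run_summary(
    universe=1_500_000, n_cells=10, per_cell=2_000, lift_pct_points=0.5)
print(f"{size:,} test households ({share:.1%} of the universe), "
      f"~{extra:,.0f} extra customers at rollout")
```

Run on the commenter’s numbers, this reproduces the figures in the comment: 20,000 test households (about 1.3% of the universe) and roughly 7,400–7,500 incremental customers at rollout, depending on whether the test cells are subtracted before applying the lift.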

Two ways you might have more success selling the concept to companies. First, do some “possibility math” showing them the delta between how many people will be satisfied with the worst offer vs. the best in the long term. If you can show that doing some testing will result in X% more satisfaction down the road, that might help.

I don’t know if it would squirrel the test, but another possibility is to run the different offers at different times, rather than concurrently. I know this would add another element to the comparison (and don’t know enough about statistics to know if that’s a huge deal). But marketing folks are used to changing up offers over time. If you did one offer in March and then another in April, a third in May, etc., that might give you some good data without making the marketing people feel as if they are disadvantaging one group of folks concurrently with another.

Dear Dan,
I wonder if the dilemma you mention here – why business organizations do not experiment – could be viewed on two levels:
1. Legislation – As long as there is no law that compels businesses to seek the customer’s best interest on pricing (as there is in pharmaceutical trials), and pricing is left to a free-market game, there is no real motivation for businesses to take part in such an experiment.
2. What’s in it for me? – As long as a business organization sees no real value for itself, value that can be translated into a short-term as well as a long-term gain, it will not take part in such an experiment.
How to do it?
- Try to find a way to present alternatives A and A- in comparison to B.
- Find the one that would form the anchor leading to the “herd” phenomenon or the “arbitrary coherence” behavior described in your book.
I hope I shall be able to try it in my workshops in Israel.
I shall appreciate your help, Dan.
Are there any volunteers in business organizations in Israel?
Curiously yours,
Sarah Kiperwas
http://www.kelimiskiim.co.il

Hotels in major cities work together to attract conventions and trade shows, but seldom cooperate in attracting corporate and/or tourist business. Many General Managers are reluctant to work with “competitors.”

While I was Director of Sales/Marketing for a luxury hotel in San Francisco, a friend at an airline asked whether we could provide 100 complimentary rooms for their top travel agents from Europe. That was 25% of our capacity, and our property couldn’t do it alone. I approached the other four hotels on Nob Hill to share the group, but one of the managers said “no,” so I enlisted the aid of a luxury hotel in Union Square. It was a successful promotion; our bookings from Europe increased 30% the following year.

In Beverly Hills I initiated a similar program specifically for increasing corporate and tourist travel. Again one of the five General Managers said no, so the other four properties hosted 25 rooms each. The city’s share of bookings to the Los Angeles area increased, and it is now an annual event, with the fifth hotel participating.

It can be difficult to convince people to try something new, especially if they are more concerned with the downside.

Dear Dan,
You are absolutely correct.
How about the government testing and experimenting with major decisions?
Also, I never understood how consultants with no intimate knowledge of a given business can be useful.
Yoram

The behaviour makes total sense: it is the qualitative version of the much-discussed loss-aversion phenomenon. I think quantifying the risks might have helped. The other crucial component to remember is that brands are a belief system, and so are bonuses (belief in an equitable system). Rupturing a belief system is very easy and not easily predicted beforehand. Brands and belief systems are “trust systems”: trust, once earned, can be lost very rapidly, so the risk/reward is highly asymmetric. The hesitation was logical and perhaps intuitively correct.

This is my first message on this blog (and it’s not off topic). I read your book a year ago and saw the immense business improvements that lie in using your methods, and I totally agree that businesses SHOULD experiment. I’m a graduate sociologist with postgraduate studies in communication, globalization, media, and computer programming, and I work as production manager for a software company that has over 300,000 clients and various software projects.

We’ve done lots of A/B tests on our communities that demonstrate irrational behavior in using and buying online software, and we’ve managed to learn from our users and improve our pricing models, product packaging, special offers, newsletters, user feedback, conversion rates, conversion amounts, and ROI. We’ve even tested the power of free against other types of offers (special bundles, limited-time offers, preorder discounts, standard discounts), and our findings support most of your theories.

Finally, I just wanted to thank you, and let you know that, despite your article, there are businesses that DO experiment.

From 2003 to 2006 the consultant population at Fannie Mae and Freddie Mac grew to 5,000 out of a population of 8,000 across their various DC-area campuses. I had a small “non-brand” team of 25 people who ran circles around the “branded” Big 4 firms, which were being hired by the hundreds, partners and new college grads alike, at $125/hr (partners at $400), primarily because of risk aversion: “you can’t go wrong hiring the Big 4.” The partners who closed those deals were extremely important; they showed up at board or director meetings, exuded confidence from every pore, and basically had barrels of money shoved at them because they looked like they had some answers.

Dan,
(Jokingly) You should expect some companies to irrationally want to maintain the status quo while paying big bucks for consultants to give the illusion they’re doing something.

@Andy
Doing tests over time throws in another test variable, time. World events get thrown into play sometimes that will skew numbers. For example, earthquake insurance is probably pretty hot right now in California with the recent Baja earthquake and it may not have been ‘top of mind’ for consumers 6 or 12 months ago.

Dan, I was pleased and surprised at the integrity of the marketers you described. A student I once talked to said he was studying communications, but it was really advertising! AAAS is asking for symposia for its next annual meeting. I wish one of them could be Marketing, Government Policy Implementation by Persuasion, and Ethics, with requirements for full disclosure of methods and intentions.
How about you?

People seem more willing to learn from their mistakes than to conduct experiments, since experimenting means having to create mistakes deliberately in the first place.

Maybe you need to frame this as preventing mistakes: testing is less costly than doing something that will be a mistake, or continuing something that is a mistake without knowing it.

Actually, though, I think it is probably something else entirely: status and identity within the organization are at risk. Experiments, the very word “experiments,” signal some failures; or else, why wouldn’t you boldly go forward and do it in the first place? Who wants to be associated with failure, or with not knowing enough to know it would fail in the first place? Hindsight after the experiment is perceptually very clear.

As a consultant, I find Dan’s generalization about consultants a little… well… unfounded. Not all of science stems from hard experimentation (where would humankind be if all advances and innovation came solely from experimentation?). Surely experimentation plays an important part, but many times one has to decide which experiments to run (unless one wants to try everything imaginably possible!).

As a consultant, I think my work is far from overpriced overconfidence, and recommendations come from many things besides intuition (and hardly ever from intuition alone). More often than not they come from experience, not at the company itself but at similar places, just as an experiment can lead to conclusions applicable to a similar environment in the same company later on. And more often than not, recommendations lead into an “experiment” or “pilot” phase.

And like Dan says, more often than not, many companies are afraid to take that step.

But that’s just what makes innovative companies what they are: the ones willing to take the step into the unknown to get it right. I can’t remember Apple experimenting with the iPod…

(A them-vs.-us generalization to strengthen a point is a common mistake. More often than not, experimentation faces the fear of change, not a dependency on intuition.)

Your post is very timely. Some 2010 survey results put out by Smart Brief reminded us of survey information they published in 2005, so we compared the two.

Turns out they were relatively the same; e.g., companies that planned to engage in social media still plan to engage in social media. Most of the other survey questions landed within a few percentage points as well, which means not much has changed in five years, other than companies confirming that they are still in a perpetual state of planning. Ho hum.

Bill asked, “where would humankind be if all advances and innovation came solely from experimentation?” I think that while intuition often provided the spark for invention, nearly all major advances came from actually doing the work, and in science more than anywhere else it’s obvious that experimentation is a requirement. Forget about immutable laws; even a simple hypothesis (which has more basis in observation than an intuition does) must pass through an experimentation phase to prove itself a sound theory. So, in my opinion, we would likely be just about where we are.

Dan, have you considered experimenting on others’ acceptance of experimenting?
Since your projects get canceled so often, maybe it would be a good idea to figure out which strategy would get corporations to try something new.

My guess is that they want the experiments to reassure them, but they want the experiments done on someone else. They want you to come back after you have tried it on someone else (maybe their competition) and succeeded (although that would be corporate espionage).

Maybe if you told them that the experiment had already been done, and that you need to run it only to find out which kind of “solution” among a set of possibilities they need (a sort of personality test), with no other way to find out, they would be more willing to try. If you convince them that they are unique, they will be willing to take more risks.
