
Why Experimenting Beats Benchmarking

In his excellent new book Why the West Rules – For Now Ian Morris tells many great stories while trying to explain the trends in human history from around 14,000 BC to now. One of them jumped out at me – the story of the rise of Portuguese sea power on the back of guns in the early 16th century (emphasis mine):

Dozens of Portuguese ships followed in da Gama’s wake, exploiting the one advantage they did have: firepower. Slipping as the occasion demanded among trading, bullying, and shooting, the Portuguese found that nothing closed a deal quite like a gun….

Their tiny numbers meant that Portuguese ships were more like mosquitoes buzzing around the great kingdoms of the Indian Ocean than like conquistadors, but after a decade of their biting, the sultans and kings of Turkey, Egypt, Gujarat and Calicut – egged on by Venice – decided enough was enough. Massing more than a hundred vessels in 1509, they trapped 18 Portuguese warships against the Indian coast and closed to ram and board them. The Portuguese blasted them into splinters.

Like the Ottomans when they advanced into the Balkans a century earlier, rulers all around the Indian Ocean rushed to copy European guns, only to learn that it took more than just cannons to outshoot the Portuguese. They needed to import an entire military system and transform the social order to make room for new kinds of warriors, which proved just as difficult in sixteenth-century South Asia as it had been three thousand years earlier, when the kings of the Western core struggled to adapt their armies to chariots.

Here is my take on the lesson of this story: benchmarking doesn’t work.

The problem with benchmarking is the basic assumption that a particular tool or process will function in the same way in your firm as it does in its original context. This is rarely true.

Just as the kings around the Indian Ocean needed not just guns but entirely new military processes and personnel, firms trying to change can’t just copy a tool that a successful firm uses, such as Google’s 20% rule (discussed here); they need to support the tool with different processes and people.

So rather than trying to copy tools through benchmarking, you are better off putting in place a system that supports experimentation. Matt Perez raised this point in a comment on yesterday’s post:

“Bad ideas come from bad structures. One of the best ways to eliminate bad ideas is to build new, better structures.”

But how would you know if they are “better”? I would say that to show bad ideas up for what they are, build new structures and EXPERIMENT like your life depended on it (because it does).

As discussed yesterday, one of the difficulties in managing in uncertain environments is that hierarchies often don’t function well in these circumstances. They respond slowly to change.

This is important to innovation, because it means that you can’t simply say “we need to be more innovative” in response to competitive pressures. You actually have to change the way you do business – much as the kings around the Indian Ocean had to change their entire military systems in response to pressure from the Portuguese.

I’ve talked about Steve Denning’s ideas in this regard a few times. He has a set of prescriptions for changing your management systems to deal with such threats. They are well worth exploring.

Organize work in short cycles: As the authors of The Power of Pull point out, one proceeds “by setting things up in short, consecutive waves of effort, iterations that foster deep, trust-based relationships among the participants… Knowledge begins to flow and the team begins to learn, innovate and perform better and faster.… Rather than trying to specify the activities in the processes in great detail…specify what they want to come out of the process, providing more space for individual participants to experiment, improvise and innovate.”

When you are faced with competitive threats that change the playing field, you can’t respond by simply doing more of what you’re currently doing. Furthermore, you usually can’t copy what the new competitors are doing either – their systems and contexts are often too different, which means that even if you can copy what they’re doing, it won’t work the same way.

To succeed, you need a system that supports experimentation – that’s what really provides firepower.

Disclosure of Material Connection: Some of the links in the post above are “affiliate links.” This means if you click on the link and purchase the item, I will receive an affiliate commission. Regardless, I only recommend products or services I use personally and believe will add value to my readers. I am disclosing this in accordance with the Federal Trade Commission’s 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising.”

14 thoughts on “Why Experimenting Beats Benchmarking”

My takeaway from this is not that benchmarking doesn’t work, but that 1) Benchmarking doesn’t work “When you are faced with competitive threats that change the playing field”, and 2) If you want benchmarking to work (e.g. because you’re trying to imitate a non-disruptive innovation) you need to invest a lot in understanding the inter-dependencies between the practice you want to benchmark and the other elements of the firm that might make it work. Yep, learning about this will require some experimentation 😛

Benchmarking has always seemed a bit of a “cover” to me. A firm says to itself, let’s try to be “no worse” than the firm that claims to be the best at a process. But while the benchmarking and improvements are underway, the leading firm is already changing again, so by benchmarking you lock yourself into a constant catch-up paradigm rather than a leadership paradigm. Plus, you never discover anything for yourself; you simply rely on the experimentation of others. That has to be a risky proposition.

Does it have to be benchmarking versus experimentation? Benchmarking doesn’t work when one company (or country) is trying to duplicate another’s success. It does work when it is used to establish a control for measuring the effectiveness of experimentation within an organization. If you don’t benchmark your starting position, how do you know if you are advancing?

It is a rare business experiment that is so overwhelmingly successful that it doesn’t need a reference point. Most often, success is defined in baby steps, not leaps. Validating progress by tangible benchmarks keeps people motivated and moving forward.

It’s always frustrating to me when people ask, “How do we compare to your other clients?” because they are benchmarking to the wrong metrics. The question they should be asking is “How do we compare to our history and what can we do to improve?”

I agree that internal benchmarking is important, and it definitely plays a role in experimentation.

However, I usually run into the same circumstances you do – people wanting to know how they compare to others. That’s the real problem I have with benchmarking: when someone tries to bring in something from outside, the context is often too different.

I put to you that it can also be an issue of benchmarking AND analysis. Benchmarking has its flaws, no doubt. But having said that, Cohen and Levinthal’s concept of “absorptive capacity” may provide some additional insight.

You need to analyse what you benchmark. Obviously, merely comparing what the others have and you don’t will not suffice. But assessing one’s own ability to actually use that insight is key.

The analysis part could, e.g., be informed by something as simple as the RBV – What resources and capabilities does an organisation have, and how can those be leveraged?

What’s more, benchmarking and experimentation should not be considered as extremes of a continuum.

That’s all reasonable – but it’s not what most organisations do when they “benchmark”. I agree with both you and Debra in thinking that there are ways to make something like benchmarking work – I just think for the most part we’re not there yet.

I agree that there is no substitute for experimentation. However, the question arises: Where do we begin? That is where benchmarking comes in. I also agree that benchmarking (sometimes called best practices) may not be the best place to begin experimenting. As the Portuguese story shows, the practice may not be culture-friendly. Do we have any better option? I like what Chip & Dan Heath suggest in the book “Switch”. They call it “following the bright spots”. It involves asking the question: “What is working well in *our* system? Can it be cloned?” Note that unlike benchmarking, bright spots are culture-friendly. To find out more about bright spots, check out their article from Fast Company: Don’t solve problems, copy success

I also feel that Debra has raised a good point. We need a reference to check where we stand. Question is: what kind of reference point is good? I like Warren Buffett’s metaphor: weighing scale is better than voting machine. Why? Former doesn’t depend upon emotions (less ambiguous), the latter does. For example, I like what Intuit measures: number of experiments performed per year. In the HBR June 2011 article “Innovation catalyst” it says, “In 2006, TurboTax unit ran just one customer experiment. In 2010, it ran 600. Experiments in QuickBooks unit went from a few each year to 40 last year.”