The Ugly Truth About Benchmarks


Why Do We Want Benchmarks in the First Place?

As Garrison Keillor says every week, in Lake Wobegon, “all the kids are above average.” If we can simply be “above average,” then we know we’re pulling away from mediocrity. And that’s what we want with benchmarks — we want to know what “average” is so that we know the exact height of the measurement bar that, if we clear it, we can claim success (if not necessarily supremacy). It’s something to aim for that must be attainable, because others have attained it.

We’re surrounded with benchmarks in our personal lives, too: doctors tell us how our weight, blood pressure, and cholesterol compare to benchmarks for healthy people of the same age, gender, and height; standardized test results in schools are compared to statewide benchmarks; salary surveys tell us (generally in a flawed way) benchmarks for pay for others in our field. We’re used to benchmarks, and we want to use them to set targets for the key performance indicators (KPIs) for our marketing initiatives.

Benchmark = Target…right?

All too often, I run up against someone who equates a benchmark with a target. That’s dangerous for two reasons:

Benchmarks are a reasonable sanity check, but targets should be driven by what success will really look like — where does a particular metric need to be in order to justify the investment required to get there?

If targets are solely driven by benchmarks, then it’s an easy (if faulty) deductive leap to believe that, in the absence of a benchmark, no target can be set.

So, resolved: benchmarks are not targets.

The Benchmarks We Most Want Are the Ones We Can’t Realistically Have

The easiest and, in most cases, most relevant and useful benchmarks generally come from your own historical data. If you’re considering an initiative that will improve a certain metric, then your track record with that metric is a fantastic baseline input into target-setting. Since that data is usually readily available, it gets used. It’s when a totally new initiative is launching — a Facebook page, a mobile app, a community contest — that we get the most anxious about what a “reasonable target” is and, therefore, launch a quest to find benchmarks.

The problem is that these are most often the benchmarks that are least likely to be available. Or, if they are available, there is so much variability inside the data set that it’s hard to put much stock in the data.

Even with something as massively established as email marketing, getting a reasonable benchmark for something as common as open rate has a lot of underlying variables mucking up the data:

The recipients of the email — an internal house list vs. a rented list, for instance

The specific industry and consumer type the emails target

The email platform in use and how it captures and calculates open rate

The basic deliverability of the emails included in the benchmark, as driven by content, email platform, and user type
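The third point above — how a platform captures and calculates open rate — is worth making concrete. The sketch below uses hypothetical campaign numbers (the counts and variable names are illustrative, not from the post) to show how the same send can produce two different “open rates” depending on whether the denominator is emails sent or emails delivered:

```python
# Hypothetical campaign numbers, purely for illustration.
sent = 10_000    # total emails sent
bounced = 800    # hard and soft bounces
opens = 1_840    # tracked unique opens

delivered = sent - bounced

# Some platforms divide opens by sends; others divide by deliveries.
open_rate_of_sent = opens / sent
open_rate_of_delivered = opens / delivered

print(f"open rate (of sent):      {open_rate_of_sent:.1%}")       # 18.4%
print(f"open rate (of delivered): {open_rate_of_delivered:.1%}")  # 20.0%
```

Two benchmarks built from platforms that disagree on the denominator are not directly comparable, which is exactly the kind of underlying variability that mucks up the aggregate data.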

If all of these factors are at work with something as established as email, then what does that mean for a relatively new and evolving medium like social media or mobile? Almost every time we launch a new Facebook page, we get asked what the “benchmark is for new fan growth.” In that case, the single biggest driver of fans — outside of brands that have a massive number of rabidly enthusiastic customers — is the promotion of the page, be it through Facebook advertising, through channels the brand already owns (email database, web site, TV advertising, etc.), or through paid promotion elsewhere. It’s an unsatisfactory reality…but it’s reality nevertheless.

Should We Just Abandon All Hope, Then?

There are some cases where relevant and appropriate benchmarks are available. For instance, Google Analytics provides benchmark data for common web metrics based on sites of “similar size” and in a user-selectable site category/industry. Twitalyzer can be used to gather benchmarks using all of the tracked users who fall into a given “community.” Email marketing platforms often do provide benchmark data by industry, but they can fall short on the critical “e-mail type” front. When benchmarks are available, by all means use them as an input!

In the absence of available benchmarks, meaningful targets can absolutely still be set. It’s just largely a matter of ferreting out stakeholder expectations. Expectations always exist, even when stakeholders claim they don’t:

Expectations almost always exist. In the (real) example illustrated above, I pointed out that, if there truly were no expectations, then there would have been no “shock.”

The expectations that exist may not be precise, but, with a little bit of probing, you can generally find a range: below it, the initiative will undoubtedly be judged as disappointing; above it, the initiative will certainly be judged a success. Starting with that range, narrowing it down as best you can, and getting agreement on that target range from all of the key stakeholders is just smart performance measurement.
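The range-based judgment described above can be sketched as a tiny helper. This is a minimal illustration, not anything from the post; the function name and the fan-count numbers are hypothetical:

```python
def judge_outcome(value, floor, ceiling):
    """Classify a metric against a stakeholder-agreed target range:
    below the floor is a disappointment, above the ceiling is a clear
    success, and anything in between warrants a conversation."""
    if value < floor:
        return "disappointing"
    if value > ceiling:
        return "success"
    return "inconclusive"

# Hypothetical: stakeholders agreed that 5,000-12,000 new fans in the
# first quarter is the range worth debating.
print(judge_outcome(4_200, 5_000, 12_000))   # disappointing
print(judge_outcome(13_500, 5_000, 12_000))  # success
print(judge_outcome(8_000, 5_000, 12_000))   # inconclusive
```

The point of the middle band is that it forces the stakeholder discussion up front, before the results are in.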

Hear, hear! Setting benchmarks as a range (below lowest is disappointing, above highest is successful) AND getting agreement from all the stakeholders are the two best points.

Tim, I wonder if you agree with this, from my own experience: The job of getting agreement from all the stakeholders can be painful and fun, and usually requires more discussion than you’d ever expect, resulting in everybody being a lot smarter about both analytics and the business.

I both agree, and, well … confess. I too sometimes search for benchmarks — not to use as targets, or because I think that’s what we’ll achieve, but for some information vs. nothing. Normally this is to model or forecast, in some way, the expected impact of something we’ve never done before, as one of many inputs into the equation.

Where possible, I’ll use any data we have over external benchmarks (for example, looking at other clients), but with no other option, I will resort to generic benchmarks.

Are they perfect? Heck no. Are they better than nothing? Yeah. But I give about 20,000 disclaimers about how we don’t know what the performance will be in our unique situation.

@Chris Absolutely! “Measurement” as an alignment mechanism is a powerful tool. I like the “painful and fun” description, as it describes the process well. And, the alternative is to *not* get this alignment up front and deal with “painful and NOT fun” after the fact when trying to determine whether the initiative was successful or not and what can be done to drive additional improvement.

@Michele I’m not saying that we shouldn’t try to find benchmarks. If they’re available, they’re a useful input. If they’re available and super-noisy, then “20,000 disclaimers” gets to part of the issue — people see a number, and they assume precision AND accuracy, no matter how many footnotes there are. If I’ve got an incredibly messy benchmark, then I’m going to want to present it side-by-side with some back-of-the-napkin assumptive modeling that illustrates the noisiness.
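The “back-of-the-napkin assumptive modeling” mentioned above might look like the sketch below: rather than quoting a single noisy benchmark, project the outcome under a spread of assumed rates so the noisiness itself is visible. All numbers and names here are hypothetical:

```python
# Hypothetical list size and assumed open rates; the benchmark value is
# presented alongside pessimistic and optimistic assumptions so nobody
# mistakes it for a precise prediction.
list_size = 50_000
scenarios = {"pessimistic": 0.10, "benchmark": 0.18, "optimistic": 0.25}

projections = {name: int(list_size * rate) for name, rate in scenarios.items()}

for name, opens in projections.items():
    print(f"{name:>11}: {opens:,} projected opens at {scenarios[name]:.0%}")
```

Presenting the range side-by-side with the benchmark does the work that footnotes and disclaimers usually fail to do.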

Something to ponder: Couldn’t every data point be used as a benchmark? If we’re comparing last month’s traffic to this month, aren’t we in fact using last month as our benchmark?

Like almost any tool, benchmarks in the hands of someone who understands and knows how to use them can be powerful. They provide points of reference that can stimulate meaningful conversation and can lead to deeper understanding.

In the wrong hands they (like a Skilsaw) can lead to significant blood loss.

@Matt That starts to head down the semantic argument path, which doesn’t really help much in my experience. If, when asked for benchmarks, I was able to say, “Take the first data point and consider that a benchmark,” and have the requestor depart satisfied, I wouldn’t have penned this post.

Tim has moved on from Analytics Demystified effective 12/31/2017 but his content lives on. If you have questions for Tim please send them to eric@analyticsdemystified.com directly and they will get routed.