Measurement

February 11, 2010

2009 is long gone and the results of your Q4 PR efforts are finally in. Let’s say you had 100 media placements during this time period and you want to know how you did. Before you even read the executive summary of the PR measurement report, what can you expect? The good, the bad, or the mediocre?

The correct answer is: all of the above.

But many of us fall into the trap of letting our expectations be shaped by extreme events: “We had a great hit in the NYT this quarter. That means we did great!” or “The WSJ article ripped us apart; we’re dreading the report findings.” We tend to focus too much on one or two good or bad eggs out of a whole basket, and wrongly so. Now don’t get me wrong, it’s important to identify and be mindful of the outliers, but if you’re measuring the overall success of a PR initiative, it’s the central tendency of the effort that really matters.

To illustrate, let’s go back to my initial question: If you had 100 media mentions in Q4 of 2009, what should your expectation be? I can bet you that, on average, you’ll see the following: 16 poor hits, 68 mediocre hits, and 16 exceptional hits. How do I know this? It’s not magic; it’s the normal distribution: about 68 percent of observations fall within one standard deviation of the mean, leaving roughly 16 percent in each tail.
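The 16/68/16 split is easy to check by simulation. The sketch below is hypothetical: placements get a made-up normally distributed “quality score” (mean 50, standard deviation 10 are illustration values only), and we count how many land more than one standard deviation below, within, and above the mean.

```python
import random
import statistics

random.seed(42)

# Simulate a large number of placement "quality scores" drawn from a normal
# distribution. The mean and standard deviation are made-up illustration
# values; only the resulting proportions matter.
scores = [random.gauss(50, 10) for _ in range(100_000)]
mean = statistics.mean(scores)
sd = statistics.stdev(scores)

poor = sum(s < mean - sd for s in scores)          # more than 1 SD below
mediocre = sum(mean - sd <= s <= mean + sd for s in scores)
exceptional = sum(s > mean + sd for s in scores)   # more than 1 SD above

n = len(scores)
print(f"poor: {poor/n:.0%}, mediocre: {mediocre/n:.0%}, "
      f"exceptional: {exceptional/n:.0%}")
```

Scale those proportions down to 100 placements and you get roughly 16, 68, and 16.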

February 03, 2010

Here’s a brilliant idea: I’ll put up a sign for one day near the Empire State Building. The sign will say “Check Out Peppercom’s Blog!” Since on any given day about 13,000 people visit the building, and each person has an upper bound of about 150 people in their social circle who they’ll talk to about my sign, that means I can expect 1.95 million impressions in one day! Astounding, isn’t it? Especially if you’re one of those people who looks at a half-empty glass and calls it a river dam; then the math definitely makes sense.

The PR industry’s struggle for credible measures has, paradoxically, reduced its credibility. Measures like impressions, built from circulation numbers and a mysterious multiplier, are largely to blame. It’s not so much that circulation numbers are misleading; it’s the multiplier, a factor ranging from as low as 2.5 to 8.0 used to inflate audience exposure, that grossly exaggerates and distorts already questionable figures.
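The sign thought-experiment’s arithmetic, spelled out. Both inputs are the post’s own assumed figures, not measured data; the 2.5–8.0 range is the multiplier range quoted above.

```python
# Assumed figures from the thought experiment, not measured data.
daily_visitors = 13_000   # people visiting the Empire State Building per day
pass_along = 150          # assumed size of each visitor's social circle

impressions = daily_visitors * pass_along
print(f"{impressions:,} claimed impressions")  # 1,950,000

# Even the industry's more "conservative" multipliers produce claims that
# are smaller but just as unverifiable:
for multiplier in (2.5, 8.0):
    print(f"multiplier {multiplier}: {daily_visitors * multiplier:,.0f}")
```

The point is that the output is only as credible as the multiplier, and the multiplier is an assumption all the way down.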

September 24, 2009

Do you know the definition of “statistical power?” Don’t count on the guys at J.D. Power to know; it seems like they don’t really care about all that. So I’ll take a shot at this one: The power of a statistical test is the probability that it detects an effect that is actually there; in other words, the probability of correctly rejecting a false null hypothesis. So what’s the statistical power of a J.D. Power customer satisfaction study? Well, let’s leave it up to them to figure that one out.

In the meantime, I call your attention, once again, to the importance of statistical significance. Either it’s becoming an obsession of mine, or there’s something seriously wrong with the set of studies I’ve come across recently. To be fair, it’s probably a combination of the two, but that doesn’t excuse the latter.

I said it last week and I’ll say it again: Before you let anything affect your business plans, read the fine print. Look at this J.D. Power Customer Satisfaction Study, among many others, which ranks appliance retailers. Now look at the fine print if you have the eyes to read 4-point font. If not, I call your attention to the following statement: “Rankings are based on numerical scores, and not necessarily on statistical significance.” Interesting.
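Power is easy to estimate by simulation. The sketch below is entirely hypothetical and has nothing to do with J.D. Power’s actual methodology: two retailers whose true satisfaction scores differ by 0.2 points on some scale, 50 respondents each, compared with a simple two-sample z-test at the 5 percent level.

```python
import random
import statistics

random.seed(0)

def detects_difference(n, mu_a, mu_b, sigma):
    """Run one simulated study: sample n satisfaction scores per retailer
    and apply a two-sample z-test (normal approximation) at alpha = 0.05."""
    a = [random.gauss(mu_a, sigma) for _ in range(n)]
    b = [random.gauss(mu_b, sigma) for _ in range(n)]
    se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96

# Power = share of repeated studies that flag the (real) 0.2-point gap.
trials = 2_000
power = sum(detects_difference(50, 7.0, 7.2, 1.0) for _ in range(trials)) / trials
print(f"estimated power: {power:.2f}")
```

With these made-up numbers the estimated power comes out well under 50 percent: most studies of this size would miss a real difference, which is exactly why rankings “not necessarily based on statistical significance” deserve a skeptical read.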

August 19, 2009

Peppercom recently conducted a social media audit to uncover the “white space” areas of opportunity for a client. In the process, we collected data on over 1,200 mentions or social media “hits” for industry competitors, measuring their share of voice vis-à-vis various messaging categories and sentiments. After examining data on Twitter and blogs primarily, as well as Facebook, YouTube, Ning, LinkedIn, Delicious, and MySpace, we decided to do a quick experiment and answer the following question: For each competitor, does a strong presence in one social media channel translate, correlate, or spill over into a strong presence in another social media channel? The answer was “yes.”

To answer the question we looked at blog and Twitter activity for three competitors. We graphed the number of Twitter and blog “hits” over the timeframe analyzed, and generated the following three graphs. Do you see any relationships?

August 10, 2009

How many times have you heard this question: “Should we measure outputs or outcomes?” The discussion of measurement in PR is flooded with a debate over which is a “better” metric: outputs or outcomes. Can you guess what the conclusion always is? You got it right: it’s “both.”

So why the debate? Well, in essence, it’s a discussion about the opportunity cost of measuring each, whether in financial resources or time, and measuring outputs is usually considered “cheaper” than measuring outcomes. But as technology and software develop and data becomes more readily available, the discussion will lose steam. Until then, it’s a valid debate only if you consider outputs and outcomes to be perfect substitutes.

July 09, 2009

It turns out Andrew Gelman, a professor of statistics at Columbia University, was right to cry out about the use, misuse and abuse of graphs in academic research. It seems to me that businesses aren’t much better. To illustrate, let’s look at this typical problem as a 6th grade math assignment:

June 09, 2009

Numbers are our best friend, unless, of course, they’re used irresponsibly. The BMI, or body mass index, is one of the most widely used (or misused) mathematical formulas out there. Why? Because it generates a magic number that feels scientific, accurate, and easy to understand. And like most formulas, it plays a key role in decision-making. Too bad it’s complete garbage.

Nowadays, there’s a formula or index for virtually everything out there, and, like the BMI, most claim accuracy beyond their statistical power. Unfortunately, the PR industry, one that has traditionally struggled with quantifying qualitative data, isn’t immune to this predisposition. So it’s important for all of us to approach “diagnostic” measures, like the BMI, with a critical eye.

One doesn’t need a course in multivariate calculus to understand why the BMI measure is nonsense. Just look at the infamous formula:

BMI = weight in pounds / (height in inches x height in inches) x 703

First of all, it leaves out measures such as waist size, as well as the importance of relative densities of muscle, bone, and fat. But let’s ignore those for a minute and assume they’re irrelevant for the sake of simplicity. Just look at the formula. Why is height in inches squared? Is there any scientific reasoning behind squaring one’s height? What about the random 703? Where did that come from?
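For what it’s worth, the formula’s two “mysteries” do have answers, even if knowing them doesn’t rescue the metric: the squared height comes from Quetelet’s original kg/m² definition, and 703 is simply the factor that converts that metric formula to pounds and inches (0.45359237 kg per lb divided by 0.0254² m² per in² is about 703.07). A quick sketch:

```python
def bmi_metric(weight_kg, height_m):
    # Quetelet's original definition: kg / m^2.
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    # 703 isn't numerology: it converts the metric formula to lb and in.
    # 0.45359237 kg/lb divided by (0.0254 m/in)^2 is approximately 703.07.
    return weight_lb / height_in ** 2 * 703

# Same person measured in both unit systems gives (nearly) the same number:
print(bmi_metric(70, 1.75))        # ~22.86
print(bmi_imperial(154.3, 68.9))   # ~22.85 (154.3 lb ≈ 70 kg, 68.9 in ≈ 1.75 m)
```

So the 703 is defensible arithmetic; the real objections are the ones above, namely everything the formula ignores.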

April 08, 2009

I randomly (pun intended) came across this rant by Andrew Gelman, a professor of statistics at Columbia University, on the use, misuse and abuse of graphs. Now if you’re too lazy to read the entire post, this pretty much sums it up:

Graphs are gimmicks, substituting fancy displays for careful analysis and rigorous reasoning. It's basically a tradeoff: the snazzier your display, the more you can get away with a crappy underlying analysis. Conversely, a good analysis doesn't need a fancy graph to sell itself…[I]n writing this piece right here have I realized the real problem, which is not so much that graphs are imprecise, or hard to read, or even that they encourage us to evaluate research by its "production values" (as embodied in fancy models in graphs) rather than its fundamental contributions, but rather that graphs are inherently a way of implying results that are often not statistically significant.

Given his credentials, I may not be in a position to challenge his point of view, but I can’t help but disagree with him on a few things.

September 30, 2008

The Dow Jones industrial average is a misnomer -- it’s an unreliable representation of the broader market. The media can’t get over its 777.68 point (6.98 percent) drop yesterday, which is understandable as it’s the most popular index. But it’s misleading. And for those affected (that’d be all of us), it might make sense to use a more accurate yardstick.

Take a look at the broader indices: The S&P 500 dropped 106.59 points, or 8.79 percent. The Nasdaq Composite lost 199.61 points, or 9.14 percent. Does this mean the stock market plummeted more than just 6.98 percent? You bet.

The Dow has many big problems. Let’s revisit the basics: It’s based on only 30 companies; there’s no way that fully reflects economic trends on Wall Street and Main Street. And it’s price-weighted, meaning a one-dollar change in any component’s share price moves the index by the same number of points, regardless of the company’s size. But not all stock price changes are created equal. Since when does relative size not matter?
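The price-weighting complaint is easy to see with two made-up companies: a small firm with an expensive stock and a giant with a cheap one. All the prices and share counts below are hypothetical.

```python
# Two hypothetical stocks: everything here is made up for illustration.
stocks = {
    # name: (share_price, shares_outstanding)
    "SmallCo": (200.0, 10_000_000),    # high price, small company
    "BigCo": (20.0, 5_000_000_000),    # low price, huge company
}

# Price-weighted (Dow-style): each stock's weight is just its share price.
price_total = sum(price for price, _ in stocks.values())
# Cap-weighted (S&P-style): weight is price times shares outstanding.
cap_total = sum(price * shares for price, shares in stocks.values())

for name, (price, shares) in stocks.items():
    print(f"{name}: price weight {price / price_total:.0%}, "
          f"cap weight {price * shares / cap_total:.0%}")
```

In this sketch the tiny company dominates the price-weighted index (about 91 percent of it) while being roughly 2 percent of the cap-weighted one, which is the Dow’s distortion in miniature.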

There’s clearly a disconnect between reality and the 112-year-old index, and yesterday’s stock market (or bailout) failure makes it evident. So when using any index, make sure you understand the math behind it. And if you don’t or can’t, make sure it reflects what you’re trying to measure. More often than not, you’ll find the most popular metric to be the most misleading.

Conflicts Policy

Everything on this blog is the opinion of the authors and does not necessarily represent the views of Peppercom or its clients. Some posts may contain references to businesses or people that Peppercom or its clients work with or have worked with, and in such cases we make an effort to point out those connections in the posts. We also may choose not to write about subjects or events that may relate to or affect Peppercom clients.