March 2008

March 30, 2008

Twice in one day! In an amazing case of synchronicity (either that, or it's the start of the baseball season) Morley Safer interviewed Bill James on 60 Minutes tonight. Bill James is famous for applying statistics to baseball, and some credit him with turning the Red Sox into World Series winners. His big point has been to replace decisions made by gut instinct or traditional wisdom with decisions based on data. A lot like our big point about public relations measurement. --Bill Paarlberg

Well this morning the New York Times has an excellent piece concerning both baseball and statistics that, really, to fans of both fields, could hardly be more exciting: "A Journey to Baseball’s Alternate Universe." It describes a mathematical simulation of the entire history of baseball carried out to discover how likely Joe DiMaggio’s 56-game hitting streak really is. --Bill Paarlberg

March 27, 2008

As always, IPRRC in Miami was boot camp for your mind. If you can survive three days of stimulating conversation, debates until dawn washed down with quantities of wine, and 100+ presentations to comprehend, you can probably survive anything. I still haven't recovered fully, but at least I've sobered up enough to sum up a few observations. In another month or so, all papers will be published on the IPRRC website. In the meantime, here's a wrap-up.

IPRRC vs. South by Southwest Digital

First of all, at the same time the PR crowd was meeting in Miami, the social media crowd was gathering at South by Southwest Digital in Austin, Texas. Thanks to the magic of Twitter, there was even some cross-pollination of ideas between the two groups. But it was ironic that, whereas the folks in Austin were subjected to traditional people-on-stage-with-PowerPoint, those crazy PR people in Miami were gathered around tables arguing with the presenters. Here's what a typical table looked like:

In the end the effect was the same; only the tools were different. Attendees in Austin initially complained to each other via Twitter, and then finally shouted down the interviewer during the keynote with Facebook CEO Mark Zuckerberg. In contrast, Miami attendees gathered around each presenter at separate tables, constantly questioning and challenging them. In both cases, it was the dialog among attendees that proved the greater value.

But that was about all the two conferences had in common. The IPRRC had distressingly few presentations about social media; fewer than ten percent of the 85 total papers discussed the impact of blogs or bloggers. What is frustrating is that it is precisely in the arena of social media that real, solid research is most desperately needed. The vast majority of discussions focused on testing existing theories of crisis, relationships, and organizational structures.

Another significant theme of the conference was how PR education is organized. There were some great conversations about whether PR should be taught in business schools, or business taught in PR schools. The answer is clearly both: PR people need to be taught to think like business people and become part of the overall value proposition in a company, but at the same time business people need to better understand the role that PR plays in corporate health, welfare, and reputation.

For anyone wanting to understand PR in other cultures, there was plenty of information, including presentations on PR in Turkey, Slovenia, Brazil, and Japan, as well as a comparison of PR in Western and Eastern Europe.

Ben-Piet Venter's paper on making PR a support function, similar to IT, made a lot of sense. And of course there was the usual conversation about how to "get into the dominant coalition" and get them to listen to you. The answer seemed to be: Stick it out long enough for a crisis to happen, so they'll see how indispensable you really are.

Most of all, I was left with lingering excitement about the next generation of PR folks now coming up through the ranks. These are recent graduates who understand the power of social media and the fruitlessness of the old command-and-control structure. People who inherently look for data on which to base decisions, and rely less on "It's the way it's always been done" and more on "This is the decision that the data supports." For them the route to the dominant coalition and the proverbial "seat at the table" will be quick and direct, because they'll always have the data to support their decisions.

March 26, 2008

(Okay, so I'm hopelessly prejudiced. I just got my beautiful new HP laptop pre-loaded with Vista and I hate it. It crashes about five times an hour, and my experience so far with Vista guarantees that my next laptop will be a Mac.)

That having been said, we are naming Microsoft our Measurement Menace of the Month for their "Engagement Mapping" product. Not that we don't believe in measuring engagement (we really like that whole concept; see this recent article), but Microsoft gets the award for throwing its weight into the whole discussion without providing any transparency or information as to how they "map engagement," or any details at all as to what is behind this black box. --KDP

When three or more people forward you the same article saying, "You have to read this!" -- it makes you pay attention. That's what happened with the New York Times Magazine piece about John List and his research on why people give to philanthropic organizations. And, since KDPaine & Partners is doing quite a lot of work in that area, we were thrilled with his main premise: Non-profits need to be making decisions based on data, not on long-standing beliefs that may or may not still hold water.

Professor List brought together results from several research projects to demonstrate some interesting, if unexpected, conclusions. One was that, for stimulating donations in fundraising campaigns, challenge grant seed money works better than matching grants.

Of course we'd like to factor some PR metrics into his research to figure out what impact headlines have on charitable giving. We know PR works with the ASPCA (see this article from last fall), but we'd love to see it factored into a broader range of philanthropic efforts. Nonetheless, he's this month's Measurement Maven for the rigorous approach he brings to the subject. --KDP

"Our online life is often used as a frustration outlet... Sometimes, it just feels good to be somebody else online, or to support the candidate that it is taboo to support in your small town. Do stuff you'd never do in real life. The online world resembles a chimerical projection of our social fantasies."

This is going to take a bit of study to resolve, and my hunch is that sometimes online activity does predict offline behavior, and sometimes it doesn't. Here's some data that bears on the question -- YouTube views and comments compared to voting behavior:

At least in this case, online activity can and does closely correlate with offline behavior. --Bill Paarlberg

This article is condensed from a paper submitted to the 11th Annual International Public Relations Research Conference.

"If we can put a man in orbit, why can't we determine the effectiveness of our communications? The reason is simple and perhaps, therefore, a little old-fashioned: people, human beings with a wide range of choice. Unpredictable, cantankerous, capricious, motivated by innumerable conflicting interests, and conflicting desires."

Modern technology has come up with many good ways to measure what human beings read, watch, and see, but comparatively few ways to measure -- as my father said half a century ago -- those "unpredictable, cantankerous, capricious... conflicting interests and conflicting desires." The recent rise in the influence of social media has turned the entire communications paradigm upside down. Counting column inches and eyeballs is irrelevant when a single YouTube video enjoys a larger audience than Monday Night Football, the average consumer is bombarded with 5,000 messages a day, and 90% of CEOs say they are dissatisfied with how their CMOs measure results.

The basic problem is that we have years of research that says that if you "expose" 1 million consumers to a message (or buy 20 GRPs) you will sell X number of cases of shampoo, soda, or soap. We have no data that says if 1 million people download your YouTube video, you'll sell any shampoo at all. What we now want to know is how social media affects users' behavior.

Engagement: The Relationship Between the User and the Brand

Like most other buzzwords, "engagement" has come a long way from its original meaning of "an agreement to marry." Essentially, it started with the notion that a website or a blog was "engaging" enough to get a reader to begin to develop a relationship with the brand.

As more and more advertisers and media types realized that hits really do stand for "How Idiots Track Success," and that even unique page views were suspect (given the enormous variation in such statistics), people began to speak of measuring engagement -- not just how "sticky" the site was, but the extent to which it enhanced the relationship between the user and the brand. Advertisers now want to measure a site's ability to create an experience that earns a visitor's loyalty and, with luck, their business. As a result, "engagement" now means everything from the number of times a visitor returns to the site to the time spent online.

Another way to think of engagement is as the fourth step in a five-step process that the individual user goes through:

1. Finding, usually via search

2. Lurking

3. Participation

4. Engagement

5. Relationship, or outcome

Engagement According to Scoble

Popular blogger Robert Scoble (2006) has suggested that engagement is a valid measure of user interaction and the authority of Internet-based social media channels. That is, engagement is a way to determine whether you are really having a dialog, or are just yelling ever more loudly. His premise is that by measuring activity on a blog or social media website as a sign of engagement, you can predict users' behavior. In other words, if they come back to a corporate blog over and over again, they'll eventually buy. If it's a YouTube video, and they watch, rate, or comment on it, they are more likely to pass it on to their friends and maybe even take some other action as a result.

Brian Haven of Forrester Research picked up on Scoble's premise and proposed measuring engagement based on a variety of tangible and intangible factors, including links, trackbacks, comments, and the frequency, sentiment, and tonality of comments. He defines engagement as the level of involvement, interaction, intimacy, and influence an individual has with a brand over time:

Involvement—Includes web analytics like site traffic, page views, time
spent, etc. This essentially is the component that measures if a person is
present.

Interaction—This component addresses the more robust actions people take,
such as buying a product, requesting a catalog, signing up for an email, posting
a comment on a blog, uploading a photo or video, etc. These metrics come from
e-commerce or social media platforms.

Intimacy—The sentiment or affinity that a person exhibits in the things
they say or the actions they take, such as the meaning behind a blog post or
comment, a product review, etc. Services such as brand monitoring help track
these types of conversations.

Influence—Addresses the likelihood that a person will recommend your
product or service to someone else. It can manifest itself through brand loyalty
or through
recommendations to friends, family, or acquaintances. These metrics mostly
come from surveys (both qualitative and quantitative).
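To make Haven's four components concrete, here is a minimal sketch of how they might be combined into a single score. The component names come from the article; the weights, the 0-to-1 normalization, and the sample inputs are illustrative assumptions, not Forrester's actual method.

```python
# Hypothetical combination of Haven's four I's of engagement.
# Weights and the 0..1 normalized inputs are invented for illustration;
# Forrester does not publish a formula like this.

def engagement_profile(involvement, interaction, intimacy, influence,
                       weights=(0.15, 0.35, 0.25, 0.25)):
    """Weighted sum of the four I's, each pre-normalized to 0..1."""
    components = (involvement, interaction, intimacy, influence)
    if not all(0.0 <= c <= 1.0 for c in components):
        raise ValueError("each component must be normalized to 0..1")
    return sum(w * c for w, c in zip(weights, components))

# Example: a visitor who is present and interacts often, but whose
# sentiment is lukewarm and who rarely recommends the brand.
score = engagement_profile(involvement=0.9, interaction=0.7,
                           intimacy=0.4, influence=0.2)
```

The point of keeping the four components separate until the last step is that each answers a different question (presence, action, sentiment, advocacy), and a practitioner might weight them differently for different campaigns.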

Peterson's Engagement

Web analytics expert Eric Peterson, author of Web Analytics Demystified, Web Site Measurement Hacks, and The Big Book of Key Performance Indicators, has proposed alternative measures of engagement based on Web metrics. Peterson suggests that if you want to measure engagement, you need to measure stats like the following:

Percent increase or decrease in unique visits

Change in page rank -- e.g., a list of the top ten most popular areas and how it has changed in the last week

Percent of sessions on the blog or website that represent more than five page views

Percent of all sessions in the past month that represent more than five page views

Percent of sessions that are greater than five minutes in duration

Percent of visitors that come back for more than five sessions

Percent of sessions that arrive at your site from a Google search, or from a direct link from your website or another site related to your brand

Percent of visitors that become subscribers

Percent of visitors that download something from the site

Percent of visitors that provide an email address
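Several of these stats can be computed directly from a site's own session log. A minimal sketch follows; the record layout (visitor id, page views, duration, search referral) and all data values are hypothetical, not from Peterson's books.

```python
# Sketch: computing a few of Peterson's proposed engagement metrics
# from a hypothetical session log. Field names and data are invented.
from collections import Counter

sessions = [
    {"visitor_id": "a", "page_views": 7, "duration_sec": 420, "from_search": True},
    {"visitor_id": "a", "page_views": 2, "duration_sec": 40,  "from_search": False},
    {"visitor_id": "b", "page_views": 6, "duration_sec": 90,  "from_search": True},
    {"visitor_id": "c", "page_views": 1, "duration_sec": 15,  "from_search": False},
]

total = len(sessions)
# Percent of sessions with more than five page views
pct_deep = 100 * sum(s["page_views"] > 5 for s in sessions) / total
# Percent of sessions longer than five minutes
pct_long = 100 * sum(s["duration_sec"] > 300 for s in sessions) / total
# Percent of sessions arriving from search
pct_search = 100 * sum(s["from_search"] for s in sessions) / total
# Percent of visitors with more than five sessions
visits = Counter(s["visitor_id"] for s in sessions)
pct_repeat = 100 * sum(n > 5 for n in visits.values()) / len(visits)
```

Note that everything here comes from one's own server logs, which is exactly the limitation discussed next: you can compute these numbers for your site, but not for a competitor's.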

The problem with Peterson's metrics is that, for most organizations, that data is only available for their own site, not for competing sites, so there is no way to benchmark how "engaged" visitors are with one's own site vs. the competition.

While both Peterson and Haven contribute important ideas to the engagement discussion, I suggest that measuring engagement necessitates following the actions and desires of the customer.

An Engagement Index? Not Yet.

There is no such thing, yet, as an engagement index, but there has been a lot of talk about the possibility. Both Scoble and Peterson suggest that their metrics could be reduced to a single index, but they don't say how.

Most of the discussion on the topic is centered on the need for advertisers to quantify the impact of their online advertising. Microsoft's new black-box "Engagement Mapping" is designed to make advertisers on Microsoft websites more comfortable with their data (see our Measurement Menace Award for this month). Comscore's and Nielsen's efforts are designed to give more meaning to the numbers they provide advertisers.

Unfortunately, metrics that make advertisers happy are not necessarily very useful for other communications functions. As internal and external communications functions become more involved in social media, they too need a way to measure engagement, but numbers from Microsoft, Comscore, and Nielsen are only available for large consumer sites, not corporate blogs. More problematic is that those numbers do not factor in the newer, more popular social networking sites like Facebook, YouTube, and Twitter.

Engagement Is a Relationship

I suggest that "engagement" is just another way to say "relationship, but a minor one." So, to the metrics suggested by Scoble and Peterson, I suggest we add those from relationship theory. At some point, you just need to come right out and ask your audience:

"Do you trust us?"

"Are you committed?"

"Do you believe that we are committed to you?"

"Do we interact with you only out of necessity or a sense of reciprocity? Or are we working together to see each other succeed?"
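Answers to questions like these are typically collected on an agreement scale and averaged by dimension. Here is a minimal sketch; the 1-to-7 scale follows standard Grunig-style relationship instruments mentioned elsewhere in this article, but the dimension keys and all response data are invented for illustration.

```python
# Score a small relationship survey. Each respondent rates each
# dimension on a 1..7 agreement scale (7 = strongly agree), as in
# standard Grunig-style relationship instruments. Data is invented.
from statistics import mean

responses = [
    {"trust": 6, "commitment": 5, "mutual_commitment": 4, "communal": 5},
    {"trust": 4, "commitment": 4, "mutual_commitment": 3, "communal": 2},
    {"trust": 7, "commitment": 6, "mutual_commitment": 6, "communal": 6},
]

# Average each dimension across respondents
dimension_means = {
    dim: mean(r[dim] for r in responses) for dim in responses[0]
}
```

Tracked over time, a drop in one dimension's mean (say, trust) tells you not just that something changed, but which part of the relationship needs attention.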

Unless you incorporate relationship measurement into the mix, you end up with just data rather than insight. Because, while you can track behavior with increasing accuracy, all the web metrics in the world may not answer the fundamental question of "Why?" Why did they stop coming to your site? Why are they spending less time there? Or, more critically, why are they buying less? Without a true understanding of the nature of the relationship, you won't be able to do anything to improve once you find out what the problems are.

Which leads me to some final, unanswered questions: What is the difference between engagement and relationship? In fact, do we really know that engagement is something distinct -- distinct from web stats and distinct from relationship theory? Suppose we do use Grunig's questions to measure engagement; how do we know we are measuring "engagement" rather than "relationships"? If you think you know, please let me know.

March 24, 2008

Three New Measurement Studies Provide Crisis Control Tips
Research presented at the IPRRC has practical results.

by Katie Delahaye Paine

We've been saying forever, to anyone who'll listen, that PR measurement is not just about demonstrating ROI or proving value; it's really about having data on which to make better decisions. Never was that more evident than in the plethora of papers on crisis communications presented at this year's IPRRC in Miami. I personally listened to a dozen papers on the topic, and there were another dozen or so that I didn't even get to.

IPRRC researchers studied the impact on crisis communications of everything from involvement to intimacy. A lot of the findings fell into the "duh" category (as in "we professionals probably knew that already"), but it's always useful to have solid data to back things up. Here are three studies with some results that will come in handy.

1. The More They Know, the Angrier They Get

Yoonhyeung Choi and Ying-Hsuan Lin of Michigan State presented a paper on the Mattel product recall that has interesting implications for any consumer company under fire that is trying to manage customer expectations. Choi and Lin compared what moms and mommy-bloggers had to say about the toy recall to how the daily newspapers reported it. As it turns out, the four major newspapers studied blamed the Chinese manufacturers more than twice as often as the consumers did; the consumers were twice as likely to point their collective fingers at Mattel as at China.

As the crisis unfolded, and recall followed upon recall, the media's portrayal of China was consistently negative, and the reputation of Chinese manufacturers declined over time. However, among consumers, anger was more frequently directed at Mattel and its sister company Fisher-Price. What this tells us is that even though consumers may get their information from the media initially, their long-term opinions are formed less by the media and more by people like themselves. The bad news is that your customers are likely to ignore any of the mitigating factors that the media reports on, but the good news is that if you have highly engaged consumers, they may also ignore the pounding you're getting in the media.

Another conclusion from Choi and Lin is that engaged consumers are more likely than the media to dig deeper into a crisis, looking to go beyond the headlines to find the real issues at hand. And the more they know, the angrier they get. "Consumers with high involvement are more likely to scrutinize and elaborate crisis information and generate more counter arguments as they process the crisis information, as covered in newspapers."

The important conclusion the authors draw is that monitoring media in a crisis is no longer enough. You need a clear understanding of how your stakeholders are responding to the media, particularly as consumer-generated media continues to increase in awareness and credibility. More importantly, success should be measured by the speed with which one's headlines go away. Essentially, what Choi & Lin found was that the more media exposure a crisis got, the angrier the consumers got at the company. Lesson learned: Get the story out of the headlines and into the back pages as fast as you possibly can.

2. Good Relationships Mean Fewer Bad Rumors

An equally intriguing paper was presented by Hun-Jim Kang of Pennsylvania State, with co-authors Karina Garcia Ruono and, once again, Ying-Hsuan Lin of Michigan State. This study examined the manner in which rumors are spread, specifically the impact of relationships on the rumor mill. The authors started out with the premise that there are two kinds of rumors: "dread" rumors, which foretell something bad about to happen, and "wish" rumors, which foresee something good happening. The study compared the speed with which dread vs. wish rumors circulate, as well as the credibility of the different rumors.

Their hypothesis was that the spread of rumors is closely tied to the health of the organizational public relationships (OPR) behind the entity under discussion. OPR was defined according to the standard Grunig terms such as trust, control mutuality, satisfaction, and commitment, with each participant rating statements on a 7-point scale. The study was conducted on the campus of Michigan State among 109 undergraduates. With a mean age of 22, it may not be relevant if your target audience is senior citizens. Nonetheless, the findings were fascinating.

Not surprisingly, the propensity to spread dread rumors was significantly higher than the propensity to spread wish rumors, and the credibility of dread rumors was also higher. But the most telling information was on the impact of relationships: The health of your relationships has a great deal to do with the likelihood that people will spread nasty rumors about you. People with low relationship satisfaction were more likely to spread dread rumors than people with good relationship satisfaction. Additionally, people with good relationship scores were more likely to check out the validity of rumors before passing them on. The clear lesson is: if you want to squelch rumors, keep your relationships healthy and strong.

3. If You Are Innocent, Act Indignant

Then there were the really surprising and interesting findings of someone (I'm sorry, I've forgotten your name) who studied communications in the military. He concluded that when you are in a crisis and you are in the right -- i.e., your organization has not done anything wrong -- the public position most likely to generate a positive response from your target audience is an aggressive, proactive one. So if you need ammo to keep those lawyers quiet, you've got it. Again, something we always knew, but it's nice to have it verified.

Then go there and design your own. Even though those of us in social media measurement haven't yet quite defined "engagement," there must be some serious amount of it happening with this website. So how would you measure it? Count the number of custom logos on Scions? Or maybe on a random sample of cars? A survey on pride of ownership? Resale value?

(The website has some rather bizarre Terms of Use: the site is only "open" to legal residents of the U.S. over the age of eighteen. Good luck enforcing that. Reverse psychology, or lawyers run amok?) --WTP

March 17, 2008

How online measures of engagement have predicted recent primary results.

Ever wondered what the effectiveness of political lawn signs is? Supposedly, every lawn sign represents six votes for the candidate. Or maybe ten votes, depending on what you read. And there's a theory in political circles that if you can get someone to put out a lawn sign, then that person is committed enough not just to vote for you, but also to encourage his or her friends to vote for you as well. So each additional lawn sign means more than just one more vote; it means more of something even more valuable and a lot more difficult to pin down: more loyalty or commitment, or what we in communications call engagement.

To my knowledge, no one has ever done a scientific study of how lawn sign displays influence voting habits. But my completely unscientific study of New Hampshire lawns this fall more or less predicted the outcome of our First in the Nation primary: Everywhere you went there were lots of Obama and Ron Paul signs, and both did much better than the polls predicted.

Now let's transfer this scenario into the world of social media. Can online measures of engagement predict votes? I argue that they can, and have done so recently:

Our YouTube study of candidates showed Obama having a significant lead over Hillary Clinton, both in terms of viewership and in terms of the number of videos that were rated by viewers.

On Facebook we noted that there were some 500,000-plus groups supporting Obama, compared to Hillary's 100 or so. (In fact there are far more groups opposed to Hillary than there are in favor.)

In terms of Facebook's US Politics application, which has Facebook voters register their opinions on a variety of topics as well as on the candidates themselves, Obama has consistently maintained a 50- to 60-point lead over Clinton.

And in the end, Obama did better in the primaries than the early polls suggested. The primary results have proven that Obama has a stronger-than-expected following, as hinted at by the strong online engagement we found.

The point here about engagement is bigger than just politics. How and why is engagement a stronger or different measure than just impressions? If, by joining a group, rating a video, or following someone on Twitter, you are actually thinking or behaving differently than if you had just viewed an ad or a message, then measuring these signs of engagement is critical to every marketer. In order to hang on to advertising dollars, media companies will need to provide this data. And the good news is that the data is there; they just need to release it.

And finally, I can't help but see engagement as a kind of bridge between measuring outputs and measuring relationships. (Most of you are aware of my recently published book, Measuring Public Relationships; learn more here.) If you measure an output like impressions, you only know what has happened to an audience. But if you measure engagement, you are measuring what is done by an audience as the result of their relationship with your output. How does measuring engagement fit in with measuring relationships? That's a good question; let me know if you have the answer.

March 14, 2008

As everyone who has talked to me recently knows, I'm a serious social media evangelista. And as I travel around I'm constantly confronted with business people who say: "Social media? I don't get it! Who has time? Why should I bother?"

My simple answer has always been that blogging (or Twitter, or Facebook) is a way to engage in a conversation with your customers or your employees.

(And if, Mr. CEO, you do not want to engage your customers or your employees, you deserve to be fired. If not shot.)

But I've been told that that attitude isn't particularly helpful when you're talking to people who think Facebook is still "just a college thing," and who think "twittering" must be something dirty.

When I built my house, it was designed to be the capital of social capital, with a huge living room and kitchen and dining room so that lots and lots of people could gather there. And that's just what happened:

When I moved in, I threw a party and lots of people showed up. As always happens at parties, different groups of people gathered in different nooks and crannies of the house to talk about what was interesting to them.

A few months later, my best friend had her wedding at the house, saving her the expense and aggravation of renting a hall. Lots of people showed up to wish the newlyweds well and to share their stories and experiences.

A little later, we had a benefit for the Durham Public Library. The author Joyce Maynard spoke, the place was packed, and the library more than tripled its mailing list.

A lovely lady named Carol Shea-Porter decided to run for Congress, and we did a fundraiser at my house. She told the crowd, "I'm not asking for your money, I'm asking for your votes. If I have your votes, I don't need the money." She was outspent 5 to 1, but we now call her "Congresswoman Shea-Porter."

Later on, we had a house concert with a hundred or so fans of a local musician. He sold lots of his new CDs and added to his mailing list.

So are you getting my point? Social media is just a great big version of my living room. Any social network -- Twitter, Facebook, or MyRagan -- starts off as one big noisy place. But soon, people of like minds and like interests start to find each other, and sometimes they spin off and form their own separate groups.

So What's the ROI of My Living Room?

Which gets us to measurement. The hot topic right now -- and what everyone wants to know -- is: How do you measure the ROI of the effort you put into these groups?

To answer that, let's go back to my living room. The bride and groom's goal was to save money. The library's was to build its mailing list. The politician wanted votes, and the musician wanted to sell CDs. If they wanted to measure the ROI of their events in my living room, they'd compare their investment in effort to the particular return that was important to them.
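That comparison is simple arithmetic once you pick the return that matters to you. A sketch, with entirely invented numbers:

```python
# Classic ROI: net gain divided by what you put in.
# The event cost and the dollar value placed on the return are
# hypothetical; the hard part in practice is valuing the return.

def roi(return_value, investment):
    """Return on investment as a ratio (1.0 == 100%)."""
    return (return_value - investment) / investment

# Hypothetical: the library benefit cost $500 to host, and the
# tripled mailing list is valued, say, at $2,000 in future donations.
library_roi = roi(return_value=2000, investment=500)  # 3.0, i.e. 300%
```

The formula is trivial; the measurement work is in deciding what counts as the return (money saved, names on a list, votes, CDs sold) and putting a defensible value on it.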

And What's My ROI for Social Media?

I, on the other hand, use social media, and (often) my living room, to satisfy my thirst for knowledge and intellectual stimulation. Take Twitter: Do I appreciate the fact that traffic on both my blog and my website has picked up since I started Twittering? Absolutely. But mostly I Twitter because it makes me smarter. It allows me to follow interesting people whom I wouldn't ever meet in my living room, and who would never read my blog.

I mess around in Facebook because it's a great way to share stuff with my friends and colleagues, and it's a huge time saver when you're trying to pull together an event.

I blog because I like having conversations with people -- about my business, my services, and the world as a whole.

And that ends my longwinded explanation of why you or anyone might want to bother with social media. It depends on you and your goals. And as for measurement, it's the same as for any marketing effort: first be clear about what you're trying to achieve, then go about measuring it.

March 13, 2008

Salesgenie.com's $2.7 million mistake, and how a major retailer avoided making a similar one.

Note to Salesgenie.com's CEO: A little investment in research generates a huge return.

Vinod Gupta, the InfoUSA CEO who owns Salesgenie.com, learned this lesson the hard way when he had to stop airing his television commercial after its debut on Super Bowl XLII because of protests about its culturally insensitive presentation of animated pandas with Chinese accents. It was an expensive lesson: The cost of airtime alone for a 30-second spot on the 2008 Super Bowl was $2.7 million. View the ad here at YouTube:

Gupta, who wrote and produced the ad himself, told USA Today (February 11, 2008) that next year he'll test his ads with consumer focus groups. This year, he said, he only ran the ad by some friends. "None said it was offensive," he said.

A Major Retailer Discovers That Ideas Are Beautiful, but Reality Is Something Else

Vinod Gupta isn't the only top executive to become enamored with his own creativity. The Salesgenie.com ad debacle reminded me of another marketing campaign, developed by a retail chain's leadership team. Fortunately, those executives had the good sense to test the materials before going public. I'll never forget the shock experienced by the campaign's creators when they saw the reactions of their intended audiences.

By the time I was retained, a seven-figure budget had already gone into the creation of advertising and internal communications materials that featured real employees. It was my job to test the communications with focus groups to determine if they were relevant and would produce the desired reactions and results.

The stores
in the materials looked wonderful. Inviting. Immaculate. Smiling
employees welcomed you. Salespeople exhibited pride in their work.
Several mentioned their impressive employee benefits: health
insurance, 401K plans and paid vacations. Others spoke of their aspirations
to become store managers. These proud, joyful team members encouraged
the public to shop at their stores and consider working with them.

I showed the materials to the employees in the first focus group and asked the following questions:

"What's the main idea behind what you see?"

"Would you notice this ad if you were looking through a magazine?"

"How does this information make you feel about working there?"

Immediately I knew we had a major problem. The employees looked angry, confused, and resentful. Sample responses included:

"Those employees look way too happy. No one in my store smiles all the time."

"These ads make me not want to go to work tonight... The benefits they are talking about don't exist. I've never heard of them."

"These photos depress me. My store doesn't look anything like that. These are just pretty pictures. If it was really like this, I'd be proud to work there!"

The employees
filed out. As I prepared for the second group, I popped into the
adjoining room where a team of marketing and ad agency executives
was watching through a one-way mirror. The creative director
had a light film of sweat glistening on his forehead. No one looked
at me. They were
engaged in an intense discussion that absorbed all of their attention.

I showed
the materials to the next focus group, which consisted of seven customers.
I asked:

"Any
immediate thoughts or feelings?"

"Is
the ad clear?"

"How
does this make you feel about shopping there?"

The customers
were equally incredulous:

"The
ad looks great, but the store I go to in my neighborhood always
has stuff on the floor."

"I
like the way the employees seem to care about their customers.
But if I go back to the store in my area and get the same shabby
treatment
I always get, I'll be mad. They're tricking me."

"If
the employees really receive all of these benefits, why do they
look so miserable?"

Again
I ducked into the next room to confer with my clients.
Now the creative director was openly mopping his forehead and the
marketing director had gone ash white. He whispered to his assistant, "Haven't
these benefits gone national?"

The assistant
whispered back, "Guess not."

Where
the Reality Hits the Road

I said
to the marketing team, "For now, the only thing we can do
is to proceed with these focus groups and instruct the participants to
react to the ads as if they are true. Our goal today isn't to poll
them on their actual experiences in your stores; it's to get
their responses to this campaign."

The team agreed that this was the best use of our time; however, it
was clear that there would be some serious accountability checking
back at the home office.

I proceeded
with the same questions to the remaining 15 focus groups. Then my
team and I summarized our findings in a written report that we
presented to the client two weeks later.

Upon
reviewing the report, the client called me and said, "I must
admit, this hurts. We spent a lot of money on those materials, but
I'd rather know this now than after rolling out a national, multi-million
dollar campaign."

As painful
as the findings were, they served a valuable purpose.

They
spared the retailer from the embarrassment and potential liability
of launching a campaign that promoted employee benefits only offered
in certain regions of the country.

They
pointed out that the store chain had locations that were inconsistent
with the quality, store appearance and service being advertised.

They identified that this could indeed be a powerful campaign
once the organization fixed these issues. After all,
both the customers and the employees indicated that they liked real
employees being featured in the campaign and they were attracted
to the stores
that were depicted in the ads.

The overriding
benefit of testing your marketing communications is ensuring that
your communications are meaningful to key constituents.
In other
words, the resources that you devote to research now will pay off
in spades when you move forward with communications programs and
materials
that hit the mark. By ensuring that your marketing is compelling,
you can stop pursuing, and start attracting, your intended audiences.

March 12, 2008

1. Set
up Google
Analytics on your blog to find out how many repeat
visitors you have. How many pages per visit do they check out?
How many go
back more than 3 times a week? How many go back and spend more
than a second
or two on the site?

3. Go
to xinure and
enter the URL of your choice to find out how well it is doing in
search engines, links,
social bookmarks and a whole bunch of other stats.

4. With
many of the leading blog providers like TypePad, check your stats
to find out how many people have subscribed, and how many visits
per
day you're receiving.

5. What's
the Conversation Index (the ratio of postings
to comments)? In the blogosphere any comment is a good comment because
it shows that people are engaged enough in what you are saying to
take the time to respond.
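The Conversation Index above is a simple ratio, and you can track it yourself with nothing more than your post and comment counts. Here is a minimal sketch in Python; the counts used in the example are hypothetical, and in practice you would pull the real numbers from your blog platform's statistics page.

```python
def conversation_index(posts: int, comments: int) -> float:
    """Ratio of postings to comments.

    A lower value means each post is drawing more comments,
    i.e. readers are more engaged in the conversation.
    """
    if comments == 0:
        return float("inf")  # no conversation started yet
    return posts / comments

# Hypothetical example: 40 posts that drew 120 comments in total.
print(round(conversation_index(40, 120), 2))
```

Tracked month over month, a falling index is a sign that your audience is engaging more with what you write.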

6. If
you have posted a video on YouTube, or a photo on Flickr, check to
see how many people have rated it, and/or commented on it.

7. If
you have a presence on Facebook, how many people have joined your
group?