The judges in the 2013 Silver Anvil competition were faced with a plethora of programs built on the latest and hottest tools and distribution channels. Beyond the fluff, we often found a spectacular lack of substance. That experience points to a compelling truth that runs through the heart of every winning Silver Anvil entry and can benefit all PR professionals: good research provides the foundation for smart strategic planning, brilliant creative and precise execution toward measurable objectives that matter.

The PR tool kit has expanded considerably over the past two decades of my judging Silver Anvil entries (done in years when Gable PR didn’t enter). But are we using the tools in an integrated and strategic fashion? Will the results drive anything meaningful? Are we just having fun playing with things that don’t really drive sales, help achieve marketing goals or turn around an image?

The annual competition can feel like the classic movie, Groundhog Day. The same fuzzy-edged little critters keep popping up each year and in every category (usually chirping about media hits). In reviewing results with other veteran judges from the Counselors Academy and College of Fellows after this year's session, I found a universal impression that some of the entrants hadn't read the rules or bothered to check out past winners on the PRSA website. The latter exercise would have saved several hundred of the 847 entrants from wasting their entry fees.

The judging criteria are straightforward: a maximum of 10 points in each of four categories (research, planning, implementation and evaluation), for 40 points total. Sadly, many entries didn't hit double digits.

Solid research to establish a baseline for measurement and evaluation (this can be both secondary and primary: polling; online surveys; crunching one year of social media data to find trends that could lead to a new position for a client; and use of psychographics, demographics and other findings that help with positioning and planning).

Setting measurable objectives (e.g. turning an image around from 3-to-1 against the company to 2-to-1 favorable within one year; successfully introducing a new family of mobile applications; building market awareness to X percent within six months; generating reviews in the top ten media; growing subscribers by Y percent within one year; or introducing one cause marketing program that adds another Z subscribers in one year and generates $X for the cause).

Implementing strategically through all channels that can help drive a result (print, broadcast, social media, local events, direct mail, contests, guerrilla marketing, promotions, conference programs, and cause marketing).

For evaluation, the best programs set measurable objectives in many categories. As noted last year, the top programs included achievements in: meetings and special events held; increased attendance; better product reviews; increased distribution; doubling social media likes and followers; winning design awards; expanding promotional program results by a certain percentage; improving share of voice; launching a cause marketing program that raised X dollars; doubling the number of analysts following the company; increasing stock volume; improving internal communications globally, as measured by continuous progress in online surveys among all employees on impressions of quality; using social media to drive more hits and qualified leads to the company website; reducing calls to the 800 number in favor of website conversations; and increasing sales and market share.

Always keep the results-oriented continuum in mind: great research drives new creativity and smart planning; the detailed planning across all channels helps set measurable objectives and guides precise implementation; and evaluation ties back into all your brilliant work in research and planning.

Ten Biggest Shortcomings

Poor or missing research (e.g. one entry noted that its research consisted of interviewing the client contacts; another cited research about consumer motivations in the executive summary but included nothing in the Research section to validate it; some didn't have a Research section at all)

Not setting measurable objectives

Setting objectives based solely on amount of media coverage

Setting vague objectives, such as building brand image, with no means of measurement (by contrast, the winners documented baseline research on consumer awareness, built their programs to drive awareness and then measured the change against specific metrics at the end)

Developing one-dimensional plans, such as just having a social media strategy

Not outlining the rationale behind strategies and plans (e.g. one judge called this “doing a lot of stuff because the tools were exciting”)

Relying on huge budgets and spectacular events to carry the day (fellow judges shared background on several entries where the scope of the program was impressive but the results weren’t)

Not having a precise plan for implementation

Providing numbers on media hits, Twitter followers and other metrics but without tying them back into the research and planning

And the number one shortcoming: not turning in an entry that covered each of the four areas being judged: research, planning, implementation and evaluation

Beyond the transgressions, there was agreement that the PR profession is continuing to raise the overall quality of its programs. We are being given more opportunities to conceive, create and implement complex, strategic programs that are beyond the purview of most marketing, advertising and other consulting companies. We are becoming more trusted advisors in the C-suite and are being included in company-wide, long-range strategic planning. But the bar needs to be raised another notch. These ideas may help.