“Measurement” has certainly experienced a rise in popularity and relevance in the event and experience space. There is increased focus, activity, and even funding to capture the opinions of attendees, understand the results of the event, and even (in a few cases) the return against the objectives. And when the objective is sales, a true ROI can be calculated, showing the financial returns for the economic investment and allowing comparison to other tactics. Organizations are actually (read: finally) making portfolio and experience decisions based on measurement rather than on assumptions, hopes, and sentiments.

This second generation of measurement (the first having been almost nothing, or driven by emotion and recognition of the hard work rather than actual event results) begins to align to event objectives and business impact in positive ways. It is not easy to move from counting metrics to applying insight, and much credit needs to go to those willing to push through.

So what does Measurement 3.0 look like?

Measurement 3.0 will retain much of 2.0 including the triple focus of efficiency (diagnosing how we did), effectiveness (what were the results) and value (return on objective or investment). But these focuses will move beyond the lint in the belly-button to incorporate the audience objectives more squarely in the center, and align to the corporate objectives in financial as well as satisfaction terms.

There are some lessons (good and bad) that can be had by looking at other environments and tactics where measurement and feedback occur. Key elements of Measurement 3.0 are:

Actionable Measurement | The “So what?” syndrome has started to influence what is measured. If the reaction to a measurement is “So what? What can I do about that?” then it is likely that either the measurement should be adjusted, or the item should not be measured at all.

One on-campus event showed a relationship between overall satisfaction and the weather at the event. Changing the weather to increase satisfaction sounds appealing, but the weather cannot be controlled. The insight is not meaningless, but it is not truly actionable.

When it comes to the diagnostics (“how did we do?”) of an event, the time between gathering an insight and taking action is shortening. Recognizing a less-than-satisfying experience closer to when it occurs provides the opportunity to correct it, rather than just acknowledge it. Is it better to know that there is a problem at check-in during the event, or after, via a survey? Adjustments could, and should, be made on site that improve the experience immediately.

Fortunately, social media provides a general sense of how things are going, and other tactics (like instant feedback at the check-in counters, as is done at some international customs counters) can provide real-time, actionable feedback. This also gets closer to matching the solution to the attendee having the issue, rather than just the very important, but still more generic, “we’ll fix that for next time”.

The response to the feedback is just as important. It must be genuine, relevant, and appropriate. One lesson on how NOT to do this comes from a major restaurant chain. When the appetizers arrived after a long wait (and after the main course), the manager, recognizing the problem, came over to the table to apologize. She offered a coupon for a free appetizer on the party’s next visit to the restaurant.

A more genuine, relevant, and appropriate response might have been to subtract the cost of one of the appetizers from the bill for the visit where the problem occurred. Instead, the solution was turned into a future sales opportunity but the customer left having had an unresolved negative experience. When they did use the coupon, they recalled the negative experience that resulted in receiving it.

On the other end of the spectrum – “how effective were we?” – more data might be needed to “action” the data from the event. What happened to the leads captured? How can, or did, we action the data provided on the new product launch?

One event, working closely with the regional sales team, responded via tele-teams to all leads within 30 days, accelerating the pipeline over 900% from prior years.

Lesson: For diagnostics, shorten the time between the activity and the feedback allowing for corrections when issues arise. In this time of instant everything, fixing it next time is later than it needs to be. For effectiveness, look to align and accelerate the “downstream” activities.

Context | At its simplest, data without context is just words or numbers. A “4.3” means what? However “4.3 for transportation, on a scale of 5, with 5 being the most satisfied, and a target of 4.5” brings the context needed to turn data into information. Yet often the context of targets, comparison, segmentation, or scale is missing.

Adding context often means more data is needed – historical, competitive, or target-based. At times, related data not directly resulting from the experience can bring context. For example, $6.81 is just a data point; we do not know the target, scale, segmentation, or, in this case, its relationship to the objectives of the event.

To some, Maple Syrup is beautiful. To others it is valuable, and to the Canadian government it has strategic value. In fact, as the producer of more than 70% of the world’s Maple Syrup, Canada maintains a strategic reserve of Maple Syrup much as the US does oil.

To Canada, the value of Maple Syrup is much more than just its market value. Its impact on the larger economy, jobs, and the need to maintain quality levels elevate this commodity to unusual heights.

At times, events are treated as nothing more than a commodity with endless supply and little “deep” value. We hear, “We have to be there, our competitors are.”, “Wall Street would be spooked if we cancelled that annual event.”, or “We’ll run it as break-even so there is no cost to the company.” as justification for staging an event. Like so much Maple Syrup running down the side of the tree.

In many ways these justifications (which come from the people or organization putting on the event) are cop-outs for not understanding the strategic value of the experience – the unemotional, non-job-protecting, objective-driven, and audience-centric value that the experience should provide.

This void of focus (at best) and understanding (at worst) of the strategic value is reinforced by the “what a great show” emails that go out even before the load-out begins, much less after the post-event survey has been conducted. The strategic value can’t be seen or proven that quickly.

Events that are in the business of being an event – where the event is designed to make money first and foremost – are crystal clear as to the value they are providing for their attendees and stakeholders. Break-even is a failure. These events are in the business of events, the event is their product.

For everyone else, attending a 3rd party conference or hosting their own experience, the experience must be viewed first and foremost as a business event – aligned to (and measurable against) corporate and/or customer objectives.

As with context, strategic value requires more data points to move from the optical success to the true value. These data points (pipeline captured, moved, or closed; changes in belief or understanding; even social media sentiment) take time to surface and be captured.

Lesson: Break-even events are bad for those in the business of events, and show a lack of understanding or focus on the strategic value for those staging business events. Understand the strategic value of the event from an audience-centric perspective and measure success against that.

The Flash Mob that was denied a permit for the night of the celebration did in fact appear on site, at the registration counters on the first day. While slightly disruptive, the video posted on YouTube has had (to date) under 150 views and it is hard to determine the brand behind the mob, or the message.

WHAT FOLLOWS IS A POST FROM A FRIEND AND COLLEAGUE WHO HAS BEEN A THOUGHT LEADER IN THE AREA OF MEASUREMENT FOR MANY YEARS. IT RELATES TO THE NEW NORM OF EQUALITY AND USER GENERATED CONTENT.

_______________________

The history of the event industry can be characterized as an unending search for the next big “WOW”.

Today’s corporate events and conferences are filled with the best ideas and technology from television, entertainment and social media. They are complex and expensive undertakings requiring large internal teams to develop and support the content, a large portion of the sales force to host the audience, and armies of specialized freelancers to execute the logistics.

Often corporate events cost more than a Super Bowl campaign, which raises the question of why measuring the business impact of an event has never been an integral part of these complex undertakings.

Dynamics Driving Corporate Event Measurement

We believe that a sea change in corporate event measurement is underway, driven by two very different forces.

The first is obvious: economics. CMOs in every industry are under increasing pressure to demonstrate a return from every line item in their budgets. For the first time, innovative companies are conducting market research to determine how effective events are at influencing brand perception, accelerating pipeline, and ensuring customer loyalty through education.

The second dynamic is that customers are now making enterprise level purchase decisions based on their own independent online research. Traditional marketing departments have lost control of the dialogue, and are no longer the only source of product information. No one knows where it goes from here.

Development of The AIR Score

What is needed is a way for event marketers to identify the issues most likely to garner online commentary from their attendees. Working with our client Scott Schenker, Vice President, SAP we developed a technique called the AIR Score, short for Audience Impact Rating.

The genesis of the AIR Score was the realization that the two most commonly used reporting conventions, “Top Box” and “Averaging”, are both designed to present data in a way that all but ignores those most likely to be part of an online discussion.

The Pitfalls of Top Box Scoring

The “Top Box” system adds the percentage of responses in scoring boxes 4 and 5, and reports the total as the result of the question.

This yields sentences like “80% of the respondents found the xyz aspect of the event to be somewhat or extremely valuable.”

This approach has two shortcomings:

1/ Top Box scoring paints an unduly rosy picture of the results.

“Top Box” scoring combines the ‘5’ rankings, which indicate that the respondent is “extremely” positive, with the ‘4’ rankings, which indicate that the respondent is politely noncommittal – the “somewhat” 4s.

The problem is clear: a “Top Box” score of 80% can be derived in many different ways, which in no way can be considered equal.

2/ Top Box scores provide no insight into what is going on in the other three boxes.

Yes, a veteran executive or manager with the time to read through the data should pick up these distinctions. But they are not readily apparent in the reporting that most people rely on to make decisions.
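Both shortcomings can be made concrete with a small sketch (Python, with invented response counts): two audiences that feel very differently about an event collapse to the identical Top Box number.

```python
# Two hypothetical response distributions (counts per Likert box 1..5)
# that produce the identical "Top Box" score of 80%.
dist_a = {1: 0, 2: 0, 3: 20, 4: 10, 5: 70}   # mostly "extremely valuable" 5s
dist_b = {1: 10, 2: 10, 3: 0, 4: 70, 5: 10}  # mostly noncommittal 4s, plus unhappy 1s and 2s

def top_box(dist):
    """Percentage of responses falling in boxes 4 and 5."""
    total = sum(dist.values())
    return 100 * (dist[4] + dist[5]) / total

print(top_box(dist_a))  # 80.0
print(top_box(dist_b))  # 80.0
```

The first audience is overwhelmingly enthusiastic; the second is mostly lukewarm and includes 20% unhappy respondents, yet both report as “80% found it somewhat or extremely valuable.”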

The Pitfalls Of Averaging

As the name implies, averaging focuses attention on the middle, not on what is going on at the fringes…

While “Averaging” is more responsive to the audience than “Top Box”, by design it mutes (damps) the extremes – the respondents we are most interested in.
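The muting effect can be sketched the same way (Python, invented counts): a polarized audience and a merely lukewarm one can land on the same weighted average.

```python
def weighted_average(dist):
    """Mean Likert score: each box value weighted by its response count."""
    total = sum(dist.values())
    return sum(score * count for score, count in dist.items()) / total

# A polarized audience (strong Detractors AND strong Promoters)...
polarized = {1: 20, 2: 0, 3: 0, 4: 0, 5: 80}
# ...and a lukewarm one with no unhappy respondents at all.
lukewarm = {1: 0, 2: 0, 3: 20, 4: 40, 5: 40}

print(weighted_average(polarized))  # 4.2
print(weighted_average(lukewarm))   # 4.2
```

Both distributions average 4.2 – comfortably “north of 3.5” – even though one audience sent 20% of its attendees home unhappy.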

Consider a case where the AIR score shifts 20 points, moving from Good to Poor and clearly signaling an increasing number of Detractors, while the weighted average shows only a subtler downward trend, within a range (north of 3.5) that many companies consider acceptable. This is an important distinction.

What An AIR Score Does

The AIR Score was developed to provide event sponsors and managers with a metric that enables them to quickly identify the issues most likely to influence the larger universe of clients and prospects post-event. The AIR Score is calculated using the data from a Likert scale response.

AIR categorizes the survey respondents into three segments.

The Promoters are enthusiastic about the item in question.

The Neutral group is neither unhappy nor enthusiastic.

The Detractor group is negative and unhappy.

5) Extremely Valuable | Promoters
4) Somewhat Valuable | Neutral
3) Neutral | Neutral
2) Not Very Valuable | Detractors
1) Not At All Valuable | Detractors

Our hypothesis is that the Promoters and Detractors are much more likely to share their opinions than the Neutrals.

The AIR Score reports the relationship of Promoters to Detractors among all scores as a number between 0 and 100, where 100 means all respondents are Promoters.

Though they are based on the same data, neither “Top Box” nor “Average” explicitly reveals this relationship.

In effect, this is grading on a curve that is biased so that a response of ‘somewhat valuable’ has the same value as a polite ‘neutral’.

Applying the AIR Score

The AIR Score takes the entire range of scores (all responses) into account (i.e., it is normalized).
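The authors share the actual formula on request, so the sketch below (Python) uses an assumed normalization, not the published math: the Promoter/Detractor balance mapped onto the stated 0-to-100 scale, with box 5 as Promoter, boxes 4 and 3 as Neutral, and boxes 2 and 1 as Detractor.

```python
def air_score(dist):
    """Assumed AIR-style normalization (NOT the published formula):
    map the Promoter/Detractor balance onto 0..100, where 100 means
    every respondent is a Promoter and 0 means every one is a Detractor.
    Segments: 5 -> Promoter; 4 and 3 -> Neutral; 2 and 1 -> Detractor."""
    total = sum(dist.values())
    promoters = dist[5]
    detractors = dist[1] + dist[2]
    return 50 + 50 * (promoters - detractors) / total

# Two hypothetical distributions with the SAME weighted average (4.2)...
same_avg_a = {1: 20, 2: 0, 3: 0, 4: 0, 5: 80}   # polarized: 20% Detractors
same_avg_b = {1: 0, 2: 0, 3: 20, 4: 40, 5: 40}  # lukewarm: no Detractors

print(air_score(same_avg_a))  # 80.0
print(air_score(same_avg_b))  # 70.0
```

Under this assumed normalization, an all-Promoter audience scores 100 and an all-Detractor audience scores 0, and the two distributions that averaging treats as identical are pulled apart by their Detractor counts.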

We, and most of our clients, deem an event to be successful when significantly more attendees go home as Promoters than Detractors. We developed the following scale to aid in interpretation of the scores.

Because the AIR Score reports the results as a single number, it is a useful tool for comparing scores from different questions, and even different events. It can be applied after the fact to any historical Likert scale data; and can be used to compare data gathered using unbalanced scales with data collected using balanced scales.

While for now, marketers sponsoring virtual events seem happy to count ‘clicks’, ‘likes’, and ‘tweets’, we are already engaging in discussions about how to connect the participant experiences. The AIR Score will be an important bridge.

We are happy to share the “math”. We invite you to contact us if you have any questions, or would like to have the formula to apply in your own work.