Manage What You Measure

The key reason we marketers spend so much time justifying our budgets and jobs is that we focus more on trying to look good than on doing good.

We offer hundreds of excuses, and very few good reasons, for why marketing cannot measure its effectiveness. Much of this abdication of responsibility is underpinned by a stark realization: marketing may not be relevant to the business. The result is too much hand-waving, defensive posturing, and opinion, and too few cold, hard facts and data.

How we measure our performance, and the key performance indicators (KPIs) we use to do so, will shed light on this question, provide us with a baseline, and, ultimately, determine whether we are, in fact, relevant to the business and to our shareholders.

We have also been reprogramming our attitudes and behaviors over the past few months to get ourselves to the right size, the right structure, and the right focus. Now the time is right to begin operationalizing and managing against our KPIs, as opposed to simply collecting and monitoring them. We just completed our first marketing operations review, which will become a regular monthly cadence throughout the calendar year. This metrics-intensive session stimulated plenty of qualitative conversation about our work and our progress toward attaining our strategic marketing objectives.

Bias for action

As we assessed our recent marketing programs, we realized that our approach was being governed by a “Ready, aim, aim, aim, aim, aim, aim, aim—fire!” mind-set. We invested too much time massaging the details of each new program. Having a bias for action means embracing a “Ready, aim, fire!” mind-set instead.

Rather than seeking perfection on the front end, I want us to adjust on the back end of a project, after we've seen what worked and what didn't work. This approach will help us become less activity driven (which, as I said last month, is often motivated by a desire to look good by looking busy) and more outcome driven. The outcomes of a new initiative will give us the insights we need to make our next initiative much more effective.

For example, we're in the process of developing methods to increase the utilization of our regional solution centers. We completed an initial plan to roll out a nationwide campaign to communicate the value proposition to our customers and to invite them to our solution centers to see our products, solutions, and offerings firsthand.

However, given our new “bias for action” mind-set, we decided it would be more effective to change the initial plan. Rather than rolling out a nationwide campaign, the team responsible for the initiative narrowed the program's scope to one large metropolitan area. As a result, the team was able to put the program in place in two months rather than six.

We have yet to see the results, but we expect the pilot to deliver numerous benefits. It will let us test our messaging and our value proposition while ensuring that we give our customers a great brand experience. If we can check most of those boxes, we'll have a recipe for scaling to other large cities while minimizing implementation risk.

This shift away from excessive aiming and over-preparation can foster frustration. That's natural; people worry about being wrong and about an initiative failing. “Fail” is an alien word for most of us in business. The great inventors of our time, on the other hand, develop a hypothesis, which they then either validate or disprove. Great inventors typically fail far more often than they succeed, but many businesspeople have yet to accept this approach.

Clearly, we're not looking to run dozens of marketing experiments; too much bias for action results in borderline anarchy. Our experiments need to be part of a bigger picture, conducted within a framework with clear parameters. Part of the cultural change I'm working with my senior team to bring about here is an understanding of the value of “failing fast,” if you will. We will use our failures to take follow-up actions that, ultimately, produce better outcomes.

We're making progress. I've seen my team challenge their respective teams regarding implementation time frames. They're also emphasizing the importance of putting “experiments” in place.

Our first operations meeting

We had our first marketing operations meeting this past month. The purpose of these sessions is to introduce standard work across the organization: a standard agenda, a standard reporting process, standard action plans, and standard financial reporting. All of my direct reports attended, along with some extended team members and our HR and finance business partners, who play a key role in running our business. The session ran four and a half hours. (If we can reduce the length of these meetings, fine; however, I'm OK with investing this time if it makes us more effective as a team.)

We banned Microsoft PowerPoint from these sessions because we prefer our team to remain actively engaged in the discussions. Those conversations showed how much preparation the team had put into truly understanding their respective businesses. We also used the first meeting to ensure that we, as a collective team, have a consistent view of our overall business. To that end, we tackled the following questions:

Do we know and understand the key business drivers?

Are we managing our business by KPIs versus merely monitoring?

Do we have a handle on our financials?

Are we executing our action plans?

Are we focused on the right areas?

Our standard agenda, which will remain the same for each subsequent meeting, begins with the business and an assessment of how we're doing in each region. We review our KPIs by region and function (roughly five key metrics per region and approximately 10 for our Web team), as well as our action plans and financials. Three of my direct reports manage a set of KPIs, and these indicators cascade down into their respective teams. We examine actual performance against what we planned during the prior month and then discuss the variances. Specifically, we pressure-test why we have a gap, whether we understand its root cause, and what our options are for eliminating it.
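The variance review described above boils down to simple arithmetic applied consistently: compare each KPI's actual value against plan, and flag any gap wide enough to warrant a root-cause discussion. The sketch below illustrates the idea; the KPI names, figures, and 5% tolerance are illustrative assumptions, not our actual metrics.

```python
# Illustrative sketch of a monthly KPI variance check. All KPI names,
# planned/actual figures, and the 5% tolerance are hypothetical examples.

def kpi_variances(plan, actual, tolerance=0.05):
    """Return {kpi: (variance, flagged)}, where variance = (actual - plan) / plan
    and flagged is True when the gap exceeds the tolerance in either direction."""
    report = {}
    for kpi, planned in plan.items():
        variance = (actual[kpi] - planned) / planned
        report[kpi] = (round(variance, 3), abs(variance) > tolerance)
    return report

plan = {"qualified_leads": 1200, "solution_center_visits": 300, "web_conversions": 450}
actual = {"qualified_leads": 1120, "solution_center_visits": 310, "web_conversions": 495}

report = kpi_variances(plan, actual)
for kpi, (variance, flagged) in report.items():
    status = "investigate root cause" if flagged else "on track"
    print(f"{kpi}: {variance:+.1%} vs. plan ({status})")
```

Flagging overperformance as well as underperformance mirrors the review's intent: a gap in either direction deserves a conversation about cause, not just a celebration or an excuse.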

By managing these metrics, we will align our entire marketing operation with the company's strategic objectives. The meetings help us candidly and qualitatively discuss (i.e., “socialize”) what we're aiming for in terms of results.

Although we focus on a relatively small number of key indicators, the richness of the conversation and insight we gain is invaluable. We're in the early stages of measuring our performance, so the first few operations reviews will focus on the quality and collection of our data. As my direct reports and their teams gather and analyze this data, we all become far more intimately knowledgeable about our business, and about how we can influence it.

Ample upside

Our team's conversation about our macro, regional, and country business, as well as marketing's contribution to business performance, resulted in the stark realization that we have much to do.

That said, we also realized that we have an opportunity for significant improvement. In all, our initial operations review succeeded in bringing the team together for valuable discussions. It also served as a contextual platform for our HR and finance business partners, who are a part of our extended team.

The feedback, both directly after the meeting and a few days later, was positive. I heard comments that our discussions provided “holistic insights” into our business, its challenges, and its opportunities. My team also expressed appreciation for our data transparency and for open conversation that attacked processes, not people.

We move forward knowing that we will review and adjust, because we certainly cannot be perfect the first time around. In doing so, we'll leverage our experience in pursuit of the ultimate KPI: profitable growth.