Socrates is noted as saying, “The beginning of wisdom is the definition of terms.” This premise should dominate how mission-oriented organizations communicate with the rest of the world, but it’s arguably even more important to clearly define terms internally—especially when it comes to operating metrics and outcome data. Too often, different departments use the same metric terms but attach different meanings to them.

The problem comes to a head when organizations need to do inter-departmental work or analyze trends over time. If the definitions of metrics are inconsistent, it is difficult to isolate data points during analyses or properly capture them in the field. This results in information gaps and inaccurate data, and has broad-ranging implications for how organizations build and enhance their operational infrastructure. There are instances where re-establishing the meaning of a key metric requires significant change. For example, it may require altering the technology stack so that a nonprofit can track a client journey through the organization, or it may require reconfiguring on-boarding and training for field staff. Metrics are not just numbers on a spreadsheet—they represent actual trenchwork and exist to efficiently communicate about and validate an organization’s progress toward its mission.

For many organizations, even metrics as simple as “clients served” or “cost per client” can become a morass of definitions, depending on how fundraising, finance, executive, or field teams use them in the context of their own work. If the fundraising team defines “clients served” based on the moment when a unique client accesses any service, and the field team defines it based on when a client accesses a unique service, and they come together to evaluate how much it costs to serve a client, this difference in definitions could lead to under-estimating the cost per client and over-estimating the number of clients impacted.
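To make the discrepancy concrete, here is a minimal sketch, in Python, of how the two definitions diverge even when both teams start from the same service-access log. The client IDs, services, and cost figure are invented for illustration:

```python
from datetime import date

# Hypothetical service-access log: (client_id, service, date) per access.
access_log = [
    ("c1", "meals",      date(2024, 1, 5)),
    ("c1", "counseling", date(2024, 1, 5)),
    ("c2", "meals",      date(2024, 1, 6)),
    ("c2", "meals",      date(2024, 1, 7)),  # repeat access, same service
]

# Fundraising's definition: a client counts once, the first time
# they access any service -> count unique client IDs.
clients_served_fundraising = len({client for client, _, _ in access_log})

# Field team's definition: a client counts once per unique service
# they access -> count unique (client, service) pairs.
clients_served_field = len({(client, service) for client, service, _ in access_log})

total_program_cost = 9_000  # dollars, illustrative

print(clients_served_fundraising)  # 2 unique clients
print(clients_served_field)        # 3 client-service pairs
print(total_program_cost / clients_served_fundraising)  # $4,500 per client
print(total_program_cost / clients_served_field)        # $3,000 per client
```

From the same four log rows, one team reports two clients at $4,500 each and the other reports three at $3,000 each—a one-third swing in cost per client driven entirely by the definition.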

One thing that can help organizations preempt scenarios like these is to ask two sets of questions. Examples of tactical, first-order questions include:

How do we define when a client started and stopped receiving services?

What costs do we ascribe to a client? If we say “direct costs,” what do we mean?

By “client” do we mean “active client”? What do we mean by “active”? Are these “unique clients”? How can we be sure?

If organizations skip baseline questions like these and fail to clearly define their metrics, their foundational language—the nouns and verbs of mission health—will suffer in the longer term. Not defining “direct costs” broadly enough, for example, may mean missing the fact that it’s much more expensive to deliver services via the current distribution model than originally thought.

Once everyone agrees to the baseline, organizations can address strategic, second-order questions, such as:

How do other organizations in our space measure their progress? How do we compare?

Does the way we define our core metrics accurately describe what we want to achieve?

What are the current trends of this metric over time telling us about our effectiveness? How can we improve?

When two of our most important metrics are in tension, how do we prioritize between them as we expand? What is the opportunity cost?

Asking these kinds of questions can help identify problems with existing strategies. If, for example, an organization’s decision to expand a fundraising campaign into three new markets is predicated on a certain cost per client, but in fact requires more capital, it may need to ask whether the current distribution model is as scalable as it needs to be to meet the expected market need.

Organizations that are continually putting out fires or growing quickly often push the critical work of defining metrics to the margins. The pressure managers feel to demonstrate their numbers are improving can also contribute to the problem, but ultimately it comes down to senior leadership failing to ensure that everyone is at the table when initially defining terms. Not doing so early on results in a more intractable problem later. All stakeholders should weigh in so that the organization can develop a comprehensive view of metrics, an appreciation for the cross-departmental impact those metrics have, and an understanding of how different definitions can affect different departments.

In addition to elevating strategic conversations, setting transparent standards across the organization creates additional accountability when setting team goals; establishes a united front when teams are speaking with other service providers, donors, or public agencies; and directs funds to the most effective parts of the organization.

Here are five best practices for organizations looking to maximize the clarity and completeness of their data:

Take time to define the terms that compose all metrics, no matter how basic. What do we mean by “revenue”? What do we mean by “attendance rate”? Similar to the example above, one department may double-count the number of individuals it serves—for example, counting an individual who receives three meals in the same day as three separate clients—while another counts that person as one unique client. Bring in all department heads to ensure that you take into account various perspectives.
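The meal-counting pitfall above can be sketched in a few lines of Python. The person IDs and dates are invented; the point is that the same records support two very different headline numbers, so the organization must decide which one each metric means:

```python
# Hypothetical meal-distribution records: one (person_id, date) row per meal.
records = [
    ("p1", "2024-03-01"),
    ("p1", "2024-03-01"),
    ("p1", "2024-03-01"),  # p1 received three meals in one day
    ("p2", "2024-03-01"),
]

# Counting every row treats each meal as a separate "client served."
meals_served = len(records)

# Deduplicating on person_id counts unique individuals.
unique_clients = len({person for person, _ in records})

print(meals_served)    # 4
print(unique_clients)  # 2
```

Both numbers are legitimate metrics—“meals served” and “unique clients served”—but only an explicit definition keeps departments from using one label for both.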

Leverage credible third-party resources. The Global Impact Investing Network (GIIN) established the IRIS database, which defines generally accepted impact investing performance metrics. The GIIN has done substantial work across verticals to offer a robust repository of terms (such as “employee training costs”), and offers guidance on what to include and exclude. Taking all the financial, operational, and impact metrics you work with and comparing them with resources like IRIS is a good place to start.

Map how the organization acquires each data point and at what frequency. It is important to recognize the limitations of data gathering, whether internal or external, and ensure that you can secure credible information. It may be helpful to establish a timeline for tracking data acquisition. It is also important to know where that data lives and to restrict access to any confidential information—costs associated with program delivery may come from the accounting system, while client data comes from a customer relationship management system.
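One lightweight way to keep such a map is as a simple, version-controlled data dictionary. The sketch below, in Python, is purely illustrative—the metric names, source systems, and frequencies are assumptions, not a prescription—but it shows how a map like this also makes access restrictions explicit:

```python
# Illustrative data-source map: for each metric term, record where the
# underlying data lives, how often it is captured, and whether it is
# confidential. All entries here are hypothetical examples.
DATA_MAP = {
    "program_delivery_costs": {
        "source": "accounting system",
        "frequency": "monthly",
        "confidential": False,
    },
    "unique_clients_served": {
        "source": "CRM",
        "frequency": "daily",
        "confidential": True,  # contains client-level records
    },
}

def restricted_terms(data_map):
    """Return the metric terms whose underlying data needs access controls."""
    return sorted(term for term, meta in data_map.items() if meta["confidential"])

print(restricted_terms(DATA_MAP))  # ['unique_clients_served']
```

Even as a plain spreadsheet rather than code, the same three columns—source, frequency, confidentiality—cover the questions this practice raises.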

Incorporate vetted terms into the organization’s vernacular. The more an organization incorporates its clearly defined terms into things like on-boarding and training materials, team meetings, analyses of new market opportunities, and grant writing methodology, the more it re-affirms those terms’ meaning and importance.

Ensure that senior management is engaged. Executive directors, COOs, and CFOs can’t shy away from engaging with the process of defining or using the terms properly. True alignment and consideration of the organization’s standing according to the agreed-on metrics will improve the substance of strategic discussions about where the organization is headed.

Having a robust and clearly defined set of benchmarks not only establishes credibility, but also strengthens each organization’s growth and impact narratives—something that becomes more important as larger donors get involved or as organizations take on more complicated funding structures. Using the best practices above can help organizations establish analytical rigor; effectively capture, synthesize, and communicate performance; and engage stakeholders in a meaningful way.

Dave Policano (@dpgate06) is currently a consulting CFO providing financial leadership, business operations, and strategic advisory services to nonprofits, social businesses and for-profits. He has more than 10 years of experience in investing, advisory, and operating roles leading teams, implementing solutions and offering insights to help companies grow and innovate.

