Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012 we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)3 nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibited what we feel are promising capacities and behaviors for engaging meaningfully in evaluation.

2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.

4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.

6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples by Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi,
I could not find in the summary above or the online report a description of how the study was fielded. In particular, I’m curious about how organizations were selected, the response rate, and why the sample for 2012 was much smaller than for 2010.
Thanks!

Hi – thanks for this great post, and apologies for the late comment. In point 1, you mention promising capacities and behaviours to meaningfully engage in evaluation. I wonder if you can share what you consider those promising capacities and behaviours to be? Given that this is late, I’d be happy to receive a reply directly to my email (no twitter account) – marysue.smiaroski@oxfaminternational.org Thanks.
Mary Sue

Hi Mary Sue, the promising capacities and behaviors we identify are drawn from our survey data: 1) the nonprofit reported evaluating its work; 2) the organization self-reported having medium to high internal evaluation capacity; 3) the organization had a logic model (or similar document); and 4) the organization had updated the logic model (or similar document) within the past year.

All told, that amounted to 28% of respondents from nonprofit organizations who participated in our study. (See page 3 of the report for a little more detail.)
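For readers curious how a screen like this could be applied to survey records, here is a minimal sketch of combining the four criteria. This is not Innovation Network's actual analysis code, and the field names and toy data are invented for illustration:

```python
# Hypothetical sketch of the four-criteria screen described above.
# Field names are invented for illustration, not the actual survey variables.

def has_promising_capacity(resp):
    """Return True if a respondent meets all four criteria."""
    return (
        resp["evaluated_work"]                               # 1) evaluated its work
        and resp["internal_capacity"] in ("medium", "high")  # 2) self-reported medium/high capacity
        and resp["has_logic_model"]                          # 3) has a logic model (or similar)
        and resp["logic_model_updated_past_year"]            # 4) updated it within the past year
    )

# Two made-up respondents: one meets all four criteria, one does not.
respondents = [
    {"evaluated_work": True, "internal_capacity": "high",
     "has_logic_model": True, "logic_model_updated_past_year": True},
    {"evaluated_work": True, "internal_capacity": "low",
     "has_logic_model": True, "logic_model_updated_past_year": False},
]

share = sum(map(has_promising_capacity, respondents)) / len(respondents)
print(f"{share:.0%} meet all four criteria")  # 50% for this toy sample
```

The point of requiring all four conditions together is that each one alone is a weak signal; it is the combination that suggests an organization is positioned to engage in evaluation meaningfully.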

Good question! Here’s some more info about that bullet point: We gave the nonprofits a list of 10 organizational tasks and asked them to rank these tasks in order of importance. The exact wording was, “Please rank in order of importance (“1” being most important and “10” being the least important) the following list of internal priorities that competed for resources in your organization last year.” The tasks included: communications, evaluation, financial management, fundraising, governance, human resources, information technology, research, staff development, and strategic planning. These are tasks that most nonprofits are engaged in, to some degree.

We calculated average rankings for each of the 10 tasks. Fundraising, financial management, and communications were ranked #1, #2, and #3, on average, while governance, evaluation, and research were ranked #8, #9, and #10, on average. (Strategic planning, staff development, HR, and IT came out in the middle with average rankings of #4, #5, #6, and #7, respectively.) This is the same ranking that we found in 2010. More details are available on page 12 of the SOE 2012 report.
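The averaging step described above is simple to reproduce. Here is a small sketch, assuming each response is a dict mapping task to its rank from 1 (most important) to 10 (least important); the sample data are invented, not the actual State of Evaluation 2012 responses:

```python
from statistics import mean

# Toy rank-order responses (invented for illustration).
responses = [
    {"fundraising": 1, "evaluation": 9, "research": 10},
    {"fundraising": 2, "evaluation": 8, "research": 9},
    {"fundraising": 1, "evaluation": 9, "research": 10},
]

tasks = responses[0].keys()
avg_rank = {t: mean(r[t] for r in responses) for t in tasks}

# Sort tasks from most to least important (lower average rank = higher priority).
for task, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{task}: {rank:.1f}")
```

One design note: averaging ordinal ranks is a common shorthand, but it treats the gap between rank 1 and 2 the same as between 9 and 10; when ties or clustered rankings matter, a method such as Borda counts or median ranks may be more appropriate.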

Evaluation sure has to “compete” with a lot of other internal priorities! Re: insights into why this is the case… As an evaluator, I’m certainly biased and would love to see evaluation move up the list a little, but I can understand how basics like fundraising, IT, and HR would come before evaluation in most organizations. If you don’t have money to pay staff, computers for them to use, and at least a basic HR infrastructure, it’s hard to think about evaluation.