How do programs know what they should be doing—which target populations require services, the types of services programs should provide, the amounts of services, which kinds of services will be most effective, and so on? Needs assessments are the best way to determine the needs of individuals, communities, and other populations. A needs assessment is a systematic process for identifying and determining such needs. Like program evaluations, needs assessments draw on a range of social science methods, from surveys and observations to focus groups and individual interviews.

Needs assessments assume a clear definition of "a need." As James Altschuld and Ryan Watkins point out in New Directions for Evaluation, No. 144 (Winter 2014), "A need, in the simplest sense, is a measurable gap between two conditions: what currently is, and what should be…. This requires ascertaining what the circumstances are at a point in time, what is desired in the future, and a comparison of the two." Needs assessments don't focus exclusively on what is and what should be; they also gather and synthesize data about how to narrow the gap between the existing state and the desired state. Needs assessments also prioritize needs so that users of the assessment can address specified needs in a reasonable order and devote appropriate resources to meeting identified needs.

By gathering data from a range of stakeholders, needs assessments can determine the best means to achieve the desired results. To be effective, however, needs assessments must not focus simply on deficits in individuals and communities; they must also explore existing strengths, capacities, and assets. Too narrow a focus on "what's missing" can blind researchers and program designers to the existing assets on which effective programming can be built. Effective needs assessments, therefore, ask questions about: 1) ongoing needs, 2) current strengths/assets/capacities, and 3) desired states.

Needs assessments may differ in their design, but regardless of design, most needs assessments follow these phases:

1) Explore and gather data about the current condition/state of affairs (including existing assets).

2) Explore/identify the desired or optimal condition/state of affairs.

3) Analyze data to understand the difference or "gap" between the current condition and the desired condition.

4) Prioritize identified needs and "gaps."

5) With needs (and assets) in mind, design the program to address (diminish or eliminate) the gap between the existing state and the desired state.
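The gap analysis at the heart of these phases can be sketched in a few lines of code. This is only an illustration of the "need = desired state minus current state" logic, followed by prioritization by gap size; the indicator names and scores below are hypothetical, not drawn from any actual assessment.

```python
# Minimal sketch of needs-assessment gap analysis and prioritization.
# All indicator names and scores are hypothetical illustrations.

def assess_needs(current, desired):
    """Return (area, gap) pairs where desired exceeds current, largest gap first."""
    gaps = {area: desired[area] - current[area] for area in desired}
    # A "need" exists only where the desired state exceeds the current state.
    needs = {area: gap for area, gap in gaps.items() if gap > 0}
    # Prioritize: biggest gaps come first.
    return sorted(needs.items(), key=lambda item: item[1], reverse=True)

# Hypothetical community indicators, scored 0-100.
current_state = {"literacy": 62, "job_training": 40, "health_access": 75}
desired_state = {"literacy": 85, "job_training": 80, "health_access": 80}

for area, gap in assess_needs(current_state, desired_state):
    print(f"{area}: gap of {gap}")
```

In practice, of course, the "scores" come from surveys, interviews, and other data sources, and prioritization weighs feasibility and resources as well as gap size; the sketch only shows the comparison-and-ranking structure of the phases above.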

When conducted in a timely and thoughtful way, needs assessments can be of substantial utility in helping programs to effectively deliver services to those who most need them.

Program evaluations are conducted for a variety of reasons. Purposes can range from mechanical compliance with a funder's reporting requirements, to the genuine desire by program managers and stakeholders to learn "Are we making a difference?" and, if so, "What kind of difference are we making?" The different purposes of, and motivations for, conducting evaluations determine the different types of evaluations. Below, I briefly discuss the variety of evaluation types.

Formative, Summative, Process, Impact and Outcome Evaluations

Formative evaluations are evaluations whose primary purpose is to gather information that can be used to improve or strengthen the implementation of a program. Formative evaluations typically are conducted in the early- to mid-period of a program's implementation.

Summative evaluations are conducted near, or at, the end of a program or program cycle, and are intended to show whether or not the program has achieved its intended outcomes (i.e., intended effects on individuals, organizations, or communities) and to indicate the ultimate value, merit, and worth of the program. Summative evaluations seek to determine whether the program should be continued, replicated, or curtailed, whereas formative evaluations are intended to help program designers, managers, and implementers address challenges to the program's effectiveness.

Process evaluations, like formative evaluations, are conducted during the program's early and mid-cycle phases of implementation. Typically, process evaluations seek data with which to understand what's actually going on in a program (what the program actually is and does), and whether intended service recipients are receiving the services they need. Process evaluations are, as the name implies, about the processes involved in delivering the program.

Impact evaluations, sometimes called "outcome evaluations," gather and analyze data to show the ultimate, often broader and longer-lasting, effects of a program. An impact evaluation determines the causal effects of the program; this involves trying to measure whether the program has achieved its intended outcomes. The International Initiative for Impact Evaluation (3ie) defines rigorous impact evaluations as "analyses that measure the net change in outcomes for a particular group of people that can be attributed to a specific program using the best methodology available, feasible and appropriate to the evaluation question that is being investigated and to the specific context." Impact (and outcome) evaluations are primarily concerned with determining whether the effects of the program are the result of the program, or the result of some other extraneous factor(s). Ultimately, outcome evaluations seek to answer the questions, "What effect(s) did the program have on its participants (e.g., changes in knowledge, attitudes, behaviors, skills, practices)?" and "Were these effects the result of the program?"

Although the different types of evaluation described above differ in their intended purposes and times of implementation, it is important to keep in mind that every program evaluation should be guided by good evaluation research questions. (See our earlier post, Questions Before Methods.) Program evaluation, like any effective research project, depends upon asking important and insight-producing questions. Ultimately, the different types of evaluations discussed above support the general definition of program evaluation: "a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency."

Organizations are social entities that have a collective purpose and that interact with larger environments (the economy, society, other organizations, etc.). Nonprofit organizations are a sub-type of organization that use "surplus revenues" (i.e., revenues beyond those used for operation and sustenance) to achieve desirable social ends, rather than to produce profits or dividends. In essence, nonprofits use their financial resources (provided by individual donations, foundation and government grants, etc.) to improve the lives and conditions of community members. In the US, there is a range of nonprofit organizations, including hospitals, charities, educational organizations, social welfare organizations, foundations, and community organizations. According to the National Center for Charitable Statistics, there are approximately 1.5 million nonprofits in the US.

Organizations vs. Programs

Many nonprofit organizations mobilize resources in the form of organized programs that provide activities and products designed to improve the lives of program participants. Carter McNamara writes that "A program is a collection of resources in an organization and is geared to accomplish a certain goal or set of goals. Programs are one major aspect of the non-profit's structure. The typical non-profit organizational structure is built around programs, that is, the non-profit provides certain major services, each of which is usually formalized into a program." (http://literacy.kent.edu/Oasis/grants/overviewprogplan.html) In serving program participants, nonprofits strive to effectively and efficiently deploy program resources, including knowledge, activities, and materials, to positively affect the lives of those they serve.

Program Evaluation Meets Organization Development

In order to assess the effectiveness and efficiency of programs, nonprofits will often conduct program evaluations. Program evaluations are customarily guided by a set of evaluation research questions, e.g., What are the effects of the program on participants? What challenges did participants encounter while in the program? Did the program make a difference in the lives of those it was intended to serve? Did the program cause the observed changes in program participants? (For more examples of evaluation questions, see our previous posts "Questions Before Methods" and "Approaching an Evaluation: 10 Issues to Consider.") Although program evaluations are customarily aimed at gathering and analyzing data about discrete programs, the most useful evaluations often collect, synthesize, and report information that can be useful in improving the broader operation and health of the organization that hosts the program. Program evaluation thus can contribute to organization development, "the deliberately planned, organization-wide effort to increase an organization's effectiveness and/or efficiency and/or to enable the organization to achieve its strategic goals." (Wikipedia)

In fact, findings from program evaluations often have important implications for the development and sustainability of the entire host organization. This is especially true in the case of small-to-medium sized nonprofit organizations, whose core programs often comprise the bulk of the organization’s structure and raison d’être. Consequently, information from program evaluations—especially formative evaluations which focus on strengthening program effectiveness—can be used to clarify the organization’s goals and objectives, to identify key organizational challenges and ways to address these, and to strengthen the overall effectiveness of the organization’s efforts. Additionally, program evaluations can offer an ideal opportunity for an organization to reflect on its practices and purposes, to rethink ways to achieve the organization’s mission, and to identify new data-based strategies for enhancing the organization’s long-term viability and well-being. Ultimately, program evaluation can, and in many cases should, be an integral component of organization development.

When discussing with clients potential sources of data about a program's operations and effects, I have often been told, "But we just have anecdotal evidence." It's as if anecdotal data don't count. Too often anecdotes are dismissed as unscientific and valueless, as if they are just stories. In point of fact, anecdotes (qualitative accounts, "word-based" data) can be a valuable source of information and offer powerful insights about how a program works and the effects it produces. When carefully collected and systematically analyzed, especially when combined with other sources of quantitative data, anecdotes can be a powerful "window" on a program.

In a recent blog post (see link below), the evaluator Michael Quinn Patton reflects on the value and utility of anecdotal information. Patton shows that, when collected in sufficient quantity, compared (or "triangulated") with other kinds of data, and systematically and sensibly analyzed, anecdotes can provide important information about the character and meaning of a given phenomenon. Furthermore, anecdotes are often the starting place for hypotheses and experiments that ultimately produce quantitative evidence of phenomena. William Trochim underscores the importance of word-based, qualitative data (of which anecdotes are a specific type) when he points out, "All quantitative data is based on qualitative judgment. Numbers in and of themselves can't be interpreted without understanding the assumptions which underlie them…" (David Foster Wallace made a similar point from an entirely different vantage point in Consider the Lobster: "You can't escape language. Language is everything and everywhere. It's what lets us have anything to do with one another," p. 70.)

Trochim goes on to say,

“All numerical information involves numerous judgments about what the number means. The bottom line here is that quantitative and qualitative data are, at some level, virtually inseparable. Neither exists in a vacuum or can be considered totally devoid of the other. To ask which is “better” or more “valid” or has greater “verisimilitude” or whatever ignores the intimate connection between them. To do good research we need to use both the qualitative and the quantitative.”

Patton reminds us of the importance of anecdotes when he quotes N. G. Carr, author of The Shallows: What the Internet Is Doing to Our Brains (New York: W. W. Norton, 2010):

“We live anecdotally, proceeding from birth to death through a series of incidents, but scientists can be quick to dismiss the value of anecdotes. “Anecdotal” has become something of a curse word, at least when applied to research and other explorations of the real. . . . The empirical, if it’s to provide anything like a full picture, needs to make room for both the statistical and the anecdotal.

The danger in scorning the anecdotal is that science gets too far removed from the actual experience of life, that it loses sight of the fact that mathematical averages and other such measures are always abstractions.”

I believe that it is important to use multiple kinds of information to understand what programs do and what their outcomes are. Quantitative data is essential for understanding abstract trends and for getting at the "larger picture." That said, it is nearly impossible to make sense of quantitative data without using language to reveal its assumptions, implications, explanations, and meaning. Anecdotal data, as one kind of qualitative data, is critical to effective program evaluation research.

Qualitative research interviews are a critical component of program evaluations. In-person and telephone interviews are especially valuable because they allow the evaluator to participate in direct conversations with program participants, program staff, community members, and other stakeholders. These conversations enable the evaluator to learn in a rich conversational venue about interviewees’ experiences, perspectives, attitudes, and knowledge. Unlike questionnaires and surveys, which typically require structured, categorical responses to standardized written questions so that data can be quantified, qualitative interviews allow for deeper probing of interviewees and the use of clarifying follow-up questions which can surface information that often remains unrevealed in survey/questionnaire formats.

Although research interviews are guided by a pre-determined, written protocol which contains guiding questions, excellent interviews require a nimble and improvisational interviewer who can thoughtfully and swiftly respond to interviewees’ observations and reflections. Qualitative research interviews also require that the interviewer be a skilled listener and thoughtful interpreter of verbally presented data. Interviewers must listen carefully both to the denotative narrative “text” of the interviewee, and to the connotative subtext (the implied intent, tacit sub-themes and connotations) that the interviewee presents.

The most productive qualitative interviews are those that approximate a good conversation. This requires the interviewer to establish a comfortable atmosphere; ask interesting and germane questions; display respect for the interviewee; and create a sense of equality, candor, and reciprocity between the interviewer and the interviewee. Good interviews not only are a source of rich and informative data for the interviewer, they can also be a reflective learning opportunity for the interviewee. Like every good conversation, both parties should benefit.