Maximizing the Value of Philanthropic Efforts through Planned Partnerships between the U.S. Government and Private Foundations.
III. HOW DO FOUNDATIONS AND USG AGENCIES IDENTIFY NEEDS, DEVELOP INITIATIVES, AND MEASURE PROGRESS TOWARD THEIR GOALS?

The processes by which funding organizations identify problems, develop and implement solutions, and monitor progress appear to be influenced by diverse factors, some of which are common to both the foundation and USG sectors, while others are distinct to one or the other sector. Moreover, individual organizations may engage in innovative or noteworthy practices in each phase. This section first addresses commonalities among organizations within each sector, then details the specific organizational processes of some exemplary foundations and agencies.

In a very basic sense, both sectors recognize the same essential life cycle of an initiative, which the U.S. Department of State describes as having five steps: formulation, planning, implementation, evaluation, and renewal/termination (2008). Similarly, Kennette Benedict, former director of international peace and security at the MacArthur Foundation, maps out the life cycle of a philanthropic initiative in three stages: creation, change, and closure (2003a). She could have begun with another stage, however: choosing which problems to address. Indeed, she cites the selection of problems actually amenable to philanthropic intervention as one of the greatest challenges foundations face.

This contrasts somewhat with the USG position on problem selection, as government often is charged with addressing problems that will, in some sense, never be solved definitively (for example, national security and public health) (Tarnoff and Nowels 2005). This explains the State Department's use of the term renewal/termination (which allows for continuation, or renewal, of initiatives), as opposed to the MacArthur Foundation's use of the term closure. As detailed above, foundations often approach problems with the explicit intent that government will eventually assume responsibility for the initiative; for this reason, the end of foundation involvement in an initiative may coincide with government's entry into the arena. Foundations also work in arenas that do not provide a natural fit for government infrastructure, such as leadership development or high-risk ventures. In this way they are supplementing the work of the government toward the solution of larger social problems. (It is worth noting, however, that in a few high-profile cases, this may be changing. With the recent tremendous increases in foundation resources, and the simultaneous focus among philanthropists on results [Renz and Atienza 2006], a few select initiatives may be poised to solve problems definitively. Perhaps the clearest example of such a case is the Bill & Melinda Gates Foundation's work on malaria.)

At a basic level, how foundations and USG agencies prioritize and decide which problems to address also depends, in great part, on their organizational missions. While foundations have been criticized for spreading their resources too thinly across a host of programmatic areas (Porter and Kramer 1999), most do specialize in just a few programmatic, and sometimes geographic, areas, and will consider intervening only in areas they view as germane to their interests. There is, of course, great diversity in the number and nature of programmatic areas on which foundations focus. For instance, the MacArthur Foundation's stated mission is to support "creative people and effective institutions committed to building a more just, verdant, and peaceful world" (MacArthur Foundation web site), which could conceivably translate into activities in almost any area of human or community development. In contrast, EMCF focuses on advancing opportunities for low-income youth (ages 9 to 24) in the United States, specifically through grantmaking that provides growth and capacity-building capital to exemplary organizations with evidence of the effectiveness of their youth services (EMCF web site).

The programmatic purview of federal agencies is to some extent more transparent than that of foundations. Agencies' roles are, of course, typically articulated by public law, circumscribed by public political and budgetary processes, and further refined through on-the-record administrative actions. Still, the preferences of political stakeholders can influence the process. The question of accountability to the public, and the consequent systematization of processes, raises a further point of contrast between foundations and USG agencies. At the outset, when identifying problems worthy of consideration, foundations often are motivated by the personal interests and proclivities of their individual founders and/or leaders (15 Minutes with Susan Berresford 2003). Agency heads, in contrast, may be very influential in terms of the direction their funded initiatives take, but they are less likely to drive decision-making in their agencies in the way that a single, well-funded philanthropist can shape his or her foundation's approach. The broader implication is that foundation strategies often are mapped out less explicitly than at agencies, where decision-making more typically is defined with respect to transparent chains of bureaucratic authority (U.S. Department of State 2007; cf. Porter and Kramer 1999).

As part of its goal to influence the problem identification process across USG agencies (at least among those involved in international initiatives), the Department of State's Foreign Assistance Framework (2007) delineates five broad objectives for all U.S. foreign assistance: peace and security, governing justly, investing in people, economic growth, and humanitarian assistance. The framework includes more specific programmatic objectives within each broader objective and highlights those viewed by the administration as most critical for the trajectory of assistance in that category. Although the framework does not delineate funding levels, the highlighted categories mark the programs that should receive greatest budgetary priority. While some agencies, such as USAID and MCC, have fairly broad programmatic portfolios, their internal bureaucratic mechanisms can delimit potential areas of involvement considerably. For example, MCC has clearly articulated procedures whereby governments seeking to enter a bilateral aid agreement must first demonstrate their competitiveness on a host of indicators of their nation's governance, social investment, and entrepreneurial capacities.

Some have criticized the USG approach because they view it as overly fragmented, a critique typically applied to foundations. For instance, Kharas (2008) takes issue with the increasing prevalence of vertical funds: resources directed at a specific issue or population, such as the Global Fund for AIDS, Tuberculosis, and Malaria. Because such funds increasingly are channeled through specialized agencies dedicated to particular targets, like HIV/AIDS or malaria, rather than through traditional agencies, Kharas is concerned that there will be "little support for broad country development programs" (p. 2). The result, according to critics such as Kharas, is a complex and convoluted approach to aid, which fails to capitalize on precisely those advantages unique to USG: influence, accountability, and long time horizons.

In contrast to USG foreign aid, the framework that guides the identification of problems for targeting assistance in the domestic sphere is perhaps more fragmented. As noted, elected officials and high-level appointees typically bring their own sense of agency priorities to office; these then are filtered through legislative and administrative procedures, delimiting the nature of those problems agencies are in a position to address and the methods to address them.

In shaping the type of support they will provide, foundations and USG face several common considerations. The literature reveals foremost among these a great deal of tension around the relative utility and potential for the success of initiatives that provide program support (resources specific to a programmatic intervention) versus those that provide operating support (resources for the organization implementing a program) (Balin 2003; Huang, Buchanan, and Buteau 2006). An oversimplification of the issue would have foundations focusing heavily on program support (given their comparative advantages in innovating and taking risks by funding cutting edge programming), whereas USG would emphasize operating support (given its superior resources and staying power). Reality, of course, is more nuanced than this characterization, and many foundations do provide operating support, even as USG funds some programs. Moreover, the approaches are not mutually exclusive; a single initiative could comprise both types of support (Balin 2003; MCC 2008b). Rather than advocating any particular approach, the literature suggests that an organizations strategy should consider how one or the other type of support dovetails with organizational objectives (Balin 2003; Porter and Kramer 1999).

Another theme in the literature that describes decision-making around the development of USG and foundation initiatives is the question of whether, and with whom, to partner. Such considerations typically are driven by the comparative advantage of the groups involved and very often center on the question of an initiative's sustainability (Fink and Ebbe 2005; MCC 2008b; U.S. Department of State 2008; W.K. Kellogg Foundation 2003). Some of the benefits that foundations, in particular, might seek from partners include technical expertise; in-country knowledge; connections to academe, the private sector, and civil society; knowledge of public policy and public institutions; and resource mobilization networks (Fink and Ebbe 2005; U.S. Department of State 2008). Similarly, USG often seeks connections and networking opportunities from partners and is especially interested in organizations and individuals who can find markets for an initiative; that is, who can support widespread adoption of whatever programmatic elements an initiative may offer (MCC 2008b; U.S. Department of State 2008).

The literature indicates that both private and public sector actors are keenly interested in cross-sector collaboration and developing partnerships, but obstacles exist that hinder their progress. Successful partnerships require that all parties involved understand the interests, capacities, and approaches of the other actors (Fosler 2002). Yet a State Department study found that private sector partners felt the USG did not understand their interests and looked to them only to fill gaps (2008). Respondents also felt that USG was overly suspicious of private sector motives. This same study revealed that USG actors felt they were ill-equipped to deal with private sector partners and that bureaucratic structures hindered the development of partnerships. While these issues present challenges, the United Nations Foundation suggests that intermediary organizations might occupy a particularly good position for overcoming such problems and facilitating partnerships, as they have a foot in both worlds, public and private (2003).

While the literature reviewed here supports the general contention that measurement remains a challenge for both federal agencies and foundations, both sectors appear to have embraced the challenge to some degree, and successes in this area are not uncommon. Interestingly, both sectors appear to have moved beyond the notion of measurement as primarily a means of demonstrating accountability or impact and also are seeking to measure progress to inform their own broad decision-making processes (Kramer 2007; MCC 2008a). This tendency seems more pronounced in the private philanthropic sector, where one survey of foundation leaders revealed little evidence that evaluations were used to determine grant renewal or termination decisions (Kramer 2007, p. 15). Rather, grant programs often have critics or supporters within the organization who may influence decision-making more heavily than evaluators. The survey revealed that the most useful evaluations for foundations' purposes inform planning and implementation, as well as tracking progress toward the organization's broader goals (Kramer 2007; cf. Guidice and Bolduc 2004; Levinger et al. 2007; William and Flora Hewlett Foundation 2008). Toward these ends, the Robert Wood Johnson Foundation developed a system of comprehensive performance measurement (a system of measuring progress against the foundation's theories of change and indicators of performance); the William and Flora Hewlett Foundation (WFHF) developed an expected return metric (a quantitative process for evaluating potential investments based on consistent metrics); and the Annie E. Casey Foundation (AECF) embraced results-based accountability. On a much larger scale, the Bill & Melinda Gates Foundation has dedicated significant funding to the Institute for Health Metrics and Evaluation at the University of Washington for the development of data systems to support the monitoring of public health issues at a societal level.
In the public arena, MCC seeks to implement results-based management, which uses data to inform aid giving and management, even as it focuses on results; and MCC's core indicators have been used by other agencies, including USAID, to guide decision-making. Each of these will be discussed in greater detail below, but it is worth noting here that the literature suggests foundations may achieve their best successes when applying metrics at earlier points in the continuum, while USG appears to apply metrics more consistently at all stages, with emphasis on evaluation for accountability. This is illustrated by the example (presented at the beginning of this section) that the State Department's (2008) conception of an initiative's life cycle explicitly includes a phase for post-implementation evaluation, whereas MacArthur's (Benedict 2003a) change phase may imply an evaluative component that is not given the prominence it receives from USG agencies and is not necessarily linked to accountability.

Beyond the broad sector-specific trends in decision-making discussed above, individual foundations and USG agencies often engage in planning and development processes specific to their organizations. These are addressed briefly in this subsection, as well as in Chapter V, where we report on a few examples of organizations and initiatives that could serve as case studies. Such issues will be addressed at length in the case studies themselves.

EMCF has been cited as an example of a foundation that has engaged in a very deliberate rethinking of its approach to decision-making. In the late 1990s, the Foundation chose to move away from several broad programmatic areas (poverty, child welfare, education) to a single area (youth development). This decision was innovative and potentially effective, at least insofar as it responded to the criticism, voiced regularly in the literature, that foundations tend to spread their resources across too many areas. Moreover, EMCF's movement away from program grants to operating grants appears to have gone against the grain in the private philanthropic sphere. Another interesting aspect of the new EMCF approach is that it applies strategies from the for-profit sector to the foundation's grantmaking process, including due diligence, business planning, and organizational performance tracking. Again bucking a foundation trend, EMCF emphasizes multiyear grants to allow for the sometimes painstaking work of organizational development. Finally, the Foundation's close relationship with its grantees, including the provision of technical assistance, reflects the broader trend of venture philanthropy, where emphasis is placed on hands-on work with grantees to ensure their success. None of these strategies is innovative in and of itself, but the comprehensive shift coming from within the foundation world represents a new way of envisioning the donor-recipient relationship, one responsive to some of the common criticisms of private philanthropy.

As mentioned above, a few foundations have made noteworthy inroads in tracking their own broad organizational performance. The metrics developed by the Robert Wood Johnson, William and Flora Hewlett, and Annie E. Casey foundations are innovative in that they present a new way of examining success at the foundation, as opposed to the program, level. The Robert Wood Johnson Foundation's system of comprehensive performance measurement is multifaceted, and at least three aspects deserve specific mention. First, the Foundation developed a Scorecard, released annually, that reports outputs at the foundation level, outcomes from key grantees and foundation-wide, and changes at the population level in the broad health indicators its programs seek to address. Second, the Foundation developed and implemented internal assessments for each of its own programmatic teams, as well as employee surveys to gauge attitudes about the Foundation's work. Third, the Foundation set up a public archive of all the data from its sponsored research. A case study of the Foundation's focused and sustained attention to organizational assessment, coupled with its willingness to make data public, points to improved focus and strategy in grantmaking, increased innovation, and better alignment of board and staff goals (Guidice and Bolduc 2004).

The William and Flora Hewlett Foundation developed its expected return metric to support the systematic selection of grantees, specifically by considering the foundation's comparative advantage in a given area, as well as the presence of other funders. Expected return is calculated by multiplying the benefit of an intervention under optimal conditions (usually drawn from extant data) by the likelihood of success (calculated internally) and by the foundation's contribution (adjusted for its varying role in each situation), then dividing by the program's total cost. The metric is fairly straightforward, although data for each input may vary in availability and quality. Because expected return factors other funders into the equation, the consistent use of such a metric by more foundations could support sector-wide improvements in effectiveness.
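The arithmetic behind the expected return metric can be sketched in a few lines of code. The figures and parameter names below are hypothetical, introduced only to illustrate the calculation described above; they are not drawn from WFHF data.

```python
def expected_return(benefit, likelihood_of_success, contribution_share, total_cost):
    """Expected return, per the formula described above:
    (benefit x likelihood of success x foundation's contribution) / total cost.
    All inputs here are hypothetical, for illustration only.
    """
    return (benefit * likelihood_of_success * contribution_share) / total_cost

# Hypothetical grant: $40M potential benefit under optimal conditions,
# a 50% estimated likelihood of success, a 25% foundation contribution
# share, and a $2M total program cost.
er = expected_return(40_000_000, 0.5, 0.25, 2_000_000)
print(er)  # 2.5, i.e., $2.50 of expected benefit per program dollar
```

Because the result is expressed as benefit per dollar of cost, such a figure can in principle be compared across candidate grants, which is what makes the metric useful for systematic selection.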

The Annie E. Casey Foundation's results-based measurement approach was developed through an iterative process, not unlike the Robert Wood Johnson Foundation's development of comprehensive performance measures, with heavy involvement from foundation leaders and staff. The Annie E. Casey Foundation's process is also noteworthy because it involved grantees, a step taken intentionally to gain support for new reporting requirements. Identifying performance measures required the Foundation to articulate the strategy for each program area with great precision. As one leader at the Foundation put it, "In order to measure how we were doing, we needed to be as clear as we could possibly be about what we intended to do" (Kaufmann and Searle 2007, p. 7). As such, the process proceeded, to some extent, in reverse order, with the concept of measurement "driving [their] thinking about what results should be" (ibid.). The system considers results in three categories: impact (the direct effect of a grant on beneficiaries), influence (the effect on behaviors of people not directly touched by the grant), and leverage (additional support, beyond the Annie E. Casey Foundation's contribution, that the grant built or attracted). The Annie E. Casey Foundation's framework does not allow the Foundation to overcome some of the challenges (already cited) associated with measurement, for example, the availability and consistency of measures, but the process appears to have strengthened program strategy and enhanced thinking about different levels of performance. For example, in the Foundation's Education Program, the process resulted in a formal expression of the rationale behind the results that the K-12 Education Program sought.
This included a description of their vision for core results; identification of three critical barriers to achieving the vision; elaboration of the consequences of these barriers; and articulation of the specific role of the Education Program in overcoming them, which also spelled out the results for which the Program would be accountable (Kaufmann and Searle 2007, pp. 7-8).

The Bill & Melinda Gates Foundation has sought to address the challenge of measuring progress, not merely at the foundation level, but at the societal level as well. With an initial grant of $105 million in 2007, Gates helped to establish the Institute for Health Metrics and Evaluation (IHME). The Institute works to develop and compile data on five areas of public health: health outcomes, health services, resource inputs, metrics for decision-making, and evaluation. The purpose of IHME's work is "to put as much information as possible about health in the public domain in a way that is useful, understandable and credible to enable policy-makers and decision-makers to craft the best policies with the highest benefit for their own context" (IHME web site). The Institute has recently published statistics that challenge the reporting of the World Health Organization (WHO), which, as a public agency, could be prone to the interference of politics in its data gathering and reporting (The Seattle Times, April 9, 2008).

Probably the most well-known and well-developed process by which a USG entity identifies needs, develops initiatives, and measures progress is MCC's process to select, implement, and evaluate bilateral aid agreements. In its core elements, the MCC approach directly embraces many of the characteristics of successful aid initiatives identified in the literature, including those that remain a challenge for both the public and private sectors. Local ownership of MCC initiatives is supported explicitly through the requirements of the application process, as well as recipient countries' role in providing performance assessment frameworks and conducting evaluations with input from local institutions. MCC applies relatively consistent and well-developed metrics throughout all phases of its decision-making, such that other agencies often rely on MCC indicators (USAID 2007; U.S. Department of State 2007). These metrics have been developed independently by third parties, such as the World Bank, the United Nations, and other international agencies, lending the process both credibility and transparency in the international sector.

MCC agreements are multiyear, with clear requirements for continuation, so funding is relatively reliable once a nation enters into a compact. Broad country coverage and the potential for applying successful initiatives in other countries also are considered in evaluating potential compacts, which speaks to the importance of scale in MCC's approach. A prime motive for long-term funding is the idea that much of what is undertaken by MCC compacts can be considered reform, and so often requires structural and policy changes. For example, the MCC-World Bank collaboration in Mozambique's water and sanitation sector required changes in the legal authority for local sanitation services (MCC 2008b). Such changes require time to become effective, as well as resources and technical support to ensure that they are successful. MCC is committed to making these foundational investments, which require a willingness to focus on long-term objectives. MCC's aid, however, is tied to performance on a yearly basis, which suggests that the achievement of shorter-term objectives is still necessary. Indeed, one of the most common critiques of MCC is that it has disbursed aid too haltingly (Chassy 2005). As it continues to support current compacts and establish new ones, MCC may need to balance the demand for quick results with investments in the broader goals of prosperity and stability.

Another example comes from USAID's recent reform of its policy framework. According to USAID, studies of USG foreign aid often have highlighted the government's overarching agendas and the lack of coherence in goals across aid programs to meet those agendas (2006). The numerous accounts responsible for foreign aid have operated in isolation, with different standards and methods of measuring progress. To address this issue and provide guidance and coherence in the application of assistance, USAID now uses a policy framework based on five core goals for foreign aid: promoting transformational development, strengthening fragile states, supporting strategic states, providing humanitarian relief, and addressing global issues and other special concerns. For each goal, the framework provides guidance on program planning, resource allocation, and evaluation. The framework builds on the concept that different goals require distinct approaches to formulation and implementation, and it also incorporates USAID's desire to see more public-private partnerships and other new models of aid delivery as part of its initiatives. The five goals also reflect new directions in foreign aid post-9/11, including support for fragile states and key allies, and the identification of global concerns, such as HIV/AIDS, that have broad impacts.

To further increase the effectiveness of foreign aid and harness the strengths of various agencies within USG, the office of the Director of U.S. Foreign Assistance has piloted a new strategic planning process that brings together those USG agencies delivering assistance within a country to collaborate on the top priorities for that nation (Greene 2008). The agencies collectively produce a Country Assistance Strategy document that outlines the top four or five assistance priorities for that country, taking into account the relative strengths and opportunities that each agency brings to the table and the particular needs of the country in question. This process theoretically minimizes the conflict of goals that can occur when multiple agencies are involved, reduces overlapping efforts, and enables the transfer of knowledge. As of 2008, the process has been piloted in 10 countries. This integration of agency efforts is not surprising, given the 2006 creation of the central Director of Foreign Assistance to oversee foreign aid, but it is not certain whether this process will facilitate a consolidation of the accounts and programs funded with USG aid or an increase in the number of USG agencies involved in international development. It is also unclear what role private organizations will play in this process, although it would make sense to include their efforts for consideration, since some large foundations have as much of a presence in some countries as USG agencies.

In the domestic arena, the Centers for Disease Control and Prevention (CDC) have developed a Framework for Program Evaluation to ensure that "amidst the complex transition in public health, [CDC] will remain accountable and committed to achieving measurable health outcomes" (Milstein and Wetterhall 1999). The framework is a practical, nonprescriptive tool, designed for use by public health professionals (rather than professional evaluators), and it encourages the integration of evaluation practices into program operations. Although the framework is focused on the evaluation of individual programs, it is structured to allow CDC to make comparisons across programs. By attempting to build consistent, high-quality evaluation into all of its programs, the CDC hopes to employ this framework to support agency-wide planning and program development, as well as further evaluation.
