In the most widely read article in the Stanford Social Innovation Review, “Collective Impact,” authors John Kania and Mark Kramer argue that the fundamental limitation of isolated and competing initiatives is lack of coordination. They write:

No single organization is responsible for any major social problem, nor can any single organization cure it. In the field of education, even the most highly respected nonprofits—such as the Harlem Children’s Zone, Teach for America, and the Knowledge Is Power Program (KIPP)—have taken decades to reach tens of thousands of children, a remarkable achievement that deserves praise, but one that is three orders of magnitude short of the tens of millions of US children that need help.

While we do not doubt the benefits of collaboration, we argue that a focus on “collective impact” over and above competition often results in coordinated but misdirected effort. Collaboration is initially helpful in generating efficiency of implementation by centralizing the focus of multiple organizations, but such coordination is beneficial only when it centralizes effectively and identifies the right solution—a complicated proposition with multi-faceted social problems. Indeed, the gap between collective impact and coordinated blindness is unfortunately small.

Understanding the relationship between competition and collaboration

Kania and Kramer define collective impact as a process of “creating and sustaining the collective process, measurement reporting systems, and community leadership that enable cross-sector coalitions to arise and thrive.” Put another way, collective impact involves the marrying of centralized planning and coordinated implementation.

While they are correct to suggest the benefits of scale when the right answer has been identified, finding these answers is rarely easy. Have we already identified the best way to address homelessness, education, or health care? An honest assessment of collaboration must acknowledge both the benefits of scale and the trade-offs of a corresponding reduction in experimentation with other approaches.

An alternative way to structure nonprofits is by decentralization of the market through competition. In this approach, organizations do not intentionally agree upon a specific path, but instead work on a social problem as they see fit. While this often puts organizations in competition with one another for limited resources, competition also serves to drive experimentation and innovation—the creative destruction that Schumpeter identified so insightfully. Furthermore, to the extent there is transparency of practices and other mechanisms that might create learning across these organizations, competition helps organizations move more quickly up a learning curve toward improved market outcomes.

Consider the choice between collective impact and competition in a field such as education—an arena that Kania and Kramer use to make their case. One part of collective impact is rallying around shared standards. While such an idea is attractive for the accountability it creates, it is quite another thing to decide what those standards are, to anticipate their unintended consequences, and to know when they are best adjusted over time as a given field changes. Consider the downsides of centralized standards in the case of No Child Left Behind, a policy that has rightly been criticized for failing to test the right things and for holding different types of schools and students accountable to a common-denominator set of standards. As AEI’s Frederick Hess and Stanford’s Linda Darling-Hammond argue, “Perhaps No Child Left Behind’s most enduring lesson is the value of humility—a virtue that must be taken to heart in crafting a smarter, more coherent federal role in schooling.”

Or consider charter schools, which, in their accountability to performance but not process, represent another movement toward structured experimentation over centralized planning. A 2004 NBER study on education market outcomes shows that charters generally increase the quality of the public schools with which they compete. But other data on the outcomes is more mixed. In a 2009 study of charter performance across 16 states, for example, the CREDO group at Stanford found that only 17 percent of charter schools reported academic gains significantly better than traditional public schools, while 46 percent showed no difference and 37 percent were significantly worse than their traditional public school counterparts—not exactly a ringing endorsement.

But a distribution bent toward underperformance is not necessarily a sign of this model’s failure when viewed over the long run. Performance variance—seeing what did and did not work—is the nature of competition, and it can be beneficial, especially if transparency and accountability are embedded in a system, thus making individual schools more capable of learning from practices across the market. Consider what kinds of innovation would have been lost without the experimentation of KIPP’s character-based education or Teach for America’s model for attracting high-talent teachers who would otherwise have gone into more lucrative industries. When we do not know the solution to a problem, experimentation and its corresponding ups and downs should not immediately be considered a negative. School districts like New Orleans, which have shown significant performance strides after moving two-thirds of their schools to charter forms, are outliers to the CREDO study not because they identified a singular path, but because they intentionally experimented and found a way to learn from this mini-market of schools.

Conclusion

There are good reasons for wariness around competition in the social sector. Many nonprofits may view embracing competition as sowing the seeds of mission destruction. In light of this, the appeal of collective impact—and specifically zeroing in on a solution and implementing it with scale—is understandable.

The problem is that scale might be the least of our worries when it comes to complicated social problems like education, where single solutions are challenging if not impossible to identify. As a result, scaling up through central planning, however nuanced, might be counterproductive to the extent that the “right” solution remains unidentified, or that different solutions are required for different people and different markets.

As an alternative, we suggest the importance of building systems that encourage competition within—and learning across—players in these markets. In the long run, competition often leads to improved market outcomes, even if it makes survival of any given organization harder. Remember that “improved market outcomes” is another way of saying “better care of a given service population.” The ultimate irony is that those in the social sector should actually be more supportive of competition than those in the for-profit marketplace. Even if it goes against an impulse toward self-preservation, it is time for practitioners to catch up to this reality.

Peter Boumgarden is an assistant professor of management at Hope College in Holland, Michigan. Dr. Boumgarden’s academic research is on organizational structures and innovation, and he also independently consults on these issues in both the for- and non-profit sectors.

John Branch is a lecturer of marketing at the Ross School of Business, and an associate at the Center for Russian, East European, & Eurasian Studies, both of the University of Michigan. Dr. Branch is active in teaching and training globally in the areas of marketing and strategy in both the for- and non-profit sectors.

COMMENTS

“An honest assessment of collaboration must acknowledge both the benefits of scale and the trade-offs of a corresponding reduction in experimentation with other approaches.”

Collective impact is a new approach, and properly done, it involves plenty of experimentation with new coordinated approaches as opposed to the old competitive approaches (which have worked oh so well).

The point of collective impact is also not at all to centralize decision making in order to impose a unique and identical model on everyone; it is about trying to coordinate all the different actors in order to maximize complementarity and thus impact.

Evaluation and what I might call modulated scaling (scaling what works only in ways that make sense for the different populations served) are at the center of the collective impact approach.
