Data in Social Impact: What are the Limitations?

It’s a message we’ve heard time and time again: technology and big data are the way forward in the 21st century. But what happens when we introduce extensive, rigorous (and primarily quantitative) data collection and analysis into the development sector?

Donors – particularly in the international development sector, where some of the most prominent donors are large institutions and corporations based in the West – have high expectations regarding program effectiveness. They want every quantitative data point, every case study, and every interview imaginable in order to know that their money was well-invested. However, this can place a huge strain on organizations, particularly those that lack the human capital or financial resources to collect data to an “adequate” degree.

Two of donors’ biggest fears are 1) spending money on interventions that don’t work, and 2) not finding out an intervention doesn’t work until years later. The history of the development sector is rife with horror stories of programs that were tested, scaled up, and replicated based on positive results reported by participants, only for it to emerge a decade or so later that the positive outcomes were not attributable to the program at all.

As of now, the most feasible way to verify program effectiveness is through data collected by organizations working on the ground or by independent evaluators. And while there is nothing wrong with donors requesting technical data, such requests must account for the resources the program actually has to spare.

Low Actualization

Picture this: you are an employee at a small development NGO. You may have a myriad of interests, but you were hired to implement a program. However, donor deadlines are fast approaching and your implementation duties shift: you find yourself pounding away on a laptop, trying to develop and translate questionnaires that adhere to the evaluation metrics set forth by this donor while also making sense to your participants. As you look through all of your files, you take note of your upcoming deadline.

One donor has asked for quarterly reports, another for monthly reports, and a third for 6-month and yearly reviews. All three donors expect you to evaluate the program by different metrics.

What do you do? You know this information is important, but there aren’t enough hours in the day, and your organization doesn’t have the capacity to turn all of this information around in time, even though you needed the money to make your program a reality.

So, you make the difficult, but understandable, choice. You get the numbers that you can and leave the rest. It’s not possible, you say. The demands are too high, given what we have to work with.

“Program Failure”

Due to a lack of data, misinterpretations of what was needed, or similar shortfalls, the program is deemed unsuccessful. If an NGO is working with a repeat donor, it will hopefully have cultivated enough of a relationship to know exactly what is being asked of it, and the donor will know what is a feasible ask given the timelines. But this is an ideal scenario, and it often doesn’t play out in reality.

Rinse, and Repeat

It doesn’t have to be this way. It has become a common rule of practice in development to always listen to the community, and trust their knowledge. However, we have yet to give local NGO workers the same credit when it comes to data collection. By establishing common rules around expectations for data, we can move toward a system that is more trusting, more transparent, and provides positive results for all involved.

Hailing from the small suburb of Ossining, New York, Akiera knew from a young age that she did not want to settle down in a mundane locale early on in life. That mindset led her to Boston, where she enrolled at Northeastern University in 2013. Northeastern has a co-op program that provides students with 6-month opportunities (up to three times throughout their undergraduate career) to pursue full-time work positions in their field of interest. Having worked at a local health improvement NGO and a government agency, she decided prior to her senior year that it was time for her to pursue her interest in sustainable international development, despite her fears of not having the correct toolkit. During undergrad, she immersed herself in research across a wide variety of social impact sub-disciplines: from human trafficking to drug abuse to sexual violence. Public health emerged as a common thread among all of her studies. In 2016, she traveled to Kenya to work with an NGO in Nairobi to evaluate the effectiveness of a sexual and gender-based violence (SGBV) resource tool the organization had developed for use in an informal settlement. Later that year, she also worked in a consulting capacity in Cape Town, South Africa with micro-entrepreneurs who had designed a business to create a foundation for social connections and employ refugee women from other African nations.