In the past ‘research’ was sometimes seen as a dirty word in the development sector, with the belief that any money raised should be spent directly on practical aid projects. The issue with this, of course, is that some projects were implemented without any real understanding of the needs of the people they set out to help, or any way of measuring a project's actual impact once it was complete. This is changing now, and research is increasingly seen as critical to successful aid and development initiatives.

With advances in technology, open-source analysis software like R available for anyone to use, and apps that can collect data in the field, research is no longer as difficult as it once was. Moreover, donors (from regular individual givers to government grant-makers and multinational corporations) are increasingly interested in, and concerned about, data transparency and program assessment. It can be a huge motivator to invest time and money in data collection and analysis if it means more donor money coming in the future (and therefore more time available to do the important work in the field).

There is also a huge push by some within the humanitarian community to increase data quality and transparency. Take the International Aid Transparency Initiative (IATI), for example: a voluntary initiative that encourages stakeholders to publish aid data in a specified format, enabling better reporting and analysis across different projects and organisations. Or AidData, a research collaboration that tracks donor organisations' funding to understand how the money is being spent. The AidData website provides some great interactive resources for visualising development finance: where the money is coming from, where it's going, and what it's being spent on worldwide.

And of course, heavyweights such as the OECD, the World Bank, and the UN all have major initiatives to track and report data, and make various resources available on their websites. Some of the information they provide is directly related to development initiatives, while some covers wider measures of country and industry development indicators.

For individual projects, these impressive collections and visualisations of big data may be less relevant than an accurate needs assessment and subsequent tracking of outcome measures in the communities they directly work with. More individual projects are now designing data collection strategies that inform project design from the beginning, provide the means of tracking progress, and deliver clear, unambiguous measures of project outcomes.
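To make the idea of tracking outcome measures concrete, here is a minimal sketch of a baseline-versus-endline comparison. All names and numbers are invented for illustration; a real assessment would use a validated instrument and a proper statistical test.

```python
# Hypothetical example: comparing a community outcome measure before
# (baseline) and after (endline) an intervention. Scores are made up
# purely to illustrate the calculation.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Hypothetical household food-security scores (higher = better)
baseline = [42, 55, 48, 61, 50]   # collected before the project starts
endline  = [58, 63, 52, 70, 66]   # collected after the project ends

change = mean(endline) - mean(baseline)
print(f"Mean change in outcome score: {change:+.1f}")  # prints +10.6
```

Even a simple summary like this, collected consistently and reported honestly (including when the change is negative or negligible), is the kind of unambiguous outcome measure the paragraph above describes.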

A 2013 news feature in Nature titled “International aid projects come under the microscope” highlighted this shift in perspective. The article reports a cultural change in development circles, with more and more organisations applying rigorous assessment strategies to their projects. By necessity, this means reporting negative or inconclusive results as well as success stories, something that more risk-averse organisations will take some convincing to do.

And this is understandable: in an industry where donor money is crucial, and donors can change their minds on a whim, it is scary to think how reporting anything but success stories might affect revenue flow. In the long term, though, this kind of reporting is necessary to keep the trust of donors and to truly understand the short- and long-term effects of different development programs.

As the tide shifts to greater emphasis on data transparency and measurable program outcomes, this focus will become more and more ingrained in how aid organisations operate. There are still issues that individual organisations will need to work out for themselves: how to use numbers effectively without losing track of the humanity behind them, how to strike the balance between spending money on research and data collection without going overboard, and how to overcome resistance to accurately reporting outcomes, even when these are negative or unexciting. But the international aid and development industry has made large leaps on this topic in recent years, and there is no reason why this progress should not continue.

So the answer is no: although this may once have been the case, research no longer seems to be a dirty word in humanitarian work. More and more, people in the industry are seeing the value that solid research can bring to their projects. Given this swell of support for open data and better assessment and reporting in international aid, the future for research in this area looks bright.

What do you think? Have you had any positive or negative experiences trying to incorporate solid research into international aid initiatives? Do you think this change is old news, already incorporated wholeheartedly into operational practice? Or is it sometimes just too difficult, given the messy realities of real-life work?