Evaluating Health Research Impact: Development and Implementation of the Alberta Innovates – Health Solutions Impact Framework

Graham, K. E. R., Chorzempa, P. A., Valentine, P. A., & Magnan, J. (2012). Evaluating health research impact: Development and implementation of the Alberta Innovates – Health Solutions impact framework. Research Evaluation, 21(5), 354-367. http://rev.oxfordjournals.org/content/21/5/354

Abstract: Alberta Innovates – Health Solutions (AIHS) is a Canadian-based, publicly funded, not-for-profit, provincial health research and innovation organization mandated to improve health, the health system, and socioeconomic well-being of Albertans through health research and innovation. Investments in health research are substantial and funders face increasing pressure to measure the impact of their investments and demonstrate ‘value for money’. However, measuring impact in this context is a challenge given the lack of agreement on a common approach or gold standard, diverse stakeholder interests, attribution issues, and time lags between investments and the realization of long-term impact. To address these issues and ideally optimize impact, AIHS developed and implemented an impact framework based on a model published by the Canadian Academy of Health Sciences (CAHS). The purpose of this article is to: (1) describe the evolution of the framework’s development and implementation; (2) summarize the results of tests undertaken to verify the suitability, feasibility, and applicability of the CAHS model to the AIHS context; and (3) present the AIHS framework, with discussion focused on the challenges of development and implementation, lessons learned and future plans for its ongoing development and implementation.
AIHS wished to “develop an impact framework to demonstrate impact and illustrate the pathways between research investment and impact”. Understanding the pathway(s) from research to the various impacts of research is the subject of much interest, as illustrated by the UK Research Excellence Framework process of awarding funding to universities that are able to articulate academic and non-academic impacts of research. AIHS describes the development of its evaluation framework, which is based on that of the Canadian Academy of Health Sciences (CAHS), which is in turn based on the payback method, itself based on a logic model. The logic model is a standard model for program evaluation. A full edition of Research Evaluation was dedicated to the payback method in 2011.
Payback relies on articulating impacts that accrue to five domains: knowledge production; benefits to future research and research use; political and administrative benefits; health sector benefits; and broader economic benefits. These categories remain in the CAHS and the AIHS models, but AIHS has included elements of “reach”. AIHS includes indicators of reach in its 42 indicators presented in Table 1 in the paper. By way of comparison, CAHS had 66 indicators. Adding indicators of reach is important because this adds elements of “who” and “target audiences”. This matters because if one is anticipating impacts on non-academic audiences (policy makers, business, and practitioners), then the target audience is critical: it is those audiences who actually create the broader societal impacts by making policies and products or delivering services that have an impact on the lives of citizens.
In their table on indicators, the authors include indicators of medium-term outcomes, including indicators of dissemination and implementation of research. These don’t line up with the AIHS framework in Figure 2, which shows that primary outputs (=dissemination) occur in 1-3 years and influencing decision making (=implementation) occurs in 4-7 years.
In addition, between dissemination and implementation there is a middle stage of uptake. This is the stage where end-user organizations take the research in house, evaluate it, compare it against current evidence, and assess the potential of the research to be implemented into policy or practice. This intermediate step doesn’t change the AIHS model, but it does create the need for some additional indicators.
In developing and assessing the AIHS framework the authors used grant applications, annual progress reports (n=855) and financial data. There are two limitations with these data sources:

The oldest data sourced were five years old, but the model clearly anticipates long-term impacts being realized more than eight years after the research. Therefore, the downstream effects of this model will only now be able to be assessed, two years after publication of the paper.

The data sources are all investigator driven. They rely on investigator reports of the impacts that have occurred. SSHRC undertook an evaluation of its Connection Program (the program funding knowledge mobilization activities). SSHRC’s evaluation showed that end-of-grant reports were not sufficient for capturing non-academic impacts of research. Indeed, even interviewing researchers produced an incomplete picture of impact. Only when SSHRC interviewed research partners, often years after the research, did it find examples of non-academic impacts of research.

This suggests that AIHS needs a new data source – qualitative interviews with end users – in order to establish the downstream effects of its model. My #1 lesson for research impact is that “impact is measured at the level of research partners”. Academic researchers don’t usually make policy, don’t sell commercial products, and only rarely deliver social services. Since policies, products, and services are what actually have an impact on the lives of citizens, researchers must work with (and funders must fund activities with) public, private, and non-profit partners respectively.
The authors also acknowledge the issue of the linearity of this model. “Despite its linear appearance, research and innovation systems, as with any eco-system, are dynamic and interactive with interdependent, simultaneously occurring, feedback loops.” I have come to accept that linearity is OK. As I mentioned in an earlier journal club post, “Our community has long ago abandoned linear models of knowledge mobilization. But that refers to the non-linearity in a single knowledge mobilization project; however, when working in a system of knowledge mobilization the portfolio of projects does move towards impact. And this movement is linear from research to impact.” This also applies here. Any single instance of knowledge mobilization is not linear. But when measuring at a systems level, as AIHS is, the mapping of the progress of your portfolio of research moving towards non-academic impacts must be linear.
Questions for brokers:

How easily can the AIHS model be applied to disciplines other than health? Does progress from research to impact follow the same path regardless of discipline: research activity to results to influencing decision making to affecting health care to changes in health, social and economic wellbeing? Does this pathway also work in agriculture, education and international development?

Are 42 or 66 indicators sufficient to capture complexity while monitoring the progress from research to impact, or does this number of indicators create a barrier through evaluation burden?

If we need to engage with end users to find out stories of research impact, how do we create meaningful connections with partners even many years after the research project has finished?

ResearchImpact-RéseauImpactRecherche is producing this journal club series as a way to make the evidence and research on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article. Then come back to this post and join the journal club by posting your comments.