
Saturday, April 08, 2017

Greg Armstrong [Updated July 2018]

The 2016 RBM Guide produced by Global Affairs Canada is an essential tool for anyone implementing Canadian aid projects, and useful for anyone else seeking to design a results-based development project.

Global Affairs Canada results chain

Level of difficulty: Moderate
Length: 105 pages (plus appendices)
Primarily useful for: Managers of Canadian aid projects – or anyone involved in project design, regardless of the funding source
Most useful: Reporting on Outcomes, p. 87-92
Limitations: Some elements are of use only in project design, but most can be used in implementation.
Languages:

The Canadian aid agency – formerly known as CIDA, and now part of Global Affairs Canada – adopted Results-Based Management in 1996. In 2001 a useful and user-friendly 97-page guide to using RBM in writing a project implementation plan was produced for CIDA by Peter Bracegirdle. I reviewed that RBM guide several years ago, and it is still available from Appian consulting and several other sites.

Despite changes to CIDA results terminology in 2008, and the posting of issue-by-issue guides on the CIDA, later DFATD and Global Affairs websites under the general title Results-Based Management Tools at (CIDA / DFATD / Global Affairs Canada): A How-to Guide, the resources available to people trying to use Results-Based Management in both the design and implementation of Canadian projects were limited. Trainers had to paste together different documents on logic models, indicators and risk obtained from the website, to produce a coherent, if somewhat jargon-laden, ad hoc RBM guide of roughly 45 pages.

Many people continued to use Peter Bracegirdle's 2001 PIP Guide [A Results Approach to the Implementation Plan] as the most effective of the CIDA/DFATD/GAC guides up until 2016 – not just for developing implementation plans at project inception but, with adaptations for terminology changes, as an aid to annual work planning.

This Guide has only recently been made available on the GAC website, and while it appears complete in itself, some of the links to templates remain to be added. In some cases, such as the template for the Performance Measurement Framework, the link is to a document produced when the agency was still known as CIDA.

But while the new GAC RBM guide includes a lot of material from earlier documents used since 2008, it also has a substantial number of new clarifications, which make it a much more practical RBM tool than previous versions.

[Update: 2018: The GAC RBM page now also contains a draft reporting guide and a number of checklists and tip sheets on developing, assessing or reviewing theories of change, logic models, indicators in general, gender equality results and indicators and other topics.]

Who This RBM Guide is for

This document will be of use beyond its primary intended audience, which was originally staff of Global Affairs and those working with them on project design. While some of the background information describing the relationship of this guide to other Canadian government policies will be of little use or interest to anyone outside the Canadian government, there is a lot of material here which could help implementing agencies and partners working on Canadian-funded projects to work more effectively. And it is easy to see, from the discussions on problem identification, theory of change, risk, and other topics, how this guide will be useful to people designing projects for any agency, regardless of the funding source.

Results Terminology

The Global Affairs approach to Results-Based Management has been, since 2008, an improvement over that of many other agencies, limiting the labelling of results to “outcomes”. While the term “objectives” is in peripheral evidence in this document, it never appears in the functional tools such as the GAC Results Chain, the Logic Model, or the Performance Measurement Framework. There is no confusion here with results being described as purposes or goals – terms which, for some agencies, are used almost interchangeably along with Outputs, Outcomes and Results, something that often leads to genuine confusion as implementing agencies, partners and beneficiaries try to describe results, and distinguish them from activities.

For Global Affairs Canada, as for CIDA before it, all results are changes: not completed activities, as some U.N. agencies confusingly label low-level results, but changes in the short term in capacity – in understanding, skills, or access to services. At higher levels, results are seen as changes in the behaviour, practice and performance of change agents or of the people who are the long-term beneficiaries. All of these changes are, in theory, designed to contribute to even longer-term changes in important life issues such as income, food security, health, security, the status of women, levels of suffering or human rights.

The Global Affairs Canada Results Chain

The results chain, in English, for Canadian aid projects has, since 2008, looked like this:

The Global Affairs Canada Results Chain - English

It is interesting to note that in the French-language version of the Global Affairs Canada RBM guide, what are called in English Immediate Outcomes, Intermediate Outcomes and Ultimate Outcomes are, in French, just “results”.

The Global Affairs Canada Results Chain - French

The differences between the English and French reflect the Treasury Board of Canada Results-Based Management Lexicon, which standardizes the results frameworks for Canadian government agencies.

I do not find the addition of "Outcomes" – instead of just labelling them results – to be helpful.

As someone who works regularly to help people understand RBM in other languages, I find that defining a result as a “change” is something that can be easily translated into any language – not just for government officials or field workers, but for villagers and other beneficiaries. But "Outputs" and "Outcomes" are both words used in English in many different senses, which causes problems of understanding even for native English speakers working on RBM, including those in donor agencies. In some other languages, while “change” is always understood, special terms have to be devised to describe Outputs or Outcomes. As I have argued elsewhere, clear language is always preferable if we want people to actually use Results-Based Management in practice. I doubt, given the organizational context, that there is anything GAC RBM specialists can do about this, however.

Outputs - not Results

This version of the RBM guide provides improved operational clarity in the definitions of what are not results – inputs, activities, and particularly the products of activities – clearly labelled as “Outputs”.

Outputs are described as “Direct products or services stemming from the activities of an organization, policy, program or project.”

Those who have examined or worked with the results terminology used by U.N. agencies will note the difference between this and the common definition of Outputs still used by many U.N. agencies [my emphasis added]:

"Specific goods and services produced by the programme. Outputs can also represent changes in skills or abilities orcapacities of individuals or institutions, resulting from the completion of activities within a development intervention within the control of the organization. " [Results-Based Management in the United Nations Development System, 2016, p. iii]

"Outputs are changes in skills or the abilities and capacities of individuals or institutions,orthe availability of new products and services that result from the completion of a development intervention." [United Nations Development Assistance Framework Guidance, Feb 2017, p. 27]

In practical terms, the confusion caused by mixing products and actual changes in capacity into one common category has meant that only the most serious U.N. agency managers have actually reported on changes in capacity – while their less "ambitious" colleagues have satisfied themselves, although not their bilateral partners, by reporting on completed activities – numbers of people trained, handbooks produced, schools built – as real results. This has proven to be a real source of frustration to bilateral donors contributing to U.N. agency activities, because many of the bilateral agencies, like Global Affairs Canada, DFID, the Australian aid agency and others, need to report on changes – such as increased skills, better performance, increased learning by students, or improved health, security or income – and not just on activities completed.

Results Level hierarchy

The results – three levels of Outcomes – are organized in a Logic Model.

Immediate Outcomes (or Résultat immédiat)

Immediate Outcomes are, for Global Affairs Canada:

“A change that is expected to occur once one or more outputs have been provided or delivered by the implementer. In terms of time frame and level, these are short-term outcomes, and are usually changes in capacity, such as an increase in knowledge, awareness, skills or abilities, or access* to... [services] ...among intermediaries and/or beneficiaries. * Changes in access can fall at either the immediate or the intermediate outcome level, depending on the context of the project and its theory of change.”

Intermediate Outcomes (Résultat intermédiaire)

Defined as

"A change that is expected to logically occur once one or more immediate outcomes have been achieved. In terms of time frame and level, these are medium-term outcomes that are usually achieved by the end of a project/program, and are usually changes in behaviour, practice or performance among intermediaries and/or beneficiaries."

Ultimate Outcomes (Résultat ultime)

Defined as

"The highest-level change to which an organization, policy, program, or project contributes through the achievement of one or more intermediate outcomes. The ultimate outcome usually represents the raison d'être of an organization, policy, program, or project, and it takes the form of a sustainable change of state among beneficiaries."

Among the many useful small changes to the way these definitions work is the admonition that such long-term changes should not refer to generic changes in the country’s circumstances (such as improved GDP), but should deal with real changes in the lives of real people – in health, learning, security and other areas which can be demonstrated with indicator data.
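For readers who find structured notation helpful, the three-level hierarchy described above can be sketched as nested data. This is only an illustration: the level names come from the Guide, but the water-project content is entirely hypothetical.

```python
# A minimal sketch of the GAC results hierarchy as nested data.
# The level names come from the Guide; the water-project content is
# hypothetical, invented purely for illustration.
results_chain = {
    "level": "Ultimate Outcome",
    "statement": "Improved health of women and children in District X",
    "contributors": [
        {
            "level": "Intermediate Outcome",
            "statement": "Increased use of safe water sources by households",
            "contributors": [
                {
                    "level": "Immediate Outcome",
                    "statement": "Increased knowledge of safe water handling",
                    # Outputs are products of activities - not results.
                    "outputs": ["Training sessions delivered", "Handbook distributed"],
                }
            ],
        }
    ],
}

def flatten_results(node, depth=0):
    """Walk the chain top-down, collecting the results (changes) at each
    level, and skipping the Outputs, which are products rather than changes."""
    rows = [("  " * depth) + node["level"] + ": " + node["statement"]]
    for child in node.get("contributors", []):
        rows.extend(flatten_results(child, depth + 1))
    return rows

for line in flatten_results(results_chain):
    print(line)
```

The point of the sketch is simply that each level is a change contributed to by the level below it, while Outputs – products, not changes – sit outside the list of results.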

CIDA in 2008 moved from the familiar Logical Framework, which combined results, indicators, assumptions and risk in a visually (and often intellectually) confusing manner, to a disaggregation of the main elements of the Logical Framework into three distinct tools:

A Logic Model

Based on a theory of change exercise, this visually illustrates how different elements are intended to be combined to contribute to short-term, medium-term and long-term changes, as this example from a 2015 Request for Proposals illustrates:

Example of a Logic Model

A Performance Measurement Framework

This presents indicators, targets, and data collection methods and schedules for different levels of results, as this 2013 example illustrates:

Example of a Performance Measurement Framework
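As a rough sketch of what one row of such a framework records, the field names below paraphrase columns commonly seen in GAC performance measurement templates, and the indicator content is invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class PMFRow:
    """One indicator row. The field names paraphrase columns commonly used
    in GAC performance measurement templates; the content is hypothetical."""
    expected_result: str
    indicator: str
    baseline: str          # to be collected before targets are set
    target: str
    data_source: str
    collection_method: str
    frequency: str
    responsibility: str

row = PMFRow(
    expected_result="Increased knowledge of safe water handling among households",
    indicator="% of surveyed households able to describe three safe-water practices",
    baseline="12% (hypothetical 2016 household survey)",
    target="60% by end of project",
    data_source="Household survey",
    collection_method="Structured interviews, random sample",
    frequency="Baseline, mid-term, end of project",
    responsibility="Implementing partner M&E officer",
)

print(row.expected_result, "->", row.indicator)
```

The one structural point worth noticing is that the baseline field must be filled in before the target field can be meaningful – a dependency the Guide's discussion of baseline data returns to later.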

A Risk Framework

This identifies risks, likelihood of occurrence, potential effect on the project, and strategies to mitigate them.

Criteria for assessing the likelihood and effect of risks

Global Affairs Risk Table

RBM tool templates

The combined templates for the Logic Model and the Output-Activity Matrix, and the separate Performance Measurement Framework template, can – within some limitations – simplify the mundane, if not the intellectual, tasks of distinguishing between and recording the links between Activities, Outputs and Outcomes in the Logic Model, and of recording agreements on indicators. The positive side of these templates is that they standardize what is produced, and make it difficult to inadvertently omit or change the wording of results, as we move from a Logic Model to the development of activities and indicators.

Outcome & Outcome statements entered into the GAC Logic Model

Outcome and Outputs from the Logic Model transferred to the Outputs and Activities Matrix

The negative side of these templates – form-filling PDF files which restrict reformatting – is that they can be difficult to work with if the forms are being projected onto a screen and used as the basis for discussion in large Logic Model and indicator workshops, where using the suggested “sticky notes” is not practical. In those situations reformatting is often necessary to accommodate changes as the discussion occurs – and as new columns and notes need to be added to remind participants how these have evolved, and what needs to be done. This is apparently not possible with these forms.

This could be handled subsequent to a workshop in additional text, but it is best to get these things on record quickly, while the discussion is taking place. In these situations I have found word processing programmes such as Microsoft Word or Google Docs easier to work with than PDF or spreadsheet formats.

An additional factor is that some work is required – if you are using Chrome, for example – to disable the browser's built-in PDF viewer before these documents can be downloaded, even if you own Acrobat.

The templates for these tools are not part of the actual GAC RBM Guide itself – at least not as of this writing – but download links are provided, either in the text or at the Global Affairs website, for the Logic Model, Performance Measurement Framework and Risk Table. If you get the message above, you might be able to get around it by right-clicking on the link and downloading the document, but there is no guarantee this will work.

Improved operational clarity

All of the basic tools remain essentially the same as they were in 2008, but the improvement over earlier CIDA guides produced after 2008 is that there is increased clarity in this document about how to use the Logic Model, Output-Activities Matrix and Performance Measurement Framework in practical terms – in project design, implementation, monitoring and results reporting.

The 2001 PIP Guide remains a useful tool, as it had more detail on some design issues such as the Work Breakdown Structure, activity scheduling, budgeting and stakeholder communication plans. But the new Guide, building on material produced after 2008, and with new examples, contains useful new clarifications throughout the document. These deal, among many other things, with:

Distinguishing between Outputs and Activities.

This sounds mundane, but there has been confusion in some Logic Models about whether Outputs were just completed activities or something more. In that approach, an activity might be “Build wells” and the Output would be “Wells built” – something which is of no use at all in helping project managers mobilize and coordinate the resources and individual activities necessary to really put the wells in the ground. I have always found the CIDA (2001) Output-Activity Matrix to be a useful bridge between the theory of the Logic Model and the need for concrete focus in work planning. This document makes this link, and the link to results-based scheduling, clearer, and the template for the Logic Model automatically populates the matrix with Outputs, preparatory to figuring out what activities are necessary to achieve them.

An example of the Outputs-Activities Matrix after Activities are added

Examples of how to phrase Outcomes in specific terms (syntax)

The GAC RBM framework has several criteria for developing precise result (Outcome) statements, reflecting the fact that these are supposed to represent changes of some kind, for specific people, in a specific location, and the RBM guide provides illustrations of two ways this can be done:

Syntax Structure of an Outcome Statement - Global Affairs Canada

The Guide also provides examples of strong and weak Outcome statements, with suggestions on how they can be improved.

Examples of strong and weak Outcome statements

Results Reporting Format

The Guide provides a useful new format for results reporting. In the past different projects have reported in a wide variety of ways, often forcing readers to wade through dozens of pages of descriptions of activities, in a vain attempt to find out what the results are. This suggested new format puts results up front, in a table, emphasizing indicator data, with room for explanations in text, below.

These and other additional tips can be found in section 3 – Step by Step Instructions on results-based project planning and design (p. 66-85) – and section 4 – Managing for Results during Implementation (p. 86-92) – but others are spread throughout the document, and for that reason it is useful to read the whole document, even for users familiar with past CIDA/GAC documents.

Limitations

This is a good, practical RBM guide, but having a good guide is one thing, and getting people to use it – or to deal with the implications of what it means for agency operations – is another. I see two areas where further improvements could be made, some of which could be done informally, and some of which, given procedures in the Government of Canada, are perhaps beyond the scope of the GAC RBM group’s control.

1. Dealing with the Implications of RBM for operations and funding

I have seen very small civil society organizations face lengthy processes of data collection and report revisions, to comply with donor agency RBM requirements for relatively inexpensive projects. But at the same time donor agencies themselves - and this means most donors - often do not deal realistically with the implications of their own guidelines for project budgets.

Baseline data

Take baseline data collection, for example. The GAC Guide sections on Indicators and the Performance Measurement Framework (p. 52-64) are generally quite practical, and make the very valid point that baseline data for indicators must be collected before targets can be established, and results reported on. I agree completely that this is the most useful way to proceed – if the time and budget are allocated to make it possible. As the GAC guide says about baseline data (I have added emphasis):

"When should it be collected?

Baseline data should be collected before project implementation. Ideally, this would be undertaken during project design. However, if this is not possible, baseline data must be collected as part of the inception stage of project implementation in order to ensure that the data collected corresponds to the situation at the start of the project, not later. The inception stage is the period immediately following the signature of the agreement, and before the submission of the Project Implementation Plan (or equivalent)." [p. 60]

In a rational process this would in fact be the situation. But the reality is that for projects funded by GAC and many other donors, after two or three years of project design and approval processes, both the donor and the partners in the field want to start actual operations quickly. The amount of time allocated by donors and partners for the inception field trips by implementing agencies – and the budget allocated to support baseline data collection processes - are too limited to make baseline data collection for all indicators during the inception period feasible in all but the most unusual cases.

A typical inception field trip might last 3-4 weeks, rarely longer, and during this period a theory of change process has to be initiated with all of the major stakeholders, an existing logic model tested and perhaps revised, a detailed work breakdown structure and risk framework developed, institutional cooperation agreements negotiated, and detailed discussions on a Performance Measurement Framework with a multitude of potential stakeholders completed. As the GAC guide notes:

"As with the logic model, the performance measurement framework should be developed and/or assessed in a participatory fashion with the inclusion of local partners, intermediaries, beneficiaries and other stakeholders, and relevant Global Affairs Canada staff." [p. 58]

Some of these indicator discussions alone, where an initial orientation is required, and where there are multiple stakeholders, with different perspectives and different areas of expertise involved, can take 20 or 30 professional staff one or even two weeks in full time sessions, to reach initial agreement on what are sometimes 30 or 40 indicators. In some cases baseline data are available immediately, and that is one important criterion in choosing between what may be equally valid indicators.

But in many cases, the data collection must be assigned to the partner agencies in the field, who know where the information is, and how to get it. All of this means that a second round of discussions must be undertaken, to discard those indicators for which baseline data are unavailable, or just too difficult to collect and agree on new indicators. And, as the GAC guide quite correctly notes:

"The process of identifying and formulating indicators may lead you to adjust your outcome and output statements. Ensure any changes made to these statements in the performance measurement framework are reflected in the logic model." [p. 81]

The partners, meanwhile, have their existing work to continue with – and rarely see baseline data collection as their most important operational priority, given the political and institutional realities they face in doing their normal work.

I have participated in several design and inception missions, and I cannot remember when baseline data for all indicators were actually collected before the project commenced. And at mid-term in many projects it is not unusual for an audit of the indicators by a monitor to find that 30-40% of the indicators may not have baseline data, even after two or three years of project operation.

All of this could be avoided if more money and more time – up to six months perhaps – were allocated to the inception period, with an emphasis on establishing a workable monitoring and evaluation structure, and actually funding baseline data collection. That means that when a donor agency emphasizes participatory development of indicators, during an inception period, it should be prepared to provide the resources of time and money necessary to make this practical.

2. Limiting the Logic Model to three levels

The GAC logic model has three results levels – for short-term, medium-term and very long-term changes.

This is about standard for most agencies. But, of course, only two of these levels are actually operational and susceptible to direct intervention during the life of the project – Immediate Outcomes in the short term (1-3 years on a 5-year project) and Intermediate Outcomes, which should be achieved by the end of the project. The Ultimate Outcome level is the result to which the project, along with a host of other external actors, including the national government and other donors, may be contributing.

In real life, a Logic Model which actually reflects the series of interventions, from changes in understanding, which are necessary for a change in attitudes, to changes in decisions or policies and changes in behaviour or professional practice, will go through a minimum of 4 to 5 or even more stages where needs assessments, and training of trainers or researchers lie at the beginning of the process, before we get to field implementation of new policies or innovations.

I have worked with partners in the field during project design where, during the theory of change analysis, up to 8 different levels were identified – with assumptions, interventions and purported cause-and-effect links between these levels – before ever getting to the ultimate long-term result. It is impractical, the donors would argue, to have even a 5-level Logic Model – and this would indeed require extra work on indicators. But while the Global Affairs RBM guide does give a nod, on page 48, to the idea of “nested logic models” – something I have worked on with partners – these can be more complicated to present and to understand than a 4 or 5-layer Logic Model.

Some partners have decided to maintain their own more detailed, multi-level logic models, and present a simplified version to the donors, because the whole purpose of these tools is not primarily reporting to donors – it is to help managers determine what interventions are working, and what changes are needed. That is why the process is called Results-Based Management, and not just results-based reporting. Having produced a more detailed, informal Logic Model, these partners can, when an evaluator, a Minister, or a new donor representative has difficulty seeing how a simple two-layer Logic Model can actually attribute results to interventions, produce the real Logic Model and explain the relationships.

It is unlikely that any donor will agree to a 4 or 5-level Logic Model, but it would be useful, as this Guide will be revised, to include a section illustrating the process of nesting Logic Models.

The Bottom Line

This new Global Affairs Canada Results-Based Management guide is an absolutely necessary tool and reference for anyone working on Canadian aid projects – and it is a practical, very useful resource for anyone who wants clarity on the process of results-based project design. I will keep the old 2001 PIP Guide on hand, however, for its still useful detail, and user-friendly format, as a complement to the new RBM guide.

Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks. For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website.

About Me

Greg Armstrong brings what we know about how adults learn to helping international development workers use Results-Based Management in their work. If it is done right, it can be enjoyable, and productive, helping us explain our work to others.