This blog is intended as a home for some musings about M&E, the challenges I face as an evaluator, and the work that I do in the field of M&E. Often what I post here is in response to a particularly thought-provoking conversation or piece of reading. This is my space to "Pause and Reflect".

Wednesday, April 25, 2007

You know how you always come back with a stack of stuff to read when you've attended an evaluation conference? In most cases the material just gets added to my ever-growing "to read" pile. It is only once I start searching for something, or decide to spring-clean, that I actually sit down and read some of it. This morning I came across a copy of UNICEF / CEE/CIS and IPEN's New Trends in Evaluation.

What a delightfully simple, straightforward publication - yet it packs so much relevant information between its two covers. I wish I had remembered it last week when I lectured to students at UJ. Before I could get on with the lecture on Participatory M&E, I first had to explain how M&E is similar to, and different from, Social Impact Assessments (in the sense of ex-ante, Environmental Impact Assessment-type assessments). I think it would have been a very handy introductory source to have.

The table of contents looks as follows:

1. Why Evaluate?
- The evolution of the evaluation function
- The status of the evaluation function worldwide
- The importance of Evaluation Associations and Networks
- The oversight and M&E function

2. How to Evaluate?
- Evaluation culture: a new approach to learning and change
- Democracy and Evaluation
- Democratic Approach to Evaluation

3. Programme Evaluation Development in the CEE/CIS

But what is really useful is the Annexures:

Annex 1: Internet Based Discussion Groups Relevant to Evaluation
Annex 2: Internet Websites Relevant to Evaluation
Annex 3: Evaluation Training and Reference Sources Available Online
Annex 4-1: UNEG Standards for Evaluation in the UN System
Annex 4-2: UNEG Norms for Evaluation in the UN System
Annex 5: What goes into a Terms of Reference; UNICEF Evaluation Technical Notes, Issue No. 2

The good thing about this publication is that you can download it free of charge at

http://www.unicef.org/ceecis/New_trends_Dev_EValuation.pdf

An introductory blurb and a presentation are also available from the IOCE website.

Tuesday, April 10, 2007

Based on some of the ideas in previous blog entries, I gave the following presentation at the recent SAMEA conference.

Setting Indicators and Targets for Evaluation of Education Initiatives

Introduction

Good Evaluation Indicators and Targets are usually an important part of a robust Monitoring and Evaluation system.

Although evaluation indicators are usually considered important, not all evaluations have to make use of a set of pre-determined indicators and targets.

The most significant change (MSC) technique, for example, collects stories of significant change amongst the beneficiaries of a programme, and only after the fact uses a team of people to determine which of these stories represent the most significant change and real impact.

You have to include the story around the indicators in your evaluation reports in order to learn from the findings.

What do we mean?

The definition of an Indicator is: “A qualitative or quantitative reflection of a specific dimension of programme performance that is used to demonstrate performance / change”

It is distinguished from a Target which: “Specifies the milestones / benchmarks or extent to which the programme results must be achieved”

And it is also different from a Measure, which is: “The Tool / Protocol / Instrument / Gauge you use to assess performance”

Types of Indicators

The reason for using indicators is to feel the pulse of a project as it moves towards meeting its objectives, or to see the extent to which they have been achieved. There are different types of indicators:

Risk/enabling indicators – external factors that contribute to a project’s success or failure. They include socio-economic and environmental factors, the operation and functioning of institutions, the legal system and socio-cultural practices.

Input indicators – also called ‘resource’ indicators, they relate to the resources devoted to a project or programme. Whilst they can flag potential challenges, they cannot, on their own determine whether a project will be a success or not.

Process indicators – also called ‘throughput’ or ‘activity’ indicators. They reflect delivery of resources devoted to a programme or project on an ongoing basis. They are the best indicators of implementation and are used for project monitoring.

Output indicators – indicate whether activities have taken place by considering the outputs from those activities.

Impact indicators – concern the effectiveness, usually long-term, of a programme or project, as judged by measurable achievements in improving the quality of life of beneficiaries or other similar impact-level results.

Good Indicators

Good Performance Indicators should be

Direct (Does it measure Intended Result?)

Objective (Is it unambiguous?)

Adequate (Are you measuring enough?)

Quantitative (Numerical comparisons are less open to interpretation)

Disaggregated (Split up by gender, age, location etc.)

Practical (Can you measure it timeously and at reasonable cost?)

Reliable (How confidently can you make decisions about it?) (USAID, 1996)

SMART Indicators

Most people have also heard about SMART indicators:

Specific

Measurable

Action Oriented

Realistic

Timed

How we use indicators

For many of the initiatives whose M&E systems we help to plan, we work with the managers to set indicators that they understand and can use.

Although the issue of data availability and data quality is usually a big concern, it is often the indicators and targets that are set that could make or break an evaluation.

Case Study

Implementers of a teacher training initiative want to know if their project is making a difference to the maths and science performance of learners.

Pitfalls

Alignment between Indicators & Targets (If the indicator says something about a number, then the target must also be couched in terms of a number, and not a percentage)

Averaging out things that do not belong together (e.g. maths and science scores) does not make sense at all.

Not disaggregating enough (Are you interested in all learners, or is it important to disaggregate your data by age group, gender, or educator?)

Assuming that all targets should be about an increase (Sometimes a trend in the opposite direction exists, and the expectation is that your programme will only mitigate its effects)

Assuming that an increase from 20% to 50% is the same as an increase from 50% to 80%. (Psychometrists have used standardised gain statistics for a very long time. It is interesting that we don’t see more of them in our programmes.)
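One well-known standardised gain statistic is the normalized gain used in physics education research, which expresses an increase as a fraction of the increase that was still possible. The sketch below (the function name and the scores are my own, purely for illustration) shows why the two 30-point increases above are not equivalent:

```python
def normalized_gain(pre: float, post: float) -> float:
    """Gain achieved, as a fraction of the maximum gain still possible."""
    return (post - pre) / (100.0 - pre)

# Both cases show a 30-point raw increase, but the normalized gains differ:
print(normalized_gain(20, 50))  # 0.375 (30 of a possible 80 points)
print(normalized_gain(50, 80))  # 0.6   (30 of a possible 50 points)
```

Moving from 50% to 80% closes a much larger share of the remaining gap, so treating the two increases as interchangeable targets understates one achievement and overstates the other.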

Ignoring the statistics you will use in analysis (In some cases you are working with a sample and averages. An average increase might look like an increase, but when you test for statistical significance it may turn out not to be one)
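A toy sketch of this point, using invented pre/post scores for five learners: the average rises by 2 points, but a paired t-statistic (computed here by hand with the standard library) stays well below the critical value, so the "increase" could easily be sampling noise.

```python
import math
import statistics

def paired_t_statistic(pre, post):
    """t = mean of the paired differences / standard error of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = statistics.mean(diffs)
    se_d = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean_d / se_d

# Invented scores: the average goes up by 2 points, but not every learner improves.
pre = [40, 55, 60, 45, 50]
post = [45, 50, 65, 48, 52]

t = paired_t_statistic(pre, post)
print(round(t, 2))  # about 1.08 -- well below the ~2.78 critical value for df=4
```

If the target had simply been "average score increases", this result would look like success; stating up front which statistical test will be applied keeps the target honest.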

Setting indicators that require two measurements where one would be enough (Are you interested in an average increase, or just in the % of people who meet some minimum standard?)

Ignoring other research done on the topic (If a small effect size is generally reported for interventions of these kinds, isn’t an increase of 30% over baseline a little ambitious?)

If you don’t have other research on the topic, it should be allowable to adjust the indicators.

Setting an indicator and target that assume direct causality between the project activity and the anticipated outcome (Even if you have brilliant teachers, how can learners perform if they have nowhere to do homework, school discipline is non-existent, and they have accumulated 10 years of conceptual deficits in their education?)

Ignoring relevance, efficiency, sustainability, and equity considerations (Is educator training really going to solve the most pressing need? If your programme makes a difference, is it at the same cost as training an astronaut? What will happen if the trained educator leaves? Does the educator training benefit rural learners in the same way in which it would benefit urban learners?)

Ways to address the pitfalls

Do a mock data exercise to see how your indicator and target could play out.

This will help you think through the data sources, the statistics, and the meaning of the indicator
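As a hypothetical illustration of such a mock data exercise (the indicator, the 50% cutoff, and the score distributions are all invented), one can generate plausible baseline and follow-up scores and see how the proposed indicator actually behaves before committing to a target:

```python
import random

random.seed(1)  # reproducible mock data

# Hypothetical indicator: "% of learners scoring at least 50% in maths".
# Generate mock scores for 200 learners: baseline around 45%, with a
# modest and noisy improvement at follow-up.
baseline = [random.gauss(45, 15) for _ in range(200)]
followup = [score + random.gauss(5, 10) for score in baseline]

def pct_above(scores, cutoff=50):
    """Percentage of scores at or above the cutoff."""
    return 100 * sum(s >= cutoff for s in scores) / len(scores)

print(f"Baseline:  {pct_above(baseline):.1f}% at or above 50%")
print(f"Follow-up: {pct_above(followup):.1f}% at or above 50%")
```

Playing with the assumed spread and improvement quickly shows how sensitive the indicator is to where the cutoff sits relative to the bulk of the scores, which is exactly the kind of insight the mock exercise is meant to surface.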

Read extensively about similar projects to determine what the usual effect size is.

When you do your problem analysis, be sure to include other possible contributing factors, and don’t try to attribute change if it is not justifiable.

Look at examples of other indicators for similar programmes

Keep at it and work with someone who would be able to check your proposed indicators with a fresh eye.