Monitoring & Evaluation System "Learning to Improve": Making evidence work for development


Contents • Rationale of the M&E • Users and needs • Principles of the M&E • Units of analysis and dimensions of study • Elements of the M&E • Products of the M&E • M&E share of responsibilities • M&E constraints • M&E linkages: KM, decision making and learning

Rationale: Why should the MDGF have an M&E system? • It is a requirement included in the legal agreement between the donor (Spain) and UNDP • It is an obligation, included in all signed joint programs • It is necessary: if we want to scale programs up into policies and spread solutions that achieve the MDGs at the global level • It is useful: it is part of the program management cycle and the best way to measure progress, detect and correct problems, improve performance, and learn at the local and global levels

What kind of principles should the MDGF’s M&E system incorporate? • Consistency with UNEG and DAC/OECD standards • Orientation toward a balanced mix of learning and accountability purposes • Evidence-based: consistent data, information, or knowledge to support the judgments and conclusions of monitoring and evaluation

What kind of principles should the MDGF’s M&E system incorporate? • Built on an aggregation scheme: elements of M&E (indicators, evaluations, etc.) at lower levels add up into higher levels of inquiry • Measure change (delta), describe, analyze, and understand the object of study (joint programs + countries + windows + MDGs) and use the results to improve program and policy performance
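The aggregation scheme above can be illustrated with a minimal sketch: indicator values recorded at the joint-program (JP) level roll up into country-level and thematic-window-level totals. All names, the sample records, and the indicator values here are illustrative assumptions, not actual MDGF data.

```python
from collections import defaultdict

# Each record: (joint_program, country, window, indicator, value).
# Hypothetical sample data for illustration only.
records = [
    ("JP-1", "Country A", "Environment", "hectares_reforested", 120),
    ("JP-2", "Country A", "Gender", "women_trained", 300),
    ("JP-3", "Country B", "Environment", "hectares_reforested", 80),
]

def aggregate(records, level):
    """Sum indicator values at the requested level of inquiry.

    level indexes the record tuple: 0 = joint program,
    1 = country, 2 = thematic window.
    """
    totals = defaultdict(int)
    for rec in records:
        key = (rec[level], rec[3])  # (unit of analysis, indicator)
        totals[key] += rec[4]
    return dict(totals)

# Window-level view: JP results add up within each thematic window.
by_window = aggregate(records, level=2)
print(by_window[("Environment", "hectares_reforested")])  # 200
```

The same records thus answer questions at any level of inquiry without re-collecting data, which is the point of building the system on one consistent aggregation scheme.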

What elements comprise the MDGF M&E system? • Monitoring indicators: to measure progress and trends in the short and medium term at each level of the results chain (inputs, activities, outputs, outcomes) • Field visits: to monitor joint programs in depth, prepare and manage evaluations, disseminate results, and provide feedback on recommendations • Evaluations: to review programs and assess the worth of the object of study, at joint-program and country level, in thematic windows, and for the MDGF as a whole • Meta-evaluations: to review the quality of the evaluations conducted (joint program + country) and produce robust evidence at the window level, linking this evidence to MDG achievement • Desk reviews and data collection & analysis: drawing on a variety of sources to contribute information and knowledge to the M&E + KM system

What constraints would we face in such an enterprise? Budget, time, data, and political constraints: • What level of involvement and what workload can the staff of the joint programs take on? • Organizational issues: who is responsible for which part of the system? • Joint programs, for real? Do joint programs have the coordination mechanisms needed to allow for joint monitoring and evaluation? • Will the users of the M&E system be willing to collaborate in implementing it? • Is data available at a reasonable cost and on time? • Will we have the political support to implement it?

M&E linkages: KM, decision making and learning • M&E is an enormous source of knowledge: it will generate a great deal of valuable explicit and tacit knowledge, as well as organizational and general knowledge, which must be delivered at the right moment, to the right people, in the right format for the purpose at hand. • The KM system can also become a source of data, information, and knowledge for the M&E system; the relationship is a reciprocal, two-way one. • The ultimate goal of evaluation is to recommend actions that improve decision making and the performance of programs and policies, so all evaluations should focus on the utilization of their findings and recommendations.