Provide a critical comparison and analysis of two of the five paradigms for monitoring, evaluation and impact assessment, in order to demonstrate an understanding of which paradigms are most appropriate for conducting MEIA in which development contexts

Must critically analyse the perceptions of the nature of evidence, and how these impact upon the MEIA of development projects

Examine critical debates and key issues surrounding monitoring, evaluation and impact assessment in international and community development project cycles, to build appropriate professional skills and an understanding of how to design and implement appropriate monitoring, evaluation and impact assessment strategies

Pragmatic Evaluations

Indigenous Evaluations

Start with the importance of Evaluations

State the intention of this essay

Why Impact Evaluations? Why Indigenous Evaluations - as a very specific form of evaluation?

At the beginning of the 21st century, world leaders gathered to discuss what the development goals of this century would be. These discussions led to the creation of the Millennium Development Goals. These goals targeted areas including daily income, universal education, gender equality, maternal health and environmental sustainability, just to name a few.

Most of these goals had a set deadline of 2015, which passed last year. Though the most recent report acknowledges significant progress towards achieving the goals, it also acknowledges shortfalls and uneven success rates (United Nations, 2015). As we discuss the topic of human development, and the investment of time, funding and personnel it requires, the significance of project evaluation becomes more apparent. What works, what hasn't worked and what needs to be adjusted in order to deliver on outcomes are all concerns of a good evaluation. However, approaches to project evaluation differ greatly. What is central to all evaluations are the five main attributes put together by the Joint Committee on Standards for Educational Evaluation (Mertens, 2012): Utility, Feasibility, Propriety, Accuracy and the Meta-evaluation.

This essay examines Pragmatic (or Impact) and Indigenous approaches to monitoring, evaluation and impact assessment, as well as the contexts in which each approach is most appropriate. These two approaches were chosen because each has its own strengths and shortcomings, which will be explained further.

Pragmatic Evaluation

Your role as the evaluator
Consultation with stakeholders on what the problems, goals and objectives should be.

Philosophical and theoretical lens
Whilst there exists a single reality, all individuals have their own unique interpretation of that reality. Gain knowledge in pursuit of desired ends, as influenced by the evaluators' and contextual values and politics.

The evaluand and its context

Method (design, research purposes and questions, stakeholders and participants, data collection)

Match methods to specific questions and purposes of research; quantitative and qualitative methodologies can be used, or working back and forth between both to exploit the power of ‘triangulation’ of methods and their respective findings

 Management and budget (reports and utilisation)

Reports should be presented in a form that speaks to usefulness and effectiveness.
Criteria:
How useful is it?
Can it be implemented in this setting?
Is it humane, ethical, moral, proper, legal and professional?
Is it dependable, precise, truthful and trustworthy?
Do you assure and control the quality of the evaluation research?

Indigenous Evaluation

Its central tenet is the importance of relationships: with ourselves, with other people, and with everything on earth and in the universe.

 Your role as the evaluator Philosophical and theoretical lensKnowledge is relational. You are answerable to all your relations when you are doing research.foundationally (i.e. ontologically) based on doing and being relationally, rather than as an ‘abstract’ theory imposed on others or used for indoctrination. The evaluand and its context

Though the earliest evaluation methods used a positivist methodology, applying a scientific gaze to community development, several evaluation practitioners have since recognised its shortcomings. The Pragmatic approach identified significant challenges with positivistic evaluation styles. More often than not, positivistic approaches do not involve stakeholders, due to their pursuit of an objective result. This can entail significant resource costs from hiring external staff and contractors to conduct such evaluations (Parry, Platt and Gnich, 2001). Furthermore, Cisneros-Cohermour (2005) identified significant limitations in the apparent 'objectivity' of teaching research done in the US: researchers were often based at the university where they taught, and their results supported their positions either directly or indirectly. Perhaps one of the greatest issues with the first paradigm of evaluation theory was the lack of participation stakeholders had in the design and implementation of the evaluation (Gertler et al., 2011). This often resulted in such reports not being read by practitioners or leaders in the field.

Impact evaluators sought to rectify these shortcomings by approaching assessments from a different philosophical angle. Where positivistic methods pursued complete objectivity, pragmatic evaluators recognised that there were different perceptions of the same project, and thus sought to involve all stakeholders in the evaluation process (Shaker, 1990). Consequently, pragmatic evaluators are more concerned with the 'usefulness' of the data in assisting decision-makers in their role. Furthermore, the inclusion of participatory methods in the evaluation process provided a more empowering approach that was missing from the positivistic paradigm (Chambers, 2009). Impact evaluations typically use a mix of qualitative and quantitative methods to determine their results. Ultimately, the methodology is guided by the aims and objectives of the study and what is most likely to show effectiveness (or the lack thereof).

Like all approaches to evaluation, however, impact evaluation is not without its faults. In 2000, the Center for Global Development created a working group to identify the major problems with impact evaluations, as well as solutions for how the paradigm could be taken more seriously. This led to the report titled *When will we ever learn? Improving lives through impact evaluation*, which outlined some of the problems with impact evaluation reports thus far, such as a lack of adherence to the evaluation standards set by the Joint Committee, as well as a lack of rigour being exercised by evaluators (Center for Global Development, 2006). The work on the report led to the creation of the International Initiative for Impact Evaluation (3ie), an international body aimed at improving the quality of evaluations within the paradigm (Levine, 2016).

3ie adopted a position of providing grants to impact evaluators, as well as collating the resulting evaluations, conducting systematic reviews of the evidence and, finally, creating policy briefs for international bodies (3ieimpact.org, 2016). The body states that by understanding the impact of interventions, programmes and policies, and by developing scenarios of what would have happened in the absence of such interventions, it is able to provide better evidence. For instance, in a study of the effect of microfinance initiatives in Hyderabad, India, the research design adopted randomised controlled trials, comparing households where microfinance was taken up with areas where there was the potential for uptake but it was not pursued (Banerjee et al., 2015). This study, combined with several other evaluations of microfinance interventions, forms a policy brief for practitioners in the sector (3ieimpact.org, 2016).

The Evaluation Gap Working Group found that impact evaluations were most effective for new or expanding projects where effectiveness had not been established. Impact evaluations also had to be built into the design of the project in order to deliver on outcomes.

CIPP

The Context, Input, Process, Product (CIPP) Model is one of the more established approaches to impact evaluation. Designed by Daniel Stufflebeam in 1967, the CIPP model aims to involve all stakeholders in the evaluation process, identifying the project background (Context), its resourcing (Input), the processes by which the outcome will be delivered (Process) and, finally, the outcome itself (Product). Stufflebeam and Coryn (2014) believe the approach is most effective when the evaluator regularly interacts with the stakeholders, as this allows the evaluator to be regularly updated with new information while keeping the decision-making process informed. The CIPP model does not aim to prove that a program works, but rather to improve a program's approach. This form of evaluation was appropriate for determining the effectiveness of the Kaohsiung Suicide Prevention Center's efforts in reducing suicide, as well as for identifying where shortfalls needed to be addressed (Ho et al., 2010).

Utilization Focused Evaluation

Utilization-focused evaluations concentrate on the stakeholders who will 'use' the project's findings and act on them; identifying these users becomes the first task of the evaluator. Where CIPP provides a general guide to evaluation, the UFE approach accommodates whichever method meets the needs of the stakeholders (Mertens, 2012). This approach can be useful for identifying the priority users of a project and effectively addressing their needs for the evaluation (Stufflebeam, 2014).

Examine Evaluation techniques x 2
