
Module 4: Emerging Practices in International Development Evaluation

Greetings, I am Stewart Donaldson at the Claremont Evaluation Center in Claremont, California, USA. Welcome to the fourth module in our series on the Introduction to the Profession of Evaluation. In this module we will be covering Emerging Practices in International Development Evaluation: An Introduction. In an earlier module I talked with you about how there are some specialties within the evaluation field, and one of those areas is international development evaluation.

As an overview of this module, I will first talk about international development, the need for high quality evaluation, and the developing country lens, and then turn to evaluating organizational performance, capacity development, policy influence, networks and partnerships, coalitions, sustainable development, and innovation; finally, we will talk about further reading and e-learning. This module is based on a recent volume that focuses on emerging practices, and that volume will be recommended as follow-up reading to this basic introduction.

So international development is a broad concept concerning the development of a greater quality of life for humans on an international scale. There are many different projects and interventions throughout the world at any given time that fall under this broad concept of international development, and we believe there is a great need for high quality evaluation of these types of projects, programs, and interventions. Evaluation is important for international development projects because they often have ambitious outcomes that are not easy to measure and evaluate. When we move into this area of international development evaluation, there are some unique challenges that we face. In our volume, Zenda Ofir and Shiva Kumar have offered a framework that helps us think about doing this kind of evaluation work from a developing country lens. Now, when we are evaluating development projects, there are a few things that we often encounter. First, they are much more complicated, and in developing country contexts, things are much more unpredictable. Often we are working on projects where there are deep cultural differences, and this can challenge the evaluation. Oftentimes there are significant capacity and other constraints that make evaluation designs, methods, and procedures quite challenging. Finally, there are privilege and power differentials that must be acknowledged and addressed in this form of evaluation. Ofir and Kumar give us a range of recommendations in their chapter, drawn from their framework, for how to approach evaluation in developing countries. For example, they suggest that systems approaches can often be very valuable for this kind of evaluation. They also raise the issue that evaluation in this setting should be culturally responsive and take culture into consideration as we design and conduct evaluation projects. Evaluators have to think about how they might expose disempowerment and how to do evaluation in a way that empowers the individuals being served by the interventions and the evaluation process. We also need to think about how we can address the constraints that often exist in an international development project. Our goal is to develop respectful and useful evaluation practices. So a unique feature of this area of evaluation is that we often need to realize that the conditions in which we are evaluating can be significantly different in a developing country than in a more developed, modern country.

If we look at the kinds of practices that international development evaluators engage in, we come to a set of areas about which we often need detailed information in order to provide good, high quality evaluation. One area that is described in great detail in the volume is called “Evaluating Organizational Performance”; Charles Lusthaus and Katrina Rojas have given us a very thorough framework for looking at a systematic process for obtaining valid information about the performance of an organization and the factors that affect that performance. So oftentimes, evaluating the organizations involved in carrying out the projects, programs, and interventions is critical in international development evaluation.

(The authors also describe several challenges associated with conducting a successful organizational performance assessment, including: identifying and developing the appropriate methodologies for varying contexts; speaking truth to power; gathering and using high quality organizational data; senior managers' general lack of understanding about the role and benefit of evaluation; determining who is involved in making the evaluative judgment on the data collected; a lack of standards of performance across organizational sectors; and the common tradeoff between effectiveness and efficiency, where an organization may be effective but inefficient or vice versa. The authors also describe possible solutions for conducting a successful organizational performance assessment. First, it is important to find out what works, develop it, and scale it, while also considering the contextual factors that contributed to the program's success. You should also consider the frame of reference of people in the developing context and the values underlying cultural differences in different societies, and recognize that methodologies bring with them values, norms, theories, and belief systems. You should build and develop the organization's capacity for data collection and self-reflection. And you should also understand how things operate at the institutional level in order to assess and develop capacity.)

Evaluating capacity development is another area that is often very important in this type of evaluation work where we are evaluating the processes of change that both intentionally and indirectly contribute to the emergence of capacity over time. So, Peter Morgan gives us an approach for thinking about how to evaluate capacity in developing countries and international development evaluation.

(The author describes several challenges associated with evaluating capacity development. It often takes time for stakeholders to reach agreement on what evaluation capacity development is and what they are hoping to achieve from it. Many miss the opportunity to think about evaluation capacity building explicitly; however, it is often considered implicitly. Capacity development has various definitions that are often rooted in varying beliefs, ideologies, and paradigms. There is a high aversion to conducting capacity development because it is seen as risky in developing areas, due to its lack of a clear definition and measurement. And capacity development is often imposed from the outside. The author also provides several possible solutions for evaluating capacity development. First, there should be a common language around the term to help stakeholders better understand what it means, and hopefully to lead to a better commitment to the idea of capacity development. Capacity development's political and value-laden definition should be recognized early on in any capacity development initiative, and the process of determining what that definition is, is helpful in itself, because it creates a process in which capacity development has to be operationalized and participants have to think about how to measure it. Organizations' values and perspectives should drive the definition of what capacity development is, and there should be a balance of simplicity and complexity when measuring capacity development: the measurement should be complex enough to capture contextual factors, but simple enough to communicate its outcomes to the outside world. And the use of self-reflection as a tool to capture capacity development may be a method of measuring how it has changed stakeholders' thinking.)

Fred Carden and Colleen Duggan give us a chapter and an approach to evaluating policy influence, where we evaluate direct policy change that is guided by research and evaluation findings. In their chapter, they discuss a range of issues particularly focused on developing countries and international development evaluation.

(The authors also describe several challenges with evaluating policy influence. For example, there can be hindering factors that obscure the evaluator's ability to make a direct connection between the research findings and immediate policy change. Some of these hindering factors can include policy makers' ability to apply the research, the nature of government or tight government control, economic conditions, the stability of decision-making institutions, or countries in transition. There can also be competing agendas from different interest groups, who can distort the researchers' message. And you can have issues around problem identification, which is the process of identifying the appropriate or relevant problems the policy change would address. The authors also describe several possible solutions for evaluating policy influence. For example, the role of the political process and the role of the evaluator should be clearly described and articulated, along with the associated limitations of the evaluation. The presence of an advocacy group could help policy influence efforts. The policy influence model that the evaluator uses should include varying approaches that respond to varying contexts, for example how a researcher would approach a policy influence issue in an election year. An evaluator should also consider the time scale needed for change, how to utilize policy influence frameworks when working with grantees, ensuring that the research products and influence strategies are of high quality, learning from evaluations of successful development efforts and the factors and conditions that led to their supposed success, and thinking about power mapping as a strategy for understanding influence pathways and the interests of various decision makers.)

Heather Creech talks about some of the latest practices in evaluating networks and partnerships. And this involves evaluating interorganizational relationships often described as networks or partnerships.

(The author also describes several challenges associated with evaluating networks and partnerships. First, there need to be better evaluations of the governance structures associated with interorganizational relationships, because this is where you will find issues related to leadership and ownership. There can be assumptions about networks that may not hold across cultures; for example, there can be different definitions of what a good coalition member might look like. There are real transaction costs to forming networks, and evaluators should be aware of these costs. And finally, there are no common bases or standards to measure how well networks are working. The author also describes several possible solutions for evaluating interorganizational relationships. For the evaluation, there needs to be clarity on the unit of analysis used to describe an interorganizational relationship; these can include individually driven interorganizational relationships, organizationally driven interorganizational relationships, and mixed interorganizational relationships. Evaluation of interorganizational relationships should attempt to surface the values and beliefs of those involved, to address the power structure within the networks, to assess the ability of the networks to function given the resources and infrastructure available, and to recognize the contextual factors that contribute to the success or failure of the networks. Relying on the native knowledge of the participants could be an effective way of strengthening the coalitions in the long term. And governance is one of the key factors that needs to be evaluated as a measure of network success, because it can determine how functional the network will be in the long term.)

Jared Raynor contributes a chapter on evaluating coalitions, and coalitions are often critical in effective international development work. So here what we are talking about is evaluating an organization or organizations whose members commit to an agreed-upon purpose and shared decision making to influence an external institution or target.

(The author also describes several possible challenges associated with evaluating coalitions. First, it can be difficult to identify replicable components of the coalition. There are often competing donor interests regarding the evaluation questions or the ultimate outcome of the coalition. There can be a high level of uncertainty, and the rapidly changing nature of coalitions can change the original outcomes. Power differentials can lead to distortions in data collection or in coalition functionality. It can be difficult to understand the unit of analysis involved in evaluating coalitions, whether we are evaluating individuals or networks. It can be difficult to evaluate the sustainability of coalitions over time. There is also the potential for unintended positive or negative consequences of coalitions. And finally, making the evaluation useful to the stakeholders can be difficult when evaluating coalitions. The author also provides several possible solutions for evaluating coalitions. First, implement methodologies that are credible to the various stakeholder groups. Second, specify early on in the evaluation process the intended outcomes of the coalition and the capacity of its members to achieve these outcomes. Request that donors provide space to the grantees so that they can begin to tap into their own abilities and skills. And finally, focus on governance: develop decision-making rules and evaluate their functioning within the coalition.)

Steve Bass and Alastair Bradstock offer a chapter on evaluating sustainable development: evaluating development that meets the needs of the present without compromising the ability of future generations to meet their own needs. Many international development projects today aspire to sustainable development, and so evaluating this dimension of these projects becomes very important to ensuring that funders and interventions realize their goals.

(The authors also identify several challenges associated with conducting evaluation of sustainable development initiatives. First, it can be difficult to find a common definition of sustainable development. There can often be a lack of baseline data and indicators of sustainability. Sustainable development often occurs on a long-term timeline and at multiple levels (the local level, the national level, the global level), and it can be quite complex. It can be difficult to access or influence decision makers who can support sustainable development efforts. There is also a lack of a single sustainable development evaluation framework. It can be difficult to determine who owns the sustainable development challenge, whether it is governments or communities, and this can lead to issues about determining who is responsible for the sustainable development initiative. Finally, there can be an issue with decision makers being influenced by the media and the virtual sphere rather than by credible evidence and evaluation results. The authors also present several possible solutions for evaluating sustainable development, including finding strong champions, because ownership is needed by a global institution that has the ability to make and enforce decisions. It is important to bring sustainable development awareness to the governing level and the organizational level. Evaluation should also consider the unintended consequences associated with sustainable development.)

Steve Rochlin and Sasha Radovich give us a chapter on evaluating innovation for development, that is, evaluating the change or introduction of new elements, a process, a product, or a market, leveraged or harnessed to produce change that benefits the poor.

(The authors also describe several challenges associated with evaluating innovations. First, traditional evaluation measures and approaches might not capture innovation when evaluating an organization or an intervention process. The evaluation must tread lightly on the grantor-grantee relationship because of the inherent risks involved in innovation. It can be difficult to determine the evaluation focus: is it on the innovation itself, or is it on the difference that the innovation makes for development? Intellectual property protection often comes into play, and it can affect motivations for innovation and the ability to evaluate it. The social dimension is often ignored by innovation; there is often too much of a focus on technology. And there can be unintended positive or negative consequences of innovation. The authors also describe several possible solutions for evaluating innovation successfully. First, it is important to divide the evaluation focus into two parts or phases, the first being the innovation itself and the second being how that innovation helps to make a difference. It is important to understand the power agendas and the context of the funders so that you can successfully navigate the grantee-donor relationship when evaluating innovations, and also to match the purpose with the type of evaluation; for example, for an evaluation focused on the use of innovation, it might be better to use a more formative or developmental style of evaluation. It is also important to utilize qualitative methodologies in order to get stakeholder voice into the evaluation. And finally, encourage donors to create space for innovations, because this can enable existing environments to facilitate the flow of innovations; for example, some communities of farmers could teach other farmers their techniques.)

These emerging areas and the emerging evaluation practices associated with them are covered in a new volume sponsored by the Rockefeller Foundation on emerging practices in international development evaluation. More detail on how these different areas of international development evaluation are being addressed today, including the methods, the designs, the evaluation challenges, and recommended ways of overcoming those challenges, is provided in this new volume. In addition to the volume, the My M&E website contains a range of free online trainings related to issues in international development evaluation. In these webinars and related trainings you have the opportunity to ask questions and receive answers in real time from international experts if you attend one of the webinars or online trainings live; however, all or most of the webinars conducted are also available to view at your own convenience online. So by going to the My M&E website you will find a range of topics that you can select and view, which will give you much more training beyond this basic introduction to international development evaluation. Finally, we have developed a series of more in-depth trainings building on this basic introduction, and through the My M&E website you can sign up for an e-learning course; by going through the course, interacting with the material, and answering questions, when you attain a level of mastery you will be eligible for a free e-learning certificate in international development evaluation. So I hope that you will seek out additional information about the basics of international development evaluation, and I hope this basic introduction has provided you with a general understanding of the kinds of issues that are addressed in international development evaluation. Thank you.
