owe a lot to M&E. The deep-rooted desire to measure progress, combined with the accountability required by the large sums of money involved in aid, explains in part the importance currently attached to M&E. But M&E is not only an auditing tool for management and aid donors. Newer approaches stress the process that the evaluation initiates and the process of negotiation that is sustained by stakeholders. The objective of M&E is closely linked with the sustainability objective. Institutional capacity building is a condition for integrating development efforts and for leaving lasting effects of interventions. Steps enhancing organizational competence, e.g. the learning process approach, are presented. Sustainability analysis and participatory monitoring and evaluation techniques are illustrations of indicator application. An illustration is given of guidelines for the evaluation of water and sanitation projects.

Participatory Management Tools

Monitoring and evaluation (M&E) are management tools for donor organizations. Evaluations are meant to be one of the central determinants of the direction of aid. They should enhance the incorporation of long-term experience and pave the way for adjustment to changing economic and political conditions. It is increasingly recognized that aid activities will be sustainable only if they are managed in and by the recipient country (Vilvy, 1993:35). Hence, M&E should be seen just as much as a tool for recipients. Based on the screening of a variety of evaluation reports, the OECD/DAC principles for technical cooperation and other DAC evaluation documents, three key areas for future M&E have been pointed out:

Further development of principles for the recipients' involvement in and control of aid. The evaluation must take into account those principles which emphasize that recipient involvement should amount to more than participation.
Recipients' significant influence, from planning via monitoring to evaluation, is a basic condition for sustainability.

Replacement of the traditional donor-controlled project by sector-focused support, with less directly measurable results but with greater development effect, is needed.

Further development of the sustainability concept and development of valid indicators of sustainability.

The call for attention to these areas comes at a time when documentation of participatory principles for M&E and their results is on the increase (e.g. Community Development Journal, 1988; Gravgaard, 1990; Swantz, 1992). Attention to the sector perspective is also reflected in sector evaluations which Danida has undertaken in recent years, one focusing on Drinking Water Projects in one country, India (1991).

Different Monitoring and Evaluation Approaches

Monitoring and evaluation are elements in the management of development interventions. The conventional method is progress reviews and the measurement of impact by neutral outsiders. This approach has been challenged by new M&E methods that include Fourth Generation
Evaluation (Guba and Lincoln, 1989), Utilization-Focused Evaluation (Patton, 1986 and 1990) and Participatory Evaluation Approaches (FAO/WCARRD, 1988 and FAO, 1989b; Feuerstein, 1988). In their own ways, the new evaluation methods challenge some of the principles on which evaluation has been based:

An assumed causal relationship between inputs, outputs and impacts, which is difficult to prove in real life.

That influential external factors can be controlled.

That projects are closed systems with time limits, when in reality development is a process.

Cost-benefit analysis, when "what would have happened without the project?" types of question cannot be answered.

Quantitative measures tell only partial truths.
All these methods are process-oriented; whereas fourth generation and utilization-focused evaluation are more concerned with establishing final conclusions, participatory evaluation is more concerned with adapting and adjusting a project while it is in progress, and on conditions set by the participants. Today all aid organizations are aware of the inherent weaknesses in conventional evaluation methods, made even more difficult when clear objectives have not always been formulated from the outset of a project or programme intervention. But new practices take time to establish and there are many obstacles. Often they challenge other fundamental principles, such as the project concept itself. To make basic changes at one end of the project cycle requires changes in other parts if this is to make sense. Hence the delay in a wider application of new practices. It is an important challenge to researchers and development workers alike to take part in the critical debate, e.g. in the study entitled Evaluation of Development Projects, Methods and Theories (Rebien, 1993), to which reference can only be made here. Figure 7.1 may give an indication of the basic difference between conventional and newer approaches, using the example of the fourth generation evaluation approach. Let it be stressed that today M&E is better characterized as a learning process than as a rigid auditing exercise. Participation by indigenous evaluators is also seen as absolutely vital by most aid organizations. Hence the distinction between conventional and alternative approaches may be misleading, but it is retained here for the sake of presenting the main points.
Figure 7.1 Comparison of the Traditional Positivist and the Fourth Generation Constructivist Approaches to Evaluation

Focus. Customarily, an AID project evaluation from a positivist viewpoint is driven by the Logical Framework that details the objectives and outcomes that the project is designed to
produce. From the constructivist perspective, the focus of the evaluation is on issues, on the concerns currently held by stakeholders. Therefore, the assessment is guided by the present situation more than by the original intent of the project designers. The here and now are paramount.

Data. Positivist evaluators search for the facts and aim to get at the truth. They want to know, in objective terms, what has happened in the project up to the date of assessment. Fourth generation evaluators are interested in the perceptions held by stakeholders, and they want to know what meaning people attach to events. An evaluator might present, for example, the fact that 100 individuals have been trained. The fourth generation evaluator would go beyond these data to determine what the number trained means to people. For example, is 100 trained good or bad? Could or should it have been more? Did the training get results? Would different training be better? How? Is training still the solution to current requirements? And so on.

Dependability of the Data. From the positivist viewpoint, evaluation is a science, and evaluators are required to concern themselves with standards of measurement, means of quantifying outcomes, and the reliability and validity of data. Evaluation science offers strategies for ensuring that the data are dependable. The fourth generation evaluator obtains certainty largely through redundancy of information: different sources see the same things the same way. If competing perspectives emerge in the process, then the evaluator knows that there is an issue that must be addressed by stakeholders.

Nature of the Evaluation Process. From the positivist point of view, an evaluation is a well-managed, carefully administered process of data collection, analysis and reporting. For the constructivist, evaluation is a political process. It is filled with negotiation and facilitation; it is full of surprises, and constant interaction among players in various settings is encouraged. The process is managed to raise and address issues.

Conclusion. A typical evaluation ends with conclusions regarding what has happened and makes recommendations for the future. The evaluation document itself is given great importance because it contains data, evidence and results. A fourth generation evaluation is more likely to be viewed as a snapshot in time, a process that must be ongoing if it is to be expected to improve the programme's operation. Conclusions and recommendations are presented as statements of understanding regarding potential opportunities, implementation barriers, and issues that must be further negotiated in order for the programme to take advantage of opportunities and reduce barriers to performance. The evaluation document itself is far less important than the process that the evaluation initiates and the process of negotiation that is sustained by stakeholders.

After Bryant 1991:8
Several public organizations and NGOs have developed conceptual and practical tools for involving users in M&E, but their application is, as yet, relatively limited. Participatory monitoring and evaluation, PME, is a tool for learning from experience, from success and from failure, and for doing better in the future. PME serves a dual purpose: (a) it is a management tool which enables people to improve their efficiency and effectiveness; (b) it is also an educational process in which participants increase their awareness and understanding of the factors which affect their situation, thereby increasing their control over the development process. Much of this is shared with conventional M&E. The participants tend to differ, however, with outsiders dominating conventional M&E and local stakeholders dominating PME.
Beneficiary Assessment
The involvement of the people whose situation the M&E is supposed to reflect has become the advocated approach in many development organizations, including some, like the World Bank, which represent a top-down culture (see Figure 7.1.1). The World Bank has recently developed an approach to beneficiary assessment (in our terminology, user assessment), which is "a systematic inquiry into people's values and behaviors in relation to a planned or ongoing intervention for social and economic change" (Salmen, 1992:1). By amplifying the voice of the people for whom development is intended, beneficiary assessment empowers these people to help themselves. This is the basic goal of the advocates of beneficiary assessment. In addition, beneficiary assessment is an instrument to create dialogue, and hence a common understanding between managers and beneficiaries that lasting development depends on the integral involvement of both parties. It remains to be seen whether a monolithic organization like the World Bank is prepared to turn its culture upside down to permit genuine beneficiary (or user) assessment.

Figure 7.1.1 Beneficiary Assessment as against Traditional Bank Culture
Beneficiary Assessment Appears to Run Counter to the Bank's Culture

Beneficiary assessment    Bank's culture
Inductive                 Deductive
Bottom-up                 Top-down
Socio-cultural            Economic
Qualitative               Quantitative
Process                   Impact
Grounded                  Abstract
Practical                 Theoretical

After Salmen 1992:21
That participatory evaluation has far-reaching methodological consequences is illustrated by PROWWESS (Figure 7.1.2).

Figure 7.1.2 Differences between Conventional Evaluation and Participatory Evaluation

Who
Conventional: External experts.
Participatory: Community people, project staff, facilitator.

What
Conventional: Predetermined indicators of success, principally cost and production outputs.
Participatory: People identify their own indicators of success, which may include production outputs.

How
Conventional: Focus on scientific objectivity; distancing of evaluators from other participants; uniform, complex procedures; delayed, limited access to results.
Participatory: Self-evaluation; simple methods adapted to local culture; open, immediate sharing of results through local involvement in evaluation processes.

When
Conventional: Usually upon completion; sometimes also mid-term.
Participatory: Merging of monitoring and evaluation; hence frequent small-scale evaluations.

Why
Conventional: Accountability, usually summative, to determine if funding continues.
Participatory: To empower local people to initiate, control and take corrective action.
Many M&E sections tend to support participatory M&E principles, but the methods to implement them are not well developed. Another confining dimension is the persistence of the project cycle, with relatively fixed procedures for the individual steps. PME anticipates participation at all stages of the project cycle. To introduce participation only at the M&E stages may be counterproductive unless the participants are selected according to substantial knowledge of the project.

Participatory Monitoring and Evaluation Procedures

The range of new monitoring and evaluation approaches, like other participatory methods, is not intended to replace the more traditional M&E methods. They can often make those methods which are useful more appropriate and effective. The newer approaches aim to make the methods suit the people and their situation, not vice versa. This is a strength in itself. Monitoring has been defined as "a surveillance system, used by those responsible for a project to see that everything goes as nearly as possible according to plan, and that resources are not wasted" (FAO 1990:8). It is a continuous feedback system, ongoing throughout the life of a project or programme, and involves the review of each activity at every level of implementation. Participatory monitoring involves the users of a project in measuring, recording, collecting, processing and communicating information to assist both project management personnel and group members in decision-making. Data collected while monitoring provide the basis for evaluation analysis, which concerns the assessment of the effects of the project on or for the intended users. These may include benefits
and negative effects in the short and in the long term, in the case of an evaluation carried out ex-post, after project completion. Negative results include environmental damage or effects on particular groups and individuals, e.g. increased work burdens for women, exploitation of labour, loss of status, and limitations in access to resources, judicial rights and independence. Figure 7.1.3 illustrates that PME can be seen as a process within a system which allows the users to continuously share in assessing their own progress, and periodically evaluate the process to learn from past mistakes. It has been stressed that genuine participation in monitoring and evaluation requires that the participants have already been involved at earlier stages, i.e. in decision-making and planning, in the implementation process, and in sharing the benefits. PME requires the involvement of people in several steps:

Deciding what areas to monitor and evaluate
Selecting indicators for M&E
Designing data collection systems
Collating and tabulating data
Analyzing the results
Using PME information
Figure 7.1.3 The PME cycle: planning, implementation and evaluation
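The collating and tabulating steps listed above can be given a concrete, if simplified, form. The following sketch is purely illustrative: the indicator names, targets and figures are hypothetical examples, not data from any actual project. It shows how community-selected indicators, recorded at each monitoring round, might be tabulated against agreed targets so that participants can judge progress themselves.

```python
# Illustrative sketch of participatory monitoring data handling.
# Indicator names, targets and values below are hypothetical examples.

# Indicators and targets agreed by the participants themselves
indicators = {
    "functioning_handpumps": 20,
    "households_using_latrines": 150,
}

# Simple records collected at each monitoring round
records = [
    {"round": 1, "functioning_handpumps": 12, "households_using_latrines": 90},
    {"round": 2, "functioning_handpumps": 15, "households_using_latrines": 120},
]

def tabulate(records, indicators):
    """Compare the latest monitoring round against the agreed targets."""
    latest = max(records, key=lambda r: r["round"])
    table = {}
    for name, target in indicators.items():
        value = latest[name]
        table[name] = (value, target, round(100 * value / target))
    return table

for name, (value, target, pct) in tabulate(records, indicators).items():
    print(f"{name}: {value}/{target} ({pct}% of target)")
```

The point of such a tool is not the arithmetic but who controls it: in a PME setting the indicators, targets and interpretation of the figures belong to the participants, and the tabulation merely makes their own progress visible to them.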
Figure 7.1.4 is Feuerstein's suggested list of steps in a participatory evaluation process. It partly overlaps with, and partly continues, the foregoing steps.

Figure 7.1.4 Steps in Participatory Evaluation
1. All those involved in a programme need to decide jointly to use a participatory approach.
2. Next they need to decide exactly what the objectives of the evaluation are. This is often harder than they think it will be.
3. When they have reached agreement on the evaluation objectives, it is time to elect a small group of evaluation coordinators to plan carefully and organize all the details of the evaluation.
4. Now is also the time to decide what methods will be best for attaining the evaluation objectives. The choice of method, such as analysis of programme records or use of a questionnaire, will also be influenced by the capabilities of the people involved, and by how much time and how many resources are available for the evaluation.
5. As these decisions are made, the written evaluation plan is formed. This plan shows why, how, when and where the evaluation will take place and who will be involved.
6. Next, the evaluation methods must be prepared and tested (for example, a questionnaire or a weighing scale may be needed). Selected programme participants will also need basic explanation of and training in interviewing, completing written or oral questionnaires, conducting various kinds of checks or examinations, etc. All programme participants will need explanations of the objectives and general methods to be used in the evaluation. The more they understand, the more they can participate in the entire evaluation process, wherever and whenever requested by the evaluation coordinators.
7. Having prepared and tested the evaluation methods, the next step is to use them to collect the facts and information required for the evaluation.
8. Then the information and data are analyzed by the programme participants. The major part of this work will probably be done by the evaluation coordinators.
9. The results of the analysis (or the evaluation findings) are then prepared in written, oral or visual form.
There are different ways of reporting and presenting the evaluation findings to the different groups connected with the programme. For example, a Ministry (or the programme funders) will usually need a written evaluation report, but community-level participants will be better served by oral or visual presentations.
5. Case work help may be only patchwork in some cases, but not so in many other cases. Individuals cannot be dissolved in the massiveness of the so-called masses. To render help in a mass situation, case work is not the answer.
6. Case work service may be prolonged in some cases, but it is not so in every case. It is possible to help many persons through short-term service. Very often case work help results in a saving of time. It has been shown that in some cases the social worker's timely intervention prevented wastage of the client's time at various organizations. (Effective Casework Practice, Fischer, J., 1978)
determination is a form of participation, as it entails decision-making by the client. The process of helping and being helped does not stop at the point of decision-making by the client, but goes much further in terms of plans pursued and action taken. According to the principle of participation, the client becomes the main actor in pursuing plans and taking action, whereas the Social Worker is only an enabler.

6. Confidentiality
The Social Worker is expected to maintain confidentiality regarding the information received from the client. During the process of Case Work, there are many things about which the client talks to the Social Worker. It is important that the information which the client gives and the statements she/he makes be treated seriously. It is necessary that they are not disclosed to others, except with the client's permission, where the situation warrants sharing of information with a third person, like the client's family members, another specialist and such others.

7. Controlled emotional involvement
The Social Worker initiates a relationship with the client characterized by acceptance and affirmation, devoid of hints of condemnation or fixing of blame. Such a relationship can hardly be mechanical. It has to be built up through emotional involvement on the part of the Social Worker. Emotional involvement is needed to the extent that the Social Worker can move to the emotional level of the client and view the situation as she/he sees it. Like an artist who captures on the canvas the feeling of the subject, the Social Worker should be able to capture on his/her own mental screen the feelings of the client, without allowing those feelings to affect adversely his/her thinking process. So it is also necessary that the Social Worker maintains a certain degree of detachment, side by side with an appropriate level of emotional involvement, in order that she/he may enable the client to view his/her problem objectively and to plan realistically.