By Sharon Weir and Payal Mulchandani
Monitoring and evaluation systems are part of a broader study of impact assessment. Impact assessment has been defined as “a systematic analysis of the lasting or significant changes – positive or negative, intended or not – in people’s lives brought about by a given action or a series of actions” (C. Roche, 1999).
This essay seeks to explain monitoring and evaluation and why they are a crucial aspect of any project or plan.
It highlights the reasons they have become so widely accepted, used and discussed in development planning. Monitoring and evaluation can bring information to the forefront and prompt changes in the existing system of delivery and planning to ensure results, but only if important considerations are made. There are many aspects involved in the evaluation and monitoring of a project, and none can be ignored if we are to ensure an effective and relevant evaluation. Deciding which elements should be given priority and consideration is a difficult choice when planning an evaluation.
The best way to do this is to pose a few basic questions about the evaluation: What is the main reason for the evaluation? Who has decided to do it? How are the evaluation and monitoring going to be implemented? How is data going to be collected and treated? Who are the stakeholders of the project? How is the research going to be carried out? Who is the target group and which is the target place of study? How long will the evaluation take? Who is going to submit the final draft of the evaluation? What does this assessment mean for the people involved? Does the study offer an insight into the processes of the project? Does the process tell any stories of success and failure? Has it made any conclusions and recommendations, and will it lead to an improvement in processes?
The reasons for evaluation in development projects can be many: increased competition among NGOs to secure donor funding, the need to prove accountability, institutional learning, demonstrating the impact and effectiveness of projects, improving future performance, making comparisons with other projects, measuring achievements and progress, critiquing efforts that do not lead to results, sharing experiences and increasing knowledge, and making observations about the project in order to work on its flaws.
There are broadly three main approaches to impact assessment (Hallam, 1998): the scientific approach, which generates quantitative measures of impact; the deductive/inductive approach, which is more anthropological and socio-economic in its methods; and participatory approaches, which gather the views of programme beneficiaries. Data collection during a study need not rely on only one method but can combine all of them, which can lead to the best possible conclusions and knowledge.
The data used in the preparation of an evaluation can be secondary or primary, but serious scrutiny is needed while collecting and using it. Secondary data has a reputation for being unreliable and for having serious methodological problems in its construction. The development project and target groups to be considered should be well defined, understood and studied before any monitoring system can start functioning in a particular plan.
Primary data, on the other hand, can also lead to false conclusions and biases and portray a picture different from reality when poor sampling methods are used. An OECD/DAC study on impact (Kruse et al, 1997) concluded that there was a lack of ‘firm and reliable evidence’ on the impact of NGO developmental projects and programmes, related to the ‘paucity of data and weakness of evaluation methodologies’.
It is necessary for the research data to be precise, accurate and reliable if an evaluation is to realise the project’s true impact.
The tools used for the collection of data can be questionnaires, interviews, focus group discussions, informal interactions, observation or the use of already published secondary data. These should be chosen according to what the evaluation and monitoring strive to find out.
There should be triangulation of data which “gives an acceptable degree of objectivity to the subjective perspectives” (Firestone, 1987).
The skills required by evaluators are another important aspect of an evaluation study, and there is an unresolved debate as to whether an internal or an external evaluator should be given the role of undertaking an evaluation.
An evaluator, whether internal or external, should be unbiased, possess the necessary skills for evaluation, have empathy and consideration for people and be dedicated to the task of bringing reality to the forefront. Both internal and external evaluators have vested interests: an internal evaluator may want to protect their job and get ‘praised’ for their achievements, while an external evaluator may want to maintain a reputation and keep the ‘top officials’ happy.
Either way, these interests and selfish motives should be kept to a minimum to ensure that the evaluation and monitoring of a project conform to their definition.
Quasi-experiments should be carried out in order to realise the real outcomes of a project. “Quasi-experiments seek to compare the outcomes of an intervention with a simulation of what the outcomes would have been, had there been no intervention” (D. Hulme, 2000). Precise definitions need to be made of a programme’s success indicators and of the other effects it may have in various situations in order to understand the programme’s impact.
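As a purely illustrative sketch of this comparative logic, the short Python snippet below works through one common quasi-experimental device, a simple difference-in-differences calculation, in which a comparison group stands in for the “no intervention” scenario. All figures and names (such as treatment_before and comparison_after) are invented for the example and do not come from any of the studies cited in this essay.

```python
# Illustrative sketch only: hypothetical household-income figures for a
# programme group and a comparison group, measured before and after an
# intervention. The comparison group simulates what would have happened
# in the absence of the intervention.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Hypothetical baseline and follow-up survey data (local currency units).
treatment_before = [410, 395, 430, 405, 420]
treatment_after = [520, 505, 545, 510, 530]
comparison_before = [400, 415, 390, 425, 410]
comparison_after = [445, 460, 430, 470, 455]

# Change observed in each group between the two survey rounds.
treatment_change = mean(treatment_after) - mean(treatment_before)
comparison_change = mean(comparison_after) - mean(comparison_before)

# Difference-in-differences: the estimated impact is the change in the
# programme group over and above the change the comparison group experienced.
estimated_impact = treatment_change - comparison_change

print(f"Treatment group change:  {treatment_change:.1f}")
print(f"Comparison group change: {comparison_change:.1f}")
print(f"Estimated impact:        {estimated_impact:.1f}")
```

With these made-up numbers the programme group improves by 110 units and the comparison group by 44, so the estimated impact attributable to the intervention is 66 rather than the full 110, which is exactly the point of simulating the no-intervention outcome.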
Attention needs to be paid to the circumstances prevailing in the particular place where the evaluation is being carried out. Pilot testing of questionnaires and interviews should be done before the monitoring starts in order to rule out unimportant questions and incorporate more relevant and useful ones. Once the data is collected, realistic and reasonable interpretations and conclusions must be drawn. The sample size and sampling methods should be chosen so that they neither limit the study nor make the scope of the evaluation too broad.
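As one hedged illustration of how sampling choices can be kept free of convenience bias, the sketch below draws a simple random sample of respondents from a hypothetical list of beneficiary households. The identifiers, sample size and random seed are invented for the example and are not recommendations drawn from the sources cited here.

```python
# Illustrative sketch only: drawing a simple random sample of respondents
# from a hypothetical list of registered beneficiaries, so that who gets
# interviewed is decided by chance rather than by convenience.
import random

# Hypothetical sampling frame of beneficiary identifiers.
beneficiaries = [f"household_{i:03d}" for i in range(1, 201)]

sample_size = 30  # chosen for illustration, not a recommendation
random.seed(7)    # fixed seed so the draw can be reproduced and audited

sample = random.sample(beneficiaries, sample_size)
print(f"Selected {len(sample)} of {len(beneficiaries)} households for interview")
print(sample[:5])
```

Fixing the seed is a small design choice that lets another evaluator reproduce exactly the same draw, which supports the transparency that the essay argues an evaluation should aim for.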
It is difficult for evaluators to “synthesise and summarise what they are doing – the aggregation problem” and to “discover the degree to which any changes they observe were brought about by their actions – the attribution problem” (C. Roche, 2001). Different stakeholders in a project have different perceptions of monitoring and evaluation and different reasons for participation.
As the E. Mebratu (2002) study illustrates, “there are inevitable tensions between donors, project staff and beneficiaries who bring different experiences and perspectives and have different things to win and lose as a result of the process.” According to Francis Rubin (1995), there are many stakeholders in a development project, and it is important to identify their needs, motives, interests and the relations between them.
“If this is not done, it can lead to conflict, uncertainty and a breakdown in communication.”
There has been a shift from scientific and formal systems of evaluation and monitoring to more participatory methods in recent years. The logic and reasoning are simple: “empower through the research process itself” (Mayoux, 1997).
A development project is planned and implemented with the idea of welfare and benefit for a certain target group. The participatory approach puts these target groups at the forefront and gives high priority to their views, opinions and experiences. It attempts to remove feelings of alienation and exclusion and to make evaluation and monitoring a coordinated, collaborative assessment and judgement that is more involved with its stakeholders. People are more willing to take an interest in an evaluation if they are directly involved or stand to benefit.
Impact assessment is a synthesis of making the right choices and focussing on the ultimate goal of monitoring and evaluation, which is to prove the impacts, positive or negative, in the most transparent manner, which can lead to improvements in existing processes and ideas. Impacts can be proved if the appropriate tools of analysis and information are used.
The terms of reference should be decided, and the success or failure of a project should be carefully defined. Both qualitative and quantitative aspects of the plan should be examined. There should be participation from all the stakeholders involved, with more emphasis laid on bottom-up measures of participation. To improve processes, the evaluation and monitoring should scrutinise the causes and effects of successes and failures.
Comparisons should be made with other projects and their strengths and weaknesses. The best way to make an improvement is to learn from the ‘targeted people and place’ what their views and thoughts about the intervention are. The report prepared, and the conclusions and recommendations drawn, should be free from all biases and vested interests. A scientific approach, coupled with the necessary participatory and qualitative aspects, can grasp true achievements and progress, leading to a successful and purposeful evaluation and monitoring system that proves impacts and influences improvements.