Globally, interventions are planned to change the status quo and improve people’s lives. Interventions may vary across countries and contexts and range from time-bound projects based in a single location or state to international programs in multiple countries and sectors. Once an intervention has been developed and implemented through the project/program approach, it is crucial to track the intervention’s progress and assess the effect of the intervention for course correction and scale-up.
In essence, the intervention must be monitored and evaluated to ensure that it has been successfully implemented, appropriately targeted the problem, and has the expected impact on the problem.
Monitoring and evaluation are more than instruments for determining the success or failure of an intervention. They help researchers explain the mechanisms behind observed changes. In other words, a monitoring and evaluation framework, when implemented, may help an organization extract useful information from previous and existing operations. This information can then serve as the foundation for programmatic orientation, reorientation, and planning. This blog explains both concepts while highlighting the difference between monitoring and evaluation as processes.
As the name implies, monitoring entails continuously tracking the progress of an activity. It may be characterized as a function that observes an ongoing intervention to inform project personnel, program managers, and key stakeholders about progress, or the lack thereof, toward attaining the project’s objectives. It does so by tracking inputs and outputs and any change in output caused by changes in input. Monitoring is a vital aspect of the implementation process and is critical for the project’s progress. However, it cannot speak to the project’s larger results or overarching purpose; for that, an evaluation is necessary. Nonetheless, monitoring is critical for a project’s process indicators.
Since the monitoring process involves giving crucial information regarding different initiatives and their implementation, it cannot be a one-person job. It requires employees from policy management, finance, field staff, and the officers responsible for executing the project on the ground. The process thus requires systematically collecting data to inform all stakeholders, such as managers, funders, and participants, about the progress of implementation and the achievement of desired outcomes.
Monitoring applies to all program levels, from output to outcome, though the focus is most commonly on output data. Its critical functions are to gather data from participants, analyze contextual changes, and provide an early warning system for potential challenges to the project/program. The analysis of monitoring data is critical to making informed mid-term programmatic changes.
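To make the early-warning function concrete, here is a minimal sketch of a mid-term check that compares each output indicator’s actual value against its target and flags shortfalls. The indicator names, values, and the 90% tolerance threshold are purely illustrative assumptions, not part of any standard M&E framework.

```python
# Early-warning check: flag output indicators falling short of target.
# All names and numbers below are hypothetical, for illustration only.

def flag_shortfalls(indicators, tolerance=0.9):
    """Return indicators whose actual value is below `tolerance` x target,
    mapped to their achievement ratio (actual / target)."""
    return {
        name: round(actual / target, 2)
        for name, (actual, target) in indicators.items()
        if actual < tolerance * target
    }

mid_term = {
    "households_reached": (820, 1000),  # (actual, mid-term target)
    "training_sessions": (48, 50),
    "kits_distributed": (300, 500),
}

print(flag_shortfalls(mid_term))
# -> {'households_reached': 0.82, 'kits_distributed': 0.6}
```

A real MIS would pull these values from collected field data rather than hard-coded dictionaries, but the logic of comparing actuals to targets for mid-term course correction is the same.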
The monitoring process must begin with developing a monitoring framework or information system.
After designing the monitoring plan, the next step is to convert it into a system that can be used repeatedly to generate insights and report periodically on the project’s progress. The Monitoring Information System (MIS) could be a simple paper-based system if we work in a resource-constrained setting, though nowadays MIS usually refers to a computerized system. It is worth noting that, after designing the plan, institutions generally outsource the design and implementation of the MIS to a technology firm.
Once we have finalized the monitoring plan and MIS design, the next step is to design various monitoring reports. A well-designed MIS can easily be translated into various reports required by the monitoring plan.
The first step in designing reports is to group all the indicators to be assessed in one place. The process and modalities of developing monitoring reports will differ depending on whether the monitoring system is a paper-based MIS or a computerized/web-enabled MIS. In both systems, however, we must align the reports with the project’s key outputs that need to be tracked. We can also devise one set of reports for each output and a summary report for the project.
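The grouping step above can be sketched in a few lines: collect indicator records, group them by the output they belong to, and derive one report per output plus a project-level summary. The record structure and field names here are hypothetical assumptions for illustration.

```python
# Hypothetical sketch: group indicator records by project output to
# produce per-output reports and a project summary, as described above.
from collections import defaultdict

records = [
    {"output": "Output 1", "indicator": "villages_covered", "value": 12},
    {"output": "Output 1", "indicator": "staff_trained", "value": 45},
    {"output": "Output 2", "indicator": "clinics_equipped", "value": 7},
]

by_output = defaultdict(dict)
for r in records:
    by_output[r["output"]][r["indicator"]] = r["value"]

# One report per output...
for output, indicators in sorted(by_output.items()):
    print(output, indicators)

# ...and a project-level summary of indicators tracked per output.
summary = {output: len(ind) for output, ind in by_output.items()}
print("Summary:", summary)
```

In a computerized/web-enabled MIS the same grouping would typically be a database query; in a paper-based MIS it is done by hand on tally sheets, but the alignment of indicators to outputs is identical.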
Once the MIS is ‘rolled out,’ the next step will be troubleshooting and streamlining. Further, there may be alterations in information collection methods or even some reports. Finally, we can start analyzing the monitoring information once the final system is in place and continuously providing information as desired.
Regarding etymology, the word ‘evaluation’ comes from the root word value meaning “worth.” It is defined as an exercise in determining the value or worth of an intervention to offer feedback to stakeholders. In that respect, evaluation is distinct from monitoring.
While monitoring typically reports on the performance of process indicators, evaluation reports on the performance of effect indicators. Monitoring is an internal process in which all project staff members collaborate to create a monitoring system, whereas an evaluation is often carried out by an external agency to review the project’s accomplishments.
By giving evidence-based feedback, evaluation should essentially help influence decision-making or policy formation. Such an assessment occurs in the evaluation process, a selective exercise that aims to assess progress toward a goal methodically and objectively. It is described as collecting and analyzing diverse sorts and forms of data to assess the project’s results in relation to the inputs utilized. Walberg and Haertel (1990: 756) describe evaluation as the systematic and thorough analysis of an intervention, program, institution, organizational variable, or policy. The emphasis is on comprehending and improving the assessment subject, as with formative evaluation, or on summarizing, characterizing, and rating planned and unplanned results, as with summative evaluation.
Evaluation is sometimes seen as a component of a bigger management cycle. It is often called the planning-evaluation cycle since it is vital to any planning process. Both planners and evaluators refer to the planning-evaluation cycle in various ways. Typically, the first step of such a cycle is the planning phase, which is intended to develop a collection of viable actions, programs, or technologies from which the best can be chosen for implementation.
The evaluation process begins with identifying the tasks, activities, or projects that need evaluation. Effective evaluation is only possible if all other stages, including monitoring, have been thoroughly followed and comprehensive information regarding project activities and tasks completed to accomplish the intended objective is available. The first stage is to appoint an external evaluation agency to assess the project’s results or performance against the established goals. The procedure does not stop with the external agency’s review; it is taken a step further by ensuring that evaluation findings are communicated to all project stakeholders and that indications for program improvement are gathered.
Depending on the objective and the issue at hand, a planning process may include any or all of several stages.
Managers responsible for the planning process must also be skilled in conceptualizing and detailing the evaluation issue and alternatives to choose the best possible solution.
Evaluation strategies based on the purpose of evaluation can be categorized into formative and summative evaluations.
As the term implies, formative evaluation gives the early input necessary to improve the intervention or policy design. Formative assessments are used to determine whether a program is performing well and, if not, what revisions are necessary. Formative evaluation may be classified into the following categories:
Process evaluation ascertains whether program activities have been implemented as intended or planned. It is critical in assessing how a specific set of processes, as enshrined in the project document, has been implemented at the field level and how that implementation has furthered outcomes.
Process evaluation is usually done at the end of the project. However, we can also carry out a concurrent process-monitoring exercise to provide specific feedback for course correction.
Process evaluation is sometimes considered formative evaluation, but it is conceptually helpful to separate formative evaluation from the process evaluation. Process evaluation can provide extremely useful information about what happened in a program during implementation.
Summative evaluation is more specific than formative evaluation. It strives to identify the influence of a program’s actions and tasks in accomplishing its goals. In addition, it aims to analyze the effects or results of an intervention by verifying whether the outcomes align with the planned goals. Summative evaluation can be subdivided into outcome evaluation and impact evaluation.
Outcome evaluation analyzes the impact of a program’s service delivery and organizational input on the desired outcomes. It also provides a summative assessment of the program implementation in achieving the desired outcomes. Outcome evaluation also looks for unintended outcomes or restraining factors that hinder programs from attaining the desired outcomes.
Usually, outcome evaluation employs qualitative approaches, designs, and methods to ascertain short-term, medium-term, and long-term outcomes. In recent years, however, the emphasis has been on using mixed-method approaches and designs to provide a comprehensive outcome evaluation: ascertaining the outcomes and delineating the factors or strategies, both formative and process-related, that contributed to achieving them (Mohr, 1999).
Impact evaluation ascertains the project’s impact by analyzing whether the project’s activities and tasks have achieved the desired objective/goal. It therefore assesses the overall effects of the program.
Impact evaluations are specific evaluations that try to ascertain how much of the program’s impact can be attributed to the project intervention. They seek to establish the causal linkage between project intervention and project impact.
Unlike other types of evaluation, the central question in impact evaluation is to ascertain the causal linkage, attributing the change to the project intervention by creating a counterfactual. The idea of the counterfactual is key in impact evaluation: one constructs a counterpart to the factual (project) group, i.e., a control group, to ascertain what would have happened without the intervention. Impact evaluation demonstrates the impact as the difference in outcomes achieved by the project and control groups.
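In its simplest form, the computation described above is a difference in mean outcomes between the project (treatment) group and the control group standing in for the counterfactual. The outcome scores below are made-up numbers for illustration; real impact evaluations also need comparable groups (e.g., via randomization or matching) and uncertainty estimates, which this sketch omits.

```python
# Illustrative impact estimate: difference in mean outcomes between the
# project group and the control group (the counterfactual). All values
# are hypothetical.
from statistics import mean

project_group = [62, 70, 68, 74, 66]  # outcome scores with the intervention
control_group = [58, 61, 60, 63, 58]  # outcome scores without it

impact = mean(project_group) - mean(control_group)
print(f"Estimated impact: {impact:.1f}")
# -> Estimated impact: 8.0
```

The control group’s mean serves as the estimate of "what would have happened without the intervention"; the validity of the impact figure rests entirely on how well that group mirrors the project group.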
Kultar Singh – Chief Executive Officer, Sambodhi