
Process evaluation: The Idea, Objective, and Process

Posted by: Kultar Singh
Category: Research and M&E

Process evaluation focuses on the implementation process and aims to identify how far the project has adhered to the strategy laid out in its logic model or prescribed theory of change. Unlike outcome and impact evaluations, process evaluations concentrate on inputs, activities, and outputs and their interaction. A process evaluation is not concerned with establishing cause and effect; it is interested in how the intervention is rolled out and whether everything is going as per the defined process. Where it is not, the evaluation seeks to understand what is going wrong with the process.

Process evaluation enables assessors to distinguish between theory failure and implementation failure. Implementation failure occurs when intended outcomes are not achieved because the program was poorly delivered, for instance through unmet delivery targets or a shortage of qualified workers.

Objective

The key objective of process evaluation is to assess intervention fidelity, defined as the extent to which a program is implemented in the manner intended in its design. Fidelity can be understood as adherence to the intervention in terms of content, coverage, frequency, and duration. However, likely moderators of adherence must also be considered, such as intervention complexity, implementation strategies, quality of delivery, and participant responsiveness.
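As a minimal illustration of how the four adherence dimensions might be scored, consider the sketch below. The dimension targets, delivered values, and equal weighting are all hypothetical; a real evaluation would take planned values from the program's design documents and delivered values from monitoring data.

```python
# Minimal sketch of a fidelity (adherence) score across the four
# dimensions named above. All targets and delivered values are
# hypothetical illustrations, not a standard instrument.

PLANNED = {            # targets from the (hypothetical) program design
    "content":   10,   # curriculum modules to be delivered
    "coverage":  200,  # participants to be reached
    "frequency": 12,   # sessions to be held over the period
    "duration":  90,   # minutes per session
}

DELIVERED = {          # what monitoring data say actually happened
    "content":   8,
    "coverage":  150,
    "frequency": 10,
    "duration":  75,
}

def fidelity_score(planned, delivered):
    """Return per-dimension adherence ratios (capped at 1.0) and their mean."""
    ratios = {
        dim: min(delivered[dim] / planned[dim], 1.0)
        for dim in planned
    }
    overall = sum(ratios.values()) / len(ratios)
    return ratios, overall

ratios, overall = fidelity_score(PLANNED, DELIVERED)
for dim, r in ratios.items():
    print(f"{dim:>9}: {r:.0%}")
print(f"  overall: {overall:.0%}")
```

The equal-weighted mean is only one choice; a program may reasonably weight coverage or content more heavily depending on its theory of change.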

Advantages

There are considerable advantages to undertaking a process evaluation. First, it serves as a checkpoint during program implementation to verify that the program is delivered according to plan; if any requirement is not being met, catching it at this level of review can save time and money in the later stages of evaluation. Second, a process evaluation establishes a feedback loop through regular assessment and monitoring. Because data collection and analysis are carried out consistently throughout the project, they can surface programmatic difficulties early. Third, it enables implementers to adjust program activities accordingly and increase the likelihood of beneficial results. Finally, process evaluation enables evaluators and program implementers to identify programmatic strengths and shortcomings and to improve the program's design for future scale-up.

Process evaluation has the strength of being able to examine the entire intervention, or individual components of it, to assess how they are delivered to target participants. In addition, it assesses an intervention's effect on process indicators.

When to use process evaluation

While outcome and impact evaluations are well suited to assessing a program's effects, a process evaluation is the right choice if you want to make changes to your program as it is implemented. Through process evaluation, one can make course corrections to improve the program, work out why outcomes are not being achieved, and break the process down to locate the issues that need correcting.

A process evaluation can also determine which parts of a program matter most. Further, while coverage is about how well the intervention reaches people and spreads, one is equally interested in whether the services went to the right people and in the profile of those people. A process evaluation helps assess whether the program is reaching only those most likely to have a good outcome while missing those who might need the services most. It also establishes who received the full service, who dropped out, and the reasons for not reaching the desired target group, as the sketch below illustrates.
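To make these coverage questions concrete, here is a small sketch computing reach, the profile of those reached, full-service completion, and dropout. The field names, records, and eligible-population figure are invented for illustration; in practice they would come from the program's monitoring information system.

```python
# Hypothetical participant records; field names are invented for
# illustration only.
participants = [
    {"id": 1, "priority_group": True,  "sessions_attended": 12, "sessions_offered": 12},
    {"id": 2, "priority_group": False, "sessions_attended": 12, "sessions_offered": 12},
    {"id": 3, "priority_group": True,  "sessions_attended": 3,  "sessions_offered": 12},
    {"id": 4, "priority_group": False, "sessions_attended": 0,  "sessions_offered": 12},
]
ELIGIBLE_POPULATION = 10  # assumed size of the eligible target population

# Reach: how far the intervention spread among those eligible.
reach = len(participants) / ELIGIBLE_POPULATION

# Profile: did services go to the right people?
priority_share = sum(p["priority_group"] for p in participants) / len(participants)

# Full service vs. dropout (dropout defined here, arbitrarily,
# as attending fewer than half the sessions offered).
full_service = [p for p in participants if p["sessions_attended"] == p["sessions_offered"]]
dropouts = [p for p in participants if p["sessions_attended"] < p["sessions_offered"] / 2]

print(f"Reach: {reach:.0%} of eligible population")
print(f"Priority-group share among those reached: {priority_share:.0%}")
print(f"Received full service: {len(full_service)}; dropped out: {len(dropouts)}")
```

The dropout threshold used here is arbitrary; the defining question for any real program is what level of exposure its theory of change treats as sufficient.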

Frameworks for process evaluation

Process evaluation uses a range of qualitative and quantitative methods and approaches. One widely used framework is the MRC (Medical Research Council) process evaluation framework. It argues that understanding the causal assumptions underlying an intervention, and evaluating how the intervention works in practice, are critical for building an evidence base that informs policy and practice. Furthermore, the framework underscores the dynamic relationships between implementation, mechanisms, and context, and uses these relationships to structure a process evaluation. Evaluators can also draw on other popular frameworks, such as realist evaluation (which answers questions such as what works, for whom, in which circumstances, and why) or the RE-AIM (reach, effectiveness, adoption, implementation, maintenance) framework.

Stages of process evaluation

Planning

In process evaluation, planning is key. The first stage identifies the purpose of the process assessment and balances the need for intensive observation against the need to retain credibility as independent assessors. It is also crucial to agree on whether evaluators will actively communicate results as they arise or assume a more passive role.

Process evaluation calls for skills in qualitative and participatory research techniques, along with relevant interdisciplinary theoretical competence. Further, it is vital to develop good communication channels to minimize duplication and conflict between the process and outcome evaluations.

Conception and execution

It is critical to articulate the intervention and its causal assumptions clearly. Additionally, it is vital to identify the crucial uncertainties, choose the most relevant questions, and answer them methodically. Finally, prospective questions should be derived from the intervention's assumptions, and scientific and policy priorities should be agreed based on the strength of the evidence supporting those assumptions.

For process evaluation, a mixed-methods approach is preferable. Quantitative methods can track implementation across sites, while qualitative approaches can document emergent changes in implementation, participant experiences with the intervention, and unexpected or complicated causal pathways, as well as generate new theory.

The frequency of data gathering is critical in process evaluation. Data should be gathered at several points in time to document the intervention's evolution. This should be accomplished by balancing the collection of key process measures from all sites or participants against more comprehensive data from smaller, purposively selected samples, as in the sketch below.
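One way to operationalize that balance is sketched below: every site contributes key process indicators at each round, while a small subset receives in-depth qualitative work. The site names, round names, and subset size are hypothetical, and random selection stands in for genuine purposive selection purely to keep the example short.

```python
import random

random.seed(42)  # reproducible illustration

sites = [f"site_{i:02d}" for i in range(1, 21)]  # 20 hypothetical sites
ROUNDS = ["baseline", "midline", "endline"]      # repeated data collection

# Stand-in for purposive selection: a real design would choose the
# subset on substantive criteria (e.g., high/low fidelity, urban/rural),
# not at random as done here for brevity.
in_depth_subset = random.sample(sites, k=4)

schedule = {
    rnd: {
        "key_indicators": sites,              # all sites, every round
        "in_depth_qualitative": in_depth_subset,
    }
    for rnd in ROUNDS
}

for rnd, plan in schedule.items():
    print(f"{rnd}: {len(plan['key_indicators'])} sites for indicators, "
          f"{len(plan['in_depth_qualitative'])} for in-depth work")
```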

Analysis

Consider more detailed modeling of variations in fidelity or reach across participants or sites. Integrate quantitative process data with outcome data to test whether effects vary with implementation or with pre-specified contextual factors, and to assess hypothesized mediators.
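As one hedged illustration of linking process and outcome data, the sketch below fits a regression with a treatment-by-fidelity interaction using statsmodels. The variable names and the simulated data are invented; a real analysis would use the evaluation's actual merged records and respect its design (clustering, covariates, pre-registration of the moderation hypothesis).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for merged process + outcome records.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),    # 1 = received intervention
    "fidelity": rng.uniform(0.4, 1.0, n),  # site-level implementation fidelity
})
# Built-in assumption for the simulation: the treatment effect
# grows with implementation fidelity.
df["outcome"] = (
    0.5 * df["treatment"] * df["fidelity"]
    + rng.normal(0, 0.5, n)
)

# The treatment:fidelity interaction term tests whether program
# effects vary with how well the program was implemented.
model = smf.ols("outcome ~ treatment * fidelity", data=df).fit()
print(model.summary().tables[1])
```

A significant interaction coefficient here would suggest effects are conditional on implementation quality, which is exactly the kind of finding that separates implementation failure from theory failure.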

To ensure that the quantitative and qualitative investigations complement each other, qualitative data should be collected and analyzed regularly, with themes that emerge in early interviews explored in later ones. It is also important to analyze and report process data before the final outcome findings are known, to avoid biased interpretation, and to state explicitly whether process data are being used to generate hypotheses or to provide post-hoc explanations.

Reporting

The fundamentals of reporting a process evaluation are identical to those of reporting an outcome evaluation, although the presentation and visualization differ. For publication, it is necessary to identify the current reporting guidelines that apply to the methodology used. A thorough report encompassing all assessment components should be published, as should a protocol paper summarising the whole evaluation. Additionally, it is valuable to highlight contributions to intervention theory or methodological development to draw the reader's attention beyond the intervention at hand.

References

Cartwright N, Hardie J. Evidence-Based Policy: A Practical Guide to Doing It Better. New York: Open University Press; 2012.

Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–1327.

Gray D. Doing research in the real world – theoretical perspectives and research methodologies. London: Sage Publications Inc; 2018.

Greene JC. Mixed methods in social inquiry. San Francisco: Wiley; 2007.

Oakley A, Strange V, Bonell C, Allen E, Stephenson J. Process evaluation in randomized controlled trials of complex interventions. BMJ. 2006;332(7538):413–6.

Pawson R, Tilley N. Realistic Evaluation. London: Sage Publications; 1997.

Kultar Singh – Chief Executive Officer, Sambodhi

