Impact evaluations establish a causal link between a program and an outcome by ruling out the possibility that factors outside the program of interest were responsible for the observed change. This is done by constructing a counterfactual scenario.
The concept of the counterfactual is central to every impact evaluation. A counterfactual attempts to answer the question: “What would have happened to the project population if the project had not been implemented?”
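In the potential-outcomes notation commonly used to formalize this question (one standard formalization among several), the program's impact on participants can be written as

$$\Delta = E[Y_1 \mid D = 1] - E[Y_0 \mid D = 1],$$

where $Y_1$ is the outcome with the program, $Y_0$ the outcome without it, and $D = 1$ indicates participation in the project. The second term is the counterfactual: it can never be observed for participants and must therefore be estimated.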
The most widely applied approach to answering this question is the randomized controlled trial (RCT), which creates a control group through random assignment. However, an RCT is not always feasible in practice because of factors such as the nature of the intervention, its cost, and its scale. When that is the case, a robust counterfactual group can instead be constructed using quasi-experimental evaluation methods.
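To illustrate how a quasi-experimental design can stand in for a randomized control group, the sketch below computes a simple two-period difference-in-differences estimate, one common quasi-experimental technique. The data, group labels, and magnitudes are invented for illustration only; a real evaluation would use the project's own baseline and follow-up data.

```python
# A minimal sketch of a quasi-experimental counterfactual:
# a two-period difference-in-differences estimate.
# The data frame and its values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":   ["treated"] * 4 + ["comparison"] * 4,
    "period":  ["before", "before", "after", "after"] * 2,
    "outcome": [10, 12, 18, 20,   # treated communities
                11, 13, 14, 16],  # comparison communities
})

means = df.groupby(["group", "period"])["outcome"].mean()

# Change over time within each group.
treated_change = means["treated", "after"] - means["treated", "before"]
comparison_change = means["comparison", "after"] - means["comparison", "before"]

# The comparison group's change proxies for what would have happened
# to the treated group without the program (the counterfactual trend).
impact = treated_change - comparison_change
print(f"Estimated program impact: {impact:.1f}")
```

The key assumption, as in any difference-in-differences design, is that the two groups would have followed parallel trends in the absence of the program.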
However, there are also situations in which even a quasi-experimental design cannot generate a counterfactual. The central challenge then becomes assessing the program's results and effects without a statistically comparable control group. In such circumstances, given the practical difficulties of constructing comparison groups, it is worth investigating what options exist beyond the conventional experimental and statistical counterfactuals. Evaluations therefore frequently need to develop novel approaches to creating counterfactuals, that is, to find sources of information about communities, groups, or organizations that have characteristics comparable to those of the project population but have not been exposed to the intervention.
An impact evaluation using alternative approaches can be carried out in various ways; what is required is a combination of strategies tailored to the specific circumstances. Several factors must be considered when deciding which methods and designs to use: the available resources, the evaluation questions, and the purpose for which the evaluation will be used. Beyond choosing among the available approaches, one must also decide when and how to apply them: How well can they examine and rule out plausible alternative explanations for the changes observed in the population of interest? Which of them best measures the efficacy of complex programs?
The following methods can be used on their own or in combination with one another to produce a counterfactual for impact evaluation:
- Pipeline Design
- Historical Method
- Longitudinal Panel
- Process Tracing
- Contribution Analysis
- Comparative Case Study Method
- Statistical Control Using Secondary Data (sketched after this list)
- Outcome Mapping
- Outcome Harvesting
- Realist Analysis
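To make one of these methods concrete, the sketch below illustrates statistical control using secondary data: regressing the outcome on a program-participation indicator while holding observed characteristics from an existing dataset constant. The variable names and simulated data are hypothetical; the point is that conditioning on observables substitutes for a randomized control group.

```python
# A minimal sketch of statistical control using secondary data:
# regress the outcome on program participation while holding
# observed characteristics constant. Data are simulated; in
# practice the covariates would come from an existing survey
# or administrative dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50, 10, n)           # pre-existing covariate
education = rng.integers(0, 16, n)       # pre-existing covariate
program = (income + rng.normal(0, 10, n) < 50).astype(int)  # non-random uptake

# Simulated outcome with a true program effect of 5 units.
outcome = 5 * program + 0.4 * income + 0.8 * education + rng.normal(0, 3, n)

df = pd.DataFrame({"outcome": outcome, "program": program,
                   "income": income, "education": education})

# Conditioning on observables stands in for a randomized control group.
model = smf.ols("outcome ~ program + income + education", data=df).fit()
print(model.params["program"])  # estimate of the program effect
```

Because participation here depends on income, a naive comparison of group means would be biased; controlling for income and education recovers an estimate close to the true effect. The approach only removes bias from characteristics that are actually observed in the secondary data.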