The Synthetic Control Method (SCM) is now a widely used tool in the policy evaluation literature for assessing the impact of a specific intervention (such as an event or policy change) on an outcome of interest (Abadie & Gardeazabal, 2003; Abadie et al., 2010, 2015; Ben-Michael et al., 2021). The technique is, in essence, based on matching on pretreatment outcomes (Imbens & Wooldridge, 2009). It uses pre-intervention trends in the outcome and in supporting covariates to construct a counterfactual: the outcome trajectory the treated unit would have followed in the intervention’s hypothetical absence. The strategy’s premise is to build this counterfactual from a control group of units that never received the intervention but whose outcome and covariate levels are comparable to those of the treated unit in the pre-intervention period.
Concretely, the method forms a weighted average of control units whose outcome and covariate values track those of the treated unit before the intervention; this weighted average is known as an artificial unit, a synthetic unit, or a synthetic control group. If the pre-intervention outcomes and covariates of the treated unit and its control group are reasonably well balanced, the post-intervention difference in outcomes between the treated unit and its artificial (i.e., synthetic) control group can be interpreted as the effect of the policy or intervention on the treated unit.
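The weighted-average construction described above can be sketched as a small constrained least-squares problem: choose non-negative weights that sum to one so that the weighted combination of control units tracks the treated unit's pre-intervention outcomes. The function and toy data below are illustrative assumptions, not the exact estimator of the cited papers (which also match on covariates and allow predictor weighting).

```python
# Minimal sketch of synthetic control weight estimation (illustrative only).
# Weights are non-negative and sum to 1, chosen to match the treated unit's
# pre-intervention outcome trajectory.
import numpy as np
from scipy.optimize import minimize


def synthetic_control_weights(Y0_pre, y1_pre):
    """Y0_pre: (T_pre x J) pre-period outcomes for J control units.
    y1_pre: (T_pre,) pre-period outcomes for the treated unit."""
    J = Y0_pre.shape[1]

    def loss(w):
        # Squared pre-period discrepancy between treated and synthetic unit.
        return np.sum((y1_pre - Y0_pre @ w) ** 2)

    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bounds = [(0.0, 1.0)] * J  # non-negative weights
    w0 = np.full(J, 1.0 / J)   # start from equal weights
    res = minimize(loss, w0, bounds=bounds, constraints=constraints)
    return res.x


# Toy example: the treated unit's pre-trend lies exactly midway between
# two control units, so the recovered weights should be about (0.5, 0.5).
t = np.arange(10.0)
Y0 = np.column_stack([t, t + 4.0])  # two hypothetical control units
y1 = t + 2.0                        # hypothetical treated unit
w = synthetic_control_weights(Y0, y1)
```

With the weights in hand, the post-intervention counterfactual is simply `Y0_post @ w`, and the estimated effect is the gap between the treated unit's realized outcome and that counterfactual.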
The synthetic control method incorporates aspects of both matching and difference-in-differences procedures. Difference-in-differences approaches are widely used policy evaluation tools that assess the impact of an intervention at an aggregate level by averaging outcomes over a collection of unaffected units.
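The difference-in-differences contrast mentioned above subtracts the control group's before-after change from the treated unit's before-after change. A minimal numeric sketch, using made-up aggregate outcomes, assuming the control group outcome is the simple average over unaffected units:

```python
# Hedged sketch of a difference-in-differences estimate with invented numbers.
treated_pre, treated_post = 10.0, 15.0

# Control outcome is averaged over a collection of unaffected units.
controls_pre = [9.0, 11.0, 10.0]
controls_post = [12.0, 14.0, 13.0]
control_pre = sum(controls_pre) / len(controls_pre)    # average = 10.0
control_post = sum(controls_post) / len(controls_post)  # average = 13.0

# (change in treated unit) minus (change in control average).
did_estimate = (treated_post - treated_pre) - (control_post - control_pre)
```

Here the treated unit rose by 5 while the controls rose by 3 on average, so the estimated intervention effect is 2. The synthetic control method refines this idea by replacing the simple average of controls with a data-driven weighted average.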
The method’s key drawback is that an exact, bias-free balance between the treated and control units is rarely achievable (Ferman & Pinto, 2021; Ben-Michael et al., 2021). Several empirical approaches have been proposed to reduce the bias caused by imperfect pre-intervention matching (Abadie & Imbens, 2011; Doudchenko & Imbens, 2016; Li, 2020). These include aiming for an exact match via an outcome model with a long pre-intervention period (Garoupa & Spruk, 2019) and allowing negative weights in the synthetic control group through calibrated propensity scores.
Comparative Case Study and Synthetic Control
In comparative case studies, researchers compare treated units to control units. Such studies can therefore only be performed when some units are exposed to the intervention while others are not (or when exposure levels differ substantially across units). Comparative case study research holds considerable promise in the social sciences, but its empirical applications have long been plagued by inferential issues and by uncertainty over the choice of reliable control groups. Building on an idea put forth by Abadie and Gardeazabal (2003), several papers have suggested employing synthetic control methods to address these issues: a data-driven procedure constructs a weighted combination of candidate comparison units that approximates the most relevant characteristics of the unit exposed to the intervention.
References
- Abadie, A., & Gardeazabal, J. (2003). The economic costs of conflict: A case study of the Basque Country. American Economic Review, 93(1), 113–132.
- Abadie, A., Diamond, A., & Hainmueller, J. (2010). Synthetic Control Methods for comparative case studies: Estimating the effect of California’s tobacco control program. Journal of the American Statistical Association, 105(490), 493–505.
- Abadie, A., Diamond, A., & Hainmueller, J. (2015). Comparative politics and the Synthetic Control Method. American Journal of Political Science, 59(2), 495–510.
- Abadie, A., & L’Hour, J. (2021). A penalized synthetic control estimator for disaggregated data. Journal of the American Statistical Association, 116(536), 1817–1834.
- Ben-Michael, E., Feller, A., & Rothstein, J. (2021). The augmented synthetic control method. Journal of the American Statistical Association, 116(536), 1789–1803.
- Doudchenko, N., & Imbens, G. W. (2016). Balancing, regression, difference-in-differences and Synthetic Control Methods: A synthesis [Working Paper No. 22791]. National Bureau of Economic Research.
- Ferman, B., & Pinto, C. (2021). Synthetic controls with imperfect fit. Quantitative Economics, 12(4), 1197–1221.
- Imbens, G. W., & Wooldridge, J. M. (2009). Recent developments in the econometrics of program evaluation. Journal of Economic Literature, 47(1), 5–86.
- Li, K. T. (2020). Statistical inference for average treatment effects estimated by synthetic control methods. Journal of the American Statistical Association, 115(532), 2068–2083.