
Quasi-experimental Design

Posted by: Kultar Singh
Category: Research and M&E

It is often not possible to conduct a randomized controlled trial (RCT) in a real-world setting for impact evaluation. In such cases, evaluators must explore alternative options to evaluate the intervention and construct a counterfactual using a quasi-experimental design.

Quasi-experimental design, as the name implies, is similar to experimental design. In a quasi-experimental design, the evaluator creates a counterfactual through other means that ensure comparability by minimizing confounding factors: a comparison group is constructed to mimic the characteristics of the treatment group as closely as possible. Identifying a strong counterfactual is crucial for attributing changes in outcomes or impact to a specific intervention. Some of the most popular quasi-experimental designs are:

Difference-in-Differences

The double difference or difference-in-differences (DID) method compares a treatment group and a comparison group (the first difference) before and after the intervention (the second difference). The method applies in both experimental and quasi-experimental settings and requires baseline and follow-up data for both the treatment and comparison groups.
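A minimal sketch of a two-period DID estimate is given below, assuming a pandas DataFrame with hypothetical columns `outcome`, `treated` (1 for the treatment group) and `post` (1 for the follow-up round); the coefficient on the interaction term is the DID estimate.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per household per survey round with columns:
#   outcome - the outcome of interest
#   treated - 1 if the household is in the treatment group, 0 for comparison
#   post    - 1 for the follow-up round, 0 for the baseline round
def did_estimate(df: pd.DataFrame) -> float:
    """Return the difference-in-differences estimate of the treatment effect."""
    # The coefficient on treated:post captures the second difference:
    # (treatment follow-up - treatment baseline) - (comparison follow-up - comparison baseline)
    model = smf.ols("outcome ~ treated + post + treated:post", data=df).fit()
    return model.params["treated:post"]
```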

Propensity Score Matching

As a quasi-experimental design, propensity score matching (PSM) has gained popularity over the past two decades. A propensity score design uses the propensity score to construct a comparison group that serves as the counterfactual for computing treatment estimates. The main steps in PSM, sketched in code after the list below, are:

  1. Creating a dichotomous variable for the two groups (project and comparison area)
  2. Generating propensity scores using logistic regression or another estimation model (depending on the objective), which assigns each household a propensity score
  3. Balancing the matched set of households to ensure equal covariate means within each block
  4. Calculating the average treatment effect by local linear regression matching or another estimation method (depending on the nature of the variables), restricted to the region of common support
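A minimal sketch of these steps, assuming a pandas DataFrame with a hypothetical binary `project` indicator, a list of covariate columns, and an `outcome` column; it estimates propensity scores by logistic regression, enforces common support, and matches each project household to its nearest comparison neighbour on the score.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(df: pd.DataFrame, covariates: list[str]) -> float:
    """One-to-one nearest-neighbour matching on the propensity score (ATT)."""
    # Steps 1-2: dichotomous group indicator and propensity scores from logistic regression
    X, d = df[covariates].values, df["project"].values
    scores = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]

    treated, control = df[d == 1].copy(), df[d == 0].copy()
    treated["ps"], control["ps"] = scores[d == 1], scores[d == 0]

    # Steps 3-4: restrict to common support, then match each project household
    # to the comparison household with the closest propensity score
    lo, hi = control["ps"].min(), control["ps"].max()
    treated = treated[(treated["ps"] >= lo) & (treated["ps"] <= hi)]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]].values)
    _, idx = nn.kneighbors(treated[["ps"]].values)
    matched_outcomes = control["outcome"].values[idx.ravel()]

    # Average treatment effect on the treated: mean outcome gap across matched pairs
    return float((treated["outcome"].values - matched_outcomes).mean())
```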

Regression Discontinuity

Regression discontinuity (RD) is a quasi-experimental method for estimating the causal effect of an intervention. It creates a counterfactual based on a cutoff that separates participants from non-participants. Thistlethwaite and Campbell (1960) introduced the RD design to estimate treatment effects, using a cutoff on an assignment variable to determine who falls into the intervention or comparison group. The design allows evaluators to compare individuals just above and just below the cutoff point to estimate the treatment effect.
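A minimal sketch of a sharp RD estimate, assuming a DataFrame with a hypothetical running variable `score` that determines treatment at a known cutoff and an `outcome` column; it fits a linear regression with separate slopes on either side of the cutoff within a chosen bandwidth and reads the treatment effect off the jump at the cutoff.

```python
import pandas as pd
import statsmodels.formula.api as smf

def sharp_rd_estimate(df: pd.DataFrame, cutoff: float, bandwidth: float) -> float:
    """Estimate the treatment effect as the jump in the outcome at the cutoff."""
    # Keep only observations close to the cutoff (a local comparison)
    local = df[(df["score"] - cutoff).abs() <= bandwidth].copy()
    local["centered"] = local["score"] - cutoff
    local["above"] = (local["centered"] >= 0).astype(int)

    # Separate slopes on each side; the coefficient on `above` is the
    # discontinuity in the regression line at the cutoff.
    model = smf.ols("outcome ~ above + centered + above:centered", data=local).fit()
    return model.params["above"]
```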

Promotion of Randomization / Instrumental Variables

Instrumental variables (IVs) can be used to evaluate programmes with imperfect compliance, voluntary enrollment, or universal coverage. A valid instrument influences participation in the programme but is not otherwise related to the outcome.

Randomized encouragement designs are one way to solve this evaluation problem: different incentives to participate in a programme are offered at random, without directly altering outcomes. The random offer then serves as an instrument for participation, making it possible to estimate the treatment effect for those induced to participate.
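A minimal sketch of two-stage least squares using a randomized encouragement as the instrument, assuming hypothetical columns `encouraged` (the random offer), `participated` (actual take-up), and `outcome`; the first stage predicts participation from the encouragement, and the second stage regresses the outcome on predicted participation.

```python
import pandas as pd
import statsmodels.api as sm

def encouragement_2sls(df: pd.DataFrame) -> float:
    """Two-stage least squares with the randomized encouragement as the instrument."""
    # First stage: participation explained by the random encouragement
    z = sm.add_constant(df[["encouraged"]])
    first = sm.OLS(df["participated"], z).fit()
    df = df.assign(participated_hat=first.fittedvalues)

    # Second stage: outcome explained by predicted participation
    x_hat = sm.add_constant(df[["participated_hat"]])
    second = sm.OLS(df["outcome"], x_hat).fit()
    # Coefficient on predicted participation = effect for those induced to take part
    return second.params["participated_hat"]
```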

Interrupted Time-Series Design

In an interrupted time series, observations are collected at multiple points before and after the treatment. A discontinuity in the level or trend of the series at the point when the treatment was initiated provides evidence of the intervention's effect.
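A minimal sketch of a segmented regression on an interrupted time series, assuming a DataFrame with a hypothetical `time` column, an `outcome` column, and a known period `t0` at which the treatment began; the level shift and slope change after `t0` are the evidence of an effect.

```python
import pandas as pd
import statsmodels.formula.api as smf

def its_segmented_regression(ts: pd.DataFrame, t0: int):
    """Fit a segmented regression: pre/post level shift and change in slope."""
    ts = ts.copy()
    ts["post"] = (ts["time"] >= t0).astype(int)          # 1 after the interruption
    ts["time_since"] = (ts["time"] - t0).clip(lower=0)   # periods elapsed since treatment began

    # outcome = baseline trend + level change at t0 + change in trend after t0
    model = smf.ols("outcome ~ time + post + time_since", data=ts).fit()
    # params["post"] is the immediate level change; params["time_since"] the slope change
    return model.params["post"], model.params["time_since"]
```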

Quasi-Experimental Design: When should it be used?

For several reasons, a quasi-experimental design can be more ethical than a true experimental one. Research methods that use random assignment can be ethically questionable in certain circumstances, such as transferring conditional cash to one treatment group while withholding it from the other. Further, a randomized controlled study may not be the best option for practical reasons: it is expensive and requires more logistical resources.

References:

Campbell DT. Counterbalanced design. In: Experimental and Quasi-experimental Designs for Research. Chicago: Rand McNally College Publishing Company; 1963. p. 50–55.

Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally Publishing Company, 1979.

Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract. 2000;17(Suppl 1):S11–6. 

Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin, 2002.

Thistlethwaite DL, Campbell DT. Regression-discontinuity analysis: an alternative to the ex post facto experiment. Journal of Educational Psychology. 1960;51(6):309–317.

Kultar Singh – Chief Executive Officer, Sambodhi

