Rigorous impact evaluation when RCTs aren't possible
By Aubrey Jolex | February 15, 2025 | 14 min read
Randomized controlled trials (RCTs) are the gold standard for impact evaluation. But let's be honest: sometimes an RCT just isn't feasible.
Maybe your program is already running, political constraints prevent withholding services, or you're evaluating a national policy. Does this mean you can't conduct rigorous impact evaluation? No.
Enter quasi-experimental designs (QEDs)—methods that approximate experimental conditions without random assignment. When implemented carefully, QEDs can provide credible causal evidence.
QEDs are worth considering when:
- Randomization is politically or ethically impossible.
- The program is already implemented (too late for an RCT).
- Universal rollout prevents creating a control group.
Regression Discontinuity Design (RDD)
Best for: Programs with strict eligibility cutoffs (e.g., test scores, income thresholds).
RDD compares people just above vs. just below the cutoff. If the cutoff is arbitrary, people on either side are likely very similar, making the difference in outcomes attributable to the program.
Example: A scholarship goes to students scoring ≥80. Compare students scoring 79 vs. 81: they are nearly identical in ability, so any difference in later outcomes is plausibly due to the scholarship.
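The scholarship example can be sketched in a few lines of Python. This is a deliberately naive local-mean estimator, with made-up scores and outcomes and an assumed cutoff of 80; a real RDD analysis would use local polynomial regression and data-driven bandwidth selection.

```python
def rdd_estimate(scores, outcomes, cutoff=80, bandwidth=2):
    """Naive sharp-RDD estimate: difference in mean outcomes just above
    vs. just below the cutoff, within +/- bandwidth points."""
    above = [y for s, y in zip(scores, outcomes)
             if cutoff <= s < cutoff + bandwidth]
    below = [y for s, y in zip(scores, outcomes)
             if cutoff - bandwidth <= s < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# Toy data: students near the cutoff; those at >=80 received the program.
scores = [78, 79, 79, 80, 80, 81, 78, 81]
outcomes = [50, 51, 50, 56, 55, 56, 49, 57]
effect = rdd_estimate(scores, outcomes)  # jump in outcomes at the cutoff
```

The bandwidth choice is the key tuning decision: narrower windows make the groups more comparable but leave fewer observations.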
Difference-in-Differences (DID)
Best for: Policy changes where you have pre- and post-data for treatment and comparison groups.
DID compares the change over time in the treatment group vs. the change in the comparison group. This removes time-invariant differences and common trends.
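The DID logic reduces to one line of arithmetic: subtract the comparison group's change from the treatment group's change. A minimal sketch, with illustrative group means I made up for this example:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DID effect = (change in treatment group) - (change in comparison group)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Toy means: both groups drift upward by ~2 over time (the common trend);
# the treatment group rises by an extra 3, which DID attributes to the program.
effect = did_estimate(treat_pre=10.0, treat_post=15.0,
                      ctrl_pre=9.0, ctrl_post=11.0)
```

The estimate is only credible if the two groups would have followed parallel trends absent the program, which is why pre-period trend comparisons matter so much in practice.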
Propensity Score Matching (PSM)
Best for: When you have rich data on the factors influencing participation.
PSM creates a comparison group by matching each participant with a non-participant who has similar observed characteristics (age, education, income, etc.). A key caveat: matching only balances what you can measure, so unobserved differences such as motivation can still bias results.
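Here is a toy sketch of nearest-neighbor matching on the propensity score. The scores below are made up; in practice you would estimate them first (e.g., with a logistic regression of treatment status on covariates) before matching:

```python
def psm_att(treated, controls):
    """treated/controls: lists of (propensity_score, outcome) pairs.
    Match each treated unit to the nearest-score control (with replacement)
    and return the average treatment effect on the treated (ATT)."""
    diffs = []
    for score, outcome in treated:
        # Nearest-neighbor match on the propensity score.
        _, match_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - match_outcome)
    return sum(diffs) / len(diffs)

# Toy data: (propensity score, outcome) for each person.
treated = [(0.80, 12.0), (0.60, 10.0)]
controls = [(0.79, 9.0), (0.61, 8.0), (0.20, 5.0)]
att = psm_att(treated, controls)
```

Note how the control with score 0.20 is never matched: units far outside the region of common support contribute nothing, which is exactly the behavior you want.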
Instrumental Variables (IV)
Best for: When you have a variable (instrument) that affects treatment but not outcomes directly.
IV uses "natural experiments" (like distance to a clinic or lottery numbers) to isolate causal effects.
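With a single instrument, the IV estimate has a simple closed form: the ratio of the instrument's covariance with the outcome to its covariance with the treatment (the Wald estimator, equivalent to two-stage least squares in this case). A sketch with fabricated data:

```python
def cov(a, b):
    """Population covariance of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def iv_estimate(z, x, y):
    """Wald/IV estimate: reduced-form effect of z on y, scaled by
    the first-stage effect of z on x."""
    return cov(z, y) / cov(z, x)

# Toy data: instrument z (e.g., living near a clinic) shifts treatment x,
# and x in turn raises outcome y.
z = [0, 0, 1, 1]
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
beta = iv_estimate(z, x, y)
```

The whole design stands or falls on the exclusion restriction: the instrument must move the outcome only through the treatment, an assumption the data alone cannot verify.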
QEDs require sophisticated statistical analysis to be credible. Our Analysis & Results module and consulting services can help.
- Does eligibility hinge on a strict cutoff? Yes → Consider RDD (Regression Discontinuity)
- Do you have pre- and post-data for treatment and comparison groups? Yes → Consider DID (Difference-in-Differences)
- Do you have rich data on the factors driving participation? Yes → Consider PSM (Propensity Score Matching)
- Is there a variable that affects treatment but not outcomes directly? Yes → Consider IV (Instrumental Variables)
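The decision checklist above can be encoded as a toy helper function. The boolean parameter names are my own labels for the condition each design requires; the function simply returns every design whose condition holds, since more than one may apply:

```python
def candidate_designs(cutoff, pre_post_data, rich_covariates, instrument):
    """Map yes/no answers from the checklist to candidate QEDs."""
    designs = []
    if cutoff:           # strict eligibility cutoff
        designs.append("RDD")
    if pre_post_data:    # pre/post data for treatment and comparison groups
        designs.append("DID")
    if rich_covariates:  # rich data on factors driving participation
        designs.append("PSM")
    if instrument:       # variable affecting treatment but not outcomes directly
        designs.append("IV")
    return designs
```

For example, a program with a test-score cutoff and a distance-based instrument would return both RDD and IV, and you would pick based on which assumptions are most defensible in your setting.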
Since QEDs rely on assumptions rather than randomization, you need to work harder to show your results are robust: placebo tests, falsification checks, and sensitivity analyses are standard tools for this.
Quasi-experimental designs offer rigorous causal inference when RCTs aren't feasible. But they're not a free lunch—they require strong assumptions that must be justified and tested.
Whether RCT or QED, we help you measure what matters
Not sure which design fits your program? Let's discuss your options.
Explore our toolkit for data management and analysis support.
Aubrey Jolex has designed and implemented both experimental and quasi-experimental evaluations across multiple countries. Let's find the right approach for your context.