Quasi-Experimental Designs
Rigorous impact evaluation when RCTs aren’t possible
By Aubrey Jolex | February 15, 2025 | 14 min read
Randomized controlled trials (RCTs) are the gold standard
for impact evaluation. But let’s be honest: sometimes an RCT just isn’t feasible.
Maybe your program is already running, political constraints prevent withholding services, or you’re
evaluating a national policy. Does this mean you can’t conduct rigorous impact evaluation?
No.
Enter quasi-experimental designs (QEDs)—methods that approximate experimental conditions
without random assignment. When implemented carefully, QEDs can provide credible causal evidence.
When to Use Quasi-Experimental Designs
✓ Feasibility
Randomization is politically or ethically impossible.
✓ Timing
Program is already implemented (too late for RCT).
✓ Scale
Universal rollout prevents creating a control group.
Four Common Quasi-Experimental Designs
1. Regression Discontinuity Design (RDD)
Best for: Programs with strict eligibility cutoffs (e.g., test scores, income
thresholds).
RDD compares people just above vs. just below the cutoff. If the cutoff is arbitrary, people on either
side are likely very similar, making the difference in outcomes attributable to the program.
Example: Scholarship for students scoring ≥80. Compare students scoring 79 vs. 81.
They are likely identical in ability, so any difference in future success is due to the scholarship.
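To make the scholarship example concrete, here is a minimal RDD sketch in Python/NumPy with invented numbers: we simulate scores and a later outcome with a true 5-point scholarship effect, fit a local linear regression on each side of the cutoff, and read off the jump at 80.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: test score (running variable) and a later outcome.
# True model: outcome rises with score, plus a 5-point jump for scholarship
# recipients (score >= 80). All numbers here are invented for illustration.
score = rng.uniform(50, 100, n)
treated = (score >= 80).astype(float)
outcome = 0.3 * score + 5.0 * treated + rng.normal(0, 2, n)

# Local linear RDD: fit a line on each side of the cutoff within a
# bandwidth, then take the difference of the two fits *at* the cutoff.
cutoff, bw = 80.0, 10.0
left = (score >= cutoff - bw) & (score < cutoff)
right = (score >= cutoff) & (score <= cutoff + bw)

b_left = np.polyfit(score[left] - cutoff, outcome[left], 1)
b_right = np.polyfit(score[right] - cutoff, outcome[right], 1)

# The intercepts are the predicted outcomes at score == cutoff on each side.
rdd_effect = b_right[1] - b_left[1]
print(f"Estimated scholarship effect at the cutoff: {rdd_effect:.2f}")
```

In practice you would also vary the bandwidth and check that the estimate is stable; real applications typically use a dedicated RDD package rather than a hand-rolled fit like this.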
2. Difference-in-Differences (DID)
Best for: Policy changes where you have pre- and post-data for treatment and comparison
groups.
DID compares the change over time in the treatment group with the change in the
comparison group. Under the parallel-trends assumption, this differences out time-invariant group differences and shocks common to both groups.
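The classic two-group, two-period DID is just a double subtraction of four cell means. Here is a toy sketch with simulated data (a true effect of 3, plus a baseline group gap and a common time trend that DID should difference away):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical panel: treatment-group indicator and before/after indicator.
treated = rng.integers(0, 2, n)   # 1 = treatment group
post = rng.integers(0, 2, n)      # 1 = after the policy change
# True effect = 3; groups start at different levels (+4) and share a
# common time trend (+2), both of which DID should remove.
y = 10 + 4 * treated + 2 * post + 3 * treated * post + rng.normal(0, 1, n)

def cell_mean(t, p):
    return y[(treated == t) & (post == p)].mean()

# DID = (treated after - treated before) - (control after - control before)
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(f"DID estimate: {did:.2f}  (true effect = 3)")
```

The same estimate falls out of a regression of the outcome on treated, post, and their interaction, which is how you would add controls and standard errors in a real evaluation.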
3. Propensity Score Matching (PSM)
Best for: When you have rich data on the observable factors influencing participation.
PSM creates a comparison group by matching each treated person with a non-participant who has a similar predicted probability of participating, based on observed characteristics (age, education, income, etc.). Its main limitation: unobserved factors such as motivation cannot be matched on and remain a threat to validity.
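Here is a minimal PSM sketch with invented data: estimate propensity scores with a logistic regression (fit by plain gradient descent so the example needs only NumPy), match each treated unit to the control with the nearest score, and average the matched outcome differences.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000

# Hypothetical observational data: participation depends on age and
# education, and so does the outcome. True program effect = 2.
age = rng.normal(40, 10, n)
edu = rng.normal(12, 3, n)
X = np.column_stack([np.ones(n), (age - 40) / 10, (edu - 12) / 3])
p_true = 1 / (1 + np.exp(-(X @ np.array([-0.3, 0.8, 0.6]))))
d = (rng.uniform(size=n) < p_true).astype(float)
y = 1.0 * X[:, 1] + 1.5 * X[:, 2] + 2.0 * d + rng.normal(0, 1, n)

# Step 1: estimate propensity scores with a logistic regression,
# fit here by simple gradient ascent on the log-likelihood.
beta = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ beta)))
    beta += 0.1 * X.T @ (d - p) / n
pscore = 1 / (1 + np.exp(-(X @ beta)))

# Step 2: match each treated unit to the control unit with the
# nearest propensity score (nearest neighbor, with replacement).
t_idx = np.where(d == 1)[0]
c_idx = np.where(d == 0)[0]
gaps = np.abs(pscore[c_idx][None, :] - pscore[t_idx][:, None])
matches = c_idx[gaps.argmin(axis=1)]

# Step 3: average treated-minus-matched-control difference (the ATT).
att = (y[t_idx] - y[matches]).mean()
print(f"PSM estimate of the program effect (ATT): {att:.2f}  (true = 2)")
```

Note the sketch works only because participation here is driven entirely by observed covariates; if an unobserved factor drove both participation and outcomes, no amount of matching would recover the true effect.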
4. Instrumental Variables (IV)
Best for: When you have a variable (an instrument) that shifts treatment uptake but affects outcomes
only through treatment.
IV exploits “natural experiments” (like distance to a clinic or lottery numbers) to isolate the causal effect of treatment.
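The clinic-distance example can be sketched in a few lines. In this simulated setup (all numbers invented), unobserved motivation drives both clinic visits and health, so ordinary regression is biased upward; using distance as an instrument recovers the true effect via the single-instrument (Wald) ratio:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Hypothetical setup: unobserved "motivation" raises both clinic visits
# and health, so OLS is confounded. Distance to the clinic (the
# instrument) shifts visits but affects health only through visits.
motivation = rng.normal(0, 1, n)
distance = rng.uniform(0, 10, n)                    # the instrument
visits = 5 - 0.4 * distance + motivation + rng.normal(0, 1, n)
health = 2.0 * visits + 3.0 * motivation + rng.normal(0, 1, n)  # true = 2

# Naive OLS slope is biased upward because motivation is omitted.
ols = np.cov(visits, health)[0, 1] / np.var(visits)

# IV with one instrument (Wald estimator): cov(z, y) / cov(z, x).
iv = np.cov(distance, health)[0, 1] / np.cov(distance, visits)[0, 1]

print(f"OLS: {ols:.2f} (biased)   IV: {iv:.2f} (true effect = 2)")
```

The catch, of course, is the exclusion restriction: if distance affects health through any channel other than visits (say, air quality), the instrument is invalid, and that assumption cannot be tested from the data alone.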
📊 Need Advanced Analysis?
QEDs require sophisticated statistical analysis to be credible. Our Analysis &
Results module and consulting services can help.
- Rigorous statistical modeling (RDD, DID, PSM)
- Robustness checks and sensitivity analysis
- Clear interpretation of causal claims
Which Design Should You Use?
1. Eligibility Cutoff?
Yes → Consider RDD (Regression Discontinuity)
2. Pre/Post Data?
Yes → Consider DID (Difference-in-Differences)
3. Rich Covariates?
Yes → Consider PSM (Propensity Score Matching)
4. Valid Instrument?
Yes → Consider IV (Instrumental Variables)
Strengthening Your Design
Because QEDs rely on assumptions that randomization would have guaranteed, you need to work harder to show your results are robust:
- Combine Methods: Use DID + PSM together for stronger evidence.
- Falsification Tests: Test for effects on “placebo” outcomes that shouldn’t change.
- Sensitivity Analysis: Show that results hold under different assumptions.
- Transparency: Be honest about limitations and assumptions.
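As a sketch of a falsification test, the toy DID below is run twice on simulated data: once on the real outcome (true effect of 3) and once on a placebo outcome the program shouldn't affect. A design that passes shows a clear effect on the first and roughly zero on the second:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
treated = rng.integers(0, 2, n)   # 1 = treatment group
post = rng.integers(0, 2, n)      # 1 = after the program

# The real outcome responds to the program (true effect = 3); the placebo
# outcome, chosen because the program should not touch it, does not.
y_real = 10 + 4 * treated + 2 * post + 3 * treated * post + rng.normal(0, 1, n)
y_placebo = 8 + 1 * treated + 2 * post + rng.normal(0, 1, n)

def did(y):
    m = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

real_effect = did(y_real)
placebo_effect = did(y_placebo)
print(f"DID on real outcome:    {real_effect:.2f}")
print(f"DID on placebo outcome: {placebo_effect:.2f}  (should be near 0)")
```

A large "effect" on the placebo outcome would be a red flag that the comparison group isn't capturing the right counterfactual trend.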
Conclusion
Quasi-experimental designs offer rigorous causal inference when RCTs aren’t feasible. But they’re not a
free lunch—they require strong assumptions that must be justified and tested.
Key Takeaways
- Choose design based on available data and variation
- Understand and defend your assumptions (e.g., parallel trends for DID, continuity at the cutoff for RDD)
- Test robustness extensively with placebo tests
- Be honest about limitations
Expert Evaluation Support
Whether RCT or QED, we help you measure what matters
📅 Free Consultation
Not sure which design fits your program? Let’s discuss your options.
🛠️ Analysis Tools
Explore our toolkit for data management and analysis support.
About the Author
Aubrey Jolex has designed and implemented both experimental and quasi-experimental evaluations across
multiple countries. Let’s find the right approach for your context.