And actionable strategies to avoid them in your evaluation
By Aubrey Jolex | February 10, 2025 | 15 min read
Randomized controlled trials (RCTs) are the gold standard for impact evaluation—when done correctly. But we've reviewed dozens of RCT designs over our years in the field, and we see the same mistakes repeatedly.
These aren't minor technical issues. They're fundamental flaws that can invalidate your entire evaluation, waste resources, and lead to incorrect conclusions about program effectiveness.
Mistake #1: An Underpowered Sample
The Problem: You design an RCT with too small a sample to detect meaningful program effects.
An NGO randomizes 20 schools (1,000 students). Sounds large enough? It isn't. After accounting for clustering, this design has only about 35% power. Even if the program works, there is roughly a 65% chance the evaluation will conclude "no significant effect."
Don't guess your sample size. Use our RCT Field Flow Toolkit to calculate exactly what you need.
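To make the arithmetic concrete, here is a minimal power calculation for a two-arm cluster RCT. The effect size (0.2 SD) and intra-cluster correlation (0.06) are illustrative assumptions, not measured values; plug in numbers from your own context.

```python
# Approximate power for a cluster-randomized trial, adjusting for
# intra-cluster correlation (ICC) via the design effect.
from math import sqrt
from scipy.stats import norm

def cluster_power(n_clusters, cluster_size, icc, effect_size, alpha=0.05):
    """Approximate power of a two-arm cluster RCT (normal approximation)."""
    deff = 1 + (cluster_size - 1) * icc           # design effect
    n_eff_per_arm = (n_clusters * cluster_size / deff) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect_size * sqrt(n_eff_per_arm / 2) - z_alpha)

# 20 schools x 50 students; MDE of 0.2 SD and ICC of 0.06 (both assumed)
power = cluster_power(20, 50, 0.06, 0.2)
print(f"Power: {power:.0%}")  # far below the conventional 80% target
```

With these assumptions, 20 schools of 50 students yield power close to the 35% figure above. Note that adding clusters, not just students per cluster, is usually what moves power.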
Mistake #2: Compromised Randomization
The Problem: Randomization is compromised by field staff discretion or logistical errors, undermining the entire design.
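One practical safeguard is to randomize centrally and reproducibly, then hand field teams a locked assignment list rather than letting them randomize on the spot. A minimal sketch, in which the stratum definition and seed are illustrative:

```python
# Reproducible, stratified random assignment: a fixed seed means the
# assignment can be re-generated and audited at any time.
import random

def stratified_assign(units, stratum_of, seed=20250210):
    """Randomly assign units to 'treatment'/'control' within strata."""
    rng = random.Random(seed)
    strata = {}
    for u in units:
        strata.setdefault(stratum_of(u), []).append(u)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for i, u in enumerate(members):
            assignment[u] = "treatment" if i < half else "control"
    return assignment

schools = [f"school_{i:02d}" for i in range(20)]
# Hypothetical stratum: here just odd vs. even school IDs for illustration
arm = stratified_assign(schools, stratum_of=lambda s: int(s[-2:]) % 2)
```

Because the seed is fixed, re-running the script reproduces the exact same assignment, which makes any deviation in the field detectable.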
Mistake #3: Mistimed Measurement
The Problem: You collect data before effects have materialized or after they have faded.
Too early: measuring employment 3 months after job training. Job search takes time, so you will miss the impact.
Too late: measuring health 5 years after a short-term vitamin supplement. Effects may already have faded.
Rule of Thumb: Map your theory of change carefully. Education often needs 1-2 years; health might need 6-24 months depending on the outcome.
Mistake #4: Unverified Implementation
The Problem: You assume the program was implemented as designed, but it wasn't. A "null result" might simply mean the program never actually happened.
Use the Monitoring Dashboard in our toolkit to track implementation fidelity as it happens.
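Even a back-of-the-envelope compliance check helps you interpret a null: compare take-up across arms, then rescale the intent-to-treat (ITT) estimate into an effect on compliers (ToT) with the simple Wald ratio. All numbers here are made up for illustration:

```python
# Rescaling ITT to ToT when take-up is incomplete (illustrative numbers).
itt_effect = 0.05        # measured difference between assigned groups
takeup_treated = 0.42    # share of treatment group that actually received the program
takeup_control = 0.02    # contamination: control units who received it anyway

# Wald/IV ratio: ITT divided by the difference in take-up rates
tot_effect = itt_effect / (takeup_treated - takeup_control)
print(f"ToT effect: {tot_effect:.3f}")  # 0.05 / 0.40 = 0.125
```

Here a small ITT estimate masks a much larger effect on the minority who actually received the program, which is exactly the distinction lost when fidelity goes unmonitored.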
Mistake #5: Selective Outcome Reporting
The Problem: Testing 30 outcomes and reporting only the 2 "significant" ones. This is statistical cherry-picking: with 30 tests at the 5% level, you should expect one or two "significant" results by chance alone, even for a program with no effect.
Pre-registration prevents "p-hacking" and "outcome switching." Always register your study design and analysis plan on the AEA RCT Registry or OSF before collecting endline data.
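Alongside pre-registration, adjust for the number of outcomes you test. A minimal Holm-Bonferroni sketch, with p-values invented purely for illustration:

```python
# Holm-Bonferroni step-down correction across all pre-specified outcomes.
def holm_adjust(pvals, alpha=0.05):
    """Return a True/False list: which hypotheses survive the correction."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    reject = [False] * len(pvals)
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (len(pvals) - rank):
            reject[i] = True
        else:
            break  # once one fails, all larger p-values fail too
    return reject

pvals = [0.001, 0.012, 0.049, 0.20, 0.65]  # illustrative p-values
print(holm_adjust(pvals))
```

Note that 0.049 would look "significant" in isolation but does not survive the correction, which is precisely the kind of result selective reporting tends to showcase.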
Great RCT design requires care, technical knowledge, and a commitment to rigor. Avoid these five mistakes: underpowered samples, compromised randomization, mistimed measurement, unverified implementation, and selective outcome reporting.
Tools designed to ensure rigor at every step
Comprehensive platform for power analysis, randomization, and field monitoring.
Launch Toolkit →
Don't risk your evaluation. Schedule a consultation to review your design.
Book Consultation →
Aubrey Jolex has designed and implemented dozens of RCTs across Asia and Africa, with 7+ years of experience at IFPRI. Learn from real-world experience and avoid costly mistakes in your evaluation.