5 Common RCT Design Mistakes
And actionable strategies to avoid them in your evaluation
By Aubrey Jolex | February 10, 2025 | 15 min read
Randomized controlled trials (RCTs) are the gold
standard for impact evaluation—when done correctly. But we’ve reviewed dozens of RCT
designs over our years in the field, and we see the same mistakes repeatedly.
These aren’t minor technical issues. They’re fundamental flaws that can invalidate your entire
evaluation, waste resources, and lead to incorrect conclusions about program effectiveness.
Mistake #1: Insufficient Statistical Power
The Problem: You design an RCT with too small a sample to detect meaningful program
effects.
Real Example
An NGO randomizes 20 schools (1,000 students). Sounds big enough? Wrong. After
accounting for clustering, this design has only about 35% power under realistic
assumptions about intracluster correlation. Even if the program works, there's a
roughly 65% chance the evaluation will conclude "no significant effect."
The Fix: Proper Power Analysis
Don’t guess your sample size. Use our RCT Field Flow Toolkit to calculate
exactly what you need.
- Calculate sample size for individual & cluster RCTs
- Account for attrition and baseline correlation
- Visualize power curves interactively
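A back-of-the-envelope version of that calculation can be sketched in pure Python using a normal approximation. The ICC and minimum detectable effect below are illustrative assumptions, not parameters taken from the example above:

```python
from math import erf, sqrt

def cluster_rct_power(clusters_per_arm, cluster_size, icc, mde_sd, z_crit=1.96):
    """Approximate power of a two-arm cluster RCT (normal approximation).

    mde_sd is the minimum detectable effect in standard-deviation units;
    z_crit = 1.96 corresponds to a two-sided test at alpha = 0.05.
    """
    deff = 1 + (cluster_size - 1) * icc             # design effect from clustering
    n_eff = clusters_per_arm * cluster_size / deff  # effective sample per arm
    se = sqrt(2 / n_eff)                            # SE of standardized difference
    z = mde_sd / se - z_crit
    return 0.5 * (1 + erf(z / sqrt(2)))             # standard normal CDF

# 20 schools (10 per arm) of 50 students, assuming ICC = 0.10 and MDE = 0.25 SD
power = cluster_rct_power(clusters_per_arm=10, cluster_size=50, icc=0.10, mde_sd=0.25)
print(round(power, 2))  # roughly 0.37 under these illustrative assumptions
```

Notice how sensitive the result is to clustering: with an ICC of zero, the same 1,000 students would give power above 95%. That gap is why the naive "1,000 students is plenty" intuition fails.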
Mistake #2: Poor Randomization Implementation
The Problem: Randomization is compromised by field staff or logistical errors,
undermining the entire design.
Common Failures
- Staff “randomly” assigning based on need
- Swapping participants after assignment
- Using predictable patterns (every other person)
How to Avoid It
- Centralize randomization (computer-based)
- Blind staff to assignment when possible
- Lock assignment lists immediately
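One way to centralize and lock randomization is to do the assignment once, in code, with a recorded seed, stratifying so the arms stay balanced within each site. A minimal sketch (the site names, IDs, and seed are illustrative):

```python
import random

def stratified_assignment(ids_by_stratum, seed=20250210):
    """Assign IDs to treatment/control within each stratum.

    A fixed, recorded seed makes the assignment list reproducible, so it
    can be generated centrally and locked before fieldwork begins.
    """
    rng = random.Random(seed)
    assignment = {}
    for stratum, ids in sorted(ids_by_stratum.items()):
        shuffled = list(ids)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for pid in shuffled[:half]:
            assignment[pid] = "treatment"
        for pid in shuffled[half:]:
            assignment[pid] = "control"
    return assignment

# Example: two sites with four schools each
arms = stratified_assignment({
    "north": ["N1", "N2", "N3", "N4"],
    "south": ["S1", "S2", "S3", "S4"],
})
```

Because the whole list is produced in one central run, field staff receive a finished, read-only assignment sheet rather than any discretion over who gets treated.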
Mistake #3: Measuring Outcomes Too Soon (Or Too Late)
The Problem: You collect data before effects materialize or after they’ve faded.
Too Soon
Measuring employment 3 months after training. Job search takes time—you’ll miss the impact.
Too Late
Measuring health 5 years after short-term vitamin supplements. Effects may have faded.
Rule of Thumb: Map your theory of change carefully.
Education often needs 1-2 years; health might need 6-24 months depending on the outcome.
Mistake #4: Ignoring Implementation Fidelity
The Problem: You assume the program was implemented as designed, but it wasn’t. A
“null result” might just mean the program never actually happened.
Monitor Your Fieldwork in Real Time
Use the Monitoring Dashboard in our toolkit to track implementation fidelity as
it happens.
- Track submissions per enumerator
- Verify intervention delivery
- Detect quality issues immediately
Mistake #5: Multiple Testing Without Correction
The Problem: Testing 30 outcomes and reporting only the 2 “significant” ones. This
is statistical cherry-picking.
The Solution
- Pre-specify primary outcomes: Pick 1-2 main goals.
- Adjust p-values: Use Bonferroni or Benjamini-Hochberg corrections.
- Create indices: Combine related measures (e.g., “Empowerment Index”) to
reduce the number of tests.
Bonus: Skipping Pre-Registration
Pre-registration prevents "p-hacking" and "outcome switching." Always register your study design and
analysis plan on the AEA RCT Registry or OSF, ideally before the intervention begins and at the very
latest before collecting endline data.
Conclusion
Great RCT design requires care, technical knowledge, and a commitment to rigor. Use this checklist to
steer clear of the five mistakes above:
Checklist for Success
- Conduct proper power analysis
- Implement randomization with integrity
- Time your measurement appropriately
- Monitor implementation fidelity
- Correct for multiple testing
Avoid These Mistakes with Our Toolkit
Tools designed to ensure rigor at every step
RCT Field Flow
Comprehensive platform for power analysis, randomization, and field monitoring.
Expert Guidance
Don’t risk your evaluation. Schedule a consultation to review your design.
About the Author
Aubrey Jolex has designed and implemented dozens of RCTs across Asia and Africa with 7+ years of
experience at IFPRI. Learn from real-world experience—avoid costly mistakes in your
evaluation.