M&E Studio

Decision-Grade M&E, Responsibly Built

© 2026 Logic Lab LLC. All rights reserved.

Term · Methods · 3 min read

Randomised Controlled Trial

An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.

Definition

A randomised controlled trial (RCT) is an experimental evaluation design that randomly assigns participants, communities, or units to either a treatment group (receiving the intervention) or a control group (not receiving it, or receiving a placebo/alternative). This random assignment ensures that, on average, the two groups are equivalent at baseline on both observed and unobserved characteristics. Any systematic difference in outcomes between groups at the end of the trial can therefore be attributed to the intervention itself, establishing causal attribution rather than mere correlation.

RCTs are considered the gold standard for impact evaluation when ethical and practical constraints allow. They directly address the counterfactual question, "what would have happened without the intervention?", by using the control group as a proxy for this unobservable scenario.

Why It Matters

RCTs provide the strongest possible evidence for whether a programme actually works. In an M&E field often dominated by pre-post comparisons that cannot rule out alternative explanations, RCTs offer causal certainty. This matters particularly when:

  • Scale-up decisions depend on proving effectiveness beyond reasonable doubt
  • Donor requirements demand experimental-level evidence before funding expansion
  • Resource allocation between competing interventions requires comparative effectiveness data
  • Policy decisions hinge on knowing whether an approach causes improvement

The ability to claim with confidence that "this intervention caused those outcomes" transforms RCTs from academic exercises into powerful tools for evidence-based decision-making in international development.

In Practice

RCTs appear in M&E work primarily as impact evaluations conducted after a programme has demonstrated initial feasibility. The design requires:

  1. Random assignment mechanism: typically using random number generators or lottery systems to allocate participants to treatment and control groups
  2. Baseline data collection: measuring key outcomes before the intervention begins to verify randomisation succeeded and establish pre-intervention equivalence
  3. Implementation fidelity monitoring: ensuring the treatment group actually receives the intervention as designed while the control group does not
  4. Follow-up measurement: collecting outcome data after the intervention period to compare groups
  5. Statistical analysis: using appropriate tests to determine whether observed differences are statistically significant or could have occurred by chance
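The assignment and analysis steps above can be sketched in a short Python example. This is a minimal, hypothetical illustration using only the standard library; a real trial would follow a pre-registered analysis plan with proper statistical software, and the group names and effect sizes here are invented for demonstration.

```python
import random
import statistics

def randomly_assign(units, seed=0):
    """Step 1: randomly allocate units to treatment and control (50/50 split)."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

def mean_difference(treatment, control):
    """Step 5 (estimate): difference in mean outcomes between the groups."""
    return statistics.mean(treatment) - statistics.mean(control)

def permutation_p_value(treatment, control, n_perm=2000, seed=0):
    """Step 5 (inference): two-sided permutation test. How often does a
    random relabelling of the pooled outcomes produce a difference at
    least as large as the one actually observed?"""
    rng = random.Random(seed)
    observed = abs(mean_difference(treatment, control))
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean_difference(pooled[:n_t], pooled[n_t:])) >= observed:
            extreme += 1
    return extreme / n_perm

# Illustrative simulation: suppose the intervention adds ~5 points on average.
rng = random.Random(7)
t_ids, c_ids = randomly_assign(list(range(200)), seed=7)
t_outcomes = [rng.gauss(55, 10) for _ in t_ids]  # treatment group outcomes
c_outcomes = [rng.gauss(50, 10) for _ in c_ids]  # control group outcomes
estimated_effect = mean_difference(t_outcomes, c_outcomes)
p_value = permutation_p_value(t_outcomes, c_outcomes, seed=7)
```

Because assignment is random, the permutation test needs no distributional assumptions: under the null hypothesis of no effect, every relabelling of outcomes was equally likely, which is exactly what the shuffle simulates.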

Common applications include testing educational interventions (e.g., does a new teaching method improve learning?), health programmes (e.g., does a vaccination campaign reduce disease incidence?), and economic development initiatives (e.g., does microfinance access increase household income?).

Ethical considerations are critical: RCTs may be inappropriate when withholding an intervention from a control group would cause harm, when the intervention is already proven effective elsewhere, or when random assignment is politically infeasible. In such cases, quasi-experimental designs offer a less rigorous but still valuable alternative.

Related Topics

  • Quasi-Experimental Design: alternative when random assignment is not feasible
  • Impact Evaluation: RCTs are the gold standard for impact evaluation
  • Counterfactual: RCTs directly construct a counterfactual through the control group
  • Causal Inference: RCTs provide the strongest basis for causal claims
  • Random Sampling: distinct from random assignment; sampling selects participants, assignment allocates them to groups
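The sampling-versus-assignment distinction can be made concrete with a short Python sketch. The population and group sizes below are hypothetical, chosen purely for illustration.

```python
import random

rng = random.Random(42)
population = [f"household_{i}" for i in range(1000)]  # hypothetical sampling frame

# Random SAMPLING: decides WHO enters the study from the wider population.
study_sample = rng.sample(population, 100)

# Random ASSIGNMENT: allocates those selected BETWEEN treatment and control.
shuffled = list(study_sample)
rng.shuffle(shuffled)
treatment, control = shuffled[:50], shuffled[50:]

print(len(study_sample), len(treatment), len(control))  # prints: 100 50 50
```

Random sampling supports generalisation to the population; random assignment supports causal attribution within the sample. A trial can have either, both, or neither.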

At a Glance

Establishes causal attribution by comparing outcomes between randomly assigned treatment and control groups.

Best For

  • Testing whether a specific intervention causes observed outcomes
  • High-stakes impact evaluations where causal certainty is required
  • Programme scale-up decisions based on rigorous evidence
  • Donor requirements for experimental-level evidence

Complexity

High

Timeframe

6-18 months including design, implementation, and analysis

Linked Indicators

12 indicators across 4 donor frameworks

USAID · DFID · World Bank · FCDO

Examples

  • Proportion of impact evaluations using experimental or quasi-experimental designs
  • Number of RCTs conducted to test programme effectiveness
  • Percentage of scale-up decisions informed by experimental evidence

Related Topics

  • Quasi-Experimental Design (Pillar): a family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.
  • Impact Evaluation (Pillar): a rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.
  • Sampling Methods (Core Concept): systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.
  • Counterfactual (Term): the comparison between what happened and what would have happened in the absence of an intervention; the fundamental basis for establishing causal attribution in impact evaluation.
  • Causal Inference (Term): the process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.
  • Attribution vs Contribution (Term): the distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).