
Randomised Controlled Trial

An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.

Also known as: RCT, Randomized Controlled Trial, Experimental Design

Definition

A randomised controlled trial (RCT) is an experimental evaluation design that randomly assigns participants, communities, or units to either a treatment group (receiving the intervention) or a control group (not receiving it, or receiving a placebo/alternative). This random assignment ensures that, on average, the two groups are equivalent at baseline on both observed and unobserved characteristics. Any systematic difference in outcomes between groups at the end of the trial can therefore be attributed to the intervention itself, establishing causal attribution rather than mere correlation.

RCTs are considered the gold standard for impact evaluation when ethical and practical constraints allow. They directly address the counterfactual question — "what would have happened without the intervention?" — by using the control group as a proxy for this unobservable scenario.
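The core mechanics of random assignment can be sketched in a few lines. This is a hypothetical illustration (all names and numbers are invented): it allocates 1,000 simulated units to treatment and control by lottery and checks that a baseline characteristic balances across groups, as randomisation guarantees on average.

```python
import random
import statistics

# Illustrative sketch: assign 1,000 simulated units to treatment/control
# by lottery, then check baseline equivalence between the two groups.
random.seed(42)
units = [{"baseline_score": random.gauss(50, 10)} for _ in range(1000)]
random.shuffle(units)  # the "lottery"
treatment, control = units[:500], units[500:]

t_mean = statistics.mean(u["baseline_score"] for u in treatment)
c_mean = statistics.mean(u["baseline_score"] for u in control)
print(f"baseline means: treatment={t_mean:.1f}, control={c_mean:.1f}")
```

The two baseline means come out close to each other, which is the point: randomisation makes the control group a credible stand-in for the counterfactual.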

Why It Matters

RCTs provide the strongest widely accepted evidence on whether a programme actually works. In an M&E field often dominated by pre-post comparisons that cannot rule out alternative explanations, RCTs offer credible causal inference. This matters particularly when:

  • Scale-up decisions depend on proving effectiveness beyond reasonable doubt
  • Donor requirements demand experimental-level evidence before funding expansion
  • Resource allocation between competing interventions requires comparative effectiveness data
  • Policy decisions hinge on knowing whether an approach causes improvement

The ability to claim with confidence that "this intervention caused those outcomes" transforms RCTs from academic exercises into powerful tools for evidence-based decision-making in international development.

In Practice

RCTs appear in M&E work primarily as impact evaluations conducted after a programme has demonstrated initial feasibility. The design requires:

  1. Random assignment mechanism — typically using random number generators or lottery systems to allocate participants to treatment and control groups
  2. Baseline data collection — measuring key outcomes before the intervention begins to verify randomisation succeeded and establish pre-intervention equivalence
  3. Implementation fidelity monitoring — ensuring the treatment group actually receives the intervention as designed while the control group does not
  4. Follow-up measurement — collecting outcome data after the intervention period to compare groups
  5. Statistical analysis — using appropriate tests to determine whether observed differences are statistically significant or could have occurred by chance
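The five steps above can be put together in a minimal simulation. This is an assumed, illustrative example, not a real analysis: it fabricates data in which the intervention truly raises the outcome by 5 points, then estimates that effect as the difference in mean follow-up outcomes with a normal-approximation 95% confidence interval.

```python
import math
import random
import statistics

random.seed(7)
n = 400  # units per arm (illustrative)

# 1. Random assignment: lottery over unit IDs
ids = list(range(2 * n))
random.shuffle(ids)
treated = set(ids[:n])

# 2. Baseline data: equivalent on average across arms by randomisation
baseline = {i: random.gauss(60, 12) for i in range(2 * n)}

# 3. Implementation fidelity is assumed perfect in this simulation.

# 4. Follow-up measurement: treated units gain a true effect of 5, plus noise
follow_up = {
    i: baseline[i] + (5 if i in treated else 0) + random.gauss(0, 4)
    for i in range(2 * n)
}

# 5. Statistical analysis: difference in means with a 95% confidence interval
t_out = [follow_up[i] for i in range(2 * n) if i in treated]
c_out = [follow_up[i] for i in range(2 * n) if i not in treated]
diff = statistics.mean(t_out) - statistics.mean(c_out)
se = math.sqrt(statistics.variance(t_out) / n + statistics.variance(c_out) / n)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"estimated effect: {diff:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Because the interval excludes zero, the simulated evaluation would conclude the intervention caused an improvement; in practice a pre-registered test (e.g. a two-sample t-test or regression with covariate adjustment) would be used instead of this bare comparison.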

Common applications include testing educational interventions (e.g., does a new teaching method improve learning?), health programmes (e.g., does a vaccination campaign reduce disease incidence?), and economic development initiatives (e.g., does microfinance access increase household income?).

Ethical considerations are critical: RCTs may be inappropriate when withholding an intervention from a control group would cause harm, when the intervention is already proven effective elsewhere, or when random assignment is politically infeasible. In such cases, quasi-experimental designs offer a less rigorous but still valuable alternative.

Last updated: 2026-02-27