Term · Methods · 3 min read

Counterfactual

The comparison between what actually happened and what would have happened in the absence of an intervention: the fundamental basis for establishing causal attribution in impact evaluation.

Definition

A counterfactual represents what would have happened to a group of beneficiaries in the absence of an intervention: the alternative reality against which actual outcomes are compared. It is the fundamental basis for establishing causal attribution in impact evaluation, because without knowing what would have occurred anyway, you cannot claim that your programme caused the observed changes.

In practice, the counterfactual is constructed through comparison groups (control or matched comparison groups), statistical techniques (propensity score matching, regression discontinuity), or quasi-experimental designs that approximate what a true control group would have experienced. The counterfactual is not merely a baseline measurement; it is a comparison of post-intervention outcomes between those who received the intervention and those who did not.
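As a minimal sketch with invented outcome values, the core estimate is a difference in post-intervention means across groups, not a before-and-after comparison for the treated group alone:

```python
# Illustrative sketch: the counterfactual comparison is a difference in
# post-intervention outcomes between groups. All numbers are invented.

treatment_outcomes = [62, 58, 71, 66, 60]   # beneficiaries, post-intervention
comparison_outcomes = [50, 55, 48, 53, 52]  # comparison group, post-intervention

def mean(values):
    return sum(values) / len(values)

# Estimated programme effect: how treated units differ from the
# counterfactual proxy (the comparison group) after the intervention.
estimated_effect = mean(treatment_outcomes) - mean(comparison_outcomes)
print(estimated_effect)
```

The comparison group serves as a proxy for the counterfactual; the estimate is only as credible as the claim that the two groups were equivalent to begin with.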

Why It Matters

The counterfactual is essential because correlation does not imply causation. Without a counterfactual, you cannot distinguish programme effects from:

  • Secular trends: changes that would have occurred anyway due to broader economic, political, or social forces
  • Selection bias: the fact that programme participants often differ systematically from non-participants in ways that affect outcomes
  • External shocks: events like market changes, climate events, or policy shifts that affect all groups
  • Maturation: natural changes that occur over time regardless of intervention

When donors require evidence that a programme "made a difference" or "caused improvement," they are asking for counterfactual-based attribution. Without it, you can only describe what happened, not what your programme achieved.

In Practice

Counterfactuals appear in several forms across evaluation approaches:

Randomised Controlled Trials (RCTs) create the gold-standard counterfactual by randomly assigning participants to treatment and control groups. Randomisation ensures the groups are statistically equivalent at baseline, so any post-intervention differences can be attributed to the programme.

Quasi-experimental designs approximate counterfactuals when randomisation is not feasible. Propensity score matching compares programme participants with non-participants who have similar observable characteristics. Regression discontinuity exploits arbitrary cutoffs (e.g., only communities above a poverty threshold receive the programme) to create comparable groups.
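The matching step can be sketched as follows. In practice the propensity score is estimated with a logistic regression of participation on observed covariates; here a single made-up score per unit stands in for it, and each participant is paired with the nearest-scoring non-participant:

```python
# Hypothetical nearest-neighbour matching sketch. Scores and outcomes
# are invented; a real propensity score comes from a participation model.

participants = [("p1", 0.72, 64), ("p2", 0.55, 59), ("p3", 0.80, 70)]
non_participants = [("n1", 0.70, 51), ("n2", 0.50, 55),
                    ("n3", 0.81, 60), ("n4", 0.30, 49)]

matched_effects = []
for _, p_score, p_outcome in participants:
    # Match each participant to the non-participant with the closest score.
    _, m_score, m_outcome = min(non_participants,
                                key=lambda u: abs(u[1] - p_score))
    matched_effects.append(p_outcome - m_outcome)

# Average treatment effect on the treated, under the matching assumptions.
att = sum(matched_effects) / len(matched_effects)
print(att)
```

The estimate is unbiased only if the observed characteristics capture everything that drives both participation and outcomes, which is exactly the assumption careful designs must defend.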

Difference-in-differences compares changes over time between treatment and control groups, isolating the programme effect from trends affecting both groups.
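With invented survey means, the difference-in-differences arithmetic looks like this; the result is valid only under the parallel-trends assumption that both groups would have changed by the same amount without the programme:

```python
# Difference-in-differences sketch with invented group means.
# Both groups trend upward; DiD nets out the shared trend.

treat_pre, treat_post = 40.0, 55.0   # treatment group means
comp_pre, comp_post = 42.0, 48.0     # comparison group means

treatment_change = treat_post - treat_pre    # includes trend + programme effect
comparison_change = comp_post - comp_pre     # trend only, by assumption

# Programme effect under the parallel-trends assumption.
did_estimate = treatment_change - comparison_change
print(did_estimate)  # 9.0
```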

Contribution analysis takes a different approach when counterfactuals cannot be constructed: it builds a causal story through evidence that alternative explanations have been ruled out, rather than through direct comparison.

The choice of counterfactual method depends on feasibility, ethics, resources, and the strength of attribution required. RCTs provide the strongest claims but are often impractical or unethical. Quasi-experimental methods offer a compromise but require careful design to avoid bias.

Related Topics

  • Quasi-Experimental Design: methods for constructing counterfactuals without randomisation
  • Impact Evaluation: evaluations specifically designed to establish causal attribution
  • Contribution Analysis: an alternative approach when counterfactuals are not feasible
  • Attribution vs Contribution: the distinction between these two approaches to causal claims
  • Randomised Controlled Trial: the gold-standard method for counterfactual construction

Further Reading

  • What Works Clearinghouse Procedures and Standards Handbook (U.S. Department of Education): standards for counterfactual-based evidence in education interventions.
  • Imbens, G. W., & Rubin, D. B. (2015). Causal Inference for Statistics, Social, and Biomedical Sciences: technical foundation for counterfactual reasoning and causal inference methods.
  • BetterEvaluation, Counterfactual Approaches: overview of methods for constructing counterfactuals in evaluation practice.
  • World Bank Impact Evaluation Guide: practical guidance on counterfactual methods for development programmes.

At a Glance

Establishes whether observed outcomes can be causally attributed to an intervention rather than other factors.

Best For

  • Impact evaluations requiring causal claims
  • Distinguishing programme effects from external influences
  • Designing rigorous evaluation approaches
  • Understanding limitations of attribution

Complexity

High

Timeframe

Built into evaluation design; analysis occurs post-intervention

Linked Indicators

12 indicators across 4 donor frameworks

USAID · DFID · World Bank · EU

Examples

  • Proportion of impact evaluations that explicitly address the counterfactual
  • Average difference in outcomes between treatment and control groups
  • Statistical significance of counterfactual-based impact estimates
