Counterfactual

The comparison between what happened and what would have happened in the absence of an intervention, the fundamental basis for establishing causal attribution in impact evaluation.

Definition

A counterfactual represents what would have happened to a group of beneficiaries in the absence of an intervention - the alternative reality against which actual outcomes are compared. It is the fundamental basis for establishing causal attribution in impact evaluation: without knowing what would have occurred anyway, you cannot claim your program caused observed changes.

In practice, the counterfactual is constructed through randomly assigned control groups or through quasi-experimental techniques (propensity score matching, regression discontinuity) that approximate what a true control group would have experienced. The counterfactual is not merely a baseline measurement - it is a comparison of post-intervention outcomes between those who received the intervention and those who did not.
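
To make the potential-outcomes logic concrete, here is a minimal simulation sketch (the numbers, variable names, and effect size are illustrative assumptions, not drawn from any real program):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each unit has two potential outcomes: y0 (without the program)
# and y1 (with it). In reality only one is ever observed; the
# unobserved one is the counterfactual.
y0 = rng.normal(50, 10, n)          # outcome absent the intervention
true_effect = 5.0                   # hypothetical program effect
y1 = y0 + true_effect               # outcome with the intervention

# Random assignment makes treated and untreated units comparable.
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)

# Comparing post-intervention outcomes across groups recovers the effect.
estimate = observed[treated].mean() - observed[~treated].mean()
print(f"estimated effect: {estimate:.2f} (true: {true_effect})")
```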

Why It Matters

The counterfactual is essential because correlation does not imply causation. Without a counterfactual, you cannot distinguish program effects from:

  • Secular trends: changes that would have occurred anyway due to broader economic, political, or social forces (see the sketch after this list)
  • Selection bias: the fact that program participants often differ systematically from non-participants in ways that affect outcomes
  • External shocks: events like market changes, climate events, or policy shifts that affect all groups
  • Maturation: natural changes that occur over time regardless of intervention
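
The sketch below illustrates the first of these with hypothetical data: when everyone's outcomes improve over time, a pre-post comparison bundles the trend into the "effect", while a counterfactual comparison nets it out (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

baseline = rng.normal(100, 15, n)        # pre-intervention outcome
secular_trend = 8.0                      # change everyone experiences anyway
true_effect = 3.0                        # actual program effect

treated = rng.random(n) < 0.5
endline = baseline + secular_trend + true_effect * treated + rng.normal(0, 5, n)

# Pre-post on participants only: trend and effect are entangled.
pre_post = endline[treated].mean() - baseline[treated].mean()

# Counterfactual comparison: the control group absorbs the trend.
diff = endline[treated].mean() - endline[~treated].mean()

print(f"pre-post estimate:          {pre_post:.1f}  (trend + effect)")
print(f"treatment-control estimate: {diff:.1f}   (true effect: {true_effect})")
```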

When donors require evidence that a program "made a difference" or "caused improvement," they are asking for counterfactual-based attribution. Without it, you can only describe what happened, not what your program achieved.

In Practice

Counterfactuals appear in several forms across evaluation approaches:

Randomised Controlled Trials (RCTs) create the gold-standard counterfactual by randomly assigning participants to treatment and control groups. Randomisation ensures the groups are statistically equivalent at baseline (in expectation), so any systematic post-intervention difference can be attributed to the program.
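
A compact illustration of why this works, using hypothetical data (a sketch, assuming scipy is available for the significance test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2_000

# A baseline covariate that strongly predicts the outcome.
motivation = rng.normal(0, 1, n)

# Random assignment: the coin flip is independent of motivation,
# so the groups are statistically equivalent at baseline.
treat = rng.random(n) < 0.5
print(f"baseline balance: {motivation[treat].mean():+.3f} vs "
      f"{motivation[~treat].mean():+.3f}")

# Outcome driven by motivation plus a true effect of 2.0 (assumed).
outcome = 10 + 4 * motivation + 2.0 * treat + rng.normal(0, 2, n)

t, p = stats.ttest_ind(outcome[treat], outcome[~treat])
print(f"estimated effect: {outcome[treat].mean() - outcome[~treat].mean():.2f}, "
      f"p = {p:.1e}")
```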

Quasi-experimental designs approximate counterfactuals when randomisation is not feasible. Propensity score matching compares program participants with non-participants who have similar observable characteristics. Regression discontinuity exploits arbitrary cutoffs (e.g., only communities above a poverty threshold receive the program) to create comparable groups.
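
A bare-bones propensity score matching sketch, assuming scikit-learn is available; the covariates, selection model, and true effect are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 4_000

# Observable characteristics that drive both participation and outcomes.
X = rng.normal(0, 1, (n, 3))
p_select = 1 / (1 + np.exp(-(X @ np.array([1.0, 0.5, -0.5]))))
treated = rng.random(n) < p_select               # self-selection, not random
y = X @ np.array([2.0, 1.0, 1.0]) + 3.0 * treated + rng.normal(0, 1, n)

# 1. Model the probability of participation (the propensity score).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each participant to the non-participant with the closest score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3. Compare outcomes across matched pairs.
att = (y[treated] - y[~treated][idx.ravel()]).mean()
print(f"naive difference: {y[treated].mean() - y[~treated].mean():.2f}")
print(f"matched estimate: {att:.2f} (true effect: 3.0)")
```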

Difference-in-differences compares changes over time between treatment and control groups, isolating the program effect from trends affecting both groups.
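
The difference-in-differences arithmetic reduces to four group means. A worked toy example, with all figures hypothetical:

```python
# Hypothetical group means (e.g. household income in USD).
treat_pre, treat_post = 100.0, 118.0   # participants
ctrl_pre,  ctrl_post  = 102.0, 112.0   # comparison group

# Each group's change over time...
treat_change = treat_post - treat_pre  # 18: trend + program effect
ctrl_change  = ctrl_post - ctrl_pre    # 10: trend only

# ...and the difference between those changes isolates the effect,
# assuming both groups would have followed the same trend
# (the "parallel trends" assumption).
did = treat_change - ctrl_change       # 8
print(f"difference-in-differences estimate: {did:.1f}")
```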

Contribution analysis takes a different approach when counterfactuals cannot be constructed: it builds a causal story through evidence that alternative explanations have been ruled out, rather than through direct comparison.

The choice of counterfactual method depends on feasibility, ethics, resources, and the strength of attribution required. RCTs provide the strongest claims but are often impractical or unethical. Quasi-experimental methods offer a compromise but require careful design to avoid bias.

Proposal Context

Counterfactual framing matters in proposals whenever the evaluation plan claims impact attribution. Most end-of-project evaluations can answer "what changed?" without a counterfactual; only a subset answer "what change was caused by the program?", and those require a counterfactual design. Common proposal pitfalls:

  • Claiming impact evaluation in the narrative while proposing only a pre-post design with no comparison group - a design that can support contribution claims at best, not counterfactual impact evaluation
  • Proposing an RCT or quasi-experimental design without the budget or scale to execute it (a typical impact evaluation adds 50-150% to the cost of a standard evaluation)
  • Overclaiming attribution from non-counterfactual designs, e.g. post-evaluation reports that assert "the program caused X% improvement" without counterfactual evidence
  • Missing the option to use theory-based evaluation when a counterfactual is infeasible (see theory-based-evaluation)

Propose a counterfactual design only where the program design, budget, and context support it.

Related Topics

  • Quasi-Experimental Design: Methods for constructing counterfactuals without randomisation
  • Impact Evaluation: Evaluations specifically designed to establish causal attribution
  • Contribution Analysis: Alternative approach when counterfactuals are not feasible
  • Attribution vs Contribution: Understanding the distinction between these two approaches to causal claims
  • Randomised Controlled Trial: The gold-standard method for counterfactual construction

At a Glance

Establishes whether observed outcomes can be causally attributed to an intervention rather than other factors.

Best For

  • Impact evaluations requiring causal claims
  • Distinguishing program effects from external influences
  • Designing rigorous evaluation approaches
  • Understanding limitations of attribution

Linked Indicators

12 indicators across 4 donor frameworks: USAID, DFID, World Bank, EU

Examples

  • Proportion of impact evaluations that explicitly address the counterfactual
  • Average difference in outcomes between treatment and control groups
  • Statistical significance of counterfactual-based impact estimates


Decision Guides

  • RCT vs Quasi-Experimental Design: When to use a randomised controlled trial vs a quasi-experimental design - feasibility, cost, rigor, and what each can actually tell you about your program's impact.