Definition
A counterfactual represents what would have happened to a group of beneficiaries in the absence of an intervention - the alternative reality against which actual outcomes are compared. It is the fundamental basis for establishing causal attribution in impact evaluation: without knowing what would have occurred anyway, you cannot claim your program caused observed changes.
In practice, the counterfactual is constructed through comparison groups (randomised control groups or matched comparison groups), statistical techniques (propensity score matching, regression discontinuity), or quasi-experimental designs that approximate what a true control group would have experienced. The counterfactual is not merely a baseline measurement - it is a comparison of post-intervention outcomes between those who received the intervention and those who did not.
Why It Matters
The counterfactual is essential because correlation does not imply causation. Without a counterfactual, you cannot distinguish program effects from:
- Secular trends: changes that would have occurred anyway due to broader economic, political, or social forces
- Selection bias: the fact that program participants often differ systematically from non-participants in ways that affect outcomes
- External shocks: events like market changes, climate events, or policy shifts that affect all groups
- Maturation: natural changes that occur over time regardless of intervention
When donors require evidence that a program "made a difference" or "caused improvement," they are asking for counterfactual-based attribution. Without it, you can only describe what happened, not what your program achieved.
In Practice
Counterfactuals appear in several forms across evaluation approaches:
Randomised Controlled Trials (RCTs) create the gold-standard counterfactual by randomly assigning participants to treatment and control groups. Randomisation ensures the groups are equivalent in expectation at baseline, so any post-intervention difference can be attributed to the program rather than to pre-existing differences between the groups.
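The RCT logic above can be sketched numerically. This is a minimal illustration with invented numbers (a true effect of 5 points on a 0-100 outcome scale, baseline noise, 1,000 hypothetical participants), not a real evaluation dataset: because assignment is random, the difference in group means recovers the causal effect.

```python
import random

random.seed(42)

# Hypothetical illustration: randomly assign 1,000 participants to
# treatment or control. All names and numbers here are invented.
participants = list(range(1000))
random.shuffle(participants)
treatment = set(participants[:500])  # first half after shuffling

# Simulated outcomes: baseline around 50 with noise; the program
# adds a true effect of 5 points for treated participants only.
def outcome(pid):
    base = 50 + random.gauss(0, 10)        # variation unrelated to the program
    effect = 5 if pid in treatment else 0  # true program effect
    return base + effect

outcomes = {pid: outcome(pid) for pid in range(1000)}
control = [p for p in range(1000) if p not in treatment]

treat_mean = sum(outcomes[p] for p in treatment) / len(treatment)
control_mean = sum(outcomes[p] for p in control) / len(control)

# Random assignment makes the control group a valid counterfactual,
# so the difference in means estimates the causal effect (close to 5).
print(f"estimated effect: {treat_mean - control_mean:.2f}")
```

Note that the estimate approximates, but does not exactly equal, the true effect of 5: sampling noise remains, which is why real RCTs report confidence intervals alongside point estimates.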
Quasi-experimental designs approximate counterfactuals when randomisation is not feasible. Propensity score matching compares program participants with non-participants who have similar observable characteristics. Regression discontinuity exploits arbitrary eligibility cutoffs (e.g., only communities scoring above a poverty threshold receive the program) and compares units just either side of the cutoff, which are otherwise similar.
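The matching idea can be illustrated with a toy calculation. This sketch matches on a single observable characteristic (standing in for an estimated propensity score); all records are invented for illustration:

```python
# Toy records: (characteristic, received program?, outcome).
# A single observable stands in for an estimated propensity score.
participants = [
    (20, True, 58), (25, True, 61), (40, True, 70),    # participants
    (19, False, 52), (26, False, 55), (41, False, 66),  # similar non-participants
    (80, False, 90),                                    # dissimilar non-participant
]

treated = [(x, y) for x, t, y in participants if t]
controls = [(x, y) for x, t, y in participants if not t]

# Match each participant to the non-participant with the closest
# characteristic, then average outcome differences across matched pairs.
effects = []
for x_t, y_t in treated:
    x_c, y_c = min(controls, key=lambda c: abs(c[0] - x_t))
    effects.append(y_t - y_c)

att = sum(effects) / len(effects)  # average effect on the treated
print(f"matched estimate: {att:.1f}")
```

Note that the dissimilar non-participant (characteristic 80) is never selected as a match: matching deliberately discards comparison units outside the region of common support, which is what protects the estimate from selection bias on observed characteristics.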
Difference-in-differences compares changes over time between treatment and control groups, isolating the program effect from trends affecting both groups.
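The difference-in-differences calculation itself is simple arithmetic over four group means. The numbers below are invented for illustration; the key assumption, stated in the comments, is that both groups would have followed parallel trends absent the program:

```python
# Invented group means, before and after the intervention period.
treat_pre, treat_post = 40.0, 55.0  # treatment group
ctrl_pre, ctrl_post = 38.0, 45.0    # comparison group

# Change in each group over the same period.
treat_change = treat_post - treat_pre  # 15.0: program effect + shared trend
ctrl_change = ctrl_post - ctrl_pre     # 7.0: shared trend only

# Subtracting the comparison group's change removes the trend that
# affects both groups, assuming the groups would have moved in
# parallel without the program.
did = treat_change - ctrl_change  # 15.0 - 7.0 = 8.0
print(f"difference-in-differences estimate: {did}")
```

A naive pre-post comparison on the treatment group alone would report a 15-point change; the DiD estimate of 8 points attributes the remaining 7 points to the shared trend rather than to the program.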
Contribution analysis takes a different approach when counterfactuals cannot be constructed: it builds a credible causal story by assembling evidence and systematically ruling out alternative explanations, rather than through direct comparison with a control group.
The choice of counterfactual method depends on feasibility, ethics, resources, and the strength of attribution required. RCTs provide the strongest claims but are often impractical or unethical. Quasi-experimental methods offer a compromise but require careful design to avoid bias.
Proposal Context
Counterfactual framing matters in proposals whenever the evaluation plan claims impact attribution. Most end-of-project evaluations can answer "what changed?" without a counterfactual; only a subset answer "what change was caused by the program?", and those require counterfactual design. Common proposal pitfalls:
- Claiming impact evaluation in the narrative but proposing only a pre-post design with no comparison group (a pre-post design measures change, not attribution)
- Proposing an RCT or quasi-experimental design without the budget or scale to execute it (counterfactual impact evaluation typically adds 50-150% to the cost of a standard evaluation)
- Overclaiming attribution from non-counterfactual designs, such as post-evaluation reports asserting "the program caused X% improvement" without counterfactual evidence
- Missing the option of theory-based evaluation when a counterfactual is infeasible (see theory-based-evaluation)
Propose counterfactual design only where the program design, budget, and context support it.
Related Topics
- Quasi-Experimental Design: Methods for constructing counterfactuals without randomisation
- Impact Evaluation: Evaluations specifically designed to establish causal attribution
- Contribution Analysis: Alternative approach when counterfactuals are not feasible
- Attribution vs Contribution: Understanding the distinction between these two approaches to causal claims
- Randomised Controlled Trial: The gold-standard method for counterfactual construction