Causal Inference

The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.

Definition

Causal inference is the process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations. It answers the question: "Did our programme make a difference, or would these outcomes have occurred anyway?"

Causal inference goes beyond correlation or association. It requires constructing or approximating what would have happened in the absence of the intervention (the counterfactual), then comparing actual outcomes against this alternative reality. The strength of causal claims depends on the credibility of the counterfactual and the extent to which alternative explanations have been systematically ruled out.
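To make the logic concrete, here is a minimal sketch (in Python, with entirely invented numbers) of the potential-outcomes idea behind the counterfactual: each unit has an outcome with and without the programme, only one of which is ever observed, so a naive comparison of enrolled and non-enrolled units can diverge sharply from the assumed true effect when enrolment is selective.

```python
# Minimal potential-outcomes sketch; all values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

baseline = rng.normal(50, 10, n)           # outcome without the programme, Y(0)
true_effect = 5.0                          # assumed true programme effect
with_programme = baseline + true_effect    # outcome with the programme, Y(1)

# Hypothetical selection: better-off units are more likely to enrol.
propensity = (baseline - baseline.min()) / (baseline.max() - baseline.min())
enrolled = rng.random(n) < propensity

# Only one potential outcome is ever observed for each unit.
observed = np.where(enrolled, with_programme, baseline)

naive_difference = observed[enrolled].mean() - observed[~enrolled].mean()
average_treatment_effect = (with_programme - baseline).mean()

print(f"True average treatment effect:    {average_treatment_effect:.1f}")
print(f"Naive enrolled-vs-not comparison: {naive_difference:.1f}")  # inflated by selection bias
```

The gap between the two numbers is the selection bias that a credible counterfactual is meant to remove.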

Why It Matters

Causal inference is essential when stakeholders need defensible evidence that a programme achieved its intended effects. Without it, you can only describe what happened, not what your programme achieved.

Donors, funders, and decision-makers increasingly require causal evidence before scaling programmes or continuing investments. Correlation alone is insufficient because observed changes may result from:

  • Secular trends: broader economic, political, or social forces affecting outcomes independently of your programme
  • Selection bias: systematic differences between programme participants and non-participants that affect outcomes
  • External shocks: events like market changes, climate events, or policy shifts that affect all groups
  • Maturation: natural changes that occur over time regardless of intervention

When the question is whether your programme "made a difference" or "caused improvement," causal inference provides the methodological foundation for answering that question credibly.

In Practice

Causal inference appears across multiple evaluation approaches, each with different strengths and feasibility constraints:

Randomised Controlled Trials (RCTs) support the strongest causal claims: random assignment makes treatment and control groups equivalent in expectation at baseline, so systematic post-intervention differences can be attributed to the programme with high confidence.
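As a rough sketch of how such an estimate is usually computed, the example below compares mean outcomes between randomly assigned groups and tests whether the difference is larger than chance would explain; the +5 effect, group sizes, and outcome distributions are simulated assumptions, not results from any real trial.

```python
# Minimal RCT-style difference-in-means sketch on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

control = rng.normal(50, 10, 500)      # outcomes under random assignment to control
treatment = rng.normal(55, 10, 500)    # outcomes with an assumed +5 programme effect

estimated_effect = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated programme effect: {estimated_effect:.1f}")
print(f"p-value for the difference: {p_value:.4f}")
```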

Quasi-experimental designs approximate causal inference when randomisation is not feasible. Methods include:

  • Propensity score matching: comparing participants with non-participants who have similar observable characteristics
  • Regression discontinuity: exploiting arbitrary eligibility cutoffs to create comparable groups
  • Difference-in-differences: comparing changes over time between treatment and comparison groups (a minimal sketch follows this list)
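The difference-in-differences logic, for example, can be written in a few lines. The group means below are invented for illustration, and the estimate is only credible if both groups would have followed parallel trends in the absence of the programme.

```python
# Minimal difference-in-differences sketch with hypothetical group means.
treat_before, treat_after = 40.0, 52.0
control_before, control_after = 41.0, 46.0

treat_change = treat_after - treat_before          # 12.0: programme effect plus shared trend
control_change = control_after - control_before    # 5.0: shared trend only

did_estimate = treat_change - control_change       # 7.0: effect net of the shared trend
print(f"Difference-in-differences estimate: {did_estimate:.1f}")
```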

Contribution analysis offers an alternative when counterfactual-based methods are impractical. Rather than relying on a direct comparison group, it builds a credible causal story by assembling evidence that the programme's theory of change held and that alternative explanations can be ruled out.

Process tracing examines internal causal mechanisms: whether the expected pathway from activities to outcomes actually unfolded as theorised.

The choice of method depends on feasibility, ethics, resources, and the strength of attribution required. Stronger causal claims require more resources but provide greater confidence in programme effectiveness.

Related Topics

  • Attribution vs Contribution: understanding the distinction between counterfactual-based and narrative-based causal claims
  • Counterfactual: the comparison condition that makes causal inference possible
  • Impact Evaluation: evaluations specifically designed to establish causal attribution
  • Quasi-Experimental Design: methods for causal inference without randomisation
  • Statistical Significance: determining whether observed effects are likely real or due to chance
  • Bias: systematic errors that threaten causal inference validity

Further Reading

  • Imbens, G. W., & Rubin, D. B. (2015). Causal Inference for Statistics, Social, and Biomedical Sciences. Technical foundation for counterfactual reasoning and causal inference methods.
  • Heller, T., et al. (2020). Causal Inference in Program Evaluation. RAND Corporation guide to causal methods for development programmes.
  • BetterEvaluation: Causal Inference. Overview of methods for establishing causality in evaluation practice.
  • What Works Clearinghouse Procedures and Standards Handbook. U.S. Department of Education standards for causal evidence in education interventions.

At a Glance

Determines whether observed outcomes can be credibly attributed to an intervention rather than other factors.

Best For

  • Impact evaluations requiring causal claims
  • Distinguishing programme effects from external influences
  • Testing whether programme theory holds in practice
  • Making defensible claims about programme effectiveness

Complexity

High

Timeframe

Built into evaluation design; analysis occurs post-intervention

Linked Indicators

18 indicators across 5 donor frameworks

USAID, DFID, World Bank, EU, Global Fund

Examples

  • Proportion of impact evaluations using credible causal inference methods
  • Average difference in outcomes between treatment and comparison groups
  • Statistical significance of estimated programme effects
