M&E Studio

Decision-Grade M&E, Responsibly Built

© 2026 Logic Lab LLC. All rights reserved.

Term · Methods · 3 min read

Confounding Variables

Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.

Definition

A confounding variable (or confounder) is an extraneous factor that correlates with both the intervention being evaluated and the outcome of interest, creating a spurious association that can lead to incorrect causal conclusions. Confounders threaten the internal validity of an evaluation by making it appear that the intervention caused an outcome when, in fact, the observed effect may be due to the confounding variable.

For example, in evaluating a job training programme's impact on employment, socioeconomic status could be a confounder: individuals from higher socioeconomic backgrounds may be more likely to enroll in the programme AND more likely to find employment regardless of training. Without accounting for this confounder, the evaluation would overestimate the programme's true impact.
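The job-training example can be simulated to show how a confounder manufactures a spurious effect. In this minimal Python sketch (all probabilities and variable names are illustrative assumptions, not figures from any real evaluation), training has zero true effect on employment, yet a naive comparison of participants and non-participants shows a large positive gap; stratifying on the confounder recovers the true effect of zero:

```python
import random

random.seed(0)

# Illustrative simulation (all numbers are made up): socioeconomic
# status (SES) raises both enrollment in training and employment.
# Training itself has ZERO true effect here.
rows = []
for _ in range(100_000):
    high_ses = random.random() < 0.5
    enrolled = random.random() < (0.7 if high_ses else 0.3)
    employed = random.random() < (0.8 if high_ses else 0.4)  # ignores training
    rows.append((enrolled, employed, high_ses))

def employment_rate(subset):
    return sum(employed for _, employed, _ in subset) / len(subset)

# Naive comparison: participants vs non-participants.
naive = (employment_rate([r for r in rows if r[0]])
         - employment_rate([r for r in rows if not r[0]]))

# Stratifying on the confounder and averaging the strata recovers ~0.
def stratum_gap(ses):
    treated = [r for r in rows if r[0] and r[2] == ses]
    control = [r for r in rows if not r[0] and r[2] == ses]
    return employment_rate(treated) - employment_rate(control)

adjusted = 0.5 * stratum_gap(True) + 0.5 * stratum_gap(False)
print(f"naive gap:    {naive:+.3f}")     # spurious, roughly +0.16
print(f"adjusted gap: {adjusted:+.3f}")  # close to zero
```

The naive gap is entirely confounding: higher-SES individuals are over-represented among participants and would have been employed anyway.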

Identifying and controlling for confounders is essential for credible causal inference and accurate attribution of outcomes to interventions rather than to other factors.

Why It Matters

Confounding variables are the primary obstacle to establishing causal claims in M&E. Without addressing confounders, evaluations risk:

  • Overestimating impact: attributing outcomes to the intervention that were actually caused by pre-existing differences between participants and non-participants
  • Underestimating impact: masking a real effect because a confounder worked in the opposite direction
  • Drawing incorrect conclusions: leading to decisions about scaling, modifying, or terminating programmes based on flawed evidence

This is why quasi-experimental designs and impact evaluations dedicate substantial attention to confounder identification and control. The threat of confounding is what distinguishes rigorous causal analysis from simple before-after comparisons or participant-only outcome reporting.

Understanding confounders is also critical for interpreting any evaluation that claims causal effects. When reading an impact evaluation, the first question should be: "What confounders did the evaluators consider, and how did they control for them?"

In Practice

Confounders appear in programmes across sectors. Common examples include:

  • Health interventions: Age, baseline health status, and access to healthcare confound the relationship between a nutrition programme and child health outcomes
  • Education programmes: Prior academic achievement and parental education confound the relationship between tutoring and test scores
  • Economic development: Market access and infrastructure quality confound the relationship between business training and revenue growth

Addressing confounders requires either design-based or analysis-based strategies:

  1. Randomized designs eliminate confounding through random assignment (though attrition can reintroduce confounding)
  2. Quasi-experimental designs use techniques like propensity score matching, regression discontinuity, or difference-in-differences to approximate randomization
  3. Statistical controls include regression adjustment, stratification, or matching on observed confounders
  4. Sensitivity analysis assesses how robust findings are to unobserved confounders

The key is to identify potential confounders during evaluation design (through theory and context analysis) and select appropriate control strategies before data collection begins.
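As a concrete instance of the second strategy, difference-in-differences reduces to one line of arithmetic. The figures below are hypothetical (average monthly revenue for the business-training example), not data from any real study:

```python
def diff_in_diff(t_before, t_after, c_before, c_after):
    """Treated group's change minus the comparison group's change.

    Subtracting the comparison-group trend removes confounders that
    are stable over time (e.g. market access, infrastructure quality).
    """
    return (t_after - t_before) - (c_after - c_before)

# Hypothetical averages: treated firms rose 100 -> 130, while
# comparison firms rose 100 -> 115, so 15 of the 30-point rise
# is secular trend rather than programme effect.
effect = diff_in_diff(t_before=100, t_after=130, c_before=100, c_after=115)
print(effect)  # 15
```

The identifying assumption is that both groups would have followed the same trend absent the programme; time-varying confounders that affect only one group still bias the estimate.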

Related Topics

  • Bias: the broader category of systematic errors, of which confounding is one
  • Causal inference: the framework for establishing cause-effect relationships
  • Selection bias: a source of confounding arising from non-random assignment
  • Quasi-experimental design: methods for controlling confounders without randomization
  • Impact evaluation: evaluations specifically designed to establish causal effects
  • Counterfactual: the comparison needed to isolate intervention effects from confounders
  • Attribution vs Contribution: distinguishing causal claims from contribution stories

Further Reading

  • Causal Inference in Statistics: A Primer, Pearl, Glymour, and Jewell. Accessible introduction to confounding and causal reasoning.
  • Designing Quasi-Experimental Impact Evaluations, International Initiative for Impact Evaluation (3ie). Practical guidance on controlling confounders.
  • What Is a Confounder?, BMJ. Concise clinical epidemiology explanation with examples.

At a Glance

Extraneous factors that create false causal associations between an intervention and its outcome.

Best For

  • Designing impact evaluations and quasi-experimental studies
  • Interpreting evaluation results and assessing causal claims
  • Selecting appropriate comparison groups
  • Planning statistical controls and matching strategies

Complexity

Medium

Timeframe

Considered during evaluation design; addressed through study design or analysis

Linked Indicators

8 indicators across 3 donor frameworks

USAID · World Bank · OECD-DAC

Examples

  • Proportion of evaluation findings that account for potential confounding variables
  • Use of quasi-experimental designs to control for confounders in impact attribution
  • Percentage of outcome changes explained by confounding vs intervention effects

Related Topics

  • Term — Bias: Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.
  • Term — Causal Inference: The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.
  • Pillar — Quasi-Experimental Design: A family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.
  • Pillar — Impact Evaluation: A rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.
  • Term — Counterfactual: The comparison between what happened and what would have happened in the absence of an intervention; the fundamental basis for establishing causal attribution in impact evaluation.
  • Term — Attribution vs Contribution: The distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).