Confounding Variables

Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.

Also known as: confounders, confounding factors, third variables

Definition

A confounding variable (or confounder) is an extraneous factor that correlates with both the intervention being evaluated and the outcome of interest, creating a spurious association that can lead to incorrect causal conclusions. Confounders threaten the internal validity of an evaluation by making it appear that the intervention caused an outcome when, in fact, the observed effect may be due to the confounding variable.

For example, in evaluating a job training programme's impact on employment, socioeconomic status could be a confounder: individuals from higher socioeconomic backgrounds may be more likely to enrol in the programme AND more likely to find employment regardless of training. Without accounting for this confounder, the evaluation would overestimate the programme's true impact.

Identifying and controlling for confounders is essential for credible causal inference and accurate attribution of outcomes to interventions rather than to other factors.
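The job training example above can be sketched in a small simulation. Everything here is a hypothetical illustration, not real data: socioeconomic status (SES) is assumed to raise both enrolment and employment, while the programme's true effect is fixed at +0.10. A naive comparison of enrolled vs. not-enrolled absorbs the SES effect; stratifying by SES recovers the true effect.

```python
import random

random.seed(0)

# Hypothetical data-generating assumptions (illustrative only):
#   P(high SES) = 0.5; high SES raises enrolment (0.7 vs 0.3)
#   and employment (+0.40); the true training effect is +0.10.
n = 100_000
emp = {(s, e): [0, 0] for s in (0, 1) for e in (0, 1)}  # [employed, total]

for _ in range(n):
    ses = 1 if random.random() < 0.5 else 0
    enrolled = 1 if random.random() < (0.7 if ses else 0.3) else 0
    employed = 1 if random.random() < 0.2 + 0.4 * ses + 0.1 * enrolled else 0
    emp[(ses, enrolled)][0] += employed
    emp[(ses, enrolled)][1] += 1

def rate(cells):
    """Employment rate pooled over the given (ses, enrolled) cells."""
    return sum(emp[c][0] for c in cells) / sum(emp[c][1] for c in cells)

# Naive comparison pools both SES groups, so SES differences leak
# into the estimate and inflate it well above the true +0.10.
naive = rate([(0, 1), (1, 1)]) - rate([(0, 0), (1, 0)])

# Stratifying by SES and averaging the within-stratum differences
# controls the confounder and recovers roughly the true +0.10.
adjusted = 0.5 * ((rate([(0, 1)]) - rate([(0, 0)]))
                  + (rate([(1, 1)]) - rate([(1, 0)])))

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Under these assumptions the naive estimate comes out around 0.26, more than double the true effect, while the SES-stratified estimate sits near 0.10.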

Why It Matters

Confounding variables are the primary obstacle to establishing causal claims in M&E. Without addressing confounders, evaluations risk:

  • Overestimating impact — attributing outcomes to the intervention that were actually caused by pre-existing differences between participants and non-participants
  • Underestimating impact — masking a real effect because a confounder worked in the opposite direction
  • Drawing incorrect conclusions — leading to decisions about scaling, modifying, or terminating programmes based on flawed evidence

This is why quasi-experimental designs and impact evaluations dedicate substantial attention to confounder identification and control. The threat of confounding is what distinguishes rigorous causal analysis from simple before-after comparisons or participant-only outcome reporting.

Understanding confounders is also critical for interpreting any evaluation that claims causal effects. When reading an impact evaluation, the first question should be: "What confounders did the evaluators consider, and how did they control for them?"

In Practice

Confounders appear in programmes across sectors. Common examples include:

  • Health interventions: Age, baseline health status, and access to healthcare confound the relationship between a nutrition programme and child health outcomes
  • Education programmes: Prior academic achievement and parental education confound the relationship between tutoring and test scores
  • Economic development: Market access and infrastructure quality confound the relationship between business training and revenue growth

Addressing confounders requires either design-based or analysis-based strategies:

  1. Randomized designs eliminate confounding through random assignment (though attrition can reintroduce confounding)
  2. Quasi-experimental designs use techniques like propensity score matching, regression discontinuity, or difference-in-differences to approximate randomization
  3. Statistical controls include regression adjustment, stratification, or matching on observed confounders
  4. Sensitivity analysis assesses how robust findings are to unobserved confounders
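Strategy 3 (statistical control via regression adjustment) can be sketched on simulated data. This is a minimal illustration assuming NumPy; the variable names and data-generating numbers are hypothetical, with a true treatment effect of 2.0 and an SES-like confounder driving both treatment take-up and the outcome.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
true_effect = 2.0

# Hypothetical data: the confounder raises both treatment probability
# and the outcome, so an unadjusted regression is biased upward.
ses = rng.standard_normal(n)                      # confounder
treat = (ses + rng.standard_normal(n) > 0) * 1.0  # high-SES units enrol more
y = true_effect * treat + 3.0 * ses + rng.standard_normal(n)

# Naive OLS of outcome on treatment alone absorbs the SES effect.
X_naive = np.column_stack([np.ones(n), treat])
naive = np.linalg.lstsq(X_naive, y, rcond=None)[0][1]

# Adding the observed confounder as a covariate recovers ~2.0.
X_adj = np.column_stack([np.ones(n), treat, ses])
adjusted = np.linalg.lstsq(X_adj, y, rcond=None)[0][1]

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Note that regression adjustment only removes bias from confounders that are observed and included in the model; unobserved confounders are what strategy 4 probes.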

The key is to identify potential confounders during evaluation design (through theory and context analysis) and select appropriate control strategies before data collection begins.

Data References (populated during production)

  • Indicators: 8 indicators across 3 donor frameworks relate to confounding control in evaluation design
  • MEAL Rules: Best practices from EX132_F3_R015, EX45_R023, EX109_R018; Common mistakes from EX132_F2_R008, EX59_R012, EX109_W015

Last updated: 2026-02-27