Bias

Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.

Also known as: Selection Bias, Confirmation Bias, Sampling Bias, Measurement Bias, Response Bias

Definition

Bias refers to systematic error in data collection, analysis, or interpretation that consistently distorts results in a particular direction. Unlike random error, which varies unpredictably and can be reduced through larger samples, bias introduces a predictable distortion that threatens the validity of M&E findings.
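The distinction can be made concrete with a small simulation. The sketch below (illustrative only; the true mean, noise level, and bias offset are made-up numbers) shows that enlarging the sample shrinks random error around the true value, while a systematic offset survives any sample size:

```python
import random

random.seed(42)

TRUE_MEAN = 50.0   # hypothetical true population value
BIAS = 5.0         # systematic over-reporting added to every measurement

def survey(n, bias=0.0):
    """Simulate n measurements: true value + random noise + optional bias."""
    return sum(random.gauss(TRUE_MEAN, 10.0) + bias for _ in range(n)) / n

for n in (50, 5000):
    print(f"n={n:5d}  unbiased={survey(n):5.1f}  biased={survey(n, BIAS):5.1f}")
# A larger sample pulls the unbiased estimate toward 50, but the biased
# estimate converges on 55: more data cannot correct systematic error.
```

The design point: precision (low random error) and accuracy (low bias) are separate properties, and only the first improves with sample size.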

In M&E practice, bias manifests in multiple forms: selection bias occurs when the sample does not represent the target population; measurement bias arises when instruments systematically mismeasure the intended construct; confirmation bias leads evaluators to favor evidence supporting preconceived conclusions; and response bias occurs when participants provide socially desirable answers rather than truthful ones. Each type undermines different aspects of validity and requires distinct mitigation strategies.

Why It Matters

Bias is the primary threat to validity in M&E. When bias is present, findings may appear precise and statistically significant while being systematically wrong. This creates false confidence in conclusions that do not reflect reality, potentially leading to programme decisions that fail to achieve intended outcomes or even cause harm.

For practitioners, understanding bias is essential for three reasons. First, it informs sampling design — choosing methods that minimize selection bias and ensure findings can be generalized. Second, it shapes data quality assurance — implementing checks that detect measurement and response bias before analysis. Third, it grounds interpretation — recognizing when findings may be distorted and communicating appropriate caveats to stakeholders.

Ignoring bias risks producing elegant analyses of the wrong population, measuring the wrong construct, or drawing causal claims that cannot be sustained. The cost is wasted resources on ineffective interventions and missed opportunities to improve programmes that could work.

In Practice

Bias appears in M&E work through identifiable patterns. Selection bias commonly occurs when programme participants self-select into interventions, creating systematic differences between participants and non-participants that confound impact assessment. Measurement bias emerges when survey instruments perform differently across subgroups or when interviewers systematically influence responses. Confirmation bias surfaces during analysis when evaluators disproportionately weight evidence supporting expected findings.

Mitigation requires deliberate design choices. For selection bias, use random sampling where feasible, employ comparison groups in impact evaluations, or apply statistical adjustments like propensity score matching. For measurement bias, validate instruments across subgroups, train data collectors extensively, and use multiple data sources for triangulation. For response bias, ensure respondent anonymity, use indirect questioning techniques, and cross-check self-reported data with objective measures.
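For the first of these mitigations, simple random sampling can be sketched in a few lines. This is a minimal illustration (the household IDs and sample size are invented), assuming a complete sampling frame is available:

```python
import random

random.seed(7)

# Hypothetical sampling frame: household IDs for the target population.
sampling_frame = [f"HH-{i:04d}" for i in range(1, 1201)]

# Simple random sampling gives every household an equal selection
# probability, removing enumerator discretion over who gets visited --
# the root cause of many selection-bias problems.
sample = random.sample(sampling_frame, k=120)

print(len(sample))  # 120 distinct households, drawn without replacement
```

In practice the hard part is building a frame that actually covers the target population; a perfectly random draw from an incomplete frame still yields a biased sample.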

Response rates directly affect bias risk — clusters with response rates below 85-90% face substantial non-response bias that can invalidate findings. Regular data quality assessments should explicitly document bias mitigation strategies and assess whether remaining bias threatens conclusions.
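A routine check of this kind is straightforward to automate. The sketch below (cluster names, counts, and the 85% cutoff are illustrative assumptions) flags clusters whose response rate falls below the threshold for follow-up:

```python
THRESHOLD = 0.85  # assumed cutoff; adjust to the protocol in use

# Made-up example data: sampled vs. responding units per cluster.
clusters = {
    "North":  {"sampled": 120, "responded": 114},  # 95%
    "Centre": {"sampled": 110, "responded": 90},   # ~82%
    "South":  {"sampled": 100, "responded": 78},   # 78%
}

def at_risk_clusters(clusters, threshold=THRESHOLD):
    """Return clusters whose response rate is below the threshold."""
    return {
        name: round(c["responded"] / c["sampled"], 3)
        for name, c in clusters.items()
        if c["responded"] / c["sampled"] < threshold
    }

print(at_risk_clusters(clusters))
# → {'Centre': 0.818, 'South': 0.78}
```

Flagged clusters would then trigger follow-up visits or, failing that, explicit caveats and non-response adjustments in the analysis.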


Last updated: 2026-02-27