
Attribution vs Contribution

The distinction between proving that a programme directly caused outcomes (attribution) and building a credible case that it contributed to outcomes alongside other factors (contribution).

Definition

Attribution and contribution represent two different standards of causal evidence in evaluation.

Attribution means demonstrating that your programme directly caused observed outcomes: that without your intervention, those outcomes would not have occurred. This requires establishing a counterfactual, an estimate of what would have happened in the absence of your programme. Attribution claims demand rigorous methods, such as quasi-experimental designs or impact evaluation approaches, that can isolate your programme's effect from other factors.
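
A common way to express this idea is the potential-outcomes notation used in impact evaluation:

    Programme effect = Y(1) − Y(0)

where Y(1) is the outcome observed with the programme and Y(0) is the counterfactual outcome without it. Because Y(0) can never be observed for the same participants, attribution methods construct an estimate of it, typically from a comparison group.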

Contribution means building a credible case that your programme contributed to observed outcomes alongside other influencing factors. Rather than proving exclusive causation, a contribution claim accepts that multiple factors typically drive change and seeks to demonstrate that your programme was a meaningful part of the causal mix. This approach relies on methods such as contribution analysis, process tracing, and triangulation of evidence from multiple sources.

The distinction matters because attribution claims are harder to justify at programme scale but stronger when achieved; contribution claims are more realistic for most development programmes but require more nuanced evidence.

Why It Matters

This distinction shapes every downstream decision in evaluation design.

Method selection: If you claim attribution, you need methods that establish a credible counterfactual, such as randomised controlled trials, regression discontinuity designs, or matched comparison groups. If you claim contribution, you can use contribution analysis, outcome harvesting, or most significant change. Choosing the wrong standard leads to either unachievable evaluation designs or underwhelming evidence.

Stakeholder expectations: Donors often ask for "proof of impact" without specifying whether they mean attribution or contribution. Clarifying this early prevents disappointment: a contribution case can be compelling evidence even without exclusive attribution.

Honesty about limitations: Most single programmes cannot credibly claim attribution. Programme-level evaluations typically operate in complex contexts where multiple interventions, policy changes, and external factors influence outcomes. Recognising this upfront allows you to design an evaluation that makes the strongest possible case within realistic constraints.

Communication: Attribution claims use stronger causal language and therefore require more careful interpretation. Contribution claims allow you to say "our programme contributed to this change, alongside other factors", which is often more accurate and still valuable for decision-making.

In Practice

Consider a rural livelihoods programme that claims "farmers' incomes increased by 30%." The attribution vs contribution distinction determines what evidence is needed to link that increase to the programme:

Attribution approach: You'd need a comparison group of similar farmers who did not receive the intervention, measured before and after, with statistical analysis showing that the income difference is unlikely to be due to other factors (market prices, rainfall, other programmes). This is expensive, requires baseline data, and still leaves open the possibility that unmeasured confounders explain the difference.
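
To make that logic concrete, here is a minimal difference-in-differences sketch in Python on simulated data; the sample size, income figures, and effect size are hypothetical illustrations, not data from any programme.

    # Minimal difference-in-differences sketch on simulated data.
    # All numbers (sample size, incomes, effect size) are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # farmers per group (illustrative)

    # Baseline incomes, broadly similar across the two groups
    base_treated = rng.normal(1000, 150, n)
    base_comparison = rng.normal(1000, 150, n)

    # Endline incomes: both groups follow a common trend (markets, rainfall);
    # the treated group also gains ~30% from the programme, by construction.
    trend = 1.05
    effect = 1.30
    end_treated = base_treated * trend * effect + rng.normal(0, 100, n)
    end_comparison = base_comparison * trend + rng.normal(0, 100, n)

    # Difference-in-differences: change in the treated group
    # minus change in the comparison group
    did = ((end_treated.mean() - base_treated.mean())
           - (end_comparison.mean() - base_comparison.mean()))
    print(f"Estimated programme effect: {did:.0f} income units")

The estimate only deserves an attribution reading if the comparison group would have followed the same trend as the treated farmers without the programme, which is exactly the assumption that makes attribution designs demanding.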

Contribution approach: You'd gather multiple lines of evidence: (1) outcome logs showing farmers attribute income gains to programme-supported activities; (2) timing evidence showing income changes followed programme interventions; (3) elimination of alternative explanations (e.g., no major market shifts or other interventions in the same period); (4) stakeholder testimony from farmers, buyers, and local officials. Together, these build a credible case that the programme contributed to the income gains.
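
The contribution case, by contrast, is about assembling and weighing multiple lines of evidence rather than estimating a single effect size. The sketch below shows one hypothetical way to keep that evidence organised; the field names and the simple tally are illustrative, not a standard contribution analysis schema.

    # Hypothetical record-keeping for lines of contribution evidence.
    # Field names and the tally are illustrative, not a standard schema.
    from dataclasses import dataclass

    @dataclass
    class EvidenceLine:
        source: str           # where the evidence comes from
        supports_claim: bool  # does it support the contribution claim?
        note: str             # brief summary of the finding

    evidence = [
        EvidenceLine("Outcome logs", True,
                     "Farmers attribute income gains to programme-supported activities"),
        EvidenceLine("Timing", True,
                     "Income changes followed programme interventions"),
        EvidenceLine("Alternative explanations", True,
                     "No major market shifts or other interventions in the same period"),
        EvidenceLine("Stakeholder testimony", True,
                     "Farmers, buyers, and local officials corroborate the link"),
    ]

    supporting = sum(e.supports_claim for e in evidence)
    print(f"{supporting}/{len(evidence)} lines of evidence support the contribution claim")
    for e in evidence:
        print(f"- {e.source}: {e.note}")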

When attribution is appropriate: Small-scale pilots, tightly controlled interventions, contexts with few competing factors, or when a donor explicitly requires experimental evidence.

When contribution is appropriate: Most development programmes, complex contexts with multiple actors, long timeframes where exclusive causation is implausible, or when the question is "did this matter?" rather than "was this the only cause?"

Related Topics

  • Contribution Analysis: method for building credible contribution cases
  • Counterfactual Analysis: framework for attribution claims
  • Causal Inference: broader field covering both attribution and contribution
  • Impact Evaluation: rigorous approaches for attribution claims
  • Quasi-Experimental Design: methods for establishing attribution

At a Glance

Clarifies whether you're attempting to prove direct causation or build a credible case of contribution.

Best For

  • Designing evaluation approaches that match your causal claims
  • Communicating limitations of programme-level evidence
  • Selecting between experimental and non-experimental methods

Complexity

Low

Timeframe

N/A — conceptual framework

Related Topics

  • Contribution Analysis: a structured approach to building a credible case for how and why a programme contributed to observed outcomes, without requiring experimental attribution.
  • Quasi-Experimental Design: a family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.
  • Impact Evaluation: a rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.
  • Causal Inference: the process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.