M&E Studio

AI for M&E, Built for Practitioners

© 2026 Logic Lab LLC. All rights reserved.

Attribution vs Contribution

The distinction between proving a program directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).

Definition

Attribution and contribution represent two different standards of causal evidence in evaluation.

Attribution means demonstrating that your program directly caused observed outcomes: without your intervention, those outcomes would not have occurred. This requires establishing a counterfactual, an estimate of what would have happened in the absence of your program. Attribution claims demand rigorous methods, such as quasi-experimental designs or impact evaluation approaches, that can isolate your program's effect from other factors.

Contribution means building a credible case that your program contributed to observed outcomes, alongside other influencing factors. Rather than proving exclusive causation, contribution analysis accepts that multiple factors typically drive change and seeks to demonstrate that your program was a meaningful part of the causal mix. This approach relies on contribution analysis, process tracing, and triangulating evidence from multiple sources.

The distinction matters because attribution claims are harder to justify at program scale but stronger when achieved; contribution claims are more realistic for most development programs but require more nuanced evidence.

Why It Matters

This distinction shapes every downstream decision in evaluation design.

Method selection: If you claim attribution, you need methods that establish a credible counterfactual, such as randomized controlled trials, regression discontinuity, or matched comparison groups. If you claim contribution, you can use contribution analysis, outcome harvesting, or most significant change. Choosing the wrong standard leads to either an unachievable evaluation design or underwhelming evidence.

Stakeholder expectations: Donors often ask for "proof of impact" without specifying whether they mean attribution or contribution. Clarifying this early prevents disappointment: a contribution case can be compelling evidence even without exclusive attribution.

Honesty about limitations: Most single programs cannot credibly claim attribution. Program-level evaluations typically operate in complex contexts where multiple interventions, policy changes, and external factors influence outcomes. Recognizing this upfront allows you to design an evaluation that makes the strongest possible case within realistic constraints.

Communication: Attribution claims require stronger language and more careful interpretation. Contribution claims allow you to say "our program contributed to this change, alongside other factors," which is often more accurate and still valuable for decision-making.

In Practice

Consider a rural livelihoods program that claims "farmers' incomes increased by 30%." The attribution vs contribution distinction determines how you prove this link:

Attribution approach: You'd need a comparison group of similar farmers who did not receive the intervention, measured before and after, with statistical analysis showing that the income difference is unlikely to be due to other factors (market prices, rainfall, other programs). This is expensive, requires baseline data, and still leaves open the possibility that unmeasured confounders explain the difference.
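The comparison-group logic above can be sketched as a difference-in-differences calculation: the program effect is the income change among participants minus the income change among similar non-participants, which nets out trends (market prices, rainfall) shared by both groups. All figures below are synthetic and purely illustrative; a real evaluation would use survey data and report standard errors.

```python
# Illustrative difference-in-differences estimate for the farmer income
# example. All numbers are synthetic, not from any actual program.

def diff_in_diff(treat_before, treat_after, comp_before, comp_after):
    """Program effect = change in the treatment group
    minus change in the comparison group."""
    return (treat_after - treat_before) - (comp_after - comp_before)

# Mean annual incomes (synthetic):
effect = diff_in_diff(
    treat_before=1000, treat_after=1300,  # participants: +30%
    comp_before=1000, comp_after=1100,    # similar non-participants: +10%
)
print(effect)  # 200
```

Here the raw +300 gain among participants overstates the program's effect: the comparison group also gained +100 from shared external trends, so only +200 is plausibly attributable to the program. This is exactly why attribution claims need a credible counterfactual rather than a simple before/after comparison.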

Contribution approach: You'd gather multiple lines of evidence: (1) outcome logs showing farmers attribute income gains to program-supported activities; (2) timing evidence showing income changes followed program interventions; (3) elimination of alternative explanations (e.g., no major market shifts or other interventions in the same period); (4) stakeholder testimony from farmers, buyers, and local officials. Together, these build a credible case that the program contributed to the income gains.
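The four lines of evidence above can be organized into a simple evidence log. The structure and ratings below are a hypothetical sketch, not a prescribed contribution-analysis template; the point is that the contribution case rests on triangulating independently sourced evidence rather than on a single statistic.

```python
# Hypothetical evidence log for a contribution case (illustrative only).
# Each line of evidence records its type, source, and an analyst's
# strength rating; the summary shows how the case is triangulated.

evidence = [
    {"type": "outcome log",              "source": "farmer interviews",       "strength": "strong"},
    {"type": "timing",                   "source": "income records",          "strength": "moderate"},
    {"type": "alternatives ruled out",   "source": "market and program scan", "strength": "moderate"},
    {"type": "stakeholder testimony",    "source": "buyers, local officials", "strength": "strong"},
]

strong = sum(1 for e in evidence if e["strength"] == "strong")
print(f"{strong} of {len(evidence)} evidence lines rated strong")
```

No single entry proves causation; the credibility of the claim comes from consistent, independent lines of evidence pointing the same way.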

When attribution is appropriate: Small-scale pilots, tightly controlled interventions, contexts with few competing factors, or when a donor explicitly requires experimental evidence.

When contribution is appropriate: Most development programs, complex contexts with multiple actors, long timeframes where exclusive causation is implausible, or when the question is "did this matter?" rather than "was this the only cause?"

Related Topics

  • Contribution Analysis: Method for building credible contribution cases
  • Counterfactual Analysis: Framework for attribution claims
  • Causal Inference: Broader field covering both attribution and contribution
  • Impact Evaluation: Rigorous approaches for attribution claims
  • Quasi-Experimental Design: Methods for establishing attribution

At a Glance

Clarifies whether you're attempting to prove direct causation or build a credible case of contribution.

Best For

  • Designing evaluation approaches that match your causal claims
  • Communicating limitations of program-level evidence
  • Selecting between experimental and non-experimental methods
