Contribution Analysis

A structured approach to building a credible case for how and why a program contributed to observed outcomes, without requiring experimental attribution.

When to Use

Contribution analysis is the right approach when you need to say something credible about whether your program made a difference - but you cannot run a randomised controlled trial, and simply presenting outcome data without explaining the causal link would be unconvincing.

Use it when:

  • Attribution is contested: multiple funders, parallel interventions, or complex contextual factors make it impossible to isolate your program's effect
  • RCTs are not feasible: ethical, logistical, or cost constraints rule out experimental designs
  • The theory of change needs validation: you want to test whether your causal assumptions held during implementation, not just report numbers
  • Donors require a contribution narrative: evaluations for DFID, USAID, or UNDP increasingly expect an explanation of how the program contributed, not just what outputs were delivered
  • The program is complex or adaptive: multiple pathways, feedback loops, or shifting contexts mean a simple input-output model does not capture what happened

Contribution analysis is less appropriate when outcomes are easily measurable and attributable (use a simple pre-post design), when you need to prove causation for policy-making purposes (consider a quasi-experimental design), or when the evaluation question is primarily about what outcomes occurred rather than why (use outcome harvesting).

| Scenario | Use contribution analysis? | Better alternative |
| --- | --- | --- |
| Complex program, no control group | Yes | - |
| Want to prove causation rigorously | No | Quasi-experimental design |
| Outcomes are unpredicted or emergent | No | Outcome harvesting |
| Need to understand why the ToC failed | Yes, alongside | Process tracing |
| Program has clear, isolated intervention | No | RCT or impact evaluation |
| Multiple funders, contested contribution | Yes | - |

How It Works

Contribution analysis follows a six-step process developed by John Mayne. The goal is not to prove your program caused outcomes, but to build a contribution story - a documented, evidence-backed narrative that makes it plausible your program contributed meaningfully to observed changes.

Step 1: Set out the attribution problem

Define the evaluation question precisely. What outcomes are you claiming the program contributed to? What time period? What population? Acknowledge what you can and cannot prove upfront. This step prevents overreaching and focuses evidence collection.

Step 2: Develop or revisit the theory of change

Contribution analysis rests on a ToC as its analytical spine. If you don't have one, build it. If you do, make the causal links and assumptions explicit - each link becomes a testable proposition.

Step 3: Gather evidence on the theory of change

Collect data to test whether each link in the ToC held during implementation. Use a mix of quantitative and qualitative data: monitoring data, surveys, key informant interviews, document review, focus groups. For each causal link, ask: Is there evidence this step occurred? How strong is that evidence?
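
To make the evidence matrix concrete, here is a minimal Python sketch of one way to record evidence by causal link. The class and field names (Strength, Evidence, CausalLink) are illustrative, not a standard schema or tool; adapt them to your own ToC.

```python
# Illustrative sketch: an evidence-by-link matrix for Step 3.
# All names are hypothetical; adapt the fields to your own ToC.
from dataclasses import dataclass, field
from enum import Enum

class Strength(Enum):
    STRONG = 3
    MODERATE = 2
    WEAK = 1
    NONE = 0

@dataclass
class Evidence:
    source: str        # e.g. "monitoring data", "key informant interviews"
    finding: str       # what this source shows about the link
    strength: Strength

@dataclass
class CausalLink:
    description: str   # e.g. "training -> knowledge uptake"
    assumption: str    # the assumption this link depends on
    evidence: list[Evidence] = field(default_factory=list)

    def rating(self) -> Strength:
        """Rate the link by its strongest available evidence."""
        if not self.evidence:
            return Strength.NONE
        return max((e.strength for e in self.evidence), key=lambda s: s.value)

    def triangulated(self) -> bool:
        """True if at least two distinct sources support the link."""
        return len({e.source for e in self.evidence}) >= 2
```

Recording evidence against each link, rather than in one undifferentiated pile, makes Step 4's narrative and the later confidence rating straightforward to assemble.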

Step 4: Assemble the contribution story

Synthesise the evidence into a narrative that walks from program activities through to outcomes. Be explicit about where evidence is strong, where it is partial, and where it is absent. The contribution story should read as a reasoned argument, not a report of numbers.

Step 5: Seek out and address rival explanations

Identify alternative explanations for observed outcomes: other programs operating in the same space, contextual changes (policy shifts, economic shocks), or selection effects. Either present evidence that rules out these rivals or acknowledge them honestly and explain why your program's contribution is still plausible.
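
A rival explanation log can be as simple as the sketch below, which continues the illustrative structures from Step 3; again, all names are hypothetical.

```python
# Illustrative sketch: a rival explanation log for Step 5.
from dataclasses import dataclass

@dataclass
class RivalExplanation:
    description: str       # e.g. "favourable rainfall season"
    evidence_against: str  # what rules it out; empty if nothing does
    ruled_out: bool        # False = must be acknowledged in the story

def unresolved_rivals(rivals: list[RivalExplanation]) -> list[str]:
    """Rivals the contribution story must acknowledge explicitly."""
    return [r.description for r in rivals if not r.ruled_out]
```

Anything returned by unresolved_rivals belongs in the body of the contribution story as an acknowledged alternative factor, not in a footnote.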

Step 6: Revise and strengthen the contribution story

Use the process as a learning exercise. Where the evidence is weak or rival explanations are compelling, revise your ToC or flag what additional evidence is needed. A good contribution analysis improves your next program design.

Key Components

A complete contribution analysis requires:

  • A clear causal claim: a precise statement of what your program is argued to have contributed to, for whom, and during what period
  • An explicit theory of change: with all causal links and assumptions documented (not just a diagram)
  • Evidence by link: data or qualitative findings for each step in the ToC, assessed for quality and relevance
  • Rival explanation testing: explicit documentation of alternative causes and why they are insufficient or incomplete
  • A contribution story: a narrative document (typically 3-10 pages) synthesising the above into a coherent argument
  • Confidence rating: a transparent statement of how strong or weak the overall contribution claim is, and what would increase confidence
  • Mixed methods triangulation: at least two independent evidence sources for each major causal claim

Best Practices

Start with the ToC, not the data. The most common error is gathering data first and then trying to construct a causal story backward. The ToC should determine what data you need, not the other way around.

Map interventions to outcomes explicitly. Before collecting new data, document every existing program activity and map it to the specific outcome it is meant to contribute to. This prevents post-hoc rationalisation.

Strengthen plausibility with external evidence. Contribution stories become more credible when they reference research or comparable programs showing the same causal mechanisms work. Cite relevant literature, sector evaluations, or meta-analyses.

Define your evaluation question as a contribution question. Frame it as "To what extent did X contribute to Y?" rather than "Did X cause Y?" This sets the right level of rigour and prevents scope creep.

Use iterative triangulation. Run the contribution story past program staff, community members, and an external peer reviewer. Different stakeholders will identify rival explanations you have not considered. Each round strengthens the story.

Be transparent about confidence levels. A contribution story that honestly acknowledges weak evidence at certain links is more credible - and more useful - than one that overstates certainty. Rate each causal link: Strong evidence / Moderate evidence / Weak evidence / No evidence.
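
Building on the illustrative CausalLink and Strength sketch from Step 3, a transparency summary might look like the following; it also yields the "proportion of causal links with supporting data" indicator listed further down. This is a sketch under those assumed structures, not a prescribed method.

```python
# Illustrative sketch: a confidence summary across the whole ToC.
def confidence_summary(links: list[CausalLink]) -> dict:
    """Share of links with any supporting evidence, plus the weakest link."""
    if not links:
        return {"links_total": 0, "share_with_evidence": 0.0, "weakest_link": None}
    rated = [(link.description, link.rating()) for link in links]
    with_evidence = sum(1 for _, r in rated if r is not Strength.NONE)
    weakest = min(rated, key=lambda pair: pair[1].value)
    return {
        "links_total": len(links),
        "share_with_evidence": with_evidence / len(links),
        "weakest_link": weakest[0],  # the claim to hedge most explicitly
    }
```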

Common Mistakes

Treating it as an excuse to avoid rigour. Contribution analysis is not a way to avoid collecting good data. It still requires systematic evidence gathering. The difference from experimental designs is the type of evidence and the claim made, not the quality standard.

Ignoring rival explanations. The most common weakness in contribution stories is failing to seriously test alternative causes. If you do not address rivals, reviewers and donors will. Build rival explanation testing into the design, not as an afterthought.

Conflating contribution with attribution. The goal is a plausible contribution claim, not proof of causation. Statements like "Our program caused 30% of the improvement" are usually unjustifiable and undermine credibility. Say instead: "The evidence supports a meaningful contribution from our program, with the other key factors being X and Y."

Skipping the ToC revision step. Many evaluators produce the contribution story but never feed it back into program design. This wastes the primary learning value of the method.

Using it for simple programs. Contribution analysis is resource-intensive. For a well-defined, simple intervention with a single causal pathway, a pre-post design with a comparison group will be more efficient and more convincing.

Weak documentation. A contribution story that cannot be traced back to specific evidence sources is not a contribution story - it is an assertion. Every causal claim needs a cited evidence source.

Examples

Livelihoods program, East Africa. A four-year USAID-funded smallholder agriculture program in Kenya claimed to have contributed to increased household income among 40,000 beneficiaries. A contribution analysis was conducted for the final evaluation. The ToC mapped the pathway from training inputs through knowledge uptake, practice change, yield improvement, to income change. Monitoring data confirmed training attendance and knowledge scores. Agricultural surveys showed yield improvements correlated with practice adoption. The rival explanation - a favourable rainfall season - was addressed by comparing yield trends among non-participants in the same geography (no similar improvement). The contribution story rated the program's contribution as "moderate to high confidence" for yield outcomes and "moderate confidence" for income, acknowledging price volatility as a confounding factor.

Governance and advocacy, West Africa. An EU-funded civil society strengthening program in Ghana sought to demonstrate contribution to improved budget transparency at the district level. A contribution story was assembled using document analysis (budget disclosures increased), key informant interviews with district finance officers and CSO partners, and a policy mapping exercise. The rival explanation - a new national government transparency policy - was significant. The contribution story argued that the program's advocacy training directly informed the CSO coalition that lobbied for the policy, documenting three pivotal meetings. The claim was rated "high confidence for policy influence, moderate confidence for district-level practice change."

Health systems, South Asia. A UNICEF-supported nutrition program in Bangladesh faced a complex attribution environment: multiple donors, government nutrition campaigns, and a global commodity price drop all overlapped with improvements in child stunting rates. A contribution analysis mapped the program's specific delivery pathways (community-level social and behaviour change communication, health worker training) against observed changes. Rather than claiming credit for the aggregate stunting reduction, the contribution story focused narrowly on the 120 program unions (the smallest rural administrative units in Bangladesh), showing dose-response effects (higher-intensity implementation areas showed faster change) and ruling out differential selection effects. The confidence rating was "moderate" for contribution to stunting reduction in program areas.

Compared To

| Method | Claim type | Counterfactual? | Best for |
| --- | --- | --- | --- |
| Contribution Analysis | Plausible contribution | No | Complex programs, multiple funders |
| Process Tracing | Mechanism tracing | No | Explaining how a specific outcome occurred |
| Quasi-Experimental Design | Causal attribution | Yes (comparison group) | Programs with clear treatment/comparison |
| Impact Evaluation | Causal attribution | Yes (control group) | Policy-relevant rigorous causation claims |
| Outcome Harvesting | Documents what changed | No | Emergent outcomes in complex change |
| Realist Evaluation | What works for whom | Partial | Understanding contextual mechanisms |

Relevant Indicators

31 donor-aligned indicators across the USAID, DFID, UNDP, and OECD-DAC frameworks address evaluation quality and program contribution. The most commonly cited include:

  • Strength of evidence linking program activities to observed changes (scale: 1-5)
  • Number of rival explanations tested in the final evaluation report
  • Degree to which program ToC assumptions are supported by implementation evidence
  • Quality rating of mixed-methods triangulation used in the evaluation
  • Proportion of causal links in the ToC with supporting monitoring or evaluation data

Related Tools

  • MEStudio Logic Model Builder: map your ToC as the analytical foundation before beginning a contribution analysis
  • Evaluation Planner: structure your evidence collection matrix by causal link

Related Topics

  • Theory of Change: the analytical spine of every contribution analysis
  • Attribution vs. Contribution: understanding when each approach is appropriate
  • Process Tracing: a complementary method for tracing causal mechanisms
  • Mixed Methods Evaluation: how to combine quantitative and qualitative evidence for triangulation
  • Outcome Harvesting: alternative for emergent or unexpected outcomes
  • Impact Evaluation: when rigorous causal attribution is required

At a Glance

Builds a plausible, evidence-backed narrative for how your program contributed to outcomes, without needing a control group.

Best For

  • Evaluating complex programs where RCTs are impossible or unethical
  • Mid-term and final evaluations where attribution is contested
  • Programs with multiple interventions and funders
  • Situations where the theory of change needs to be validated against evidence

Linked Indicators

31 indicators across 4 donor frameworks

USAID, DFID, UNDP, OECD-DAC

Example Indicators

  • Degree to which program theory of change assumptions are supported by evidence
  • Number of rival explanations tested and addressed in the evaluation
  • Strength of evidence linking program activities to observed outcome changes

Related Topics

  • Theory of Change (In-Depth Guide): a structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.
  • Process Tracing (In-Depth Guide): a within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.
  • Outcome Harvesting (In-Depth Guide): a retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the program contributed to them.
  • Most Significant Change (In-Depth Guide): a participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a program.
  • Realist Evaluation (In-Depth Guide): an evaluation approach that asks what works, for whom, in what circumstances, and why, by identifying the mechanisms through which programs produce outcomes in specific contexts.
  • Attribution vs Contribution (Quick Reference): the distinction between proving a program directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).
  • Impact Evaluation (In-Depth Guide): a rigorous evaluation approach that measures the causal effect of a program on outcomes by comparing what happened with what would have happened in its absence.
  • Mixed Methods Evaluation (Overview): an evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of program effects, mechanisms, and context.

Decision Guides

  • How to Choose an Evaluation Methodology: a decision framework for choosing evaluation design, covering experimental, quasi-experimental, and non-experimental approaches.