When to Use
Contribution analysis is the right approach when you need to say something credible about whether your programme made a difference, but you cannot run a randomised controlled trial, and simply presenting outcome data without explaining the causal link would be unconvincing.
Use it when:
- Attribution is contested: multiple funders, parallel interventions, or complex contextual factors make it impossible to isolate your programme's effect
- RCTs are not feasible: ethical, logistical, or cost constraints rule out experimental designs
- The theory of change needs validation: you want to test whether your causal assumptions held during implementation, not just report numbers
- Donors require a contribution narrative: evaluations for DFID, USAID, or UNDP increasingly expect an explanation of how the programme contributed, not just what outputs were delivered
- The programme is complex or adaptive: multiple pathways, feedback loops, or shifting contexts mean a simple input-output model does not capture what happened
Contribution analysis is less appropriate when outcomes are easily measurable and attributable (use a simple pre-post design), when you need to prove causation for policy-making purposes (consider a quasi-experimental design), or when the evaluation question is primarily about what outcomes occurred rather than why (use outcome harvesting).
| Scenario | Use Contribution Analysis? | Better Alternative |
|---|---|---|
| Complex programme, no control group | Yes | — |
| Want to prove causation rigorously | No | Quasi-Experimental Design |
| Outcomes are unpredicted or emergent | No | Outcome Harvesting |
| Need to understand why the ToC failed | Yes, alongside | Process Tracing |
| Programme has clear, isolated intervention | No | RCT or impact evaluation |
| Multiple funders, contested contribution | Yes | — |
How It Works
Contribution analysis follows a six-step process developed by John Mayne. The goal is not to prove your programme caused outcomes, but to build a contribution story: a documented, evidence-backed narrative that makes it plausible your programme contributed meaningfully to observed changes.
Step 1: Set out the attribution problem
Define the evaluation question precisely. What outcomes are you claiming the programme contributed to? What time period? What population? Acknowledge what you can and cannot prove upfront. This step prevents overreaching and focuses evidence collection.
Step 2: Develop or revisit the theory of change
Contribution analysis rests on a ToC as its analytical spine. If you don't have one, build it. If you do, make the causal links and assumptions explicit; each link becomes a testable proposition.
Step 3: Gather evidence on the theory of change
Collect data to test whether each link in the ToC held during implementation. Use a mix of quantitative and qualitative data: monitoring data, surveys, key informant interviews, document review, focus groups. For each causal link, ask: Is there evidence this step occurred? How strong is that evidence?
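The evidence-by-link structure in Step 3 can be sketched as a simple data structure. This is an illustrative sketch, not part of Mayne's method: the ToC links, evidence sources, and findings below are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """One link in the theory of change, with the evidence gathered for it."""
    description: str
    evidence: list = field(default_factory=list)  # (source, finding) pairs

# Hypothetical three-link ToC for a training programme:
# each link is a testable proposition
toc = [
    CausalLink("Farmers attend training sessions"),
    CausalLink("Attendance leads to knowledge uptake"),
    CausalLink("Knowledge uptake leads to practice change"),
]

# Step 3: attach evidence from mixed sources, link by link
toc[0].evidence.append(("monitoring data", "87% average attendance"))
toc[1].evidence.append(("pre/post survey", "knowledge scores rose 22 points"))
toc[1].evidence.append(("key informant interviews", "trainers confirm uptake"))

# Links with no evidence yet are the gaps your collection plan must fill
gaps = [link.description for link in toc if not link.evidence]
print(gaps)
```

Structuring evidence this way makes Step 4 easier: the contribution story can walk the list link by link, and unsupported links are visible before the narrative is drafted.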
Step 4: Assemble the contribution story
Synthesise the evidence into a narrative that walks from programme activities through to outcomes. Be explicit about where evidence is strong, where it is partial, and where it is absent. The contribution story should read as a reasoned argument, not a report of numbers.
Step 5: Seek out and address rival explanations
Identify alternative explanations for observed outcomes: other programmes operating in the same space, contextual changes (policy shifts, economic shocks), or selection effects. Either present evidence that rules out these rivals or acknowledge them honestly and explain why your programme's contribution is still plausible.
Step 6: Revise and strengthen the contribution story
Use the process as a learning exercise. Where the evidence is weak or rival explanations are compelling, revise your ToC or flag what additional evidence is needed. A good contribution analysis improves your next programme design.
Key Components
A complete contribution analysis requires:
- A clear causal claim: a precise statement of what your programme is argued to have contributed to, for whom, and during what period
- An explicit theory of change: with all causal links and assumptions documented (not just a diagram)
- Evidence by link: data or qualitative findings for each step in the ToC, assessed for quality and relevance
- Rival explanation testing: explicit documentation of alternative causes and why they are insufficient or incomplete
- A contribution story: a narrative document (typically 3-10 pages) synthesising the above into a coherent argument
- Confidence rating: a transparent statement of how strong or weak the overall contribution claim is, and what would increase confidence
- Mixed methods triangulation: at least two independent evidence sources for each major causal claim
Best Practices
Start with the ToC, not the data. The most common error is gathering data first and then trying to construct a causal story backward. The ToC should determine what data you need, not the other way around.
Map interventions to outcomes explicitly. Before collecting new data, document every existing programme activity and map it to the specific outcome it is meant to contribute to. This prevents post-hoc rationalisation.
Strengthen plausibility with external evidence. Contribution stories become more credible when they reference research or comparable programmes showing the same causal mechanisms work. Cite relevant literature, sector evaluations, or meta-analyses.
Define your evaluation question as a contribution question. Frame it as "To what extent did X contribute to Y?" rather than "Did X cause Y?" This sets the right level of rigour and prevents scope creep.
Use iterative triangulation. Run the contribution story past programme staff, community members, and an external peer reviewer. Different stakeholders will identify rival explanations you have not considered. Each round strengthens the story.
Be transparent about confidence levels. A contribution story that honestly acknowledges weak evidence at certain links is more credible, and more useful, than one that overstates certainty. Rate each causal link: Strong evidence / Moderate evidence / Weak evidence / No evidence.
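One way to operationalise the four-point rating scale above is a weakest-link rule: the overall contribution claim can be no stronger than its least-supported causal link. This is a sketch of that heuristic, not a prescribed formula; the links and ratings are hypothetical.

```python
# Ordered rating scale from weakest to strongest
SCALE = ["No evidence", "Weak evidence", "Moderate evidence", "Strong evidence"]

# Hypothetical ratings for a four-link ToC
ratings = {
    "activities -> outputs": "Strong evidence",
    "outputs -> knowledge uptake": "Strong evidence",
    "knowledge uptake -> practice change": "Moderate evidence",
    "practice change -> income": "Weak evidence",
}

# The least-supported link caps overall confidence
weakest = min(ratings, key=lambda link: SCALE.index(ratings[link]))
overall = ratings[weakest]
print(f"Overall confidence capped at '{overall}' by link: {weakest}")
```

Reporting the capping link, not just the overall rating, tells readers exactly where additional evidence would raise confidence most.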
Common Mistakes
Treating it as an excuse to avoid rigour. Contribution analysis is not a way to avoid collecting good data. It still requires systematic evidence gathering. The difference from experimental designs is the type of evidence and the claim made, not the quality standard.
Ignoring rival explanations. The most common weakness in contribution stories is failing to seriously test alternative causes. If you do not address rivals, reviewers and donors will. Build rival explanation testing into the design, not as an afterthought.
Conflating contribution with attribution. The goal is a plausible contribution claim, not proof of causation. Statements like "Our programme caused 30% of the improvement" are usually unjustifiable and undermine credibility. Say instead: "The evidence supports a meaningful contribution from our programme, with the other key factors being X and Y."
Skipping the ToC revision step. Many evaluators produce the contribution story but never feed it back into programme design. This wastes the primary learning value of the method.
Using it for simple programmes. Contribution analysis is resource-intensive. For a well-defined, simple intervention with a single causal pathway, a pre-post design with a comparison group will be more efficient and more convincing.
Weak documentation. A contribution story that cannot be traced back to specific evidence sources is not a contribution story; it is an assertion. Every causal claim needs a cited evidence source.
Examples
Livelihoods programme, East Africa. A four-year USAID-funded smallholder agriculture programme in Kenya claimed to have contributed to increased household income among 40,000 beneficiaries. A contribution analysis was conducted for the final evaluation. The ToC mapped the pathway from training inputs through knowledge uptake, practice change, yield improvement, to income change. Monitoring data confirmed training attendance and knowledge scores. Agricultural surveys showed yield improvements correlated with practice adoption. The rival explanation, a favourable rainfall season, was addressed by comparing yield trends among non-participants in the same geography (no similar improvement). The contribution story rated the programme's contribution as "moderate to high confidence" for yield outcomes and "moderate confidence" for income, acknowledging price volatility as a confounding factor.
Governance and advocacy, West Africa. An EU-funded civil society strengthening programme in Ghana sought to demonstrate contribution to improved budget transparency at the district level. A contribution story was assembled using document analysis (budget disclosures increased), key informant interviews with district finance officers and CSO partners, and a policy mapping exercise. The rival explanation, a new national government transparency policy, was significant. The contribution story argued that the programme's advocacy training directly informed the CSO coalition that lobbied for the policy, documenting three pivotal meetings. The claim was rated "high confidence for policy influence, moderate confidence for district-level practice change."
Health systems, South Asia. A UNICEF-supported nutrition programme in Bangladesh faced a complex attribution environment: multiple donors, government nutrition campaigns, and a global commodity price drop all overlapped with improvements in child stunting rates. A contribution analysis mapped the programme's specific delivery pathways (SBCC at community level, health worker training) against observed changes. Rather than claiming credit for the aggregate stunting reduction, the contribution story focused narrowly on the 120 programme unions, showing dose-response effects (higher-intensity implementation areas showed faster change) and ruling out differential selection effects. The confidence rating was "moderate" for contribution to stunting reduction in programme areas.
Compared To
| Method | Claim Type | Counterfactual? | Best For |
|---|---|---|---|
| Contribution Analysis | Plausible contribution | No | Complex programmes, multiple funders |
| Process Tracing | Mechanism tracing | No | Explaining how a specific outcome occurred |
| Quasi-Experimental Design | Causal attribution | Yes (comparison group) | Programmes with clear treatment/comparison |
| Impact Evaluation | Causal attribution | Yes (control group) | Policy-relevant rigorous causation claims |
| Outcome Harvesting | Documents what changed | No | Emergent outcomes in complex change |
| Realist Evaluation | What works for whom | Partial | Understanding contextual mechanisms |
Relevant Indicators
31 donor-aligned indicators exist across USAID, DFID, UNDP, and OECD-DAC frameworks for assessing evaluation quality and programme contribution. The most commonly cited:
- Strength of evidence linking programme activities to observed changes (scale: 1-5)
- Number of rival explanations tested in the final evaluation report
- Degree to which programme ToC assumptions are supported by implementation evidence
- Quality rating of mixed-methods triangulation used in the evaluation
- Proportion of causal links in the ToC with supporting monitoring or evaluation data
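The last indicator is straightforward to compute from an evidence-by-link record. A minimal sketch, using hypothetical links and evidence-source counts:

```python
# Hypothetical count of evidence sources per causal link in a six-link ToC
evidence_counts = {
    "inputs -> activities": 2,
    "activities -> outputs": 3,
    "outputs -> knowledge uptake": 1,
    "knowledge uptake -> practice change": 0,
    "practice change -> yields": 2,
    "yields -> income": 0,
}

# Indicator: proportion of causal links with at least one supporting source
supported = sum(1 for n in evidence_counts.values() if n > 0)
proportion = supported / len(evidence_counts)
print(f"{supported}/{len(evidence_counts)} links supported ({proportion:.0%})")
```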
Related Tools
- MEStudio Logic Model Builder: map your ToC as the analytical foundation before beginning a contribution analysis
- Evaluation Planner: structure your evidence collection matrix by causal link
Related Topics
- Theory of Change: the analytical spine of every contribution analysis
- Attribution vs. Contribution: understanding when each approach is appropriate
- Process Tracing: a complementary method for tracing causal mechanisms
- Mixed Methods Evaluation: how to combine quantitative and qualitative evidence for triangulation
- Outcome Harvesting: alternative for emergent or unexpected outcomes
- Impact Evaluation: when rigorous causal attribution is required
Further Reading
- Mayne, J. (2012). "Contribution Analysis: Coming of Age?" Evaluation, 18(3), 270-280. The foundational methodological paper.
- Mayne, J. (2001). "Addressing Attribution Through Contribution Analysis." The Canadian Journal of Program Evaluation, 16(1), 1-24. The original formulation.
- DFID (2012). Broadening the Range of Designs and Methods for Impact Evaluations. UK Department for International Development. Covers contribution analysis alongside experimental approaches.
- OECD-DAC (2019). Evaluating Development Co-operation: A Compendium. Includes contribution analysis guidance for complex programmes.