When to Use
Outcome Mapping is the right approach when a programme works by influencing people and organisations rather than delivering services or products directly to beneficiaries. It was developed by the International Development Research Centre (IDRC) specifically for complex, multi-actor programmes where pre-set outcome targets are unrealistic and long-term social change cannot be attributed to any single intervention.
Use it when:
- The programme works through partners: your theory of change depends on changing the behaviours, relationships, and actions of partner organisations, government agencies, or civil society groups who then influence others
- Advocacy, policy, or systems change: the programme is trying to influence what institutions do, not what individuals receive
- Attribution is not the goal: you care about documenting and understanding contribution to change, not proving causation
- Participatory M&E is valued: boundary partners can be involved in defining what change looks like and monitoring their own progress
- IDRC is the funder: IDRC requires outcome mapping for many of its grants, with specific reporting structures
Outcome Mapping is the wrong tool when programmes primarily deliver services (health, food, shelter), when funders require impact-level attribution, or when the programme timeline is too short for meaningful behavioural change.
| Scenario | Use Outcome Mapping? | Better Alternative |
|---|---|---|
| Advocacy and policy influence | Yes | — |
| Service delivery to beneficiaries | No | Logframe |
| Emergent outcomes, unknown partners | Partially | Outcome Harvesting |
| Donor requires attribution | No | Impact Evaluation |
| Complex multi-actor systems | Yes | — |
| Short programme (under 2 years) | Cautiously | Most Significant Change |
How It Works
Outcome Mapping has three design stages and an ongoing monitoring process.
Stage 1: Intentional Design
Define the programme's vision, mission, and boundary partners. A boundary partner is any person, group, or organisation your programme works with directly and whose behaviour you intend to influence. Then write an Outcome Challenge for each boundary partner: a description of the ideal behaviour change you hope to see in them by the end of the programme.
For each Outcome Challenge, develop a graduated set of progress markers: behaviours on a spectrum from "Expect to see" (early, easy changes), through "Like to see" (deeper engagement), to "Love to see" (transformative shifts). Finally, map the programme's own strategy: the activities and resources that will support each boundary partner toward their Outcome Challenge.
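The graduated markers lend themselves to a simple data structure. A minimal sketch, assuming a hypothetical partner and invented marker texts (nothing here is an official OM template):

```python
# Hypothetical progress markers for one boundary partner.
# Marker texts and level keys are invented for illustration.
PROGRESS_MARKERS = {
    "expect_to_see": [  # early, easy changes
        "Attends programme briefings",
        "Requests copies of research summaries",
    ],
    "like_to_see": [  # deeper engagement
        "Cites programme evidence in internal planning documents",
    ],
    "love_to_see": [  # transformative shifts
        "Reallocates budget based on performance evidence",
    ],
}

def summarise(achieved: set[str]) -> dict[str, str]:
    """Report achieved/total markers per level for one boundary partner."""
    return {
        level: f"{sum(m in achieved for m in markers)}/{len(markers)}"
        for level, markers in PROGRESS_MARKERS.items()
    }

# Example: only one early-stage marker observed so far.
summary = summarise({"Attends programme briefings"})
```

A summary like this supports learning conversations; as the Common Mistakes section notes, it should not be turned into a performance target.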
Stage 2: Outcome and Performance Monitoring
Establish an ongoing monitoring process using Outcome Journals (one per boundary partner). Regularly record any behavioural changes observed, with supporting evidence. Use Strategy Journals to assess whether programme activities are having their intended effect, and a Performance Journal to monitor the programme team's own organisational practices.
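An Outcome Journal entry can be captured in a structure like the following. The field names and example values are an assumption for illustration, not an IDRC-prescribed template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OutcomeJournalEntry:
    """One observed behavioural change for a boundary partner.
    Field names are illustrative, not an official IDRC template."""
    partner: str
    observed_on: date
    marker_level: str           # "expect", "like", or "love"
    change_described: str       # what the partner did differently
    evidence: list[str] = field(default_factory=list)  # documents, observations

entry = OutcomeJournalEntry(
    partner="District Health Management Team A",
    observed_on=date(2024, 3, 15),
    marker_level="like",
    change_described="Used quarterly monitoring data in the planning meeting",
    evidence=["Meeting minutes, 2024-03-12", "Revised district plan, v2"],
)
# Journals record partner behaviour plus evidence, never programme activities.
assert entry.evidence, "every entry should carry supporting evidence"
```

Separating the behavioural change from its evidence in the record makes it easy to audit whether journals are documenting behaviour, as Best Practices requires, rather than activity reports.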
Stage 3: Evaluation Planning
Outcome Mapping designs can feed into several evaluation approaches. Outcome Harvesting is commonly used alongside OM to document boundary partner changes systematically.
Key Components
A complete Outcome Mapping design includes:
- Vision statement: the large-scale social change the programme contributes to (not directly causes)
- Mission statement: what the programme itself does and how
- Boundary partners: typically 3-7 direct partners whose behaviour changes are being tracked
- Outcome Challenges: one per boundary partner, describing ideal behavioural change
- Progress markers: graduated behavioural indicators at three levels (Expect/Like/Love to see)
- Strategy maps: activities designed to support each boundary partner
- Outcome journals: ongoing records of behavioural change evidence per partner
- Strategy journals: records of whether programme strategies are working
- Organisational practices monitoring: internal accountability on how well the programme team is functioning
Best Practices
Co-design with boundary partners. Outcome Challenges and progress markers developed without partner input tend to be unrealistic and to miss locally relevant markers of change.
Use backwards mapping. Start from the long-term vision and work backwards to identify what changes in boundary partners are necessary and what the programme must do to support those changes.
Report behavioural evidence, not activities. Outcome Journals must document what boundary partners actually did differently, not programme activities or outputs. Evidence should be specific: observed behaviours, documented decisions, produced artefacts.
IDRC expects annual reporting. If IDRC is the funder, outcome reports must document progress against each boundary partner's progress markers, with specific evidence of behavioural change.
Set realistic time expectations. Transformative behavioural change (the "Love to see" markers) typically takes 2-3+ years. Programmes that expect all markers to be achieved within 12 months will generate discouraging monitoring data that misrepresents real progress.
Common Mistakes
Applying it to service delivery programmes. Outcome Mapping is specifically designed for programmes that work by influencing partner behaviour. If the programme runs clinics, distributes food, or provides direct services, the methodology does not fit.
Designing without boundary partner input. Outcome Challenges written entirely by programme staff reflect programme assumptions, not boundary partner realities. The resulting progress markers are often irrelevant or patronising.
Too many boundary partners. Tracking more than seven boundary partners creates a monitoring burden that quickly becomes unmanageable. Prioritise the 3-5 partners whose behaviour change is most critical.
Treating progress markers as targets. Progress markers are a monitoring and learning tool, not performance targets. Evaluating staff performance against "Love to see" achievement sets up perverse incentives and discourages honest reporting.
Confusing OM's vision with attribution. The vision statement in OM deliberately describes large-scale change that the programme does not claim to cause. Evaluators who conflate the vision with the programme's attributed impact misrepresent the methodology's intent.
Examples
Advocacy and governance, West Africa. An IDRC-funded research-to-policy programme in Ghana identified four boundary partners: the Parliamentary Finance Committee, the Ministry of Finance, a national civil society coalition, and a regional think tank. Outcome Challenges focused on each partner's use of research evidence in budget decisions. Progress markers ran from basic awareness of research findings through to formal policy citations. Monitoring documented that the civil society coalition began systematically referencing programme research in parliamentary submissions (a "Like to see" marker) 18 months into the programme, ahead of schedule. This finding prompted an early acceleration of engagement activities with the Finance Committee.
Capacity building, East Africa. A DFID-funded organisational capacity-building programme in Uganda worked with six district health management teams (DHMTs) as boundary partners. Outcome Challenges focused on DHMTs developing and implementing evidence-based district health plans. Progress markers ran from attending training, to using monitoring data in quarterly planning meetings, to adjusting annual budgets based on performance data. One DHMT reached the "Love to see" markers (budget reallocation based on data) at 30 months; at the same point, the others were at "Like to see" (routine data use in meetings). This differentiation helped the programme target intensive support where it was most needed.
Environmental systems change, Latin America. A multi-country IDRC programme on water governance in the Andes worked with watershed committees, municipal governments, and national water agencies as boundary partners. The OM design captured gradual relationship and behaviour changes across all three levels. Outcome Journals documented a shift in municipal government engagement from passive recipients of watershed data to active contributors (a "Like to see" marker), enabling the programme to position itself for policy-level engagement two years earlier than planned.
Compared To
| Method | Unit of Change | Attribution | Design Flexibility |
|---|---|---|---|
| Outcome Mapping | Boundary partner behaviour | None claimed | High |
| Outcome Harvesting | Any actor behaviour | None claimed | Very High (retrospective) |
| Most Significant Change | Stories of change | None claimed | High |
| Theory of Change | Programme logic | Implicit | Medium |
| Contribution Analysis | Programme contribution | Plausible claim | Medium |
| Logframe | Output/outcome targets | Implicit | Low |
Relevant Indicators
22 indicators across IDRC, DFID, and UNDP frameworks are relevant to monitoring Outcome Mapping implementation. Key examples:
- Number of boundary partners showing measurable progress against Outcome Challenges at midpoint
- Proportion of "Expect to See" progress markers achieved by Year 1
- Quality of evidence documented in Outcome Journals (rated by evaluator)
- Degree to which boundary partners participated in the Outcome Mapping design process
Related Tools
- MEStudio Logic Model Builder: for mapping the causal logic underlying your Outcome Challenges
- Evaluation Planner: for structuring the monitoring schedule and evidence collection
Related Topics
- Outcome Harvesting, a complementary method for systematically documenting boundary partner changes
- Most Significant Change, an alternative qualitative approach for capturing unexpected or transformative change
- Contribution Analysis, for building a causal argument about programme contribution
- Theory of Change, the causal logic underpinning the vision and mission
- Participatory Evaluation, broader framework for engaging stakeholders in evaluation design
Further Reading
- Earl, S., Carden, F., & Smutylo, T. (2001). Outcome Mapping: Building Learning and Reflection into Development Programmes. Ottawa: IDRC. The original and definitive text.
- IDRC (2012). Outcome Mapping Learning Community. www.outcomemapping.ca: resources, case studies, and a community of practice.
- Smutylo, T. (2005). Outcome Mapping: A Method for Tracking Behavioural Changes in Development Programmes. ILAC Brief 7. A concise practitioner introduction.
- Hearn, S. & Buffardi, A.L. (2016). What is Outcome Mapping? ODI Methods Lab Working Paper. A comparative review of OM vs. other approaches.