When to Use
Outcome Mapping is the right approach when a program works by influencing people and organizations rather than delivering services or products directly to beneficiaries. It was developed by the International Development Research Centre (IDRC) specifically for complex, multi-actor programs where pre-set outcome targets are unrealistic and long-term social change cannot be attributed to any single intervention.
Use it when:
- The program works through partners: your theory of change depends on changing the behaviors, relationships, and actions of partner organizations, government agencies, or civil society groups who then influence others
- Advocacy, policy, or systems change: the program is trying to influence what institutions do, not what individuals receive
- Attribution is not the goal: you care about documenting and understanding contribution to change, not proving causation
- Participatory M&E is valued: boundary partners can be involved in defining what change looks like and monitoring their own progress
- IDRC is the funder: IDRC requires Outcome Mapping for many of its grants, with specific reporting structures
Outcome Mapping is the wrong tool when programs primarily deliver services (health, food, shelter), when funders require impact-level attribution, or when the program timeline is too short for meaningful behavioural change.
| Scenario | Use Outcome Mapping? | Better Alternative |
|---|---|---|
| Advocacy and policy influence | Yes | - |
| Service delivery to beneficiaries | No | Logframe |
| Emergent outcomes, unknown partners | Partially | Outcome Harvesting |
| Donor requires attribution | No | Impact Evaluation |
| Complex multi-actor systems | Yes | - |
| Short program (under 2 years) | Cautiously | Most Significant Change |
How It Works
Outcome Mapping has three design stages and an ongoing monitoring process.
Stage 1: Intentional Design
Define the program's vision, mission, and boundary partners. A boundary partner is any person, group, or organization your program works with directly and whose behavior you intend to influence. Then write an Outcome Challenge for each boundary partner - a description of the ideal behavior change you hope to see in them by program end.
For each Outcome Challenge, develop a graduated set of progress markers: behaviors on a spectrum from "Expect to see" (early, easy changes), to "Like to see" (deeper engagement), to "Love to see" (transformative shifts). Finally, map the program's own strategy - what activities and resources will support each boundary partner toward their Outcome Challenge.
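In practice, teams often keep these design elements as structured records. Below is a minimal illustrative sketch in Python; the class and field names are assumptions for this example, not part of the methodology:

```python
# Minimal sketch of Stage 1 (Intentional Design) artefacts as plain data
# structures. All names are illustrative assumptions; Outcome Mapping
# prescribes no particular data format.
from dataclasses import dataclass, field
from enum import Enum


class MarkerLevel(Enum):
    EXPECT_TO_SEE = "expect to see"  # early, relatively easy changes
    LIKE_TO_SEE = "like to see"      # deeper engagement
    LOVE_TO_SEE = "love to see"      # transformative shifts


@dataclass
class ProgressMarker:
    level: MarkerLevel
    description: str  # an observable behaviour, not a program activity


@dataclass
class BoundaryPartner:
    name: str
    outcome_challenge: str  # ideal behaviour change hoped for by program end
    progress_markers: list[ProgressMarker] = field(default_factory=list)
    strategy_map: list[str] = field(default_factory=list)  # supporting activities and resources


@dataclass
class IntentionalDesign:
    vision: str   # large-scale change the program contributes to
    mission: str  # what the program itself does and how
    boundary_partners: list[BoundaryPartner] = field(default_factory=list)
```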
Stage 2: Outcome and Performance Monitoring
Establish an ongoing monitoring process using Outcome Journals (one per boundary partner). Regularly record any behavioural changes observed, with supporting evidence. Use Strategy Journals to assess whether program activities are having their intended effect, and track the program's own organisational practices as part of performance monitoring.
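The Outcome Journal can be pictured as an append-only log of observed behavioural changes per boundary partner. The sketch below is illustrative only; the field names are assumptions, and any monitoring template capturing the same information would serve:

```python
# Illustrative Outcome Journal: one per boundary partner, each entry
# recording an observed behavioural change and the evidence behind it.
# Field names are assumptions for this sketch.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class JournalEntry:
    observed_on: date
    progress_marker: str            # which progress marker the change relates to
    change_observed: str            # what the partner actually did differently
    evidence: list[str]             # decisions, documents, artefacts supporting the claim
    contributing_factors: str = ""  # context, including influences beyond the program


@dataclass
class OutcomeJournal:
    boundary_partner: str
    entries: list[JournalEntry] = field(default_factory=list)

    def record(self, entry: JournalEntry) -> None:
        """Append an observed behavioural change with its supporting evidence."""
        self.entries.append(entry)
```

Note that each entry records what the partner did, not what the program delivered; program activities belong in the Strategy Journal.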
Stage 3: Evaluation Planning
Use the intentional design and the monitoring data to set evaluation priorities and plan how findings will be used. Outcome Mapping designs can feed into several evaluation approaches; Outcome Harvesting is commonly used alongside OM to document boundary partner changes systematically.
Key Components
A complete Outcome Mapping design includes the components below (a brief sketch follows the list):
- Vision statement: the large-scale social change the program contributes to (not directly causes)
- Mission statement: what the program itself does and how
- Boundary partners: typically 3-7 direct partners whose behavior changes are being tracked
- Outcome Challenges: one per boundary partner, describing ideal behavioural change
- Progress markers: graduated behavioural indicators at three levels (Expect/Like/Love to see)
- Strategy maps: activities designed to support each boundary partner
- Outcome journals: ongoing records of behavioural change evidence per partner
- Strategy journals: records of whether program strategies are working
- Organisational practices monitoring: internal accountability on how well the program team is functioning
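As a rough completeness check, a team keeping its design as a structured document (for example JSON or YAML loaded into a dict) could verify that each component is present. A hypothetical sketch, with key names chosen for this example only:

```python
# Hypothetical completeness check over a design stored as a plain dict.
# The key names are assumptions for this sketch, not a prescribed schema.
REQUIRED_COMPONENTS = [
    "vision", "mission", "boundary_partners", "outcome_challenges",
    "progress_markers", "strategy_maps", "outcome_journals",
    "strategy_journals", "organisational_practices",
]


def missing_components(design: dict) -> list[str]:
    """Return the design components that are absent or empty."""
    return [key for key in REQUIRED_COMPONENTS if not design.get(key)]
```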
Best Practices
Co-design with boundary partners. Outcome Challenges and progress markers developed without partner input tend to be unrealistic and miss locally relevant markers of change.
Use backwards mapping. Start from the long-term vision and work backwards to identify what changes in boundary partners are necessary and what the program must do to support those changes.
Report behavioural evidence, not activities. Outcome Journals must document what boundary partners actually did differently - not program activities or outputs. Evidence should be specific: observed behaviors, documented decisions, produced artefacts.
IDRC expects annual reporting. If IDRC is the funder, outcome reports must document progress against progress markers for each boundary partner with specific evidence of behavioural change.
Set realistic time expectations. Transformative behavioural change - the "Love to See" markers - takes 2-3+ years. Programs that expect all markers to be achieved in 12 months will generate discouraging monitoring data that misrepresents real progress.
Common Mistakes
Applying it to service delivery programs. Outcome Mapping is specifically designed for programs that work by influencing partner behavior. If the program runs clinics, distributes food, or provides direct services, the methodology does not fit.
Designing without boundary partner input. Outcome Challenges written entirely by program staff reflect program assumptions, not boundary partner realities. The resulting progress markers are often irrelevant or patronising.
Too many boundary partners. Tracking more than seven boundary partners creates a monitoring burden that collapses under its own weight. Prioritize the 3-5 partners whose behavior change is most critical.
Treating progress markers as targets. Progress markers are a monitoring and learning tool, not performance targets. Evaluating staff performance against "Love to See" achievement sets up perverse incentives and discourages honest reporting.
Confusing OM's vision with attribution. The vision statement in OM deliberately describes large-scale change that the program does not claim to cause. Evaluators who conflate the vision with the program's attributed impact misrepresent the methodology's intent.
Examples
Advocacy and governance, West Africa. An IDRC-funded research-to-policy program in Ghana identified four boundary partners: the Parliamentary Finance Committee, the Ministry of Finance, a national civil society coalition, and a regional think tank. Outcome Challenges focused on each partner's use of research evidence in budget decisions. Progress markers tracked from basic awareness of research findings through to formal policy citations. Monitoring documented that, 18 months into the program and ahead of schedule, the civil society coalition began systematically referencing program research in parliamentary submissions (a "Like to See" change). This finding prompted an early acceleration of engagement activities with the Finance Committee.
Capacity building, East Africa. A DFID-funded organisational capacity-building program in Uganda worked with six district health management teams (DHMTs) as boundary partners. Outcome Challenges focused on DHMTs developing and implementing evidence-based district health plans. Progress markers tracked from attending training, through using monitoring data in quarterly planning meetings, to adjusting annual budgets based on performance data. One DHMT reached "Love to See" markers (budget reallocation based on data) at 30 months; others were at "Like to See" (routine data use in meetings) at the same point. The differentiation helped the program target intensive support where it was needed.
Environmental systems change, Latin America. A multi-country IDRC program on water governance in the Andes worked with watershed committees, municipal governments, and national water agencies as boundary partners. The OM design captured gradual relationship and behavior changes across all three levels. Outcome Journals documented a shift in municipal government engagement from passive recipients of watershed data to active contributors - a "Like to See" marker - enabling the program to position itself for policy-level engagement two years earlier than planned.
Compared To
| Method | Unit of Change | Attribution | Design Flexibility |
|---|---|---|---|
| Outcome Mapping | Boundary partner behavior | None claimed | High |
| Outcome Harvesting | Any actor behavior | None claimed | Very High (retrospective) |
| Most Significant Change | Stories of change | None claimed | High |
| Theory of Change | Program logic | Implicit | Medium |
| Contribution Analysis | Program contribution | Plausible claim | Medium |
| Logframe | Output/outcome targets | Implicit | Low |
Relevant Indicators
22 indicators are available across IDRC, DFID, and UNDP frameworks for monitoring Outcome Mapping implementation. Key examples:
- Number of boundary partners showing measurable progress against Outcome Challenges at midpoint
- Proportion of "Expect to See" progress markers achieved by Year 1 (see the calculation sketch after this list)
- Quality of evidence documented in Outcome Journals (rated by evaluator)
- Degree to which boundary partners participated in the Outcome Mapping design process
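The second indicator above is a simple proportion. A minimal sketch of the calculation, assuming progress markers are tracked with a level and an achievement date (both the data shape and the function name are assumptions for this example):

```python
# Illustrative calculation: proportion of "Expect to see" progress markers
# achieved by a cutoff date (e.g. the end of Year 1). The data shape is an
# assumption for this sketch, not a prescribed format.
from datetime import date


def expect_to_see_achievement(markers: list[dict], cutoff: date) -> float:
    """markers: [{"level": "expect to see", "achieved_on": date or None}, ...]"""
    expect = [m for m in markers if m["level"] == "expect to see"]
    if not expect:
        return 0.0
    achieved = [m for m in expect
                if m.get("achieved_on") is not None and m["achieved_on"] <= cutoff]
    return len(achieved) / len(expect)
```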
Related Tools
- MEStudio Logic Model Builder: for mapping the causal logic underlying your Outcome Challenges
- Evaluation Planner: for structuring the monitoring schedule and evidence collection
Related Topics
- Outcome Harvesting: a complementary method for systematically documenting boundary partner changes
- Most Significant Change: an alternative qualitative approach for capturing unexpected or transformative change
- Contribution Analysis: for building a causal argument about program contribution
- Theory of Change: the causal logic underpinning the vision and mission
- Participatory Evaluation: broader framework for engaging stakeholders in evaluation design