When to Use
Most Significant Change (MSC) is the right approach when you need to understand what stakeholders perceive as the most valuable outcomes of a programme — especially when those outcomes may be unexpected or difficult to predict in advance. Use MSC when:
- Outcomes are emergent or unpredictable — your programme operates in complex contexts where the most important changes may not be captured by pre-defined indicators
- You need beneficiary perspectives — you want to understand what change participants themselves consider most significant, not just what evaluators assume matters
- Stakeholder engagement is a priority — you want to involve beneficiaries, staff, and partners in defining and selecting what counts as meaningful change
- You're complementing quantitative data — you have indicator data but need rich qualitative evidence to explain what the numbers mean
- Adaptive management is needed — you want to surface unexpected outcomes that could inform programme adaptation
MSC is less useful when you need to measure specific pre-defined outcomes with precision (use SMART indicators for that) or when you need to establish causal attribution for a specific intervention (use contribution analysis or impact evaluation instead).
| Scenario | Use Most Significant Change? | Better Alternative |
|-----|-----|-----|
| Outcomes are emergent or unpredictable | Yes | — |
| Need to measure specific pre-defined outcomes | No | SMART Indicators |
| Want to understand beneficiary perspectives on success | Yes | — |
| Need causal attribution for specific intervention | No | Contribution Analysis |
| Participatory monitoring is a priority | Yes | — |
| Establishing statistical significance | No | Quasi-Experimental Design |
| Complex adaptive programme | Yes | Pairs well with Developmental Evaluation |
MSC works particularly well alongside other methods. It complements outcome harvesting — while both collect stories of change, MSC focuses on selecting the "most significant" through participatory panels, whereas outcome harvesting documents all verified outcomes regardless of perceived significance. MSC also pairs well with participatory evaluation approaches, as the story selection process embodies participatory principles.
How It Works
The MSC methodology follows a structured seven-step process that ensures stories are collected systematically and selected through participatory deliberation.
1. Define the domains of change. Begin by identifying the broad areas in which change might occur. These domains are not specific outcomes but categories such as "changes in community participation," "changes in individual empowerment," "changes in institutional relationships," or "changes in policy influence." Engage stakeholders in defining these domains to ensure they reflect what might matter. This stage sets the scope without predetermining what specific changes will be found.
2. Collect stories of change. Ask programme participants, staff, beneficiaries, and partners to submit stories describing significant changes they have experienced or observed during the programme period. Each story should answer: "What do you consider to be the most significant change that has occurred?" Stories must include context (who, when, where), the change itself (what happened), and why it is considered significant (the perceived value or impact). Collection can occur through interviews, written submissions, group discussions, or digital platforms.
3. Select representative samples. From all collected stories, select a representative subset for deeper analysis. This selection should ensure diversity across stakeholder groups, programme components, geographic areas, and types of change. The goal is not to select the "best" stories at this stage but to create a sample that captures the range of changes documented. Typically, 20-40 stories provide sufficient material for meaningful analysis.
4. Conduct participatory selection panels. Bring together diverse stakeholders — including beneficiaries, programme staff, partners, and sometimes donors — to review the selected stories and collectively decide which represent the most significant change. Each panel member reviews the stories independently, then discusses them as a group. The panel selects one or more stories as the "most significant" and articulates why those stories matter most. This deliberative process is where the method's power lies: stakeholders jointly define what counts as significant.
5. Feed back and verify. Return the selected stories to the people who told them, and where possible, verify the events described. This step ensures accuracy and maintains trust with participants. It also provides an opportunity to collect additional context or clarification. Verification does not mean the story must be independently corroborated in a research sense — rather, it means the storyteller confirms the account is accurate and consents to its use.
6. Analyse and report. Conduct thematic analysis on the full set of stories, not just the selected ones. Code stories for patterns, unexpected outcomes, and types of change. Report findings in ways that honour the narrative quality of the data while also surfacing patterns that inform programme learning. Good MSC reports include full stories alongside analysis that shows what the collection reveals about programme impact.
7. Use for programme adaptation. The final and most critical step is using MSC findings to inform programme decisions. The unexpected outcomes surfaced through MSC, the stakeholder-defined significance, and the patterns that emerge should feed directly into adaptive management processes. If MSC reveals that stakeholders value changes the programme didn't anticipate, the programme should adapt accordingly.
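The mechanical parts of steps 2–4 — structuring story records, sampling across stakeholder groups, and tallying panel choices — can be sketched in code. This is an illustrative sketch, not part of the MSC method itself: the field names, group labels, and `per_group` parameter are all assumptions, and a vote tally only supports the panel's deliberation, it never replaces it.

```python
import random
from collections import Counter
from dataclasses import dataclass


@dataclass
class Story:
    """One MSC story record: context, the change, and why it matters (step 2)."""
    storyteller: str
    stakeholder_group: str  # e.g. "beneficiary", "staff", "partner"
    domain: str             # e.g. "community participation"
    context: str            # who, when, where
    change: str             # what happened
    significance: str       # why the storyteller considers it significant


def stratified_sample(stories, per_group=5, seed=42):
    """Step 3: sample across stakeholder groups so the subset reflects
    the range of documented changes, not just the largest group."""
    rng = random.Random(seed)
    by_group = {}
    for s in stories:
        by_group.setdefault(s.stakeholder_group, []).append(s)
    sample = []
    for group_stories in by_group.values():
        k = min(per_group, len(group_stories))
        sample.extend(rng.sample(group_stories, k))
    return sample


def tally_panel_votes(votes):
    """Step 4: after independent review and group discussion, each panel
    member names the story they find most significant; the tally feeds
    back into the deliberation."""
    return Counter(votes).most_common()


# Usage: three panel members vote on sampled stories by storyteller name.
ranking = tally_panel_votes(["Amina", "Amina", "Joseph"])
```

In practice the sampling would also stratify by domain and geography (step 3 lists all four dimensions); a single grouping key keeps the sketch short.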
Key Components
A well-implemented MSC approach includes these essential elements:
- Clear domains of change — broad categories that guide story collection without predetermining outcomes. Domains should be developed with stakeholder input and cover the full range of potential impacts.
- Structured story collection protocol — consistent guidance for storytellers on what information to include: context, the change itself, and why it matters. Standardised collection tools ensure comparability across stories.
- Participatory selection panels — diverse groups of stakeholders who collectively decide which stories represent the most significant change. Panel composition should reflect the range of perspectives affected by the programme.
- Verification process — a mechanism to confirm story accuracy with storytellers and, where appropriate, through additional sources. This maintains integrity without imposing external validation standards.
- Thematic analysis framework — a systematic approach to coding and analysing stories for patterns. This includes developing a codebook, ensuring inter-coder reliability, and identifying both expected and unexpected themes.
- Feedback loops — mechanisms to return findings to participants and use MSC insights for programme adaptation. MSC without adaptation is merely data collection, not a learning tool.
- Documentation of the selection process — records of how stories were selected, who participated in panels, and the rationale for selections. This transparency allows others to understand how significance was determined.
- Integration with monitoring systems — MSC should not operate in isolation. It needs to connect with your broader M&E system to ensure stories inform routine decision-making.
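As a concrete illustration of the codebook component above, a minimal codebook can be represented as a mapping from code names to definitions and indicative keywords. The codes, definitions, and keyword-matching helper below are hypothetical examples; real qualitative coding rests on analyst judgement, so keyword matching is at most a naive first pass that a human coder confirms or rejects.

```python
# Hypothetical codebook: each code carries a clear definition (so coders
# apply it consistently) plus indicative keywords for a first-pass scan.
CODEBOOK = {
    "empowerment": {
        "definition": "Storyteller reports increased confidence, voice, "
                      "or influence in decisions.",
        "keywords": ["confidence", "speak", "decision", "voice"],
    },
    "time_use": {
        "definition": "Change in how household or individual time is spent.",
        "keywords": ["time", "hours", "distance", "travel"],
    },
}


def first_pass_codes(story_text):
    """Suggest candidate codes for human review by naive keyword match."""
    text = story_text.lower()
    return sorted(
        code for code, entry in CODEBOOK.items()
        if any(kw in text for kw in entry["keywords"])
    )


codes = first_pass_codes(
    "She now has the confidence to speak in community meetings."
)  # candidate codes for an analyst to confirm or reject
```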
Best Practices
Start with clear domains but remain open to surprises. Define broad domains of change to guide collection, but do not predetermine what specific changes will be found. The power of MSC lies in surfacing unexpected outcomes that stakeholders themselves consider significant. (MEAL Rule: EX08_P027)
Ensure diverse stakeholder participation in selection panels. The selection process should include beneficiaries, programme staff, partners, and where appropriate, donors. Each perspective brings different values to the question of what counts as significant. Avoid panels dominated by a single stakeholder type, as this biases what gets selected. (MEAL Rule: EX136_P027)
Invest in rigorous thematic analysis. Thematic analysis should follow established qualitative methods: develop a manageable coding scheme, create a codebook with clear definitions, and assess inter-coder reliability. Examine collected materials to identify patterns and relationships within and across collections. (MEAL Rule: EX107_R010)
Use structured coding approaches for consistency. When analysing stories, use systematic coding methods that allow you to build up sets of words and concepts that signify thematic patterns. Keep a codebook with code names and definitions as essential elements during qualitative analysis. (MEAL Rule: EX110_P075, EX091_R007)
Assess code consistency during analysis. In the analytical process, periodically assess the internal consistency of each code to ensure that all text speaks to the same theme or idea. This quality control step prevents thematic drift and ensures reliable findings. (MEAL Rule: EX087_P072)
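One standard statistic for the inter-coder reliability these practices call for is Cohen's kappa, which corrects raw agreement between two coders for the agreement expected by chance. Below is a minimal from-scratch sketch, assuming each coder assigns exactly one code per story; for production analysis a library implementation such as scikit-learn's `cohen_kappa_score` would be a reasonable choice.

```python
from collections import Counter


def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who each assign one code per story.
    Returns a value in [-1, 1]; 1 is perfect agreement, 0 is chance-level."""
    assert len(coder_a) == len(coder_b), "coders must rate the same stories"
    n = len(coder_a)
    # Observed agreement: share of stories where the coders assigned the same code.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if expected == 1.0:  # degenerate case: both coders used one identical code
        return 1.0
    return (observed - expected) / (1 - expected)


# Two coders independently code six stories against the same codebook.
a = ["empowerment", "income", "empowerment", "health", "income", "income"]
b = ["empowerment", "income", "income", "health", "income", "income"]
kappa = cohens_kappa(a, b)  # agreement beyond chance
```

Thresholds vary by field, but values above roughly 0.6–0.7 are commonly read as acceptable agreement; lower values signal that code definitions in the codebook need tightening before coding continues.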
Make MSC participatory throughout. For participatory M&E to be worthwhile, stakeholders must be able to participate meaningfully. This means project and partner staff need skills in participatory methods, and sufficient time must be allocated for genuine engagement. (MEAL Rule: EX72_P018)
Connect MSC to adaptive management. The ultimate value of MSC is not the stories themselves but what they reveal that should change in programme design or implementation. Establish clear processes for feeding MSC findings into adaptation decisions. Without this link, MSC becomes an exercise in storytelling rather than a learning tool.
Common Mistakes
Treating MSC as a one-time exercise. The most common failure is conducting a single MSC cycle at mid-term or end-line and filing the results away. MSC is most valuable as an ongoing process that regularly surfaces new insights. Schedule MSC cycles at regular intervals (quarterly or biannually) throughout programme implementation.
Predetermining what counts as significant. If you define domains too narrowly or implicitly signal what changes you expect to find, you will miss the unexpected outcomes that make MSC valuable. Keep domains broad and genuinely open to surprises.
Using MSC without participatory selection. The selection process is where MSC derives its legitimacy — stakeholders jointly define what matters. If evaluators or programme managers select the "significant" stories without stakeholder panels, you have merely collected stories, not conducted MSC.
Insufficient time for participatory processes. Participatory learning processes are more time intensive than those in which only a few people are involved. More time is needed to organise meetings, engage diverse stakeholders, and facilitate meaningful deliberation. Underestimating this time requirement leads to rushed, superficial selection processes. (MEAL Rule: EX72_P018)
Poor quality thematic analysis. Content analysis is circular rather than linear: the steps are nearly always repeated as the researcher homes in on important themes. Failing to iterate through the analysis, to revisit codes, or to assess code consistency produces unreliable findings. (MEAL Rule: EX69_W013)
Collecting stories without a clear purpose. Collect only information that is useful to the organisation; don't gather stories simply because you can. MSC requires significant effort from participants and analysts, so ensure there is a clear use for the data before launching collection. (MEAL Rule: EX131_R024)
Failing to return findings to participants. MSC participants invest time sharing personal stories. Failing to feed findings back to them, or worse, using their stories without their continued consent, violates the participatory ethos of the method. Always return findings and maintain ongoing consent.
Examples
Agricultural Livelihoods Programme — East Africa
A 5-year agricultural resilience programme in Kenya and Uganda implemented MSC cycles every six months to capture outcomes that pre-defined indicators missed. The programme initially defined domains around "farm income," "adoption of practices," and "food security." However, MSC stories consistently revealed that the most significant change for women participants was not economic but social: increased confidence to speak in community meetings and greater influence in household decision-making. These empowerment outcomes were not in the original domains, but the MSC selection panels identified them as most significant. The programme adapted to include explicit women's empowerment activities and added related indicators. This example demonstrates how MSC can surface outcomes that programme designers didn't anticipate but stakeholders value highly.
Governance Programme — West Africa
A governance strengthening programme in Sierra Leone used MSC to understand how civil society organizations were influencing policy. The MSC process revealed that the most significant changes were occurring through informal networks and personal relationships rather than the formal advocacy channels the programme had designed. Stories described CSO leaders influencing policy through dinner conversations with ministers and through trusted intermediaries. The MSC selection panels identified these informal influence pathways as most significant because they were actually producing change. The programme revised its theory of change to include informal influence mechanisms and adjusted its monitoring to capture these pathways. This example shows how MSC can reveal the actual mechanisms of change that differ from programme assumptions.
WASH Programme — South Asia
A water and sanitation programme in Bangladesh implemented MSC alongside routine monitoring. MSC cycles collected stories from beneficiaries about changes in their lives. The most significant change selected repeatedly was not access to water infrastructure (the programme's primary output) but reduced time burden for women and girls who no longer needed to travel long distances for water. This time savings translated into increased school attendance for girls and more time for income-generating activities. The MSC findings prompted the programme to add time-use indicators and to articulate this outcome more explicitly in its results framework. This example illustrates how MSC can identify secondary outcomes that have significant impact but may not be captured by primary output indicators.
Compared To
MSC is one of several qualitative monitoring and evaluation approaches. The key differences:
| Feature | Most Significant Change | Outcome Harvesting | Participatory Evaluation | Focus Group Discussions |
|-----|-----|-----|-----|-----|
| Primary purpose | Select most significant changes through participatory panels | Document all verified outcomes regardless of significance | Engage stakeholders throughout evaluation process | Elicit group perspectives on specific topics |
| Selection process | Participatory panels select most significant stories | All verified outcomes are documented | Stakeholders co-design and co-analyse | Facilitator guides discussion |
| Outcome scope | Focuses on most significant only | Captures all outcomes (expected and unexpected) | Varies by design | Focused on discussion topics |
| Stakeholder role | Panel members select significance | Informants report outcomes | Co-researchers throughout | Participants provide perspectives |
| Best for | Identifying what stakeholders value most | Documenting what changed | Democratic evaluation processes | Exploring specific topics in depth |
| Analysis depth | Thematic analysis of all stories | Outcome verification and significance assessment | Collaborative sense-making | Thematic summary of discussions |
MSC and outcome harvesting are often confused because both collect stories of change. The key difference: outcome harvesting documents all verified outcomes to answer "what changed?" MSC selects the most significant stories to answer "what matters most?" MSC can use outcome harvesting as its story collection method, but the participatory selection distinguishes it.
Relevant Indicators
23 indicators across 5 major donor frameworks (DFID, UNDP, World Bank, EU, Sida) relate to MSC and participatory monitoring approaches:
- Participatory monitoring — "Proportion of monitoring activities that actively involve beneficiaries and local stakeholders" (DFID)
- MSC implementation — "Frequency of Most Significant Change cycles completed during programme implementation" (UNDP)
- Unexpected outcomes — "Proportion of programme stories of change that document unexpected outcomes" (World Bank)
- Stakeholder engagement — "Number of stakeholder groups participating in MSC story selection panels" (EU)
- Adaptive use — "Percentage of selected significant changes that inform programme adaptation decisions" (Sida)
Related Tools
- Story Collection Tool — Guided template for collecting MSC stories with structured prompts for context, change, and significance
- Qualitative Analysis Matrix — Spreadsheet tool for coding and analysing qualitative stories with codebook management
Related Topics
- Outcome Harvesting — Similar story-based approach that documents all outcomes rather than selecting the most significant
- Participatory Evaluation — Broader approach to engaging stakeholders throughout evaluation that MSC exemplifies
- Contribution Analysis — Method for assessing whether programme pathways actually caused observed changes
- Qualitative Data — Understanding the nature and analysis of non-numeric evidence
- Thematic Analysis — Systematic approach to coding and analysing qualitative text
- Adaptive Management — Using monitoring insights to inform programme adaptation
- Monitoring vs Evaluation — Understanding how MSC functions as a monitoring approach
Further Reading
- The Most Significant Change (MSC) Technique — Oxfam America. Practical toolkit for implementing MSC with worked examples.
- Most Significant Change: A Guide for Practitioners — Stories of Change. Comprehensive guide to MSC methodology and applications.
- Davies, R. and Dart, J. (2005). The Most Significant Change (MSC) Technique — Original methodological paper explaining the approach and rationale.
- BetterEvaluation: Most Significant Change — Living collection of MSC resources, tools, and practical guidance from the evaluation community.
- Kusek, J. (2019). Using MSC for Monitoring and Evaluation — Practical guidance on integrating MSC into routine monitoring systems.
Data References:
- MEAL Rules Best Practices: EX08_P027, EX136_P027, EX087_P072, EX107_R010, EX110_P075, EX091_R007
- MEAL Rules Common Mistakes: EX72_P018, EX31_W004, EX69_W013, EX131_R024
- Donor Indicators: 23 indicators across DFID, UNDP, World Bank, EU, Sida
Last Updated: 2026-02-27
Status: Published
Word Count: ~2,400 words