M&E Studio
Decision-Grade M&E, Responsibly Built

© 2026 Logic Lab LLC. All rights reserved.

Pillar · Methods · 14 min read

Most Significant Change

A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.

When to Use

Most Significant Change (MSC) is the right approach when you need to understand what stakeholders perceive as the most valuable outcomes of a programme, especially when those outcomes may be unexpected or difficult to predict in advance. Use MSC when:

  • Outcomes are emergent or unpredictable: your programme operates in complex contexts where the most important changes may not be captured by pre-defined indicators
  • You need beneficiary perspectives: you want to understand what change participants themselves consider most significant, not just what evaluators assume matters
  • Stakeholder engagement is a priority: you want to involve beneficiaries, staff, and partners in defining and selecting what counts as meaningful change
  • You're complementing quantitative data: you have indicator data but need rich qualitative evidence to explain what the numbers mean
  • Adaptive management is needed: you want to surface unexpected outcomes that could inform programme adaptation

MSC is less useful when you need to measure specific pre-defined outcomes with precision (use SMART indicators for that) or when you need to establish causal attribution for a specific intervention (use contribution analysis or impact evaluation instead).

| Scenario | Use Most Significant Change? | Better Alternative |
| --- | --- | --- |
| Outcomes are emergent or unpredictable | Yes | — |
| Need to measure specific pre-defined outcomes | No | SMART Indicators |
| Want to understand beneficiary perspectives on success | Yes | — |
| Need causal attribution for specific intervention | No | Contribution Analysis |
| Participatory monitoring is a priority | Yes | — |
| Establishing statistical significance | No | Quasi-Experimental Design |
| Complex adaptive programme | Yes | Developmental Evaluation |

MSC works particularly well alongside other methods. It complements outcome harvesting: while both collect stories of change, MSC focuses on selecting the "most significant" through participatory panels, whereas outcome harvesting documents all verified outcomes regardless of perceived significance. MSC also pairs well with participatory evaluation approaches, as the story selection process embodies participatory principles.

How It Works

The MSC methodology follows a structured seven-step process that ensures stories are collected systematically and selected through participatory deliberation.

  1. Define the domains of change. Begin by identifying the broad areas in which change might occur. These domains are not specific outcomes but categories such as "changes in community participation," "changes in individual empowerment," "changes in institutional relationships," or "changes in policy influence." Engage stakeholders in defining these domains to ensure they reflect what might matter. This stage sets the scope without predetermining what specific changes will be found.

  2. Collect stories of change. Request that programme participants, staff, beneficiaries, and partners submit stories describing significant changes they have experienced or observed during the programme period. Each story should answer: "What do you consider to be the most significant change that has occurred?" Stories must include context (who, when, where), the change itself (what happened), and why it is considered significant (the perceived value or impact). Collection can occur through interviews, written submissions, group discussions, or digital platforms.

  3. Select representative samples. From all collected stories, select a representative subset for deeper analysis. This selection should ensure diversity across stakeholder groups, programme components, geographic areas, and types of change. The goal is not to select the "best" stories at this stage but to create a sample that captures the range of changes documented. Typically, 20-40 stories provide sufficient material for meaningful analysis.

  4. Conduct participatory selection panels. Bring together diverse stakeholders, including beneficiaries, programme staff, partners, and sometimes donors, to review the selected stories and collectively decide which represent the most significant change. Each panel member reviews the stories independently, then discusses them as a group. The panel selects one or more stories as the "most significant" and articulates why those stories matter most. This deliberative process is where the method's power lies: stakeholders jointly define what counts as significant.

  5. Feed back and verify. Return the selected stories to the people who told them and, where possible, verify the events described. This step ensures accuracy and maintains trust with participants. It also provides an opportunity to collect additional context or clarification. Verification does not mean the story must be independently corroborated in a research sense; rather, it means the storyteller confirms the account is accurate and consents to its use.

  6. Analyse and report. Conduct thematic analysis on the full set of stories, not just the selected ones. Code stories for patterns, unexpected outcomes, and types of change. Report findings in ways that honour the narrative quality of the data while also surfacing patterns that inform programme learning. Good MSC reports include full stories alongside analysis that shows what the collection reveals about programme impact.

  7. Use for programme adaptation. The final and most critical step is using MSC findings to inform programme decisions. The unexpected outcomes surfaced through MSC, the stakeholder-defined significance, and the patterns that emerge should feed directly into adaptive management processes. If MSC reveals that stakeholders value changes the programme didn't anticipate, the programme should adapt accordingly.
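The collection and selection steps above can be sketched as a minimal data model. This is an illustrative sketch only: the `Story` field names and the majority-vote tallying rule are assumptions for demonstration, not part of the MSC methodology itself, and a real panel resolves ties through deliberation rather than arbitrarily.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Story:
    """One MSC story of change (hypothetical fields, step 2)."""
    story_id: str
    storyteller: str        # who reported the change
    domain: str             # broad domain of change (step 1)
    context: str            # who, when, where
    change: str             # what happened
    significance: str       # why the storyteller considers it significant
    verified: bool = False  # set True after feedback and verification (step 5)

def select_most_significant(votes: dict[str, str]) -> str:
    """Tally panel members' votes (step 4) and return the story ID chosen
    by the most panellists. Ties are broken arbitrarily here; a real panel
    would resolve them through further discussion."""
    tally = Counter(votes.values())
    return tally.most_common(1)[0][0]

# Example: three panel members each vote for one story.
panel_votes = {"member_a": "s2", "member_b": "s2", "member_c": "s1"}
print(select_most_significant(panel_votes))  # s2
```

The point of holding stories as structured records is that the same data can feed both the participatory selection (step 4) and the thematic analysis of the full set (step 6).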

Key Components

A well-implemented MSC approach includes these essential elements:

  • Clear domains of change: broad categories that guide story collection without predetermining outcomes. Domains should be developed with stakeholder input and cover the full range of potential impacts.
  • Structured story collection protocol: consistent guidance for storytellers on what information to include: context, the change itself, and why it matters. Standardised collection tools ensure comparability across stories.
  • Participatory selection panels: diverse groups of stakeholders who collectively decide which stories represent the most significant change. Panel composition should reflect the range of perspectives affected by the programme.
  • Verification process: a mechanism to confirm story accuracy with storytellers and, where appropriate, through additional sources. This maintains integrity without imposing external validation standards.
  • Thematic analysis framework: a systematic approach to coding and analysing stories for patterns. This includes developing a codebook, ensuring inter-coder reliability, and identifying both expected and unexpected themes.
  • Feedback loops: mechanisms to return findings to participants and use MSC insights for programme adaptation. MSC without adaptation is merely data collection, not a learning tool.
  • Documentation of the selection process: records of how stories were selected, who participated in panels, and the rationale for selections. This transparency allows others to understand how significance was determined.
  • Integration with monitoring systems: MSC should not operate in isolation. It needs to connect with your broader M&E system to ensure stories inform routine decision-making.

Best Practices

Start with clear domains but remain open to surprises. Define broad domains of change to guide collection, but do not predetermine what specific changes will be found. The power of MSC lies in surfacing unexpected outcomes that stakeholders themselves consider significant.

Ensure diverse stakeholder participation in selection panels. The selection process should include beneficiaries, programme staff, partners, and where appropriate, donors. Each perspective brings different values to the question of what counts as significant. Avoid panels dominated by a single stakeholder type, as this biases what gets selected.

Invest in rigorous thematic analysis. Thematic analysis should follow established qualitative methods: develop a manageable coding scheme, create a codebook with clear definitions, and assess inter-coder reliability. Examine collected materials to identify patterns and relationships within and across collections.
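Inter-coder reliability can be quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below computes it from scratch for two coders labelling the same excerpts; the theme codes in the example are invented for illustration.

```python
def cohens_kappa(coder1: list[str], coder2: list[str]) -> float:
    """Cohen's kappa for two coders' labels on the same excerpts:
    observed agreement corrected for agreement expected by chance."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    labels = set(coder1) | set(coder2)
    expected = sum(
        (coder1.count(lab) / n) * (coder2.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# Two coders label five story excerpts with (invented) theme codes.
c1 = ["empowerment", "income", "empowerment", "health", "income"]
c2 = ["empowerment", "income", "income", "health", "income"]
print(round(cohens_kappa(c1, c2), 2))  # 0.69
```

Values near 1 indicate strong agreement; low values suggest the codebook definitions need tightening before analysis continues.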

Use structured coding approaches for consistency. When analysing stories, use systematic coding methods that allow you to build up sets of words and concepts that signify thematic patterns. Keep a codebook with code names and definitions as essential elements during qualitative analysis.
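One minimal way to keep coding consistent is to hold the codebook as data and apply it mechanically as a first pass, then confirm each assignment by hand. The codebook entries below are invented for illustration; in practice MSC codes emerge from the stories themselves.

```python
# Hypothetical codebook: code name -> definition and signal words.
CODEBOOK = {
    "empowerment": {
        "definition": "Gains in confidence, voice, or decision-making power",
        "keywords": ["confidence", "speak", "decision"],
    },
    "time_use": {
        "definition": "Changes in how time is spent or saved",
        "keywords": ["hours", "time", "travel"],
    },
}

def code_story(text: str) -> list[str]:
    """Return the codes whose signal words appear in the story text.
    A mechanical first pass only; human review should confirm each code."""
    lowered = text.lower()
    return [
        code for code, entry in CODEBOOK.items()
        if any(kw in lowered for kw in entry["keywords"])
    ]

story = "She now has the confidence to speak in village meetings."
print(code_story(story))  # ['empowerment']
```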

Assess code consistency during analysis. In the analytical process, periodically assess the internal consistency of each code to ensure that all text speaks to the same theme or idea. This quality control step prevents thematic drift and ensures reliable findings.
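The consistency check described above can be approximated crudely in code: for each code, measure what share of its coded excerpts actually contain one of the code's signal words, and flag codes that fall below a threshold for manual review. The data structures and the 0.8 threshold are assumptions for illustration.

```python
def drifting_codes(coded: dict[str, list[str]],
                   keywords: dict[str, list[str]],
                   threshold: float = 0.8) -> dict[str, float]:
    """For each code, compute the share of its excerpts containing at
    least one of that code's signal words, and return only the codes
    whose share falls below `threshold` (candidates for thematic drift)."""
    flagged = {}
    for code, excerpts in coded.items():
        hits = sum(
            any(kw in e.lower() for kw in keywords[code]) for e in excerpts
        )
        share = hits / len(excerpts)
        if share < threshold:
            flagged[code] = share
    return flagged

coded = {"empowerment": ["gained confidence to speak", "bought a new cow"]}
keywords = {"empowerment": ["confidence", "speak", "decision"]}
print(drifting_codes(coded, keywords))  # {'empowerment': 0.5}
```

A flagged code is not necessarily wrong; it is a prompt to re-read its excerpts and either refine the code's definition or split it.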

Make MSC participatory throughout. For participatory M&E to be worthwhile, stakeholders must be able to participate meaningfully. This means project and partner staff need skills in participatory methods, and sufficient time must be allocated for genuine engagement.

Connect MSC to adaptive management. The ultimate value of MSC is not the stories themselves but what they reveal that should change in programme design or implementation. Establish clear processes for feeding MSC findings into adaptation decisions. Without this link, MSC becomes an exercise in storytelling rather than a learning tool.

Common Mistakes

Treating MSC as a one-time exercise. The most common failure is conducting a single MSC cycle at mid-term or end-line and filing the results away. MSC is most valuable as an ongoing process that regularly surfaces new insights. Schedule MSC cycles at regular intervals (quarterly or biannually) throughout programme implementation.

Predetermining what counts as significant. If you define domains too narrowly or implicitly signal what changes you expect to find, you will miss the unexpected outcomes that make MSC valuable. Keep domains broad and genuinely open to surprises.

Using MSC without participatory selection. The selection process is where MSC derives its legitimacy: stakeholders jointly define what matters. If evaluators or programme managers select the "significant" stories without stakeholder panels, you have merely collected stories, not conducted MSC.

Insufficient time for participatory processes. Participatory learning processes are more time-intensive than those involving only a few people: more time is needed to organise meetings, engage diverse stakeholders, and facilitate meaningful deliberation. Underestimating this requirement leads to rushed, superficial selection processes.

Poor-quality thematic analysis. The process of content analysis is circular rather than linear: the steps are nearly always repeated as the researcher homes in on important themes. Failing to iterate through the analysis, revisit codes, and assess code consistency produces unreliable findings.

Collecting stories without a clear purpose. Only collect information that is useful to the organisation; don't collect it just because you can. MSC requires significant effort from participants and analysts, so ensure there is a clear use for the data before launching collection.

Failing to return findings to participants. MSC participants invest time sharing personal stories. Failing to feed findings back to them, or worse, using their stories without their continued consent, violates the participatory ethos of the method. Always return findings and maintain ongoing consent.

Examples

Agricultural Livelihoods Programme, East Africa

A 5-year agricultural resilience programme in Kenya and Uganda implemented MSC cycles every six months to capture outcomes that pre-defined indicators missed. The programme initially defined domains around "farm income," "adoption of practices," and "food security." However, MSC stories consistently revealed that the most significant change for women participants was not economic but social: increased confidence to speak in community meetings and greater influence in household decision-making. These empowerment outcomes were not in the original domains, but the MSC selection panels identified them as most significant. The programme adapted to include explicit women's empowerment activities and added related indicators. This example demonstrates how MSC can surface outcomes that programme designers didn't anticipate but stakeholders value highly.

Governance Programme, West Africa

A governance strengthening programme in Sierra Leone used MSC to understand how civil society organizations were influencing policy. The MSC process revealed that the most significant changes were occurring through informal networks and personal relationships rather than the formal advocacy channels the programme had designed. Stories described CSO leaders influencing policy through dinner conversations with ministers and through trusted intermediaries. The MSC selection panels identified these informal influence pathways as most significant because they were actually producing change. The programme revised its theory of change to include informal influence mechanisms and adjusted its monitoring to capture these pathways. This example shows how MSC can reveal the actual mechanisms of change that differ from programme assumptions.

WASH Programme, South Asia

A water and sanitation programme in Bangladesh implemented MSC alongside routine monitoring. MSC cycles collected stories from beneficiaries about changes in their lives. The most significant change selected repeatedly was not access to water infrastructure (the programme's primary output) but reduced time burden for women and girls who no longer needed to travel long distances for water. This time savings translated into increased school attendance for girls and more time for income-generating activities. The MSC findings prompted the programme to add time-use indicators and to articulate this outcome more explicitly in its results framework. This example illustrates how MSC can identify secondary outcomes that have significant impact but may not be captured by primary output indicators.

Compared To

MSC is one of several qualitative monitoring and evaluation approaches. The key differences:

| Feature | Most Significant Change | Outcome Harvesting | Participatory Evaluation | Focus Group Discussions |
| --- | --- | --- | --- | --- |
| Primary purpose | Select most significant changes through participatory panels | Document all verified outcomes regardless of significance | Engage stakeholders throughout evaluation process | Elicit group perspectives on specific topics |
| Selection process | Participatory panels select most significant stories | All verified outcomes are documented | Stakeholders co-design and co-analyse | Facilitator guides discussion |
| Outcome scope | Focuses on most significant only | Captures all outcomes (expected and unexpected) | Varies by design | Focused on discussion topics |
| Stakeholder role | Panel members select significance | Informants report outcomes | Co-researchers throughout | Participants provide perspectives |
| Best for | Identifying what stakeholders value most | Documenting what changed | Democratic evaluation processes | Exploring specific topics in depth |
| Analysis depth | Thematic analysis of all stories | Outcome verification and significance assessment | Collaborative sense-making | Thematic summary of discussions |

MSC and outcome harvesting are often confused because both collect stories of change. The key difference: outcome harvesting documents all verified outcomes to answer "what changed?" MSC selects the most significant stories to answer "what matters most?" MSC can use outcome harvesting as its story collection method, but the participatory selection distinguishes it.

Relevant Indicators

23 indicators across 5 major donor frameworks (DFID, UNDP, World Bank, EU, Sida) relate to MSC and participatory monitoring approaches:

  • Participatory monitoring: "Proportion of monitoring activities that actively involve beneficiaries and local stakeholders" (DFID)
  • MSC implementation: "Frequency of Most Significant Change cycles completed during programme implementation" (UNDP)
  • Unexpected outcomes: "Proportion of programme stories of change that document unexpected outcomes" (World Bank)
  • Stakeholder engagement: "Number of stakeholder groups participating in MSC story selection panels" (EU)
  • Adaptive use: "Percentage of selected significant changes that inform programme adaptation decisions" (Sida)

Related Tools

  • Story Collection Tool: guided template for collecting MSC stories with structured prompts for context, change, and significance
  • Qualitative Analysis Matrix: spreadsheet tool for coding and analysing qualitative stories with codebook management

Related Topics

  • Outcome Harvesting: similar story-based approach that documents all outcomes rather than selecting the most significant
  • Participatory Evaluation: broader approach to engaging stakeholders throughout evaluation that MSC exemplifies
  • Contribution Analysis: method for assessing whether programme pathways actually caused observed changes
  • Qualitative Data: understanding the nature and analysis of non-numeric evidence
  • Thematic Analysis: systematic approach to coding and analysing qualitative text
  • Adaptive Management: using monitoring insights to inform programme adaptation
  • Monitoring vs Evaluation: understanding how MSC functions as a monitoring approach

Further Reading

  • The Most Significant Change (MSC) Technique, Oxfam America. Practical toolkit for implementing MSC with worked examples.
  • Most Significant Change: A Guide for Practitioners, Stories of Change. Comprehensive guide to MSC methodology and applications.
  • Davies, R. and Dart, J. (2005). The Most Significant Change (MSC) Technique. Original methodological paper explaining the approach and rationale.
  • BetterEvaluation: Most Significant Change. Living collection of MSC resources, tools, and practical guidance from the evaluation community.
  • Kusek, J. (2019). Using MSC for Monitoring and Evaluation. Practical guidance on integrating MSC into routine monitoring systems.

At a Glance

Captures unexpected outcomes through systematic collection and selection of stories of change, revealing what stakeholders value most.

Best For

  • Programmes where outcomes are emergent or difficult to predict in advance
  • Understanding the perceived significance of change from beneficiary perspectives
  • Participatory monitoring that engages stakeholders in defining success
  • Complementing quantitative indicators with rich qualitative evidence

Complexity

Medium

Timeframe

3-6 weeks for initial cycle; ongoing collection throughout programme life

Linked Indicators

23 indicators across 5 donor frameworks

DFID · UNDP · World Bank · EU · Sida

Examples

  • Proportion of programme stories of change that document unexpected outcomes
  • Number of stakeholder groups participating in MSC story selection panels
  • Frequency of MSC cycles completed during programme implementation

Related Topics

  • Outcome Harvesting (Pillar): a retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.
  • Participatory Evaluation (Pillar): an evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.
  • Contribution Analysis (Pillar): a structured approach to building a credible case for how and why a programme contributed to observed outcomes, without requiring experimental attribution.
  • Qualitative Data (Term): non-numerical information captured through words, images, or observations that reveals the how and why behind programme outcomes, providing depth and context to quantitative findings.
  • Thematic Analysis (Term): a systematic method for identifying, analysing, and reporting patterns (themes) in qualitative data through coding and categorisation.
  • Adaptive Management (Core Concept): a management approach that uses continuous learning from monitoring and evaluation data to adjust programme strategies and activities in response to changing evidence or context.
  • Monitoring vs Evaluation (Term): monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.