Developmental Evaluation

An evaluation approach designed for complex, adaptive programmes in which goals and processes are emergent, and the evaluator works alongside the programme team as an embedded learning partner.

When to Use

Developmental evaluation (DE) is the right approach when the programme itself is still being designed, when goals and strategies are genuinely emergent, and when adapting in real time matters more than proving what has already been built. Developed by Michael Quinn Patton, DE was designed for the space between formative evaluation (which assumes a relatively stable programme) and summative evaluation (which requires a completed one).

Use it when:

  • The programme is a social innovation: the team is genuinely discovering what works through iteration; there is no established model to test or scale
  • The environment is highly complex: political shifts, market changes, or emergent social dynamics mean the programme must adapt continuously to remain relevant
  • Goals are evolving: the programme's theory of change is being built and tested during implementation, not applied from a design document
  • Real-time feedback is more valuable than a final report: the programme team needs evaluative thinking embedded in their work, not an external verdict at the end
  • The donor supports adaptive programming: a growing number of funders, including FCDO (formerly DFID), explicitly fund DE as part of adaptive-management investments

DE is not appropriate for established programmes with a stable theory of change, for evaluations where donor accountability requires a defined before/after comparison, or for situations where the evaluator's independence must be strictly maintained (the embedded nature of DE creates role-boundary challenges).

Scenario | Use Developmental Evaluation? | Better Alternative
Programme actively innovating | Yes | —
Stable programme, testing effectiveness | No | Impact Evaluation
Understanding why outcomes vary | No | Realist Evaluation
Donor requires summative verdict | No | Formative + Summative
Programme needs performance data | Alongside | M&E Plans
Emergency response learning | Yes | —

How It Works

Unlike conventional evaluation, which follows a programme-then-evaluate sequence, developmental evaluation proceeds in parallel with programme development. The evaluator is a thinking partner, not an external observer.

Step 1: Establish the evaluator's role and boundaries

The DE evaluator is embedded in the programme team, attending team meetings, contributing to strategy discussions, and providing real-time evaluative feedback. Boundaries must be explicitly negotiated: the evaluator maintains intellectual independence and evaluative perspective even while working alongside the team.

Step 2: Support theory development

In complex programmes, the theory of change is itself emergent. The evaluator's first contribution is often helping the team make their implicit theory explicit and testable.

Step 3: Design real-time monitoring

Identify the critical uncertainties in the emerging theory and design lightweight, rapid data collection processes that can inform decisions in weeks, not months. This is not a comprehensive M&E system; it is targeted data collection to answer the specific questions the programme team is grappling with now.
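
The shape of such targeted collection varies by programme, but each inquiry should tie a critical uncertainty to a live decision and a deadline. A minimal sketch in Python (the RapidInquiry structure and all field names are illustrative assumptions, not a standard DE artefact):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RapidInquiry:
    """One targeted, time-boxed data collection activity in a DE cycle."""
    uncertainty: str        # the critical uncertainty in the emerging theory
    decision_informed: str  # the programme decision this inquiry must feed
    method: str             # lightweight method, e.g. pulse survey, field observation
    decide_by: date         # the decision deadline the findings must precede

# Example: a two-week inquiry scoped to one live question, not a full M&E system
inquiry = RapidInquiry(
    uncertainty="Do employers re-engage after the first placement cycle?",
    decision_informed="Whether to expand the employer-liaison role next quarter",
    method="Ten structured phone interviews with participating employers",
    decide_by=date(2026, 3, 31),
)
```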

Step 4: Provide ongoing evaluative feedback

The primary output of DE is not a report; it is a continuous flow of evaluative thinking fed into programme decisions. This might be a brief memo after a stakeholder consultation, a pattern analysis from field visit observations, or a synthesis of early outcome signals.
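
To keep that flow systematic rather than ad hoc, some teams log each piece of feedback against the decision it informed. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FeedbackItem:
    """One unit of evaluative feedback delivered to the programme team."""
    delivered_on: date
    form: str               # e.g. "memo", "pattern analysis", "signal synthesis"
    finding: str            # the evaluative observation, stated plainly
    decision_informed: str  # the team decision or discussion it fed into

log = [
    FeedbackItem(
        delivered_on=date(2026, 2, 10),
        form="memo",
        finding="Stakeholders conflate the pilot with an earlier, failed scheme",
        decision_informed="Reframe communications before the district launch",
    ),
]
```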

Step 5: Document learning and theory refinement

Over time, the DE process generates an evolving record of what has been tried, what was learned, and how the programme theory has changed. This documentation becomes the basis for later summative evaluation and for sharing learning with the broader field.
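
Kept as a structured log, this record also captures what was abandoned and why, which supports the "document what was not pursued" practice discussed below. A minimal sketch, under the same hypothetical-field caveat:

```python
from dataclasses import dataclass
from datetime import date
from typing import Literal

@dataclass
class LearningEntry:
    """One entry in the evolving record of programme development."""
    logged_on: date
    strategy: str                                         # what was tried
    status: Literal["continued", "adapted", "abandoned"]  # current disposition
    evidence: str                                         # what was observed
    theory_change: str                                    # how the theory of change was revised, if at all

entry = LearningEntry(
    logged_on=date(2026, 6, 5),
    strategy="Supply-side focus: training facility health workers",
    status="abandoned",
    evidence="Facilities declined to implement changes without community demand",
    theory_change="Causal pathway shifted to demand-side community engagement",
)
```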

Key Components

  • Embedded evaluator: a professional evaluator working as a learning partner within the programme team, not as an external reviewer
  • Real-time feedback processes: lightweight data collection and synthesis methods that produce findings in days or weeks
  • Living theory of change: an explicitly documented and regularly updated programme theory that captures what has been learned
  • Innovation documentation: systematic recording of what is being tried, adapted, and abandoned, and why
  • Role clarity: explicit agreement between the evaluator, programme team, and funder about what DE is and is not
  • Developmental evaluation report: periodic documentation of learning and theory development (not a traditional evaluation report)
  • Integration with adaptive management: DE findings must be connected to decision-making processes, not just circulated as documents

Best Practices

Clarify the DE role before you start. Developmental evaluators who are unclear about their role become either captured (they stop evaluating and just support the team) or marginalised (the team stops engaging with them). Negotiate and document the evaluator's role explicitly at programme start.

Use the theory of change as a working hypothesis, not a fixed frame. The programme theory in DE is a hypothesis about how change will happen. Every implementation experience is an opportunity to test and refine it, not to measure performance against it.

Don't rely solely on routine monitoring. Routine data tells you what is happening, not why, and in emergent programmes, the "why" is the critical question. DE requires methods that probe mechanisms, not just track indicators.

Conduct real-time evaluations during emergencies and pivots. When the programme context changes dramatically (a political crisis, a funding shift, a major implementation failure), a rapid DE review can provide the evaluative thinking needed to navigate the change.

Document what was not pursued. Developmental evaluation's most underused contribution is documenting the paths not taken: the hypotheses rejected, the strategies abandoned, and the reasons why. This learning is often invisible in conventional evaluation reports.

Common Mistakes

Treating DE as an excuse to avoid rigour. The absence of pre-specified outcomes does not mean anything goes. DE still requires systematic data collection, transparent reasoning, and honest reporting of what is not working.

Blurring the evaluator's independence. When the evaluator becomes a de facto programme staff member, they lose the evaluative distance that makes their contribution valuable. The evaluator should be able to say "this isn't working" without fear of undermining their position in the team.

Using DE language for conventional formative evaluation. DE is not just "evaluation done early." It is a specific approach for genuinely complex and emergent programmes. Applying the label to a standard formative evaluation misrepresents both.

Neglecting to connect DE findings to decisions. Evaluative thinking that stays in the evaluator's notebook serves no one. Build explicit feedback loops between DE findings and programme decision-making processes.

Not planning for transition to summative evaluation. DE generates valuable documentation of programme development. Plan from the start how this will be used when the programme reaches a point where summative evaluation is appropriate.

Examples

Social innovation, Canada. A national foundation in Canada used developmental evaluation to support a five-year social innovation fund testing new models for youth employment in marginalised communities. The embedded DE team attended quarterly strategy meetings, conducted rapid ethnographic observations at implementation sites, and produced monthly learning briefs. Over the first 18 months, the evaluation documented seven theory revisions as grantees discovered which employer engagement strategies produced durable job placements versus short-term placements. These findings were shared across the portfolio, enabling grantees to learn from each other in near real-time.

Adaptive health programme, East Africa. A DFID-funded adaptive health systems strengthening programme in Uganda used DE to support a team working in three politically complex districts. The evaluator documented how the programme theory shifted from a supply-side (training health workers) to a demand-side (community engagement) focus within 12 months as the team responded to facilities refusing to implement changes. The DE documentation provided the evidence base for a mid-programme design review that the donor approved without requiring the usual external evaluation process.

Emergency response, South Asia. Following a cyclone response in Bangladesh, a major international NGO used a rapid real-time developmental evaluation to assess which coordination mechanisms were producing efficient resource allocation and which were creating bottlenecks. The evaluation ran for six weeks alongside the response. Three coordination changes were implemented within the evaluation period based on findings, each documented and tested in real time. The final report was completed within the response phase rather than after it.

Compared To

Approach | Programme Phase | Evaluator Role | Primary Output
Developmental Evaluation | During innovation | Embedded partner | Real-time learning
Formative Evaluation | During stable implementation | External advisor | Improvement recommendations
Summative Evaluation | Post-programme | External assessor | Effectiveness verdict
Utilization-Focused Evaluation | Any | External with user focus | Decision-relevant findings
Realist Evaluation | Post or during | External analyst | Middle-range theory

Relevant Indicators

16 indicators across DFID, UNDP, and foundation frameworks. Key examples:

  • Number of programme adaptations formally documented as informed by DE findings
  • Quality rating of evaluator-team engagement process (rated by both parties)
  • Frequency of real-time feedback provided to the programme team (target: at least monthly; a computation sketch follows this list)
  • Degree of theory of change refinement documented over the evaluation period
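
The feedback-frequency indicator above is simple to compute from a dated feedback log such as the one sketched under Step 4. A minimal illustration; the monthly-coverage logic is one possible operationalisation, not a donor-mandated formula:

```python
from datetime import date

def months_covered(feedback_dates: list[date], start: date, end: date) -> float:
    """Share of calendar months in [start, end] with at least one feedback item."""
    hit = {(d.year, d.month) for d in feedback_dates if start <= d <= end}
    total = (end.year - start.year) * 12 + (end.month - start.month) + 1
    return len(hit) / total

# "At least monthly" feedback corresponds to coverage of 1.0 over the period.
dates = [date(2026, 1, 15), date(2026, 2, 10), date(2026, 4, 2)]
print(months_covered(dates, date(2026, 1, 1), date(2026, 4, 30)))  # 0.75 (March missed)
```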

Related Tools

  • MEStudio Logic Model Builder: for developing and updating the living theory of change
  • Evaluation Planner: for structuring the DE monitoring approach and real-time data collection

Related Topics

  • Adaptive Management: the programme management practice that DE is designed to support
  • Utilization-Focused Evaluation: a related approach in which the needs of intended users drive all evaluation decisions
  • Realist Evaluation: an approach that asks what works, for whom, in what circumstances, and why, by identifying the mechanisms through which programmes produce outcomes
  • Learning Agendas: the structured learning priorities that can anchor DE data collection
  • Theory of Change: the living framework that DE continuously tests and refines
  • Most Significant Change: a complementary method for capturing unexpected or transformative outcomes during DE
  • M&E Plans: the operational document that translates the theory of change into routine data collection, complementing DE's targeted inquiries

Further Reading

  • Patton, M.Q. (2011). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press. The foundational text.
  • Patton, M.Q., McKegg, K., & Wehipeihana, N. (2016). Developmental Evaluation Exemplars: Principles in Practice. New York: Guilford Press. Case studies from 12 DE evaluations.
  • Gamble, J. (2008). A Developmental Evaluation Primer. Montreal: McConnell Foundation. A concise practitioner introduction.
  • DFID (2012). Broadening the Range of Designs and Methods for Impact Evaluations. DFID Working Paper 38. Covers developmental approaches alongside experimental designs.

At a Glance

Supports innovation and adaptation in complex programmes by providing real-time evaluative thinking alongside programme design and implementation.

Best For

  • Programmes operating in rapidly changing or unpredictable environments
  • Social innovation initiatives where goals and strategies are still being discovered
  • Pilot programmes testing new models before scale
  • Systems change initiatives where what to measure is itself uncertain

Complexity

High

Timeframe

Ongoing throughout the programme; embedded evaluator engagement from the start

Linked Indicators

16 indicators across 3 donor frameworks: DFID, UNDP, McConnell Foundation

Example Indicators

  • Quality and frequency of evaluative feedback provided to the programme team in real time
  • Number of programme adaptations informed by developmental evaluation findings
  • Degree to which emerging programme theory has been refined through evaluative inquiry
