When to Use
Developmental evaluation (DE) is the right approach when the programme itself is still being designed, when goals and strategies are genuinely emergent, and when adapting in real time is more important than proving what has already been built. Developed by Michael Quinn Patton, DE was designed for the space between formative evaluation (which assumes a relatively stable programme) and summative evaluation (which requires a completed one).
Use it when:
- The programme is a social innovation: the team is genuinely discovering what works through iteration; there is no established model to test or scale
- The environment is highly complex: political shifts, market changes, or emergent social dynamics mean the programme must adapt continuously to remain relevant
- Goals are evolving: the programme's theory of change is being built and tested during implementation, not applied from a design document
- Real-time feedback is more valuable than a final report: the programme team needs evaluative thinking embedded in their work, not an external verdict at the end
- DFID (now FCDO) or similar donors support adaptive programming: an increasing number of donors explicitly fund DE as part of adaptive management investments
DE is not appropriate for established programmes with a stable theory of change, for evaluations where donor accountability requires a defined before/after comparison, or for situations where the evaluator's independence must be fully preserved (the embedded nature of DE creates role-boundary challenges).
| Scenario | Use Developmental Evaluation? | Better Alternative |
|---|---|---|
| Programme actively innovating | Yes | — |
| Stable programme, testing effectiveness | No | Impact Evaluation |
| Understanding why outcomes vary | No | Realist Evaluation |
| Donor requires summative verdict | No | Formative + Summative |
| Programme needs performance data | Alongside | MEL Plans |
| Emergency response learning | Yes | — |
How It Works
Unlike conventional evaluation, which operates on a programme-then-evaluate sequence, developmental evaluation is simultaneous with programme development. The evaluator is a thinking partner, not an external observer.
Step 1: Establish the evaluator's role and boundaries
The DE evaluator is embedded in the programme team, attending team meetings, contributing to strategy discussions, and providing real-time evaluative feedback. Boundaries must be explicitly negotiated: the evaluator maintains intellectual independence and evaluative perspective even while working alongside the team.
Step 2: Support theory development
In complex programmes, the theory of change is itself emergent. The evaluator's first contribution is often helping the team make their implicit theory explicit and testable.
Step 3: Design real-time monitoring
Identify the critical uncertainties in the emerging theory and design lightweight, rapid data collection processes that can inform decisions in weeks, not months. This is not a comprehensive M&E system; it is targeted data collection to answer the specific questions the programme team is grappling with now.
Step 4: Provide ongoing evaluative feedback
The primary output of DE is not a report; it is a continuous flow of evaluative thinking fed into programme decisions. This might be a brief memo after a stakeholder consultation, a pattern analysis from field visit observations, or a synthesis of early outcome signals.
Step 5: Document learning and theory refinement
Over time, the DE process generates an evolving record of what has been tried, what was learned, and how the programme theory has changed. This documentation becomes the basis for later summative evaluation and for sharing learning with the broader field.
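Step 5's learning record has no prescribed format, but keeping it as structured entries makes later synthesis (and handover to a summative evaluation) much easier. A minimal sketch in Python; the field names, helper functions, and example entries are illustrative assumptions, not part of any DE standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LearningEntry:
    """One record in a DE learning log: what was tried, what was learned, and why."""
    entry_date: date
    strategy: str                   # what was tried
    finding: str                    # what was learned
    decision: str                   # "continued", "adapted", or "abandoned"
    rationale: str                  # why the decision was taken
    theory_revision: bool = False   # did this change the theory of change?

def theory_revisions(log: list[LearningEntry]) -> list[LearningEntry]:
    """Entries that changed the programme theory, in chronological order."""
    return sorted((e for e in log if e.theory_revision), key=lambda e: e.entry_date)

def abandoned_paths(log: list[LearningEntry]) -> list[str]:
    """The 'paths not taken': strategies dropped, paired with the reason why."""
    return [f"{e.strategy}: {e.rationale}" for e in log if e.decision == "abandoned"]

# Fictional example entries
log = [
    LearningEntry(date(2024, 3, 1), "Employer job fairs",
                  "Low conversion to placements", "abandoned",
                  "Employers attended but did not hire"),
    LearningEntry(date(2024, 5, 10), "Direct employer partnerships",
                  "Durable placements at two of three sites", "continued",
                  "Strong early outcome signals", theory_revision=True),
]

print(len(theory_revisions(log)))   # count of documented theory revisions
print(abandoned_paths(log)[0])      # first abandoned strategy, with rationale
```

Recording an explicit `decision` and `rationale` for every entry is what preserves the "paths not taken" that conventional evaluation reports tend to lose.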
Key Components
- Embedded evaluator: a professional evaluator working as a learning partner within the programme team, not as an external reviewer
- Real-time feedback processes: lightweight data collection and synthesis methods that produce findings in days or weeks
- Living theory of change: an explicitly documented and regularly updated programme theory that captures what has been learned
- Innovation documentation: systematic recording of what is being tried, adapted, and abandoned, and why
- Role clarity: explicit agreement between the evaluator, programme team, and funder about what DE is and is not
- Developmental evaluation report: periodic documentation of learning and theory development (not a traditional evaluation report)
- Integration with adaptive management: DE findings must be connected to decision-making processes, not just circulated as documents
Best Practices
Clarify the DE role before you start. Developmental evaluators who are unclear about their role become either captured (they stop evaluating and just support the team) or marginalised (the team stops engaging with them). Negotiate and document the evaluator's role explicitly at programme start.
Use the ToC as a working hypothesis, not a fixed frame. The programme theory in DE is a hypothesis about how change will happen. Every implementation experience is an opportunity to test and refine it, not to measure performance against it.
Don't rely solely on routine monitoring. Routine data tells you what is happening, not why, and in emergent programmes, the "why" is the critical question. DE requires methods that probe mechanisms, not just track indicators.
Conduct real-time evaluations during emergencies and pivots. When the programme context changes dramatically (a political crisis, a funding shift, a major implementation failure), a rapid DE review can provide the evaluative thinking needed to navigate the change.
Document what was not pursued. Developmental evaluation's most underused contribution is documenting the paths not taken: the hypotheses rejected, the strategies abandoned, and the reasons why. This learning is often invisible in conventional evaluation reports.
Common Mistakes
Treating DE as an excuse to avoid rigour. The absence of pre-specified outcomes does not mean anything goes. DE still requires systematic data collection, transparent reasoning, and honest reporting of what is not working.
Blurring the evaluator's independence. When the evaluator becomes a de facto programme staff member, they lose the evaluative distance that makes their contribution valuable. The evaluator should be able to say "this isn't working" without fear of undermining their position in the team.
Using DE language for conventional formative evaluation. DE is not just "evaluation done early." It is a specific approach for genuinely complex and emergent programmes. Applying the label to a standard formative evaluation misrepresents both.
Neglecting to connect DE findings to decisions. Evaluative thinking that stays in the evaluator's notebook serves no one. Build explicit feedback loops between DE findings and programme decision-making processes.
Not planning for transition to summative evaluation. DE generates valuable documentation of programme development. Plan from the start how this will be used when the programme reaches a point where summative evaluation is appropriate.
Examples
Social innovation, Canada. A national foundation in Canada used developmental evaluation to support a five-year social innovation fund testing new models for youth employment in marginalised communities. The embedded DE team attended quarterly strategy meetings, conducted rapid ethnographic observations at implementation sites, and produced monthly learning briefs. Over the first 18 months, the evaluation documented seven theory revisions as grantees discovered which employer engagement strategies produced durable job placements versus short-term placements. These findings were shared across the portfolio, enabling grantees to learn from each other in near real time.
Adaptive health programme, East Africa. A DFID-funded adaptive health systems strengthening programme in Uganda used DE to support a team working in three politically complex districts. The evaluator documented how the programme theory shifted from a supply-side (training health workers) to a demand-side (community engagement) focus within 12 months as the team responded to facilities refusing to implement changes. The DE documentation provided the evidence base for a mid-programme design review that the donor approved without requiring the usual external evaluation process.
Emergency response, South Asia. Following a cyclone response in Bangladesh, a major international NGO used a rapid real-time developmental evaluation to assess which coordination mechanisms were producing efficient resource allocation and which were creating bottlenecks. The evaluation ran for six weeks alongside the response. Three coordination changes were implemented within the evaluation period based on findings, each documented and tested in real time. The final report was completed within the response phase rather than after.
Compared To
| Approach | Programme Phase | Evaluator Role | Primary Output |
|---|---|---|---|
| Developmental Evaluation | During innovation | Embedded partner | Real-time learning |
| Formative Evaluation | During stable implementation | External advisor | Improvement recommendations |
| Summative Evaluation | Post-programme | External assessor | Effectiveness verdict |
| Utilization-Focused Evaluation | Any | External with user focus | Decision-relevant findings |
| Realist Evaluation | Post or during | External analyst | Middle-range theory |
Relevant Indicators
16 indicators across DFID, UNDP, and foundation frameworks. Key examples:
- Number of programme adaptations formally documented as informed by DE findings
- Quality rating of evaluator-team engagement process (rated by both parties)
- Frequency of real-time feedback provided to programme team (target: at least monthly)
- Degree of theory of change refinement documented over the evaluation period
Related Tools
- MEStudio Logic Model Builder: for developing and updating the living theory of change
- Evaluation Planner: for structuring the DE monitoring approach and real-time data collection
Related Topics
- Adaptive Management, the programme management practice that DE is designed to support
- Utilization-Focused Evaluation, a related approach where intended user needs drive all evaluation decisions
- Learning Agendas, the structured learning priorities that can anchor DE data collection
- Theory of Change, the living framework that DE continuously tests and refines
- Most Significant Change, a complementary method for capturing unexpected or transformative outcomes during DE
Further Reading
- Patton, M.Q. (2011). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press. The foundational text.
- Patton, M.Q., McKegg, K., & Wehipeihana, N. (2016). Developmental Evaluation Exemplars: Principles in Practice. New York: Guilford. Case studies from 12 DE evaluations.
- Gamble, J. (2008). A Developmental Evaluation Primer. Montreal: McConnell Foundation. A concise practitioner introduction.
- DFID (2014). Broadening the Range of Designs and Methods for Impact Evaluations. Covers developmental approaches alongside experimental designs.