
Evaluation Types Comparison Matrix

Compare five evaluation types at a glance: Formative, Summative, Developmental, Impact, and Process. Choose the right evaluation for your program stage and purpose.

Ben Playfair · 5 min read
evaluation · comparison matrix · evaluation design · reference card

Purpose

Choosing the right type of evaluation is a critical decision that influences resource allocation, methodological design, and the utility of findings. This reference card compares five primary evaluation types to help you select the right approach for your program stage and purpose.


The Comparison Matrix

| Dimension | Formative | Summative | Developmental | Impact | Process |
|-----------|-----------|-----------|---------------|--------|---------|
| Primary Purpose | Improve program design and delivery during implementation | Judge overall effectiveness and whether goals were achieved | Support innovation and adaptation in complex environments | Determine whether observed changes are attributable to the intervention | Examine how a program operates and whether it is implemented as designed |
| When to Use | During design, piloting, or early implementation | At program end or major milestone | When innovating or operating in uncertain, evolving contexts | After sufficient time for outcomes to materialize | At any stage when you need to understand implementation fidelity |
| Key Questions | What is working? What needs to change? How can we improve delivery? | Did the program achieve its objectives? Was it worth the investment? | What is emerging? How should we adapt? What are we learning? | What difference did the program make? Would outcomes have occurred without it? | Is the program being delivered as planned? What explains variation in delivery? |
| Primary Audience | Program staff, implementers | Donors, policymakers, boards | Program leaders, innovation teams | Policymakers, sector stakeholders, donors | Program managers, operations teams |
| Timing | During implementation (ongoing or periodic) | End of program or funding cycle | Continuous, embedded throughout | Post-implementation (requires time lag for effects) | During implementation (can be standalone or embedded) |
| Typical Duration | 2-6 months (can be iterative) | 3-9 months | Ongoing (duration of innovation period) | 6-18 months (can extend beyond program) | 2-6 months |
| Evaluator Role | Internal or external, collaborative with program team | Usually external for independence and credibility | Embedded with program team, acts as critical friend | External, methodologically independent | Can be internal or external |
| Resource Requirements | Low to Medium | Medium to High | High (requires skilled embedded evaluator) | High to Very High (rigorous designs are expensive) | Low to Medium |
| Methodological Focus | Rapid assessments, process observation, stakeholder feedback | Outcome measurement, effectiveness analysis, value-for-money | Real-time feedback, systems thinking, emergent pattern identification | Counterfactual analysis (RCTs, quasi-experimental designs, contribution analysis) | Implementation mapping, fidelity assessment, bottleneck analysis |
| Risk if Misapplied | Wasted improvement opportunity if done too late | Irrelevant findings if program has already changed significantly | Scope creep without clear boundaries | Attribution claims without adequate design | Descriptive without linking to outcomes |
| Best For | Programs in early stages or undergoing redesign | Mature programs requiring accountability evidence | Innovation, complex adaptive programs, emergent strategies | Programs seeking to demonstrate causal impact for scale-up or policy | Programs needing to understand implementation gaps and delivery quality |


Decision Guide: Which Evaluation Type Do You Need?

Choose Formative Evaluation when:

  • Your program is being developed, piloted, or redesigned
  • You need rapid feedback to improve delivery before scaling
  • You want to test assumptions about what works
  • Internal learning is the primary purpose

Choose Summative Evaluation when:

  • A program or funding cycle is ending
  • Donors or boards need accountability evidence
  • You need to determine if objectives were achieved
  • Findings will inform future programming decisions

Choose Developmental Evaluation when:

  • You are innovating in complex or unpredictable environments
  • Program design is evolving and outcomes are emergent
  • You need an evaluator embedded with the team to provide real-time learning
  • Traditional evaluation frameworks cannot capture what is happening

Choose Impact Evaluation when:

  • You need to demonstrate that observed changes are attributable to the intervention
  • Results will inform policy decisions or justify scale-up investment
  • Sufficient time has passed for outcomes to materialize
  • You have budget for rigorous counterfactual designs

Choose Process Evaluation when:

  • You need to understand whether the program is being implemented as designed
  • You want to explain variation in outcomes across sites or populations
  • You need to identify implementation bottlenecks before they undermine results
  • You want to complement an impact evaluation with implementation evidence
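For readers who like to see the logic spelled out, the guide above can be condensed into a small rule-of-thumb function. This is a minimal sketch in Python: the ProgramContext fields (stage, environment, needs_attribution, accountability_due) are illustrative simplifications invented for this example, and the branch order reflects one reasonable reading of the guide, not a formal algorithm.

```python
from dataclasses import dataclass

# Hypothetical helper: all field names below are illustrative
# simplifications, not terms from the reference card itself.
@dataclass
class ProgramContext:
    stage: str            # "design", "implementation", "end", or "post"
    environment: str      # "stable" or "complex"
    needs_attribution: bool   # must changes be causally attributed?
    accountability_due: bool  # do donors/boards need end-of-cycle evidence?

def suggest_evaluation_type(ctx: ProgramContext) -> str:
    """Map a program context to one of the five evaluation types,
    following the decision guide above. A rough first-pass heuristic,
    not a substitute for evaluator judgment."""
    # Complex, evolving contexts point to developmental evaluation.
    if ctx.environment == "complex" and ctx.stage in ("design", "implementation"):
        return "Developmental"
    # Early-stage programs benefit most from formative feedback.
    if ctx.stage == "design":
        return "Formative"
    # Attribution questions need a counterfactual design, post-implementation.
    if ctx.needs_attribution and ctx.stage == "post":
        return "Impact"
    # End-of-cycle accountability evidence calls for summative work.
    if ctx.accountability_due and ctx.stage == "end":
        return "Summative"
    # Default: examine implementation fidelity, relevant at any stage.
    return "Process"

print(suggest_evaluation_type(ProgramContext("design", "stable", False, False)))
# -> "Formative"
```

In practice the branches overlap, which is exactly why combining evaluation types, covered next, is often the stronger strategy.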

Combining Evaluation Types

Strategic evaluation planning often involves combining types:

  • Formative + Process: Improve delivery during implementation by understanding both what to change and how implementation is working
  • Process + Impact: The gold-standard pairing, revealing not just whether the program worked, but why and how
  • Developmental + Formative: For innovative programs, use developmental evaluation in early stages and transition to formative as the model stabilizes
  • Summative + Process: At program end, assess both overall effectiveness and implementation quality to explain results

Common Mistakes

  1. Using summative timing for formative purposes - Conducting an "end-of-project learning evaluation" when there is no project left to improve
  2. Demanding impact evidence without impact budgets - Rigorous attribution requires significant investment in design and data
  3. Skipping process evaluation entirely - Without understanding implementation, you cannot explain why outcomes were or were not achieved
  4. Treating developmental evaluation as "anything goes" - It requires highly skilled evaluators and structured learning processes
  5. Assuming one evaluation type covers all needs - Different questions require different approaches at different times