
Formative Evaluation

Evaluation conducted during program implementation to inform improvement, answering what is working, what needs to change, and how the program can deliver better. Together with summative evaluation (conducted at or after program completion), it is one of the two core evaluation purposes.

Formative evaluation happens during implementation and is oriented toward improvement. It asks "what is working, what is not, and what should change while there is still time to act?"

What Formative Evaluation Asks

Formative evaluation focuses on live, actionable questions about a program in motion:

  • Is the program reaching its intended participants, in the intended numbers, with the intended intensity?
  • Are activities being delivered as designed, or has implementation drifted from the original model?
  • Are early outcome signals present, even if full outcomes are not yet measurable?
  • What implementation adjustments would improve results over the remaining program period?

This is a fundamentally different question set from summative evaluation, which asks "did it work overall, and was it worth the money?" Formative evaluation exists to fix the car while it is still driving. Summative evaluation exists to tell you whether the trip was worthwhile.

When to Use Formative Evaluation

Formative evaluation is most valuable in five situations:

  • Early-phase programs where implementation patterns are still stabilizing and course correction is cheap.
  • Complex theories of change where the causal pathway has multiple assumptions that need field testing.
  • Innovations and pilots where the program model itself is being tested, not just executed.
  • Adaptive management contexts where the program is explicitly designed to learn and adjust.
  • Multi-year programs with mid-term review cycles built into the donor agreement.

If a program is short, simple, and low-risk, a formative evaluation may not justify its cost. Everywhere else, skipping it is usually a false economy.

Design Features

Formative evaluations are designed around use, not rigor. Four features matter:

  • Timing is mid-implementation, typically after enough activity has occurred to generate data but early enough for findings to influence the remaining program period.
  • Methodology is rapid and practical. You are not running a randomized controlled trial. Mixed-methods with light qualitative fieldwork and routine monitoring data is the common pattern.
  • Priority is responsiveness over generalizability. The audience is the program team, not the academic literature.
  • Findings feed the same program cycle that produced them. If the report lands after the program has ended, it was a summative evaluation with a formative label.

Formative vs Summative

| Feature | Formative | Summative |
| --- | --- | --- |
| Timing | During implementation | At or after program end |
| Orientation | Improvement | Judgment |
| Primary audience | Program team | Donor, leadership, public |
| Typical method | Mixed-methods, rapid | Rigorous, often comparative |
| Use of findings | Course-correct the current program | Account for results, inform future programs |

Most programs of meaningful size need both across the project lifespan. They answer different questions and serve different decisions.

Proposal Context

Multi-year programs should show both formative (a mid-term review or process evaluation) and summative (a final evaluation) in the evaluation plan. A proposal with only an end-of-project evaluation misses the adaptive-management and learning opportunities donors increasingly prize, and it reads as the work of a team that does not plan to learn during implementation.

Timing matters. Too early and the program has not stabilized enough to generate useful signal; the data is noise. Too late and findings cannot realistically inform the current program cycle. For a 3-5 year program, a mid-term review at month 18-24 is the common placement and usually the right one.

Budget formative evaluation at roughly 1.5-2% of the total program budget, and summative evaluation at 2-4%. Understating evaluation costs is a frequent tell in proposal budgets.
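
To make these rules of thumb concrete, here is a minimal sketch that turns them into numbers. The program length and budget in the example are hypothetical, and the percentages simply restate the guidance above, so treat the output as an order-of-magnitude check rather than a costing tool.

```python
# Illustrative only: rough evaluation timing and budget figures for a
# hypothetical program, using the rules of thumb from this page.
# The program length and budget below are assumptions, not recommendations.

def evaluation_plan_sketch(program_months: int, program_budget_usd: float) -> dict:
    """Return an indicative mid-term review month and evaluation budget ranges."""
    return {
        # Mid-term review roughly halfway through, consistent with the
        # month 18-24 guidance for a 3-5 year program.
        "midterm_review_month": round(program_months / 2),
        # Formative evaluation at ~1.5-2% of total program budget.
        "formative_budget_usd": (0.015 * program_budget_usd, 0.02 * program_budget_usd),
        # Summative evaluation at ~2-4% of total program budget.
        "summative_budget_usd": (0.02 * program_budget_usd, 0.04 * program_budget_usd),
    }

# Example: a hypothetical 4-year, $2,000,000 program.
print(evaluation_plan_sketch(program_months=48, program_budget_usd=2_000_000))
# -> mid-term review around month 24; formative ~$30,000-$40,000;
#    summative ~$40,000-$80,000
```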

Common Mistakes

  • Timed too early, before the program has stabilized enough to produce interpretable data. You end up evaluating startup friction.
  • Findings not fed back into the program cycle. The report is written, filed, and nothing changes. Formative evaluation without a use plan is summative evaluation in disguise.

Related Topics

  • Summative Evaluation: End-of-program evaluation focused on judgment and accountability
  • Utilization-Focused Evaluation: Evaluation designed around how findings will actually be used
  • Evaluation: Parent concept covering formative, summative, and other evaluation types
  • Theory-Based Evaluation: Testing the causal logic behind a program
  • Adaptive Management: Management approach that depends on formative learning
