Formative evaluation happens during implementation and is oriented toward improvement. It asks "what is working, what is not, and what should change while there is still time to act?"
What Formative Evaluation Asks
Formative evaluation focuses on live, actionable questions about a program in motion:
- Is the program reaching its intended participants, in the intended numbers, with the intended intensity?
- Are activities being delivered as designed, or has implementation drifted from the original model?
- Are early outcome signals present, even if full outcomes are not yet measurable?
- What implementation adjustments would improve results over the remaining program period?
This is a fundamentally different question set from summative evaluation, which asks "did it work overall, and was it worth the money?" Formative evaluation exists to fix the car while it is still driving. Summative evaluation exists to tell you whether the trip was worthwhile.
When to Use Formative Evaluation
Formative evaluation is most valuable in five situations:
- Early-phase programs where implementation patterns are still stabilizing and course correction is cheap.
- Complex theories of change where the causal pathway has multiple assumptions that need field testing.
- Innovations and pilots where the program model itself is being tested, not just executed.
- Adaptive management contexts where the program is explicitly designed to learn and adjust.
- Multi-year programs with mid-term review cycles built into the donor agreement.
If a program is short, simple, and low-risk, a formative evaluation may not justify its cost. Everywhere else, skipping it is usually a false economy.
Design Features
Formative evaluations are designed around use, not rigor. Four features matter:
- Timing is mid-implementation, typically after enough activity has occurred to generate data but early enough for findings to influence the remaining program period.
- Methodology is rapid and practical. You are not running a randomized controlled trial. Mixed-methods with light qualitative fieldwork and routine monitoring data is the common pattern.
- Priority is responsiveness over generalizability. The audience is the program team, not the academic literature.
- Findings feed the same program cycle that produced them. If the report lands after the program has ended, it was a summative evaluation with a formative label.
Formative vs Summative
| Feature | Formative | Summative |
|---|---|---|
| Timing | During implementation | At or after program end |
| Orientation | Improvement | Judgment |
| Primary audience | Program team | Donor, leadership, public |
| Typical method | Mixed-methods, rapid | Rigorous, often comparative |
| Use of findings | Course correct the current program | Account for results, inform future programs |
Most programs of meaningful size need both across the project lifespan. They answer different questions and serve different decisions.
Proposal Context
Multi-year programs should show both formative (a mid-term review or process evaluation) and summative (a final evaluation) components in the evaluation plan. A proposal with only an end-of-project evaluation misses the adaptive-management and learning opportunities donors increasingly prize, and it reads as the work of a team that does not plan to learn during implementation.
Timing matters. Too early and the program has not stabilized enough to generate useful signal; the data is noise. Too late and findings cannot realistically inform the current program cycle. For a 3-5 year program, a mid-term review at month 18-24 is the common placement and usually the right one.
Budget formative evaluation at roughly 1.5-2% of total program budget, and summative at 2-4%. Understating evaluation cost is a frequent proposal tell.
Common Mistakes
- Timed too early, before the program has stabilized enough to produce interpretable data. You end up evaluating startup friction.
- Findings not fed back into the program cycle. The report is written, filed, and nothing changes. Formative evaluation without a use plan is summative evaluation in disguise.
Related Topics
- Summative Evaluation: End-of-program evaluation focused on judgment and accountability
- Utilization-Focused Evaluation: Evaluation designed around how findings will actually be used
- Evaluation: Parent concept covering formative, summative, and other evaluation types
- Theory-Based Evaluation: Testing the causal logic behind a program
- Adaptive Management: Management approach that depends on formative learning