Outcome indicators measure the change a program is designed to produce. They answer "is the program working?" rather than "is the program running?"
What Outcome Indicators Measure
Outcome indicators track applied change in the people or systems a program is trying to move: behavior shifts, skill application, adoption of a new practice, status changes, or measurable gains in capability or capacity. The point is not what was delivered, but what participants now do, know, or have become because of that delivery.
A few concrete examples:
- Percentage of trained health workers correctly performing triage six months after training
- Proportion of smallholder farmers using improved seed varieties two seasons after distribution
- Share of adolescent girls who remained in school one year after receiving cash transfers
- Percentage of local officials applying the new budget template in published quarterly reports
Each of these measures a shift in behavior, practice, or condition. None of them measure what the program did.
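Indicators like these reduce to a simple ratio: the share of the trained (or reached) group observed doing the new thing at follow-up. A minimal sketch of that computation, with a hypothetical record layout (the field names are illustrative, not from any standard M&E schema):

```python
from dataclasses import dataclass

@dataclass
class FollowUpRecord:
    participant_id: str
    trained: bool          # received the intervention
    applying_skill: bool   # observed applying the skill at follow-up, not self-reported intention

def outcome_indicator(records):
    """Percentage of trained participants observed applying the skill."""
    trained = [r for r in records if r.trained]
    if not trained:
        return None  # no denominator, indicator undefined
    applying = sum(1 for r in trained if r.applying_skill)
    return 100.0 * applying / len(trained)

records = [
    FollowUpRecord("p1", True, True),
    FollowUpRecord("p2", True, False),
    FollowUpRecord("p3", True, True),
    FollowUpRecord("p4", False, False),  # untrained, excluded from the denominator
]
value = outcome_indicator(records)  # 2 of 3 trained participants applying
```

Note the denominator: only trained participants count, which is why the indicator measures change among those the program reached rather than program delivery.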
Design Rules
Four rules keep outcome indicators honest.
Measure applied behavior or status, not stated intention. "Participants report they plan to use the skill" is not an outcome. "Participants demonstrate the skill in practice" is. Intention is easy to measure and easy to inflate.
Allow time for change to emerge. Most outcomes need 3-6 months minimum after the intervention ends before they can be measured meaningfully. Measuring the week after training captures recall, not change.
Tie each indicator to a specific claim in the theory of change. If the indicator does not map to a stated causal step, it is measuring the wrong thing.
Specify the measurement method up front: survey, observation, assessment, or records review. An outcome indicator without a declared measurement method will not survive contact with data collection.
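The four rules above can be applied as a checklist when drafting indicators. As a sketch, here is one way to encode them as automated checks on an indicator specification (the field names, allowed-method list, and intention keywords are illustrative assumptions, not a standard):

```python
from dataclasses import dataclass

ALLOWED_METHODS = {"survey", "observation", "assessment", "records review"}

@dataclass
class OutcomeIndicator:
    statement: str          # the applied change being measured
    causal_step: str        # the theory-of-change claim it maps to
    method: str             # declared measurement method
    months_after_end: int   # lag before first measurement

def design_problems(ind):
    """Return a list of violations of the four design rules (empty if none)."""
    issues = []
    if "plan to" in ind.statement or "intend" in ind.statement:
        issues.append("measures stated intention, not applied behavior")
    if ind.months_after_end < 3:
        issues.append("measured too soon for change to emerge")
    if not ind.causal_step:
        issues.append("not tied to a theory-of-change step")
    if ind.method not in ALLOWED_METHODS:
        issues.append("no declared measurement method")
    return issues

weak = OutcomeIndicator("Participants plan to use the skill", "", "self-report", 1)
strong = OutcomeIndicator(
    "Percentage of trained health workers correctly performing triage",
    "training improves triage practice",
    "observation",
    6,
)
```

Checking `weak` flags all four rules; `strong` passes cleanly. Keyword matching on the statement is crude, of course; the real test of intention versus behavior is the measurement method, which is why the method field is mandatory.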
Timing and Measurement
Outcome indicators are measured less often than outputs but need both a baseline and at least one follow-up. The typical pattern is baseline, midline, endline. Some programs also run a post-endline measurement 6-12 months after close to check whether the change held.
Lead time matters. Measuring too soon captures recall or enthusiasm rather than behavior change. Measuring too late makes attribution weak because competing causes accumulate over time. For most behavior change work, the useful window is 3-9 months after the intervention ends.
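The 3-9 month window described above can be checked mechanically when planning a measurement schedule. A minimal sketch, assuming the window bounds from this section (the month-arithmetic helper is a hand-rolled convenience, since the standard library has no add-months function):

```python
import calendar
from datetime import date

def add_months(d, n):
    """Return d shifted forward by n months, clamping the day to month length."""
    m = d.month - 1 + n
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def in_measurement_window(intervention_end, measurement_date, lo=3, hi=9):
    """True if the measurement falls 3-9 months after the intervention ends."""
    return add_months(intervention_end, lo) <= measurement_date <= add_months(intervention_end, hi)

end = date(2024, 1, 15)
in_measurement_window(end, date(2024, 5, 1))   # inside the window
in_measurement_window(end, date(2024, 1, 22))  # one week later: recall, not change
```

The same helper covers the post-endline check by widening the bounds, e.g. `lo=6, hi=12` for a measurement 6-12 months after close.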
Proposal Context
Outcome indicators carry most of the weight in donor accountability. Most logframe templates place outcome indicators at the center of the reporting framework, and reviewers scan the outcome layer to assess whether the program is designed to produce change or just deliver activities.
The most common proposal pitfall is listing outputs under the "outcome" label. "Number of women trained" is an output. "Percentage of trained women applying the new skill six months later" is an outcome. Another common pitfall is using self-reported intention ("participants plan to use the skill") instead of applied behavior ("participants demonstrate the skill"). Outcome indicators require real measurement, which means budgeting for surveys, assessments, or observation, not just administrative records.
Common Mistakes
Labeling outputs as outcomes. If it counts what the program delivered, it is an output indicator, no matter where it sits on the logframe.
Measuring stated intention instead of applied behavior. Self-reported plans are not change. If the indicator can be satisfied by someone agreeing with a survey statement, it is not measuring an outcome.
Related Topics
- Indicator: The parent concept
- Output Indicator: The level below outcomes in the results chain
- Process Indicator: Measures implementation quality, not change
- Indicator Selection: Choosing the right indicator level
- Theory of Change: Where outcome indicators anchor their causal claim