M&E Studio
© 2026 Logic Lab LLC. All rights reserved.

Process Indicator

An indicator measuring the quality and fidelity of program implementation: how activities are being delivered, at what dose, and with what adherence to protocol. It is distinguished from output indicators (which count deliverables) by its focus on delivery quality rather than quantity.

Process indicators measure implementation quality, not just activity count. They answer "how well is the program running?" which is a different question from "how much is the program delivering?"

What Process Indicators Measure

Process indicators track the delivery quality dimension of a program. The usual categories:

  • Fidelity. How closely implementation matches the documented protocol or curriculum. Example: percentage of training sessions delivered with all required modules covered.
  • Coverage. What share of the eligible population is actually being reached. Example: proportion of target households receiving at least one home visit per quarter.
  • Dose. The amount or duration of exposure participants receive. Example: average number of counseling sessions completed per enrolled client.
  • Adherence. Completion of the planned protocol end-to-end. Example: percentage of participants completing the full 12-week curriculum.
  • Quality markers. Participant-side and supervision-side signals of delivery quality. Example: supervisor observation scores, participant satisfaction ratings.

These are operational measures. They look at the pipework of program delivery, not the results downstream.
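
The five categories above can be sketched as simple calculations over delivery records. This is a minimal illustration, not a standard implementation: all field names, thresholds, and data below are hypothetical, and real programs would compute these from session logs, rosters, or observation forms.

```python
# Hypothetical session-level delivery records.
sessions = [
    {"modules_covered": 6, "modules_required": 6, "attendees": 24},
    {"modules_covered": 5, "modules_required": 6, "attendees": 19},
    {"modules_covered": 6, "modules_required": 6, "attendees": 22},
]

# Hypothetical participant records for a planned 12-session curriculum.
participants = [
    {"id": "p1", "sessions_completed": 12},
    {"id": "p2", "sessions_completed": 9},
    {"id": "p3", "sessions_completed": 12},
    {"id": "p4", "sessions_completed": 4},
]
PLANNED_SESSIONS = 12
ELIGIBLE_POPULATION = 6  # assumed eligible clients in the catchment

# Fidelity: share of sessions delivered with all required modules covered.
fidelity = sum(
    s["modules_covered"] == s["modules_required"] for s in sessions
) / len(sessions)

# Coverage: share of the eligible population actually reached.
coverage = len(participants) / ELIGIBLE_POPULATION

# Dose: average sessions completed per enrolled participant.
dose = sum(p["sessions_completed"] for p in participants) / len(participants)

# Adherence: share of participants completing the full curriculum.
adherence = sum(
    p["sessions_completed"] >= PLANNED_SESSIONS for p in participants
) / len(participants)

print(f"Fidelity {fidelity:.0%}, coverage {coverage:.0%}, "
      f"dose {dose:.1f} sessions, adherence {adherence:.0%}")
```

Quality markers (observation scores, satisfaction ratings) would come from separate instruments and are omitted here.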

Process vs Output

The clearest way to see the distinction is to pair them.

An output indicator counts what was delivered: "5 trainings held, 120 participants trained."

A process indicator describes how well delivery went: "5 trainings held with 94% curriculum fidelity, 87% of participants attended all sessions, supervisor fidelity rating 4.2 out of 5."

Process indicators build on outputs but add the quality dimension a raw count misses. A program can hit every output target while delivering those outputs badly, and only a process layer will surface it.

Design Rules

Three rules make process indicators usable:

  1. Tie to a documented protocol or delivery standard. A process indicator without a protocol to measure against is unmeasurable. If you cannot point to a curriculum, SOP, or fidelity checklist, you are measuring something else.
  2. Collect during the activity, not after. Fidelity, attendance, and dose have to be captured in real time, through session logs, observation forms, or attendance rosters. Reconstructing them later produces garbage data.
  3. Pair each process indicator with the output it describes. Process indicators are not standalone. They qualify an output. Design them as pairs.
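
Rules 1 and 3 lend themselves to a mechanical check over an indicator list: every process indicator should name a documented protocol and the output it qualifies. The sketch below assumes a simple dict-based structure; the field names and indicator list are illustrative, not a prescribed format.

```python
# Hypothetical MEL-plan indicator list.
indicators = [
    {"name": "Trainings held", "type": "output"},
    {"name": "% sessions delivered with all required modules",
     "type": "process",
     "protocol": "Facilitator curriculum v2",
     "paired_output": "Trainings held"},
    {"name": "% sessions rated satisfactory",
     "type": "process",
     "protocol": None,        # violates rule 1
     "paired_output": None},  # violates rule 3
]

output_names = {i["name"] for i in indicators if i["type"] == "output"}

problems = []
for ind in (i for i in indicators if i["type"] == "process"):
    if not ind.get("protocol"):
        problems.append(f"{ind['name']}: no documented protocol (rule 1)")
    if ind.get("paired_output") not in output_names:
        problems.append(f"{ind['name']}: not paired with an output (rule 3)")

for p in problems:
    print(p)
```

Rule 2 (collect during the activity) is a data-collection practice rather than something a structural check can verify.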

Proposal Context

Process indicators strengthen a MEL plan by letting the program distinguish implementation failure from design failure when outcomes disappoint. If outcome indicators move less than expected, process data tells you whether the theory was wrong or whether delivery never happened the way it was supposed to.

Donor reviewers increasingly look for process indicators in high-fidelity programs: evidence-based interventions in health, pedagogically specific education programs, protocol-heavy protection casework. A common proposal pitfall is listing only output counts without the fidelity qualifier, producing a MEL plan that cannot diagnose whether delivery is driving outcome results.

Balance matters: in a strong MEL plan, roughly 15-30% of indicators are process indicators feeding into outcome measurement. Overloading process at the expense of outcome is the opposite error.
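
The balance check is simple arithmetic over the indicator mix. A rough sketch, using the 15-30% band mentioned above as an assumed rule of thumb and an illustrative indicator list:

```python
# Hypothetical indicator mix from a draft MEL plan.
indicator_types = [
    "output", "output", "process", "outcome", "outcome",
    "process", "output", "outcome", "outcome", "output",
]

process_share = indicator_types.count("process") / len(indicator_types)

if process_share < 0.15:
    verdict = "too few process indicators to diagnose delivery"
elif process_share > 0.30:
    verdict = "process-heavy: outcome measurement may be crowded out"
else:
    verdict = "within the suggested 15-30% band"

print(f"Process share {process_share:.0%}: {verdict}")
```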

Common Mistakes

Writing a process indicator without a documented protocol. "Percentage of sessions delivered with fidelity" means nothing if no one has written down what fidelity looks like. Protocol first, indicator second.

Treating process as a substitute for outcome. High fidelity and full attendance are not evidence the program worked. They are evidence the program ran as designed. Pair process indicators with outcome indicators; do not replace outcome measurement with process measurement.

Related Topics

  • Indicator: The parent concept and core definition
  • Output Indicator: What was delivered (process indicators qualify these)
  • Outcome Indicator: What changed as a result of delivery
  • Indicator Selection: Choosing the right indicator mix across types
  • Adaptive Management: Using process data to adjust delivery in real time

Quick Reference

  • Indicator: A specific, observable, measurable variable that tracks progress toward an outcome or output.
  • Output Indicator: An indicator that counts tangible deliverables produced by the program (trainings held, kits distributed, people reached). Sits at the output level of the results chain, just above activities and just below outcomes. The most commonly reported indicator type in development M&E.
  • Outcome Indicator: An indicator measuring change in participants or beneficiaries: behavior, practice, capability, capacity, or condition that has shifted as a result of program activity. Sits above output indicators and below impact indicators in the results chain.
  • Indicator Selection & Development: The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track program progress effectively.
  • Adaptive Management: A management approach that uses continuous learning from monitoring and evaluation data to adjust program strategies and activities in response to changing evidence or context.

Decision Guides

Process vs Outcome Indicators: What Each Measures and When to Use Them
Process indicators tell you whether the program is running as designed. Outcome indicators tell you whether it is producing the intended change. You need both. Here is how to pick the right mix, and how to avoid reporting only one while pretending to measure the other.
Output vs Outcome vs Impact: The Key Difference
The most common confusion in M&E. Learn the difference between outputs, outcomes, and impact with clear examples from health, education, and food security programs.
SMART Indicators: The Deep Dive
Most indicators fail SMART review because Specific and Measurable are vague. Here is how to apply the framework properly, with sector examples and the revisions that fix common mistakes.