
Indicator Selection & Development

The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.

Also known as: Indicator Development, Indicator Design, Performance Indicator Selection

When to Use

Indicator selection is the right approach when designing a new programme or revising an existing one. Use it during:

  • Proposal development — donors require performance indicators with clear definitions, measurement methods, and targets. USAID expects Performance Indicator Reference Sheets (PIRS) for all indicators.
  • MEL plan development — indicator selection is a core component of creating a Monitoring, Evaluation, and Learning plan.
  • Mid-term reviews — monitoring data may reveal that certain indicators are not capturing what matters, are too costly, or are not feasible. This is when you refine or replace indicators.
  • Donor compliance updates — when a donor introduces new standard indicators or modifies reporting requirements.
  • Stakeholder consultations — involving beneficiaries and field staff can surface locally relevant measures that external designers might miss.

Indicator selection is less useful when you are simply collecting data without a clear purpose (use data collection methods instead) or when you need to evaluate whether observed changes are caused by your programme (use contribution analysis instead).

| Scenario | Use Indicator Selection? | Better Alternative |
|-----|-----|-----|
| New programme design | Yes | — |
| Responding to donor indicator changes | Yes | — |
| Data collection without clear purpose | No | Data Collection Methods |
| Assessing programme attribution | No | Contribution Analysis |
| Reviewing indicator quality mid-programme | Yes | Data Quality Assurance |

How It Works

Indicator selection follows a structured process that ensures your measures will actually inform decision-making.

  1. Start with your results chain. Map the causal pathway from activities through outputs and outcomes to impact. Each level needs at least one indicator to track progress. This ensures your indicators are aligned with your programme logic. (MEAL Rule: EX081_S006)

  2. Apply SMART criteria to each candidate indicator. Every indicator must be Specific (quantity, quality, location, target population), Measurable (accurate assessment), Achievable (attainable given budget, time, resources), Relevant (measures the change you want to track), and Time-bound (has a deadline). Use the SMART indicator checklist to evaluate each candidate. (MEAL Rule: EX081_P021)

  3. Define each term unambiguously. For every indicator, write a detailed definition that ensures two different people would measure it the same way. Terms like "food-secure" or "improved access" are meaningless without operational definitions. Document what each term means, how it will be measured, and what data sources will be used. (MEAL Rule: EX085_R004)

  4. Select the minimum necessary number. Choose only the indicators you need to adequately report on progress toward each objective or result. This number is often one, and usually no more than two or three per objective or result. Too many indicators create data collection burden without adding decision value. (MEAL Rule: EX31_S005)

  5. Involve stakeholders in the selection process. When primary stakeholders select indicators, run a short orientation session covering what an indicator is, how it is used, and the advantages and disadvantages of example indicators before selection begins. This is especially important for fuzzy objectives, where different stakeholders may interpret the objective differently. (MEAL Rule: EX57_P010)

  6. Ensure feasibility and cost-effectiveness. For each indicator, assess whether the data required can actually be collected given your resources. If data for multiple indicators will be collected from a sample frame, calculate the sample size needed for each indicator and choose the largest size calculated, within reason. Don't commit to indicators you cannot realistically measure. (MEAL Rule: EX31_P008)

  7. Document and validate. Create Performance Indicator Reference Sheets (PIRS) or equivalent documentation for all indicators. This includes the indicator definition, measurement method, data source, frequency, responsible party, and baseline/target values. Review with stakeholders to ensure shared understanding before finalizing. (MEAL Rule: EX121_S015)
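The feasibility check in step 6 can be sketched in code. This is a minimal illustration, not a prescribed method: it assumes the common Cochran formula for proportion indicators, and the indicator names and expected proportions are invented for the example.

```python
import math

def sample_size_proportion(p=0.5, margin=0.05, z=1.96):
    """Cochran sample-size formula for a proportion indicator.
    p: expected proportion (0.5 is most conservative),
    margin: desired margin of error, z: z-score (1.96 ~ 95% confidence)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Hypothetical indicators drawn from one sample frame, each with its
# own expected proportion and precision requirement.
indicators = {
    "skilled_birth_attendance": {"p": 0.40, "margin": 0.05},
    "antenatal_visits_4plus":   {"p": 0.55, "margin": 0.05},
    "facility_within_5km":      {"p": 0.70, "margin": 0.07},
}

sizes = {name: sample_size_proportion(**params)
         for name, params in indicators.items()}

# Per step 6: choose the largest size calculated, within reason.
required_n = max(sizes.values())
```

If the largest required sample is far beyond what the budget allows, that is the signal to redefine or drop the offending indicator before committing to it.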

Key Components

A well-developed indicator selection process produces these essential elements:

  • Indicator statement — a clear, concise phrase that focuses on a single issue and provides relevant information about a situation. Good indicators are specific enough to provide strategic insight for effective planning and sound decision-making.
  • Operational definitions — detailed explanations of each term in the indicator that eliminate ambiguity. Two different data collectors should arrive at the same measurement when using your definitions.
  • Measurement method — the specific approach for collecting the data (survey, observation, record review, self-report, etc.) with enough detail that it can be replicated.
  • Data source — where the data will come from (household surveys, facility records, beneficiary registries, etc.) and how the sample will be selected if applicable.
  • Disaggregation requirements — which demographic variables (gender, age, location, disability status, etc.) the indicator will be broken down by to ensure equity analysis is possible.
  • Baseline and targets — the starting point and the expected level of achievement by specific timeframes, grounded in context analysis and realistic given programme capacity.
  • Frequency and responsibility — when the indicator will be measured and who is responsible for collecting and reporting the data.
  • Donor alignment — mapping to any required standard indicators from your donor, with clear documentation of how custom indicators complement standard ones.

Best Practices

Use SMART criteria as your quality gate. Every indicator must pass the SMART checklist: specific (quantity, quality, location, target population), measurable (accurate assessment), achievable (attainable given budget, time and resources), relevant (measures the change you want to track), and time-bound (has a deadline). This is the minimum standard for indicator quality. (MEAL Rule: EX081_S006)
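A quality gate like this is easy to operationalise as a checklist. The sketch below assumes a simple yes/no review format (the question wording is illustrative, not from any donor standard): an indicator passes only if every SMART criterion is answered "yes".

```python
# Each SMART criterion becomes a yes/no review question.
SMART_QUESTIONS = {
    "specific":   "States quantity, quality, location and target population?",
    "measurable": "Can be assessed accurately with available methods?",
    "achievable": "Attainable given budget, time and resources?",
    "relevant":   "Measures the change the programme wants to track?",
    "time_bound": "Has a deadline?",
}

def passes_smart(answers: dict[str, bool]) -> bool:
    """Quality gate: reject an indicator that fails any SMART criterion.
    Unanswered criteria count as failures."""
    return all(answers.get(criterion, False) for criterion in SMART_QUESTIONS)
```

Running every candidate indicator through the same gate keeps reviews consistent across proposal teams and makes the reason for rejecting a candidate explicit.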

Keep it lean. Choose the minimum number of performance indicators necessary to adequately report on progress toward an objective or result. This number can often be one and is usually no more than two or three per objective or result. More indicators do not equal better monitoring — they equal higher burden and diluted focus. (MEAL Rule: EX31_S005)

Define terms unambiguously. Define each term in the indicator so that there can be no misunderstanding about its meaning. The definition should be detailed enough that different people collecting the data at different times would all collect the same data. (MEAL Rule: EX085_R005)

Involve stakeholders early. Use focus groups with key staff, target groups, and stakeholders to develop indicators and SMART objectives that are relevant for particular circumstances, especially for fuzzy objectives. Local knowledge can surface indicators that external designers would never think to include. (MEAL Rule: EX57_P010)

Test feasibility before committing. Before selecting an indicator that uses an index or checklist containing multiple measures, identify all the aspects you will need to measure and determine whether measurement is feasible. Calculate sample size requirements for each indicator and choose the largest size calculated, within reason. (MEAL Rule: EX31_P008)

Document everything. Create Performance Indicator Reference Sheets for all indicators so staff will define and measure consistently. For framework and many standard indicators, PIRS are already created by donors — use them. For custom indicators, create your own documentation. (MEAL Rule: EX63_S003)

Common Mistakes

Using vague, immeasurable terms. Phrases like "food-secure," "improved access," or "increased capacity" are meaningless without operational definitions. Different people will interpret these differently, making data collection unreliable and results uninterpretable. Poorly defined indicators make it impossible to take appropriate action. (MEAL Rule: EX089_W014)

Failing to define terms clearly. Using performance indicators without clearly defining all terms within them, and without establishing data sources and data collection protocols, leads to inconsistent measurement. If you haven't defined what "food-secure" means, one enumerator might count households with three meals a day while another requires nutritional diversity. (MEAL Rule: EX31_R019)

Committing to indicators you cannot measure. Selecting indicators where the required data cannot be collected is a fundamental failure. If data required by an indicator cannot be collected, the indicator is useless and must be deleted or redefined. Don't let donor pressure or proposal templates drive you to commit to impossible measures. (MEAL Rule: EX089_R053)

Selecting too many indicators. Creating an indicator framework with 20+ measures per objective creates unsustainable data collection burden without adding decision value. The most common mistake is thinking more indicators equals better monitoring. In reality, it equals lower quality data, higher costs, and analysis paralysis.

Treating indicator selection as a one-time exercise. Indicator selection is not a proposal-writing exercise that ends when the grant is signed. As implementation generates data, you will discover that some indicators are not useful, are too costly, or are not feasible. Plan for regular indicator review points (at minimum, annually) to refine your framework based on what you learn.

Examples

Health — Sub-Saharan Africa

A 5-year maternal health programme initially selected 12 indicators for its MEL framework, including "percentage of women with improved access to maternal health services." After six months of data collection, the team discovered that "improved access" was being measured differently by different enumerators — some counted distance to facilities, others counted perceived quality, and others counted ability to afford services. The team revised the indicator to "percentage of women traveling less than 5km to a facility offering skilled birth attendance" with a clear operational definition. This single change reduced data collection errors by 40% and made the indicator genuinely useful for decision-making.

Education — South Asia

A primary education programme involved teachers and parents in indicator selection workshops. Participants identified that "student learning" was too vague, so they co-developed three specific indicators: percentage of students achieving grade-level reading fluency, percentage completing all homework assignments, and percentage attending school regularly (90%+ attendance). The stakeholder involvement meant field staff understood and owned the indicators, leading to higher data quality and more consistent collection. The programme also identified a locally relevant indicator — "percentage of students who can read a simple story aloud" — that donor standard indicators did not capture.

Governance — Latin America

A civic engagement programme initially selected 15 indicators across five objectives. Mid-term review revealed that data collection was consuming 60% of the MEL budget with little decision value. The team applied the "minimum necessary" principle and reduced to 7 indicators, keeping only those that directly informed programme adaptation decisions. They dropped indicators that were primarily for donor reporting but not useful for programme management. This reduced data collection costs by 50% while improving the usefulness of monitoring data for programme staff.

Compared To

Indicator selection is one component of a broader indicator design practice. The key distinctions:

| Feature | Indicator Selection | SMART Indicators | Target Setting |
|-----|-----|-----|-----|
| Primary focus | Choosing which indicators to use | Ensuring indicators meet quality criteria | Establishing expected achievement levels |
| When used | Design and mid-term review | Throughout indicator development | After indicators are selected |
| Key output | Indicator framework with definitions | SMART-compliant indicator statements | Baseline and target values |
| Main challenge | Balancing comprehensiveness with feasibility | Achieving specificity without over-complication | Setting ambitious yet realistic targets |

Relevant Indicators

34 indicators across 5 major donor frameworks (USAID, DFID, UNDP, World Bank, FCDO) relate to indicator selection and development:

  • Indicator quality — "Proportion of programme indicators meeting SMART criteria" (USAID)
  • Definition clarity — "Number of indicators with clearly defined measurement methods and operational definitions" (DFID)
  • Stakeholder involvement — "Percentage of indicators developed with stakeholder consultation" (UNDP)
  • Feasibility — "Proportion of indicators with documented feasible data collection methods" (World Bank)

Related Tools

  • Indicator Planner — Guided worksheet for developing SMART indicators with definition templates
  • PIRS Template — USAID Performance Indicator Reference Sheet template for donor compliance

Related Topics

  • SMART Indicators — The quality criteria that all indicators must meet
  • Logframe — The operational framework where selected indicators are placed
  • Target Setting — Establishing baseline and target values for selected indicators
  • Disaggregation — Requirements for breaking down indicators by demographic variables
  • Baseline Design — Establishing starting points for selected indicators

Further Reading