M&E Studio

Decision-Grade M&E, Responsibly Built


© 2026 Logic Lab LLC. All rights reserved.

Core Concept · Indicators · 10 min read

Indicator Selection & Development

The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.

When to Use

Indicator selection is the right approach when designing a new programme or revising an existing one. Use it during:

  • Proposal development: donors require performance indicators with clear definitions, measurement methods, and targets. USAID expects Performance Indicator Reference Sheets (PIRS) for all indicators.
  • MEL plan development: indicator selection is a core component of creating a Monitoring, Evaluation, and Learning plan.
  • Mid-term reviews: monitoring data may reveal that certain indicators are not capturing what matters, are too costly, or are not feasible. This is when you refine or replace indicators.
  • Donor compliance updates: when a donor introduces new standard indicators or modifies reporting requirements.
  • Stakeholder consultations: involving beneficiaries and field staff can surface locally relevant measures that external designers might miss.

Indicator selection is less useful when you are simply collecting data without a clear purpose (use data collection methods instead) or when you need to evaluate whether observed changes are caused by your programme (use contribution analysis instead).

Scenario | Use Indicator Selection? | Better Alternative
New programme design | Yes | —
Responding to donor indicator changes | Yes | —
Data collection without clear purpose | No | Data Collection Methods
Assessing programme attribution | No | Contribution Analysis
Reviewing indicator quality mid-programme | Yes | Data Quality Assurance

How It Works

Indicator selection follows a structured process that ensures your measures will actually inform decision-making.

  1. Start with your results chain. Map the causal pathway from activities through outputs and outcomes to impact. Each level needs at least one indicator to track progress. This ensures your indicators are aligned with your programme logic.

  2. Apply SMART criteria to each candidate indicator. Every indicator must be Specific (quantity, quality, location, target population), Measurable (accurate assessment), Achievable (attainable given budget, time, resources), Relevant (measures the change you want to track), and Time-bound (has a deadline). Use the SMART indicator checklist to evaluate each candidate.

  3. Define each term unambiguously. For every indicator, write a detailed definition that ensures two different people would measure it the same way. Terms like "food-secure" or "improved access" are meaningless without operational definitions. Document what each term means, how it will be measured, and what data sources will be used.

  4. Select the minimum necessary number. Choose only the indicators you need to adequately report on progress toward each objective or result. This number is often one, and usually no more than two or three per objective or result. Too many indicators create data collection burden without adding decision value.

  5. Involve stakeholders in the selection process. When primary stakeholders select indicators, run a session beforehand on what an indicator is, how it is used, and the advantages and disadvantages of various examples. This is especially important for fuzzy objectives where different stakeholders may have different interpretations.

  6. Ensure feasibility and cost-effectiveness. For each indicator, assess whether the data required can actually be collected given your resources. If data for multiple indicators will be collected from a sample frame, calculate the sample size needed for each indicator and choose the largest size calculated, within reason. Don't commit to indicators you cannot realistically measure.

  7. Document and validate. Create Performance Indicator Reference Sheets (PIRS) or equivalent documentation for all indicators. This includes the indicator definition, measurement method, data source, frequency, responsible party, and baseline/target values. Review with stakeholders to ensure shared understanding before finalizing.
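
The sample-size comparison in step 6 can be sketched with Cochran's formula for proportion indicators. The indicator names, margins of error, and population size below are hypothetical, chosen only to illustrate the "largest size calculated" rule:

```python
import math

def cochran_sample_size(p=0.5, z=1.96, e=0.05, population=None):
    """Cochran's formula for a proportion indicator, n0 = z^2 * p * (1 - p) / e^2,
    with the finite-population correction applied when a population size is given."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Hypothetical survey-based indicators sharing one sample frame of 5,000
# households, each with its own required margin of error.
margins = {
    "pct_women_within_5km_of_facility": 0.05,
    "pct_households_food_secure": 0.07,
    "pct_children_fully_immunised": 0.04,
}
required = {name: cochran_sample_size(e=e, population=5000)
            for name, e in margins.items()}

# Step 6: plan fieldwork around the largest calculated size.
planned_n = max(required.values())
```

Under these assumptions the strictest indicator (the 4% margin) drives the fieldwork plan, and the other indicators are measured from the same sample at no extra cost — which is the point of choosing the largest calculated size, within reason.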

Key Components

A well-developed indicator selection process produces these essential elements:

  • Indicator statement: a clear, concise phrase that focuses on a single issue and provides relevant information about a situation. Good indicators are specific enough to provide strategic insight for effective planning and sound decision-making.
  • Operational definitions: detailed explanations of each term in the indicator that eliminate ambiguity. Two different data collectors should arrive at the same measurement when using your definitions.
  • Measurement method: the specific approach for collecting the data (survey, observation, record review, self-report, etc.) with enough detail that it can be replicated.
  • Data source: where the data will come from (household surveys, facility records, beneficiary registries, etc.) and how the sample will be selected if applicable.
  • Disaggregation requirements: which demographic variables (gender, age, location, disability status, etc.) the indicator will be broken down by to ensure equity analysis is possible.
  • Baseline and targets: the starting point and the expected level of achievement by specific timeframes, grounded in context analysis and realistic given programme capacity.
  • Frequency and responsibility: when the indicator will be measured and who is responsible for collecting and reporting the data.
  • Donor alignment: mapping to any required standard indicators from your donor, with clear documentation of how custom indicators complement standard ones.
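
The components above map naturally onto a structured record. A minimal PIRS-style sketch in Python — the field set mirrors the list above, but the class name, example values, and `is_complete` helper are illustrative, not the official USAID template:

```python
from dataclasses import dataclass

@dataclass
class IndicatorRecord:
    """Minimal PIRS-style metadata record; real donor templates contain more fields."""
    statement: str
    definition: str
    measurement_method: str
    data_source: str
    disaggregation: list
    baseline: float
    target: float
    frequency: str
    responsible: str

    def is_complete(self) -> bool:
        # Documentation-ready only when every field is filled in.
        return all(str(v).strip() for v in vars(self).values())

# Hypothetical example based on the maternal health indicator discussed later.
skilled_birth = IndicatorRecord(
    statement="Percentage of women traveling less than 5km to a facility "
              "offering skilled birth attendance",
    definition="Numerator: women of reproductive age whose nearest facility "
               "with skilled birth attendance is under 5km by road. "
               "Denominator: all women of reproductive age surveyed.",
    measurement_method="Annual household survey with GPS distance to facility",
    data_source="Household survey sample frame",
    disaggregation=["age group", "district"],
    baseline=32.0,
    target=60.0,
    frequency="Annual",
    responsible="MEL officer",
)
```

Keeping the record machine-readable makes the later validation step cheap: an incomplete sheet is caught before data collection starts, not at the first donor report.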

Best Practices

Use SMART criteria as your quality gate. Every indicator must pass the SMART checklist: specific (quantity, quality, location, target population), measurable (accurate assessment), achievable (attainable given budget, time and resources), relevant (measures the change you want to track), and time-bound (has a deadline). This is the minimum standard for indicator quality.
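
As a sketch of how this gate can be made routine — the checklist wording and the `failed_criteria` helper below are illustrative, not a standard tool:

```python
# Illustrative SMART quality gate: record a yes/no answer per criterion
# and flag any indicator that fails.
SMART_CHECKLIST = {
    "specific": "Does it state quantity, quality, location, and target population?",
    "measurable": "Can it be assessed accurately with available methods?",
    "achievable": "Is it attainable given budget, time, and resources?",
    "relevant": "Does it measure the change you want to track?",
    "time_bound": "Does it have a deadline?",
}

def failed_criteria(answers):
    """Return the criteria an indicator fails; an empty list means it passes the gate."""
    return [c for c in SMART_CHECKLIST if not answers.get(c, False)]

# Example: a vague indicator like "improved access to services"
# typically fails on specificity and measurability.
answers = {"specific": False, "measurable": False,
           "achievable": True, "relevant": True, "time_bound": True}
```
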

Keep it lean. Choose the minimum number of performance indicators necessary to adequately report on progress toward an objective or result. This number can often be one and is usually no more than two or three per objective or result. More indicators do not equal better monitoring; they mean higher burden and diluted focus.

Define terms unambiguously. Define each term in the indicator so there can be no misunderstanding about its meaning. The definition should be detailed enough that if different people were given the task of collecting the data at different times, they would all collect the same data.

Involve stakeholders early. Use focus groups with key staff, target groups, and stakeholders to develop indicators and SMART objectives that are relevant for particular circumstances, especially for fuzzy objectives. Local knowledge can surface indicators that external designers would never think to include.

Test feasibility before committing. Before selecting an indicator that uses an index or checklist containing multiple measures, identify all the aspects you will need to measure and determine whether measurement is feasible. Calculate sample size requirements for each indicator and choose the largest size calculated, within reason.

Document everything. Create Performance Indicator Reference Sheets for all indicators so staff define and measure them consistently. For framework and many standard indicators, donors have already created PIRS; use them. For custom indicators, create your own documentation.

Common Mistakes

Using vague, immeasurable terms. Phrases like "food-secure," "improved access," or "increased capacity" are meaningless without operational definitions. Different people will interpret these differently, making data collection unreliable and results uninterpretable. Poorly defined indicators make it impossible to take appropriate action.

Failing to define terms clearly. Using performance indicators without clearly defining all terms within them, and without establishing data sources and data collection protocols, leads to inconsistent measurement. If you haven't defined what "food-secure" means, one enumerator might count households with three meals a day while another requires nutritional diversity.

Committing to indicators you cannot measure. Selecting indicators where the required data cannot be collected is a fundamental failure. If data required by an indicator cannot be collected, the indicator is useless and must be deleted or redefined. Don't let donor pressure or proposal templates drive you to commit to impossible measures.

Selecting too many indicators. Creating an indicator framework with 20+ measures per objective creates unsustainable data collection burden without adding decision value. The most common mistake is thinking more indicators equals better monitoring. In reality, it equals lower quality data, higher costs, and analysis paralysis.

Treating indicator selection as a one-time exercise. Indicator selection is not a proposal-writing exercise that ends when the grant is signed. As implementation generates data, you will discover that some indicators are not useful, are too costly, or are not feasible. Plan for regular indicator review points (at minimum, annually) to refine your framework based on what you learn.

Examples

Health, Sub-Saharan Africa

A 5-year maternal health programme initially selected 12 indicators for its MEL framework, including "percentage of women with improved access to maternal health services." After six months of data collection, the team discovered that "improved access" was being measured differently by different enumerators: some counted distance to facilities, others perceived quality of care, and others the ability to afford services. The team revised the indicator to "percentage of women traveling less than 5km to a facility offering skilled birth attendance" with a clear operational definition. This single change reduced data collection errors by 40% and made the indicator genuinely useful for decision-making.

Education, South Asia

A primary education programme involved teachers and parents in indicator selection workshops. Participants identified that "student learning" was too vague, so they co-developed three specific indicators: percentage of students achieving grade-level reading fluency, percentage completing all homework assignments, and percentage attending school regularly (90%+ attendance). The stakeholder involvement meant field staff understood and owned the indicators, leading to higher data quality and more consistent collection. The programme also identified a locally relevant indicator, "percentage of students who can read a simple story aloud", that donor standard indicators did not capture.

Governance, Latin America

A civic engagement programme initially selected 15 indicators across five objectives. Mid-term review revealed that data collection was consuming 60% of the MEL budget with little decision value. The team applied the "minimum necessary" principle and reduced to 7 indicators, keeping only those that directly informed programme adaptation decisions. They dropped indicators that were primarily for donor reporting but not useful for programme management. This reduced data collection costs by 50% while improving the usefulness of monitoring data for programme staff.

Compared To

Indicator selection is one component of a broader indicator design practice. The key distinctions:

Feature | Indicator Selection | SMART Indicators | Target Setting
Primary focus | Choosing which indicators to use | Ensuring indicators meet quality criteria | Establishing expected achievement levels
When used | Design and mid-term review | Throughout indicator development | After indicators are selected
Key output | Indicator framework with definitions | SMART-compliant indicator statements | Baseline and target values
Main challenge | Balancing comprehensiveness with feasibility | Achieving specificity without over-complication | Setting ambitious yet realistic targets

Relevant Indicators

34 indicators across 5 major donor frameworks (USAID, DFID, UNDP, World Bank, FCDO) relate to indicator selection and development:

  • Indicator quality: "Proportion of programme indicators meeting SMART criteria" (USAID)
  • Definition clarity: "Number of indicators with clearly defined measurement methods and operational definitions" (DFID)
  • Stakeholder involvement: "Percentage of indicators developed with stakeholder consultation" (UNDP)
  • Feasibility: "Proportion of indicators with documented feasible data collection methods" (World Bank)

Related Tools

  • Indicator Planner: guided worksheet for developing SMART indicators with definition templates
  • PIRS Template: USAID Performance Indicator Reference Sheet template for donor compliance

Related Topics

  • SMART Indicators: the quality criteria that all indicators must meet
  • Logframe: the operational framework where selected indicators are placed
  • Target Setting: establishing baseline and target values for selected indicators
  • Disaggregation: requirements for breaking down indicators by demographic variables
  • Baseline Design: establishing starting points for selected indicators

Further Reading

  • Performance Indicator Reference Sheets (PIRS), USAID. Official guidance on indicator documentation requirements.
  • Indicator Development Guide, Evaluation Hub. Practical guide to developing quality indicators.
  • SMART Indicators: A Guide for Practitioners, ME Center. Comprehensive resource on indicator quality criteria.
  • Donor Indicator Requirements Comparison, ME Center. Side-by-side comparison of USAID, DFID, EU, and World Bank indicator requirements.

At a Glance

Choose and refine performance indicators that accurately measure programme progress and inform decision-making.

Best For

  • Designing new programmes and proposals
  • Reviewing and updating existing indicator frameworks
  • Responding to donor indicator requirements
  • Mid-term indicator refinement based on monitoring data

Complexity

Medium

Timeframe

1-3 weeks for initial selection; ongoing refinement throughout programme life

Linked Indicators

34 indicators across 5 donor frameworks

USAID · DFID · UNDP · World Bank · FCDO

Examples

  • Proportion of programme indicators meeting SMART criteria
  • Number of indicators with clearly defined measurement methods
  • Percentage of indicators disaggregated by key demographic variables

Related Topics

  • SMART Indicators (Core Concept): a quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.
  • Logframe / Logical Framework (Pillar): a structured matrix that summarizes a project's design, linking activities to expected results through a clear hierarchy of objectives with indicators, verification sources, and assumptions.
  • Results Framework (Pillar): a structured collection of indicators organized by results level that tracks programme performance across a portfolio, focusing on what changed rather than what was delivered.
  • Target Setting (Core Concept): the process of establishing specific, time-bound performance benchmarks against which programme progress and achievement will be measured.
  • Disaggregation (Core Concept): the breakdown of aggregate data by sub-group characteristics, such as sex, age, location, or vulnerability status, to reveal inequities and differences in programme reach and outcomes.
  • Baseline Design (Core Concept): a structured approach to collecting initial condition data that directly informs project decisions, minimizes burden, and enables valid comparison with endline measurements.