When to Use
SMART indicators are the foundation of reliable monitoring and evaluation. Use this framework when:
- Designing new indicators — during programme development, before committing to data collection, to ensure each indicator will produce usable information
- Reviewing existing indicators — as part of indicator selection or mid-term reviews, to identify which indicators need revision or replacement
- Training M&E staff — as a practical tool for building indicator design capacity across teams
- Troubleshooting data quality issues — when indicators produce inconsistent, ambiguous, or unusable results
- Preparing donor proposals — to demonstrate indicator quality and strengthen proposal submissions
SMART indicators are less useful when you need to capture complex, emergent outcomes that don't fit neat measurement frameworks (use outcome harvesting or most significant change instead) or when you're conducting rapid assessments where speed outweighs precision.
| Scenario | Use SMART Indicators? | Better Alternative |
|-----|---|------|
| Designing new programme indicators | Yes | — |
| Reviewing indicator quality | Yes | — |
| Capturing emergent outcomes | No | Outcome Harvesting |
| Rapid needs assessment | Partially | Rapid Assessment |
| Tracking complex behaviour change | Alongside | Outcome Mapping |
How It Works
The SMART framework provides five criteria that each indicator must satisfy. The criteria work together: an indicator that meets only some of them can still produce unreliable data.
1. Specific. The indicator clearly defines what is being measured, including quantity, quality, location, and target population. A specific indicator leaves no room for interpretation about what constitutes a positive or negative result. "Number of farmers trained" is specific; "improved farmer capacity" is not.
2. Measurable. The indicator permits accurate, consistent assessment. Two different data collectors should arrive at the same measurement when observing the same situation. This requires clear definitions of all terms and explicit measurement methods. "Percentage of households with access to safe water" is measurable; "better water access" is not.
3. Achievable. The indicator is attainable given available resources, time, and organisational capacity. An achievable indicator reflects realistic expectations about what can be measured and what outcomes are possible within the programme's scope. "100% reduction in malaria" may not be achievable; "50% reduction in malaria cases" might be.
4. Relevant. The indicator aligns with the results it's meant to measure and provides information useful for decision-making. A relevant indicator answers a genuine information need rather than measuring something because it's easy to count. "Number of training sessions held" is not relevant to learning outcomes; "Percentage of trainees applying new skills three months later" is.
5. Time-bound. The indicator specifies when the target should be achieved, creating urgency and enabling progress tracking. Time-bound indicators include clear deadlines or measurement intervals. "Improved school attendance" is not time-bound; "90% attendance rate by end of academic year" is.
The SMART framework is not a one-time design exercise. Use it iteratively: draft indicators, test them against all five criteria, revise based on gaps, and re-test. This process typically takes 15-30 minutes per indicator during programme design.
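The iterative draft-test-revise loop above can be sketched as a simple checklist check. This is an illustrative sketch only, not a standard tool; the question wording and function names are assumptions.

```python
# Minimal sketch of a SMART review loop: one yes/no question per criterion.
# Question wording is illustrative, not an official checklist.

SMART_QUESTIONS = {
    "Specific": "Does the indicator define what, who, and where precisely?",
    "Measurable": "Would two data collectors record the same value?",
    "Achievable": "Can it be measured with available resources and time?",
    "Relevant": "Does it answer a documented information need?",
    "Time-bound": "Does it state a deadline or measurement interval?",
}

def review_indicator(answers):
    """Return the criteria that still fail; an empty list means all five pass."""
    return [c for c in SMART_QUESTIONS if not answers.get(c, False)]

# Example: a draft indicator that lacks a timeframe fails one criterion,
# so it goes back for revision and another pass through the checklist.
draft = {"Specific": True, "Measurable": True,
         "Achievable": True, "Relevant": True, "Time-bound": False}
print(review_indicator(draft))  # ['Time-bound']
```

In practice the "answers" would come from a team discussion or scoring matrix; the point is that revision continues until the failure list is empty.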
Key Components
A well-constructed SMART indicator includes these essential elements:
- Clear definition — Each term in the indicator must be defined such that there can be no misunderstanding. The definition should be detailed enough that various people at different times would collect identical data when given the task.
- Target population — The indicator specifies who or what is being measured. This includes inclusion/exclusion criteria and any segmentation (by gender, age, location, etc.).
- Measurement method — The indicator describes how the measurement will be made, including data sources, collection tools, and calculation methods.
- Baseline and targets — The indicator includes a baseline value (current state) and target values (desired state) at specific timepoints (midline, endline).
- Disaggregation plan — The indicator specifies how data will be broken down by relevant categories (gender, age, location, disability status, etc.) to ensure equitable tracking and analysis.
- Frequency and timing — The indicator specifies when measurements will occur and how often data will be collected.
- Data quality checks — The indicator includes mechanisms for verifying data accuracy, such as spot checks, triangulation, or validation against independent sources.
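One way to keep these elements together is a structured specification record. The sketch below is a minimal illustration, assuming field names of my own choosing; it is not a standard template, and organisations should adapt it to their own indicator plan format.

```python
from dataclasses import dataclass, field

# Illustrative record of the essential elements of an indicator specification.
# Field names are assumptions, not a recognised standard.

@dataclass
class IndicatorSpec:
    definition: str            # precise wording of what is measured
    target_population: str     # who/what, with inclusion/exclusion criteria
    measurement_method: str    # data source, collection tool, calculation
    baseline: float            # current value
    target: float              # desired value at a specific timepoint
    disaggregation: list = field(default_factory=list)  # e.g. gender, age
    frequency: str = ""        # when and how often data is collected
    quality_checks: list = field(default_factory=list)  # spot checks, etc.

    def missing_elements(self):
        """Name any essential element left blank, for review before sign-off."""
        gaps = []
        if not self.definition: gaps.append("definition")
        if not self.measurement_method: gaps.append("measurement_method")
        if not self.disaggregation: gaps.append("disaggregation")
        if not self.frequency: gaps.append("frequency")
        return gaps

spec = IndicatorSpec(
    definition="Proportion of households with access to improved water "
               "sources within 1 km",
    target_population="Households in the programme area",
    measurement_method="Household survey",
    baseline=35.0,
    target=75.0,
    frequency="every 6 months",
)
print(spec.missing_elements())  # ['disaggregation']
```

A gap list like this makes it obvious which elements were skipped during design, before data collection begins.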
Best Practices
Start with information needs, not measurement convenience. Before creating indicators, explore whether there are standard, validated indicators that can be reused. Ask "What information do I need and why?" Consider donor requirements, project tracking, learning needs, and decision-making needs. (MEAL Rule: EX081_R019)
Use the SMART indicator checklist. Systematically evaluate each indicator against all five SMART criteria. Many organisations use a checklist or scoring matrix to ensure consistent application. This prevents accepting indicators that sound good but fail on closer inspection. (MEAL Rule: EX081_P021)
Define terms with precision. Poorly thought-out indicators are worse than no indicators at all because they may be impossible to measure, produce inaccurate information, and waste resources. Define each term such that two people would measure it the same way. If the information cannot be obtained, the indicator is useless and should be deleted or reformulated. (MEAL Rule: EX085_R004)
Follow a structured indicator formulation process. A practical approach: (1) Identify the indicator, (2) Specify target group, (3) Quantify, (4) Set quality, (5) Specify time. This sequence ensures all essential elements are captured systematically. (MEAL Rule: EX56_R112)
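The five-step sequence can be illustrated as composing the indicator text piece by piece. This is a sketch under my own assumptions (the function and parameter names are invented for illustration), using the agricultural example from later in this document.

```python
# Illustrative composition of the five formulation steps:
# (1) identify the measure, (2) target group, (3) quantity,
# (4) quality threshold, (5) timeframe.

def formulate_indicator(measure, target_group, quantity, quality, timeframe):
    return f"{quantity} of {target_group} {measure} ({quality}) by {timeframe}"

print(formulate_indicator(
    measure="achieving target crop yields",
    target_group="smallholder farmers in Karamoja",
    quantity="Percentage",
    quality="at least 2 tonnes/hectare of drought-resistant maize",
    timeframe="end of 2026 rainy season",
))
```

The value of the sequence is less the string template than the discipline: each step forces an explicit decision that a vaguely worded indicator would leave open.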
Use purpose-first design. When identifying indicators, start by asking what information is needed and why. This prevents the common failure of measuring things that are easy to count but not useful. (MEAL Rule: EX081_P022)
Common Mistakes
Measuring what's easy, not what matters. The most common failure is selecting indicators because data is readily available rather than because they answer important questions. "Number of training sessions held" is easy to count but tells you nothing about whether training achieved its purpose. Always link indicators to specific information needs.
Leaving terms undefined. Many indicators use vague terms like "improved," "enhanced," "better," or "increased" without specifying what these mean in measurable terms. "Improved maternal health" could mean anything from reduced mortality to increased antenatal care attendance. Define every term precisely.
Setting unrealistic targets. An indicator may be SMART but still fail if the target is unachievable. This creates false expectations and sets up programmes for perceived failure. Ensure targets are ambitious but realistic given available resources and timeframe.
Confusing outputs with outcomes. "Number of farmers trained" is an output indicator; "Percentage of trained farmers adopting new techniques six months later" is an outcome indicator. Both may be valid, but they measure different things. Ensure your indicator matches the level of results you're tracking.
Ignoring data quality from the start. An indicator that cannot be measured reliably is useless. Before finalising indicators, verify that data sources exist, collection methods are feasible, and quality assurance mechanisms are in place. (MEAL Rule: EX57_W002)
Creating too many indicators. Approximately 5 indicators per results level is typically sufficient. More indicators dilute focus, increase data collection burden, and overwhelm analysis capacity. Prioritise quality over quantity. (MEAL Rule: EX59_R030)
Examples
Agricultural Livelihoods Programme
Weak indicator: "Improved farmer productivity"
This indicator fails across the SMART criteria: "improved" is undefined (not Measurable), "productivity" is ambiguous and no target population is specified (not Specific), and no timeframe is given (not Time-bound).
SMART indicator: "Percentage of smallholder farmers (less than 2 hectares) in Karamoja achieving crop yields of at least 2 tonnes/hectare for drought-resistant maize by end of 2026 rainy season, measured through household surveys."
This indicator is Specific (smallholder farmers, drought-resistant maize, Karamoja), Measurable (2 tonnes/hectare, percentage, household surveys), Achievable, Relevant, and Time-bound (end of 2026 rainy season).
WASH Programme
Weak indicator: "Better access to clean water"
SMART indicator: "Proportion of households with access to improved water sources within 1 kilometre, baseline 35%, target 75% by month 24."
Education Programme
Weak indicator: "Improved learning outcomes"
SMART indicator: "Percentage of Grade 3 students achieving minimum proficiency (70%+ on reading assessment) in local language, disaggregated by gender and disability, baseline 42%, target 65% by year 2."
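Because the SMART versions above carry explicit baseline and target values, progress can be reported as the share of the planned improvement achieved so far. A minimal sketch of that calculation (the function name is my own):

```python
def percent_of_target(current, baseline, target):
    """Share of the baseline-to-target distance covered so far, as a percentage."""
    return 100 * (current - baseline) / (target - baseline)

# WASH example above: baseline 35%, target 75%. A midline value of 55%
# means half of the planned improvement has been achieved.
print(percent_of_target(55, 35, 75))  # 50.0
```

Reporting progress this way only works because the indicator is time-bound with a stated baseline; the weak versions ("better access to clean water") give the calculation nothing to work with.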
Compared To
SMART indicators are one of several approaches to indicator design. The key differences:
| Feature | SMART Indicators | Indicator Selection | Purpose-First Design | Standardised Indicators |
|-----|-----------------|------|------|------|
| Primary purpose | Quality assurance for individual indicators | Choosing among multiple indicator options | Starting from information needs | Comparability across programmes |
| Level of detail | Five quality criteria | Selection process and criteria | Information needs analysis | Pre-defined indicator definitions |
| Best for | Designing new indicators | Selecting from options | Programme design phase | Donor reporting, cross-programme comparison |
| Flexibility | Highly flexible | Flexible | Flexible | Fixed definitions |
| Time required | 15-30 min per indicator | 1-2 hours for full set | 2-4 hours for full set | Minimal (select from list) |
SMART indicators work alongside these approaches: use purpose-first design to identify what you need to measure, use indicator selection to choose among options, and use SMART criteria to ensure each selected indicator is well-defined.
Relevant Indicators
23 indicators across 5 major donor frameworks (USAID, DFID, UNDP, Global Fund, EU) relate to indicator quality and SMART design:
- Indicator quality — "Proportion of programme indicators meeting SMART quality criteria at design stage" (USAID)
- Indicator definition quality — "Percentage of indicators with clear, unambiguous definitions that ensure consistent data collection" (DFID)
- Indicator revision — "Number of indicators revised after quality review during programme implementation" (UNDP)
- Information needs alignment — "Proportion of indicators linked to specific, documented information needs" (Global Fund)
- Standard indicator use — "Percentage of indicators that are standardised or adapted from validated sources" (EU)
Related Tools
- Indicator Specification Template — Comprehensive template for documenting all elements of a SMART indicator including definitions, measurement methods, baselines, and targets
- Indicator Quality Self-Test — Checklist for evaluating indicators against SMART criteria and identifying areas for improvement
- Purpose-First Indicator Design Template — Guided template for starting indicator design from information needs rather than measurement convenience
Related Topics
- Indicator Selection — Process for choosing among multiple indicator options
- Target Setting — Defining realistic, ambitious targets for SMART indicators
- Baseline Design — Establishing current values for SMART indicators
- Data Quality Assurance — Ensuring SMART indicators produce reliable, consistent data
- MEL Plans — Operationalising SMART indicators in monitoring frameworks
- Results Framework — Organising SMART indicators across results levels
Further Reading
- The Indicator Plan — MEAL-DPro guide to developing comprehensive indicator plans with SMART criteria
- Performance Indicator Reference Sheet (PIRS) — USAID's standard template for indicator specification, incorporating SMART criteria
- BetterEvaluation: Indicators — Collection of indicator design resources and SMART frameworks from the evaluation community
- Indicator Quality Scoring Matrix — Scoring framework for evaluating indicators on multiple quality dimensions