
Custom vs Standard Indicators: Which to Use and When

Most M&E systems need both. Standard indicators satisfy donor compliance and comparability; custom indicators capture what is specific to your program. Here is how to mix them without doubling the reporting load.

Key Takeaway
Use standard indicators for comparability, custom indicators for specificity, and map them together
Standard indicators buy donor compliance, benchmark data, and validated methodology. Custom indicators capture what is unique about your program's theory of change. A good MEL plan uses both deliberately, maps them against each other, and does not pretend one can do the work of the other. The cost of over-relying on standard indicators is that your program's distinctive outcomes go unmeasured; the cost of over-relying on custom indicators is donor friction and no external benchmark.

Standard vs Custom at a Glance

| Factor | Standard indicators | Custom indicators |
| --- | --- | --- |
| Source | Established framework (SDGs, SPHERE, USAID F, PEPFAR MER, GEF, sector handbooks, donor libraries) | Designed for this program, based on its theory of change |
| Definition | Fixed, inherited | Must be authored and validated |
| Data collection method | Typically specified by the framework | Must be designed and tested |
| Disaggregation | Often required by the framework | Decided by the program |
| Comparability | High, across programs and contexts | Low by default; requires explicit alignment to achieve comparability |
| Donor acceptance | High, especially when the donor owns the framework | Varies; requires justification and definition |
| Cost to deploy | Lower (tools and definitions exist) | Higher (authoring, testing, training) |
| Captures program-specific outcomes | Only if the program aligns exactly to the framework | Yes, by design |

The choice is not binary. Most mature MEL plans use both deliberately, and the ratio reflects the program's design. A program that tracks 15 standard and 10 custom indicators, with documented mapping between them, is better designed than one that picks 25 of either alone.

For the conceptual definition, see custom vs standard indicators. For the broader indicator design context, see indicator selection.

Common Standard Indicator Frameworks

Standard indicators are never just "standard." They come from a specific framework with its own conventions. Using one means inheriting that framework's measurement discipline.

| Framework | Scope | Strength | Common use |
| --- | --- | --- | --- |
| Sustainable Development Goals (SDG) | Global, cross-sector | Widely recognized, aggregated to national statistics | Population-level outcomes, national-policy linkage |
| SPHERE Standards | Humanitarian (WASH, shelter, food, health, protection) | Sector-specific, field-tested, minimum standards | Humanitarian programs, emergency response |
| USAID F Framework | Global, USG-funded programs | Detailed technical definitions, mandatory for USAID reporting | USAID programs |
| PEPFAR MER (Monitoring, Evaluation, and Reporting) | HIV/AIDS | Precise case definitions, quarterly reporting | PEPFAR-funded HIV programs |
| GEF Core Indicators | Environment, biodiversity, climate | Results-framework aligned | GEF-funded environment programs |
| JMP (WHO/UNICEF Joint Monitoring Programme) | WASH | Service ladders (basic, limited, safely managed), comparable across countries | WASH programs, SDG 6 reporting |
| DHS / MICS | Population health and child welfare | National sample surveys; produces comparable estimates | Health and nutrition program benchmarking |
| INEE | Education in emergencies | Minimum standards, access and quality dimensions | Education in humanitarian contexts |
| HFIAS / FCS / rCSI / HHS | Food security | Validated household-level measures | Food security monitoring |
| Donor-specific indicator libraries | Donor-aligned (e.g., FCDO, EU, BMZ, JICA) | Mandatory for compliance | Proposal and reporting to that donor |

Two implications. First, "standard indicators" is not a single catalog; it is many frameworks with different assumptions and data collection requirements. Second, the framework you pick shapes your data collection budget as much as your indicator list does. Picking a SPHERE standard indicator commits you to SPHERE-aligned measurement; picking an SDG indicator commits you to population-level sampling that a 500-household program cannot execute.

When Standard Indicators Win

Use standard indicators when one or more of the following applies.

Compliance is non-negotiable. Your donor's indicator library is in the contract. You report their indicators whether they fit your program or not. This is the most common driver: for larger bilateral or multilateral awards, donor compliance typically dictates roughly 30-50% of the indicator list.

Comparability is analytically important. You are running a program in a context where others have measured the same thing, and you want your results to be comparable. This matters for evaluation credibility, scale-up decisions, and evidence-base contributions.

The framework has validated methodology you would otherwise have to build. JMP service ladders, HFIAS food insecurity scales, and DHS-style household indicators come with tested questionnaires, pretested item wording, and known psychometric properties. Rebuilding these yourself is expensive and probably worse.

The program fits the framework's assumptions. If your WASH program is delivering household water access in a way JMP was designed to measure, use JMP indicators. If your food security program targets household-level consumption in a way HFIAS measures, use HFIAS.

External benchmark data exists. Standard indicators often come with reference values (national averages, SPHERE minimum standards, JMP coverage rates). These benchmarks are analytically useful and impossible to reproduce for a custom indicator without running parallel studies.

When Custom Indicators Win

Use custom indicators when the standard indicators do not capture what your program is actually doing.

Your program is outside the standard framework's scope. A social enterprise generating livelihoods through niche value chains, an innovative education delivery model, or a conflict-sensitive program in a new geography may have outcomes no standard framework measures well. Forcing these into the nearest standard indicator produces data that is technically collectable but substantively uninformative.

Your theory of change is pathway-specific. Standard indicators usually sit at the outcome or impact level. If your program's causal pathway includes intermediate changes (attitude shifts, skill acquisition, behavioral precursors) that standard indicators do not measure, you need custom indicators for those stages. See output vs outcome vs impact for the level-by-level logic.

The measurement cost of the standard indicator is disproportionate. An SDG indicator requiring a national household survey cannot be collected by a $2M program operating in 3 districts. Using a custom indicator at the program scale is defensible even when a standard indicator exists in principle.

The standard indicator's definition fails the local context. "Household" in one context means nuclear family; in another, extended compound. "Improved water source" in one framework excludes boreholes; in another, includes them. If the standard definition does not match your program's operational reality, using it anyway produces invalid data. Better to build a custom indicator with a locally appropriate definition and document the rationale.

You need higher resolution than the standard provides. Standard indicators often track binary outcomes (had access / did not, achieved / did not). A program whose theory of change requires graduated measurement (how much access, how consistently, across what subgroups) needs custom indicators at finer resolution.

The Hybrid Approach

The strongest MEL plans use both. The question is not "which" but "in what proportion and how mapped."

Typical structure:

  • Compliance layer: 5-15 standard indicators required by the donor or regulatory framework. Reported at the required intervals in the required format. Data collection effort proportionate to reporting requirement, not higher.
  • Comparability layer: 3-8 additional standard indicators chosen for external benchmark value (SDG, SPHERE, JMP, or sector norm). These are not compliance-required but anchor your results against recognizable reference points.
  • Custom layer: 8-20 program-specific indicators measuring what the program uniquely does. These track the theory of change at the levels standard indicators cannot reach.

A typical mid-sized development program ends up with 20-40 total indicators across all layers, roughly 40-60% standard (compliance + comparability) and 40-60% custom. Humanitarian programs tilt higher on standard (cluster standards). Innovative programs tilt higher on custom.
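
If the indicator list lives in a structured format, the ratio check can be automated rather than eyeballed at each plan revision. A minimal Python sketch, assuming a hypothetical registry format (the field names, example indicators, and the hard 40-60% check are illustrative, not a standard schema):

```python
# Minimal sketch: track indicator layers and check the standard/custom ratio.
# The registry format and the 40-60% band as a hard check are assumptions.

indicators = [
    {"name": "TX_CURR", "layer": "compliance"},                   # donor-mandated standard
    {"name": "JMP basic service coverage", "layer": "comparability"},
    {"name": "Water committee capacity score", "layer": "custom"},
    # ... remaining indicators
]

def layer_counts(registry):
    """Count indicators per layer (compliance, comparability, custom)."""
    counts = {}
    for ind in registry:
        counts[ind["layer"]] = counts.get(ind["layer"], 0) + 1
    return counts

def standard_share(registry):
    """Share of indicators that are standard (compliance + comparability)."""
    counts = layer_counts(registry)
    standard = counts.get("compliance", 0) + counts.get("comparability", 0)
    return standard / len(registry)

share = standard_share(indicators)
if not 0.4 <= share <= 0.6:
    print(f"Standard share {share:.0%} is outside the typical 40-60% band")
```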

See the too-many-indicators mistake guide for why ratio alone does not save you from indicator bloat.

Building an Alignment Map

For each custom indicator, document its relationship to the relevant standard indicator (if any). This is the alignment map, and it is the single most important artifact of the hybrid approach.

Structure:

| Custom indicator | Maps to (standard) | Relationship | Use |
| --- | --- | --- | --- |
| % women reporting participation in household decisions on child education | SDG 5.5.1 or custom GESI indicator | Approximate; custom captures finer resolution | Report both; use custom for program management, SDG 5.5.1 for aggregation |
| Functional water committee score (0-20) | JMP "safely managed drinking water" service ladder | Not a direct map; custom measures management capacity as a precursor to service sustainability | Report both; use custom as leading indicator, JMP as outcome measure |
| Households demonstrating IYCF practice (observed) | WHO IYCF indicators (self-report) | Same construct, different measurement rigor | Report WHO as compliance; use custom as higher-fidelity measure |

What the map accomplishes:

  • Analysts and evaluators can reconstruct why each indicator exists
  • Successor M&E staff understand the logic without starting from scratch
  • Donors see how program-specific data complements compliance reporting
  • Comparability is achieved deliberately, not accidentally

Keep the map in the MEL plan, not in a separate file that will be lost. Review and update at every MEL plan revision.
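
Where the MEL plan sits alongside structured data, the alignment map can also be kept as machine-readable records so the logic survives staff turnover. A sketch in Python, assuming a hypothetical CSV layout that mirrors the table above (column names and example rows are illustrative):

```python
import csv
import io

# Minimal sketch: the alignment map as structured records, one row per
# custom indicator. The layout is an assumption mirroring the table above.

ALIGNMENT_MAP = """custom_indicator,maps_to_standard,relationship,use
% women reporting participation in household decisions,SDG 5.5.1,approximate; finer resolution,report both
Functional water committee score (0-20),JMP safely managed ladder,precursor; not a direct map,custom as leading indicator
Client-centered care quality score,,no standard equivalent documented,program management only
"""

def load_alignment_map(text):
    """Parse the map and flag custom indicators with no documented standard mapping."""
    rows = list(csv.DictReader(io.StringIO(text)))
    unmapped = [r["custom_indicator"] for r in rows
                if not r["maps_to_standard"].strip()]
    return rows, unmapped

rows, unmapped = load_alignment_map(ALIGNMENT_MAP)
for name in unmapped:
    print(f"No standard mapping documented for: {name}")
```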

Handling Definitional Drift

Standard indicators sometimes change their definitions. SPHERE revised several standards in 2018 and again in the 2024 revision cycle. USAID F updates annually. SDG indicators have had technical revisions in most reporting years.

Definitional drift creates two specific problems:

Historical comparison is broken. Your 2022 endline data using the old definition is not directly comparable to your 2026 endline using the new definition. Document the version of each standard indicator in your MEL plan ("SDG 3.8.1 as of 2024 revision") so analysts can flag comparability breaks.

Donor reporting shifts mid-program. A multi-year program may enter year 3 and discover the donor's mandatory indicator list changed. Build revision contingency into the MEL plan: a review of indicator currency at each annual work plan cycle.

Custom indicators do not drift unless you change them. This is an advantage for internal program management and a disadvantage for external comparability.
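
One lightweight guard against drift is to pin the framework version next to each standard indicator and compare it against the version in force at each annual review. A sketch, assuming a hypothetical version-tracking convention (the indicator codes are real framework identifiers; the version labels and the idea of a hand-maintained "current versions" list are illustrative):

```python
from datetime import date

# Minimal sketch: pin the framework version used for each standard
# indicator, then flag any whose pinned version no longer matches the
# version currently in force. Version labels here are assumptions.

PINNED = {
    "SDG 3.8.1": "2024 revision",
    "SPHERE WASH water quantity": "2018 handbook",
}

CURRENT_VERSIONS = {  # updated by hand at each annual work plan cycle
    "SDG 3.8.1": "2024 revision",
    "SPHERE WASH water quantity": "2024 revision cycle",
}

def comparability_breaks(pinned, current):
    """Return indicators whose definition version changed since pinning."""
    return {ind: (old, current[ind])
            for ind, old in pinned.items()
            if current.get(ind) and current[ind] != old}

for ind, (old, new) in comparability_breaks(PINNED, CURRENT_VERSIONS).items():
    print(f"{date.today()}: comparability break on {ind}: {old} -> {new}")
```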

Sector Examples

Health: Balancing PEPFAR MER with program-specific outcomes, East Africa

An HIV program operating under PEPFAR required 12 MER indicators for quarterly compliance reporting. The program's theory of change included client-centered care quality improvements that MER indicators did not measure. The MEL team added 8 custom indicators covering: perceived provider respect (3-item scale), continuity of care across service types (proxy: consecutive quarter visits), and stigma reduction in the broader community (household attitudinal survey). Mapping: the 12 standard indicators covered outputs and service-coverage outcomes; the 8 custom indicators covered process quality and community-level outcomes that standard indicators did not reach. Reports separated the two layers clearly so donor compliance was never in question.

WASH: JMP ladder alignment for a rural water program, West Africa

A rural water access program reported JMP service ladder categories for SDG 6.1 alignment ("basic service," "limited service," "no service"). Program management needed higher resolution: the program's theory of change included water committee capacity as a precursor to service sustainability. The MEL plan added a custom Water Committee Capacity Score (0-20, 10 items) as a leading indicator. When JMP-basic service coverage plateaued in year 2, the capacity score was already flagging specific weak committees, giving the program time to intervene. Standard indicators were the outcome measure; custom was the leading indicator.

Education: INEE standards + custom learning indicators, South Asia

An education-in-emergencies program applied INEE Minimum Standards for compliance and comparability (access, quality, coordination). INEE standards did not measure specific learning outcomes at the grade level the program was targeting. The MEL plan added custom learning indicators: Grade 3 literacy assessment scores (adapted EGRA), Grade 5 numeracy (adapted EGMA), and school-level reading time allocation. The mix gave the donor INEE-aligned comparability against other education-in-emergencies programs and gave program management learning-outcome data at the classroom level.

Food security: HFIAS for compliance, food diversity for program management, Sahel

A food security program reported HFIAS scores quarterly for donor compliance and cross-program comparability. The program's pastoralist context included seasonal migration patterns that HFIAS captured only partially. The MEL team added two custom indicators: a Household Diet Diversity Score disaggregated by migration phase (transhumance vs settled) and a Household Asset Disposal Indicator (tracking distress sales as early warning). HFIAS stayed as the comparable standard; the custom indicators caught context-specific dynamics the standard could not.

Common Mistakes

Mistake 1: Picking standard indicators without reading the definitions. The indicator name sounds right; the technical specification says something different. "Households with improved water access" in JMP has a specific definition (safely managed, basic, limited, unimproved) that is not the same as the colloquial meaning. Read the full definition before committing.

Mistake 2: Modifying standard indicators silently. If your operational definition differs from the framework definition, you are not using the standard indicator anymore. Either use it unmodified or label it as a custom indicator with a clear note on the deviation. Using the standard indicator's name with a modified definition produces data that looks comparable and is not.

Mistake 3: Assuming custom indicators do not need validation. Custom indicators are not shortcuts. They need defined measurement methods, tested questionnaires, inter-rater reliability checks for complex measures, and a clear rationale documented in the MEL plan. "We made it up" is not a methodology.

Mistake 4: No alignment map between custom and standard. Without a map, custom and standard indicators drift into parallel tracks that nobody uses together. The map forces the question: why does the custom version exist if the standard one already measures this?

Mistake 5: Too many standard indicators without regard to program fit. A program operating in 3 districts does not need 15 national-level SDG indicators. Compliance is required; over-reporting is not. Push back on excess standard indicators when the compliance requirement does not mandate them.

Mistake 6: Too many custom indicators without regard to comparability. A program with 20 custom indicators and 2 standard ones cannot be compared to anything. External evaluators struggle to contextualize results. Add 3-5 standard indicators for benchmark value even when not compliance-required.

Mistake 7: Ignoring donor framework updates mid-program. Standard indicators drift. Check the current version at every annual work plan revision. Report comparability breaks explicitly in the annual report.

Mistake 8: Running standard and custom data collection as parallel operations. Where the constructs overlap, integrate measurement into a single survey round or data collection visit. Running two separate quarterly surveys for standard and custom indicators doubles the cost for marginal quality gain.

Custom-vs-Standard Decision Checklist

Run through this during MEL plan development for each candidate indicator.

Is a standard indicator available?

  • Checked donor's mandatory indicator list
  • Checked sector frameworks (SPHERE, SDG, relevant sector standards)
  • Checked donor library for supplementary recommended indicators
  • Reviewed the standard indicator's full definition, not just its name

Does the standard indicator fit the program?

  • Construct matches program's theory of change at the right level
  • Measurement method is feasible given program scale and budget
  • Definition matches local context and operational reality
  • Data collection frequency matches decision cycle

If you are using a custom indicator instead:

  • Reason documented (standard unavailable, fit poor, cost prohibitive, resolution insufficient)
  • Measurement method defined and tested (cognitive pretest + pilot)
  • Alignment to the nearest standard indicator mapped
  • Validation approach specified (inter-rater reliability, construct validity, or expert review)

Portfolio-level checks:

  • Standard/custom ratio matches program type (40-60% each for most development programs)
  • No duplicate measurement (same construct measured standard and custom without explicit reason)
  • Alignment map complete in MEL plan
  • Donor compliance requirements all met with unmodified standard indicators

For the full indicator design workflow, see SMART indicators deep-dive and indicator vs target vs milestone. For the proposal-writing application, see how to write the M&E proposal section. For an AI-assisted step-by-step workflow, see the Indicator Development playbook.
