About the Indicator Library
How our indicators are designed, developed, reviewed, and cross-referenced against international donor frameworks.
Built for Practitioners
Starting points, not prescriptions
Every indicator is designed to be adapted. Definitions and data collection methods provide a foundation that programs tailor to their context, geography, and reporting requirements.
Cross-referenced, not copied
The library was developed independently; every indicator was then cross-referenced against international donor frameworks. Two-thirds are either sourced from or aligned with at least one established framework. The rest cover measurement areas those frameworks don't.
Multi-layered review, not single-pass
Indicators are first generated, then reviewed by an independent quality model against five criteria, and finally validated through random sampling across sectors and levels. Generation and review are handled by separate systems, so quality scoring stays independent of the development process.
Structured, not arbitrary
Organized across 18 sectors and 94 sub-sectors with three distinct indicator levels, each level developed using a methodology matched to its measurement complexity. The taxonomy, schema, and quality standards were all designed before a single indicator was built.
How the Library Was Built
Six stages from standards design through framework cross-referencing.
Standards & Schema
Define what good looks like
- Three-field schema: statement, definition, data collection method
- No embedded quantities, so indicators adapt to any program scale
- Operational definitions specific enough to implement
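As a concrete illustration, the three fields above could be captured in a small record type. This is a minimal sketch with assumed class and field names and an invented example entry, not the library's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One library entry under the three-field schema (illustrative names)."""
    statement: str               # what is measured; no embedded quantities or targets
    definition: str              # operational definition specific enough to implement
    data_collection_method: str  # how a program would actually gather the data

# Invented example entry, shown only to make the three fields concrete.
example = Indicator(
    statement="Number of participants completing the full training course",
    definition="Counts individuals who attend every required session, regardless of "
               "cohort size or delivery format.",
    data_collection_method="Attendance registers reconciled at course close-out.",
)
```

Because the statement carries no fixed quantities, the same record can be adopted by a program of any scale and adapted through the definition and data collection fields.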
Sector Taxonomy
Organize the measurement space
- 18 sectors and 94 sub-sectors
- Mapped to how donors, UN agencies, and clusters organize their work
- Unified taxonomy balancing specificity with usability
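One way to picture the taxonomy is a flat mapping from each sector to its sub-sectors. The sector and sub-sector names below are placeholders, not the library's actual taxonomy:

```python
# Hypothetical fragment of the sector -> sub-sector mapping; names are placeholders.
TAXONOMY: dict[str, list[str]] = {
    "Health": ["Maternal and Child Health", "Immunization", "Health Systems Strengthening"],
    "Education": ["Basic Education", "Teacher Professional Development"],
    # ... the full mapping covers 18 sectors and 94 sub-sectors
}

def taxonomy_totals(taxonomy: dict[str, list[str]]) -> tuple[int, int]:
    """Return (sector count, sub-sector count); the full library reports 18 and 94."""
    return len(taxonomy), sum(len(subs) for subs in taxonomy.values())
```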
Framework Analysis
Learn from established institutions
- Reviewed indicator frameworks from international donors and humanitarian organizations
- Impact-level exemplars extracted as development anchors
- Understanding of how indicators are structured, leveled, and defined
Indicator Development
Build level by level
- Output indicators follow standardized deliverable criteria
- Outcome indicators use behavioral change definitions
- Impact indicators anchored to donor framework exemplars
Independent Quality Review
Separate systems, five quality gates
- Reviewed by an independent model against five criteria
- Leveling accuracy, measurability, specificity, adaptability, uniqueness
- Random sampling across sectors and levels
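A hedged sketch of how the five-gate review and the sampling check could fit together. The criterion names come from the list above; the 0-1 scale, the pass threshold, and the stratified sampling details are assumptions:

```python
import random
from collections import defaultdict
from dataclasses import dataclass

CRITERIA = ("leveling_accuracy", "measurability", "specificity", "adaptability", "uniqueness")
PASS_THRESHOLD = 0.8  # assumed cut-off on an assumed 0-1 scale

@dataclass
class Review:
    indicator_id: str
    sector: str
    level: str
    scores: dict[str, float]  # one score per criterion, assigned by the independent review model

    def passes(self) -> bool:
        """An indicator clears review only if every one of the five criteria meets the threshold."""
        return all(self.scores.get(c, 0.0) >= PASS_THRESHOLD for c in CRITERIA)

def validation_sample(reviews: list[Review], per_stratum: int = 3, seed: int = 7) -> list[Review]:
    """Draw a random sample from every (sector, level) stratum for spot-check validation."""
    rng = random.Random(seed)
    strata: dict[tuple[str, str], list[Review]] = defaultdict(list)
    for r in reviews:
        strata[(r.sector, r.level)].append(r)
    return [r for group in strata.values()
            for r in rng.sample(group, min(per_stratum, len(group)))]
```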
Framework Cross-Referencing
Align with established frameworks
- Cross-referenced against indicators from international donor frameworks
- Semantic similarity analysis of indicator statements
- Each indicator tagged as Sourced, Aligned, or Original
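A sketch of the tagging step under assumed tooling: embed each indicator statement alongside the reference framework statements, take the best cosine match, and map it to a tag. The embedding model, thresholds, and function name are illustrative choices, not the actual pipeline:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

SOURCED_THRESHOLD = 0.90  # assumed: near-duplicate of a framework indicator
ALIGNED_THRESHOLD = 0.70  # assumed: same construct, different wording

def tag_indicator(statement: str, framework_statements: list[str]) -> str:
    """Tag one indicator by its closest semantic match against framework indicators."""
    if not framework_statements:
        return "Original"
    vectors = model.encode([statement] + framework_statements, normalize_embeddings=True)
    best = float(np.max(vectors[1:] @ vectors[0]))  # cosine similarity of normalized vectors
    if best >= SOURCED_THRESHOLD:
        return "Sourced"
    if best >= ALIGNED_THRESHOLD:
        return "Aligned"
    return "Original"
```

Because the comparison runs on embeddings rather than exact words, two statements such as "Number of children vaccinated against measles" and "Count of children receiving measles immunization" would typically land in the Aligned or Sourced range despite sharing little wording.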
Three Indicator Levels
Each level uses a distinct development approach matched to what it measures.
Output
Direct, countable deliverables of program activities
Developed using sector context and standardized leveling rules
Outcome
Changes in knowledge, behavior, practice, or condition
Developed using outcome-specific behavioral change definitions
Impact
Population-level change measured through national-scale data
Anchored to exemplars from international donor frameworks
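The three levels and their development approaches could be encoded directly, for instance as an enumeration; the names below are assumptions for illustration:

```python
from enum import Enum

class Level(Enum):
    OUTPUT = "output"
    OUTCOME = "outcome"
    IMPACT = "impact"

# Development approach per level, paraphrasing the descriptions above.
DEVELOPMENT_APPROACH = {
    Level.OUTPUT: "sector context plus standardized leveling rules for deliverables",
    Level.OUTCOME: "outcome-specific behavioral change definitions",
    Level.IMPACT: "anchoring to impact exemplars from international donor frameworks",
}
```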
Framework Alignment
Every indicator has been cross-referenced against international donor frameworks and tagged with its alignment level.
Sourced
Equivalent to an indicator in an established donor framework
Aligned
Measures a construct also measured by international donor frameworks
Original
Extends beyond standard donor framework coverage
Alignment was computed using semantic similarity analysis of indicator statements, which captures meaning equivalence rather than exact wording, so indicators with different phrasing but the same measurement intent are still matched. An alignment tag means that an established framework measures a similar construct; it does not imply endorsement by that framework.
Explore the Indicator Library
Search and filter indicators across 18 sectors. Available in English, Spanish, and French.