M&E Studio
AI for M&E, Built for Practitioners

© 2026 Logic Lab LLC. All rights reserved.

Indicator Reference Sheet (PIRS)

The detailed per-indicator specification document (definition, unit of measure, disaggregation, data source, frequency, responsibility, baseline, target, quality controls) that turns an indicator name into a usable measurement protocol. Mandatory for USAID PMPs; best practice everywhere.

An Indicator Reference Sheet, known as a PIRS (Performance Indicator Reference Sheet) in USAID terminology, is the one-to-two-page per-indicator protocol that specifies everything needed to collect, calculate, and report an indicator consistently. It is the difference between an indicator table and a working measurement system.

What Goes in a PIRS

A complete PIRS covers ten standard components:

  1. Indicator title and code. The full indicator name plus a stable reference code used across the PMP, the PITT, and reporting systems.
  2. Definition. The full operational definition, not just the name. What counts, what does not, and any scope boundaries (geographic, demographic, temporal).
  3. Unit of measure and calculation formula. The precise unit (number, percentage, ratio) and the exact formula, including numerator and denominator for any percentage indicator.
  4. Disaggregation required. Every dimension the indicator must be reported by: sex, age band, geography, disability status, wealth quintile, treatment versus control, and so on.
  5. Data source and collection method. The specific source (facility register, household survey, administrative database) and the specific method (enumerator interview, records review, GPS-tagged observation). "Monitoring system" is not a data source.
  6. Responsible party. Named by role, not by person. "M&E Officer, District Office" beats "Alice."
  7. Collection frequency and reporting deadline. How often data is collected and when it is due into the MEL system.
  8. Baseline value and year. The starting value with the reference period, plus the method used to establish it.
  9. Annual targets. Year-by-year targets for the life of the award, justified against baseline and implementation capacity.
  10. Data quality controls and known limitations. Validation steps, verification procedures, and an honest statement of the indicator's weaknesses (recall bias, self-report limits, coverage gaps).
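Captured as a data structure, the ten components become checkable. The sketch below is illustrative only: the `PIRS` class, its field names, and the `missing_fields` helper are assumptions for demonstration, not an official USAID schema.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class PIRS:
    """One reference sheet per indicator.
    Field names map to the ten standard components; they are
    illustrative, not an official USAID schema."""
    code: str                        # 1. stable reference code, e.g. "OUT-1.2" (hypothetical)
    title: str                       # 1. full indicator name
    definition: str                  # 2. operational definition with scope boundaries
    unit: str                        # 3. unit of measure, e.g. "percentage"
    formula: str                     # 3. exact calculation, numerator / denominator
    disaggregation: list             # 4. required reporting dimensions
    data_source: str                 # 5. specific source, never "monitoring system"
    collection_method: str           # 5. specific method
    responsible_role: str            # 6. role, not person
    frequency: str                   # 7. collection frequency and deadline
    baseline_value: Optional[float]  # 8. starting value
    baseline_year: Optional[int]     # 8. reference period
    annual_targets: dict             # 9. {year: target}
    limitations: str                 # 10. known weaknesses and quality controls

def missing_fields(sheet: PIRS) -> list:
    """Names of components still blank: a quick completeness check
    before a draft PIRS goes to data collectors."""
    return [f.name for f in fields(sheet)
            if getattr(sheet, f.name) in (None, "", [], {})]
```

A draft sheet with an empty `data_source` and no baseline would come back from `missing_fields` with exactly those two names, which is the same gap a reviewer would flag.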

Why PIRS Matter

Without PIRS, indicators mean different things to different people over time. Successor staff cannot reconstruct the methodology. Evaluators cannot verify calculations. Donors cannot audit results. The MEL plan lists indicators; the PIRS document makes them executable. A team that cannot produce a PIRS for an indicator does not actually know how to measure it, regardless of how confidently the indicator appears in a proposal.

PIRS Format

A PIRS is typically one to two pages per indicator, held as an annex to the MEL plan or PMP. Teams format them as a separate .docx or PDF per indicator, or aggregate them into a single workbook with one tab per indicator. Whichever format you choose, version-control the PIRS alongside the MEL plan so changes to definitions, baselines, and targets are traceable.

Proposal Context

PIRS matter in proposals for two reasons. First, USAID ADS 201 requires a PIRS for every indicator in a Performance Management Plan, so proposals under USAID must include them. Second, even when not donor-required, including sample PIRS for three to five priority indicators in a proposal signals MEL maturity that distinguishes experienced applicants from inexperienced ones. A proposal with an indicator table but no reference sheets reads as M&E-as-afterthought. A proposal with five fully specified PIRS reads as ready to execute. Common pitfall: proposing ambitious indicators without the PIRS-level detail to show feasibility (data source unclear, disaggregation undefined, calculation formula absent). Reviewers spot this gap immediately and score accordingly.

Common Mistakes

Skipping PIRS entirely. Producing a "finished" MEL plan with an indicator table but no reference sheets. The plan looks complete on the table of contents and collapses the moment anyone tries to use it. Data collectors ask questions the document cannot answer, and the team improvises inconsistently across sites and quarters.

Writing PIRS that restate the indicator name without adding specificity. A reference sheet that says "data source: routine monitoring, collection method: as appropriate" is a placeholder, not a PIRS. If it does not let a new hire execute the indicator on day one, it has not done its job.

Related Topics

  • Performance Management Plan: The USAID planning document that requires a PIRS per indicator
  • MEL Plans: The broader planning document that PIRS are typically annexed to
  • Indicator Selection: Choosing the indicators that will get their own PIRS
  • SMART Indicators: The quality criteria a PIRS makes operational
  • Target Setting: Setting the baseline and annual targets a PIRS records
