Indicator Quality


Copy a prompt into Claude, ChatGPT, or Gemini. Paste your document at the bottom and run.

Paste a document and get a scored quality assessment with evidence and revision priorities.

You are an expert M&E specialist with deep experience designing and reviewing indicators across logframes, MEL plans, PIRS (Performance Indicator Reference Sheets), and results frameworks. Score the indicator or set of indicators I provide using the rubric below. The document may be a logframe, MEL plan, PIRS, indicator framework, results framework, or any other document containing indicators.

SCORING RUBRIC - Indicator Quality
Score each dimension 1-5 using these criteria:

DIMENSION 1: SMART Criteria Compliance
- Score 5: All five SMART elements present. Specific (clear what is being measured, no ambiguity about subject or quality). Measurable (quantifiable or systematically observable, not aspirational). Achievable (realistic given context, capacity, and timeframe). Relevant (matches the result level it claims to measure, e.g., outcome indicators measure outcomes). Time-bound (deadline or measurement schedule specified).
- Score 4: At least four SMART elements present. One element partial or weakly evidenced.
- Score 3: At least three SMART elements present. Notable gaps in specificity, measurability, or time-bounding.
- Score 2: Two or fewer SMART elements present. Indicator is largely aspirational or vague.
- Score 1: No SMART criteria met. Indicator is a slogan or a result statement, not an indicator.

DIMENSION 2: Definition Clarity and Operationalization
- Score 5: All four elements present. Unambiguous indicator definition (no room for different interpretations across people or sites). Numerator and denominator defined for ratios and percentages, or explicit count rules for absolute numbers. Unit of analysis named (person, household, community, event, transaction). Calculation method documented (how the value is computed step by step).
- Score 4: At least three of four elements present. Definition clear and unit named; numerator/denominator or calculation method partial.
- Score 3: Indicator defined at a high level. Unit implied. Numerator/denominator or calculation logic missing.
- Score 2: Definition is the indicator name restated. No calculation logic.
- Score 1: No definition beyond a label.

DIMENSION 3: Disaggregation Strategy
- Score 5: All four elements present. Standard disaggregations included (sex and age at minimum, where relevant). Context-relevant disaggregations included (disability, geography, vulnerability category, beneficiary type). Disaggregation level matches the data collection method (not aspirational beyond what data supports). Reporting plan for disaggregated data specified (where it appears, how it is presented).
- Score 4: At least three of four elements present. Standard and one context disaggregation included; reporting plan or feasibility partial.
- Score 3: Sex disaggregation only. No context disaggregation. Feasibility against data collection unexamined.
- Score 2: Disaggregation listed without specifying categories or reporting plan.
- Score 1: No disaggregation specified.

DIMENSION 4: Data Source and Collection Method
- Score 5: All four elements present. Specific data source named (not generic "monitoring data", e.g., "service delivery records from clinic registers"). Collection method appropriate to indicator type (admin records, survey, observation, secondary source). Frequency of measurement specified (monthly, quarterly, annually, baseline-midline-endline). Responsibility for data collection assigned (specific role, not "the MEL team").
- Score 4: At least three of four elements present. Source named and method appropriate; frequency or responsibility partial.
- Score 3: Source generic ("project data"). Method named but not justified for the indicator. Frequency vague.
- Score 2: Source unspecified. Method unspecified or mismatched.
- Score 1: No data source, method, frequency, or responsibility documented.

DIMENSION 5: Indicator Family Coherence
- Score 5: All four elements present. Indicator at the right result-chain level (input, output, outcome, impact appropriately tagged). No overlap with other indicators in the set (each measures something distinct). Indicator set covers the result chain (not just input-heavy or output-heavy). Number of indicators is right-sized (not padded with vanity metrics, not overloaded with redundancy).
- Score 4: At least three of four elements present. Levels mostly correct and set covers the chain; minor overlap or one off-level indicator.
- Score 3: Result-chain level inconsistently applied. Some overlap. Coverage skewed toward one level.
- Score 2: Levels mislabeled or mixed. Significant redundancy. Coverage gaps at outcome or impact level.
- Score 1: No level tagging. Indicators are duplicative or disconnected from the result chain.

OUTPUT FORMAT:
Return your assessment as a table followed by a summary:

| Dimension | Score (1-5) | Evidence from Document | Priority Revision |
|-----------|-------------|------------------------|-------------------|
| SMART Criteria Compliance | | | |
| Definition Clarity and Operationalization | | | |
| Disaggregation Strategy | | | |
| Data Source and Collection Method | | | |
| Indicator Family Coherence | | | |

**Total: X/25**
**Band:** Strong (22-25) / Adequate (17-21) / Needs Revision (11-16) / Substantial Revision (5-10)
**Single Most Important Revision:** [One specific sentence]

For any dimension scored 1 or 2, add a brief explanation and a concrete revision example.

DOCUMENT TO SCORE:
[Paste your indicator(s) or document with indicators here]

Scoring Criteria

SMART Criteria Compliance
5 (Excellent)

All five SMART elements present. Specific, Measurable, Achievable, Relevant to the result level, and Time-bound.

4 (Good)

At least four SMART elements present. One element partial or weakly evidenced.

3 (Adequate)

At least three SMART elements present. Gaps in specificity, measurability, or time-bounding.

2 (Needs Improvement)

Two or fewer SMART elements present. Indicator largely aspirational.

1 (Inadequate)

No SMART criteria met. Indicator is a slogan or result statement.

Definition Clarity and Operationalization
5 (Excellent)

All four elements present. Unambiguous definition. Numerator/denominator or count rules explicit. Unit of analysis named. Calculation method documented.

4 (Good)

At least three elements. Definition clear and unit named; numerator/denominator or calculation partial.

3 (Adequate)

Indicator defined at a high level. Unit implied. Calculation logic missing.

2 (Needs Improvement)

Definition restates the indicator name. No calculation logic.

1 (Inadequate)

No definition beyond a label.
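To make the top score concrete: an operationalized percentage indicator pins down the numerator, denominator, unit of analysis, and calculation so two people computing it get the same value. The sketch below uses a hypothetical indicator ("% of trained health workers scoring at least 80% on the post-training knowledge test"); the records, field names, and threshold are illustrative, not drawn from any document.

```python
# Hypothetical indicator: "% of trained health workers scoring >= 80%
# on the post-training knowledge test". Unit of analysis: person.
# All records and the 0.80 threshold are illustrative assumptions.

records = [
    {"id": 1, "trained": True,  "test_score": 0.85},
    {"id": 2, "trained": True,  "test_score": 0.70},
    {"id": 3, "trained": False, "test_score": 0.90},  # excluded: not trained
    {"id": 4, "trained": True,  "test_score": 0.92},
]

# Denominator: all health workers who completed the training.
denominator = [r for r in records if r["trained"]]
# Numerator: trained workers at or above the 80% pass threshold.
numerator = [r for r in denominator if r["test_score"] >= 0.80]

value = 100 * len(numerator) / len(denominator)
print(f"{value:.1f}%")  # 66.7%
```

The explicit inclusion rules (trained only; threshold inclusive at 0.80) are exactly what a Score 5 definition documents and a Score 2 definition omits.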

Disaggregation Strategy
5 (Excellent)

All four elements present. Standard disaggregations (sex, age) included. Context-relevant disaggregations included. Level matches data collection method. Reporting plan specified.

4 (Good)

At least three elements. Standard plus one context disaggregation; reporting plan or feasibility partial.

3 (Adequate)

Sex disaggregation only. No context disaggregation. Feasibility unexamined.

2 (Needs Improvement)

Disaggregation listed without categories or reporting plan.

1 (Inadequate)

No disaggregation specified.

Data Source and Collection Method
5 (Excellent)

All four elements present. Specific data source named. Method appropriate to indicator type. Frequency specified. Responsibility assigned to a specific role.

4 (Good)

At least three elements. Source named and method appropriate; frequency or responsibility partial.

3 (Adequate)

Source generic. Method named but unjustified. Frequency vague.

2 (Needs Improvement)

Source unspecified. Method unspecified or mismatched.

1 (Inadequate)

No data source, method, frequency, or responsibility.

Indicator Family Coherence
5 (Excellent)

All four elements present. Each indicator at correct result-chain level. No overlap. Set covers the result chain. Number right-sized.

4 (Good)

At least three elements. Levels mostly correct and set covers the chain; minor overlap or one off-level indicator.

3 (Adequate)

Result-chain level inconsistently applied. Some overlap. Coverage skewed.

2 (Needs Improvement)

Levels mislabeled or mixed. Significant redundancy. Coverage gaps at outcome or impact.

1 (Inadequate)

No level tagging. Indicators duplicative or disconnected from the result chain.

Score Interpretation

| Total (out of 25) | Band | Next Step |
|-------------------|------|-----------|
| 22-25 | Strong | Indicators are well-defined and ready for use. Proceed with baseline data collection. |
| 17-21 | Adequate | Address flagged dimensions before baseline. Most likely fix: tighten indicator definitions and complete data source documentation. |
| 11-16 | Needs Revision | Substantial revision required. Use the Revise prompt to fix definition, disaggregation, and source gaps before fielding any data collection. |
| 5-10 | Substantial Revision | Indicators are too thin to track meaningful change. Rebuild starting from SMART criteria and the result chain, then operationalize definitions and sources. |
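The band thresholds above can be expressed as a small lookup, which is handy if you tally scores from several reviews. A minimal sketch; the function name is illustrative, and the total is assumed to be the sum of the five 1-5 dimension scores (so it always falls between 5 and 25).

```python
# Map a total indicator-quality score (sum of five 1-5 dimension
# scores) to the bands defined in the Score Interpretation table.
# "score_band" is an illustrative name, not part of the prompt.

def score_band(total: int) -> str:
    if not 5 <= total <= 25:
        raise ValueError("total must be between 5 and 25")
    if total >= 22:
        return "Strong"
    if total >= 17:
        return "Adequate"
    if total >= 11:
        return "Needs Revision"
    return "Substantial Revision"

dimension_scores = [4, 3, 5, 2, 4]  # one score per dimension, e.g.
total = sum(dimension_scores)
print(total, score_band(total))  # 18 Adequate
```

Because the bands partition 5-25 with no gaps, every valid total maps to exactly one band.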