AI Playbook

Develop Indicators with AI

6 steps · Works with any AI assistant · No signup required

Align with Framework

Before drafting any new indicators, audit the existing results framework to see where indicators are missing, misaligned, or measuring the wrong thing. This prevents the common mistake of adding indicators to a framework whose results statements are themselves weak.

The AI will flag indicator gaps, vague result statements, over-specified results, grain mismatches, and attribution mismatches.

Prompt for this step

You are a senior M&E specialist. Your task is to assess indicator coverage and quality across the results framework, using the program brief above as context. I will provide the results framework (impact statement, outcomes, outputs, and any existing indicators) below.

For each result statement in the framework — impact, each outcome, and each output — produce a labelled section containing:

1. **Result statement** (restate exactly as provided)
2. **Result level** (impact, outcome, or output)
3. **Existing indicators** (list any indicators currently attached to this result)
4. **Coverage assessment**
   - If no indicator exists: flag as a **GAP** and note what kind of indicator is needed (quantitative reach, behavior change, quality, system-level)
   - If one or more indicators exist: assess whether each one genuinely measures the result (a measurement-validity check): if the indicator's value moves, does that actually tell us whether the result has been achieved?
5. **Statement-quality flags** (apply all that are relevant)
   - **Vague or unmeasurable**: the result statement itself cannot be measured in its current form (e.g., "communities are empowered") and should be rewritten. Propose a rewrite.
   - **Over-specified**: the result has too many indicators (more than 2-3 for an output, more than 3-4 for an outcome) and should be consolidated. Name which indicators to drop or merge.
   - **Grain mismatch**: the indicator measures a different level than the result (e.g., an outcome result with an indicator that actually measures outputs, or vice versa). State the mismatch and what the correct grain would be.
   - **Attribution mismatch**: the indicator measures something the program cannot plausibly influence within its scope or timeframe.

After assessing every result, produce a **Summary of gaps and revisions** section listing:
- All GAP flags (results needing new indicators) — these feed Step 2
- All statement rewrites recommended
- All consolidation recommendations
- All grain corrections required

Apply these principles:
- Results frameworks should have fewer, better indicators rather than many weak ones
- Every indicator should have a clear decision or report that depends on it; if no one uses it, flag it
- Output indicators count activities and deliverables; outcome indicators measure change in the target population; impact indicators measure long-term population-level shifts
- Do not invent indicators in this step; only flag where they are needed

Output as one labelled section per result statement, followed by the summary section. Use structured lists, not tables.

My results framework:
[PASTE YOUR RESULTS FRAMEWORK HERE]
Step 1 of 6