AI Playbook

Build an Evaluation Plan with AI

6 steps · Works with any AI assistant · No signup required

Define Evaluation Questions

This step generates a rigorous evaluation question set organized by the six OECD-DAC criteria. Once you have pasted your program brief above, run this prompt to produce 8-10 evaluable questions that will anchor the rest of your evaluation plan.

Prompt for this step

You are a senior M&E evaluation specialist. Your task is to generate a complete evaluation question set for the program described in the brief above.

Produce 8-10 evaluation questions organized by the OECD-DAC criteria: relevance, coherence, effectiveness, efficiency, impact, and sustainability. Include at least one question per criterion, and make every question genuinely evaluable (answerable through evidence, not rhetorical or aspirational).

For each evaluation question, provide the following as a structured list with labelled sections:

1. **Question text** — phrased as a clear interrogative, specific to the program context, and narrow enough to be answered from defined data sources.
2. **DAC criterion** — one of the six (relevance, coherence, effectiveness, efficiency, impact, sustainability).
3. **What it seeks to establish** — 1-2 sentences describing the underlying judgment or causal claim the question tests.
4. **Decision or learning it informs** — the specific program, policy, or strategic decision this answer will support (e.g., scale-up decision, design revision, donor reporting, institutional learning).

Additional requirements:
- Balance backward-looking accountability questions with forward-looking learning questions.
- For effectiveness and impact questions, reference the program's stated outcomes where the brief permits.
- For efficiency, include at least one question addressing cost, time, or resource use.
- For sustainability, address both institutional and financial continuation.
- Avoid compound questions (split "X and Y" into two questions).
- Avoid questions answerable only with yes/no unless paired with a "to what extent" qualifier.

After the 8-10 questions, add a short closing section titled **Coverage Check** (80-120 words) confirming which criteria are covered, flagging any criterion where the program brief offered limited basis for question generation, and noting any assumption you made about program scope.

Format the output as numbered evaluation questions with the four labelled sections under each, followed by the Coverage Check section. Use plain prose under each label, no bullet clutter.
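If you prefer to script this step rather than paste into a chat window, the prompt can be assembled programmatically before sending it to whichever assistant you use. A minimal sketch: the `PROMPT_TEMPLATE` constant condenses the instructions above, and the `build_step_prompt` helper and sample brief text are illustrative names and placeholder data, not part of the playbook.

```python
# Assemble the Step 1 prompt from a program brief, ready to send to any
# chat-based AI assistant. The template below is an abridged version of the
# full prompt in this step; swap in the complete text for real use.

PROMPT_TEMPLATE = """You are a senior M&E evaluation specialist.

Program brief:
{brief}

Based on the program brief above, produce 8-10 evaluation questions organized
by the OECD-DAC criteria: relevance, coherence, effectiveness, efficiency,
impact, and sustainability. Include at least one question per criterion, and
make every question genuinely evaluable. For each question, provide: question
text, DAC criterion, what it seeks to establish, and the decision or learning
it informs. Close with a Coverage Check section (80-120 words)."""


def build_step_prompt(brief: str) -> str:
    """Return the full Step 1 prompt with the program brief inserted."""
    return PROMPT_TEMPLATE.format(brief=brief.strip())


if __name__ == "__main__":
    # Placeholder brief for demonstration only.
    brief = "A two-year rural literacy program serving 40 primary schools."
    print(build_step_prompt(brief))
```

The same helper works for the later steps: keep one template per step and feed each the brief (plus any prior step's output) so the whole plan can be regenerated consistently.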
Step 1 of 6