Build an Evaluation Plan with AI

A 5-step prompt workflow that produces a donor-ready evaluation plan. Run all prompts in a single AI conversation. Takes 30-45 minutes.

30-45 min · 6 steps · Intermediate · Planning

What you'll build

A complete evaluation plan with evaluation questions, methodology, data collection matrix, analysis plan, and limitations section.

Before you start

  • Your program's Theory of Change or logframe
  • Donor framework or requirements (USAID, FCDO, EU, etc.)
  • Program timeline and budget constraints
  • Any existing evaluation questions from a Terms of Reference
1. Define Evaluation Questions

Start by turning your program objectives into focused, answerable evaluation questions organized by OECD-DAC criteria. This is the foundation that drives every other section of the plan.

Step 1: Define Evaluation Questions

You are a senior M&E evaluation specialist. I need to develop evaluation questions for my program. Generate 8-10 evaluation questions organized by OECD-DAC criteria (relevance, coherence, effectiveness, efficiency, impact, sustainability). For each question:
- State the question
- Identify which DAC criterion it addresses
- Note what data would answer it
- Flag whether it requires primary data collection
Focus on questions that are answerable within a typical program evaluation budget. Do not include more than 10 questions total.

Most donors care about 3-4 DAC criteria, not all 6. After the AI generates questions, cut the ones for criteria your donor will not ask about.
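If you keep the generated questions in a simple structure, trimming to your donor's criteria is mechanical. A minimal sketch in Python — the questions and the donor's criteria list here are illustrative placeholders, not from a real plan:

```python
# Each generated evaluation question, tagged with its OECD-DAC criterion.
# Sample data is illustrative only.
questions = [
    {"q": "How well did activities align with community priorities?", "criterion": "relevance"},
    {"q": "Did the program achieve its stated outcomes?", "criterion": "effectiveness"},
    {"q": "Will benefits persist after funding ends?", "criterion": "sustainability"},
    {"q": "How did the program complement other donor interventions?", "criterion": "coherence"},
]

# Criteria this (hypothetical) donor actually reports against.
donor_criteria = {"relevance", "effectiveness", "sustainability"}

kept = [q for q in questions if q["criterion"] in donor_criteria]
cut = [q for q in questions if q["criterion"] not in donor_criteria]
print(f"Kept {len(kept)} questions, cut {len(cut)}")
```

The point is the filter, not the data: once each question carries its criterion as a field, cutting to 3-4 criteria is one line.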

2. Select Methodology

Choose an evaluation design that matches your questions, budget, and field access. The AI already has your evaluation questions from the previous step.

Step 2: Select Methodology

Based on the evaluation questions you just generated, recommend an evaluation methodology. For each question, specify:
- Proposed method (survey, KII, FGD, document review, observation, secondary data analysis)
- Sample or respondent group
- Why this method fits this question
Then state the overall evaluation design (theory-based, contribution analysis, pre-post comparison, or quasi-experimental) and explain why it fits. If a counterfactual design is not feasible, say so and recommend the strongest non-experimental alternative.

Check whether each proposed method is realistic for your actual field access. AI does not know your security situation or partner capacity.
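That feasibility check can be made explicit. A minimal sketch, assuming a hypothetical mapping of questions to methods and a list of methods your context rules out:

```python
# Map each question to its proposed method (illustrative entries).
proposed = {
    "Outcome achievement": "survey",
    "Partner coordination": "KII",
    "Community perceptions": "FGD",
}

# Methods ruled out by (hypothetical) security or access constraints,
# e.g. group gatherings restricted in the program area.
infeasible = {"FGD"}

flags = {q: m for q, m in proposed.items() if m in infeasible}
for q, m in flags.items():
    print(f"Rethink '{q}': {m} is not feasible in the current context")
```

Anything flagged goes back to the AI with a request for an alternative method for that specific question.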

3. Build Data Collection Matrix

Structure the data collection plan into a matrix that links every evaluation question to specific data sources, methods, and responsibilities.

Step 3: Build Data Collection Matrix

Build a data collection matrix as a table with these columns:
- Evaluation Question
- Data Source
- Method
- Sample Size / Respondents
- Frequency / Timing
- Responsible Party (leave as TBD for me to fill)
- Ethical Considerations
One row per evaluation question. Where a question requires multiple data sources, use multiple rows. After the matrix, list any data collection instruments that need to be developed (questionnaires, interview guides, observation checklists) with a one-line description of each.

Count the instruments. More than 4-5 means you are probably over-collecting. Look for instruments you can combine.
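Both checks — instrument count and question coverage — fall out of the matrix once it is in a structured form. A minimal sketch with illustrative rows:

```python
# Rows of the data collection matrix (illustrative). Each row links one
# evaluation question to one method and the instrument that method needs.
matrix = [
    {"question": "EQ1", "method": "survey", "instrument": "Household questionnaire"},
    {"question": "EQ1", "method": "KII", "instrument": "Key informant guide"},
    {"question": "EQ2", "method": "KII", "instrument": "Key informant guide"},
    {"question": "EQ3", "method": "FGD", "instrument": "FGD guide"},
    {"question": "EQ4", "method": "document review", "instrument": None},
]

# Distinct instruments to develop (document review needs none).
instruments = {row["instrument"] for row in matrix if row["instrument"]}
print(f"{len(instruments)} instruments to develop: {sorted(instruments)}")
if len(instruments) > 5:
    print("Warning: likely over-collecting; look for instruments to combine")

# Sanity check: every evaluation question appears in at least one row.
covered = {row["question"] for row in matrix}
```

Note how EQ1 and EQ2 share the key informant guide: combining respondent groups into one instrument is the usual way to get the count down.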

4. Draft Analysis Plan

Specify how you will analyze the data for each evaluation question. This keeps the analysis focused and prevents the common problem of collecting data you never use.

Step 4: Draft Analysis Plan

For each evaluation question, write an analysis plan that specifies:
- Analysis technique (thematic analysis, descriptive statistics, comparative analysis, contribution tracing, etc.)
- Expected output format (comparison table, thematic map, trend chart, etc.)
- What constitutes a sufficient answer vs. an inconclusive finding
Keep each plan to 2-3 sentences. This is a practical guide for the evaluation team, not a methods textbook. Add a section on data triangulation: which questions will be answered by combining multiple data sources, and how conflicting findings will be handled.
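Triangulation is simply "more than one source per question", which you can verify directly from your data collection matrix. A minimal sketch with hypothetical question/source pairs:

```python
from collections import defaultdict

# Data sources planned per evaluation question (illustrative pairs,
# typically read straight off the data collection matrix).
sources = defaultdict(set)
for question, source in [
    ("EQ1", "household survey"),
    ("EQ1", "clinic records"),
    ("EQ2", "KIIs"),
    ("EQ3", "FGDs"),
    ("EQ3", "monitoring data"),
]:
    sources[question].add(source)

triangulated = {q for q, s in sources.items() if len(s) >= 2}
single = {q for q, s in sources.items() if len(s) == 1}
print(f"Triangulated: {sorted(triangulated)}; single-source: {sorted(single)}")
```

Single-source questions are not necessarily wrong, but each one deserves a sentence in the plan explaining why one source is enough.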

5. Write Limitations and Ethics

Close the plan with honest limitations and ethical considerations. Donors respect candor about what an evaluation can and cannot deliver.

Step 5: Write Limitations and Ethics

Write two sections for this evaluation plan:

LIMITATIONS (300-400 words):
- Methodological limitations (what this design cannot prove)
- Practical limitations (access, timing, budget constraints)
- Data quality risks
- For each limitation, state the mitigation strategy
- Be honest about what the evaluation can and cannot deliver

ETHICAL CONSIDERATIONS (200-300 words):
- Informed consent approach
- Data protection and anonymity
- Vulnerable population considerations if applicable
- Do-no-harm risks specific to the program context
- Whether ethics review or IRB approval is needed

If any limitation could change the interpretation of your findings, say so. Burying limitations is the fastest way to lose credibility in an evaluation report.
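The word-count targets in the prompt above are easy to verify before the sections go to the donor. A trivial sketch — the drafts here are placeholders standing in for real section text:

```python
def in_range(text: str, low: int, high: int) -> bool:
    """Return True if the word count of `text` falls within [low, high]."""
    return low <= len(text.split()) <= high

# Placeholder drafts: 350 and 250 repeated words, standing in for
# real LIMITATIONS and ETHICAL CONSIDERATIONS sections.
limitations_draft = "word " * 350
ethics_draft = "word " * 250

print(in_range(limitations_draft, 300, 400))  # limitations target: 300-400
print(in_range(ethics_draft, 200, 300))       # ethics target: 200-300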

Score Your Evaluation Plan

Use MEStudio's scoring rubric to check the quality of what you just built. Send this prompt in the same conversation to get a scored assessment with specific revision suggestions.

Open the scoring rubric

If any dimension scores below 4, go back to the relevant step and ask the AI to strengthen that section. The rubric tells you exactly what to fix.
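Triage on the rubric output is a one-liner if you record the scores. A minimal sketch with made-up dimension names and scores on an assumed 1-5 scale:

```python
# Rubric scores from the AI's assessment (illustrative values, 1-5 scale;
# dimension names are placeholders, not MEStudio's actual rubric).
scores = {
    "Evaluation questions": 5,
    "Methodology": 3,
    "Data collection matrix": 4,
    "Analysis plan": 4,
    "Limitations and ethics": 2,
}

needs_revision = [dim for dim, s in scores.items() if s < 4]
print("Revisit:", needs_revision)
```

Each flagged dimension maps back to one of the five steps above; rerun that step's prompt with the rubric's specific criticism pasted in.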

Not sure which AI tool to use?

Try the AI Tool Selector to find the best tool for your specific M&E task, or browse 130+ M&E-specific prompts.

Related Resources