Build a MEL Plan with AI
A 5-step prompt workflow that produces a complete Monitoring, Evaluation, and Learning plan with results framework, indicator matrix, data collection plan, and learning agenda.
What you'll build
A complete MEL plan with results framework, indicator performance tracking table, data collection schedule, reporting calendar, and learning agenda.
Before you start
- Your program's Theory of Change or logframe
- Program timeline, budget, and staffing for M&E
- Donor reporting requirements and frequency
- Any existing indicators from a proposal or results framework
- If you completed the Theory of Change or Results Framework workflow, paste those outputs into Step 1 instead of starting from scratch
Start by structuring the program logic into a clear results framework. This is the backbone of the MEL plan. If you already built a Theory of Change or Results Framework using another workflow guide, paste those outputs here instead of starting from scratch.
You are a senior M&E specialist. I need to build a MEL plan, starting with the results framework. Based on the program information I provide, create a results framework table with columns:
- Level (Goal, Outcome, Output)
- Result statement
- Indicator (1-2 per result)
- Baseline value (if known, otherwise "TBD - collect at baseline")
- Target value
- Data source
- Frequency

Include 1 goal, 2-4 outcomes, and 3-6 outputs. Each output should logically contribute to at least one outcome.

The results framework should be:
- Realistic (targets achievable within the program timeframe)
- Measurable (every result has at least one quantifiable indicator)
- Aligned with the program's theory of change

Here is my program information:
[Describe your program, its goals, theory of change, target population, and timeline]
If you have more than 20 indicators total, you are over-measuring. Most programs perform better with 10-15 well-measured indicators than 25 poorly measured ones.
Expand each indicator into a full operational definition. This is what your data collection team will actually use in the field.
For each indicator in the results framework, create a detailed indicator reference sheet. Present as a table with columns:
- Indicator name
- Precise definition (unambiguous, a new team member could understand it)
- Unit of measurement
- Calculation method (formula if applicable)
- Data source
- Collection method (survey, administrative records, observation, etc.)
- Collection frequency
- Responsible person/team (leave as TBD for me to fill)
- Disaggregation dimensions
- Data quality considerations (known risks or limitations for this indicator)

For each indicator, also note whether it is a standard donor indicator or a custom indicator, and whether a validated measurement tool exists for it.
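If it helps to see how a calculation method and disaggregation dimensions become a repeatable computation, here is a minimal sketch in Python. The file name and columns (sex, district, passed_skills_check) are hypothetical placeholders, not part of the workflow above; substitute your own indicator data.

```python
# A minimal sketch, assuming post-training assessment data exported to a CSV
# with hypothetical columns: participant_id, sex, district, passed_skills_check.
import pandas as pd

df = pd.read_csv("post_training_assessment.csv")

# Overall indicator value: % of trained workers passing the skills check
overall = df["passed_skills_check"].mean() * 100
print(f"Overall: {overall:.1f}%")

# Disaggregated values, following the dimensions listed in the reference sheet
by_group = (
    df.groupby(["sex", "district"])["passed_skills_check"]
      .agg(n="count", pct=lambda s: s.mean() * 100)
      .round(1)
)
print(by_group)
```

Whatever form the calculation takes, the point is that it matches the reference sheet exactly, so two people computing the indicator get the same number.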
Map out what data gets collected when, by whom, and how. This turns the indicator matrix into an operational schedule.
Create a data collection plan for this MEL system. Produce:

1. **Data collection schedule**: A 12-month calendar (table format) showing which data collection activities happen in which months. Include baseline, routine monitoring, midline (if applicable), and endline.

2. **Methods summary**: For each data collection method used (survey, KII, FGD, observation, administrative data), specify:
   - Which indicators it covers
   - Sample size or scope
   - Tools needed (questionnaire, interview guide, checklist)
   - Estimated cost and time
   - Who conducts it (staff, consultants, enumerators)

3. **Data flow**: How does data move from collection to entry to cleaning to analysis to use? Describe each step and who is responsible.

4. **Data quality assurance**: What checks are in place to ensure data quality? Include spot checks, double entry, supervisor review, and any automated validation.

5. **Data management**: Where is data stored? Who has access? What is the backup protocol? What data protection measures are in place?
Budget 5-10% of total program budget for M&E. If your data collection plan costs more than that, you need to simplify the indicator set.
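The automated validation mentioned under data quality assurance can be as simple as a script run on each monitoring round before the data is signed off. Below is a minimal sketch, assuming the round's data is exported to a CSV; the file name, column names, valid age range, and collection window are hypothetical and should be replaced with your own rules.

```python
# A minimal sketch of automated validation checks on one monitoring round.
# Hypothetical columns: record_id, interview_date, respondent_age, enumerator.
import pandas as pd

df = pd.read_csv("monitoring_round_3.csv", parse_dates=["interview_date"])
issues = []

# Duplicate records that slipped through double entry
dupes = df[df.duplicated(subset="record_id", keep=False)]
if not dupes.empty:
    issues.append(f"{len(dupes)} duplicate record IDs")

# Out-of-range values (example: respondent age outside the target group)
out_of_range = df[~df["respondent_age"].between(18, 65)]
if not out_of_range.empty:
    issues.append(f"{len(out_of_range)} respondents outside the 18-65 range")

# Records dated outside the collection window for this round
in_window = df["interview_date"].between("2025-03-01", "2025-03-31")
if not in_window.all():
    issues.append(f"{(~in_window).sum()} interviews dated outside the collection window")

print("\n".join(issues) if issues else "No issues flagged")
```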
Define what reports get produced, when, for whom, and what data feeds into each one. This prevents the end-of-quarter scramble.
Create a reporting schedule for this MEL plan. Produce:

1. **Reporting calendar**: A table showing each report type, frequency, audience, deadline, and which indicators/data feed into it. Include:
   - Donor progress reports (per donor requirements)
   - Internal monitoring reports
   - Annual review or learning reports
   - Evaluation reports (midterm, final)
   - Ad hoc reports (board, government)

2. **Report templates**: For each recurring report type, outline the standard sections and what data each section requires. Keep templates practical (what a program officer can actually fill in).

3. **Data-to-report timeline**: For each report, work backward from the deadline. When must data collection be complete? When must data be cleaned? When must analysis be done? Present as a countdown timeline.

4. **Roles**: Who drafts, who reviews, who approves each report type.
Work backward from the donor deadline. If the report is due October 15, data analysis needs to be done by October 1, data cleaning by September 20, data collection by September 10. Build these dates into the calendar.
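If you keep the calendar in a spreadsheet or script, the same backward planning can be automated. Here is a minimal sketch using Python's standard datetime module; the deadline and lead times are illustrative and should be adjusted to your own data flow.

```python
# A minimal sketch of working backward from a report deadline.
from datetime import date, timedelta

deadline = date(2025, 10, 15)          # donor report due date (illustrative)
lead_times = {                          # days needed before the deadline
    "Data collection complete": 35,
    "Data cleaning complete": 25,
    "Data analysis complete": 14,
}

for milestone, days_before in lead_times.items():
    print(f"{milestone}: {deadline - timedelta(days=days_before)}")
```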
The learning agenda is what separates a MEL plan from a plain M&E plan. It defines what the program wants to learn, not just what it wants to measure.
Design a learning agenda for this program. Produce:

1. **Learning questions** (3-5): Questions the program wants to answer through implementation, beyond what the indicators measure. These should be questions about HOW and WHY, not just WHAT. Example: "What factors enable or prevent community health workers from sustaining behavior change practices after training ends?"

2. **Learning methods**: For each question, how will the program answer it? Options include after-action reviews, most significant change stories, case studies, process monitoring, outcome harvesting, or dedicated learning studies.

3. **Learning rhythm**: When does learning happen? Define regular touchpoints (monthly team reflections, quarterly learning reviews, annual pause-and-reflect workshops).

4. **Adaptation protocol**: How will learning lead to program changes? Define the decision-making process: who can authorize changes, what evidence is needed, and how changes are documented.

5. **Knowledge products**: What will the program produce to share its learning? (briefs, case studies, blog posts, conference presentations)
A learning question that cannot change the program is not useful. If the answer would not lead to a different decision, it is not a learning question.
Use MEStudio's scoring rubric to check the quality of what you just built. Send this prompt in the same conversation to get a scored assessment with specific revision suggestions.
Open the scoring rubric

If any dimension scores below 4, go back to the relevant step and ask the AI to strengthen that section. The rubric tells you exactly what to fix.
Not sure which AI tool to use?
Try the AI Tool Selector to find the best tool for your specific M&E task, or browse 130+ M&E-specific prompts.