The Eight Core Sections
A complete MEL plan covers eight sections. Order matters: each section depends on the ones before it.
| # | Section | What it contains | Typical length |
|---|---|---|---|
| 1 | Program Description and Logic Model Reference | Brief program description, link to Theory of Change and logframe, MEL plan scope and period | 1-2 pages |
| 2 | MEL Framework and Governance | Who owns the plan, how revisions work, how data feeds decisions | 1-2 pages |
| 3 | Indicator Selection and Reference Sheets | Full indicator list with Performance Indicator Reference Sheets (PIRS) for each | 6-12 pages + annex |
| 4 | Data Collection and Analysis Plan | Methods, schedule, responsibilities, instruments | 3-5 pages + annex |
| 5 | Data Quality Approach | How the five quality dimensions are protected | 2-3 pages |
| 6 | Evaluation Plan | Scheduled evaluations, key questions, methodology, use | 2-4 pages |
| 7 | Learning Agenda | Prioritized learning questions, evidence plan per question | 2-3 pages |
| 8 | Staffing, Budget, and Revision Schedule | MEL team structure, annual MEL budget, revision calendar | 1-2 pages |
Plans under 15 pages usually omit detail that will be needed later; plans over 50 pages usually duplicate the same content across narrative and tables. Aim for a 15-30 page main body, with detailed instruments and indicator reference sheets in annexes.
For the conceptual definition, see MEL plans. For USAID programs specifically, see performance management plan for the ADS 201 structure.
The Order to Build Them
Build the sections in this order. Jumping ahead causes the most common MEL plan failures.
- Program Description and Logic Model Reference first. Anchor the plan in the program design. If the theory of change or logframe has problems (level confusion, missing assumptions, unrealistic pathways), surface them before writing indicators.
- MEL Framework and Governance second. Decide who owns the plan and how it gets revised before any indicator is chosen. This is the single most skipped section and the one that determines whether the plan stays alive.
- Indicator Selection and Reference Sheets third. This is the biggest section and the one that demands the most discipline. See SMART indicators deep-dive and custom vs standard indicators for the decision framework.
- Data Collection and Analysis Plan fourth. Once indicators exist, name the methods, schedule, and responsibilities.
- Data Quality Approach fifth. The five quality dimensions (validity, reliability, timeliness, precision, integrity) applied to the specific indicators and methods. See the 5 data quality dimensions.
- Evaluation Plan sixth. Scheduled evaluations with clear questions, methodology, and intended use. See how to write an evaluation TOR.
- Learning Agenda seventh. Prioritized learning questions that justify evidence collection beyond reporting requirements. See learning agenda.
- Staffing, Budget, and Revision Schedule last. Once you know the indicator and evaluation workload, you can size the team and budget the work.
Writing indicators before the logic model is finalized is the single most common root cause of MEL plan rework. The temptation is strong because indicators feel like "real M&E" work. Resist it; fix the logic model first.
Section 3: Indicator Selection and Reference Sheets
The indicator section is the largest section and the one donor reviewers read first. Structure it as a summary table plus detailed reference sheets.
Summary table components:
| Indicator | Level | Type (standard/custom) | Baseline | Year 1 target | Year 2 target | Endline target | Data source | Frequency |
|---|---|---|---|---|---|---|---|---|
One row per indicator. Typically 15-30 indicators for a mid-size program. Beyond 30, you usually have indicator bloat (see the mistake of too many indicators).
Performance Indicator Reference Sheet (PIRS) components per indicator:
- Indicator title and definition (the full definition, not just the name)
- Unit of measure and calculation formula
- Disaggregation (sex, age, geography, other program-relevant)
- Data source and collection method (specific: "annual household survey, KAP module, September")
- Responsible party (by role, not by name)
- Collection frequency and reporting deadline
- Baseline value and year
- Targets for each year of the program
- Data quality approach for this indicator
- Notes on definitional choices, caveats, or known limitations
PIRS live in an annex, one per indicator, at 1-2 pages each. This is the document field staff actually reference. Write them for clarity, not for donor-review elegance.
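For teams that also keep PIRS in machine-readable form alongside the annex (a convenience, not a donor requirement), here is a minimal sketch of one sheet as a Python dataclass. The field names and the sample indicator are illustrative assumptions, not a standard template:

```python
from dataclasses import dataclass

@dataclass
class PIRS:
    """One Performance Indicator Reference Sheet (the 10 components above;
    component 1, title and definition, is split into two fields)."""
    title: str
    definition: str               # the full definition, not just the name
    unit: str                     # unit of measure
    formula: str                  # calculation formula
    disaggregation: list[str]     # sex, age, geography, program-relevant
    source_and_method: str        # specific source, method, and timing
    responsible_role: str         # role, not a named person
    frequency: str                # collection frequency + reporting deadline
    baseline: tuple[float, int]   # (value, year)
    targets: dict[int, float]     # program year -> target value
    quality_approach: str         # indicator-level data quality controls
    notes: str = ""               # definitional choices, caveats, limitations

# Hypothetical entry, for illustration only:
example = PIRS(
    title="% of households using an improved water source",
    definition="Share of surveyed households whose main drinking-water "
               "source meets the JMP 'improved' definition.",
    unit="percent",
    formula="improved_households / surveyed_households * 100",
    disaggregation=["sex of household head", "district"],
    source_and_method="annual household survey, water module, September",
    responsible_role="M&E Officer",
    frequency="annual, reported by 31 October",
    baseline=(41.0, 2024),
    targets={1: 50.0, 2: 60.0, 3: 70.0},
    quality_approach="back-check 10% of interviews; GPS-verify water points",
)
```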
For indicator quality, see SMART indicators deep-dive. For the custom-vs-standard mix, see custom vs standard indicators. For reference sheets specifically, see indicator reference sheet.
Section 4: Data Collection and Analysis Plan
For each indicator, this section answers: what instrument, what schedule, what responsibility, what analysis.
Standard content:
- Data collection calendar (visual timeline showing all collection rounds across program life)
- Instrument inventory (baseline survey, endline survey, monitoring forms, qualitative tools, administrative data extracts)
- Field operation approach (digital/paper choice, enumerator hiring, supervision structure)
- Analysis plan outline (who analyzes what, when, producing what outputs)
- Dissemination plan (internal reports, donor reports, beneficiary feedback, public outputs)
Depth check: A reader unfamiliar with the program should be able to name, from this section alone, when the next data collection activity is, who is running it, and what instrument they are using. If that answer is ambiguous, the section is too thin.
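To make that depth check concrete: if the collection calendar is kept as structured data rather than prose, the "what's next, who runs it, which instrument" question becomes a one-line lookup. A minimal sketch with hypothetical entries:

```python
from datetime import date

# Hypothetical calendar entries, for illustration only
calendar = [
    {"activity": "quarterly monitoring visit", "date": date(2025, 9, 15),
     "lead": "M&E Officer", "instrument": "site monitoring form v2"},
    {"activity": "annual household survey", "date": date(2025, 11, 3),
     "lead": "survey firm, supervised by MEL Manager",
     "instrument": "household questionnaire, KAP module"},
]

# The depth check as a query: next activity, owner, and instrument
upcoming = min((e for e in calendar if e["date"] >= date.today()),
               key=lambda e: e["date"], default=None)
if upcoming:
    print(f'Next: {upcoming["activity"]} on {upcoming["date"]}, '
          f'led by {upcoming["lead"]}, using {upcoming["instrument"]}')
```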
For data collection method selection, see paper vs digital data collection. For sampling, see how to choose sample size.
Section 5: Data Quality Approach
Data quality is not a single practice; it is a portfolio of controls applied across the five quality dimensions (validity, reliability, timeliness, precision, integrity). This section names the specific controls this program uses, not generic data quality language.
Required components:
- Planned Data Quality Assessment (DQA) schedule (typically annual or semi-annual)
- Inter-rater reliability approach for subjective or judgment-based indicators
- Enumerator training and retraining schedule
- Data audit trail requirements (who can edit submitted values, how changes are logged; see the sketch after this list)
- Integrity controls at pressure points (performance-linked reporting, target-driven incentives)
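The audit trail requirement is the most mechanical of these controls. Here is a minimal sketch of an append-only change log in Python; the file format, field names, and helper function are assumptions for illustration, not a donor standard:

```python
import json
from datetime import datetime, timezone

def log_change(logfile, record_id, field, old, new, editor):
    """Append one edit to an append-only change log (one JSON object
    per line). Illustrative sketch; field names are assumptions."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "record": record_id, "field": field,
        "old": old, "new": new, "editor": editor,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, sort_keys=True) + "\n")

# Example: a supervisor corrects a submitted value; the original survives
log_change("changes.jsonl", record_id="HH-0412", field="water_source",
           old="unimproved", new="improved", editor="data_officer_01")
```

The design choice that matters is append-only: edits never overwrite the submitted value, they add a dated, attributed entry on top of it.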
See how to conduct a DQA for the formal DQA process, and the 5 data quality dimensions for the framework these controls protect.
A common failure: this section copies generic data quality language from a template, with no specifics about this program's indicators, methods, or pressure points. Donor reviewers spot this immediately.
Section 6: Evaluation Plan
Every multi-year program needs at least one mid-term review and one end-of-project evaluation. Some programs add baseline, process, or impact evaluations.
For each planned evaluation, specify:
- Purpose and type (baseline / mid-term / end-of-project / impact / process)
- Timing (month/year, tied to program milestones)
- Key evaluation questions (3-5, prioritized; see how to write an evaluation TOR)
- Methodology (design type, data sources, analytical approach)
- Evaluator type (internal, external, mixed) and independence requirements
- Budget (all-in cost including fieldwork, analysis, reporting)
- Intended use (what decisions will the evaluation inform)
Evaluation plans with no intended-use statement produce reports that sit unread. Every evaluation should tie to a decision the program will make differently based on findings.
Section 7: Learning Agenda
The Learning Agenda is the newest section in the standard MEL plan, formalized by USAID's CLA framework and now expected by most bilateral donors. It is not a list of research questions; it is a prioritized set of 3-7 questions the program will deliberately answer through its M&E system.
Each learning question needs:
- The question itself (specific, answerable, decision-relevant)
- Why it matters to the program (what decision does it inform)
- What evidence will answer it (data from which indicators or methods)
- When the answer is needed (decision deadline)
- Who will act on the answer (role with authority)
The Learning Agenda differentiates a program that treats M&E as a compliance function from one that treats it as a decision-support function. Donor reviewers notice this distinction.
See learning agenda for the conceptual framework and examples.
Section 8: Staffing, Budget, and Revision Schedule
The last section sizes the MEL operation to the plan it has just specified.
Staffing: Team structure by role (M&E Manager, data officer, enumerator team, analysis support). Level of effort per role. Reporting lines. For larger programs, MEL advisory or steering committee structure.
Budget: Annual MEL budget as a % of program budget (typically 5-10% for mid-size programs, 3-7% for larger ones). Itemized major categories: staff, data collection (cost per round × number of rounds), evaluations, DQA, technology, dissemination. Donor compliance reporting costs.
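A quick sizing sketch against the 5-10% heuristic. The figures echo the health sector example below ($4.2M at 7%); the category shares are illustrative assumptions, not a donor standard:

```python
# MEL budget sizing: share of program budget, then itemized categories
program_budget = 4_200_000      # total program budget, USD
mel_share = 0.07                # 7%, within the 5-10% mid-size range

mel_budget = program_budget * mel_share        # 294,000 USD
category_shares = {                            # assumed split, sums to 1.0
    "staff": 0.40,
    "data collection (cost per round x rounds)": 0.30,
    "evaluations": 0.15,
    "DQA, technology, dissemination": 0.15,
}
for category, share in category_shares.items():
    print(f"{category}: ${mel_budget * share:,.0f}")
print(f"total MEL: ${mel_budget:,.0f} ({mel_share:.0%} of program budget)")
```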
Revision schedule: Quarterly review cadence, annual major revision, conditions that trigger mid-cycle revision. Version control approach (dated versions, change log, signoff authority).
MEL plans that undersize their budget or staffing cannot execute the plan they describe. This is a common proposal failure: ambitious plans with no resources to match.
Sector Examples
Health: 3-year HIV prevention program, East Africa
Program size: $4.2M, 2 districts, 12 service delivery sites. MEL plan structure: 22-page main body; 8 indicators (4 standard PEPFAR MER, 4 custom) with 2-page PIRS in annex; quarterly DQA schedule plus one annual DQA across all sites; mid-term review at month 18; final evaluation at month 34; 4 learning questions (service uptake by age, client-centered care quality, provider training effectiveness, community-level stigma). MEL staffing: 1 MEL Manager + 1 data officer, with 6 field enumerators for periodic data collection rounds; MEL budget 7% of program total.
Education: 5-year girls' education program, South Asia
Program size: $8.5M, 60 schools. MEL plan structure: 28-page main body; 18 indicators (12 INEE/SDG-aligned standard, 6 custom learning-outcome); 18 PIRS in annex; annual learning assessment plus quarterly monitoring visits; internal baseline, external mid-term, and external endline evaluations; 5 learning questions (girl retention through puberty, parental engagement effectiveness, teacher training transfer, stipend payment timing effects, community attitudes on secondary education). MEL budget 6.5% of total; 3-person MEL team plus a ring-fenced external evaluator budget.
WASH: 4-year rural water + sanitation program, West Africa
Program size: $6.8M, 80 villages. MEL plan uses JMP service ladder indicators for SDG 6 alignment, plus 4 custom indicators on water committee capacity and household hygiene practice. Data collection: annual household survey (n=1,600 across 30 sampled villages, cluster design with DEFF 1.8), quarterly water point functionality monitoring, semi-annual water quality testing at a sampled subset. Learning agenda focuses on what drives committee sustainability beyond program exit. Mid-term + endline evaluations, both external. MEL budget 8% of total, reflecting high data collection intensity.
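An arithmetic aside on the WASH survey design above; the design effect formula is standard, and the numbers come straight from the example:

```python
# Effective sample size under clustering: n_eff = n / DEFF
n, deff, villages = 1600, 1.8, 30
n_eff = n / deff            # ~889: the clustered 1,600 carries the
                            # information of ~889 simple-random households
per_village = n / villages  # ~53 households interviewed per sampled village
print(f"effective n: {n_eff:.0f}; households per village: {per_village:.0f}")
```

This is why Section 4 should state the design effect explicitly: a reviewer reading "n=1,600" without the DEFF would overestimate the survey's precision.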
Food security: 3-year livelihoods program, Sahel pastoralist region
Program size: $3.1M, 20 communities. MEL plan addresses seasonal migration (transhumance vs settled) as a disaggregation variable across most indicators. Standard HFIAS + FCS for food security outcomes; custom indicators on household diet diversity and distress-sale tracking (leading indicators of recovery from food insecurity). Data collection quarterly, timed to migration phases. Mid-term review at month 18, endline at month 34, both internal. Learning agenda focuses on which livelihoods interventions produce durable food security gains through drought cycles. MEL budget 9% of total, reflecting the complexity of the pastoralist context.
Common Mistakes
Mistake 1: Writing indicators before the logic model is finalized. The indicators are the operationalization of the logic model. If the logic model changes, the indicators change. Fix the theory of change and logframe first. Resist the urge to draft indicators early.
Mistake 2: Confusing the MEL plan with the logframe. The logframe is one input to the MEL plan; the plan is the operations document that turns the logframe into a working measurement system. A MEL plan that is just a logframe with extra formatting is not a MEL plan.
Mistake 3: PIRS-free indicator tables. Listing indicators in a table without the detailed reference sheets leaves every substantive question unanswered: what does "improved" mean, who collects this, how is it disaggregated. PIRS are the part of the plan that actually gets used; skipping them makes the plan ornamental.
Mistake 4: No governance section. The plan names indicators but does not name who revises the plan, who approves changes, and who has authority to act on findings. Without governance, the plan becomes a dead document.
Mistake 5: No Learning Agenda, or a Learning Agenda that is a research wish list. Learning questions must tie to decisions. A list of "interesting things we'd like to know" is a curiosity agenda, not a learning agenda. Prioritize.
Mistake 6: Budget misaligned to plan. The plan specifies annual household surveys, quarterly DQAs, and an external mid-term. The budget has funding for half of those. Either shrink the plan or grow the budget; do not ship an over-specified plan with under-funded operations.
Mistake 7: Template-copy without program-specific adjustment. A MEL plan copied from a prior program, with the program name changed and indicators loosely updated, fails every meaningful review. Donor reviewers and external evaluators spot this pattern in the first 5 pages.
Mistake 8: Treating the MEL plan as a one-time document. A plan that is written, submitted, and never revised becomes stale by year 2. Build revision into the annual work plan cycle and enforce it.
MEL Plan Completeness Checklist
Run this before submitting to the donor, starting an evaluation, or handing over to a new MEL manager.
Sections 1-2 (Anchor):
- Theory of change and logframe referenced by version and date
- MEL plan scope, period, and version documented
- Governance named: who owns, who revises, who approves changes
- Revision cadence specified (annual minimum, triggers for mid-cycle revision)
Section 3 (Indicators):
- Summary indicator table covers all program levels (output, outcome, impact/goal)
- PIRS present for every indicator (all 10 components)
- Standard/custom ratio defensible (typically 40-60% each)
- Baselines collected or baseline study planned before targets committed
- Targets realistic and baseline-anchored
Section 4 (Data):
- Data collection calendar across program life
- Method feasibility verified (budget + capacity)
- Sampling designs specified with design effect applied
- Analysis plan names who does what
Sections 5-6 (Quality + Evaluation):
- DQA schedule specified
- Five quality dimensions addressed with specific controls
- Every evaluation has intended-use statement
- Evaluator independence requirements specified
Sections 7-8 (Learning + Operations):
- Learning agenda has 3-7 prioritized questions with decisions named
- MEL staffing sized to plan workload
- MEL budget sized to plan (% of total program)
- Revision schedule in place
For the conceptual foundation, see MEL plans, logframe, and theory of change. For how this connects to proposal writing, see how to write the M&E proposal section. For an AI-assisted step-by-step workflow, see the MEL Plan playbook.