Who This Page Is For
You are writing a proposal. The deadline is approaching. The program team has handed you a draft technical approach and you need to write the M&E, MEL, or MEAL section. This page walks you through what to include, how to structure it, and the mistakes that get proposals scored down or rejected.
This is not a guide to M&E theory. It is a guide to writing a specific deliverable under deadline pressure.
The 7 Components
Every credible M&E proposal section contains these components. The order and emphasis depend on the donor template, but the content is universal.
1. Theory of Change or Results Framework
Start with the program logic. The reviewer needs to see that you understand how your activities lead to change, not just what you plan to do.
If the donor requires a theory of change: write a 1-2 paragraph narrative summarizing the causal pathway from activities to outcomes to impact, including key assumptions. Attach a one-page ToC diagram as an annex. Do not try to fit a complex diagram into the body text.
If the donor requires a logframe or results framework: build it first, then write the narrative around it. The narrative should explain the logic the framework captures, not just describe the framework. "Our results framework has three outcomes" is description. "We expect that strengthening community health worker skills (Output 1) will increase referral rates (Outcome 1) because trained CHWs are more likely to identify danger signs during home visits" is explanation.
Use the Logic Model Builder to structure this before writing. Getting the framework right saves time on every section that follows.
For more on choosing between these frameworks, see Logframe vs Theory of Change.
2. Indicator Table
This is the section reviewers turn to first. A strong indicator table includes:
| Column | What Goes Here |
|---|---|
| Result level | Goal, Outcome, Output (matching your framework) |
| Result statement | What change you expect, stated clearly |
| Indicator | How you will measure that change |
| Baseline | Current value (or "TBD, to be collected during inception") |
| Target | What you aim to achieve by when |
| Data source | Where the data comes from |
| Frequency | How often you collect it |
| Responsible | Who collects and reports it |
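If it helps to see those columns populated, here is one hypothetical row expressed as structured data, echoing the CHW example above. Every name and value is invented for illustration; your program's entries will differ:

```python
# One hypothetical indicator row, mirroring the table columns above.
# All values are invented for illustration.
indicator = {
    "result_level": "Outcome 1",
    "result_statement": "Increased referral of children with danger signs to health facilities",
    "indicator": "% of sick children seen by a CHW who are referred to a facility",
    "baseline": "TBD: to be established during inception-phase baseline survey",
    "target": "60% by end of Year 2",
    "data_source": "CHW home-visit registers, verified against facility records",
    "frequency": "Quarterly",
    "responsible": "M&E Officer (collection), field supervisors (verification)",
}
```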
Rules of thumb:
- 1-3 indicators per outcome, 1-2 per output. If you have 30 indicators for a $1M program, you have too many. See Indicator vs Target vs Milestone.
- Every indicator must be SMART. Run yours through the SMART Indicator Checker before submitting.
- If you do not have baseline data yet, say so. "Baseline TBD: to be established during inception phase baseline survey" is honest and expected. Inventing a baseline number is worse than leaving it blank.
- Include at least one outcome-level indicator per outcome. A table with only output indicators is a red flag: it signals the team is tracking activity, not change.
3. Data Collection Plan
Explain how you will collect the data your indicator table promises. Reviewers check whether your data collection plan is realistic given your budget and timeline.
For each major data collection activity, specify:
- Method: Household surveys, key informant interviews, focus groups, facility records, observation, administrative data.
- Sample: How many respondents, selected how (probability sampling for surveys, purposive for qualitative). Describe the sampling approach in more detail if the proposal allows space.
- Frequency: Baseline, midline (if applicable), endline. Plus routine monitoring frequency (monthly, quarterly).
- Tools: Name the instruments (household questionnaire, KII guide, FGD protocol). Mention the platform if relevant (KoboToolbox, ODK).
- Quality assurance: How you will ensure data quality (enumerator training, supervision, back-checks, data cleaning protocols).
A common mistake: promising data collection activities the budget cannot support. If your indicator table has 20 indicators requiring a 400-household survey at baseline, midline, and endline, your M&E budget needs to cover three rounds of field data collection. Check the math before you promise it.
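The arithmetic is worth doing explicitly before the section is finalized. A minimal sketch with invented unit costs (your sample sizes, enumerator rates, and overheads will differ):

```python
# Back-of-envelope check: can the budget cover the promised survey rounds?
# All figures are hypothetical; substitute your own costs.
households = 400
cost_per_interview = 25            # enumerator time, transport, materials (USD)
rounds = 3                         # baseline, midline, endline

fieldwork = households * cost_per_interview * rounds           # 30,000
training_and_supervision = 0.30 * fieldwork                    # rough 30% add-on
total_data_collection = fieldwork + training_and_supervision   # 39,000

print(f"Data collection alone: ${total_data_collection:,.0f}")
# If this consumes most of the M&E line, staff, evaluation,
# and systems costs will not fit. Cut indicators or raise the budget.
```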
4. Evaluation Plan
State what evaluations you will conduct, when, and who will conduct them. At minimum, address:
- Type: Mid-term review, final evaluation, impact evaluation. See How to Choose Evaluation Methodology.
- Timing: When each evaluation will happen relative to the program timeline.
- Internal vs external: Whether evaluations will be conducted by program staff, external consultants, or a combination. Final evaluations for donor-funded programs are almost always external.
- Methodology overview: You do not need to design the full evaluation in the proposal, but you should indicate the general approach (quasi-experimental, theory-based, mixed methods) and justify why it fits.
- Budget: Evaluation costs are often the single largest M&E line item. Include a realistic estimate. See M&E Budget Allocation for typical ranges.
Do not write "an evaluation will be conducted at the end of the program." That tells the reviewer nothing. Write "an external mixed-methods final evaluation will be conducted in the penultimate quarter, using a quasi-experimental design with a matched comparison group to assess outcome-level change against baseline values."
5. M&E Staffing
Describe responsibilities, not just position titles. The reviewer wants to know:
- Who is responsible for day-to-day M&E? An M&E Officer, M&E Manager, or MEL Advisor. State their level of effort (full-time, 50%, etc.).
- Who manages data collection? The M&E Officer typically coordinates, but field teams, enumerators, and data clerks need to be accounted for.
- Who conducts analysis and reporting? If the M&E Officer writes quarterly reports but has no analysis skills, the data will not be used. State whether analysis support is built in.
- Who reviews M&E quality? Is there a senior M&E person at headquarters or a technical advisor who provides quality assurance?
For a $500K, 2-year project, one M&E Officer at 50-100% LOE is typical. For a $5M, 4-year multi-site program, expect an M&E Manager plus 1-2 M&E Officers plus data clerks. Understaffing M&E is the fastest way to ensure your indicator table becomes fiction.
6. M&E Budget
Include a separate M&E budget or a clearly identifiable M&E line within the overall budget. Standard allocation is 5-10% of total program cost, but the right number depends on what you promised in the sections above.
Common line items:
| Category | Typical % of M&E Budget |
|---|---|
| M&E staff (salaries/LOE) | 35-50% |
| Data collection (surveys, enumerators, tools) | 20-30% |
| Evaluations (external consultants) | 15-25% |
| Systems and technology | 5-10% |
| Learning and dissemination | 5-10% |
The test: Add up what your data collection plan and evaluation plan will cost. Add M&E staff time. Add systems and learning. If the total exceeds your M&E budget line, either cut activities or increase the budget. Do not submit a proposal where the M&E section promises more than the budget can deliver. Reviewers catch this.
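One way to run that test, using the category ranges above. All figures are hypothetical, for a notional $2M program with a 7% M&E allocation:

```python
# Reconcile what the narrative promises against the M&E budget line.
# All figures are hypothetical.
program_budget = 2_000_000
me_budget = 0.07 * program_budget        # 140,000

promised = {
    "M&E staff (LOE over life of project)": 60_000,
    "Data collection (three survey rounds)": 39_000,
    "External final evaluation": 35_000,
    "Systems and technology": 8_000,
    "Learning and dissemination": 7_000,
}

total = sum(promised.values())           # 149,000
print(f"Promised: ${total:,}  Budgeted: ${me_budget:,.0f}")
if total > me_budget:
    # Here the section promises ~$9K more than the line can deliver:
    # cut activities or increase the budget before submitting.
    print(f"Over by ${total - me_budget:,.0f}")
```

A spreadsheet does the same job; the point is to reconcile the numbers before a reviewer does.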
For worked examples with specific numbers, see M&E Budget Allocation.
7. Learning and Use Plan
This is the section most proposals skip, and the one that separates strong M&E sections from formulaic ones. Explain how evidence will actually be used:
- Internal learning: How will monitoring data feed into program decisions? Quarterly review meetings? Adaptive management protocols? Decision logs?
- Stakeholder feedback: How will beneficiaries and communities see results and provide input? Feedback mechanisms, community scorecards, public data sharing?
- Reporting to donor: What reports, at what frequency, using what format?
- Knowledge sharing: How will learning be shared with the broader sector? Conference presentations, learning briefs, publications?
Two sentences on learning are better than none. But if you can describe a specific mechanism ("quarterly data review meetings where field teams present indicator trends, discuss deviations from targets, and agree on workplan adjustments"), that signals a team that actually uses evidence rather than just collecting it.
Checklist: Before You Submit
Use this as a final quality check before the proposal goes out.
- Theory of change or results framework is present and clearly linked to the technical approach
- Indicator table has SMART indicators with baselines (or "TBD"), targets, data sources, and frequency
- At least one outcome-level indicator per outcome (not just outputs)
- Data collection plan specifies methods, sample sizes, frequency, and tools
- Evaluation plan states type, timing, internal/external, and general methodology
- M&E staffing is named with LOE percentages
- M&E budget is realistic and covers what the narrative promises
- Learning/use plan explains how evidence will inform decisions
- M&E section is consistent with the technical approach (same outcomes, same logic)
- No copy-paste artifacts from a previous proposal (check program names, dates, indicators)
Common Mistakes
Mistake 1: Bolting M&E on after the program is designed. If the M&E section reads as though it was written by a different team from the one that wrote the technical approach, reviewers notice. The M&E framework should flow directly from the program logic. If the technical section describes three pathways to change, the M&E section should measure progress along those three pathways, not along five different ones.
Mistake 2: Too many indicators. A 15-indicator logframe for a $1M program is manageable. A 40-indicator logframe is a promise you will break. Every indicator costs money to collect, analyze, and report. More is not better. Ruthlessly cut any indicator whose decision value you cannot explain.
Mistake 3: No outcome indicators. If every indicator in your table is an output ("number of trainings conducted," "number of beneficiaries reached"), the reviewer sees a program that tracks activity but not change. You must include indicators that measure whether the outputs led to the outcomes you claimed they would.
Mistake 4: M&E budget that cannot deliver. Three percent of a $500K budget is $15K, which does not cover even one survey round, let alone the baseline survey, midline, endline, and external evaluation the narrative promises. Either scale back what you promise or argue for a larger M&E allocation.
Mistake 5: Copy-pasting from a previous proposal. Reviewers have seen hundreds of proposals. They recognize recycled M&E sections. The indicators reference a different country, the theory of change does not match the technical approach, the evaluation plan mentions a "Phase 2" that does not exist. Customize every section for the specific program.
Mistake 6: Vague evaluation plan. "An evaluation will be conducted" is not a plan. Specify the type, timing, methodology, who conducts it, and how findings will be used. If you do not know the exact design yet, describe the approach: "A mixed-methods final evaluation will assess outcome achievement against baseline values, using household surveys and key informant interviews."
Proposal M&E by Program Size
The depth and complexity of your M&E section should match the program.
| Program Size | M&E Section Length | Key Differences |
|---|---|---|
| Under $500K, 1-2 years | 3-4 pages | Simple logframe (8-12 indicators), baseline + endline only, internal monitoring, external evaluation optional. M&E Officer at 50% LOE. |
| $500K-$3M, 2-4 years | 4-6 pages | Full logframe (12-18 indicators), baseline + endline + possible midline, quarterly monitoring, external final evaluation. M&E Officer full-time. |
| $3M-$10M, 3-5 years | 6-8 pages + annexes | Theory of change + logframe + indicator reference sheets (annex), baseline + midline + endline, quarterly outcome monitoring, external mid-term review + final evaluation. M&E Manager + M&E Officers. |
| Over $10M, multi-country | 8-12 pages + annexes | Full MEL framework with country-level logframes, harmonized indicators, impact evaluation (quasi-experimental or experimental), dedicated M&E team per country, learning agenda, annual outcome surveys. |
Related Resources
For building the frameworks referenced above, use the Logic Model Builder to structure your theory of change or results framework, and run your indicators through the SMART Indicator Checker before finalizing the indicator table. The Evaluation Readiness assessment can help you determine what evaluation approach is realistic for your program's size and timeline.