What Goes Wrong
You open the RFP template. It has 25 indicator rows. You fill them all. Or the program team hands you a design document with 40 output-level metrics and asks you to "put them in the logframe." Or you inherit a proposal from a prior submission with 35 indicators and a tight deadline, so you leave them in.
The result is a logframe where every activity has two or three indicators, every outcome has four, and the impact row has three overlapping measures of the same thing. The MEL plan that follows claims to collect data on all of them. The M&E budget cannot actually fund that. The team knows this; they have seen prior projects where half the indicators were never reported. They submit anyway.
Defensible Indicator Count by Program Size
| Program size | Defensible range | Warning zone |
|---|---|---|
| Small ($250K-$1M, 1-2 years) | 6-12 indicators | 15+ |
| Mid-size ($1M-$5M, 2-3 years) | 8-18 indicators | 22+ |
| Large ($5M-$15M, 3-5 years) | 12-22 indicators | 28+ |
| Very large ($15M+, 5+ years) | 15-25 indicators | 30+ |
The warning zone column is not a hard cutoff. It is the count at which a typical reviewer flags the section for feasibility review. Staying inside the defensible range does not make your logframe good by itself; it removes one easy reason to score you down.
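If you want the check as code, the table translates directly into a lookup. The sketch below is a minimal Python illustration: the bands and thresholds come straight from the table, but classifying by budget alone (ignoring duration), the gray-zone handling, and every name in it are assumptions of the sketch, not a standard tool.

```python
# Minimal sketch: flag an indicator count against the table above.
# Bands and thresholds are from the table; everything else is illustrative.
SIZE_BANDS = [
    # (label, budget ceiling in USD, defensible range, warning threshold)
    ("small",      1_000_000,    (6, 12),  15),
    ("mid-size",   5_000_000,    (8, 18),  22),
    ("large",      15_000_000,   (12, 22), 28),
    ("very large", float("inf"), (15, 25), 30),
]

def check_indicator_count(budget_usd: float, n_indicators: int) -> str:
    for label, ceiling, (low, high), warn in SIZE_BANDS:
        if budget_usd <= ceiling:
            if n_indicators >= warn:
                return f"{label}: {n_indicators} is in the warning zone ({warn}+)"
            if low <= n_indicators <= high:
                return f"{label}: {n_indicators} is inside the defensible range {low}-{high}"
            # counts between the range and the warning threshold are a gray zone
            return f"{label}: {n_indicators} is outside the defensible range {low}-{high}"
    return "unclassified"

print(check_indicator_count(600_000, 32))  # small: 32 is in the warning zone (15+)
print(check_indicator_count(600_000, 12))  # small: 12 is inside the defensible range 6-12
```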
Why It Happens
Four drivers produce this mistake. First, template pressure: if the donor gave you 25 rows, it feels risky to leave some blank. Second, stakeholder hedging: each team member contributes "their" indicator, and nobody wants to delete a colleague's contribution. Third, false completeness: teams mistake a long indicator list for a thorough M&E plan, when the two are almost opposites. Fourth, inherited proposals: teams lift logframes from prior submissions without cutting, adding the new program's indicators on top of the old ones.
The underlying issue is that an indicator list feels like coverage. More indicators seem to mean more accountability. In practice, a longer list means less of each indicator gets measured well, more data goes uncollected, and the system loses credibility when partial results get reported as complete. A 15-indicator logframe that every M&E officer reports against cleanly is worth more to a donor than a 30-indicator logframe with quarterly gaps.
How Reviewers See It
Experienced proposal reviewers count indicators in the first 30 seconds. A count in the warning zone for the program's size, 22 or more on a mid-sized program, is a red flag before they have read a single indicator text. They infer three things:
- The team has not prioritized. If everything is a key indicator, none of them are.
- The M&E budget is under-sized. Thirty indicators require more staff, more data collection rounds, and more analysis time than most proposal budgets can absorb. If the MEL plan claims otherwise, the team is being optimistic.
- The data will not be useful. Too many indicators generate too much noise for anyone to read. Donors want a dashboard, not a spreadsheet.
None of these impressions require reading the narrative. The count alone creates them.
The Budget Consequence
Indicator volume drives M&E cost roughly linearly, but M&E budgets rarely scale to match. A reasonable per-indicator cost floor in a typical health or education program (surveys, MoV development, analysis, reporting) is 1-2 percent of the M&E budget. Thirty indicators therefore commit 30-60 percent of the M&E budget to routine indicator data collection alone. That leaves little for evaluations, DQAs, learning events, or staff time.
Reviewers who have run programs see this math immediately. When your M&E budget sits at 5-10 percent of total (typical) and your indicator count is 30+, the ratio does not work. They score down for feasibility.
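The arithmetic is worth making explicit. A minimal sketch, using the 1-2 percent per-indicator floor from above; the $3M program and its 8 percent M&E allocation are hypothetical figures chosen for illustration.

```python
# Reviewer's mental math: what share of the M&E budget does routine
# indicator data collection consume? The 1-2% per-indicator floor is
# from the text; the program figures below are hypothetical.
def indicator_cost_share(n_indicators: int,
                         floor_low: float = 0.01,
                         floor_high: float = 0.02) -> tuple[float, float]:
    return n_indicators * floor_low, n_indicators * floor_high

total_budget = 3_000_000           # hypothetical mid-size program
me_budget = 0.08 * total_budget    # 8% M&E allocation, inside the typical 5-10%

low, high = indicator_cost_share(30)
print(f"30 indicators consume {low:.0%}-{high:.0%} of the M&E budget: "
      f"${low * me_budget:,.0f}-${high * me_budget:,.0f} of ${me_budget:,.0f}")
# -> 30%-60%: $72,000-$144,000 of $240,000, before any evaluation or DQA
```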
Four Decision Tests
Before keeping any indicator, run it through four questions:
- What decision does this inform? Name the specific decision: "This indicator tells us whether to scale the intervention to a new district in Year 3." If you cannot name a decision, cut.
- Does another indicator already measure this? Often two or three indicators circle the same behavior change. Keep the one with the strongest means of verification and drop the rest.
- Can we collect this data feasibly? If the MoV requires monthly household surveys but your budget covers annual baselines, the indicator is a liability. Cut or redesign.
- Does this indicator belong in the logframe at all, or in a work plan? Activity counts ("number of trainings held," "meetings conducted") often belong in work plan monitoring, not in a logframe that donors read.
Most teams cut 30-50 percent of their initial indicator list when they apply these tests honestly. Use the SMART Indicator Checker to surface additional problems in the indicators you keep; it catches specificity and measurability issues the decision tests miss.
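Applied to a long list, the four tests behave like a keep/cut/move filter. Below is an illustrative sketch; the `Indicator` fields and the triage labels are hypothetical stand-ins for whatever your team records, not a standard schema.

```python
# Hypothetical sketch: the four decision tests as a keep/cut/move filter.
from dataclasses import dataclass

@dataclass
class Indicator:
    text: str
    decision_informed: str | None  # test 1: the specific decision it informs
    duplicate_of: str | None       # test 2: an indicator it overlaps with
    mov_feasible: bool             # test 3: MoV collectable within budget
    is_activity_count: bool        # test 4: activity metric, not a result

def triage(ind: Indicator) -> str:
    if ind.decision_informed is None:
        return "cut: informs no decision"
    if ind.duplicate_of:
        return f"cut: duplicates '{ind.duplicate_of}'"
    if not ind.mov_feasible:
        return "cut or redesign: MoV not collectable within budget"
    if ind.is_activity_count:
        return "move to work plan monitoring"
    return "keep in logframe"

print(triage(Indicator("number of trainings held",
                       "whether CHW rollout is on schedule",
                       None, True, True)))
# -> move to work plan monitoring
```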
What to Do With the Rest
Cut indicators do not disappear. They move into three places:
- Work plan monitoring: activity-level counts go to your Gantt chart or work plan tracker, not the logframe. Your M&E officer still tracks them; donors do not see them in the formal report.
- MEL plan annex: interesting secondary indicators can live in an appendix flagged as "supplementary indicators, tracked if resources permit." This signals you considered them without cluttering the main matrix.
- Learning questions: some indicators are really questions in disguise. Move them to a learning agenda section where they inform adaptation rather than quarterly reporting.
The logframe that results should have 8 to 20 indicators total, clustered in a clear hierarchy: 1-3 impact, 3-6 outcome, and the balance as outputs. That is what reviewers expect. That is what your team can actually deliver. For donor-specific formatting, see the M&E proposal section guide.
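That shape is easy to verify before submission. A minimal sketch, assuming only the thresholds in the paragraph above; the function itself is illustrative.

```python
# Check a final matrix against the expected shape: 8-20 total,
# 1-3 impact, 3-6 outcome, the balance as outputs.
def check_shape(n_impact: int, n_outcome: int, n_output: int) -> list[str]:
    issues = []
    total = n_impact + n_outcome + n_output
    if not 8 <= total <= 20:
        issues.append(f"total of {total} is outside 8-20")
    if not 1 <= n_impact <= 3:
        issues.append(f"{n_impact} impact indicators, expected 1-3")
    if not 3 <= n_outcome <= 6:
        issues.append(f"{n_outcome} outcome indicators, expected 3-6")
    return issues

print(check_shape(2, 4, 6) or "shape OK")  # the strong example below
print(check_shape(5, 9, 18))               # the weak example below: three flags
```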
Common Mistakes
An overloaded indicator list rarely comes alone. It usually travels with these related mistakes:
- Bundling activity metrics with result metrics. Training counts, meeting counts, and distribution logs are activities, not results. They inflate the indicator count and dilute the result framework. See output vs outcome vs impact for the distinction reviewers apply.
- Volume as a substitute for SMART design. Teams sometimes add indicators because the existing ones are weak, rather than fixing the weak ones. The result is more weak indicators. Fix the design, do not paper over it.
- Ignoring means-of-verification feasibility. An indicator with an infeasible MoV is not a measurable indicator. Reviewers catch this by scanning the MoV column; a proposal with 30 indicators and 10 different survey methods fails on this check alone.
- Inherited indicators never cut. If you copied the logframe from last year's proposal, assume half the indicators do not apply to this program. Cut deliberately, do not just add.
- Treating the logframe as an inventory. The logframe is what you will be held accountable for, not a catalog of everything you might track. Anything you cannot commit to reporting quarterly belongs outside it.
Weak Example
A $600K, three-year health proposal with 32 indicators:
- 5 impact indicators, 3 of which are national prevalence rates the program cannot measurably move
- 9 outcome indicators, including three that all measure "knowledge of danger signs" in different wordings
- 18 output indicators, of which 11 are activity counts (trainings conducted, meetings held, materials distributed)
The MEL plan promises quarterly surveys to track outcomes. The M&E budget is 6 percent of total. One M&E officer. No subcontracted evaluation. Reviewers score down on feasibility.
Strong Example
The same $600K program revised to 12 indicators:
- 2 impact indicators: contribution to under-5 mortality (external MICS data, next round after project close) and sustained service uptake (endline survey, Year 3)
- 4 outcome indicators: knowledge of danger signs (baseline plus endline household survey), care-seeking within 48 hours, completion of referral, maternal-newborn consultation coverage
- 6 output indicators: CHWs trained and active, home visits completed per CHW per month, danger-sign referrals logged, ANC visits at supported facilities, supervision visits, community dialogue sessions
Each indicator has a specific MoV. The 20 indicators that were cut live in a work plan tracker or a supplementary indicators appendix. The M&E budget covers the data collection the logframe promises. Reviewers score this section well on feasibility and prioritization.