Scoring Criteria

Score each of the five dimensions below from 5 (strongest) to 1 (weakest); the five scores sum to a total out of 25.
Indicator Quality

5: SMART indicators at every output and outcome level. Each is specific, measurable, time-bound, and directly measures the stated result. Targets set with baseline reference. Disaggregation specified where relevant.
4: Indicators present at most levels. One or two lack a time dimension or are slightly broad but remain operationalizable.
3: Indicators present at all levels but several are only partially operationalizable. Targets present but missing baseline references for some. Disaggregation specified for major indicators but not applied consistently.
2: Several indicators are proxy measures or too vague. Indicators missing at one or more levels. Targets absent or set without baselines.
1: Indicators absent or unmeasurable. Cannot be operationalized. No targets.
Data Collection System

5: Each indicator has a specific collection tool or source, a frequency, and a named responsible position. The system is realistic given staffing and geography.
4: Collection method and frequency documented for most indicators. Responsible party missing for one or two indicators. System broadly feasible.
3: Collection tools or sources identified for most indicators but generic for some. Frequency documented, but responsible positions missing for several indicators.
2: Generic sources listed without specifying how data will actually be collected. Frequency or responsibility gaps across multiple indicators.
1: No collection system described. It is unclear how any indicator will actually be measured.
Data Quality Assurance (DQA)

5: Explicitly addresses validity, reliability, and completeness before data are used. Includes a review or spot-check process with a named responsible party.
4: DQA addressed for key indicators. One or two dimensions of quality not covered.
3: DQA section present and covers at least one quality dimension with a named process. Reliability or completeness checks absent or described only in general terms. Spot-check process referenced but not specified.
2: DQA mentioned but generic. No specific processes described.
1: No DQA provisions. Data quality is assumed rather than managed.
Roles and Responsibilities

5: Named positions assigned for data collection, entry, analysis, reporting, and decision-making. Supervision and escalation paths described.
4: Most roles assigned. Supervision or escalation path missing, but core roles clear.
3: Roles assigned for data collection and reporting, but analysis and decision-making roles unspecified. No supervision or escalation path. Accountability for data quality unclear.
2: Roles vague or assigned to units rather than positions. Unclear who is accountable.
1: No roles assigned. The plan does not specify who does anything.
Learning and Use

5: Specific review cycles defined with participants and decision links. At least one mechanism for adaptive management. Learning documentation process specified.
4: Review cycles mentioned. Link to decision-making implied but not explicit.
3: At least one review cycle defined with a stated frequency, but participant roles and decision links are vague. Some reference to adaptive management but no trigger conditions or adjustment process described.
2: "Data will be used to inform decisions" stated without when, by whom, or through what process.
1: No learning provisions. The plan covers collection only, not use.
Score Interpretation
| Total (out of 25) | Band | Next Step |
|---|---|---|
| 22-25 | Strong | Minor refinements only |
| 17-21 | Adequate | Address flagged dimensions before submission |
| 11-16 | Needs Revision | Return to MEL team with AI output as revision brief |
| 5-10 | Substantial Revision | Redesign the MEL system before proceeding |
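The banding above is a simple range lookup over the summed dimension scores. A minimal sketch, assuming five dimension scores of 1-5 each (the function name `interpret` and the score list are illustrative, not part of the rubric itself):

```python
# Band table taken directly from the Score Interpretation table above.
BANDS = [
    (22, 25, "Strong", "Minor refinements only"),
    (17, 21, "Adequate", "Address flagged dimensions before submission"),
    (11, 16, "Needs Revision", "Return to MEL team with AI output as revision brief"),
    (5, 10, "Substantial Revision", "Redesign the MEL system before proceeding"),
]

def interpret(scores):
    """Sum five 1-5 dimension scores and return (total, band, next_step)."""
    if len(scores) != 5 or any(s not in range(1, 6) for s in scores):
        raise ValueError("Expected five dimension scores, each between 1 and 5")
    total = sum(scores)
    for low, high, band, step in BANDS:
        if low <= total <= high:
            return total, band, step

print(interpret([5, 4, 4, 3, 3]))  # total 19 -> Adequate
```

Because each dimension contributes 1-5 points, totals always fall between 5 and 25, so the four bands cover every possible score with no gaps.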