MEL Plan Review

AI Prompt Templates

Copy the prompt into Claude, ChatGPT, or Gemini, paste your MEL (monitoring, evaluation, and learning) plan at the bottom, and run. You will get a scored quality assessment with evidence and revision priorities.

You are an expert M&E advisor. Score the MEL Plan I will provide using the rubric below.

SCORING RUBRIC - MEL Plan Review
Score each dimension 1-5 using these criteria:

DIMENSION 1: Indicator Coverage & Quality
- Score 5: SMART indicators present for every output and outcome level. Each indicator is specific (names target group, location, metric), measurable, time-bound, and directly measures the stated result. Targets set with baseline reference. Disaggregation variables (sex, age, location) specified where relevant.
- Score 4: Indicators present at most levels. 1-2 lack a time dimension or are slightly broad but still operationalizable.
- Score 3: Indicators present at all levels but several are partially operationalizable. Targets present but missing baseline references for some. Disaggregation specified for major indicators but not consistently applied.
- Score 2: Several indicators are proxy measures or too vague. Missing indicators at one or more results levels. Targets absent or set without baselines.
- Score 1: Indicators absent or unmeasurable. Cannot be operationalized. No targets.

DIMENSION 2: Data Collection Systems
- Score 5: Each indicator has a specified collection tool or source (not just "project records"), collection frequency, and a named responsible position. The system is realistic given staffing and geography.
- Score 4: Collection method and frequency documented for most indicators. Responsible party missing for 1-2. System broadly feasible.
- Score 3: Collection tools or sources identified for most indicators but descriptions are generic for some (e.g., "project records" without further detail). Frequency documented but responsible positions missing for several indicators.
- Score 2: Generic sources listed ("community registers," "reports") without specifying how data will actually be collected. Frequency or responsibility gaps across multiple indicators.
- Score 1: No collection system described. It is unclear how any indicator will actually be measured.

DIMENSION 3: Data Quality Assurance
- Score 5: The plan explicitly addresses how data quality will be verified before use - covering at least validity (measures what it claims), reliability (consistent across collectors), and completeness (no systematic gaps). Includes a review or spot-check process.
- Score 4: DQA is addressed for key indicators. 1-2 dimensions of quality (e.g., timeliness) not covered.
- Score 3: DQA section present and covers at least one quality dimension with a named process, but reliability or completeness checks are absent or described only in general terms. Spot-check process referenced but not specified.
- Score 2: DQA mentioned but generic ("data will be checked for accuracy"). No specific processes described.
- Score 1: No DQA provisions. Data quality is assumed rather than managed.

DIMENSION 4: Roles & Responsibilities
- Score 5: Named positions (not individuals) assigned for: data collection, data entry, analysis, reporting, and decision-making. Supervision and escalation paths described.
- Score 4: Most roles assigned. Supervision or escalation path missing but core collection and reporting roles clear.
- Score 3: Roles assigned to named positions for data collection and reporting but analysis and decision-making roles unspecified. No supervision or escalation path. Accountability for data quality unclear.
- Score 2: Roles vague or assigned to units rather than positions. Unclear who is accountable for data quality or reporting deadlines.
- Score 1: No roles assigned. The plan does not specify who does anything.

DIMENSION 5: Learning & Adaptive Management
- Score 5: Specific review cycles defined (frequency, who participates, what decisions they feed into). At least one mechanism for data to trigger adaptive management - program adjustments based on findings. Learning documentation process specified.
- Score 4: Review cycles mentioned. Link to decision-making implied but not explicit. Learning documentation referenced.
- Score 3: At least one review cycle defined with a stated frequency, but participant roles and decision links are vague. Some reference to adaptive management but no trigger conditions or adjustment process described.
- Score 2: "Data will be used to inform decisions" stated without specifying when, by whom, or through what process.
- Score 1: No learning provisions. The plan covers data collection only, not use.

OUTPUT FORMAT:
Return your assessment as a table followed by a summary:

| Dimension | Score (1-5) | Evidence from MEL Plan | Priority Revision |
|-----------|-------------|------------------------|-------------------|
| Indicator Coverage & Quality | | | |
| Data Collection Systems | | | |
| Data Quality Assurance | | | |
| Roles & Responsibilities | | | |
| Learning & Adaptive Management | | | |

**Total: X/25**
**Band:** Strong (22-25) / Adequate (17-21) / Needs Revision (11-16) / Substantial Revision (5-10)
**Single Most Important Revision:** [One specific sentence]

For any dimension scored 1 or 2, add a brief explanation and a concrete revision example.

MEL PLAN TO SCORE:
[Paste your MEL Plan here]
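
If you are reviewing several plans, the same template can be run programmatically. The sketch below is a minimal example using the Anthropic Python SDK; the file names and model ID are placeholder assumptions, and the equivalent OpenAI or Gemini client calls would work the same way.

```python
import anthropic

# Reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Assumed file names: the prompt template above, and the plan to review.
template = open("mel_review_prompt.txt").read()
mel_plan = open("mel_plan.md").read()

# Substitute the plan into the paste marker at the end of the template.
prompt = template.replace("[Paste your MEL Plan here]", mel_plan)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID; use any current model
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)

# Prints the scored table, band, and single most important revision.
print(response.content[0].text)
```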

Scoring Criteria

Indicator Coverage & Quality

5 - Excellent

SMART indicators at every output and outcome level. Each is specific, measurable, time-bound, and directly measures the stated result. Targets set with baseline reference. Disaggregation specified where relevant.

4 - Good

Indicators at most levels. 1-2 lack a time dimension or are slightly broad but operationalizable.

3 - Adequate

Indicators present at all levels but several are partially operationalizable. Targets present but missing baseline references for some. Disaggregation specified for major indicators but not applied consistently.

2 - Needs Improvement

Several indicators are proxy measures or too vague. Indicators missing at one or more results levels. Targets absent or without baselines.

1 - Inadequate

Indicators absent or unmeasurable. Cannot be operationalized. No targets.

Data Collection Systems

5 - Excellent

Each indicator has a specific collection tool or source, frequency, and named responsible position. System is realistic given staffing and geography.

4 - Good

Collection method and frequency documented for most indicators. Responsible party missing for 1-2. System broadly feasible.

3 - Adequate

Collection tools or sources identified for most indicators but generic for some. Frequency documented but responsible positions missing for several indicators.

2 - Needs Improvement

Generic sources listed without specifying how data will actually be collected. Frequency or responsibility gaps across multiple indicators.

1 - Inadequate

No collection system described. It is unclear how any indicator will actually be measured.

Data Quality Assurance

5 - Excellent

Explicitly addresses validity, reliability, and completeness before data is used. Includes a review or spot-check process with named responsible party.

4 - Good

DQA addressed for key indicators. 1-2 dimensions of quality not covered.

3 - Adequate

DQA section present and covers at least one quality dimension with a named process. Reliability or completeness checks absent or described only in general terms. Spot-check process referenced but not specified.

2 - Needs Improvement

DQA mentioned but generic. No specific processes described.

1 - Inadequate

No DQA provisions. Data quality is assumed rather than managed.

Roles & Responsibilities

5 - Excellent

Named positions assigned for data collection, entry, analysis, reporting, and decision-making. Supervision and escalation paths described.

4 - Good

Most roles assigned. Supervision or escalation path missing but core roles clear.

3 - Adequate

Roles assigned for data collection and reporting but analysis and decision-making roles unspecified. No supervision or escalation path. Accountability for data quality unclear.

2 - Needs Improvement

Roles vague or assigned to units rather than positions. Unclear who is accountable.

1 - Inadequate

No roles assigned. The plan does not specify who does anything.

Learning & Adaptive Management

5 - Excellent

Specific review cycles defined with participants and decision links. At least one mechanism for adaptive management. Learning documentation process specified.

4 - Good

Review cycles mentioned. Link to decision-making implied but not explicit.

3 - Adequate

At least one review cycle defined with a stated frequency, but participant roles and decision links are vague. Some reference to adaptive management but no trigger conditions or adjustment process described.

2 - Needs Improvement

"Data will be used to inform decisions" stated without specifying when, by whom, or through what process.

1 - Inadequate

No learning provisions. The plan covers collection only, not use.

Score Interpretation

| Total (out of 25) | Band | Next Step |
|-------------------|------|-----------|
| 22-25 | Strong | Minor refinements only |
| 17-21 | Adequate | Address flagged dimensions before submission |
| 11-16 | Needs Revision | Return to MEL team with AI output as revision brief |
| 5-10 | Substantial Revision | Redesign the MEL system before proceeding |
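
For teams logging review totals across a portfolio, the band table above reduces to a simple threshold lookup. A minimal sketch (the function name is illustrative):

```python
def score_band(total: int) -> tuple[str, str]:
    """Map a rubric total (5-25) to its band and recommended next step."""
    if not 5 <= total <= 25:
        raise ValueError("total must be 5-25: five dimensions scored 1-5 each")
    if total >= 22:
        return "Strong", "Minor refinements only"
    if total >= 17:
        return "Adequate", "Address flagged dimensions before submission"
    if total >= 11:
        return "Needs Revision", "Return to MEL team with AI output as revision brief"
    return "Substantial Revision", "Redesign the MEL system before proceeding"


print(score_band(18))  # ('Adequate', 'Address flagged dimensions before submission')
```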