Logframe Quality Assessment

AI Prompt Templates

Paste a logframe and get a scored quality assessment with evidence and revision priorities. Copy the prompt into Claude, ChatGPT, or Gemini, paste your document at the bottom, and run.
You are an expert M&E advisor. Score the logframe I will provide using the rubric below.

SCORING RUBRIC - Logframe Quality Assessment
Score each dimension 1-5 using these criteria:

DIMENSION 1: Intervention Logic
- Score 5: Causal chain is explicit and directionally correct at all levels (activity → output → outcome → goal). Each link has a stated rationale. A reviewer unfamiliar with the programme can trace the full logic without additional explanation.
- Score 4: Causal chain is mostly coherent with 1-2 gaps where a connection is assumed rather than explained.
- Score 3: Causal chain is partially coherent. Key levels are present and broadly sequenced correctly, but multiple links are assumed rather than explained and a reviewer would need supplementary information to follow the logic.
- Score 2: Several gaps in causal reasoning. Some outputs do not clearly lead to stated outcomes.
- Score 1: No clear causal logic. Activities and outcomes appear unrelated.

DIMENSION 2: Indicator Quality
- Score 5: All indicators are specific (who/what/where), measurable, time-bound, and directly measure the stated result. Disaggregation variables specified where relevant.
- Score 4: Most indicators meet SMART criteria. 1-2 lack a time dimension or are slightly broad but still measurable.
- Score 3: Some indicators meet SMART criteria but roughly half are missing one or more elements (time dimension, specificity, or direct alignment to the result statement). Data collection would be possible but would require clarification on several indicators.
- Score 2: Several indicators are proxy measures. Missing time dimensions or disaggregation.
- Score 1: Indicators are vague, unmeasurable, or do not correspond to the result statements.

DIMENSION 3: Assumptions and Risks
- Score 5: Assumptions stated at outcome and goal levels. Each is specific, testable, and genuinely external to programme control (not trivially true).
- Score 4: Assumptions present at most levels. 1-2 are vague, but the major dependencies are identified.
- Score 3: Assumptions present at some levels. Key external dependencies are acknowledged but stated broadly, and some are trivially true or could be interpreted as programme activities rather than external conditions.
- Score 2: Incomplete: present at some levels, absent at others. Many trivially true or actually programme activities.
- Score 1: No assumptions stated. Major external dependencies not identified.

DIMENSION 4: Results Hierarchy
- Score 5: Standard structure throughout (activities produce outputs; outputs enable outcomes; outcomes contribute to goal). Each level consistently differentiated.
- Score 4: Mostly correct. 1-2 items at the wrong level.
- Score 3: Overall hierarchy is recognizable but several items appear at the wrong level. The distinction between outputs and outcomes is inconsistently applied, and a reviewer could identify which level is intended only with additional context.
- Score 2: Multiple items at the wrong level. Outputs and outcomes consistently confused.
- Score 1: No recognizable hierarchy. Activities and outcomes collapsed into one level.

DIMENSION 5: Monitoring Provisions
- Score 5: Sources of verification specified per indicator. Collection frequency, responsible party, and method documented. At least one indicator per results level.
- Score 4: Sources listed for most indicators. Frequency or responsible party missing for 1-2.
- Score 3: Sources of verification listed for most indicators but are often generic. At least one of collection frequency, responsible party, or method is consistently missing. Most results levels have some indicator coverage.
- Score 2: Sources generic ("project records") without specifying how data will actually be collected. Several levels lack indicators.
- Score 1: No sources of verification. Major results levels have no indicators.

OUTPUT FORMAT:
Return your assessment as a table followed by a summary:

| Dimension | Score (1-5) | Evidence from Logframe | Priority Revision |
|-----------|-------------|----------------------|-------------------|
| Intervention Logic | | | |
| Indicator Quality | | | |
| Assumptions and Risks | | | |
| Results Hierarchy | | | |
| Monitoring Provisions | | | |

**Total: X/25**
**Band:** Strong (22-25) / Adequate (17-21) / Needs Revision (11-16) / Substantial Revision (5-10)
**Single Most Important Revision:** [One specific sentence]

For any dimension scored 1 or 2, add a brief explanation and a concrete revision example.

LOGFRAME TO SCORE:
[Paste your logframe here]

Scoring Criteria

Intervention Logic
5: Excellent

Causal chain is explicit and directionally correct at every level (activity → output → outcome → goal). Each link has a stated rationale. A reviewer unfamiliar with the programme can trace the full logic without additional explanation.

4: Good

Causal chain is mostly coherent with 1-2 gaps where a connection is assumed rather than explained. Overall logic is plausible.

3: Adequate

Causal chain is partially coherent. Key levels are present and broadly sequenced correctly, but multiple links are assumed rather than explained and a reviewer would need supplementary information to follow the logic.

2: Needs Improvement

Several gaps in causal reasoning. Some outputs do not clearly lead to stated outcomes. Requires significant explanation to make sense.

1: Inadequate

No clear causal logic. Activities and outcomes appear unrelated. Reads as a list of deliverables, not a causal argument.

Indicator Quality
5: Excellent

All indicators are specific (who, what, where), measurable, time-bound, and directly measure the stated result. Disaggregation variables specified where relevant.

4: Good

Most indicators meet SMART criteria. 1-2 lack a time dimension or are slightly broad but measurable in practice.

3: Adequate

Some indicators meet SMART criteria but roughly half are missing one or more elements (time dimension, specificity, or direct alignment to the result statement). Data collection would be possible but would require clarification on several indicators.

2: Needs Improvement

Several indicators are broad proxy measures. Missing time dimensions or disaggregation. Would require clarification before data collection.

1: Inadequate

Indicators are vague, unmeasurable, or do not correspond to the result statements. Cannot be operationalized without major revision.

Assumptions and Risks
5: Excellent

Assumptions are stated at outcome and goal levels. Each is specific, testable, and genuinely external to programme control. Not trivially true.

4: Good

Assumptions present at most levels. 1-2 are vague, but the major dependencies are identified.

3: Adequate

Assumptions present at some levels. Key external dependencies are acknowledged but stated broadly, and some are trivially true or could be interpreted as programme activities rather than external conditions.

2: Needs Improvement

Incomplete: present at some levels but absent at others. Many trivially true or actually programme activities.

1: Inadequate

No assumptions stated. Major external dependencies have not been identified.

Results Hierarchy
5: Excellent

Hierarchy follows standard structure (activities produce outputs; outputs enable outcomes; outcomes contribute to goal). Each level clearly defined and consistently differentiated.

4: Good

Mostly correct. 1-2 items at the wrong level. Overall structure is recognizable.

3: Adequate

Overall hierarchy is recognizable but several items appear at the wrong level. The distinction between outputs and outcomes is inconsistently applied, and a reviewer could identify which level is intended only with additional context.

2: Needs Improvement

Multiple items at the wrong level. Outputs and outcomes consistently confused.

1: Inadequate

No recognizable hierarchy. Activities and outcomes collapsed into a single list.

Monitoring Provisions
5: Excellent

Sources of verification specified per indicator. Collection frequency, responsible party, and method documented. At least one indicator per results level.

4: Good

Sources listed for most indicators. Frequency or responsible party missing for 1-2. Most levels have indicator coverage.

3: Adequate

Sources of verification listed for most indicators but are often generic. At least one of collection frequency, responsible party, or method is consistently missing. Most results levels have some indicator coverage.

2: Needs Improvement

Sources generic ("project records") without specifying how data will actually be collected. Several levels lack indicators.

1: Inadequate

No sources of verification. Major results levels have no indicators. Monitoring approach is not addressed.

Score Interpretation

| Total (out of 25) | Band | Next Step |
|-------------------|------|-----------|
| 22-25 | Strong | Minor refinements only |
| 17-21 | Adequate | Address flagged dimensions before submission |
| 11-16 | Needs Revision | Return to design team with AI output as revision brief |
| 5-10 | Substantial Revision | Facilitate a design workshop before further drafting |
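
If you want to tally scores outside the chat, the banding arithmetic above is simple enough to script. The following is a minimal Python sketch (the function names and the example scores are illustrative, not part of the template); the thresholds match the Score Interpretation table:

```python
# Band floors from the Score Interpretation table, checked highest first.
BANDS = [
    (22, "Strong"),
    (17, "Adequate"),
    (11, "Needs Revision"),
    (5, "Substantial Revision"),
]

def band(total: int) -> str:
    """Map a total score (5-25) to its quality band."""
    if not 5 <= total <= 25:
        raise ValueError("total must be between 5 and 25")
    for floor, name in BANDS:
        if total >= floor:
            return name

def total_score(scores: dict) -> int:
    """Sum the five dimension scores (each 1-5)."""
    assert len(scores) == 5 and all(1 <= s <= 5 for s in scores.values())
    return sum(scores.values())

# Hypothetical assessment: 4 + 4 + 3 + 4 + 4 = 19, which falls in 17-21.
example = {
    "Intervention Logic": 4,
    "Indicator Quality": 4,
    "Assumptions and Risks": 3,
    "Results Hierarchy": 4,
    "Monitoring Provisions": 4,
}
print(band(total_score(example)))  # Adequate
```

Note the thresholds partition the full 5-25 range with no gaps, so every valid total maps to exactly one band.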