Copy the prompt below into Claude, ChatGPT, or Gemini, paste your document at the bottom, and run.
You will get a scored quality assessment with evidence from the document and prioritized revisions.
You are an expert M&E evaluator with experience converting findings into actionable recommendations across program contexts. Score the findings and recommendations of the document I will provide using the rubric below. The document may be an evaluation report, donor progress report, contribution analysis, learning brief, monitoring brief, case study, or any document where findings should drive recommendations.
SCORING RUBRIC - Findings-to-Recommendations Quality
Score each dimension 1-5 using these criteria:
DIMENSION 1: Findings Specificity and Evidence Base
- Score 5: All four elements present. Findings are concrete and avoid vague statements like "the project achieved results." Each finding cites specific evidence (data point, quote, observation, document reference) supporting it. Strength of evidence is indicated where it varies (strong, moderate, suggestive). Findings are disaggregated where relevant (by group, geography, time, sub-population).
- Score 4: At least three of four elements present. Findings concrete and evidenced; strength of evidence or disaggregation partial.
- Score 3: Findings stated with some evidence but mixed specificity. Strength of evidence not flagged. Disaggregation absent where it would matter.
- Score 2: Findings are general claims with thin evidence. No disaggregation.
- Score 1: Findings are vague assertions with no traceable evidence base.
DIMENSION 2: Evidence-to-Recommendation Linkage
- Score 5: All four elements present. Each recommendation traces to specific findings (named or referenced) rather than floating in isolation. Linkage is bidirectional: the reader can move from finding to recommendation and from recommendation back to the findings that generated it. No orphan recommendations: every recommendation has an evidentiary basis. No orphan findings of high importance: key findings translate into recommendations.
- Score 4: At least three of four elements present. Most recommendations linked; bidirectionality or coverage of key findings partial.
- Score 3: Some recommendations linked to findings but linkage logic is implicit. A few recommendations float without clear evidence base. Some major findings have no associated recommendation.
- Score 2: Recommendations and findings sit in separate sections with little explicit linkage. Multiple orphan recommendations.
- Score 1: Recommendations have no demonstrable basis in the findings presented.
DIMENSION 3: Recommendation Specificity and Actionability
- Score 5: All four elements present. Recommendations are specific and avoid vague phrasing like "improve X" or "strengthen Y." Each recommendation names what to do, by whom, and by when. Recommendation level matches decision authority: recommendations to project staff differ in scope from recommendations to donors or government counterparts. Recommendations are operationalizable: a reader could act on them without further translation or interpretation.
- Score 4: At least three of four elements present. Recommendations specific and actionable; ownership level or operationalizability partial.
- Score 3: Recommendations describe direction but lack who and when. Level of authority not clearly matched. Some translation needed before action.
- Score 2: Recommendations are aspirational language ("strengthen capacity") without concrete actions.
- Score 1: Recommendations are restated findings or generic platitudes.
DIMENSION 4: Prioritization and Feasibility
- Score 5: All four elements present. Priorities are indicated among recommendations (high/medium/low, must-do/should-do, or ranked) rather than treating all as equal weight. Feasibility is considered (resources available, timing, organizational capacity, political constraints). Trade-offs between recommendations are addressed where they conflict or compete for the same resources. Sequencing or dependencies are noted (what must happen before what).
- Score 4: At least three of four elements present. Priorities and feasibility addressed; trade-offs or sequencing partial.
- Score 3: Some prioritization signal but not systematic. Feasibility lightly considered. Trade-offs and sequencing absent.
- Score 2: All recommendations treated as equal weight. Feasibility not considered.
- Score 1: No prioritization, feasibility, or sequencing thinking visible.
DIMENSION 5: Ownership and Accountability
- Score 5: All four elements present. Each recommendation has a named owner (specific role or position, not "the team" or "stakeholders"). Follow-up cadence is specified (when progress will be reviewed, by whom, in what forum). Accountability mechanism is documented (how progress is tracked, what happens if recommendations are not implemented). A mechanism exists for revising recommendations as conditions change.
- Score 4: At least three of four elements present. Owner named and cadence set; accountability or revision mechanism partial.
- Score 3: Some owners named but inconsistently. Cadence vague. No tracking or revision mechanism.
- Score 2: Ownership left to "the team" or unspecified actors. No cadence.
- Score 1: No ownership, cadence, or accountability mechanism.
OUTPUT FORMAT:
Return your assessment as a table followed by a summary:
| Dimension | Score (1-5) | Evidence from Document | Priority Revision |
|-----------|-------------|------------------------|-------------------|
| Findings Specificity and Evidence Base | | | |
| Evidence-to-Recommendation Linkage | | | |
| Recommendation Specificity and Actionability | | | |
| Prioritization and Feasibility | | | |
| Ownership and Accountability | | | |
**Total: X/25**
**Band:** Strong (22-25) / Adequate (17-21) / Needs Revision (11-16) / Substantial Revision (5-10)
**Single Most Important Revision:** [One specific sentence]
For any dimension scored 1 or 2, add a brief explanation and a concrete revision example.
DOCUMENT TO SCORE:
[Paste your findings and recommendations section or full document here]
Scoring Criteria
Findings Specificity and Evidence Base
5 (Excellent): All four elements present. Findings concrete (not vague). Each cites specific evidence. Strength of evidence indicated where it varies. Findings disaggregated where relevant.
4 (Good): At least three of four elements present. Findings concrete and evidenced; strength or disaggregation partial.
3 (Adequate): Findings stated with some evidence but mixed specificity. Strength not flagged. Disaggregation absent where it would matter.
2 (Needs Improvement): Findings are general claims with thin evidence. No disaggregation.
1 (Inadequate): Findings are vague assertions with no traceable evidence.
Evidence-to-Recommendation Linkage
5 (Excellent): All four elements present. Each recommendation traces to specific findings. Linkage bidirectional. No orphan recommendations. No orphan high-importance findings.
4 (Good): At least three elements. Most recommendations linked; bidirectionality or coverage partial.
3 (Adequate): Some recommendations linked but linkage implicit. A few orphan recommendations. Some major findings without recommendations.
2 (Needs Improvement): Recommendations and findings sit in separate sections with little linkage. Multiple orphans.
1 (Inadequate): Recommendations have no demonstrable basis in the findings.
Recommendation Specificity and Actionability
5 (Excellent): All four elements present. Recommendations specific. Each names what, by whom, by when. Level matches decision authority. Operationalizable without translation.
4 (Good): At least three elements. Recommendations specific and actionable; ownership level or operationalizability partial.
3 (Adequate): Recommendations describe direction but lack who and when. Level not matched to authority.
2 (Needs Improvement): Recommendations are aspirational language without concrete actions.
1 (Inadequate): Recommendations are restated findings or generic platitudes.
Prioritization and Feasibility
5 (Excellent): All four elements present. Priorities indicated. Feasibility considered. Trade-offs addressed. Sequencing or dependencies noted.
4 (Good): At least three elements. Priorities and feasibility addressed; trade-offs or sequencing partial.
3 (Adequate): Some prioritization signal but not systematic. Feasibility lightly considered. Trade-offs and sequencing absent.
2 (Needs Improvement): All recommendations treated as equal weight. Feasibility not considered.
1 (Inadequate): No prioritization, feasibility, or sequencing thinking visible.
Ownership and Accountability
5 (Excellent): All four elements present. Each recommendation has a named owner. Follow-up cadence specified. Accountability mechanism documented. Revision mechanism in place.
4 (Good): At least three elements. Owner named and cadence set; accountability or revision partial.
3 (Adequate): Some owners named but inconsistently. Cadence vague. No tracking or revision mechanism.
2 (Needs Improvement): Ownership left to "the team" or unspecified actors. No cadence.
1 (Inadequate): No ownership, cadence, or accountability mechanism.
Score Interpretation
| Total (out of 25) | Band | Next Step |
|-------------------|------|-----------|
| 22-25 | Strong | Recommendations are actionable and ready for implementation tracking. |
| 17-21 | Adequate | Address flagged dimensions before circulating. Most likely fix: tighten ownership assignments and add feasibility considerations. |
| 11-16 | Needs Revision | Substantial revision needed; rebuild the recommendation set from findings using the Revise prompt. |
| 5-10 | Substantial Revision | Recommendations are observations restated as imperatives. Restart from evidence-recommendation linkage and rebuild end-to-end. |
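If you score many documents and want to tally results consistently, the band mapping above can be sketched as a small helper. This is a hypothetical utility for tracking results, not part of the prompt itself; the function and variable names are illustrative.

```python
def score_band(total: int) -> str:
    """Map a rubric total (five dimensions, each scored 1-5) to its band."""
    if not 5 <= total <= 25:
        raise ValueError("Total must be between 5 and 25")
    if total >= 22:
        return "Strong"
    if total >= 17:
        return "Adequate"
    if total >= 11:
        return "Needs Revision"
    return "Substantial Revision"

# Example: dimension scores copied from one completed assessment table
scores = {
    "findings_specificity": 4,
    "evidence_linkage": 3,
    "recommendation_actionability": 4,
    "prioritization_feasibility": 2,
    "ownership_accountability": 3,
}
total = sum(scores.values())
print(f"Total: {total}/25 - Band: {score_band(total)}")  # Total: 16/25 - Band: Needs Revision
```

The thresholds mirror the interpretation table exactly, so a score of 16 falls in Needs Revision and a score of 17 crosses into Adequate.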