Decision-Linked Measurement Quality

AI Prompt Templates

Copy a prompt into Claude, ChatGPT, or Gemini. Paste your document at the bottom and run.

Paste a document and get a scored quality assessment with evidence and revision priorities.

You are an expert M&E specialist with deep grounding in decision-linked measurement. Score the deliverable I will provide using the rubric below. The deliverable may be a MEL plan, indicator reference sheet, monitoring brief, evaluation report, learning agenda, or any document expected to drive decisions rather than only describe state.

SCORING RUBRIC - Decision-Linked Measurement Quality
Score each dimension 1-5 using these criteria:

DIMENSION 1: Decision Identification
- Score 5: All four elements present. Decisions are explicitly named (not implied or assumed), scoped (the choice being made is clear, e.g., "continue, modify, or discontinue activity X"), tied to a named decision-maker (specific role or person, not "the team"), and timing or cadence is specified (when the decision is made, on what schedule).
- Score 4: At least three of four elements present. Decisions named and scoped; decision-maker or timing partial.
- Score 3: Decisions named but scope is vague, decision-makers are unnamed or grouped (e.g., "leadership"), or timing is unspecified. Reader can guess at decisions.
- Score 2: Decisions implied but not named. Reader has to infer what decisions the data is supposed to inform.
- Score 1: No decisions identified. The deliverable describes data, indicators, or findings without connecting them to any decision.

DIMENSION 2: Data-to-Decision Mapping
- Score 5: All four elements present. Every indicator, finding, or data product maps to a specific identified decision. The link is traceable in both directions (start with a decision, find the data; start with data, find the decision). No orphan indicators (every indicator serves a decision). No orphan decisions (every decision has data to inform it).
- Score 4: At least three of four elements present. Most indicators or findings map to decisions; one or two orphans on either side.
- Score 3: Mapping is partial or scattered. Some indicators map clearly; others are routine or "nice to know" without an attached decision. Up to four orphans, or one decision lacking data.
- Score 2: Less than half of indicators or findings have a clear decision linkage. Significant orphan indicators or unmapped decisions.
- Score 1: No mapping between data and decisions visible.

DIMENSION 3: Threshold and Trigger Logic
- Score 5: All four elements present. Thresholds are quantified (e.g., "below 60 percent completion") or qualitatively specified (e.g., "if community feedback indicates X"). Different actions are explicitly tied to different threshold ranges. Trigger conditions are documented (when does the threshold get evaluated). Decision rules are transparent enough that anyone reading the document could apply them consistently.
- Score 4: At least three of four elements present. Thresholds and actions tied; trigger timing or transparency partial.
- Score 3: Some thresholds defined but actions vague ("we will review"), or actions defined but no thresholds, or thresholds set but trigger timing unclear.
- Score 2: Thresholds mentioned in passing but no decision rule attached. Reader cannot tell what triggers action.
- Score 1: No thresholds or trigger logic. Indicators and findings are presented without decision rules.

DIMENSION 4: Audience and Use Specification
- Score 5: All four elements present. Each data product names a specific audience (role or person, not generic "stakeholders"). A specific use case is named for each product (what the audience will do with it). Format is appropriate to the audience's capacity and context (e.g., a one-page brief for senior leaders, a detailed dashboard for analysts). A distribution mechanism is specified (how the audience accesses or receives the product, on what cadence).
- Score 4: At least three of four elements present. Audience and use case named; format or distribution partial.
- Score 3: Audiences named generically (e.g., "stakeholders," "donors") with weak use specification. Format appropriate but distribution unspecified, or vice versa.
- Score 2: Audience named generically. No clear use case. Format mismatched to audience capacity.
- Score 1: No audience or use specification. Data products exist without a named user.

DIMENSION 5: Action Pathway and Follow-Through
- Score 5: All four elements present. Findings translate into specific recommendations (not generic "improve" or "strengthen"). Recommendations link to identified decisions. Action owners are named for each recommendation. A follow-up or accountability mechanism is documented (when progress is reviewed, who reviews it).
- Score 4: At least three of four elements present. Recommendations and decisions tied; action ownership or follow-up partial.
- Score 3: Recommendations exist but are generic. Some link to decisions, some do not. Action owners absent or unclear. No follow-up cadence.
- Score 2: Recommendations are observations restated as imperatives ("data quality should improve"). No action ownership. No follow-up.
- Score 1: No action pathway. Findings are described without recommendations or accountability.

OUTPUT FORMAT:
Return your assessment as a table followed by a summary:

| Dimension | Score (1-5) | Evidence from Document | Priority Revision |
|-----------|-------------|------------------------|-------------------|
| Decision Identification | | | |
| Data-to-Decision Mapping | | | |
| Threshold and Trigger Logic | | | |
| Audience and Use Specification | | | |
| Action Pathway and Follow-Through | | | |

**Total: X/25**
**Band:** Strong (22-25) / Adequate (17-21) / Needs Revision (11-16) / Substantial Revision (5-10)
**Single Most Important Revision:** [One specific sentence]

For any dimension scored 1 or 2, add a brief explanation and a concrete revision example.

DOCUMENT TO SCORE:
[Paste your deliverable here]

Scoring Criteria

Decision Identification
- 5 (Excellent): All four elements present. Decisions explicitly named (not implied), scoped (the choice being made is clear), tied to a named decision-maker (specific role or person), and timing or cadence specified.

- 4 (Good): At least three of four elements present. Decisions named and scoped; decision-maker or timing partial.

- 3 (Adequate): Decisions named but scope vague, decision-makers unnamed or grouped, or timing unspecified.

- 2 (Needs Improvement): Decisions implied but not named. Reader has to infer what decisions the data informs.

- 1 (Inadequate): No decisions identified.

Data-to-Decision Mapping
- 5 (Excellent): All four elements present. Every indicator or finding maps to a specific decision. Link traceable in both directions. No orphan indicators. No orphan decisions.

- 4 (Good): At least three of four elements present. Most indicators map; one or two orphans on either side.

- 3 (Adequate): Mapping partial or scattered. Some indicators map clearly; others are routine. Up to four orphans, or one decision lacking data.

- 2 (Needs Improvement): Less than half of indicators or findings have a clear decision linkage.

- 1 (Inadequate): No mapping between data and decisions visible.
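The traceability check behind this dimension is, in effect, a two-way audit of a bipartite map. As a minimal sketch (all indicator and decision names here are hypothetical examples, not part of the rubric), orphans on both sides fall out directly from the mapping:

```python
# Sketch of the two-way orphan audit described above.
# All indicator and decision names are hypothetical.

decisions = {"continue_activity_x", "reallocate_budget", "revise_targeting"}

# Which decision(s) each indicator is meant to inform.
indicator_map = {
    "completion_rate": {"continue_activity_x"},
    "cost_per_participant": {"reallocate_budget"},
    "satisfaction_score": set(),  # collected, but informs nothing
}

# Orphan indicators: data collected that serves no decision.
orphan_indicators = [name for name, dec in indicator_map.items() if not dec]

# Orphan decisions: decisions with no data to inform them.
covered = set().union(*indicator_map.values())
orphan_decisions = sorted(decisions - covered)

print(orphan_indicators)  # ['satisfaction_score']
print(orphan_decisions)   # ['revise_targeting']
```

A deliverable that scores 5 here would produce empty lists on both sides of this audit.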

Threshold and Trigger Logic
- 5 (Excellent): All four elements present. Thresholds quantified or qualitatively specified. Different actions tied to threshold ranges. Trigger conditions documented. Decision rules transparent.

- 4 (Good): At least three of four elements present. Thresholds and actions tied; trigger timing or transparency partial.

- 3 (Adequate): Some thresholds defined but actions vague, or actions defined but no thresholds, or trigger timing unclear.

- 2 (Needs Improvement): Thresholds mentioned but no decision rule attached. Reader cannot tell what triggers action.

- 1 (Inadequate): No thresholds or trigger logic.
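A decision rule transparent enough for anyone to apply consistently amounts to a lookup from threshold ranges to pre-agreed actions. A minimal sketch (the ranges, owners, and actions below are hypothetical illustrations of the structure, not recommended values):

```python
# Hypothetical decision rule for a completion-rate indicator,
# evaluated at a documented trigger point (e.g. each quarterly review).

def completion_rate_rule(rate_pct: float) -> str:
    """Map a completion rate (percent) to a pre-agreed action."""
    if rate_pct < 60:
        # Below the escalation threshold: the named decision-maker reviews.
        return "escalate to program director for continue/modify/discontinue review"
    if rate_pct < 80:
        # Within the caution band: adjustment without escalation.
        return "activity manager adjusts delivery plan within the quarter"
    # At or above target: routine reporting only.
    return "no action; report in routine quarterly brief"

print(completion_rate_rule(55.0))
```

Because every range is tied to a distinct action, two readers applying the rule to the same data reach the same conclusion, which is what a score of 5 requires.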

Audience and Use Specification
- 5 (Excellent): All four elements present. Specific audience named (role or person). Specific use case named. Format appropriate to the audience. Distribution mechanism specified.

- 4 (Good): At least three of four elements present. Audience and use case named; format or distribution partial.

- 3 (Adequate): Audiences named generically with weak use specification. Format appropriate but distribution unspecified, or vice versa.

- 2 (Needs Improvement): Audience named generically. No clear use case. Format mismatched to audience capacity.

- 1 (Inadequate): No audience or use specification.

Action Pathway and Follow-Through
- 5 (Excellent): All four elements present. Findings translate into specific recommendations. Recommendations link to identified decisions. Action owners named. Follow-up or accountability mechanism documented.

- 4 (Good): At least three of four elements present. Recommendations and decisions tied; action ownership or follow-up partial.

- 3 (Adequate): Recommendations exist but are generic. Some link to decisions, some do not. Action owners absent or unclear. No follow-up cadence.

- 2 (Needs Improvement): Recommendations are observations restated as imperatives. No action ownership. No follow-up.

- 1 (Inadequate): No action pathway.

Score Interpretation

| Total (out of 25) | Band | Next Step |
|-------------------|------|-----------|
| 22-25 | Strong | Document is decision-linked. Use as-is or with minor refinements. |
| 17-21 | Adequate | Address flagged dimensions before circulating to decision-makers. Most likely fix: tighten threshold logic and audience specification. |
| 11-16 | Needs Revision | Substantial revision required. Use the Revise prompt to identify and fix decision-linkage gaps. |
| 5-10 | Substantial Revision | Document describes state but does not drive decisions. Rebuild starting from a decisions inventory and map indicators back to those decisions. |
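The band cut-offs above apply mechanically to the 5-25 total. A small helper (a sketch that simply mirrors the table) makes the mapping explicit:

```python
def score_band(total: int) -> str:
    """Map a 5-25 rubric total to its interpretation band.

    Cut-offs follow the score interpretation table:
    22-25 Strong, 17-21 Adequate, 11-16 Needs Revision, 5-10 Substantial Revision.
    """
    if not 5 <= total <= 25:
        raise ValueError("total must be between 5 and 25")
    if total >= 22:
        return "Strong"
    if total >= 17:
        return "Adequate"
    if total >= 11:
        return "Needs Revision"
    return "Substantial Revision"

print(score_band(19))  # Adequate
```

Note the bands are contiguous: every integer total from 5 to 25 falls into exactly one band, so no score is left uninterpreted.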