The Decision Test: 5 Questions for Every Indicator

Every indicator should pass the Decision Test before it earns a place in your monitoring plan. These 5 questions expose indicators that collect data no one will use.

Ben Playfair · 6 min read

Tags: indicators, decision-linked, monitoring plan, reference card

What This Card Does

Use this card before adding any indicator to a framework, and annually to audit the indicators already in it.

The principle: If data does not change a decision, it is not information. It is extraction.


Question 1: What decision does this inform?

Why It Matters: Data collection without a decision pathway is waste. Every indicator must connect to a specific choice someone will make. If you cannot name the decision, you are collecting "just in case" - the signature phrase of measurement theater.

Red Flag Answers:

  • "We need it for the annual report"
  • "The donor requested it"
  • "It would be good to know"
  • "We might need it later"
  • "It's standard practice"

Green Flag Answers:

  • "Whether to expand site coverage to Phase 2 locations"
  • "Whether the training curriculum needs revision"
  • "How to reallocate the discretionary budget in Q3"
  • "Which districts need additional supervision support"

The Test: Can you complete this sentence? "This indicator will help [specific person] decide [specific action] by [specific date]." If you cannot fill all three blanks, the indicator fails.


Question 2: Who makes that decision?

Why It Matters: Indicators without owners become orphans. Data flows into reports that no one reads because no one's job depends on reading them. A named decision-maker creates accountability for data use.

Red Flag Answers:

  • "The team"
  • "Management"
  • "Stakeholders"
  • "It goes to the steering committee"
  • No name, only a role

Green Flag Answers:

  • "Maria Chen, Country Director - she decides quarterly budget adjustments"
  • "The District Health Officer in Mombasa - she decides staffing rotations"
  • "Program Manager James Osei - he decides whether to modify the delivery model"

The Test: Can you email this person tomorrow and ask: "What will you do with this data when you receive it?" If you cannot send that email, the indicator has no owner.


Question 3: When do they need the data?

Why It Matters: Data has a shelf life. Information that arrives after the decision point is archaeology, not evidence. Timing determines value. A decision made in March cannot be informed by data available in June.

Red Flag Answers:

  • "For the final evaluation"
  • "When the project ends"
  • "Whenever we get around to analyzing it"
  • "The annual report deadline"
  • No timeline specified

Green Flag Answers:

  • "By February 15, before the budget revision meeting"
  • "Within 2 weeks of each training cohort, to inform the next cohort's curriculum"
  • "Monthly, by the 5th, for the operational review"
  • "Before the Q3 steering committee in September"

The Test: Does your data collection timeline allow analysis and delivery before the decision deadline? If you cannot deliver on time, the indicator is cosmetic.


Question 4: What would change based on different results?

Why It Matters: This question exposes indicators that certify activity rather than inform choices. If the answer is "nothing would change regardless of results," you are measuring to perform accountability, not to improve decisions.

Red Flag Answers:

  • "We would know where we stand"
  • "It would confirm our assumptions"
  • "We would report it either way"
  • "The target is fixed regardless"
  • "It depends on other factors"

Green Flag Answers:

  • "Below 60%: redesign the intervention. Above 80%: scale to new sites."
  • "If satisfaction drops below 3.5/5, we trigger a focus group to diagnose issues"
  • "High variance between sites triggers peer learning exchanges"
  • "Results below threshold mean we don't proceed to Phase 2"

The Test: State three different hypothetical results (low, medium, high) and the specific action that follows each. If all three lead to the same response - or no response - the indicator is decoration.

| If results show... | Then we will... |
|--------------------|-----------------|
| Below threshold | [Specific action] |
| At threshold | [Specific action] |
| Above threshold | [Specific action] |
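The "if results, then action" commitment above can be made explicit in code. This is a hypothetical sketch, not part of the card: the `action_for` function, the 60%/80% thresholds, and the action strings are illustrative, borrowed from the green-flag example under Question 4.

```python
def action_for(result: float, low: float = 60.0, high: float = 80.0) -> str:
    """Return the pre-committed action for a result (a percentage).

    Thresholds are illustrative: below `low` triggers redesign,
    above `high` triggers scaling, anything between holds course.
    """
    if result < low:
        return "redesign the intervention"
    if result > high:
        return "scale to new sites"
    return "hold course and review next quarter"

# The test: three hypothetical results (low, medium, high) must map
# to different actions, or the indicator is decoration.
assert len({action_for(50), action_for(70), action_for(90)}) == 3
```

If you cannot write this function for an indicator, because every result maps to the same action, Question 4 has already given you its answer.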


Question 5: If we stopped collecting this, who would notice?

Why It Matters: The ultimate test of utility. This question identifies data collected from momentum rather than purpose. Many indicators persist because no one has stopped to ask whether they matter.

Red Flag Answers:

  • "The donor might ask about it"
  • "No one, probably"
  • "It's always been in the framework"
  • "The auditors"
  • Hesitation, followed by justification

Green Flag Answers:

  • "The Field Coordinator - she uses it weekly to prioritize site visits"
  • "The Regional Director - he cannot run the quarterly review without it"
  • "Communities would notice - we share this data back and they use it for planning"
  • "The frontline staff - they track their own performance with this"

The Test: Imagine announcing: "We will stop collecting this indicator next month." Who objects, and why? If objections are bureaucratic ("the donor might not like it") rather than operational ("I cannot make my decisions without it"), the indicator fails.


Quick Reference Table

| Question | What It Reveals | Pass Criterion |
|----------|-----------------|----------------|
| What decision? | Purpose | Named decision with clear outcome |
| Who decides? | Ownership | Named individual with decision authority |
| When needed? | Timing | Date that precedes decision point |
| What changes? | Utility | Different results lead to different actions |
| Who would notice? | Value | Named user with operational dependency |


The Compression Protocol

After testing each indicator, apply the Compression Protocol:

  1. Indicators that pass all 5 questions: Keep. These earn their collection burden.

  2. Indicators that pass 3-4 questions: Repair. Identify the missing elements and fix them before proceeding.

  3. Indicators that pass 1-2 questions: Challenge. Ask: Is this habit or purpose? Default to removal.

  4. Indicators that pass 0 questions: Remove. These are extraction without value.
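The four rules above reduce to a simple mapping from pass count to verdict. A minimal sketch, assuming only that each indicator is scored 0-5 against the questions; the function name `compression_verdict` is illustrative, not from the card:

```python
def compression_verdict(passes: int) -> str:
    """Map the number of Decision Test questions passed (0-5) to a verdict."""
    if not 0 <= passes <= 5:
        raise ValueError("passes must be between 0 and 5")
    if passes == 5:
        return "Keep"       # earns its collection burden
    if passes >= 3:
        return "Repair"     # fix the missing elements first
    if passes >= 1:
        return "Challenge"  # habit or purpose? default to removal
    return "Remove"         # extraction without value
```

Run against the Application Example below, a 0/5 score returns "Remove".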


Application Example

Indicator under review: "Number of community members trained in financial literacy"

| Question | Answer | Verdict |
|----------|--------|---------|
| What decision? | "Whether training is happening" | FAIL - describes activity, not decision |
| Who decides? | "The M&E team tracks it" | FAIL - tracking is not deciding |
| When needed? | "For the quarterly report" | FAIL - reporting deadline, not decision point |
| What changes? | "We report whatever the number is" | FAIL - no action varies by result |
| Who would notice? | "Probably no one operationally" | FAIL - no operational user |

Verdict: 0/5. This indicator certifies activity for upward accountability. It should be removed or replaced with an indicator that passes the test.

Replacement candidate: "Percentage of trained participants who correctly demonstrate household budgeting at 3-month follow-up"


One Final Question

Before adding any indicator, ask yourself: Would I be willing to defend this data collection to a skeptical frontline worker who will bear the burden of gathering it?

If the honest answer is no, do not collect it.