When to Use
Reporting practices apply every time M&E data is communicated, from a monthly field data report to a final programme evaluation. Good reporting practices are the bridge between data collection and evidence use. They matter because even the best evaluation produces no value if the findings are not read, understood, or acted upon.
How It Works
Step 1: Define the audience before writing
Every report has a primary audience with specific information needs, a decision-making context, and a level of technical familiarity. A USAID contracting officer reading for accountability requires different framing than a community oversight committee reading for programme feedback. Define the primary audience explicitly before drafting.
Step 2: Structure findings around decisions, not data
The most common reporting failure is organising a report around the data collection process ("Chapter 1: Methodology, Chapter 2: Survey Findings, Chapter 3: Interview Findings") rather than around the evaluation questions and decisions the findings answer. Organise by finding, not by method.
Step 3: Lead with conclusions, support with evidence
Busy decision-makers read executive summaries and skip the rest. The executive summary, and ideally the opening of every chapter, should state the finding directly, not narrate the analytical process. Evidence supports the conclusion; it does not precede it.
Step 4: Make recommendations specific and actionable
"Improve coordination" is not a recommendation; it is an aspiration. "Establish a monthly coordination meeting between the education and WASH programme teams, with an agenda item for shared beneficiary updates" is a recommendation. Each recommendation should state who should do what, by when, and with what resources.
Step 5: Triangulate before concluding
No single data source should drive a major finding. Ensure conclusions are supported by multiple evidence sources and that contradictory evidence is acknowledged and explained.
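The triangulation rule can be expressed as a simple check. This is a minimal sketch using an invented data model (the finding texts, source names, and the `untriangulated` helper are all hypothetical), flagging any finding that rests on fewer than two independent sources:

```python
# Hypothetical findings register: each finding lists the data sources
# that support it.
findings = [
    {"finding": "Attendance rose in District A",
     "sources": ["school records", "household survey"]},
    {"finding": "Teacher morale declined",
     "sources": ["key informant interviews"]},
]

def untriangulated(findings, minimum_sources=2):
    """Return findings supported by fewer than the minimum number of distinct sources."""
    return [f["finding"] for f in findings
            if len(set(f["sources"])) < minimum_sources]

# Findings in this list should not drive a major conclusion until more
# evidence is gathered or the gap is explicitly acknowledged.
print(untriangulated(findings))  # ['Teacher morale declined']
```

In practice this check belongs in the evidence trail: a finding that fails it either gains a second source or is reported with an explicit caveat.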
Step 6: Tailor format and length to the audience
A 60-page evaluation report is appropriate for a donor accountability submission. A 2-page brief with three key findings is appropriate for a community meeting. A slide deck with 10 findings is appropriate for a management team review. Produce multiple formats from the same data if multiple audiences exist.
Key Components
- Executive summary: no more than 2 pages; states purpose, key findings, and top recommendations
- Clear structure: organised around evaluation questions and findings, not methodology chapters
- Evidence trail: every major finding linked to its data source(s) with enough detail to trace
- Triangulated conclusions: findings supported by more than one data source
- Specific recommendations: who, what, when, not aspirational statements
- Tailored appendices: technical details (methodology, data tables, instruments) that belong in appendices, not in the main text
- Visual data presentation: charts, tables, and infographics that communicate quantitative findings more efficiently than paragraphs of numbers (see data visualization)
Best Practices
Write for reading, not for filing. Many evaluation reports are written to satisfy reporting requirements, not to communicate with readers. Before submitting, ask: would the intended audience actually read this? If not, reformat.
Disaggregate findings. Aggregate findings ("overall, the programme performed well") mask the differences that inform decisions. Report disaggregated findings by sub-group, geography, or time period.
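The masking effect is easy to demonstrate with a toy calculation. The district names and outcome scores below are invented for illustration; the point is that a single aggregate figure can hide a large sub-group gap:

```python
# Invented outcome scores for two districts.
records = [
    {"district": "North", "outcome": 82},
    {"district": "North", "outcome": 78},
    {"district": "South", "outcome": 51},
    {"district": "South", "outcome": 49},
]

# Aggregate view: one number, no gap visible.
overall = sum(r["outcome"] for r in records) / len(records)

# Disaggregated view: group scores by district, then average each group.
by_district = {}
for r in records:
    by_district.setdefault(r["district"], []).append(r["outcome"])
disaggregated = {d: sum(v) / len(v) for d, v in by_district.items()}

print(overall)        # 65.0 -- "overall, the programme performed well"
print(disaggregated)  # {'North': 80.0, 'South': 50.0} -- the gap that informs decisions
```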
Report negative findings. Reports that only present positive results are neither credible nor useful. Honest reporting of what did not work, with analysis of why, is more valuable than incomplete positive stories.
Deliver on time. A technically excellent report delivered after the decision has been made is useless. Manage the reporting process to meet the decision timeline, not the other way around.
Follow up on recommendations. Include in the report a recommendation tracking table that assigns ownership and timelines. Revisit at the next programme review whether recommendations have been implemented.
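A recommendation tracking table is, at minimum, a list of entries with an owner, a deadline, and a status. This sketch uses hypothetical entries and a hypothetical `overdue` helper to show the kind of follow-up query a programme review would run:

```python
from datetime import date

# Hypothetical tracking table: each recommendation has an owner,
# a deadline, and a current status.
tracker = [
    {"recommendation": "Establish monthly education/WASH coordination meeting",
     "owner": "Programme Manager", "due": date(2025, 3, 31), "status": "open"},
    {"recommendation": "Add shared beneficiary updates to the agenda",
     "owner": "M&E Officer", "due": date(2025, 1, 31), "status": "done"},
]

def overdue(tracker, today):
    """List open recommendations whose deadline has passed."""
    return [t["recommendation"] for t in tracker
            if t["status"] == "open" and t["due"] < today]

# At the next programme review, surface what has slipped.
print(overdue(tracker, date(2025, 4, 15)))
```

Whether the table lives in a spreadsheet or a database, the essential fields are the same: who owns it, by when, and whether it has been done.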
Common Mistakes
Burying the finding in the analysis. Writing "the data showed X, then Y, and after considering Z, we concluded..." wastes the reader's time. State the finding first, then the evidence. Decision-makers read in an inverted pyramid: they want the conclusion first.
Treating the ToR as the report structure. The Terms of Reference define what to evaluate; they do not define how to organise findings. Structuring a report as a ToR checklist produces reports that are hard to use.
Too many recommendations. Reports with 25 recommendations produce no change. Five specific, prioritised recommendations produce five changes. Quality over quantity.
Presenting data without interpretation. Data tables require interpretation. "Table 3 shows a 15% decline in outcome X" is not a finding; it is a data point. A finding explains what the decline means and why it occurred.
Related Topics
- Data Visualization for M&E: how to present quantitative findings visually
- Utilization-Focused Evaluation: designing evaluations (and their reports) around user needs
- Learning Agendas: the prior step of deciding what questions the report needs to answer
- Narrative Reporting: the specific practice of writing progress narratives for donor reports
- Evaluation Terms of Reference: the scope document that evaluation reports respond to