M&E Studio

Decision-Grade M&E, Responsibly Built

© 2026 Logic Lab LLC. All rights reserved.


Systematic Review

Definition

A systematic review is a rigorous, structured approach to identifying, appraising, and synthesizing all available evidence on a specific evaluation question using explicit, reproducible methods. Unlike traditional narrative reviews, systematic reviews follow a pre-specified protocol that minimizes bias through comprehensive literature searches, explicit inclusion/exclusion criteria, and systematic quality assessment of included studies.

The approach is particularly valuable when you need to answer questions like "What interventions have been shown to work for X outcome?" or "What does the evidence say about the effectiveness of approach Y?" Systematic reviews are often conducted by research institutions, evaluation departments, or donor organizations seeking to inform programme design and policy decisions based on the best available evidence rather than anecdote or selective citation.

Why It Matters

Systematic reviews transform scattered research findings into actionable evidence for M&E practitioners. They provide a defensible basis for programme decisions by answering what has actually worked in similar contexts, rather than relying on what might work based on theory or limited examples. This evidence aggregation reduces the risk of replicating failed approaches and helps identify which intervention components are most critical for success.

For donor organizations and large programmes, systematic reviews support knowledge management by creating a curated, up-to-date understanding of what the evidence base contains, and more importantly, what it doesn't. This gap identification informs where new evaluation investment is most needed and prevents redundant research efforts.

In Practice

A well-executed systematic review follows these key steps:

  1. Develop a protocol specifying the review question, search strategy, inclusion/exclusion criteria, and quality assessment methods before beginning the search.

  2. Conduct comprehensive searches across multiple databases (e.g., Google Scholar, PubMed, World Bank Open Knowledge Repository, evaluation repositories) using predefined search terms and strategies.

  3. Screen and select studies based on explicit criteria, typically involving two independent reviewers to minimize selection bias.

  4. Appraise study quality using standardized tools appropriate to the study designs included (e.g., risk of bias assessments for randomized trials, quality checklists for qualitative studies).

  5. Extract and synthesize data from included studies, either narratively or through meta-analysis when studies are sufficiently similar in design and outcomes.

  6. Assess certainty of evidence using frameworks like GRADE to communicate confidence in the findings.

Systematic reviews are resource-intensive and require specialized skills in research methodology. They are most appropriate when making high-stakes decisions about programme approach, when a topic has accumulated substantial research, or when an organization needs to establish an evidence base for advocacy or policy work.
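The dual-reviewer screening in step 3 is commonly audited with an inter-rater agreement statistic such as Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch (illustrative data; review teams typically compute this with standard statistical software):

```python
def cohens_kappa(reviewer_a, reviewer_b):
    """Cohen's kappa for two reviewers' include/exclude decisions.

    reviewer_a, reviewer_b: equal-length lists of labels, one per
    screened record (e.g. "include" / "exclude").
    """
    assert len(reviewer_a) == len(reviewer_b) and reviewer_a
    n = len(reviewer_a)
    # Observed agreement: fraction of records both reviewers labelled the same.
    p_observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    # Chance agreement: product of each reviewer's marginal label rates.
    labels = set(reviewer_a) | set(reviewer_b)
    p_chance = sum(
        (reviewer_a.count(lab) / n) * (reviewer_b.count(lab) / n)
        for lab in labels
    )
    if p_chance == 1.0:  # both reviewers used a single identical label
        return 1.0
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical screening of four abstracts:
a = ["include", "include", "exclude", "exclude"]
b = ["include", "exclude", "exclude", "exclude"]
kappa = cohens_kappa(a, b)
```

Kappa near 1 indicates strong agreement; low values signal that the inclusion criteria need clarification before screening continues.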

Related Topics

  • Impact Evaluation: primary studies that systematic reviews synthesize
  • Meta-Analysis: statistical technique often used within systematic reviews
  • Knowledge Management: broader practice of organizing and using evidence
  • Literature Review: less rigorous alternative approach
  • Comparative Analysis: related evidence synthesis approach

See also: Evidence Synthesis

At a Glance

Aggregates evidence from multiple studies to answer a specific evaluation question with minimal bias.

Best For

  • Informing programme design with what has worked elsewhere
  • Updating intervention approaches based on current evidence
  • Identifying gaps in the research literature
  • Supporting systematic evidence-informed decision making

Complexity: High

Timeframe: 3-12 months depending on scope

Related Topics

  • Impact Evaluation (Pillar): A rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.
  • Literature Review (Term): A systematic, critical synthesis of existing research on a specific topic, identifying what is known, gaps in knowledge, and evidence for programme design.
  • Knowledge Management for M&E (Core Concept): The systematic process of capturing, organising, and applying lessons, evidence, and insights from M&E across programmes and over time to improve organisational decision-making.