M&E Reference


Pillars

Major frameworks and methodologies

Pillar · Frameworks

Logframe / Logical Framework

A structured matrix that summarizes a project's design, linking activities to expected results through a clear hierarchy of objectives with indicators, verification sources, and assumptions.

Complexity: Medium
Timeframe: 1-3 weeks for initial development
11 min read
Pillar · Methods

Most Significant Change

A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.

Complexity: Medium
Timeframe: 3-6 weeks for initial cycle
14 min read
Pillar · Methods

Outcome Harvesting

A retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.

Complexity: Medium
Timeframe: 4-8 weeks for a complete harvest cycle
12 min read
Pillar · Methods

Participatory Evaluation

An evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.

Complexity: High
Timeframe: 3-8 weeks longer than a conventional evaluation, owing to stakeholder engagement processes
13 min read
Pillar · Methods

Process Tracing

A within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.

Complexity: High
Timeframe: 3-8 weeks depending on evidence availability and case complexity
14 min read
Pillar · Frameworks

Results Framework

A structured collection of indicators organized by results level that tracks programme performance across a portfolio, focusing on what changed rather than what was delivered.

Complexity: Medium
Timeframe: 1-2 weeks for initial design
10 min read
Pillar · Frameworks

Theory of Change

A structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.

Complexity: Medium
Timeframe: 2-6 weeks for initial development
11 min read

Core Concepts

Key practices and processes

Core Concept · Data Collection

Baseline Design

A structured approach to collecting initial condition data that directly informs project decisions, minimizes burden, and enables valid comparison with endline measurements.

13 min read
Core Concept · Evaluation

Cost-Effectiveness Analysis

A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.

11 min read
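The core arithmetic behind cost-effectiveness comparisons can be sketched in a few lines: an average cost per outcome for each option, plus an incremental cost-effectiveness ratio (ICER) for choosing the dearer option over the cheaper one. All figures and intervention names below are purely illustrative.

```python
# Hypothetical cost-effectiveness comparison of two interventions.
# All figures are illustrative, not drawn from any real programme.

def cost_per_outcome(total_cost, outcomes_achieved):
    """Average cost of producing one unit of outcome."""
    return total_cost / outcomes_achieved

def icer(cost_a, outcomes_a, cost_b, outcomes_b):
    """Incremental cost-effectiveness ratio: extra cost per extra
    outcome when choosing intervention A over intervention B."""
    return (cost_a - cost_b) / (outcomes_a - outcomes_b)

# Intervention A: $120,000 for 400 children reading at grade level
# Intervention B: $80,000 for 250 children reading at grade level
print(cost_per_outcome(120_000, 400))   # 300.0 per child
print(cost_per_outcome(80_000, 250))    # 320.0 per child
print(round(icer(120_000, 400, 80_000, 250), 2))  # 266.67 per extra child
```

Note that the lower average cost per outcome does not by itself settle the decision; the ICER shows what each additional outcome costs when scaling up to the larger intervention.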
Core Concept · Data Quality

Data Collection Burden

The total time, effort, and resources required from respondents and implementers to complete data collection activities, balanced against data quality needs and programme capacity.

9 min read
Core Concept · Data Quality

Data Management

The systematic processes for collecting, storing, securing, and maintaining data quality throughout the data lifecycle to ensure information is accurate, accessible, and usable for decision-making.

12 min read
Core Concept · Data Quality

Data Quality Assurance

A systematic process for verifying that collected data meets five quality dimensions—Validity, Integrity, Precision, Reliability, and Timeliness—ensuring data is fit for decision-making.

12 min read
Core Concept · Reporting

Data Visualization for M&E

The strategic use of charts, dashboards, and infographics to communicate monitoring data to diverse stakeholders, transforming raw numbers into actionable insights for decision-making.

15 min read
Core Concept · Evaluation

Evaluation Criteria (DAC)

The OECD-DAC framework provides six standard criteria—relevance, coherence, effectiveness, efficiency, impact, and sustainability—for systematically assessing the merit and value of development interventions.

12 min read
Core Concept · Evaluation

Evaluation Matrix

A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach — the operational blueprint for executing an evaluation.

12 min read
Core Concept · Evaluation

Evaluation Terms of Reference

A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organization and the evaluation team.

12 min read
Core Concept · Data Collection

Focus Group Discussions

A qualitative data collection method that brings together 6-10 participants to discuss a specific topic, generating rich insights through group interaction and shared experiences.

13 min read
Core Concept · Indicators

Indicator Selection & Development

The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.

11 min read
Core Concept · Learning

Learning Agendas

A structured set of priority learning questions that guide systematic inquiry throughout programme implementation, turning monitoring data into actionable knowledge for decision-making.

13 min read
Core Concept · Planning

M&E Plans

A detailed operational document that translates your logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.

12 min read
Core Concept · Planning

M&E System Design

A structured approach to building the organizational infrastructure, processes, and capacities needed to collect, analyze, and use M&E data for decision-making throughout a programme's life.

12 min read
Core Concept · Evaluation

Mixed Methods Evaluation

An evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of programme effects, mechanisms, and context.

10 min read
Core Concept · Planning

Needs Assessment

A systematic process for identifying and analyzing gaps between current conditions and desired outcomes, establishing the evidence base for programme design and indicator selection.

11 min read
Core Concept · Data Collection

Observation Methods

A systematic approach to collecting data by directly watching and recording behaviours, interactions, and processes as they occur in natural settings.

12 min read
Core Concept · Indicators

Proxy Indicators

Indirect measures used when direct measurement of the intended outcome is impossible, impractical, or too costly, requiring careful validation to ensure they accurately represent the target construct.

11 min read
Core Concept · Evaluation

Rubric-Based Assessment

A structured evaluation approach using predefined criteria and performance levels to systematically assess programmes, projects, or interventions against established standards.

12 min read
Core Concept · Data Collection

Sampling Methods

Systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.

11 min read
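The two workhorse designs named in most sampling plans, simple random and stratified sampling, can be sketched with the standard library alone. The household and district data below are invented for illustration.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Simple random sampling: every unit has an equal chance
    of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, key, n_per_stratum, seed=None):
    """Stratified sampling: draw a fixed number of units from each
    stratum (e.g. district), guaranteeing that small subgroups
    are represented."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(key(unit), []).append(unit)
    return [u for units in strata.values()
              for u in rng.sample(units, min(n_per_stratum, len(units)))]

# Illustrative sampling frame: 100 households in each of 3 districts
households = [{"id": i, "district": d}
              for d in ("North", "South", "East")
              for i in range(100)]
simple = simple_random_sample(households, 30, seed=1)
strat = stratified_sample(households, lambda h: h["district"], 10, seed=1)
print(len(simple), len(strat))  # 30 30
```

A simple random draw of 30 can, by chance, under-represent a small district; the stratified draw fixes ten households per district by construction, at the cost of needing the stratum sizes for weighting the analysis.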
Core Concept · Indicators

SMART Indicators

A quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.

9 min read
Core Concept

Value for Money

The optimal balance of cost, quality, and outcomes — achieving the best results for the resources invested, assessed through the 4Es: economy, efficiency, effectiveness, and equity.

6 min read

Terms

Definitions and glossary

3 min read

Accountability Evaluation

An evaluation focused on assessing whether a programme is meeting its obligations to stakeholders, including donors, beneficiaries, and regulatory bodies.

Learning · 4 min read

After-Action Review

A structured, time-bound reflection process conducted immediately after a specific activity or milestone to capture what was planned, what actually happened, why the two differed, and what should change.

Methods · 3 min read

Attribution vs Contribution

The distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).

3 min read

Audit Evaluation

An evaluation focused on assessing financial probity, internal controls, and compliance with financial regulations and procurement standards.

Indicators · 3 min read

Baseline

Initial conditions data collected at the start of a project to establish a reference point for measuring change and setting indicator targets.

Indicators · 3 min read

Benchmark

A reference point or standard value used to measure progress, typically derived from historical data, industry standards, or comparable programmes.

Methods · 3 min read

Beneficiary Feedback

Systematic collection and use of input from programme beneficiaries about their experiences, needs, and priorities to improve accountability and programme relevance.

3 min read

Bias

Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.

Learning · 2 min read

Capacity Strengthening

The process of developing skills, systems, and relationships that enable individuals and organizations to achieve their development goals sustainably.

3 min read

Causal Inference

The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.

Data Collection · 3 min read

Census vs Sample

The choice between measuring every unit in a population (census) versus selecting a subset (sample) determines cost, precision, and what inferences you can make about your programme.

3 min read

Communication Strategies

Intentional approaches to sharing M&E findings and programme information with stakeholders to influence decisions, build accountability, and promote learning.

3 min read

Compliance Evaluation

An evaluation focused on assessing whether a programme adheres to legal, regulatory, donor, and organizational requirements and standards.

Indicators · 3 min read

Composite Indicator

A single index or score that combines multiple individual indicators, enabling measurement of multidimensional concepts that no single metric can capture.

4 min read
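The two mechanical steps behind most composite indicators, rescaling each component to a common range and then taking a weighted average, can be sketched as follows. The "service quality" index, its component indicators, and the weights are all invented for illustration.

```python
def normalize(value, lo, hi):
    """Min-max rescaling to the 0-1 range, so indicators measured
    on different scales can be combined."""
    return (value - lo) / (hi - lo)

def composite_score(indicators, weights):
    """Weighted average of normalized indicator values.
    Assumes the weights sum to 1."""
    return sum(indicators[name] * w for name, w in weights.items())

# Illustrative "service quality" index from three components
raw = {"staff_per_1000": 4.2, "stockout_days": 12, "satisfaction": 78}
norm = {
    "staff_per_1000": normalize(raw["staff_per_1000"], 0, 10),
    # Fewer stockout days is better, so invert that scale
    "stockout_days": 1 - normalize(raw["stockout_days"], 0, 30),
    "satisfaction": normalize(raw["satisfaction"], 0, 100),
}
weights = {"staff_per_1000": 0.4, "stockout_days": 0.3, "satisfaction": 0.3}
print(round(composite_score(norm, weights), 3))  # 0.582
```

The choice of normalization bounds and weights drives the result as much as the data does, which is why composite indicators need careful validation and a documented methodology.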

Confounding Variables

Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.

Methods · 4 min read

Content Analysis

A systematic approach to analysing communication content — identifying patterns, themes, and biases in text, audio, or video data through structured coding.

Learning · 3 min read

Continuous Improvement

A systematic, ongoing approach to enhancing programme performance through iterative learning, feedback, and adaptation.

3 min read

Counterfactual

The comparison between what happened and what would have happened in the absence of an intervention — the fundamental basis for establishing causal attribution in impact evaluation.

Indicators · 2 min read

Custom vs Standard Indicators

The choice between donor-provided standard indicators and programme-specific custom indicators, balancing compliance requirements with contextual relevance.

Reporting · 3 min read

Donor Reporting

The process of systematically communicating programme progress, results, and financial information to funding organizations according to their specific requirements and timelines.

2 min read

Empowerment Evaluation

A self-evaluation approach in which programme participants systematically assess their own work to improve programmes and build lasting ownership of results.

Indicators · 3 min read

Endline

A final data collection point at programme completion that measures achieved outcomes against baseline and target values.

Methods · 3 min read

Evidence Synthesis

The systematic process of identifying, selecting, and integrating findings from multiple studies to inform programme design, evaluation, and decision-making.

4 min read

Ex-Ante vs Ex-Post Evaluation

The temporal dimension of evaluation — ex-ante occurs before implementation to inform design, while ex-post occurs after completion to assess outcomes and lessons.

Learning · 3 min read

Feedback Loop

A structured process for collecting, analysing, and acting on information to improve programme performance and outcomes.

3 min read

Formative vs Summative Evaluation

Formative evaluation improves programmes during implementation; summative evaluation judges their overall merit after completion.

3 min read

Impact Stories

Narrative accounts that illustrate how a programme has influenced the lives of beneficiaries, combining quantitative outcomes with qualitative human experience.

Reporting · 2 min read

Indicator Reporting

The systematic collection, compilation, and presentation of indicator data to track programme performance and communicate results to stakeholders and donors.

Frameworks · 3 min read

Intervention Logic

The causal chain connecting programme activities to intended outcomes, showing how and why a set of interventions is expected to lead to desired change.

Learning · 3 min read

Knowledge Sharing

The deliberate practice of capturing, organizing, and distributing insights, lessons, and best practices across teams and organizations to improve programme performance and avoid repeating mistakes.

Learning · 3 min read

Learning Cycles

Structured, recurring periods of reflection and adaptation where programme teams review data, draw lessons, and adjust implementation accordingly.

Methods · 3 min read

Literature Review

A systematic, critical synthesis of existing research on a specific topic, identifying what is known, gaps in knowledge, and evidence for programme design.

Methods · 3 min read

LQAS

Lot Quality Assurance Sampling is a rapid decision-making method that classifies programmes or areas as pass/fail against a coverage threshold, commonly used for health programme monitoring.

3 min read
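An LQAS design is just a sample size and a decision rule. A common textbook design samples 19 units per supervision area and passes the area if at least 13 meet the standard, which targets roughly an 80% coverage benchmark. The sketch below shows the classification rule and, using the binomial distribution, the risk of wrongly failing an area that truly meets the benchmark; the specific numbers are illustrative.

```python
from math import comb

def lqas_pass(successes, decision_rule):
    """Classify a supervision area: pass if at least `decision_rule`
    of the sampled units meet the standard."""
    return successes >= decision_rule

def misclassification_risk(n, d, true_coverage):
    """Binomial probability that an area whose true coverage is
    `true_coverage` yields fewer than d successes out of n sampled,
    and is therefore wrongly classified as failing."""
    return sum(comb(n, k) * true_coverage**k * (1 - true_coverage)**(n - k)
               for k in range(d))

# Illustrative design: n=19, decision rule d=13 (~80% coverage target)
print(lqas_pass(successes=15, decision_rule=13))  # True: area passes
print(lqas_pass(successes=11, decision_rule=13))  # False: area fails
print(misclassification_risk(19, 13, 0.80))       # small provider risk
```

Designers pick n and d to keep both this risk and the converse risk (passing a genuinely low-coverage area) at acceptable levels.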

Meta-Evaluation

The systematic evaluation of an evaluation's quality, assessing whether it met professional standards and produced credible, useful findings.

Indicators · 3 min read

Milestone

A significant intermediate checkpoint or event that signals progress toward a target, used to track whether a programme is on schedule to achieve its intended outcomes.

4 min read

Monitoring vs Evaluation

Monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.

Reporting · 3 min read

Narrative Reporting

Qualitative, story-based reporting that contextualizes quantitative indicators with explanations of what happened, why it happened, and what it means for programme learning and decision-making.

Learning · 3 min read

Organisational Learning

The systematic process by which an organisation captures, analyses, and applies lessons from experience to improve programme performance and decision-making.

3 min read

Outcome-Level Analysis

The systematic examination of outcomes to determine whether a programme achieved its intended results, distinguishing expected from unexpected outcomes and assessing the significance and sustainability of the changes observed.

3 min read

Participatory M&E

An approach to monitoring and evaluation that actively involves stakeholders—especially beneficiaries—at every stage, from design through to using findings for decision-making.

Reporting · 3 min read

Performance Dashboards

Visual management interfaces that display key performance indicators in real-time, enabling programme teams and stakeholders to monitor progress, identify issues, and make data-driven decisions.

3 min read

Performance Evaluation

An assessment of how well a programme or organisation is achieving its intended results and operating efficiently against established standards and targets.

3 min read

Performance Management

The systematic use of monitoring data, evaluation findings, and feedback to guide programme decisions, improve results, and ensure accountability to stakeholders.

Methods · 3 min read

Primary vs Secondary Data

Primary data is collected firsthand for a specific purpose; secondary data is existing data repurposed for new analysis. Each has distinct trade-offs in cost, timeliness, and relevance.

Data Collection · 3 min read

Purposive Sampling

A non-probability sampling approach where researchers deliberately select participants based on specific characteristics or knowledge relevant to the research objectives.

3 min read

Qualitative Data

Non-numerical information captured through words, images, or observations that reveals the how and why behind programme outcomes, providing depth and context to quantitative findings.

3 min read

Quantitative Data

Numerical data collected through structured measurement, enabling statistical analysis, generalization, and objective comparison across programmes and contexts.

Methods · 3 min read

Random Sampling

A probability sampling method where every member of the population has an equal, known chance of selection, enabling statistical inference to the broader population.

Methods · 3 min read

Randomised Controlled Trial

An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.

Methods · 3 min read

Rapid Assessment

A condensed data collection approach designed to generate actionable insights quickly, typically using streamlined qualitative and quantitative methods in time-constrained contexts.

3 min read

Real-Time Evaluation

An evaluation approach conducted during programme implementation to provide immediate feedback for adaptive management and mid-course corrections.

3 min read

Real-Time Monitoring

The continuous collection and analysis of data during programme implementation to enable rapid detection of issues and timely corrective action.

Learning · 3 min read

Reflection Sessions

Structured gatherings where programme teams and stakeholders pause to examine what happened, why it happened, and what should change as a result.

4 min read

Reliability

The consistency and repeatability of a measurement — whether the same tool produces stable results across repeated applications, different raters, or different time periods.

Frameworks · 7 min read

Results Chain

The sequential hierarchy of change from activities through outputs, outcomes, and impact that shows how a programme is expected to create change.

3 min read

Statistical Significance

A statistical measure indicating whether observed results are likely due to a real effect rather than random chance, typically assessed using p-values and hypothesis testing.

3 min read
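The logic of a p-value can be made concrete with a permutation test, which needs no distributional tables: shuffle the group labels many times and ask how often chance alone produces a mean difference as large as the one observed. The test-score data below are invented for illustration.

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Two-sided permutation test: the p-value is the proportion of
    random relabelings whose mean difference is at least as extreme
    as the observed one. A small p-value suggests the observed
    difference is unlikely to be due to chance alone."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Illustrative test scores: treatment vs comparison group
treated = [72, 75, 78, 80, 74, 77, 79, 81]
control = [68, 70, 72, 69, 71, 73, 70, 72]
p = permutation_p_value(treated, control)
print(p < 0.05)  # True at the conventional 5% threshold
```

Statistical significance says nothing about the size or practical importance of an effect, which is why significance tests are usually reported alongside effect sizes.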

Storytelling for Impact

The strategic use of narrative to make M&E findings memorable, actionable, and influential for decision-makers and stakeholders.

2 min read

Sustainability Evaluation

Assessment of a programme's continued benefits and functionality after external funding has ended, examining whether outcomes persist and systems remain operational.

3 min read

Systematic Review

A rigorous, structured approach to identifying, appraising, and synthesizing all available evidence on a specific evaluation question using explicit, reproducible methods.

Methods · 3 min read

Thematic Analysis

A systematic method for identifying, analyzing, and reporting patterns (themes) in qualitative data through coding and categorization.

Methods · 3 min read

Triangulation

Using multiple data sources, methods, or perspectives to cross-verify findings and strengthen the validity of evaluation conclusions.

3 min read

Validity (Internal & External)

The degree to which an evaluation accurately demonstrates causal relationships (internal validity) and generalizes findings beyond the study context (external validity).