M&E Reference
Pillars
Major frameworks and methodologies
Logframe / Logical Framework
A structured matrix that summarizes a project's design, linking activities to expected results through a clear hierarchy of objectives with indicators, verification sources, and assumptions.
Most Significant Change
A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.
Outcome Harvesting
A retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.
Participatory Evaluation
An evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.
Process Tracing
A within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.
Results Framework
A structured collection of indicators organized by results level that tracks programme performance across a portfolio, focusing on what changed rather than what was delivered.
Theory of Change
A structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.
Core Concepts
Key practices and processes
Baseline Design
A structured approach to collecting initial condition data that directly informs project decisions, minimizes burden, and enables valid comparison with endline measurements.
Cost-Effectiveness Analysis
A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.
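As a minimal sketch of the core calculation, the Python snippet below compares two hypothetical interventions by cost per unit of outcome; all names and figures are invented for illustration.

```python
# Cost-effectiveness ratio (CER) = total cost / units of outcome achieved.
# Both interventions and all figures are hypothetical.
interventions = {
    "school_feeding": {"total_cost": 120_000, "children_retained": 800},
    "cash_transfer":  {"total_cost": 150_000, "children_retained": 1_200},
}

for name, d in interventions.items():
    cer = d["total_cost"] / d["children_retained"]
    print(f"{name}: ${cer:.2f} per child retained in school")
# school_feeding: $150.00 per child; cash_transfer: $125.00 per child,
# so the cash transfer is more cost-effective on this single outcome measure.
```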
Data Collection Burden
The total time, effort, and resources required from respondents and implementers to complete data collection activities, balanced against data quality needs and programme capacity.
Data Management
The systematic processes for collecting, storing, securing, and maintaining data quality throughout the data lifecycle to ensure information is accurate, accessible, and usable for decision-making.
Data Quality Assurance
A systematic process for verifying that collected data meets five quality dimensions—Validity, Integrity, Precision, Reliability, and Timeliness—ensuring data is fit for decision-making.
Data Visualization for M&E
The strategic use of charts, dashboards, and infographics to communicate monitoring data to diverse stakeholders, transforming raw numbers into actionable insights for decision-making.
Evaluation Criteria (DAC)
The OECD-DAC framework provides six standard criteria for systematically assessing the merit and value of development interventions: relevance, coherence, effectiveness, efficiency, impact, and sustainability.
Evaluation Matrix
A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach — the operational blueprint for executing an evaluation.
Evaluation Terms of Reference
A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organization and the evaluation team.
Focus Group Discussions
A qualitative data collection method that brings together 6-10 participants to discuss a specific topic, generating rich insights through group interaction and shared experiences.
Indicator Selection & Development
The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.
Learning Agendas
A structured set of priority learning questions that guide systematic inquiry throughout programme implementation, turning monitoring data into actionable knowledge for decision-making.
M&E Plans
A detailed operational document that translates a programme's logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.
M&E System Design
A structured approach to building the organizational infrastructure, processes, and capacities needed to collect, analyse, and use M&E data for decision-making throughout a programme's life.
Mixed Methods Evaluation
An evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of programme effects, mechanisms, and context.
Needs Assessment
A systematic process for identifying and analysing gaps between current conditions and desired outcomes, establishing the evidence base for programme design and indicator selection.
Observation Methods
A systematic approach to collecting data by directly watching and recording behaviours, interactions, and processes as they occur in natural settings.
Proxy Indicators
Indirect measures used when direct measurement of the intended outcome is impossible, impractical, or too costly, requiring careful validation to ensure they accurately represent the target construct.
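One common validation step is to check, on a subsample where the direct measure is available, how strongly the proxy correlates with it. The sketch below uses hypothetical data (roof material as a proxy for household income) and Python's standard library (statistics.correlation, Python 3.10+).

```python
from statistics import correlation  # Python 3.10+

# Hypothetical validation subsample: direct income measure vs. a
# 1-4 roof-material score used as a proxy for household wealth.
income = [120, 340, 210, 560, 90, 430, 280, 150]   # USD per month
roof_score = [1, 3, 2, 4, 1, 3, 2, 1]

r = correlation(income, roof_score)  # Pearson's r
print(f"Pearson r = {r:.2f}")  # values near 1 suggest the proxy tracks the construct
```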
Rubric-Based Assessment
A structured evaluation approach using predefined criteria and performance levels to systematically assess programmes, projects, or interventions against established standards.
Sampling Methods
Systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.
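A worked example of one standard calculation: Cochran's sample-size formula for estimating a proportion, with a finite population correction. The function name and parameter defaults are illustrative, not a universal standard.

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's formula for estimating a proportion, with finite
    population correction. p = 0.5 is the most conservative
    (largest-sample) assumption; z = 1.96 gives 95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(population=5_000))  # 357 respondents
```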
SMART Indicators
A quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.
Value for Money
The optimal balance of cost, quality, and outcomes — achieving the best results for the resources invested, assessed through the 4Es: economy, efficiency, effectiveness, and equity.
Terms
Definitions and glossary
Accountability Evaluation
An evaluation focused on assessing whether a programme is meeting its obligations to stakeholders, including donors, beneficiaries, and regulatory bodies.
After-Action Review
A structured, time-bound reflection process conducted immediately after a specific activity or milestone to capture what was planned, what actually happened, why the two differed, and what should change.
Attribution vs Contribution
The distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).
Audit Evaluation
An evaluation focused on assessing financial probity, internal controls, and compliance with financial regulations and procurement standards.
Baseline
Initial conditions data collected at the start of a project to establish a reference point for measuring change and setting indicator targets.
Benchmark
A reference point or standard value used to measure progress, typically derived from historical data, industry standards, or comparable programmes.
Beneficiary Feedback
Systematic collection and use of input from programme beneficiaries about their experiences, needs, and priorities to improve accountability and programme relevance.
Bias
Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.
Capacity Strengthening
The process of developing skills, systems, and relationships that enable individuals and organizations to achieve their development goals sustainably.
Causal Inference
The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.
Census vs Sample
The choice between measuring every unit in a population (census) and selecting a subset (sample) determines cost, precision, and the inferences that can be drawn about the programme.
Communication Strategies
Intentional approaches to sharing M&E findings and programme information with stakeholders to influence decisions, build accountability, and promote learning.
Compliance Evaluation
An evaluation focused on assessing whether a programme adheres to legal, regulatory, donor, and organizational requirements and standards.
Composite Indicator
A single index or score that combines multiple individual indicators, enabling measurement of multidimensional concepts that cannot be captured by any single metric.
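A minimal sketch of one common construction: min-max normalise each component to the 0-1 range (inverting components where lower is better), then take a weighted average. Component names, weights, and district values below are hypothetical.

```python
# Hypothetical education index across three districts.
components = {
    "literacy_rate": [0.62, 0.71, 0.55],
    "school_access": [0.80, 0.65, 0.90],
    "pupil_teacher_ratio": [25, 40, 32],  # lower is better
}
weights = {"literacy_rate": 0.4, "school_access": 0.4, "pupil_teacher_ratio": 0.2}

def minmax(values, invert=False):
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return [1 - s for s in scaled] if invert else scaled

normalised = {name: minmax(vals, invert=(name == "pupil_teacher_ratio"))
              for name, vals in components.items()}

index = [sum(weights[n] * normalised[n][i] for n in components) for i in range(3)]
print([round(x, 2) for x in index])  # one composite score per district
```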
Confounding Variables
Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.
Content Analysis
A systematic approach to analysing communication content — identifying patterns, themes, and biases in text, audio, or video data through structured coding.
Continuous Improvement
A systematic, ongoing approach to enhancing programme performance through iterative learning, feedback, and adaptation.
Counterfactual
The comparison between what happened and what would have happened in the absence of an intervention — the fundamental basis for establishing causal attribution in impact evaluation.
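Since the counterfactual is never directly observed, evaluators approximate it with a comparison group. A difference-in-differences calculation is one common sketch of this logic; all numbers below are hypothetical.

```python
# Outcome: % of farmers adopting an improved practice (hypothetical data).
treatment = {"baseline": 40.0, "endline": 55.0}
comparison = {"baseline": 38.0, "endline": 45.0}

change_t = treatment["endline"] - treatment["baseline"]    # 15 points
change_c = comparison["endline"] - comparison["baseline"]  # 7 points

# The comparison group's change stands in for what would have
# happened to the treatment group anyway (the counterfactual).
effect = change_t - change_c
print(f"Estimated programme effect: {effect:.1f} percentage points")  # 8.0
```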
Custom vs Standard Indicators
The choice between donor-provided standard indicators and programme-specific custom indicators, balancing compliance requirements with contextual relevance.
Donor Reporting
The process of systematically communicating programme progress, results, and financial information to funding organizations according to their specific requirements and timelines.
Empowerment Evaluation
A self-evaluation approach in which programme participants systematically assess their own work to improve programmes and foster ownership and self-determination.
Endline
A final data collection point at programme completion that measures achieved outcomes against baseline and target values.
Evidence Synthesis
The systematic process of identifying, selecting, and integrating findings from multiple studies to inform programme design, evaluation, and decision-making.
Ex-Ante vs Ex-Post Evaluation
The temporal dimension of evaluation — ex-ante occurs before implementation to inform design, while ex-post occurs after completion to assess outcomes and lessons.
Feedback Loop
A structured process for collecting, analysing, and acting on information to improve programme performance and outcomes.
Formative vs Summative Evaluation
Formative evaluation improves programmes during implementation; summative evaluation judges their overall merit after completion.
Impact Stories
Narrative accounts that illustrate how a programme has influenced the lives of beneficiaries, combining quantitative outcomes with qualitative human experience.
Indicator Reporting
The systematic collection, compilation, and presentation of indicator data to track programme performance and communicate results to stakeholders and donors.
Intervention Logic
The causal chain connecting programme activities to intended outcomes, showing how and why a set of interventions is expected to lead to desired change.
Knowledge Sharing
The deliberate practice of capturing, organizing, and distributing insights, lessons, and best practices across teams and organizations to improve programme performance and avoid repeating mistakes.
Learning Cycles
Structured, recurring periods of reflection and adaptation where programme teams review data, draw lessons, and adjust implementation accordingly.
Literature Review
A systematic, critical synthesis of existing research on a specific topic, identifying what is known, gaps in knowledge, and evidence for programme design.
LQAS
Lot Quality Assurance Sampling is a rapid decision-making method that uses small samples to classify programmes or areas as pass/fail against a performance threshold, commonly used in health programme monitoring.
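A minimal classification sketch using the classic 19-unit LQAS sample. The decision rule (minimum number of "passes" in the sample) is read from published LQAS tables for a given coverage target; the rule and the area data below are illustrative.

```python
SAMPLE_SIZE = 19     # the classic LQAS sample size for health surveys
DECISION_RULE = 13   # illustrative rule, shown here for an ~80% coverage target

# Hypothetical supervision areas: vaccinated children out of 19 sampled.
areas = {"district_a": 15, "district_b": 11}

for area, successes in areas.items():
    status = "PASS" if successes >= DECISION_RULE else "FAIL (prioritise support)"
    print(f"{area}: {successes}/{SAMPLE_SIZE} -> {status}")
```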
Meta-Evaluation
The systematic evaluation of an evaluation's quality, assessing whether it met professional standards and produced credible, useful findings.
Milestone
A significant intermediate checkpoint or event that signals progress toward a target, used to track whether a programme is on schedule to achieve its intended outcomes.
Monitoring vs Evaluation
Monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.
Narrative Reporting
Qualitative, story-based reporting that contextualizes quantitative indicators with explanations of what happened, why it happened, and what it means for programme learning and decision-making.
Organisational Learning
The systematic process by which an organisation captures, analyses, and applies lessons from experience to improve programme performance and decision-making.
Outcome-Level Analysis
The systematic examination of outcomes to determine whether a programme achieved its intended results, distinguishing between expected and unexpected outcomes, and assessing the significance and sustainability of changes observed.
Participatory M&E
An approach to monitoring and evaluation that actively involves stakeholders—especially beneficiaries—at every stage, from design through to using findings for decision-making.
Performance Dashboards
Visual management interfaces that display key performance indicators in real-time, enabling programme teams and stakeholders to monitor progress, identify issues, and make data-driven decisions.
Performance Evaluation
An assessment of how well a programme or organisation is achieving its intended results and operating efficiently against established standards and targets.
Performance Management
The systematic use of monitoring data, evaluation findings, and feedback to guide programme decisions, improve results, and ensure accountability to stakeholders.
Primary vs Secondary Data
Primary data is collected firsthand for a specific purpose; secondary data is existing data repurposed for new analysis. Each has distinct trade-offs in cost, timeliness, and relevance.
Purposive Sampling
A non-probability sampling approach where researchers deliberately select participants based on specific characteristics or knowledge relevant to the research objectives.
Qualitative Data
Non-numerical information captured through words, images, or observations that reveals the how and why behind programme outcomes, providing depth and context to quantitative findings.
Quantitative Data
Numerical data collected through structured measurement, enabling statistical analysis, generalization, and objective comparison across programmes and contexts.
Random Sampling
A probability sampling method where every member of the population has an equal, known chance of selection, enabling statistical inference to the broader population.
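A minimal sketch with Python's standard library: drawing a 10% simple random sample from a hypothetical sampling frame of household IDs.

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

sampling_frame = [f"HH-{i:04d}" for i in range(1, 1_201)]  # 1,200 households
sample = random.sample(sampling_frame, k=120)  # each household has an equal chance

print(sample[:5])  # first few selected household IDs
```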
Randomised Controlled Trial
An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.
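A toy sketch of the two essential steps: random assignment to treatment and control, then comparison of endline means. Outcomes here are simulated purely so the example has an effect to detect.

```python
import random
from statistics import mean

random.seed(7)
participants = [f"P{i:03d}" for i in range(1, 201)]
random.shuffle(participants)  # random assignment removes selection bias in expectation
treatment, control = set(participants[:100]), set(participants[100:])

# Simulated endline scores (treatment drawn around a higher mean for illustration).
endline = {p: random.gauss(70 if p in treatment else 65, 8) for p in participants}

effect = mean(endline[p] for p in treatment) - mean(endline[p] for p in control)
print(f"Estimated average treatment effect: {effect:.1f} points")
```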
Rapid Assessment
A condensed data collection approach designed to generate actionable insights quickly, typically using streamlined qualitative and quantitative methods in time-constrained contexts.
Real-Time Evaluation
An evaluation approach conducted during programme implementation to provide immediate feedback for adaptive management and mid-course corrections.
Real-Time Monitoring
The continuous collection and analysis of data during programme implementation to enable rapid detection of issues and timely corrective action.
Reflection Sessions
Structured gatherings where programme teams and stakeholders pause to examine what happened, why it happened, and what should change as a result.
Reliability
The consistency and repeatability of a measurement — whether the same tool produces stable results across repeated applications, different raters, or different time periods.
Results Chain
The sequential hierarchy of change from activities through outputs, outcomes, and impact that shows how a programme is expected to create change.
Statistical Significance
A statistical determination that observed results are unlikely to be due to chance alone, typically assessed by comparing a p-value against a pre-specified threshold in hypothesis testing.
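A minimal sketch of a two-sample t-test, assuming SciPy is available; the scores are hypothetical endline results for two groups.

```python
from scipy import stats

# Hypothetical endline test scores.
treatment = [72, 68, 75, 80, 71, 77, 74, 69, 78, 73]
comparison = [65, 70, 66, 68, 64, 71, 67, 69, 63, 66]

t_stat, p_value = stats.ttest_ind(treatment, comparison)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("The observed difference could plausibly be due to chance.")
```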
Storytelling for Impact
The strategic use of narrative to make M&E findings memorable, actionable, and influential for decision-makers and stakeholders.
Sustainability Evaluation
Assessment of a programme's continued benefits and functionality after external funding has ended, examining whether outcomes persist and systems remain operational.
Systematic Review
A rigorous, structured approach to identifying, appraising, and synthesizing all available evidence on a specific evaluation question using explicit, reproducible methods.
Thematic Analysis
A systematic method for identifying, analysing, and reporting patterns (themes) in qualitative data through coding and categorization.
Triangulation
Using multiple data sources, methods, or perspectives to cross-verify findings and strengthen the validity of evaluation conclusions.
Validity (Internal & External)
The degree to which an evaluation accurately demonstrates causal relationships (internal validity) and generalizes findings beyond the study context (external validity).