The responsibility to be transparent, report, and respond to stakeholders about programme performance and decisions.
An evaluation focused on assessing whether a programme is meeting its obligations to stakeholders, including donors, beneficiaries, and regulatory bodies.
The systems, processes, and structures that enable organisations to answer to stakeholders, including communities, donors, and partners, for their performance, decisions, and use of resources.
What a programme DOES with its inputs to produce outputs; the direct work or services delivered.
A management approach that uses continuous learning from monitoring and evaluation data to adjust programme strategies and activities in response to changing evidence or context.
A structured, time-bound reflection process conducted immediately after a specific activity or milestone to capture what was planned, what actually happened, why they differed, and what should change.
Conditions outside programme control that must hold true for the programme to succeed as planned.
The distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).
An evaluation focused on assessing financial probity, internal controls, and compliance with financial regulations and procurement standards.
Audits examine financial and regulatory compliance; evaluations assess programme effectiveness and impact.
Initial conditions data collected at the start of a project to establish a reference point for measuring change and setting indicator targets.
A structured approach to collecting initial condition data that directly informs project decisions, minimises burden, and enables valid comparison with endline measurements.
A reference point or standard value used to measure progress, typically derived from historical data, industry standards, or comparable programmes.
A person, household, or organisation that receives direct benefits from a programme's activities or outputs.
Systematic collection and use of input from programme beneficiaries about their experiences, needs, and priorities to improve accountability and programme relevance.
Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.
The process of strengthening the knowledge, skills, systems, and resources that organisations and individuals need to design, implement, and use monitoring and evaluation effectively.
The process of developing skills, systems, and relationships that enable individuals and organisations to achieve their development goals sustainably.
The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.
The choice between measuring every unit in a population (census) versus selecting a subset (sample) determines cost, precision, and what inferences you can make about your programme.
USAID framework for integrating collaboration, learning, and adaptation into programme design and management.
A sampling method that divides the population into clusters and randomly selects entire clusters rather than individuals.
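As an illustration, a two-stage cluster draw can be sketched in a few lines of Python. The village names, household IDs, and number of clusters below are hypothetical; a real frame would come from the programme's registration data.

```python
import random

# Hypothetical sampling frame: households grouped into clusters (villages).
clusters = {
    "village_a": ["hh01", "hh02", "hh03"],
    "village_b": ["hh04", "hh05"],
    "village_c": ["hh06", "hh07", "hh08", "hh09"],
    "village_d": ["hh10", "hh11"],
}

random.seed(7)  # fixed seed so the draw is reproducible

# Stage 1: randomly select entire clusters, not individual households.
selected = random.sample(sorted(clusters), k=2)

# Stage 2: every household in a selected cluster enters the sample.
sample = [hh for name in selected for hh in clusters[name]]
```

Selecting whole clusters lowers travel and logistics costs, at the price of wider confidence intervals than a simple random sample of the same size.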
Intentional approaches to sharing M&E findings and programme information with stakeholders to influence decisions, build accountability, and promote learning.
An evaluation focused on assessing whether a programme adheres to legal, regulatory, donor, and organisational requirements and standards.
Tracking whether a programme is implemented according to agreed standards, policies, and legal requirements.
A composite indicator combines multiple individual indicators into a single index or score, enabling measurement of multidimensional concepts that cannot be captured by a single metric.
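A minimal sketch of one common construction, assuming min-max normalisation and equal weights; the indicator names, values, and ranges are hypothetical, and real composites often use different weighting or normalisation schemes.

```python
def min_max(value, lo, hi):
    """Rescale a raw indicator value onto a 0-1 scale."""
    return (value - lo) / (hi - lo)

# Hypothetical raw indicator values with their plausible ranges.
indicators = {
    "school_attendance_rate": (0.82, 0.0, 1.0),  # proportion
    "household_income_usd":   (150, 0, 500),     # monthly USD
    "water_access_minutes":   (20, 60, 0),       # lower is better: range reversed
}

# Equal-weight composite: the mean of the normalised components.
scores = [min_max(v, lo, hi) for v, lo, hi in indicators.values()]
composite = sum(scores) / len(scores)
```

Reversing the range for "lower is better" indicators keeps every component pointing in the same direction before averaging.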
Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.
A systematic approach to analysing communication content, identifying patterns, themes, and biases in text, audio, or video data through structured coding.
A systematic, ongoing approach to enhancing programme performance through iterative learning, feedback, and adaptation.
A structured approach to building a credible case for how and why a programme contributed to observed outcomes, without requiring experimental attribution.
A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.
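The core arithmetic is a cost-effectiveness ratio: cost divided by units of outcome achieved. The intervention names and figures below are hypothetical.

```python
# Hypothetical interventions: total cost and outcome achieved
# (e.g. children vaccinated).
interventions = {
    "mobile_clinics": {"cost": 120_000, "outcome": 8_000},
    "fixed_sites":    {"cost": 90_000,  "outcome": 5_000},
}

# Cost-effectiveness ratio: cost per unit of outcome (lower is better).
ratios = {name: d["cost"] / d["outcome"] for name, d in interventions.items()}

# The intervention delivering the outcome at the lowest unit cost.
best = min(ratios, key=ratios.get)
```

Here mobile clinics cost 15.0 per child versus 18.0 for fixed sites, so they deliver better value for this specific objective despite the higher total budget.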
The comparison between what happened and what would have happened in the absence of an intervention; the fundamental basis for establishing causal attribution in impact evaluation.
The choice between donor-provided standard indicators and programme-specific custom indicators, balancing compliance requirements with contextual relevance.
A visual display of key monitoring indicators enabling rapid assessment of programme performance at a glance.
The total time, effort, and resources required from respondents and implementers to complete data collection activities, balanced against data quality needs and programme capacity.
The systematic processes for collecting, storing, securing, and maintaining data quality throughout the data lifecycle to ensure information is accurate, accessible, and usable for decision-making.
A systematic process for verifying that collected data meets five quality dimensions (validity, integrity, precision, reliability, and timeliness), ensuring data is fit for decision-making.
The strategic use of charts, dashboards, and infographics to communicate monitoring data to diverse stakeholders, transforming raw numbers into actionable insights for decision-making.
An evaluation approach designed for complex, adaptive programmes in which goals and processes are emergent, and the evaluator works alongside the programme team as an embedded learning partner.
The breakdown of aggregate data by sub-group characteristics, such as sex, age, location, or vulnerability status, to reveal inequities and differences in programme reach and outcomes.
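A small sketch of what disaggregation looks like in practice, using hypothetical participant records: a single aggregate reach figure would hide the gap between sub-groups that the disaggregated rates reveal.

```python
from collections import Counter

# Hypothetical participant records from a monitoring database.
participants = [
    {"id": 1, "sex": "female", "reached": True},
    {"id": 2, "sex": "male",   "reached": True},
    {"id": 3, "sex": "female", "reached": False},
    {"id": 4, "sex": "female", "reached": True},
    {"id": 5, "sex": "male",   "reached": False},
]

# Disaggregate reach by sex instead of reporting one aggregate figure.
reached_by_sex = Counter(p["sex"] for p in participants if p["reached"])
total_by_sex = Counter(p["sex"] for p in participants)

reach_rate = {sex: reached_by_sex[sex] / total_by_sex[sex]
              for sex in total_by_sex}
```

The same pattern extends to age bands, locations, or vulnerability status; the key design decision is collecting the sub-group characteristics at registration so the breakdown is possible later.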
Active, intentional process of sharing M&E findings with relevant audiences to promote understanding, learning, and evidence use.
The foundational M&E principle that programme and evaluation activities must not expose participants, communities, or programme staff to physical, psychological, social, or economic harm, and must actively identify and mitigate harm risks before they occur.
The process of systematically communicating programme progress, results, and financial information to funding organisations according to their specific requirements and timelines.
M&E obligations specified in grant agreements and donor policies that shape system design and reporting.
A self-evaluation approach in which programme participants systematically assess their own work to improve programmes and build ownership of results.
A final data collection point at programme completion that measures achieved outcomes against baseline and target values.
The principles and standards that guide the ethical conduct of monitoring and evaluation, protecting the rights and dignity of participants, ensuring honest reporting, and managing power responsibly.
A preliminary review of whether a programme is sufficiently mature and documented to be meaningfully evaluated.
The OECD-DAC framework provides five standard criteria (relevance, efficiency, effectiveness, impact, and sustainability) for systematically assessing the merit and value of development interventions.
A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach; the operational blueprint for executing an evaluation.
The overarching questions an evaluation will investigate, distinct from survey or interview questions.
A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organisation and the evaluation team.
The systematic process of identifying, selecting, and integrating findings from multiple studies to inform programme design, evaluation, and decision-making.
Using M&E evidence to inform programme, management, and policy decisions rather than intuition or habit.
The temporal dimension of evaluation: ex-ante evaluation occurs before implementation to inform design, while ex-post evaluation occurs after completion to assess outcomes and lessons.
A structured process for collecting, analysing, and acting on information to improve programme performance and outcomes.
A qualitative data collection method that brings together 6-10 participants to discuss a specific topic, generating rich insights through group interaction and shared experiences.
Formative evaluation improves programmes during implementation; summative evaluation judges their overall merit after completion.
An approach to monitoring and evaluation that systematically examines how programmes affect women, men, girls, and boys differently, and ensures that M&E processes themselves do not reinforce gender inequalities.
Long-term, higher-level effects attributable to, or contributed to by, a programme; broader change beyond individual outcomes.
A rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.
Narrative accounts that illustrate how a programme has influenced the lives of beneficiaries, combining quantitative outcomes with qualitative human experience.
The first formal deliverable from an evaluation team, detailing refined methodology before primary data collection.
A specific, observable, measurable variable that tracks progress toward an outcome or output.
The systematic collection, compilation, and presentation of indicator data to track programme performance and communicate results to stakeholders and donors.
The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.
Resources invested in a programme (money, staff, materials, time) that enable activities to happen.
The causal chain connecting programme activities to intended outcomes, showing how and why a set of interventions is expected to lead to desired change.
In-depth, semi-structured interviews with individuals selected for their specific knowledge, experience, or perspectives relevant to the evaluation questions.
The systematic process of capturing, organising, and applying lessons, evidence, and insights from M&E across programmes and over time to improve organisational decision-making.
The deliberate practice of capturing, organising, and distributing insights, lessons, and best practices across teams and organisations to improve programme performance and avoid repeating mistakes.
The systematic process of gathering evidence, reflecting on it, and using it to improve programme strategy and implementation.
A structured set of priority learning questions that guide systematic inquiry throughout programme implementation, turning monitoring data into actionable knowledge for decision-making.
Structured, recurring periods of reflection and adaptation where programme teams review data, draw lessons, and adjust implementation accordingly.
Documented insights from programmes identifying what worked, what did not work, and why, with actionable specificity.
A systematic, critical synthesis of existing research on a specific topic, identifying what is known, gaps in knowledge, and evidence for programme design.
A structured matrix that summarises a project's design, linking activities to expected results through a clear hierarchy of objectives with indicators, verification sources, and assumptions.
Lot Quality Assurance Sampling is a rapid decision-making method that classifies programmes or areas as pass/fail against a threshold, commonly used for health programme monitoring.
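The pass/fail logic of LQAS reduces to a simple decision rule. The sketch below assumes the frequently cited pairing of a sample of 19 with a decision rule of 13 (roughly corresponding to an 80% coverage target); in practice, sample sizes and decision rules should always be taken from a published LQAS table.

```python
def lqas_classify(successes, n=19, decision_rule=13):
    """Classify a supervision area ("lot") as pass or fail.

    successes: respondents in the sample showing the desired outcome.
    n: fixed sample size drawn from the lot.
    decision_rule: minimum successes needed to pass (from an LQAS table).
    """
    if successes > n:
        raise ValueError("successes cannot exceed sample size")
    return "pass" if successes >= decision_rule else "fail"
```

For example, 15 of 19 respondents vaccinated classifies the area as "pass", while 10 of 19 classifies it as "fail", directing supervision effort to the weaker area.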
The portion of a programme budget dedicated to monitoring, evaluation, and learning activities.
The structured document specifying what will be measured, how, by whom, and how often.
A detailed operational document that translates your logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.
A structured approach to building the organisational infrastructure, processes, and capacities needed to collect, analyse, and use M&E data for decision-making throughout a programme's life.
The systematic evaluation of an evaluation's quality, assessing whether it met professional standards and produced credible, useful findings.
A data collection point conducted midway through a programme to assess trajectory and enable adaptive decisions.
A significant intermediate checkpoint or event that signals progress toward a target, used to track whether a programme is on schedule to achieve its intended outcomes.
An evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of programme effects, mechanisms, and context.
Monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.
A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.
Qualitative, story-based reporting that contextualises quantitative indicators with explanations of what happened, why it happened, and what it means for programme learning and decision-making.
A systematic process for identifying and analysing gaps between current conditions and desired outcomes, establishing the evidence base for programme design and indicator selection.
A systematic approach to collecting data by directly watching and recording behaviours, interactions, and processes as they occur in natural settings.
The systematic process by which an organisation captures, analyses, and applies lessons from experience to improve programme performance and decision-making.
Changes in behaviour, knowledge, skills, or conditions resulting from programme outputs, experienced by beneficiaries.
A retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.
A participatory planning and monitoring approach that tracks behaviour changes in the people, groups, and organisations a programme works with directly, rather than long-term development outcomes.
The systematic examination of outcomes to determine whether a programme achieved its intended results, distinguishing between expected and unexpected outcomes, and assessing the significance and sustainability of changes observed.
Direct, tangible products of programme activities; what the programme produces, not what beneficiaries gain.
An evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.
An approach to monitoring and evaluation that actively involves stakeholders, especially beneficiaries, at every stage, from design through to using findings for decision-making.
Visual management interfaces that display key performance indicators in real-time, enabling programme teams and stakeholders to monitor progress, identify issues, and make data-driven decisions.
An assessment of how well a programme or organisation is achieving its intended results and operating efficiently against established standards and targets.
The systematic use of monitoring data, evaluation findings, and feedback to guide programme decisions, improve results, and ensure accountability to stakeholders.
Primary data is collected firsthand for a specific purpose; secondary data is existing data repurposed for new analysis. Each has distinct trade-offs in cost, timeliness, and relevance.
Assessment of how a programme is implemented, whether activities are delivered as planned and to intended quality standards.
A within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.
The explicit articulation of how a programme is expected to produce change.
A periodic document submitted by programmes to donors detailing implementation progress, indicator performance, and key issues.
Indirect measures used when direct measurement of the intended outcome is impossible, impractical, or too costly, requiring careful validation to ensure they accurately represent the target construct.
A non-probability sampling approach where researchers deliberately select participants based on specific characteristics or knowledge relevant to the research objectives.
Non-numerical information captured through words, images, or observations that reveals the how and why behind programme outcomes, providing depth and context to quantitative findings.
Numerical data collected through structured measurement, enabling statistical analysis, generalisation, and objective comparison across programmes and contexts.
A family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.
A probability sampling method where every member of the population has an equal, known chance of selection, enabling statistical inference to the broader population.
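In code, a simple random sample without replacement is a one-line draw from the sampling frame. The frame of 500 beneficiary IDs below is hypothetical.

```python
import random

# Hypothetical sampling frame of 500 beneficiary IDs.
frame = [f"beneficiary_{i:03d}" for i in range(500)]

random.seed(11)  # fixed seed for a reproducible, auditable draw

# Simple random sample without replacement: every unit has the same
# known inclusion probability, here 50/500 = 10%.
sample = random.sample(frame, k=50)
```

Because each unit's inclusion probability is known and equal, standard formulas for confidence intervals and margins of error apply directly to estimates from the sample.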
An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.
A condensed data collection approach designed to generate actionable insights quickly, typically using streamlined qualitative and quantitative methods in time-constrained contexts.
An evaluation approach conducted during programme implementation to provide immediate feedback for adaptive management and mid-course corrections.
The continuous collection and analysis of data during programme implementation to enable rapid detection of issues and timely corrective action.
An evaluation approach that asks what works, for whom, in what circumstances, and why, by identifying the mechanisms through which programmes produce outcomes in specific contexts.
Structured gatherings where programme teams and stakeholders pause to examine what happened, why it happened, and what should change as a result.
The consistency and repeatability of a measurement, whether the same tool produces stable results across repeated applications, different raters, or different time periods.
The principles and practices for producing evaluation and monitoring reports that are clear, credible, actionable, and tailored to their intended audiences.
The sequential hierarchy of change from activities through outputs, outcomes, and impact that shows how a programme is expected to create change.
A structured collection of indicators organised by results level that tracks programme performance across a portfolio, focusing on what changed rather than what was delivered.
A management approach that focuses organisational decisions, resources, and accountability on achieving defined results, using evidence from monitoring and evaluation.
External factors that could prevent programme success and their planned mitigation strategies.
A structured evaluation approach using predefined criteria and performance levels to systematically assess programmes, projects, or interventions against established standards.
Systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.
A document specifying what an evaluator or consultant will deliver, within what timeframe, budget, and constraints.
A quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.
Evaluation framework that assigns monetary values to social outcomes to calculate return on investment.
A structured process for identifying all parties with an interest in a programme, mapping their roles, influence, and information needs, and informing how M&E should engage them.
A statistical measure indicating whether observed results are likely due to a real effect rather than random chance, typically assessed using p-values and hypothesis testing.
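A worked sketch of one common test, a two-sided two-proportion z-test, using hypothetical endline results (60 of 100 treatment households versus 45 of 100 comparison households reaching the outcome); real analyses should also consider effect size and study design.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical endline results: successes and sample sizes per group.
x1, n1 = 60, 100  # treatment group
x2, n2 = 45, 100  # comparison group

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

z = (p1 - p2) / se                               # test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value
```

Here the p-value falls below the conventional 0.05 threshold, so the 15-percentage-point difference is unlikely to be random chance alone; it does not by itself establish that the programme caused the difference.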
The strategic use of narrative to make M&E findings memorable, actionable, and influential for decision-makers and stakeholders.
The process of designing structured questionnaires and survey protocols to collect reliable, valid, and actionable data from a defined population.
Assessment of a programme's continued benefits and functionality after external funding has ended, examining whether outcomes persist and systems remain operational.
A rigorous, structured approach to identifying, appraising, and synthesising all available evidence on a specific evaluation question using explicit, reproducible methods.
The specific value an indicator is expected to reach by a defined date, quantifying what success looks like.
The process of establishing specific, time-bound performance benchmarks against which programme progress and achievement will be measured.
A systematic method for identifying, analysing, and reporting patterns (themes) in qualitative data through coding and categorisation.
A structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.
Using multiple data sources, methods, or perspectives to cross-verify findings and strengthen the validity of evaluation conclusions.
An evaluation approach where every design decision is driven by the needs of the primary intended users: the specific people who will actually use the findings to make decisions.
The degree to which an evaluation accurately demonstrates causal relationships (internal validity) and generalizes findings beyond the study context (external validity).
The optimal balance of cost, quality, and outcomes (achieving the best results for the resources invested), assessed through the 4Es: economy, efficiency, effectiveness, and equity.