M&E Studio

Decision-Grade M&E, Responsibly Built

© 2026 Logic Lab LLC. All rights reserved.


M&E Glossary


A

Accountability (Quick Reference)

The responsibility to be transparent, report, and respond to stakeholders about programme performance and decisions.

Accountability Evaluation (Quick Reference)

An evaluation focused on assessing whether a programme is meeting its obligations to stakeholders, including donors, beneficiaries, and regulatory bodies.

Accountability Mechanisms (Overview)

The systems, processes, and structures that enable organisations to answer to stakeholders, including communities, donors, and partners, for their performance, decisions, and use of resources.

Activity (Quick Reference)

What a programme DOES with its inputs to produce outputs; the direct work or services delivered.

Adaptive Management (Overview)

A management approach that uses continuous learning from monitoring and evaluation data to adjust programme strategies and activities in response to changing evidence or context.

After-Action Review (Quick Reference)

A structured, time-bound reflection process conducted immediately after a specific activity or milestone to capture what was planned, what actually happened, why there was a difference, and what should change.

Assumptions (Quick Reference)

Conditions outside programme control that must hold true for the programme to succeed as planned.

Attribution vs Contribution (Quick Reference)

The distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).

Audit Evaluation (Quick Reference)

An evaluation focused on assessing financial probity, internal controls, and compliance with financial regulations and procurement standards.

Audit vs Evaluation (Quick Reference)

Audits examine financial and regulatory compliance; evaluations assess programme effectiveness and impact.

B

Baseline (Quick Reference)

Initial conditions data collected at the start of a project to establish a reference point for measuring change and setting indicator targets.

Baseline Design (Overview)

A structured approach to collecting initial condition data that directly informs project decisions, minimizes burden, and enables valid comparison with endline measurements.

Benchmark (Quick Reference)

A reference point or standard value used to measure progress, typically derived from historical data, industry standards, or comparable programmes.

Beneficiary (Quick Reference)

A person, household, or organisation that receives direct benefits from a programme's activities or outputs.

Beneficiary Feedback (Quick Reference)

Systematic collection and use of input from programme beneficiaries about their experiences, needs, and priorities to improve accountability and programme relevance.

Bias (Quick Reference)

Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.

C

Capacity Building for M&E (Overview)

The process of strengthening the knowledge, skills, systems, and resources that organisations and individuals need to design, implement, and use monitoring and evaluation effectively.

Capacity Strengthening (Quick Reference)

The process of developing skills, systems, and relationships that enable individuals and organizations to achieve their development goals sustainably.

Causal Inference (Quick Reference)

The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.

Census vs Sample (Quick Reference)

The choice between measuring every unit in a population (census) versus selecting a subset (sample) determines cost, precision, and what inferences you can make about your programme.

CLA (Collaborating, Learning, and Adapting) (Quick Reference)

A USAID framework for integrating collaboration, learning, and adaptation into programme design and management.

Cluster Sampling (Quick Reference)

A sampling method that divides the population into clusters and randomly selects entire clusters rather than individuals.
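
As an illustration, a minimal Python sketch of one-stage cluster sampling, where whole clusters are drawn at random and every unit inside a selected cluster enters the sample (the village and household names are hypothetical):

```python
import random

def cluster_sample(frame, n_clusters, seed=None):
    """One-stage cluster sampling: randomly select whole clusters,
    then include every unit inside each selected cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(frame), n_clusters)
    return {cluster: frame[cluster] for cluster in chosen}

# Hypothetical sampling frame: households grouped by village.
frame = {
    "Village A": ["hh1", "hh2", "hh3"],
    "Village B": ["hh4", "hh5"],
    "Village C": ["hh6", "hh7", "hh8", "hh9"],
}
sample = cluster_sample(frame, n_clusters=2, seed=1)
```

Cluster sampling cuts travel and logistics costs, at the price of wider confidence intervals than simple random sampling of the same size.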

Communication Strategies (Quick Reference)

Intentional approaches to sharing M&E findings and programme information with stakeholders to influence decisions, build accountability, and promote learning.

Compliance Evaluation (Quick Reference)

An evaluation focused on assessing whether a programme adheres to legal, regulatory, donor, and organizational requirements and standards.

Compliance Monitoring (Quick Reference)

Tracking whether a programme is implemented according to agreed standards, policies, and legal requirements.

Composite Indicator (Quick Reference)

A composite indicator combines multiple individual indicators into a single index or score, enabling measurement of multidimensional concepts that cannot be captured by a single metric.
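
A minimal Python sketch of one common construction, min-max normalisation followed by a weighted average; the sub-indicators, bounds, and weights are hypothetical:

```python
def composite_score(values, bounds, weights):
    """Min-max normalise each sub-indicator to the 0-1 range,
    then combine them with a weighted average."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    score = 0.0
    for name, v in values.items():
        lo, hi = bounds[name]
        score += weights[name] * (v - lo) / (hi - lo)
    return score

# Hypothetical food-security index built from three sub-indicators.
values  = {"diet_diversity": 6, "meals_per_day": 2, "coping_index": 10}
bounds  = {"diet_diversity": (0, 12), "meals_per_day": (0, 4), "coping_index": (0, 40)}
weights = {"diet_diversity": 0.4, "meals_per_day": 0.3, "coping_index": 0.3}
idx = composite_score(values, bounds, weights)  # 0.0 (worst) .. 1.0 (best)
```

In practice, sub-indicators must first be aligned so that higher values all point the same direction, and the choice of weights should be justified and documented.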

Confounding Variables (Quick Reference)

Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.

Content Analysis (Quick Reference)

A systematic approach to analysing communication content, identifying patterns, themes, and biases in text, audio, or video data through structured coding.

Continuous Improvement (Quick Reference)

A systematic, ongoing approach to enhancing programme performance through iterative learning, feedback, and adaptation.

Contribution Analysis (In-Depth Guide)

A structured approach to building a credible case for how and why a programme contributed to observed outcomes, without requiring experimental attribution.

Cost-Effectiveness Analysis (Overview)

A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.
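
A minimal Python sketch of the core comparison; the intervention names, costs, and outcome counts are purely hypothetical:

```python
def cost_effectiveness(cost, outcome_units):
    """Cost per unit of outcome achieved (lower is better)."""
    return cost / outcome_units

# Hypothetical comparison of two literacy interventions,
# each measured in children reaching a reading benchmark.
options = {
    "tutoring":        cost_effectiveness(50_000, 400),  # 125.0 per child
    "radio_programme": cost_effectiveness(30_000, 150),  # 200.0 per child
}
best = min(options, key=options.get)
```

Real analyses must also ensure the outcome measure is identical across options and account for how outcomes were attributed to each intervention.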

Counterfactual (Quick Reference)

The comparison between what happened and what would have happened in the absence of an intervention: the fundamental basis for establishing causal attribution in impact evaluation.

Custom vs Standard Indicators (Quick Reference)

The choice between donor-provided standard indicators and programme-specific custom indicators, balancing compliance requirements with contextual relevance.

D

Dashboard (Quick Reference)

A visual display of key monitoring indicators enabling rapid assessment of programme performance at a glance.

Data Collection Burden (Overview)

The total time, effort, and resources required from respondents and implementers to complete data collection activities, balanced against data quality needs and programme capacity.

Data Management (Overview)

The systematic processes for collecting, storing, securing, and maintaining data quality throughout the data lifecycle to ensure information is accurate, accessible, and usable for decision-making.

Data Quality Assurance (Overview)

A systematic process for verifying that collected data meets five quality dimensions (Validity, Integrity, Precision, Reliability, and Timeliness), ensuring data is fit for decision-making.

Data Visualization for M&E (Overview)

The strategic use of charts, dashboards, and infographics to communicate monitoring data to diverse stakeholders, transforming raw numbers into actionable insights for decision-making.

Developmental Evaluation (In-Depth Guide)

An evaluation approach designed for complex, adaptive programmes in which goals and processes are emergent, and the evaluator works alongside the programme team as an embedded learning partner.

Disaggregation (Overview)

The breakdown of aggregate data by sub-group characteristics, such as sex, age, location, or vulnerability status, to reveal inequities and differences in programme reach and outcomes.
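
A minimal Python sketch of the idea, computing a per-group mean from hypothetical survey records:

```python
from collections import defaultdict

def disaggregate(records, by, value):
    """Average a value field within each sub-group of a characteristic."""
    groups = defaultdict(list)
    for r in records:
        groups[r[by]].append(r[value])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Hypothetical survey records: school attendance rate by sex.
records = [
    {"sex": "F", "attendance": 0.9},
    {"sex": "F", "attendance": 0.7},
    {"sex": "M", "attendance": 0.6},
]
rates = disaggregate(records, by="sex", value="attendance")
```

The aggregate mean here would hide the gap between groups that the disaggregated figures reveal.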

Dissemination (Quick Reference)

Active, intentional process of sharing M&E findings with relevant audiences to promote understanding, learning, and evidence use.

Do No Harm (Overview)

The foundational M&E principle that programme and evaluation activities must not expose participants, communities, or programme staff to physical, psychological, social, or economic harm, and must actively identify and mitigate harm risks before they occur.

Donor Reporting (Quick Reference)

The process of systematically communicating programme progress, results, and financial information to funding organizations according to their specific requirements and timelines.

Donor Requirements (Quick Reference)

M&E obligations specified in grant agreements and donor policies that shape system design and reporting.

E

Empowerment Evaluation (Quick Reference)

A self-evaluation approach where programme participants systematically assess their own work to improve programmes and secure future ownership.

Endline (Quick Reference)

A final data collection point at programme completion that measures achieved outcomes against baseline and target values.

Ethics in M&E (Overview)

The principles and standards that guide the ethical conduct of monitoring and evaluation, protecting the rights and dignity of participants, ensuring honest reporting, and managing power responsibly.

Evaluability Assessment (Quick Reference)

A preliminary review of whether a programme is sufficiently mature and documented to be meaningfully evaluated.

Evaluation Criteria (DAC) (Overview)

The OECD-DAC framework provides six standard criteria (relevance, coherence, effectiveness, efficiency, impact, and sustainability) for systematically assessing the merit and value of development interventions.

Evaluation Matrix (Overview)

A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach: the operational blueprint for executing an evaluation.

Evaluation Questions (Quick Reference)

The overarching questions an evaluation will investigate, distinct from survey or interview questions.

Evaluation Terms of Reference (Overview)

A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organization and the evaluation team.

Evidence Synthesis (Quick Reference)

The systematic process of identifying, selecting, and integrating findings from multiple studies to inform programme design, evaluation, and decision-making.

Evidence-Based Decision Making (Quick Reference)

Using M&E evidence to inform programme, management, and policy decisions rather than intuition or habit.

Ex-Ante vs Ex-Post Evaluation (Quick Reference)

The temporal dimension of evaluation: ex-ante evaluation occurs before implementation to inform design, while ex-post evaluation occurs after completion to assess outcomes and lessons.

F

Feedback Loop (Quick Reference)

A structured process for collecting, analysing, and acting on information to improve programme performance and outcomes.

Focus Group Discussions (Overview)

A qualitative data collection method that brings together 6-10 participants to discuss a specific topic, generating rich insights through group interaction and shared experiences.

Formative vs Summative Evaluation (Quick Reference)

Formative evaluation improves programmes during implementation; summative evaluation judges their overall merit after completion.

G

Gender-Responsive M&E (Overview)

An approach to monitoring and evaluation that systematically examines how programmes affect women, men, girls, and boys differently, and ensures that M&E processes themselves do not reinforce gender inequalities.

I

Impact (Quick Reference)

Long-term, higher-level effects attributable to, or contributed to by, a programme; broader change beyond individual outcomes.

Impact Evaluation (In-Depth Guide)

A rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.

Impact Stories (Quick Reference)

Narrative accounts that illustrate how a programme has influenced the lives of beneficiaries, combining quantitative outcomes with qualitative human experience.

Inception Report (Quick Reference)

The first formal deliverable from an evaluation team, detailing refined methodology before primary data collection.

Indicator (Quick Reference)

A specific, observable, measurable variable that tracks progress toward an outcome or output.

Indicator Reporting (Quick Reference)

The systematic collection, compilation, and presentation of indicator data to track programme performance and communicate results to stakeholders and donors.

Indicator Selection & Development (Overview)

The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.

Input (Quick Reference)

Resources invested in a programme (money, staff, materials, time) that enable activities to happen.

Intervention Logic (Quick Reference)

The causal chain connecting programme activities to intended outcomes, showing how and why a set of interventions is expected to lead to desired change.

K

Key Informant Interviews (Overview)

In-depth, semi-structured interviews with individuals selected for their specific knowledge, experience, or perspectives relevant to the evaluation questions.

Knowledge Management for M&E (Overview)

The systematic process of capturing, organising, and applying lessons, evidence, and insights from M&E across programmes and over time to improve organisational decision-making.

Knowledge Sharing (Quick Reference)

The deliberate practice of capturing, organizing, and distributing insights, lessons, and best practices across teams and organizations to improve programme performance and avoid repeating mistakes.

L

Learning (Quick Reference)

The systematic process of gathering evidence, reflecting on it, and using it to improve programme strategy and implementation.

Learning Agendas (Overview)

A structured set of priority learning questions that guide systematic inquiry throughout programme implementation, turning monitoring data into actionable knowledge for decision-making.

Learning Cycles (Quick Reference)

Structured, recurring periods of reflection and adaptation where programme teams review data, draw lessons, and adjust implementation accordingly.

Lessons Learned (Quick Reference)

Documented insights from programmes identifying what worked, what did not work, and why, with actionable specificity.

Literature Review (Quick Reference)

A systematic, critical synthesis of existing research on a specific topic, identifying what is known, gaps in knowledge, and evidence for programme design.

Logframe / Logical Framework (In-Depth Guide)

A structured matrix that summarizes a project's design, linking activities to expected results through a clear hierarchy of objectives with indicators, verification sources, and assumptions.

LQAS (Quick Reference)

Lot Quality Assurance Sampling is a rapid decision-making method that classifies programmes or areas as pass/fail against a threshold, commonly used in health programme monitoring.
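
The decision rule can be sketched in Python. The 19/13 rule shown here is a common choice for an 80% coverage target, but the figures are illustrative; real surveys should take sample sizes and decision rules from a published LQAS table:

```python
def lqas_pass(successes, sample_size=19, decision_rule=13):
    """LQAS classification: an area 'passes' (coverage judged to meet
    the target) when at least `decision_rule` of the sampled
    individuals meet the criterion."""
    assert 0 <= successes <= sample_size
    return successes >= decision_rule

# Hypothetical supervision area: 15 of 19 sampled children vaccinated.
passed = lqas_pass(successes=15)
```

The appeal of LQAS is that a sample as small as 19 per area supports a defensible pass/fail call, even though it cannot yield a precise coverage estimate.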

M

M&E Budget (Quick Reference)

The portion of a programme budget dedicated to monitoring, evaluation, and learning activities.

M&E Framework (Quick Reference)

The structured document specifying what will be measured, how, by whom, and how often.

M&E Plans (Overview)

A detailed operational document that translates your logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.

M&E System Design (Overview)

A structured approach to building the organizational infrastructure, processes, and capacities needed to collect, analyze, and use M&E data for decision-making throughout a programme's life.

Meta-Evaluation (Quick Reference)

The systematic evaluation of an evaluation's quality, assessing whether it met professional standards and produced credible, useful findings.

Midline (Quick Reference)

A data collection point conducted midway through a programme to assess trajectory and enable adaptive decisions.

Milestone (Quick Reference)

A significant intermediate checkpoint or event that signals progress toward a target, used to track whether a programme is on schedule to achieve its intended outcomes.

Mixed Methods Evaluation (Overview)

An evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of programme effects, mechanisms, and context.

Monitoring vs Evaluation (Quick Reference)

Monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.

Most Significant Change (In-Depth Guide)

A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.

N

Narrative Reporting (Quick Reference)

Qualitative, story-based reporting that contextualizes quantitative indicators with explanations of what happened, why it happened, and what it means for programme learning and decision-making.

Needs Assessment (Overview)

A systematic process for identifying and analyzing gaps between current conditions and desired outcomes, establishing the evidence base for programme design and indicator selection.

O

Observation Methods (Overview)

A systematic approach to collecting data by directly watching and recording behaviours, interactions, and processes as they occur in natural settings.

Organisational Learning (Quick Reference)

The systematic process by which an organisation captures, analyses, and applies lessons from experience to improve programme performance and decision-making.

Outcome (Quick Reference)

Changes in behaviour, knowledge, skills, or conditions resulting from programme outputs, experienced by beneficiaries.

Outcome Harvesting (In-Depth Guide)

A retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.

Outcome Mapping (In-Depth Guide)

A participatory planning and monitoring approach that tracks behaviour changes in the people, groups, and organisations a programme works with directly, rather than long-term development outcomes.

Outcome-Level Analysis (Quick Reference)

The systematic examination of outcomes to determine whether a programme achieved its intended results, distinguishing between expected and unexpected outcomes, and assessing the significance and sustainability of changes observed.

Output (Quick Reference)

Direct, tangible products of programme activities; what the programme produces, not what beneficiaries gain.

P

Participatory Evaluation (In-Depth Guide)

An evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.

Participatory M&E (Quick Reference)

An approach to monitoring and evaluation that actively involves stakeholders, especially beneficiaries, at every stage, from design through to using findings for decision-making.

Performance Dashboards (Quick Reference)

Visual management interfaces that display key performance indicators in real-time, enabling programme teams and stakeholders to monitor progress, identify issues, and make data-driven decisions.

Performance Evaluation (Quick Reference)

An assessment of how well a programme or organisation is achieving its intended results and operating efficiently against established standards and targets.

Performance Management (Quick Reference)

The systematic use of monitoring data, evaluation findings, and feedback to guide programme decisions, improve results, and ensure accountability to stakeholders.

Primary vs Secondary Data (Quick Reference)

Primary data is collected firsthand for a specific purpose; secondary data is existing data repurposed for new analysis. Each has distinct trade-offs in cost, timeliness, and relevance.

Process Evaluation (Quick Reference)

Assessment of how a programme is implemented, whether activities are delivered as planned and to intended quality standards.

Process Tracing (In-Depth Guide)

A within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.

Programme Theory (Quick Reference)

The explicit articulation of how a programme is expected to produce change.

Progress Report (Quick Reference)

A periodic document submitted by programmes to donors detailing implementation progress, indicator performance, and key issues.

Proxy Indicators (Overview)

Indirect measures used when direct measurement of the intended outcome is impossible, impractical, or too costly, requiring careful validation to ensure they accurately represent the target construct.

Purposive Sampling (Quick Reference)

A non-probability sampling approach where researchers deliberately select participants based on specific characteristics or knowledge relevant to the research objectives.

Q

Qualitative Data (Quick Reference)

Non-numerical information captured through words, images, or observations that reveals the how and why behind programme outcomes, providing depth and context to quantitative findings.

Quantitative Data (Quick Reference)

Numerical data collected through structured measurement, enabling statistical analysis, generalization, and objective comparison across programmes and contexts.

Quasi-Experimental Design (In-Depth Guide)

A family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.

R

Random Sampling (Quick Reference)

A probability sampling method where every member of the population has an equal, known chance of selection, enabling statistical inference to the broader population.
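
A minimal Python sketch using the standard library; the sampling frame of household IDs is hypothetical:

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n units without replacement; every unit in the frame
    has an equal, known chance of selection."""
    return random.Random(seed).sample(frame, n)

# Hypothetical frame of 500 households; draw 50 for the survey.
households = [f"hh{i}" for i in range(1, 501)]
sample = simple_random_sample(households, n=50, seed=42)
```

Fixing the seed makes the draw reproducible and auditable, which matters when the sample must be documented for donors or replication.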

Randomised Controlled Trial (Quick Reference)

An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.
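
The core of the design, random assignment, can be sketched in Python (the participant IDs are hypothetical, and real trials add stratification, consent, and pre-registration):

```python
import random

def randomise(participants, seed=None):
    """Randomly split participants into treatment and control arms."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical cohort of 100 eligible participants.
treatment, control = randomise([f"p{i}" for i in range(100)], seed=7)
```

Because assignment is random, the two arms are expected to be comparable on both observed and unobserved characteristics, which is what licenses causal attribution.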

Rapid Assessment (Quick Reference)

A condensed data collection approach designed to generate actionable insights quickly, typically using streamlined qualitative and quantitative methods in time-constrained contexts.

Real-Time Evaluation (Quick Reference)

An evaluation approach conducted during programme implementation to provide immediate feedback for adaptive management and mid-course corrections.

Real-Time Monitoring (Quick Reference)

The continuous collection and analysis of data during programme implementation to enable rapid detection of issues and timely corrective action.

Realist Evaluation (In-Depth Guide)

An evaluation approach that asks what works, for whom, in what circumstances, and why, by identifying the mechanisms through which programmes produce outcomes in specific contexts.

Reflection Sessions (Quick Reference)

Structured gatherings where programme teams and stakeholders pause to examine what happened, why it happened, and what should change as a result.

Reliability (Quick Reference)

The consistency and repeatability of a measurement: whether the same tool produces stable results across repeated applications, different raters, or different time periods.

Reporting Best Practices (Overview)

The principles and practices for producing evaluation and monitoring reports that are clear, credible, actionable, and tailored to their intended audiences.

Results Chain (Quick Reference)

The sequential hierarchy of change from activities through outputs, outcomes, and impact that shows how a programme is expected to create change.

Results Framework (In-Depth Guide)

A structured collection of indicators organized by results level that tracks programme performance across a portfolio, focusing on what changed rather than what was delivered.

Results-Based Management (In-Depth Guide)

A management approach that focuses organisational decisions, resources, and accountability on achieving defined results, using evidence from monitoring and evaluation.

Risks and Risk Mitigation (Quick Reference)

External factors that could prevent programme success and their planned mitigation strategies.

Rubric-Based Assessment (Overview)

A structured evaluation approach using predefined criteria and performance levels to systematically assess programmes, projects, or interventions against established standards.

S

Sampling Methods (Overview)

Systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.
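
On the statistical side, a common starting point for sizing a sample is Cochran's formula for estimating a proportion. A Python sketch, with defaults assuming 95% confidence, a 5% margin of error, and maximum variability (p = 0.5):

```python
import math

def cochran_sample_size(p=0.5, margin=0.05, z=1.96):
    """Cochran's formula for a proportion: n = z^2 * p * (1 - p) / e^2,
    rounded up to the next whole respondent."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

n = cochran_sample_size()  # the familiar "about 385" survey sample
```

The formula assumes simple random sampling from a large population; finite-population corrections and design effects for clustering adjust this figure in practice.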

Scope of Work (Quick Reference)

A document specifying what an evaluator or consultant will deliver, within what timeframe, budget, and constraints.

SMART Indicators (Overview)

A quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.

SROI (Social Return on Investment) (Quick Reference)

An evaluation framework that assigns monetary values to social outcomes to calculate a return on investment.
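
The headline arithmetic can be sketched as follows; the outcome valuations and investment figure are purely hypothetical, and real SROI studies also discount future values and adjust for deadweight, displacement, and attribution:

```python
def sroi_ratio(monetised_outcomes, investment):
    """SROI ratio = total monetised social value / investment."""
    return sum(monetised_outcomes.values()) / investment

# Hypothetical valuation of a job-training programme.
outcomes = {
    "increased_earnings":     180_000,
    "reduced_benefit_claims":  60_000,
}
ratio = sroi_ratio(outcomes, investment=80_000)  # reported as "3:1"
```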

Stakeholder Analysis (Overview)

A structured process for identifying all parties with an interest in a programme, mapping their roles, influence, and information needs, and informing how M&E should engage them.

Statistical Significance (Quick Reference)

A statistical measure indicating whether observed results are likely due to a real effect rather than random chance, typically assessed using p-values and hypothesis testing.
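
One distribution-free way to obtain a p-value is a permutation test: shuffle the group labels many times and see how often chance alone produces a gap as large as the observed one. A Python sketch with hypothetical treatment and control scores:

```python
import random

def permutation_p_value(treatment, control, n_perm=5_000, seed=0):
    """Two-sided permutation test on the difference in means."""
    rng = random.Random(seed)
    observed = abs(sum(treatment) / len(treatment)
                   - sum(control) / len(control))
    pooled = treatment + control
    n_t = len(treatment)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of group membership
        t, c = pooled[:n_t], pooled[n_t:]
        if abs(sum(t) / n_t - sum(c) / len(c)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical post-test scores for two small groups.
p = permutation_p_value([12, 14, 15, 13], [8, 9, 7, 10])
```

A small p-value (conventionally below 0.05) suggests the observed gap is unlikely under chance relabelling, though significance alone says nothing about the practical importance of the effect.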

Storytelling for Impact (Quick Reference)

The strategic use of narrative to make M&E findings memorable, actionable, and influential for decision-makers and stakeholders.

Survey Design (Overview)

The process of designing structured questionnaires and survey protocols to collect reliable, valid, and actionable data from a defined population.

Sustainability Evaluation (Quick Reference)

Assessment of a programme's continued benefits and functionality after external funding has ended, examining whether outcomes persist and systems remain operational.

Systematic Review (Quick Reference)

A rigorous, structured approach to identifying, appraising, and synthesizing all available evidence on a specific evaluation question using explicit, reproducible methods.

T

Target (Quick Reference)

The specific value an indicator is expected to reach by a defined date, quantifying what success looks like.
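
A small Python sketch of a related calculation common in indicator reporting, the share of the planned baseline-to-target change actually achieved (all figures hypothetical):

```python
def achievement_rate(baseline, actual, target):
    """Fraction of the planned baseline-to-target change achieved."""
    return (actual - baseline) / (target - baseline)

# Hypothetical indicator: vaccination coverage, baseline 40%,
# target 70%, latest measurement 61%.
rate = achievement_rate(baseline=40, actual=61, target=70)
```

Reporting achievement against the planned change, rather than against the raw target value, avoids overstating progress that was already present at baseline.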

Target Setting (Overview)

The process of establishing specific, time-bound performance benchmarks against which programme progress and achievement will be measured.

Thematic Analysis (Quick Reference)

A systematic method for identifying, analyzing, and reporting patterns (themes) in qualitative data through coding and categorization.
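
The counting step at the end of coding can be sketched in Python; the excerpts and theme codes are hypothetical:

```python
from collections import Counter

# Hypothetical coded interview excerpts: each excerpt carries
# the theme codes an analyst assigned to it.
coded_excerpts = [
    {"codes": ["access", "cost"]},
    {"codes": ["cost"]},
    {"codes": ["access", "trust"]},
]
theme_counts = Counter(code for e in coded_excerpts for code in e["codes"])
```

Frequency counts only summarise the coding; the analytic weight in thematic analysis lies in how codes are defined, applied consistently, and grouped into themes.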

Theory of Change (In-Depth Guide)

A structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.

Triangulation (Quick Reference)

Using multiple data sources, methods, or perspectives to cross-verify findings and strengthen the validity of evaluation conclusions.

U

Utilization-Focused Evaluation (In-Depth Guide)

An evaluation approach where every design decision is driven by the needs of the primary intended users: the specific people who will actually use the findings to make concrete decisions.

V

Validity (Internal & External) (Quick Reference)

The degree to which an evaluation accurately demonstrates causal relationships (internal validity) and generalizes findings beyond the study context (external validity).

Value for Money (Overview)

The optimal balance of cost, quality, and outcomes (achieving the best results for the resources invested), assessed through the 4Es: economy, efficiency, effectiveness, and equity.