
M&E Library

The practitioner-friendly guide to monitoring, evaluation, accountability and learning. Browse frameworks, methods, and definitions curated for M&E professionals.

144 entries

Pillars

Major frameworks and methodologies

Pillar · Methods

Contribution Analysis

A structured approach to building a credible case for how and why a programme contributed to observed outcomes, without requiring experimental attribution.

Complexity: High
Timeframe: 4-12 weeks for a full contribution analysis
10 min read
Pillar · Methods

Developmental Evaluation

An evaluation approach designed for complex, adaptive programmes in which goals and processes are emergent, and the evaluator works alongside the programme team as an embedded learning partner.

Complexity: High
Timeframe: Ongoing throughout the programme
9 min read
Pillar · Methods

Impact Evaluation

A rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.

Complexity: Very High
Timeframe: Planned at design phase
8 min read
Pillar · Frameworks

Logframe / Logical Framework

A structured matrix that summarizes a project's design, linking activities to expected results through a clear hierarchy of objectives with indicators, verification sources, and assumptions.

Complexity: Medium
Timeframe: 1-3 weeks for initial development
11 min read
Pillar · Methods

Most Significant Change

A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.

Complexity: Medium
Timeframe: 3-6 weeks for initial cycle
14 min read
Pillar · Methods

Outcome Harvesting

A retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.

Complexity: Medium
Timeframe: 4-8 weeks for a complete harvest cycle
11 min read
Pillar · Frameworks

Outcome Mapping

A participatory planning and monitoring approach that tracks behaviour changes in the people, groups, and organisations a programme works with directly, rather than long-term development outcomes.

Complexity: High
Timeframe: Intensive design phase (4-8 weeks)
8 min read
Pillar · Methods

Participatory Evaluation

An evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.

Complexity: High
Timeframe: 3-8 weeks longer than conventional evaluation due to engagement processes
12 min read
Pillar · Methods

Process Tracing

A within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.

Complexity: High
Timeframe: 3-8 weeks depending on evidence availability and case complexity
14 min read
Pillar · Methods

Quasi-Experimental Design

A family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.

Complexity: Very High
Timeframe: Planned at design stage if prospective
8 min read
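
To make the logic concrete, here is a minimal sketch of one common quasi-experimental estimator, difference-in-differences; the group means are invented for illustration.

```python
# Difference-in-differences (DiD): a common quasi-experimental estimator.
# The effect estimate is the change in the treatment group minus the
# change in the comparison group, which nets out shared time trends.
# All values are hypothetical, for illustration only.

treatment = {"baseline": 42.0, "endline": 58.0}    # e.g. mean test scores
comparison = {"baseline": 40.0, "endline": 47.0}

treatment_change = treatment["endline"] - treatment["baseline"]     # 16.0
comparison_change = comparison["endline"] - comparison["baseline"]  # 7.0

did_estimate = treatment_change - comparison_change  # 9.0

print(f"Estimated programme effect (DiD): {did_estimate:+.1f} points")
```
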
Pillar · Methods

Realist Evaluation

An evaluation approach that asks what works, for whom, in what circumstances, and why, by identifying the mechanisms through which programmes produce outcomes in specific contexts.

Complexity: Very High
Timeframe: Typically 12-24 months
8 min read
Pillar · Frameworks

Results Framework

A structured collection of indicators organized by results level that tracks programme performance across a portfolio, focusing on what changed rather than what was delivered.

Complexity: Medium
Timeframe: 1-2 weeks for initial design
10 min read
Pillar · Frameworks

Results-Based Management

A management approach that focuses organisational decisions, resources, and accountability on achieving defined results, using evidence from monitoring and evaluation.

Complexity: Medium
Timeframe: Ongoing throughout the programme cycle
8 min read
Pillar · Frameworks

Theory of Change

A structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.

Complexity: Medium
Timeframe: 2-6 weeks for initial development
11 min read
Pillar · Methods

Utilization-Focused Evaluation

An evaluation approach where every design decision is driven by the needs of the primary intended users: the specific people who will actually use the findings to make concrete decisions.

Complexity: Medium to High
Timeframe: Process engagement begins at design phase
9 min read

Core Concepts

Key practices and processes

Core Concept · Cross-Cutting

Accountability Mechanisms

The systems, processes, and structures that enable organisations to answer to stakeholders, including communities, donors, and partners, for their performance, decisions, and use of resources.

5 min read
Core Concept · Planning

Adaptive Management

A management approach that uses continuous learning from monitoring and evaluation data to adjust programme strategies and activities in response to changing evidence or context.

5 min read
Core Concept · Data Collection

Baseline Design

A structured approach to collecting initial condition data that directly informs project decisions, minimizes burden, and enables valid comparison with endline measurements.

11 min read
Core Concept · Learning

Capacity Building for M&E

The process of strengthening the knowledge, skills, systems, and resources that organisations and individuals need to design, implement, and use monitoring and evaluation effectively.

6 min read
Core Concept · Evaluation

Cost-Effectiveness Analysis

A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.

11 min read
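
The core calculation is a simple ratio of cost to outcome units. A minimal sketch, with invented figures for two hypothetical interventions:

```python
# Cost-effectiveness ratio = total cost / units of outcome achieved.
# Two hypothetical interventions producing the same outcome
# (children reading at grade level); all figures are invented.

interventions = {
    "Tutoring programme": {"cost": 120_000, "outcome_units": 400},
    "Radio instruction":  {"cost": 45_000,  "outcome_units": 250},
}

for name, d in interventions.items():
    ratio = d["cost"] / d["outcome_units"]
    print(f"{name}: ${ratio:,.0f} per child reading at grade level")
# -> Tutoring programme: $300 per child; Radio instruction: $180 per child
```
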
Core Concept · Data Collection

Data Collection Burden

The total time, effort, and resources required from respondents and implementers to complete data collection activities, balanced against data quality needs and programme capacity.

9 min read
Core Concept · Data Collection

Data Management

The systematic processes for collecting, storing, securing, and maintaining data quality throughout the data lifecycle to ensure information is accurate, accessible, and usable for decision-making.

11 min read
Core Concept · Data Collection

Data Quality Assurance

A systematic process for verifying that collected data meets five quality dimensions (Validity, Integrity, Precision, Reliability, and Timeliness), ensuring data is fit for decision-making.

11 min read
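
In practice, many quality checks can be automated. A minimal sketch, assuming hypothetical survey records; the field names and valid age range are invented:

```python
# Minimal automated quality checks over hypothetical survey records.
# Field names and the valid age range are invented for the example.

records = [
    {"id": 1, "age": 34, "score": 87},
    {"id": 2, "age": -5, "score": 92},    # fails validity check
    {"id": 3, "age": 51, "score": None},  # fails completeness check
]

def check_record(r):
    """Return a list of quality issues found in one record."""
    issues = []
    if r["score"] is None:                # completeness
        issues.append("missing score")
    if not 0 <= r["age"] <= 120:          # validity (plausible range)
        issues.append(f"age out of range: {r['age']}")
    return issues

for r in records:
    for issue in check_record(r):
        print(f"record {r['id']}: {issue}")
```
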
Core Concept · Learning

Data Visualization for M&E

The strategic use of charts, dashboards, and infographics to communicate monitoring data to diverse stakeholders, transforming raw numbers into actionable insights for decision-making.

14 min read
Core Concept · Indicators

Disaggregation

The breakdown of aggregate data by sub-group characteristics, such as sex, age, location, or vulnerability status, to reveal inequities and differences in programme reach and outcomes.

5 min read
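
A minimal sketch of the mechanics, using invented participant records and Python's standard library:

```python
# Disaggregating a single aggregate reach figure by sex and district.
# Participant records are invented for illustration.

from collections import Counter

participants = [
    {"sex": "F", "district": "North"}, {"sex": "F", "district": "South"},
    {"sex": "M", "district": "North"}, {"sex": "F", "district": "North"},
    {"sex": "M", "district": "South"},
]

print("Total reached:", len(participants))  # the headline aggregate
print("By sex:", dict(Counter(p["sex"] for p in participants)))
print("By district:", dict(Counter(p["district"] for p in participants)))
print("By sex x district:",
      dict(Counter((p["sex"], p["district"]) for p in participants)))
```
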
Core Concept · Cross-Cutting

Do No Harm

The foundational M&E principle that programme and evaluation activities must not expose participants, communities, or programme staff to physical, psychological, social, or economic harm, and must actively identify and mitigate harm risks before they occur.

6 min read
Core Concept · Cross-Cutting

Ethics in M&E

The principles and standards that guide the ethical conduct of monitoring and evaluation, protecting the rights and dignity of participants, ensuring honest reporting, and managing power responsibly.

5 min read
Core Concept · Evaluation

Evaluation Criteria (DAC)

The OECD-DAC framework provides five standard criteria (relevance, efficiency, effectiveness, impact, and sustainability) for systematically assessing the merit and value of development interventions.

12 min read
Core Concept · Evaluation

Evaluation Matrix

A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach, serving as the operational blueprint for executing an evaluation.

12 min read
Core Concept · Evaluation

Evaluation Terms of Reference

A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organization and the evaluation team.

12 min read
Core Concept · Data Collection

Focus Group Discussions

A qualitative data collection method that brings together 6-10 participants to discuss a specific topic, generating rich insights through group interaction and shared experiences.

13 min read
Core Concept · Cross-Cutting

Gender-Responsive M&E

An approach to monitoring and evaluation that systematically examines how programmes affect women, men, girls, and boys differently, and ensures that M&E processes themselves do not reinforce gender inequalities.

5 min read
Core Concept · Indicators

Indicator Selection & Development

The systematic process of choosing and refining performance indicators that are specific, measurable, achievable, relevant, and time-bound to track programme progress effectively.

10 min read
Core Concept · Data Collection

Key Informant Interviews

In-depth, semi-structured interviews with individuals selected for their specific knowledge, experience, or perspectives relevant to the evaluation questions.

5 min read
Core Concept · Learning

Knowledge Management for M&E

The systematic process of capturing, organising, and applying lessons, evidence, and insights from M&E across programmes and over time to improve organisational decision-making.

4 min read
Core Concept · Learning

Learning Agendas

A structured set of priority learning questions that guide systematic inquiry throughout programme implementation, turning monitoring data into actionable knowledge for decision-making.

12 min read
Core Concept · Planning

M&E Plans

A detailed operational document that translates your logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.

11 min read
Core Concept · Planning

M&E System Design

A structured approach to building the organizational infrastructure, processes, and capacities needed to collect, analyze, and use M&E data for decision-making throughout a programme's life.

11 min read
Core Concept · Evaluation

Mixed Methods Evaluation

An evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of programme effects, mechanisms, and context.

10 min read
Core Concept · Planning

Needs Assessment

A systematic process for identifying and analyzing gaps between current conditions and desired outcomes, establishing the evidence base for programme design and indicator selection.

11 min read
Core Concept · Data Collection

Observation Methods

A systematic approach to collecting data by directly watching and recording behaviours, interactions, and processes as they occur in natural settings.

11 min read
Core Concept · Indicators

Proxy Indicators

Indirect measures used when direct measurement of the intended outcome is impossible, impractical, or too costly, requiring careful validation to ensure they accurately represent the target construct.

10 min read
Core Concept · Learning

Reporting Best Practices

The principles and practices for producing evaluation and monitoring reports that are clear, credible, actionable, and tailored to their intended audiences.

5 min read
Core Concept · Evaluation

Rubric-Based Assessment

A structured evaluation approach using predefined criteria and performance levels to systematically assess programmes, projects, or interventions against established standards.

11 min read
Core Concept · Data Collection

Sampling Methods

Systematic approaches for selecting a subset of a population to represent the whole, balancing statistical validity with practical constraints.

11 min read
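
As a brief illustration, the sketch below draws a simple random sample and a proportionate stratified sample from a hypothetical population, using only Python's standard library:

```python
# Simple random vs proportionate stratified sampling, standard library only.
# The population of 1,000 units split across two regions is hypothetical.

import random

random.seed(42)  # reproducible example

population = [{"id": i, "region": "A" if i < 600 else "B"}
              for i in range(1000)]

# Simple random sample: every unit has an equal chance of selection.
srs = random.sample(population, k=100)

# Stratified sample: draw within each region in proportion to its size,
# guaranteeing both regions appear in the sample at the right share.
stratified = []
for region in ("A", "B"):
    stratum = [p for p in population if p["region"] == region]
    n = round(100 * len(stratum) / len(population))  # 60 for A, 40 for B
    stratified.extend(random.sample(stratum, k=n))

print(len(srs), len(stratified))  # 100 100
```
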
Core Concept · Indicators

SMART Indicators

A quality framework for designing indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound, ensuring they provide reliable, actionable data for decision-making.

9 min read
Core Concept · Planning

Stakeholder Analysis

A structured process for identifying all parties with an interest in a programme, mapping their roles, influence, and information needs, and informing how M&E should engage them.

4 min read
Core Concept · Data Collection

Survey Design

The process of designing structured questionnaires and survey protocols to collect reliable, valid, and actionable data from a defined population.

6 min read
Core Concept · Indicators

Target Setting

The process of establishing specific, time-bound performance benchmarks against which programme progress and achievement will be measured.

5 min read
Core Concept · Cross-Cutting

Value for Money

The optimal balance of cost, quality, and outcomes, achieving the best results for the resources invested; typically assessed through the 4Es: economy, efficiency, effectiveness, and equity.

5 min read

Terms

Definitions and glossary

Accountability

Cross-Cutting

The responsibility to be transparent, report, and respond to stakeholders about programme performance and decisions.

Accountability Evaluation

Evaluation

An evaluation focused on assessing whether a programme is meeting its obligations to stakeholders, including donors, beneficiaries, and regulatory bodies.

Activity

Frameworks

What a programme DOES with its inputs to produce outputs; the direct work or services delivered.

After-Action Review

Learning

A structured, time-bound reflection process conducted immediately after a specific activity or milestone to capture what was planned, what actually happened, why they differed, and what should change.

Assumptions

Planning

Conditions outside programme control that must hold true for the programme to succeed as planned.

Attribution vs Contribution

Methods

The distinction between proving a programme directly caused outcomes (attribution) versus building a credible case that it contributed to outcomes alongside other factors (contribution).

Audit Evaluation

Evaluation

An evaluation focused on assessing financial probity, internal controls, and compliance with financial regulations and procurement standards.

Audit vs Evaluation

Evaluation

Audits examine financial and regulatory compliance; evaluations assess programme effectiveness and impact.

Baseline

Indicators

Initial conditions data collected at the start of a project to establish a reference point for measuring change and setting indicator targets.

Benchmark

Indicators

A reference point or standard value used to measure progress, typically derived from historical data, industry standards, or comparable programmes.

Beneficiary

Cross-Cutting

A person, household, or organisation that receives direct benefits from a programme's activities or outputs.

Beneficiary Feedback

Methods

Systematic collection and use of input from programme beneficiaries about their experiences, needs, and priorities to improve accountability and programme relevance.

Bias

Methods

Systematic error in data collection, analysis, or interpretation that distorts results and threatens the validity of M&E findings.

Capacity Strengthening

Learning

The process of developing skills, systems, and relationships that enable individuals and organizations to achieve their development goals sustainably.

Causal Inference

Methods

The process of determining whether an intervention caused observed outcomes by establishing a credible counterfactual and ruling out alternative explanations.

Census vs Sample

Data Collection

The choice between measuring every unit in a population (census) versus selecting a subset (sample) determines cost, precision, and what inferences you can make about your programme.

CLA (Collaborating, Learning, and Adapting)

Learning

A USAID framework for integrating collaboration, learning, and adaptation into programme design and management.

Cluster Sampling

Data Collection

A sampling method that divides the population into clusters and randomly selects entire clusters rather than individuals.

Communication Strategies

Cross-Cutting

Intentional approaches to sharing M&E findings and programme information with stakeholders to influence decisions, build accountability, and promote learning.

Compliance Evaluation

Evaluation

An evaluation focused on assessing whether a programme adheres to legal, regulatory, donor, and organizational requirements and standards.

Compliance Monitoring

Evaluation

Tracking whether a programme is implemented according to agreed standards, policies, and legal requirements.

Composite Indicator

Indicators

A composite indicator combines multiple individual indicators into a single index or score, enabling measurement of multidimensional concepts that cannot be captured by a single metric.
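
A minimal sketch of one common construction, min-max normalisation followed by a weighted sum; the indicators, ranges, and weights below are all invented:

```python
# One common construction: min-max normalise each sub-indicator to 0-1,
# then combine with weights summing to 1. Values, ranges, and weights
# are all invented.

indicators = {           # raw value, scale minimum, scale maximum
    "literacy_rate":    (72,  40, 100),
    "school_enrolment": (88,  50, 100),
    "learning_score":   (310, 200, 500),
}
weights = {"literacy_rate": 0.4, "school_enrolment": 0.3,
           "learning_score": 0.3}

def normalise(value, lo, hi):
    """Min-max rescale a raw value onto a common 0-1 scale."""
    return (value - lo) / (hi - lo)

composite = sum(
    weights[name] * normalise(*triple)
    for name, triple in indicators.items()
)
print(f"Composite index: {composite:.2f}")  # ~0.55 with these inputs
```
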

Confounding Variables

Methods

Extraneous variables that correlate with both the intervention and the outcome, creating spurious associations that threaten causal inference in evaluation.

Content Analysis

Methods

A systematic approach to analysing communication content, identifying patterns, themes, and biases in text, audio, or video data through structured coding.

Continuous Improvement

Learning

A systematic, ongoing approach to enhancing programme performance through iterative learning, feedback, and adaptation.

Counterfactual

Methods

The comparison between what happened and what would have happened in the absence of an intervention; the fundamental basis for establishing causal attribution in impact evaluation.

Custom vs Standard Indicators

Indicators

The choice between donor-provided standard indicators and programme-specific custom indicators, balancing compliance requirements with contextual relevance.

Dashboard

Learning

A visual display of key monitoring indicators enabling rapid assessment of programme performance at a glance.

Dissemination

Learning

Active, intentional process of sharing M&E findings with relevant audiences to promote understanding, learning, and evidence use.

Donor Reporting

Learning

The process of systematically communicating programme progress, results, and financial information to funding organizations according to their specific requirements and timelines.

Donor Requirements

Planning

M&E obligations specified in grant agreements and donor policies that shape system design and reporting.

Empowerment Evaluation

Methods

A self-evaluation approach where programme participants systematically assess their own work to improve programmes and secure future ownership.

Endline

Indicators

A final data collection point at programme completion that measures achieved outcomes against baseline and target values.

Evaluability Assessment

Evaluation

A preliminary review of whether a programme is sufficiently mature and documented to be meaningfully evaluated.

Evaluation Questions

Evaluation

The overarching questions an evaluation will investigate, distinct from survey or interview questions.

Evidence Synthesis

Methods

The systematic process of identifying, selecting, and integrating findings from multiple studies to inform programme design, evaluation, and decision-making.

Evidence-Based Decision Making

Learning

Using M&E evidence to inform programme, management, and policy decisions rather than intuition or habit.

Ex-Ante vs Ex-Post Evaluation

Evaluation

The temporal dimension of evaluation: ex-ante occurs before implementation to inform design, while ex-post occurs after completion to assess outcomes and lessons.

Feedback Loop

Learning

A structured process for collecting, analysing, and acting on information to improve programme performance and outcomes.

Formative vs Summative Evaluation

Evaluation

Formative evaluation improves programmes during implementation; summative evaluation judges their overall merit after completion.

Impact

Methods

Long-term, higher-level effects attributable or contributed to by a programme; broader change beyond individual outcomes.

Impact Stories

Cross-Cutting

Narrative accounts that illustrate how a programme has influenced the lives of beneficiaries, combining quantitative outcomes with qualitative human experience.

Inception Report

Evaluation

The first formal deliverable from an evaluation team, detailing refined methodology before primary data collection.

Indicator

Indicators

A specific, observable, measurable variable that tracks progress toward an outcome or output.

Indicator Reporting

Learning

The systematic collection, compilation, and presentation of indicator data to track programme performance and communicate results to stakeholders and donors.

Input

Frameworks

Resources invested in a programme (money, staff, materials, time) that enable activities to happen.

Intervention Logic

Frameworks

The causal chain connecting programme activities to intended outcomes, showing how and why a set of interventions is expected to lead to desired change.

Knowledge Sharing

Learning

The deliberate practice of capturing, organizing, and distributing insights, lessons, and best practices across teams and organizations to improve programme performance and avoid repeating mistakes.

Learning

Learning

The systematic process of gathering evidence, reflecting on it, and using it to improve programme strategy and implementation.

Learning Cycles

Learning

Structured, recurring periods of reflection and adaptation where programme teams review data, draw lessons, and adjust implementation accordingly.

Lessons Learned

Learning

Documented insights from programmes identifying what worked, what did not work, and why, with actionable specificity.

Literature Review

Methods

A systematic, critical synthesis of existing research on a specific topic, identifying what is known, gaps in knowledge, and evidence for programme design.

LQAS

Methods

Lot Quality Assurance Sampling is a rapid decision-making method that classifies programmes or areas as pass/fail against a threshold, commonly used for health programme monitoring.
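
A minimal sketch of the decision rule, assuming the commonly cited configuration of 19 sampled respondents and a decision rule of 13; the coverage counts are invented:

```python
# LQAS decision rule: an area "passes" if at least `decision_rule` of the
# `n` sampled respondents meet the standard. The n=19, d=13 configuration
# is commonly cited in health monitoring; the coverage counts are invented.

def lqas_classify(successes, n=19, decision_rule=13):
    """Classify an area as pass/fail against the coverage threshold."""
    assert 0 <= successes <= n
    return "pass" if successes >= decision_rule else "fail"

areas = {"Area 1": 16, "Area 2": 11, "Area 3": 13}  # covered, of 19 sampled
for area, covered in areas.items():
    print(f"{area}: {covered}/19 -> {lqas_classify(covered)}")
```
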

M&E Budget

Planning

The portion of a programme budget dedicated to monitoring, evaluation, and learning activities.

M&E Framework

Planning

The structured document specifying what will be measured, how, by whom, and how often.

Meta-Evaluation

Evaluation

The systematic evaluation of an evaluation's quality, assessing whether it met professional standards and produced credible, useful findings.

Midline

Data Collection

A data collection point conducted midway through a programme to assess trajectory and enable adaptive decisions.

Milestone

Indicators

A significant intermediate checkpoint or event that signals progress toward a target, used to track whether a programme is on schedule to achieve its intended outcomes.

Monitoring vs Evaluation

Cross-Cutting

Monitoring is the continuous, systematic tracking of programme activities and outputs; evaluation is the periodic, in-depth assessment of outcomes, impact, and causal attribution.

Narrative Reporting

Learning

Qualitative, story-based reporting that contextualizes quantitative indicators with explanations of what happened, why it happened, and what it means for programme learning and decision-making.

Organisational Learning

Learning

The systematic process by which an organisation captures, analyses, and applies lessons from experience to improve programme performance and decision-making.

Outcome

Frameworks

Changes in behaviour, knowledge, skills, or conditions resulting from programme outputs, experienced by beneficiaries.

Outcome-Level Analysis

Methods

The systematic examination of outcomes to determine whether a programme achieved its intended results, distinguishing between expected and unexpected outcomes, and assessing the significance and sustainability of changes observed.

Output

Frameworks

Direct, tangible products of programme activities; what the programme produces, not what beneficiaries gain.

Participatory M&E

Methods

An approach to monitoring and evaluation that actively involves stakeholders, especially beneficiaries, at every stage, from design through to using findings for decision-making.

Performance Dashboards

Learning

Visual management interfaces that display key performance indicators in real-time, enabling programme teams and stakeholders to monitor progress, identify issues, and make data-driven decisions.

Performance Evaluation

Evaluation

An assessment of how well a programme or organisation is achieving its intended results and operating efficiently against established standards and targets.

Performance Management

Planning

The systematic use of monitoring data, evaluation findings, and feedback to guide programme decisions, improve results, and ensure accountability to stakeholders.

Primary vs Secondary Data

Methods

Primary data is collected firsthand for a specific purpose; secondary data is existing data repurposed for new analysis. Each has distinct trade-offs in cost, timeliness, and relevance.

Process Evaluation

Evaluation

Assessment of how a programme is implemented, whether activities are delivered as planned and to intended quality standards.

Programme Theory

Frameworks

The explicit articulation of how a programme is expected to produce change.

Progress Report

Learning

A periodic document submitted by programmes to donors detailing implementation progress, indicator performance, and key issues.

Purposive Sampling

Data Collection

A non-probability sampling approach where researchers deliberately select participants based on specific characteristics or knowledge relevant to the research objectives.

Qualitative Data

Data Collection

Non-numerical information captured through words, images, or observations that reveals the how and why behind programme outcomes, providing depth and context to quantitative findings.

Quantitative Data

Data Collection

Numerical data collected through structured measurement, enabling statistical analysis, generalization, and objective comparison across programmes and contexts.

Random Sampling

Methods

A probability sampling method where every member of the population has an equal, known chance of selection, enabling statistical inference to the broader population.

Randomised Controlled Trial

Methods

An experimental evaluation design that randomly assigns participants to treatment and control groups to establish causal attribution between an intervention and observed outcomes.

Rapid Assessment

Methods

A condensed data collection approach designed to generate actionable insights quickly, typically using streamlined qualitative and quantitative methods in time-constrained contexts.

Real-Time Evaluation

Evaluation

An evaluation approach conducted during programme implementation to provide immediate feedback for adaptive management and mid-course corrections.

Real-Time Monitoring

Planning

The continuous collection and analysis of data during programme implementation to enable rapid detection of issues and timely corrective action.

Reflection Sessions

Learning

Structured gatherings where programme teams and stakeholders pause to examine what happened, why it happened, and what should change as a result.

Reliability

Methods

The consistency and repeatability of a measurement, whether the same tool produces stable results across repeated applications, different raters, or different time periods.
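
One widely used internal-consistency statistic is Cronbach's alpha. A minimal sketch with invented item scores:

```python
# Internal-consistency reliability via Cronbach's alpha:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
# Five respondents x three items, all scores invented. Population variance
# is used throughout; the (n-1)/n factors cancel in the ratio.

from statistics import pvariance

scores = [  # rows: respondents, columns: items on the same scale
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(scores[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*scores)]  # variance per item
total_var = pvariance([sum(row) for row in scores])   # variance of sum score

alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # 0.92 for these scores
```
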

Results Chain

Frameworks

The sequential hierarchy from activities through outputs, outcomes, and impact, showing how a programme is expected to create change.

Risks and Risk Mitigation

Planning

External factors that could prevent programme success and their planned mitigation strategies.

Scope of Work

Planning

A document specifying what an evaluator or consultant will deliver, within what timeframe, budget, and constraints.

SROI (Social Return on Investment)

Methods

An evaluation framework that assigns monetary values to social outcomes to calculate a return on investment.
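
A minimal sketch of the headline calculation, discounting monetised outcomes to present value; every figure below is invented:

```python
# SROI ratio = present value of monetised social outcomes / investment.
# Simple annual discounting; every figure below is invented.

investment = 100_000           # total programme cost
annual_outcome_value = 45_000  # monetised social value created per year
years = 3
discount_rate = 0.05

present_value = sum(
    annual_outcome_value / (1 + discount_rate) ** t
    for t in range(1, years + 1)
)
sroi = present_value / investment
print(f"SROI: {sroi:.2f}:1")  # ~1.23: each $1 invested yields ~$1.23 of value
```
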

Statistical Significance

Methods

A statistical measure indicating whether observed results are likely due to a real effect rather than random chance, typically assessed using p-values and hypothesis testing.
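
A minimal sketch using a two-sample t-test from SciPy; the outcome scores are invented:

```python
# Two-sample t-test: is the gap between treatment and comparison means
# likely a real effect, or plausibly chance? Scores are invented;
# requires SciPy (pip install scipy).

from scipy import stats

treatment = [68, 75, 71, 80, 66, 77, 73, 79]
comparison = [64, 70, 62, 69, 66, 65, 71, 63]

t_stat, p_value = stats.ttest_ind(treatment, comparison)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:  # conventional 5% significance threshold
    print("Difference is statistically significant at the 5% level.")
else:
    print("Difference could plausibly be due to chance.")
```
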

Storytelling for Impact

Cross-Cutting

The strategic use of narrative to make M&E findings memorable, actionable, and influential for decision-makers and stakeholders.

Sustainability Evaluation

Evaluation

Assessment of a programme's continued benefits and functionality after external funding has ended, examining whether outcomes persist and systems remain operational.

Systematic Review

Evaluation

A rigorous, structured approach to identifying, appraising, and synthesizing all available evidence on a specific evaluation question using explicit, reproducible methods.

Target

Indicators

The specific value an indicator is expected to reach by a defined date, quantifying what success looks like.

Thematic Analysis

Methods

A systematic method for identifying, analyzing, and reporting patterns (themes) in qualitative data through coding and categorization.

Triangulation

Methods

Using multiple data sources, methods, or perspectives to cross-verify findings and strengthen the validity of evaluation conclusions.

Validity (Internal & External)

Methods

The degree to which an evaluation accurately demonstrates causal relationships (internal validity) and generalizes findings beyond the study context (external validity).