Topic Hub

Evaluation

Evaluation is one of the highest-cost M&E activities, and one of the most frequently misused. A common failure pattern: an evaluation is commissioned to satisfy a reporting requirement, findings arrive too late to influence decisions, and the report sits unread. Good evaluation design starts with use: who will act on the findings, and how.

This hub brings together everything M&E Studio offers for evaluation: reference entries, interactive design tools, AI-assisted guides, and practical decision frameworks to help you choose the right approach for your context and budget.

How Do I Choose?

Side-by-side comparisons, decision trees, and practical guidance for common M&E decisions.

How Much Should You Budget for M&E?
The 5-10% rule explained, evaluation cost ranges by type, budget breakdown templates, and how to negotiate when the M&E budget is too small for what the donor is asking.
How to Choose
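To make the 5-10% rule concrete, here is a minimal sketch of the arithmetic. The programme budget and the exact percentage band below are illustrative assumptions, not figures from the guide:

```python
def me_budget_range(total_budget, low=0.05, high=0.10):
    """Return the low and high M&E budget under the 5-10% rule of thumb."""
    return total_budget * low, total_budget * high

# Illustrative programme budget of $2,000,000
low, high = me_budget_range(2_000_000)
print(f"M&E budget range: ${low:,.0f} to ${high:,.0f}")  # $100,000 to $200,000
```

The right percentage for a given programme depends on its size, the evaluation requirements, and donor expectations, which is the nuance the full guide covers.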
How to Choose an Evaluation Methodology
A decision framework for choosing evaluation design. Covers experimental, quasi-experimental, and non-experimental approaches.
Decision Guide
How to Choose Sample Size for M&E
A practical guide to sample size for program evaluations, with rules of thumb, worked examples, and budget-statistics tradeoffs.
How to Choose
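As a taste of the rules of thumb the guide covers, the sketch below applies Cochran's formula for estimating a proportion, a common starting point. The defaults here (roughly 95% confidence, a ±5% margin of error, p = 0.5) are conventional assumptions, and the guide's own worked examples may use different values:

```python
import math

def cochran_sample_size(p=0.5, margin_of_error=0.05, z=1.96, population=None):
    """Sample size for estimating a proportion.

    p: expected proportion (0.5 is the most conservative choice)
    margin_of_error: desired half-width of the confidence interval
    z: z-score for the confidence level (1.96 for ~95%)
    population: if given, apply the finite population correction
    """
    n = (z ** 2 * p * (1 - p)) / margin_of_error ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(cochran_sample_size())                 # 385 for a very large population
print(cochran_sample_size(population=1200))  # 292 once the correction applies
```

Design effects, clustering, and non-response all push the required sample upward, which is where the budget-statistics tradeoffs come in.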
Output vs Outcome vs Impact: The Key Difference
The most common confusion in M&E. Learn the difference between outputs, outcomes, and impact with clear examples from health, education, and food security programs.
Comparison
Qualitative vs Quantitative vs Mixed Methods
Qualitative, quantitative, and mixed methods are not a quality ranking. They answer different questions. Here's when to use each, how to combine them, and what integration actually looks like.
Comparison
Surveys vs Interviews vs Focus Groups
The three most common M&E data collection methods, compared. Surveys tell you how many; interviews tell you why; focus groups tell you what people agree on.
Comparison

Interactive Tools

Evaluation Readiness Quiz
Assess whether your program is ready for evaluation across seven readiness factors.
Data Collection Method Selector
Answer four questions and get recommended data collection methods matched to your needs.

Reference Library (29 entries)

In-Depth Guides

In-Depth Guide
Contribution Analysis
A structured approach to building a credible case for how and why a programme contributed to observed outcomes, without requiring experimental attribution.
In-Depth Guide
Developmental Evaluation
An evaluation approach designed for complex, adaptive programmes in which goals and processes are emergent, and the evaluator works alongside the programme team as an embedded learning partner.
In-Depth Guide
Impact Evaluation
A rigorous evaluation approach that measures the causal effect of a programme on outcomes by comparing what happened with what would have happened in its absence.
In-Depth Guide
Most Significant Change
A participatory qualitative monitoring approach that systematically collects and selects stories of change to identify and share the most significant outcomes of a programme.
In-Depth Guide
Outcome Harvesting
A retrospective evaluation approach that identifies, verifies, and analyses outcomes that have occurred, then determines whether and how the programme contributed to them.
In-Depth Guide
Participatory Evaluation
An evaluation approach that actively involves stakeholders and beneficiaries throughout all stages, from design through use of findings, ensuring local ownership and relevance.
In-Depth Guide
Process Tracing
A within-case method for causal inference that tests whether the causal mechanisms predicted by a theory of change actually operated in a specific case, using systematic evidence to evaluate causal claims.
In-Depth Guide
Quasi-Experimental Design
A family of evaluation designs that estimate causal programme effects without random assignment, using statistical methods to construct credible comparison groups.
In-Depth Guide
Realist Evaluation
An evaluation approach that asks what works, for whom, in what circumstances, and why, by identifying the mechanisms through which programmes produce outcomes in specific contexts.
In-Depth Guide
Utilization-Focused Evaluation
An evaluation approach where every design decision is driven by the needs of the primary intended users, the specific people who will actually use the findings to make specific decisions.

Overviews

Cost-Effectiveness Analysis
A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.
Evaluation Criteria (DAC)
The OECD-DAC framework provides six standard criteria (relevance, coherence, effectiveness, efficiency, impact, and sustainability) for systematically assessing the merit and value of development interventions.
Evaluation Matrix
A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach; it is the operational blueprint for executing an evaluation.
Evaluation Terms of Reference
A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organization and the evaluation team.
Mixed Methods Evaluation
An evaluation approach that systematically combines quantitative and qualitative data to provide a more complete understanding of programme effects, mechanisms, and context.
Rubric-Based Assessment
A structured evaluation approach using predefined criteria and performance levels to systematically assess programmes, projects, or interventions against established standards.

Quick Reference

Accountability Evaluation · Audit vs Evaluation · Compliance Evaluation · Evaluation Questions · Ex-Ante vs Ex-Post Evaluation · Formative vs Summative Evaluation · Inception Report · Meta-Evaluation · Performance Evaluation · Process Evaluation · Real-Time Evaluation · Sustainability Evaluation · Systematic Review

AI Guides

How to Draft Evaluation Reports with AI
Stop staring at a blank page. A 4-phase workflow turns your completed analysis into donor-ready evaluation narrative in hours, not days.
How to Use AI for Baseline and Endline Analysis
Comparing baseline and endline data is the backbone of impact measurement. AI can run the comparisons, flag anomalies, and draft the narrative, but only if you structure the analysis around specific evaluation questions.
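The guide assumes you already have a structured baseline-endline comparison for the AI to work from. As a rough illustration of that underlying structure only, here is a minimal pandas sketch; the column names, sample values, and the 30% flag threshold are hypothetical, not taken from the guide:

```python
import pandas as pd

# Hypothetical long-format data: one row per respondent per survey round.
# Column names and values are illustrative assumptions only.
df = pd.DataFrame({
    "round": ["baseline"] * 4 + ["endline"] * 4,
    "indicator": ["dietary_diversity", "dietary_diversity", "income", "income"] * 2,
    "value": [3.1, 2.9, 210, 190, 4.2, 4.0, 205, 400],
})

# Mean per indicator and round, then baseline vs endline side by side.
summary = (
    df.groupby(["indicator", "round"])["value"]
      .mean()
      .unstack("round")
)
summary["abs_change"] = summary["endline"] - summary["baseline"]
summary["pct_change"] = 100 * summary["abs_change"] / summary["baseline"]

# Flag unusually large shifts for manual review before drafting any narrative
# (the 30% threshold is arbitrary, chosen here just for illustration).
summary["flagged"] = summary["pct_change"].abs() > 30
print(summary)
```

A structured comparison like this, organised around the specific indicators and evaluation questions you care about, is the kind of input the guide's workflow is built on.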

Explore Other Topics

MEL Design
Theories of change, logframes, results frameworks, and logic models
Data Collection
Methods, tools, and sampling for field data
Indicators
Select, design, track, and report on indicators
Data Quality
Ensure trustworthy data from collection to analysis
Sampling
Sample size, sampling methods, design effect, and common mistakes