
M&E Decision Guides

Side-by-side comparisons and decision frameworks for the M&E choices that come up most often: logframe vs theory of change, output vs outcome vs impact, and more.

21 guides | Updated regularly | Free, no signup

Comparisons

Side-by-side comparisons that help you see the difference between two or more options.

Baseline vs Endline vs Midline Surveys Explained

When you need baseline, midline, and endline surveys, what each collects, and what to do if you missed your baseline.

Read the guide

Indicator vs Target vs Milestone: What's the Difference?

Indicators, targets, and milestones are the building blocks of any MEL plan, but they're constantly confused. Here's how they relate, with examples from real programs.

Read the guide

KoboToolbox vs ODK vs SurveyCTO

The three most common mobile data collection platforms for M&E, compared on features, cost, offline capability, skip logic, and hosting. Plus CommCare for case management.

Read the guide

Logframe vs Theory of Change

Two frameworks everyone confuses. When you need a logframe, when you need a Theory of Change, why most programs need both, and which donors require which.

Read the guide

MEL vs M&E vs MEAL vs MLE: What's the Difference?

M&E, MEL, MEAL, MLE, DME: the acronym soup explained. What each stands for, which one to use, and why the terminology wars matter less than you think.

Read the guide

Output vs Outcome vs Impact: The Key Difference

The most common confusion in M&E. Learn the difference between outputs, outcomes, and impact with clear examples from health, education, and food security programs.

Read the guide

Probability vs Non-Probability Sampling: When to Use Each

Probability vs non-probability sampling in M&E: when each approach is valid, which method fits your context, and five common mistakes that invalidate findings.

Read the guide

Qualitative vs Quantitative vs Mixed Methods

Qualitative, quantitative, and mixed methods are not a quality ranking. They answer different questions. Here's when to use each, how to combine them, and what integration actually looks like.

Read the guide

Surveys vs Interviews vs Focus Groups

The three most common M&E data collection methods, compared. Surveys tell you how many, interviews tell you why, focus groups tell you what people agree on.

Read the guide

Cluster Sampling vs Stratified Sampling

Cluster sampling saves money when populations are spread out; stratified sampling ensures subgroup comparisons. When to use each, with a short sampling sketch below.

Read the guide
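
To make the stratified side concrete, here is a minimal pandas sketch that draws a proportional sample within each stratum. The district strata, their sizes, and the 10% sampling rate are illustrative assumptions, not figures from the guide:

    import pandas as pd

    # Illustrative sampling frame: each respondent tagged with a stratum.
    frame = pd.DataFrame({
        "respondent_id": range(1000),
        "district": ["North"] * 600 + ["South"] * 300 + ["East"] * 100,
    })

    # Proportional stratified sample: 10% drawn from every district,
    # so each subgroup is guaranteed to appear in the sample.
    sample = frame.groupby("district").sample(frac=0.10, random_state=42)

    print(sample["district"].value_counts())  # 60 North, 30 South, 10 East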

RCT vs Quasi-Experimental Design

When to use a randomized controlled trial vs a quasi-experimental design. Feasibility, cost, rigor, and what each can actually tell you about your program's impact.

Read the guide

The DAC Evaluation Criteria Explained

The six OECD-DAC evaluation criteria explained: what each means, which ones to use, and how to write evaluation questions for each.

Read the guide

Decision Trees

Step-by-step decision frameworks for narrowing down options to the right choice.

How to Choose an Evaluation Methodology

A decision framework for choosing an evaluation design, covering experimental, quasi-experimental, and non-experimental approaches.

Read the guide

How to Choose

Guides that walk through the criteria and trade-offs to help you make a call.

How Much Should You Budget for M&E?

The 5-10% rule explained, evaluation cost ranges by type, budget breakdown templates, and how to negotiate when the M&E budget is too small for what the donor is asking. A quick worked example of the rule follows below.

Read the guide
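
To make the 5-10% rule concrete, here is a minimal worked example in Python. The $2,000,000 program budget is a made-up figure, not from the guide:

    # Hypothetical illustration of the 5-10% rule of thumb.
    program_budget = 2_000_000  # example total program budget in USD

    for share in (0.05, 0.075, 0.10):
        print(f"{share:.1%} of budget -> ${program_budget * share:,.0f} for M&E")

    # 5.0% of budget -> $100,000 for M&E
    # 7.5% of budget -> $150,000 for M&E
    # 10.0% of budget -> $200,000 for M&E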

How to Choose Sample Size for M&E

A practical guide to sample size for program evaluations, with rules of thumb, worked examples, and budget vs. statistics trade-offs. A minimal calculation sketch follows below.

Read the guide
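
For a flavor of the worked examples, here is a minimal Python sketch of Cochran's formula for estimating a proportion, with an optional finite population correction. This is a standard textbook calculation, not necessarily the exact method the guide uses; all input values are illustrative assumptions:

    import math

    def cochran_sample_size(p=0.5, margin=0.05, z=1.96, population=None):
        # p: expected proportion (0.5 is the conservative default)
        # margin: desired margin of error (0.05 = +/- 5 points)
        # z: z-score for the confidence level (1.96 ~ 95%)
        # population: if given, apply the finite population correction
        n = (z ** 2) * p * (1 - p) / margin ** 2
        if population is not None:
            n = n / (1 + (n - 1) / population)
        return math.ceil(n)

    print(cochran_sample_size())                 # 385 for a large population
    print(cochran_sample_size(population=2000))  # 323 after the correction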

How to Clean Your Dataset Before Analysis

A step-by-step checklist for cleaning M&E data after collection: duplicate detection, outlier identification, skip logic validation, consistency checks, and cleaning log documentation. A short cleaning sketch follows below.

Read the guide
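
As a taste of the checklist, here is a minimal pandas sketch of the first two steps, duplicate detection and outlier flagging with the 1.5 x IQR rule. The file name and the household_id and income columns are hypothetical:

    import pandas as pd

    df = pd.read_csv("survey_data.csv")  # hypothetical raw export

    # Duplicate detection: keep=False marks every row that shares an ID.
    dupes = df[df.duplicated(subset="household_id", keep=False)]
    print(len(dupes), "rows share a household_id; review before dropping")

    # Outlier identification: flag (don't delete) values outside 1.5*IQR.
    # Note: between() treats missing values as False, so NaNs get flagged too.
    q1, q3 = df["income"].quantile([0.25, 0.75])
    iqr = q3 - q1
    df["income_outlier"] = ~df["income"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    print(int(df["income_outlier"].sum()), "income values flagged for review")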

How to Conduct a Data Quality Assessment

A step-by-step guide to conducting a DQA using the five standard dimensions. How to select indicators, design verification procedures, conduct the site visit, and write the DQA report.

Read the guide

How to Write Donor Reports That Actually Get Read

Indicator tables, narrative structure, how to explain underperformance, and what donors actually look for in a report.

Read the guide

How to Write Evaluation Terms of Reference

A practical guide to writing evaluation TORs that get you a good evaluation. Scoping, evaluation questions, methodology expectations, timelines, budgets, and evaluator selection.

Read the guide

How to Write the M&E Section of a Proposal

A step-by-step guide to writing the M&E, MEL, or MEAL section of a program proposal. What to include, how to structure it, and the mistakes that get proposals rejected.

Read the guide

Looking for definitions instead?

The M&E Library has 140+ entries explaining frameworks, methods, and key concepts.

Browse the M&E Library