M&E Studio
AI for M&E, Built for Practitioners

© 2026 Logic Lab LLC. All rights reserved.

Evaluability Assessment

A preliminary review of whether a program is sufficiently mature and documented to be meaningfully evaluated.

Definition

An evaluability assessment is a preliminary exercise conducted before a full evaluation is commissioned. It examines whether a program is sufficiently mature, documented, and clear in its design to be evaluated meaningfully. The assessment asks: Are the program's goals and theory of change clearly articulated? Do baseline data and comparison groups exist, or can they be constructed? Is there sufficient program documentation? Are key stakeholders aligned on the program's logic? The output is a recommendation: either a green light to proceed with the evaluation, or recommendations for strengthening the program first.

Why It Matters

Evaluating an immature program wastes resources. If the program design is not yet clear, its theory of change is still shifting, or baseline data do not exist, an evaluation conducted too early will struggle to attribute outcomes to the intervention or measure change from an unknown starting point. An evaluability assessment prevents this by surfacing readiness gaps before major evaluation resources are invested. It also identifies what documentation or data collection work needs to happen before evaluation can proceed, turning the assessment itself into a planning tool.

In Practice

An evaluability assessment typically takes 3-6 weeks and costs significantly less than a full evaluation. A small team reviews program documents (design, logframe, monitoring reports), interviews key staff and stakeholders, and documents the program's stated logic and change hypotheses. The team assesses: Is the theory of change internally coherent? Are outcome and impact indicators feasible to measure? Do monitoring data currently exist? Are administrative data or control groups available? The team then drafts a report identifying gaps and recommending actions to take before evaluation proceeds. For example, an assessment might find that a program's outcomes are clearly defined but no baseline data exist, requiring the program to conduct a baseline study before an evaluation can measure change.
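The readiness questions above can be captured as a simple checklist that yields the go/strengthen recommendation. This is a minimal Python sketch; the criterion names and the all-criteria-must-pass rule are illustrative assumptions, not a standard rubric:

```python
# Hypothetical sketch of an evaluability checklist. Criterion names and
# the decision rule are illustrative assumptions for this example.

READINESS_CRITERIA = [
    "theory_of_change_articulated",
    "baseline_or_comparison_feasible",
    "program_documentation_sufficient",
    "stakeholders_aligned_on_logic",
]

def assess_evaluability(answers: dict) -> str:
    """Return a green light if every criterion is met,
    otherwise list the readiness gaps to address first."""
    gaps = [c for c in READINESS_CRITERIA if not answers.get(c, False)]
    if not gaps:
        return "Green light: proceed to full evaluation."
    return "Strengthen first: " + ", ".join(gaps)

print(assess_evaluability({
    "theory_of_change_articulated": True,
    "baseline_or_comparison_feasible": False,  # no baseline yet
    "program_documentation_sufficient": True,
    "stakeholders_aligned_on_logic": True,
}))
# → Strengthen first: baseline_or_comparison_feasible
```

In practice each criterion would be a judgment backed by document review and interviews rather than a boolean, but the structure (explicit criteria, explicit gaps, explicit recommendation) is the point of the exercise.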


At a Glance

Determine whether a program has sufficient clarity and documentation to be evaluated effectively

Best For

  • Early program stage
  • Before major evaluation investments
  • Risk reduction

Related Topics

  • Evaluation Questions: The overarching questions an evaluation will investigate, distinct from survey or interview questions.
  • Scope of Work: A document specifying what an evaluator or consultant will deliver, within what timeframe, budget, and constraints.
  • Theory of Change: A structured explanation of how and why a set of activities is expected to lead to desired outcomes, mapping the causal logic from inputs to impact.
  • M&E Plans: A detailed operational document that translates your logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.