Create an Evaluability Assessment
Create an evaluability assessment to determine if a program is ready for evaluation, examining program design clarity, data availability, stakeholder readiness, and feasibility constraints.
||
You are a senior MEAL specialist with expertise in evaluation planning and program design review. Your task is to create a comprehensive Evaluability Assessment (EA) to determine whether a program is ready for evaluation, following the approaches outlined by Davies (2013) and Trevisan and Walser (2015).
The program has been operating for some time and an evaluation is being considered. The evaluability assessment will examine whether the program's design, data systems, and organizational context are sufficient to support a credible evaluation.
**Develop the following components:**
1. **Program Design Clarity Assessment:**
* Review the program's theory of change/logic model for clarity and plausibility, scoring each of the following elements from 1 (weak) to 5 (strong):
- Problem statement: Is the problem clearly defined and evidence-based?
- Goal and objectives: Are they specific, measurable, and realistic?
- Causal logic: Are the links between activities, outputs, outcomes, and impact clearly articulated?
- Assumptions: Are key assumptions identified and testable?
- External factors: Are contextual risks and enablers acknowledged?
* Identify logical gaps, inconsistencies, or untested assumptions in the program design
* Recommend specific improvements to the theory of change before evaluation proceeds
2. **Indicator and Data Availability Review:**
* Create a table with columns: Indicator, Data Source, Baseline Available (Y/N), Current Data Quality (high/medium/low/none), Frequency of Collection, Gaps Identified, and Recommendation
* Assess key indicators across outputs and outcomes
* For each indicator, rate data quality on: completeness, accuracy, timeliness, consistency, and disaggregation
* Identify critical data gaps that would prevent meaningful evaluation
* Recommend solutions for data gaps (new collection, proxy indicators, alternative sources)
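The per-indicator roll-up in the review above can be sketched in code. This is a minimal illustration, not part of the source method: the criteria names come from the list above, but the cutoffs and the rule that any single weak criterion caps the overall rating are assumptions chosen for the example.

```python
# Sketch: roll the five data-quality criteria into one rating per
# indicator. Cutoffs below are illustrative assumptions.
CRITERIA = ("completeness", "accuracy", "timeliness",
            "consistency", "disaggregation")
LEVELS = {"low": 1, "medium": 2, "high": 3}

def data_quality(ratings: dict[str, str]) -> str:
    """ratings maps each criterion to 'low' | 'medium' | 'high'."""
    if set(ratings) != set(CRITERIA):
        raise ValueError("rate all five criteria exactly once")
    scores = [LEVELS[ratings[c]] for c in CRITERIA]
    avg = sum(scores) / len(scores)
    # A single weak criterion caps the overall rating: a mix of
    # highs and lows is reported as medium, not high.
    if avg >= 2.5 and min(scores) >= 2:
        return "high"
    if avg >= 1.5:
        return "medium"
    return "low"

print(data_quality({
    "completeness": "high", "accuracy": "medium", "timeliness": "high",
    "consistency": "medium", "disaggregation": "low",
}))  # -> medium
```

Indicators that score "low" overall would then feed directly into the Gaps Identified and Recommendation columns of the table.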
3. **Stakeholder Readiness Assessment:**
* Map key stakeholders and their evaluation interests
* Assess each stakeholder group on demand for findings, willingness to participate, capacity to use findings, and potential concerns
* Identify the primary intended users and their priority information needs
* Assess organizational learning culture
4. **Feasibility Analysis:**
* **Resource feasibility:** Estimated budget, available evaluation expertise, timeline constraints
* **Methodological feasibility:** Can credible comparison groups be established? Is the sample size adequate?
* **Ethical feasibility:** Are there ethical constraints? Can informed consent be obtained?
* **Political feasibility:** Are there political sensitivities? Will findings be usable regardless of results?
* **Logistical feasibility:** Field access, data collection infrastructure, translation needs, seasonal considerations
5. **Evaluability Scorecard:** Create a summary scorecard that rates the program's overall evaluability across dimensions:
* Program design clarity (1-5)
* Data availability and quality (1-5)
* Stakeholder readiness (1-5)
* Resource feasibility (1-5)
* Methodological feasibility (1-5)
* Overall evaluability rating: Ready, Conditionally Ready (with specific prerequisites), or Not Ready
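One way to make the scorecard's overall rating reproducible is to derive it from the five dimension scores. The sketch below is an illustrative assumption, not a published standard: the thresholds, and the guard that a single very weak dimension blocks a "Ready" verdict, are choices made for the example.

```python
# Sketch: map 1-5 dimension scores to the overall evaluability
# rating. Thresholds are illustrative assumptions.
DIMENSIONS = (
    "program_design_clarity",
    "data_availability_quality",
    "stakeholder_readiness",
    "resource_feasibility",
    "methodological_feasibility",
)

def overall_rating(scores: dict[str, int]) -> str:
    """scores maps each dimension to an integer from 1 to 5."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError("score every dimension exactly once")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("scores must be integers from 1 to 5")
    avg = sum(scores.values()) / len(scores)
    lowest = min(scores.values())
    # One very weak dimension can block a credible evaluation even
    # if the average is acceptable, hence the minimum-score guard.
    if avg >= 4 and lowest >= 3:
        return "Ready"
    if avg >= 3 and lowest >= 2:
        return "Conditionally Ready"
    return "Not Ready"

print(overall_rating({
    "program_design_clarity": 4,
    "data_availability_quality": 3,
    "stakeholder_readiness": 4,
    "resource_feasibility": 3,
    "methodological_feasibility": 2,
}))  # -> Conditionally Ready
```

Whatever rule is used, the scorecard should state it explicitly so the overall verdict can be traced back to the dimension scores.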
6. **Recommendations:**
* If "Ready": Recommended evaluation design, scope, and timing
* If "Conditionally Ready": Specific actions needed before evaluation can proceed, with responsible parties and deadlines
* If "Not Ready": What needs to change and a suggested timeline for reassessment
* Regardless of rating: Quick wins for strengthening the M&E system immediately
7. **EA Process Documentation:**
* Methods used for the EA itself (document review, interviews, site visits, data quality audits)
* Stakeholders consulted
* Limitations of the EA
* Suggested timeline for the EA process (typically 4-8 weeks)
**Output Format:**
Deliver all components as clearly labeled sections. The indicator review and evaluability scorecard should be formatted as tables. Recommendations should be prioritized (high, medium, low) with specific actionable steps.
evaluability-assessment, evaluation-readiness, program-design, data-quality, feasibility, evaluation-planning, davies