Pillar · Methods · 9 min read

Utilization-Focused Evaluation

An evaluation approach in which every design decision is driven by the needs of the primary intended users: the specific people who will actually use the findings to make specific decisions.

When to Use

Utilization-Focused Evaluation (UFE) is the right approach when the primary concern is ensuring evaluation findings are actually used, not just produced. Developed by Michael Quinn Patton, UFE starts from a simple but radical premise: evaluations are judged not by methodological quality alone, but by whether they produce findings that inform real decisions. An evaluation that no one uses is a failure, regardless of its technical quality.

Use it when:

  • Previous evaluations have not been used: a history of evaluation reports gathering dust indicates the design process ignored intended users
  • Decision-makers have specific, time-bound choices to make: the evaluation can be designed to provide exactly the information needed, when it is needed
  • Multiple stakeholders have different information needs: UFE provides a framework for negotiating and prioritising across competing needs
  • Donor or organisational policy requires it: CRS, CARE, and several bilateral donors have institutionalised UFE as their standard evaluation approach
  • Building evaluation capacity: involving users in the evaluation process builds their capacity to use evidence in future decision-making

UFE is not an excuse to produce only the findings users want to see. The evaluator maintains professional independence and reports findings honestly, even when they are uncomfortable. UFE shapes the questions and communication around user needs, not the findings themselves.

Scenario                                       | Use UFE?  | Better Alternative
Evaluation findings consistently ignored       | Yes       | —
Specific programme adaptation decision pending | Yes       | —
Causal attribution is the primary goal         | Alongside | Impact Evaluation
Programme is highly complex and emergent       | Alongside | Developmental Evaluation
Donor mandates accountability reporting        | Alongside | Results-Based Management

How It Works

UFE is not a specific data collection method; it is a process framework for designing any evaluation around user needs. The evaluator applies evaluative thinking and professional rigour to whatever methods are most appropriate for the questions users need answered.

Step 1: Identify the primary intended user(s)

This is the most critical step. The primary intended user is a specific, named person or persons who will use the findings to make a specific decision. "The programme" is not a user. "The Programme Director, who will present findings to the Board in March to decide whether to renew the contract" is a user. Patton calls this the "personal factor": abstract users produce abstract evaluations that no one uses.

Step 2: Identify the intended use

What specific decisions or actions will the primary user take based on the evaluation? The intended use shapes everything: the questions, the methods, the timeline, and the communication approach. If the intended use has not yet been determined, the evaluation cannot yet be designed.

Step 3: Engage users in evaluation design

Involve primary intended users in developing evaluation questions, reviewing the evaluation design, and interpreting draft findings. This is not public consultation; it is a working relationship with the specific people who will use the results.

Step 4: Match methods to questions, not to convention

In UFE, the evaluation questions (derived from user needs) determine the methods, not the other way around. If users need a rapid answer in six weeks, a two-year impact evaluation is wrong regardless of its methodological superiority. Use whatever methods produce credible, useful findings within the users' decision timeline.

Step 5: Communicate for use

Format, timing, and framing of findings must be matched to how users will consume and share them. A 60-page technical report delivered two weeks after a Board decision serves no one. A two-page brief with clear recommendations, delivered before the decision, does.

Step 6: Follow up on use

After findings are delivered, follow up to document how they were used (or why they were not). This follow-up closes the loop and gives the evaluator evidence for improving utilization in future evaluations.
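Steps 1, 2, and 5 lend themselves to a simple planning record. The sketch below is a minimal illustration, not part of Patton's framework: the class and field names (UFEDesign, primary_intended_users, intended_use, decision_deadline) are hypothetical, chosen only to show how naming the user, specifying the use, and checking the decision timeline can be made explicit before any methods are selected.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UFEDesign:
    """Illustrative record of core UFE design decisions; all field names are hypothetical."""
    primary_intended_users: list[str]   # named people and roles, not organisations
    intended_use: str                   # the specific decision the findings will inform
    decision_deadline: date             # when the users must decide
    findings_delivery: date             # when findings will reach the users
    communication_products: list[str] = field(default_factory=list)
    documented_uses: list[str] = field(default_factory=list)   # filled in at follow-up (Step 6)

    def design_gaps(self) -> list[str]:
        """Flag the gaps that UFE treats as blockers before methods are chosen."""
        gaps = []
        if not self.primary_intended_users:
            gaps.append("No named primary intended user ('the programme' does not count).")
        if not self.intended_use.strip():
            gaps.append("Intended use not specified; the evaluation cannot yet be designed.")
        if self.findings_delivery > self.decision_deadline:
            gaps.append("Findings arrive after the decision; rebuild the schedule around the decision date.")
        return gaps

# Example: a plan whose delivery date misses the decision it is meant to inform
design = UFEDesign(
    primary_intended_users=["Programme Director (contract renewal decision, March Board meeting)"],
    intended_use="Decide whether to recommend contract renewal to the Board",
    decision_deadline=date(2026, 3, 15),
    findings_delivery=date(2026, 4, 1),
)
for gap in design.design_gaps():
    print(gap)
```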

Key Components

  • Named primary intended users: specific individuals, not organisations or general audiences
  • Specified intended use: documented before evaluation design begins
  • User engagement process: structured involvement of users in question development and interpretation
  • Evaluator as active facilitator: the evaluator actively manages the utilization process, not just data collection
  • Methods selected for utility: design choices justified by what will produce useful findings, not by methodological convention
  • Communication plan: tailored outputs for each identified user's needs and decision timeline
  • Use documentation: records of how findings were actually used

Best Practices

Name the user, name the use. The discipline of writing down the specific name and role of the primary intended user and the specific decision they face transforms abstract evaluation planning into concrete usefulness. Do this before anything else.

Start with use, not with questions. Conventional evaluation starts with "what do we want to know?" UFE starts with "what do we need to decide, and what information would improve that decision?" The difference is subtle but consequential.

Protect the evaluator's independence. User engagement does not mean the evaluator tells users what they want to hear. The evaluator's responsibility is to provide honest, credible findings. UFE shapes questions and communication, not findings.

Match the evaluation timeline to the decision timeline. An evaluation delivered after a decision has been made is useless. Build the evaluation schedule around when users need findings.

Document intended use in the ToR. The evaluation Terms of Reference should explicitly state the primary intended users and the specific decisions the evaluation will inform. This creates accountability for utilization from the start.

Common Mistakes

Identifying organisations instead of people as users. "The Ministry of Health" is not a user. "Dr. Amara Diallo, Director of Primary Health Care, who is deciding whether to recommend national scale-up" is a user. Specificity about who will use the findings is what makes UFE work.

Confusing consultation with engagement. Showing a draft report to stakeholders for comment is consultation. UFE requires substantive user involvement in question development, methodology review, and preliminary finding interpretation.

Using UFE as justification to avoid rigorous methods. "The user doesn't need an RCT, they just need a basic survey" is a valid position only if a basic survey actually provides credible answers to the user's questions. UFE is not a reason to do lower-quality evaluation; it is a reason to choose the right level of rigour for the questions that matter.

Neglecting to close the utilization loop. If you do not follow up after findings are delivered to document what was decided and how findings influenced it, you have no evidence of utilization, only hope of it.

Treating UFE as a methodology. UFE is a process framework, not a data collection method. Mixed methods, surveys, case studies, and interviews can all be used within a UFE approach. The methods are chosen based on what will produce credible, useful findings for the identified users.

Examples

Mid-term review, West Africa. A CARE-funded women's economic empowerment programme in Senegal used UFE for its mid-term review. The primary intended user was the Country Director, who needed to decide which of three programme components to deepen and which to phase out before Year 3 planning. The evaluation was designed with three separate findings packages: one for the Country Director's resourcing decision, one for the programme team's operational adjustments, and one for the donor's accountability requirements. Each package used data from the same evaluation but was framed, formatted, and timed differently. The Country Director implemented all three recommended component changes within 30 days of receiving findings.

Summative evaluation, East Africa. A USAID-funded governance programme in Kenya used UFE for its final evaluation. The primary intended users were USAID's Mission and the host-government programme counterpart, each with different intended uses. The evaluator managed a joint learning workshop where both parties reviewed preliminary findings together, clarifying interpretations and identifying implications before the final report was written. The resulting report was used directly in USAID's follow-on programme design, a documented use that satisfied USAID's evaluation utilization requirements.

Organisational learning, South Asia. An international NGO in Bangladesh with a history of evaluations that went unread commissioned a UFE process review before conducting any new evaluations. The review identified that past evaluations failed because they answered questions no decision-maker cared about and were delivered after decisions were made. The UFE framework was then applied to redesign the evaluation system, connecting evaluation questions explicitly to the annual programme review cycle.

Compared To

Approach                 | Starting Point           | Evaluator Role        | Primary Focus
UFE                      | User needs and decisions | Facilitator of use    | Evaluation use
Developmental Evaluation | Emergent programme       | Embedded partner      | Real-time learning
Realist Evaluation       | Theory of change         | External analyst      | Mechanism understanding
Conventional summative   | Methodological quality   | External assessor     | Accountability verdict
Participatory evaluation | Stakeholder empowerment  | Collaborative partner | Democratic inclusion

Relevant Indicators

24 indicators across CRS, USAID, DFID, and CARE frameworks. Key examples (a simple tracking sketch follows the list):

  • Number of primary intended users engaged substantively in evaluation design (target: minimum 2)
  • Proportion of evaluation questions directly traceable to named user decision needs
  • Documented instances of evaluation findings used in programme or organisational decisions within 6 months of report
  • User satisfaction score for evaluation relevance and timeliness (rated on 1-5 scale)
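
As a rough illustration of how the quantitative examples above could be tracked, the following sketch computes the traceability proportion, the count of documented uses within six months of the report, and the mean satisfaction rating. The data and field names are hypothetical; no donor framework prescribes this format.

```python
from datetime import date

# Hypothetical tracking data for a single evaluation (illustrative only)
report_date = date(2026, 4, 1)
evaluation_questions = [
    {"question": "Which component should be deepened before Year 3?", "traceable_to_user_decision": True},
    {"question": "How satisfied were participants overall?", "traceable_to_user_decision": False},
    {"question": "Should the savings-group model be scaled up?", "traceable_to_user_decision": True},
]
documented_use_dates = [date(2026, 4, 28), date(2026, 6, 10)]   # decisions that drew on the findings
user_ratings = [4, 5, 4]                                        # relevance and timeliness, 1-5 scale

proportion_traceable = (
    sum(q["traceable_to_user_decision"] for q in evaluation_questions) / len(evaluation_questions)
)
uses_within_6_months = sum((d - report_date).days <= 183 for d in documented_use_dates)
mean_rating = sum(user_ratings) / len(user_ratings)

print(f"Questions traceable to named user decisions: {proportion_traceable:.0%}")
print(f"Documented uses within 6 months of the report: {uses_within_6_months}")
print(f"Mean user satisfaction rating (1-5): {mean_rating:.1f}")
```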

Related Tools

  • Evaluation Planner: structure your evaluation design with users, intended use, and timeline

Related Topics

  • Evaluation Terms of Reference: the document where intended users and use should be specified
  • Learning Agendas: a complementary tool for identifying priority learning questions across the organisation
  • Adaptive Management: the programme management practice that depends on evaluation findings being used in real time
  • Developmental Evaluation: Patton's approach for complex, emergent programmes, which shares UFE's focus on use
  • MEL Plans: the operational monitoring plans that provide data for utilization-focused evaluations

Further Reading

  • Patton, M.Q. (2008). Utilization-Focused Evaluation. 4th ed. Thousand Oaks: Sage. The foundational and comprehensive text.
  • Patton, M.Q. (2012). Essentials of Utilization-Focused Evaluation. Thousand Oaks: Sage. An accessible introductory version.
  • CRS (2011). MEAL in Practice Fundamentals. Catholic Relief Services. Implements UFE as organisational standard for all evaluations.
  • Johnson, K., Greenseid, L.O., Toal, S.A., King, J.A., Lawrenz, F., & Volkov, B. (2009). "Research on Evaluation Use." American Journal of Evaluation, 30(3), 377-410. Research review on evaluation use factors.

At a Glance

Designs evaluations around the specific decisions and needs of identified users, maximising the probability that findings will actually be used.

Best For

  • Evaluations where previous findings have not been acted upon
  • Organisations that need to justify evaluation investment with demonstrable use
  • Mid-term reviews where programme adaptation is the intended outcome
  • Multi-stakeholder evaluations where users have different information needs

Complexity

Medium to High

Timeframe

User engagement begins at the design phase and continues throughout the evaluation.

Linked Indicators

24 indicators across 4 donor frameworks

CRS · USAID · DFID · CARE

Examples

  • Number of identified primary intended users actively engaged in evaluation design
  • Proportion of evaluation questions traceable to specific user decisions
  • Documented use of evaluation findings in programme or organisational decisions

Related Topics

Pillar
Developmental Evaluation
An evaluation approach designed for complex, adaptive programmes in which goals and processes are emergent, and the evaluator works alongside the programme team as an embedded learning partner.
Core Concept
Evaluation Terms of Reference
A formal document that defines the scope, objectives, methodology, and requirements for an evaluation, serving as the primary contract between the commissioning organization and the evaluation team.
Core Concept
Evaluation Matrix
A structured mapping document that links each evaluation question to its data sources, collection methods, indicators, and analysis approach: the operational blueprint for executing an evaluation.
Core Concept
Learning Agendas
A structured set of priority learning questions that guide systematic inquiry throughout programme implementation, turning monitoring data into actionable knowledge for decision-making.
Core Concept
Adaptive Management
A management approach that uses continuous learning from monitoring and evaluation data to adjust programme strategies and activities in response to changing evidence or context.
Core Concept
M&E Plans
A detailed operational document that translates your logframe and theory of change into actionable M&E requirements, specifying what data to collect, when, from whom, and how it will be used.
Core Concept
Evaluation Criteria (DAC)
The OECD-DAC framework provides five standard criteria (relevance, efficiency, effectiveness, impact, and sustainability) for systematically assessing the merit and value of development interventions.