M&E Studio

AI for M&E, Built for Practitioners

© 2026 Logic Lab LLC. All rights reserved.

M&E How-to Guide

How to Write a Logframe: Step-by-Step Guide with Template

A logframe is not a form-filling exercise. It is the logic of your program written down so that indicators, assumptions, and evidence can be tested against it. Here is how to write one: the four-by-four matrix, the right order to fill it, and the common mistakes that make most logframes unusable.

  • 4x4: logframe matrix shape
  • 10-30: typical indicator count
  • 7: writing steps
Key Takeaway
Write the rows top-down. Fill the columns last. Test the vertical logic before committing to indicators.
Most logframes fail because someone started with activities and tried to aggregate upward, or filled the columns before the rows hung together. Strong logframes are built top-down: goal, outcomes, outputs, activities, then indicators, means of verification, assumptions. Each row must plausibly produce the row above it. If the vertical logic does not hold, no amount of SMART-indicator polish will save the logframe.

The Four-by-Four Matrix

A standard logframe is a four-row, four-column matrix. Rows represent levels of the results chain; columns capture the information each level needs.

Goal / Impact

  • Narrative summary: the population- or sector-level change the program ultimately contributes to
  • Indicators: 1-2 (often shared with national statistics or SDGs)
  • Means of verification: source of the data, e.g., national survey, external dataset
  • Assumptions: what must hold at the goal level for program contribution to matter

Outcomes

  • Narrative summary: the applied changes in participant behavior, capability, or status
  • Indicators: 2-3 per outcome
  • Means of verification: source, method, frequency
  • Assumptions: what must hold for outcomes to produce goal-level impact

Outputs

  • Narrative summary: the tangible products and services the program delivers
  • Indicators: 1-2 per output
  • Means of verification: typically administrative data, monitoring records
  • Assumptions: what must hold for outputs to produce outcomes

Activities

  • Narrative summary: the discrete actions taken to produce outputs
  • Indicators: usually handled in the MEL plan, not the logframe
  • Means of verification: activity records
  • Assumptions: what must hold to deliver activities (access, security, staffing)

The matrix captures the program's logic: each row must plausibly produce the row above it, and each column must support the narrative in that row. This is why order matters when writing one.
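The matrix maps naturally onto a small data structure: four rows, each carrying a narrative summary plus the three columns. A minimal Python sketch; the class, field names, and example entries are illustrative, not from any donor template:

```python
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    """One row of the 4x4 matrix: a results-chain level plus its three columns."""
    level: str                 # "Goal", "Outcome", "Output", or "Activity"
    narrative: str             # narrative summary for this level
    indicators: list = field(default_factory=list)
    means_of_verification: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)

# A logframe is the four rows held in top-down order, as in the matrix above.
logframe = [
    LogframeRow("Goal", "Reduced under-five mortality in District X"),
    LogframeRow("Outcome", "Trained health workers deliver protocol-aligned care"),
    LogframeRow("Output", "200 health workers complete the 5-day training"),
    LogframeRow("Activity", "Deliver 5-day training curriculum to 200 health workers"),
]
```

Keeping the rows in an ordered list (rather than a dict) preserves the top-down sequence that the logic tests below walk through.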

For the conceptual foundation, see logframe and theory of change. For the key comparison, see logframe vs theory of change.

The Seven Writing Steps

Write in this sequence. Jumping around is the root cause of most broken logframes.

  1. Anchor in the theory of change. Have a finalized theory of change before drafting; the logframe operationalizes the theory, and without one the matrix has nothing to operationalize.
  2. Write the goal (top row). One sentence. Population-level change. Aligns with donor strategic priorities where applicable.
  3. Write the outcomes. 2-4 outcomes. Each is an applied change in participants, and each must plausibly contribute to the goal.
  4. Write the outputs. 4-10 outputs distributed across outcomes. Each is a tangible deliverable, and each must plausibly contribute to at least one outcome.
  5. Sketch the activities. Under each output, 2-5 activities. These usually stay in the narrative proposal, not the logframe itself.
  6. Fill indicators and means of verification. Work row by row, specifying 1-3 indicators per row with data source and frequency.
  7. Write assumptions. Working from the bottom row up: at each level, what must hold for that level to produce the level above?

Only step 6 onward introduces quantitative specifics. The first five steps are about logic, not measurement. Most logframes fail because writers jump to step 6 before the first five are sound.

Row 1: Goal or Impact

The goal (or impact, depending on donor terminology) is the population- or sector-level change the program contributes to. It is not something the program alone can produce; it is where the program's contribution lands.

Good goal statement traits:

  • Population-level, not program-participant-level ("Reduced under-five mortality in District X" not "Trained 500 health workers")
  • Often aligns with national priorities, SDGs, or sector commitments
  • Typically shared across multiple programs and stakeholders
  • Time-bounded but long-horizon (5-15 years rather than program lifespan)

The goal narrative should be one sentence. Multi-sentence goals are usually two goals pretending to be one.

Row 2: Outcomes

Outcomes are the applied changes in participant behavior, practice, capability, capacity, or condition that result from the program. They are what the program is directly designed to produce, measurable within the program timeframe.

A typical program has 2-4 outcomes. Fewer than 2 usually means the program is too narrow; more than 4 usually means the program is doing too many things.

Each outcome must:

  • Express applied change, not activity completion ("Trained health workers deliver protocol-aligned care" not "Health workers trained")
  • Be attributable at least partly to the program
  • Plausibly contribute to the goal

See outcome indicators for the indicator-level discussion.

Row 3: Outputs

Outputs are the tangible products and services the program delivers. Unlike outcomes, outputs are under direct program control: the program decides how many trainings, kits, construction projects, or sessions to deliver.

A typical program has 4-10 outputs distributed across outcomes. Each output must:

  • Count or describe a specific deliverable
  • Plausibly contribute to at least one outcome above it
  • Be within the program's direct control (not dependent on participant or external action)

See output indicators for the indicator-level detail.

Row 4: Activities

Activities are the discrete actions the program takes to produce outputs: conducting trainings, distributing materials, recruiting participants, building infrastructure. Activities are frequently described in the proposal narrative rather than entered in the logframe matrix itself; some donor logframe templates include them, others do not.

Where activities are included, keep them at a reasonable level of abstraction. "Deliver 5-day training curriculum to 200 health workers" is a good activity. "Arrange transport to training venue, provide lunch, issue certificates" is sub-activity detail that belongs in the work plan, not the logframe.

Column 2: Indicators

Each row needs 1-3 indicators that measure whether that row has been achieved. Indicators at different levels serve different purposes.

  • Goal indicators: often inherited from national statistics (under-five mortality, HIV incidence, literacy rates, food security scores). 1-2 per goal. Measurement usually by secondary data rather than program-run surveys.
  • Outcome indicators: measure applied change, 2-3 per outcome. Typically require a program-run survey or assessment with baseline + follow-up.
  • Output indicators: measure deliverables, 1-2 per output. Usually from administrative or monitoring records.

Every indicator should pass SMART criteria before entering the logframe. See SMART indicators deep-dive for the test. For the standard-vs-custom decision, see custom vs standard indicators.
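The per-level indicator counts above are mechanical enough to check automatically. A hypothetical Python sketch, with the expected ranges taken from this guide and the data shape purely illustrative:

```python
# Expected indicator counts per row, as suggested in this guide.
EXPECTED_COUNTS = {
    "Goal": (1, 2),      # 1-2 indicators, often from national statistics
    "Outcome": (2, 3),   # 2-3 per outcome, usually baseline + follow-up surveys
    "Output": (1, 2),    # 1-2 per output, from administrative records
}

def flag_indicator_counts(rows):
    """Return (level, count) pairs whose indicator count falls outside the range."""
    flags = []
    for level, indicators in rows:
        low, high = EXPECTED_COUNTS[level]
        n = len(indicators)
        if not low <= n <= high:
            flags.append((level, n))
    return flags

# Example: an outcome carrying five indicators gets flagged as probable bloat.
rows = [("Goal", ["DTP3 coverage"]),
        ("Outcome", ["% applying skill", "% retained", "a", "b", "c"])]
print(flag_indicator_counts(rows))  # [('Outcome', 5)]
```

A count check like this catches bloat early, but it says nothing about whether the indicators measure the narrative; that judgment stays with the horizontal logic test.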

Column 3: Means of Verification

For each indicator, name the specific data source and method. A means of verification is not "survey" or "monitoring data"; it is "annual household survey conducted in September using JMP WASH questions" or "quarterly training attendance roster reviewed by M&E officer."

Donor reviewers assess whether the means of verification is feasible given the program's budget and capacity. A proposal with ambitious indicators but vague verification is a credibility problem.

See means of verification for the full discussion including typical categories and common mistakes.

Column 4: Assumptions

Assumptions are conditions outside the program's control that must hold for the program's logic to work. At each row, the assumption answers: "what must be true for this row to produce the row above?"

Examples:

  • Activity-to-output assumption: "Field staff can access target communities throughout project lifespan."
  • Output-to-outcome assumption: "Trained participants remain in their roles long enough to apply new skills."
  • Outcome-to-goal assumption: "Broader sector reforms (outside program control) continue, allowing outcome gains to contribute to population-level change."

Strong assumptions are specific enough to be tested during implementation. "External context remains supportive" is useless; "National government's water policy reforms remain on track through project period" is testable.

Assumptions link to the risk register in the full proposal: each assumption is something the risk register may contain as a watch item, with a mitigation plan.

The Vertical Logic Test

The vertical logic test asks: does each row plausibly produce the row above it?

Walk the matrix from the bottom up:

  • Do the planned activities produce the stated outputs?
  • Do the outputs plausibly produce the stated outcomes?
  • Do the outcomes plausibly contribute to the stated goal?

Where any step breaks (outputs that do not meaningfully contribute to outcomes, outcomes that do not plausibly aggregate toward the goal), the matrix needs revision before submission. Common vertical logic failures: outcomes too distant from outputs, goals disconnected from realistic outcomes, outputs that duplicate without building toward an outcome.
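The bottom-up walk can be expressed as a simple pairwise loop over the levels; the "plausibly produces" judgment itself stays with the reviewer. An illustrative sketch:

```python
# Pairwise walk for the vertical logic test. The level order and the link
# questions mirror this guide; the plausibility call remains a human judgment.
LEVELS_TOP_DOWN = ["Goal", "Outcomes", "Outputs", "Activities"]

def vertical_logic_questions(levels=LEVELS_TOP_DOWN):
    """Yield one link question per adjacent pair, walking bottom-up."""
    bottom_up = list(reversed(levels))
    for lower, upper in zip(bottom_up, bottom_up[1:]):
        yield f"Do the {lower.lower()} plausibly produce the {upper.lower()}?"

for question in vertical_logic_questions():
    print(question)
# Do the activities plausibly produce the outputs?
# Do the outputs plausibly produce the outcomes?
# Do the outcomes plausibly produce the goal?
```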

The Horizontal Logic Test

The horizontal logic test asks: for each row, do the indicators, verification, and assumptions hang together?

  • Do the indicators actually measure what the narrative summary describes?
  • Is the means of verification feasible for each indicator?
  • Are the assumptions specific enough to be tested during implementation?

Weak horizontal logic produces logframes that look complete but do not function. Common failures: generic assumptions that do not constrain anything ("external context remains supportive"), means of verification that the program cannot actually execute ("national household survey" with no survey in the budget), indicators that sound right but do not measure the narrative ("women empowered" measured by "women attending meetings").
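The mechanical slice of the horizontal test (every indicator has a specific means of verification; no placeholder assumptions) can be automated. A sketch under an assumed data shape; the vague-MoV terms and placeholder list are illustrative seeds, not exhaustive:

```python
# Flag the horizontal-logic failures that can be caught by string checks alone:
# generic means of verification and placeholder assumptions. Whether an
# indicator truly measures the narrative still requires human review.
GENERIC_ASSUMPTIONS = {"external context remains supportive"}
GENERIC_MOV = {"survey", "monitoring data"}

def horizontal_logic_flags(row):
    """row: dict with 'indicators' (list of (indicator, mov) pairs) and 'assumptions'."""
    flags = []
    for indicator, mov in row["indicators"]:
        if not mov or mov.lower() in GENERIC_MOV:
            flags.append(f"vague MoV for indicator: {indicator}")
    for assumption in row["assumptions"]:
        if assumption.lower() in GENERIC_ASSUMPTIONS:
            flags.append(f"placeholder assumption: {assumption}")
    return flags

row = {"indicators": [("% of trained workers applying new skill", "survey")],
       "assumptions": ["External context remains supportive"]}
print(horizontal_logic_flags(row))
```

Both items in the example row get flagged: "survey" is a generic MoV rather than a named source, method, and frequency, and the assumption is the placeholder this guide warns against.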

Sector Examples

Health: Immunization program, East Africa

A 3-year program to improve immunization coverage in 2 districts. Logframe: goal = "Reduced child mortality from vaccine-preventable diseases in target districts" (indicator: district-level DTP3 coverage from health management information system). Outcomes: (a) "Health workers deliver immunization services per national schedule," (b) "Caregivers bring eligible children for scheduled immunization." Outputs: 4 per outcome covering training, cold chain, supportive supervision, community mobilization, SMS reminders, and clinic upgrades. Indicators totaled 16. Assumptions included "National immunization schedule remains stable" (goal-level) and "Community acceptance of vaccination does not decline due to misinformation" (outcome-level).

Education: Girls' education program, South Asia

A 5-year program to improve secondary school retention among girls aged 11-16. Logframe: goal = "Increased secondary school completion rates for girls in target districts." Outcomes: (a) "Families support daughters continuing to secondary school," (b) "Girls stay enrolled through secondary school transitions." Outputs: mentor program for 2,400 girls, community dialogue events, teacher training on gender-responsive pedagogy, stipend delivery, school-based menstrual health support. 18 indicators. Critical assumption (outcome-to-goal): "Existing government policies supporting girls' education remain in force."

WASH: Rural water program, West Africa

A 4-year program to improve safe water access for 80 rural communities. Logframe: goal = "Reduced diarrheal disease in under-five children in target communities" (indicator: district diarrhea case count from health facility records). Outcomes: (a) "Households use safely managed water sources," (b) "Community water systems are sustainably operated." Outputs: water points installed (by type), water committees trained, water quality testing program, household hygiene promotion. 14 indicators using JMP service ladder categories for SDG 6.1 alignment. Assumption (activity-to-output): "Community contributions (land, labor) materialize per commitment."

Food security: Livelihoods program, Sahel

A 3-year livelihoods program for pastoralist communities. Logframe: goal = "Improved food security among pastoralist households through drought cycles" (indicator: household hunger scale among program communities). Outcomes: (a) "Households diversify income sources," (b) "Savings groups provide financial resilience." Outputs: agricultural training, savings group formation, small-business grants, livestock vaccination campaigns. 12 indicators. Seasonal disaggregation (transhumance/settled) built into outcome indicators. Critical assumption: "No catastrophic drought in target region during program period."

Common Mistakes

Mistake 1: Writing activities first and aggregating up. The temptation is to start with what you plan to do and build the goal from there. This produces logframes where outputs do not plausibly produce outcomes and outcomes do not contribute to the goal. Write top-down.

Mistake 2: Confusing outputs with outcomes. "Number of trainings held" is an output. "Percentage of trained workers applying new skill" is an outcome. Outputs are what the program delivers; outcomes are what changes in participants because of the delivery. Getting this wrong is the most common logframe error.

Mistake 3: Vague assumptions. "External context remains supportive" is a placeholder, not an assumption. An assumption should be specific enough that you could check at program mid-term whether it still holds. Write assumptions as testable statements.

Mistake 4: Indicators that do not measure the narrative. If the outcome narrative is "women empowered," and the indicator is "women attending meetings," the indicator measures attendance not empowerment. Either the indicator or the narrative is wrong; align them.

Mistake 5: Means of verification that the budget cannot support. An outcome indicator requiring a national household survey implies the budget funds that survey. If it does not, the indicator is not executable. Verify feasibility before committing the indicator.

Mistake 6: Too many indicators. 10-30 indicators across a mid-sized logframe is typical; beyond 30 you usually have bloat. Each additional indicator adds real monitoring cost. See the dedicated guide on too many indicators.

Mistake 7: Goal aligned to program scale rather than population. "Strengthened capacity of 5 partner organizations" is not a goal; it is an output stated big. A goal should describe population- or sector-level change, typically beyond what any single program can fully produce.

Mistake 8: Logframe that does not match the narrative proposal. The logframe is an operational restatement of the theory of change. If the narrative describes one logic and the logframe shows a different one, reviewers notice. Update both together when either changes.

Logframe Completion Checklist

Run through this before submitting a logframe with a proposal or locking it into the MEL plan.

Matrix structure:

  • Goal, outcomes, outputs rows filled in top-down sequence
  • 2-4 outcomes, 4-10 outputs distributed across them
  • 1-3 indicators per row
  • Means of verification named specifically (source + method + frequency) for every indicator
  • Assumptions specific enough to be tested during implementation

Logic tests:

  • Vertical logic passes: each row plausibly produces the row above it
  • Horizontal logic passes: indicators measure narrative; MoV supports indicators; assumptions constrain the logic
  • Logframe matches the theory of change narrative

Indicator quality:

  • Every indicator passes SMART criteria
  • Standard/custom mix documented
  • Baselines collected or baseline study planned before committing targets
  • Targets realistic and baseline-anchored

Means of verification:

  • Data sources named specifically, not generically
  • Collection frequencies specified
  • Budget supports the planned verification approach

Assumptions:

  • One or more assumptions per row (except the top)
  • Assumptions testable during implementation
  • Critical assumptions linked to the risk register
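The structural items of this checklist reduce to count checks. A sketch with thresholds taken from this guide; the data shape is illustrative:

```python
# Encode the matrix-structure checklist items as count checks:
# 2-4 outcomes, 4-10 outputs, 1-3 indicators per row.
def structural_checklist(n_outcomes, n_outputs, indicators_per_row):
    """Return a list of structural issues; an empty list means the counts pass."""
    issues = []
    if not 2 <= n_outcomes <= 4:
        issues.append(f"expected 2-4 outcomes, found {n_outcomes}")
    if not 4 <= n_outputs <= 10:
        issues.append(f"expected 4-10 outputs, found {n_outputs}")
    for level, n in indicators_per_row.items():
        if not 1 <= n <= 3:
            issues.append(f"expected 1-3 indicators for {level}, found {n}")
    return issues

print(structural_checklist(3, 6, {"Goal": 2, "Outcome 1": 2, "Output 1": 5}))
# ['expected 1-3 indicators for Output 1, found 5']
```

The logic tests, indicator quality, and assumption items are not reducible to counts; they still need the row-by-row review described above.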

For the broader MEL plan integration, see how to write a MEL plan. For the proposal-section integration, see how to write the M&E proposal section. For the decision between logframe and theory of change, see logframe vs theory of change. For an AI-assisted step-by-step workflow, see the Theory of Change playbook.
