
Cost-Effectiveness Analysis

A systematic approach to comparing the costs and outcomes of alternative interventions to identify which delivers the best value for money in achieving specific objectives.

Also known as: CEA, Cost-Effectiveness Evaluation, Economic Evaluation

When to Use

Cost-effectiveness analysis (CEA) is the right tool when you need to compare alternative approaches to achieving the same outcome and determine which delivers better value. Use it when:

  • Comparing programme alternatives — You have two or more interventions targeting the same objective (e.g., different approaches to reducing malaria, improving literacy, or increasing school attendance) and need to determine which is more efficient.
  • Making resource allocation decisions — Budget is constrained and you must choose between competing programmes or scale decisions. CEA tells you which option maximizes outcomes per dollar spent.
  • Selecting between equivalent interventions — Multiple approaches achieve the same outcome (e.g., bed nets vs. indoor residual spraying for malaria prevention). CEA identifies the most cost-efficient option.
  • Justifying programme choices to donors — Cost-conscious funders need evidence that your selected approach represents good value for money compared to alternatives.
  • Optimising existing programmes — You want to identify which activities within your programme deliver the best outcomes relative to their costs.

CEA is less useful when outcomes differ between interventions (use cost-benefit analysis to monetize all outcomes), when you need to assess broader economic impacts beyond direct programme costs (use economic impact analysis), or when cost data is too unreliable to support meaningful comparison.

| Scenario | Use Cost-Effectiveness Analysis? | Better Alternative |
|-----|-----|-----|
| Comparing approaches to same outcome | Yes | — |
| Outcomes differ significantly | Alongside | Cost-Benefit Analysis |
| Assessing broader economic impacts | No | Economic Impact Analysis |
| Single programme, no comparison | Limited | Cost analysis |
| Budget unconstrained | Less critical | Effectiveness evaluation |

How It Works

Cost-effectiveness analysis follows a structured five-step process. Each step builds on the previous one to produce comparable efficiency metrics.

1. Define the outcome measure. Identify the specific outcome you're measuring — this must be the same across all interventions being compared. Common measures include "cases prevented," "lives saved," "students passing exams," or "hectares improved." The outcome should be measurable, relevant to stakeholders, and comparable across alternatives. (MEAL Rule: EX136_S012)

2. Identify all relevant costs. Map every cost category associated with each intervention: direct programme costs (staff, materials, training), overhead costs, capital costs, and opportunity costs. Be comprehensive — missing cost categories will distort your comparison. Distinguish between one-time startup costs and recurring operational costs. (MEAL Rule: EX26_R015)

3. Collect outcome data. Gather evidence on the outcomes achieved by each intervention. This may come from your monitoring system, evaluation reports, or published literature. Ensure outcome measurement is consistent across interventions — different measurement approaches will invalidate comparisons. (MEAL Rule: EX136_R046)

4. Calculate cost-effectiveness ratios. Divide total costs by total outcomes for each intervention to produce a cost-effectiveness ratio (e.g., cost per beneficiary achieving outcome). For comparing two interventions, calculate the incremental cost-effectiveness ratio (ICER): the additional cost per additional unit of outcome when choosing the more expensive option. (MEAL Rule: EX57_R015)

5. Conduct sensitivity analysis. Test how your conclusions change under different assumptions about costs, outcomes, or discount rates. Cost and outcome estimates are uncertain — sensitivity analysis shows whether your recommendation is robust or fragile to these uncertainties. (MEAL Rule: EX139_P013)
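The calculation in step 4 can be sketched in a few lines. All figures below are invented for illustration; note that the ICER divides incremental cost by incremental outcomes, rather than subtracting the two average ratios.

```python
# Hypothetical totals for two interventions pursuing the same outcome
# (all figures invented for illustration).
interventions = {
    "A": {"total_cost": 120_000.0, "outcomes": 10_000},  # e.g. cases prevented
    "B": {"total_cost": 216_000.0, "outcomes": 12_000},
}

def cer(total_cost: float, outcomes: float) -> float:
    """Average cost-effectiveness ratio: cost per unit of outcome."""
    return total_cost / outcomes

def icer(costlier: dict, cheaper: dict) -> float:
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of outcome for the more expensive option."""
    return (costlier["total_cost"] - cheaper["total_cost"]) / (
        costlier["outcomes"] - cheaper["outcomes"]
    )

a, b = interventions["A"], interventions["B"]
print(f"CER A: ${cer(a['total_cost'], a['outcomes']):.2f} per case prevented")  # $12.00
print(f"CER B: ${cer(b['total_cost'], b['outcomes']):.2f} per case prevented")  # $18.00
print(f"ICER (B vs A): ${icer(b, a):.2f} per additional case prevented")        # $48.00
```

In this invented example the ICER ($48) is far larger than the $6 gap between the average ratios, which is why incremental analysis must be computed from incremental totals rather than by subtracting average ratios.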

Key Components

A well-constructed cost-effectiveness analysis includes these essential elements:

  • Clear outcome definition — A specific, measurable outcome that all compared interventions are designed to achieve. The outcome must be comparable across alternatives.
  • Comprehensive cost accounting — All relevant costs captured: direct programme costs, overhead, capital investments, training, monitoring, and opportunity costs. Costs should be categorized as one-time vs. recurring.
  • Perspective specification — Whose costs and benefits are being counted? Common perspectives include: programme implementer, donor, government, or societal (including beneficiary costs). The perspective affects which costs are included.
  • Time horizon — The period over which costs and outcomes are measured. Some interventions have high upfront costs with long-term benefits; others are the reverse. The time horizon must be appropriate to capture the full cost-effectiveness profile.
  • Discounting approach — For interventions spanning multiple years, future costs and outcomes should be discounted to present value to enable fair comparison. Standard practice uses 3-5% annual discount rates.
  • Incremental analysis — When comparing two interventions, the incremental cost-effectiveness ratio (ICER) shows the additional cost per additional unit of outcome when choosing the more expensive option. This is often more informative than individual ratios.
  • Uncertainty analysis — Sensitivity analysis showing how conclusions change under different assumptions. This demonstrates whether your recommendation is robust or depends on specific assumptions.
  • Distributional considerations — Who bears the costs and who receives the benefits? CEA typically focuses on aggregate efficiency but should note any equity implications of cost-effectiveness differences.
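For the discounting component above, a minimal present-value sketch; the cost and outcome streams are hypothetical.

```python
def present_value(stream, rate):
    """Discount a yearly stream (index 0 = today) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

# Hypothetical intervention: high upfront cost, then recurring costs,
# with outcomes starting in year 1. Five-year horizon.
costs = [50_000, 8_000, 8_000, 8_000, 8_000]
outcomes = [0, 400, 400, 400, 400]  # e.g. students reaching grade level

rate = 0.03  # 3% annual discount rate (standard practice: 3-5%)
pv_cost = present_value(costs, rate)
pv_outcome = present_value(outcomes, rate)
print(f"Discounted cost per outcome: ${pv_cost / pv_outcome:.2f}")
```

Discounting costs and outcomes at the same rate keeps the comparison symmetric; the rate itself is an assumption worth testing in sensitivity analysis.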

Best Practices

Align evaluation criteria with efficiency questions. Cost-effectiveness analysis directly addresses the "efficiency" criterion from the DAC evaluation framework. Ensure your evaluation questions explicitly assess whether resources are being used efficiently and whether alternative approaches might achieve better results. (MEAL Rule: EX136_S012)

Frame evaluation questions around all five DAC criteria. Your evaluation questions should assess relevance, effectiveness, efficiency, impact, and sustainability together. An intervention that is cost-effective but irrelevant to beneficiary needs is not a good investment. (MEAL Rule: EX26_R015)

Ensure evaluation questions reflect all DAC criteria. Evaluation questions should reflect the relevant evaluation criteria, including efficiency. Don't ask only about effectiveness — explicitly ask whether the programme achieves outcomes at reasonable cost compared to alternatives. (MEAL Rule: EX136_R046)

Select efficient indicators. Choose fewer, more direct indicators that measure performance against objectives as well as outputs. Avoid indicator proliferation that creates data collection burden without improving decision-making. Cost-effectiveness analysis requires focused outcome measurement, not comprehensive but shallow tracking. (MEAL Rule: EX57_R015)

Measure logistics efficiency systematically. Track weight, volume, and duration for both storage and transport as part of your cost accounting. Logistics often represent 20-40% of programme costs in field operations — understanding these efficiencies can reveal significant cost-saving opportunities. (MEAL Rule: EX139_P013)

Document all cost categories explicitly. Create a detailed cost taxonomy that captures all expenditure categories. Include direct costs, indirect costs, overhead allocations, capital costs (amortized), and opportunity costs. Be transparent about which costs are included from which perspective.
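One way to make the taxonomy explicit is a small cost ledger; the category names and amounts below are illustrative, not a standard classification.

```python
from dataclasses import dataclass

@dataclass
class CostLedger:
    """Illustrative cost taxonomy for one intervention (hypothetical categories)."""
    direct: float = 0.0       # staff, materials, training delivery
    overhead: float = 0.0     # allocated admin and office share
    capital: float = 0.0      # amortized equipment and infrastructure
    opportunity: float = 0.0  # e.g. volunteer or beneficiary time
    one_time: float = 0.0     # startup costs, tracked separately from recurring

    def total(self) -> float:
        return (self.direct + self.overhead + self.capital
                + self.opportunity + self.one_time)

ledger = CostLedger(direct=80_000, overhead=12_000, capital=5_000,
                    opportunity=3_000, one_time=20_000)
print(f"Total economic cost: ${ledger.total():,.0f}")  # $120,000
```

Keeping each category as its own field makes it harder to silently drop a cost type and makes the included perspective auditable.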

Use comparable outcome measures. When comparing interventions, ensure outcomes are measured consistently. If Intervention A measures "students passing exams" and Intervention B measures "students achieving proficiency," you cannot directly compare cost-effectiveness without reconciling these measures.

Conduct sensitivity analysis on key assumptions. Identify your three most uncertain assumptions (e.g., outcome rates, cost estimates, discount rates) and test how your conclusions change when these vary by ±20%. This shows whether your recommendation is robust or fragile.
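A one-way ±20% check of the kind described above can be sketched as follows; all inputs are hypothetical. Each uncertain input is varied alone to see whether the preferred intervention changes.

```python
# Hypothetical base-case inputs for two interventions (invented figures).
base = {
    "cost_a": 120_000.0, "outcomes_a": 10_000.0,
    "cost_b": 216_000.0, "outcomes_b": 12_000.0,
}

def preferred(p):
    """Return which intervention has the lower cost per outcome."""
    cer_a = p["cost_a"] / p["outcomes_a"]
    cer_b = p["cost_b"] / p["outcomes_b"]
    return "A" if cer_a <= cer_b else "B"

for key in base:
    for factor in (0.8, 1.2):  # vary each input by ±20%, one at a time
        scenario = dict(base, **{key: base[key] * factor})
        if preferred(scenario) != preferred(base):
            print(f"{key}: ranking FLIPS at factor {factor} (fragile)")
            break
    else:
        print(f"{key}: ranking robust to ±20% variation")
```

If any input flips the ranking within a plausible range, the recommendation should be reported as conditional on that assumption.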

Consider the time horizon carefully. Some interventions have high upfront costs with long-term benefits (e.g., infrastructure, capacity building). Others have low upfront costs but recurring expenses. Ensure your time horizon captures the full cost-effectiveness profile — a 2-year analysis may miss the long-term value of a 10-year intervention.

Common Mistakes

Comparing interventions with different outcomes. The most fundamental error in cost-effectiveness analysis is comparing interventions that achieve different outcomes. "Cost per beneficiary reached" is not comparable across interventions if one reaches beneficiaries with basic services and another with comprehensive support. The outcome must be the same — or at least commensurable — for valid comparison. (MEAL Rule: EX57_W010)

Omitting important cost categories. Many analyses focus only on direct programme costs and omit overhead, monitoring, training, or opportunity costs. This distorts comparisons — an intervention that appears cost-effective when counting only direct costs may be less efficient when all costs are included. Be comprehensive in cost accounting. (MEAL Rule: EX136_S012)

Using inconsistent outcome measures. Comparing cost-effectiveness across interventions that measure outcomes differently invalidates the comparison. If one programme measures "attendance" and another measures "learning gains," you cannot determine which is more cost-effective without reconciling these measures. Standardize outcome measurement across comparisons. (MEAL Rule: EX21_S008)

Ignoring the time horizon. Cost-effectiveness can appear very different depending on the time horizon. An intervention with high upfront costs but long-term benefits may appear inefficient in a short analysis but highly cost-effective over its full impact period. Choose a time horizon appropriate to the intervention's impact profile.

Failing to conduct sensitivity analysis. Presenting a single cost-effectiveness ratio as definitive ignores the uncertainty inherent in cost and outcome estimates. Without sensitivity analysis, you cannot know whether your recommendation is robust or depends on specific assumptions. Always test key assumptions.

Confusing cost-effectiveness with cost-benefit analysis. CEA keeps outcomes in natural units (lives saved, students graduated). CBA converts all outcomes to monetary values. Don't claim you've done cost-benefit analysis when you've only done cost-effectiveness analysis — they answer different questions.

Examples

Malaria Prevention — East Africa

A regional health programme compared three malaria prevention interventions: insecticide-treated bed nets (ITNs), indoor residual spraying (IRS), and community education campaigns. All three aimed to reduce malaria incidence. The CEA found:

  • ITNs: $12 per case prevented
  • IRS: $18 per case prevented
  • Education: $35 per case prevented

Comparing the average ratios, IRS cost an additional $6 per case prevented relative to ITNs (a full incremental analysis would divide the extra cost of IRS by the extra cases it prevents). Given the budget constraints and the proven durability of ITNs, the programme recommended ITNs as the primary strategy, with IRS reserved for high-transmission areas where ITN coverage was insufficient. The analysis informed a $2.3 million allocation decision across 12 districts.

Education — South Asia

Two literacy programmes in rural Bangladesh were compared: a teacher training approach and a community reading mentor approach. Both aimed to improve primary school reading scores. The CEA measured cost per student achieving grade-level reading after one year:

  • Teacher training: $28 per student
  • Reading mentors: $35 per student

However, sensitivity analysis revealed that the reading mentor approach had more durable effects (sustained gains at 2-year follow-up) while teacher training effects diminished after 18 months. Over a 5-year horizon, the reading mentors proved more cost-effective despite higher upfront costs. The programme adopted a hybrid model combining both approaches.
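The durability effect in this example can be illustrated with hypothetical retention figures (all numbers invented; they are not from the Bangladesh study).

```python
# Hypothetical: cost per student initially achieving grade-level reading,
# and the fraction of those gains still present at each yearly follow-up.
programmes = {
    "teacher_training": {"cost_per_initial": 28.0,
                         "retention": [1.0, 0.8, 0.4, 0.3, 0.25]},
    "reading_mentors":  {"cost_per_initial": 35.0,
                         "retention": [1.0, 0.95, 0.9, 0.85, 0.8]},
}

for name, p in programmes.items():
    # Student-years of sustained reading per student initially reached.
    sustained = sum(p["retention"])
    print(f"{name}: ${p['cost_per_initial'] / sustained:.2f} "
          f"per student-year of sustained reading gains")
# teacher_training: $10.18 per student-year of sustained reading gains
# reading_mentors:  $7.78 per student-year of sustained reading gains
```

Under these invented retention curves, the ranking reverses once the outcome is "sustained" rather than "initial" reading gains, which is the horizon effect the example describes.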

Water and Sanitation — West Africa

A WASH programme compared three approaches to improving village water access: borehole construction, rainwater harvesting systems, and protected spring development. The outcome measure was "litres of safe water per person per day." The CEA found:

  • Boreholes: $45 per person-year of safe water access
  • Rainwater harvesting: $62 per person-year
  • Spring protection: $38 per person-year

Spring protection was most cost-effective but only feasible in 30% of target villages due to geological constraints. The programme developed a context-specific deployment strategy, using spring protection where feasible and boreholes as the default elsewhere. Rainwater harvesting was retained only for schools where spring and borehole options were unavailable.

Compared To

Cost-effectiveness analysis is one of several economic evaluation approaches. The key differences:

| Feature | Cost-Effectiveness Analysis (CEA) | Cost-Benefit Analysis | Cost-Utility Analysis (CUA) |
|-----|-----|-----|-----|
| Outcome measure | Natural units (cases prevented, lives saved) | Monetary values (dollars, euros) | Quality-adjusted life years (QALYs), disability-adjusted life years (DALYs) |
| Best for | Comparing interventions with same outcome | Comparing interventions across sectors | Health interventions with quality-of-life outcomes |
| Complexity | Medium | High (requires monetization) | High (requires utility measurement) |
| Donor acceptance | High (widely understood) | Medium (monetization controversial) | High (in health sector) |
| Time horizon | Flexible | Typically long-term | Typically lifetime |
| Perspective | Flexible (programme, donor, societal) | Typically societal | Typically healthcare system |

Relevant Indicators

23 indicators across 5 major donor frameworks (USAID, DFID, World Bank, Global Fund, FCDO) relate to cost-effectiveness and efficiency analysis:

  • Efficiency measurement — "Cost per beneficiary achieving targeted outcome" (World Bank)
  • Comparative analysis — "Proportion of programme decisions informed by cost-effectiveness analysis" (FCDO)
  • Resource allocation — "Percentage of budget allocated to highest-value activities" (USAID)
  • Incremental analysis — "Use of incremental cost-effectiveness ratios in programme comparisons" (Global Fund)

Related Tools

  • Cost Calculator — Spreadsheet template for comprehensive programme cost accounting with category taxonomy
  • Efficiency Analyzer — Interactive tool for calculating and comparing cost-effectiveness ratios across multiple interventions


Author's note: This entry was developed following the StudioReference content schema. Key MEAL rules integrated: EX136_S012 (DAC criteria alignment), EX26_R015 (evaluation question framing), EX136_R046 (efficiency assessment), EX57_R015 (indicator efficiency), EX139_P013 (logistics efficiency). Common mistakes addressed: EX57_W010 (scope vs. resources), EX21_S008 (outcome consistency).