M&E Studio
© 2026 Logic Lab LLC. All rights reserved.

M&E How-To Guide

WASH M&E: Activities, Standards, and Measurement Choices

A practitioner guide to monitoring and evaluation across the WASH sector: activity clusters, typical outputs and outcomes, collection methods, evaluation questions, and the standards that hold it together.

5 sub-sectors · 5 activity clusters · 110 indicators in library
Key Takeaway
WASH M&E spans infrastructure, behavior, and institutional service delivery
Each activity cluster has distinct indicators, collection methods, and evaluation questions. Planning is cleanest when the program logic drives measurement choices, not the other way around.

The WASH M&E Landscape

WASH is three related measurement traditions braided together. Water supply has the clearest service-level ladder and the most mature infrastructure indicators. Sanitation is easy to count in output terms (latrines constructed) but measurement complexity rises sharply when you move to use, sustainability, and community-level outcomes. Hygiene is the hardest to measure well because the outcomes are behavioral. A WASH M&E plan has to handle all three at once and align them to the same households, schools, or facilities.

This hub organizes WASH M&E around five activity clusters that correspond to the five sub-sectors in the Indicator Library. Each cluster has distinct output and outcome indicators, distinct collection methods, and distinct evaluation questions. Use the structure to align program logic with measurement before writing the MEL plan.

The Sub-Sector Map

  • Water Supply & Infrastructure (23 indicators): pumps, wells, piped systems, water points, water quality testing, committee-led operation and maintenance. Output indicators count infrastructure; outcome indicators measure household access and water-point functionality. Browse Water Supply & Infrastructure indicators.

  • Sanitation Systems (22 indicators): household latrines, public and communal latrines, handwashing stations, fecal sludge management facilities, CLTS triggering, sanitation marketing. Output indicators count construction and events; outcome indicators measure use, ODF status, and fecal sludge knowledge. Browse Sanitation Systems indicators.

  • Hygiene Promotion & Behavior Change (20 indicators): hygiene promotion sessions, campaigns, materials development, community hygiene promoter training. The hardest sub-sector to measure well because outcomes are behavioral. Browse Hygiene Promotion indicators.

  • Institutional WASH (22 indicators): water, sanitation, and hygiene in schools, health facilities, and public institutions. Critical to broader program impact but often under-measured because it sits between education, health, and WASH sectors. Browse Institutional WASH indicators.

  • Emergency WASH (23 indicators): temporary water supply, rapid sanitation, emergency hygiene promotion. Sphere minimum standards govern this sub-sector; donor expectations differ from development programming. Browse Emergency WASH indicators.

Program Logic Overview

Most WASH programs blend construction, behavior change, and institutional service delivery. Activities produce countable outputs (water points built, sessions conducted, kits distributed), which are expected to drive outcome changes (access, use, behavior, service level) that in turn support impact-level claims (health outcomes, time savings, gender equity). The program logic is straightforward on paper; what makes WASH M&E distinctive is that each activity cluster has its own measurement tradition and its own standard of evidence. Designing measurement well means treating each cluster on its own terms and then integrating them at the MEL-plan level.
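The activities-to-impact chain described above can be written down as a simple structure, which is also a useful shape for the indicator matrix in a MEL plan. All entries below are illustrative, not drawn from any specific framework:

```python
# Sketch of a results chain for one WASH activity cluster. Entries are
# illustrative placeholders; replace with your own program logic.

results_chain = {
    "activities": ["drill boreholes", "train water committees"],
    "outputs": ["water points constructed", "committee members trained"],
    "outcomes": ["% households with basic water access",
                 "% water points functional at 12 months"],
    "impact": ["reduced diarrheal disease", "time savings for women and girls"],
}

# Integrity check: every level of the chain should be populated before
# the MEL plan claims results at that level.
for level, items in results_chain.items():
    assert items, f"results chain missing entries at level: {level}"
print("chain levels:", list(results_chain))
```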

Activity Clusters, Outputs, and Outcomes

The five clusters below cover the majority of WASH programming. Activities are drawn from activity templates in the MEStudio Activities Library; outputs and outcomes correspond to indicators in the Indicator Library.

1. Water Supply & Infrastructure

Common activities: drill and install boreholes; install handpumps; install household piped connections; rehabilitate existing water points; assess functionality of existing water points; establish water management committees; train community members on operation and maintenance; distribute water quality testing kits.

Typical outputs: new water points constructed, existing water points rehabilitated, household connections installed, water management committees established and active, community members trained on O&M, water quality monitoring events conducted.

Typical outcomes: proportion of households with access to an improved water source within a reasonable distance; proportion of water points functional and meeting quality standards; proportion of households reporting reduced water collection time; proportion treating water at point of use.

2. Sanitation Systems

Common activities: construct latrine blocks; provide materials or subsidies for household latrine construction; construct public or communal latrines; build handwashing facilities adjacent to latrines; train masons and sanitation entrepreneurs; run CLTS triggering events; run sanitation promotion campaigns.

Typical outputs: latrines constructed or upgraded, handwashing facilities installed, CLTS triggering events conducted, sanitation entrepreneurs trained, households receiving sanitation subsidies or materials, fecal sludge management facilities established.

Typical outcomes: proportion of households using improved sanitation, proportion of communities verified as ODF, demonstrated knowledge of fecal sludge management, latrine condition observed to meet use standards.

3. Hygiene Promotion & Behavior Change

Common activities: develop hygiene promotion materials; conduct community-based hygiene promotion sessions; run hygiene campaigns; train community hygiene promoters; conduct school-based hygiene promotion; distribute hygiene kits.

Typical outputs: hygiene sessions conducted, promotional materials disseminated, community hygiene promoters trained, households reached with hygiene messaging, handwashing stations installed.

Typical outcomes: demonstrated knowledge of safe water handling practices, observed handwashing behaviors at critical times (proxied by soap and water availability at station), adoption of hygiene practices related to water use and food preparation, caregiver knowledge of diarrheal disease prevention.

4. Institutional WASH (Schools, Health Facilities, Public Spaces)

Common activities: construct or rehabilitate WASH facilities in schools and health facilities; train teachers on school hygiene; train health workers on hygiene promotion; deliver menstrual hygiene management (MHM) education and materials; support institutional WASH committees.

Typical outputs: schools or facilities with functional WASH services, teachers or health workers trained, MHM curriculum delivered, MHM products distributed.

Typical outcomes: proportion of schools or health facilities meeting WASH service-level standards (functional water, separate-sex sanitation, handwashing), observed handwashing behavior at institutions, school WASH audit scores.

5. Emergency WASH

Common activities: install temporary water supply (trucking, bladders, rapid rehabilitation); distribute hygiene kits and emergency non-food items; construct emergency latrines; conduct emergency hygiene promotion; establish rapid water quality monitoring.

Typical outputs: people served with temporary water supply, hygiene kits distributed, emergency latrines constructed, hygiene promotion sessions delivered in emergency settings.

Typical outcomes: affected population meeting Sphere minimum water quantity and quality indicators, affected households with access to emergency sanitation meeting minimum separation standards, target population demonstrating protective hygiene behaviors.
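Quantity monitoring against a per-capita minimum is simple arithmetic. A minimal sketch, assuming Sphere's commonly cited baseline of 15 litres per person per day (confirm the figure and any context-specific adjustments in the current Sphere Handbook):

```python
# Sketch: check delivered water against a per-capita daily minimum.
# 15 L/person/day is the commonly cited Sphere baseline; verify against
# the current Sphere Handbook before using it as a reporting threshold.

MIN_LITRES_PER_PERSON_PER_DAY = 15

def meets_quantity_standard(litres_delivered_per_day: float,
                            population: int) -> bool:
    return litres_delivered_per_day / population >= MIN_LITRES_PER_PERSON_PER_DAY

# 48,000 L/day for 3,000 people = 16 L/person/day
print(meets_quantity_standard(48_000, 3_000))  # True
```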

Methods by Outcome Type

Each outcome type has its own defensible collection approach. Match the method to the outcome, not to what is easiest to collect.

  • Water access and coverage. Primary method: structured household survey. Observation: not required; proxied via source-type questions. Key instruments: DHS, MICS, JMP Core Questions.
  • Water point functionality. Primary method: technical assessment with water quality testing. Observation: direct observation required. Key instruments: WPDx standard fields, field test kits.
  • Sanitation use. Primary method: observation plus structured interview. Observation: latrine condition directly observable; use triangulated. Key instruments: MICS sanitation module, CLTS verification protocol.
  • Hygiene behavior. Primary method: structured observation (ideal) or proxy indicators. Observation: strongly preferred over self-report. Key instruments: MICS hygiene module, SaniFOAM behavioral framework.
  • Knowledge. Primary method: structured questionnaire. Observation: not required. Key instruments: MICS WASH modules, DHS hygiene knowledge batteries.

Water access and coverage is the most straightforward outcome to measure. A structured household questionnaire asks about primary drinking water source and round-trip collection time, with response options drawn from the JMP ladder. Illustrative wording adapted from JMP Core Questions: "What is the main source of drinking water for members of your household?" followed by JMP-aligned options (piped water, tubewell or borehole, protected dug well, surface water, and so on). DHS and MICS both use this question pattern, so reproducing or adapting their wording maintains comparability with national datasets.
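The source-plus-collection-time classification can be sketched in code. This is a deliberate simplification: source categories are condensed, and the "safely managed" rung is omitted because it requires water-quality and availability data beyond these two survey questions.

```python
# Sketch: classify households onto a simplified JMP drinking-water ladder
# from two survey questions. "Safely managed" is omitted (needs quality
# and availability data); source categories are condensed for illustration.

IMPROVED_SOURCES = {
    "piped", "tubewell_borehole", "protected_dug_well",
    "protected_spring", "rainwater",
}

def jmp_water_level(source: str, round_trip_minutes: float) -> str:
    if source == "surface_water":
        return "Surface water"
    if source not in IMPROVED_SOURCES:
        return "Unimproved"
    # Improved source: "Basic" if collection takes 30 minutes or less
    # round trip, "Limited" otherwise (JMP distance criterion)
    return "Basic" if round_trip_minutes <= 30 else "Limited"

households = [
    {"source": "piped", "minutes": 5},
    {"source": "tubewell_borehole", "minutes": 45},
    {"source": "surface_water", "minutes": 20},
]
levels = [jmp_water_level(h["source"], h["minutes"]) for h in households]
print(levels)  # ['Basic', 'Limited', 'Surface water']
```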

Water point functionality cannot be self-reported. Collection requires a physical site visit with a technical checklist and spot water quality tests (chlorine residual, turbidity, E. coli indicator strips), or laboratory samples for studies requiring higher rigor. The Water Point Data Exchange (WPDx) publishes a standard data format that most humanitarian donors now recognize.
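A technical checklist of this kind reduces to a small set of threshold checks. The field names and thresholds below are illustrative placeholders, not WPDx fields; substitute the limits from your own protocol (e.g. WHO guideline values).

```python
# Sketch: flag water points needing follow-up from a checklist plus spot
# test results. Thresholds are illustrative placeholders; use the limits
# defined in your own monitoring protocol.

def assess_water_point(record: dict) -> dict:
    issues = []
    if not record["yields_water"]:
        issues.append("non-functional: no yield")
    if record["ecoli_cfu_per_100ml"] > 0:
        issues.append("E. coli detected")
    if record["turbidity_ntu"] > 5:
        issues.append("turbidity above 5 NTU")
    if record["chlorinated_system"] and record["free_chlorine_mg_l"] < 0.2:
        issues.append("free chlorine residual below 0.2 mg/L")
    return {"id": record["id"],
            "functional_and_safe": not issues,
            "issues": issues}

example = {"id": "WP-07", "yields_water": True, "ecoli_cfu_per_100ml": 0,
           "turbidity_ntu": 12.0, "chlorinated_system": True,
           "free_chlorine_mg_l": 0.1}
print(assess_water_point(example)["issues"])
# ['turbidity above 5 NTU', 'free chlorine residual below 0.2 mg/L']
```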

Sanitation use is the classic gap between output and outcome. A structured observation of latrine condition (cleanliness, evidence of use, presence of handwashing facility, door and privacy) paired with a structured household interview triangulates ownership, reported use, and physical evidence. For community-level outcomes like ODF status, follow the community-led verification protocol from CLTS guidance rather than external-audit approaches.
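The triangulation logic can be made explicit: a household counts as "using" only when physical evidence and reported use agree. The field names below are hypothetical, standing in for your observation checklist and interview items.

```python
# Sketch: triangulate latrine "use" from observation plus interview,
# rather than counting ownership alone. Field names are hypothetical.

def latrine_in_use(obs: dict, interview: dict) -> bool:
    observed_evidence = (
        obs["structurally_sound"]
        and obs["path_worn_or_clean"]      # physical signs of regular use
        and not obs["used_for_storage"]    # a common non-use signal
    )
    reported_use = interview["all_members_report_use"]
    # Count as "in use" only when observation and report agree
    return observed_evidence and reported_use

obs = {"structurally_sound": True, "path_worn_or_clean": True,
       "used_for_storage": False}
print(latrine_in_use(obs, {"all_members_report_use": True}))   # True
print(latrine_in_use(obs, {"all_members_report_use": False}))  # False
```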

Hygiene behavior is where measurement methodology matters most. Structured observation at critical times (after toilet use, before food preparation, before feeding a child) is the gold standard. Proxy indicators (soap and water observed at a designated handwashing station) are the practical compromise for most programs. Self-report alone is not recommended because respondents tend to report idealized behavior. When budget and protocol allow, sensor-based measurement (door sensors, soap-use tracking) produces more accurate data than human observation over longer time frames.
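The proxy indicator reduces to a simple spot-check rule. A sketch with hypothetical field names:

```python
# Sketch: proxy handwashing indicator from station spot checks — soap AND
# water observed at a designated handwashing station. Field names are
# hypothetical.

def handwashing_proxy(station: dict) -> bool:
    return station["soap_present"] and station["water_present"]

spots = [{"soap_present": True, "water_present": True},
         {"soap_present": True, "water_present": False}]
coverage = sum(handwashing_proxy(s) for s in spots) / len(spots)
print(coverage)  # 0.5
```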

Knowledge outcomes use structured questionnaires at baseline and endline. Illustrative wording from MICS hygiene modules: "At what times should hands be washed?" asked open-ended with multiple responses recorded. Knowledge alone is rarely a sufficient outcome indicator; pair with a behavior proxy for program logic that claims behavior change.
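Scoring a multiple-response knowledge item is essentially a set intersection against the list of correct answers. A sketch, with an illustrative (not MICS-official) list of critical handwashing times:

```python
# Sketch: score an open-ended "At what times should hands be washed?"
# item against a list of critical times (multiple responses recorded).
# The list and coding scheme are illustrative, not taken from MICS.

CRITICAL_TIMES = {
    "after_defecation", "before_preparing_food",
    "before_eating", "before_feeding_child", "after_cleaning_child",
}

def knowledge_score(responses: set) -> float:
    """Fraction of critical times the respondent named unprompted."""
    return len(responses & CRITICAL_TIMES) / len(CRITICAL_TIMES)

print(knowledge_score({"before_eating", "after_defecation"}))  # 0.4
```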

For any outcome type, connect your indicator selection to your means of verification and survey design decisions in the MEL plan. Use the SMART Indicator Checker to audit your draft list.

Common Evaluation Questions by Cluster

Evaluation questions anchor design and data-collection decisions. The following are common questions for each activity cluster, the evidence that typically answers them, and the typical evaluation design. For how to structure a full evaluation question matrix, see evaluation questions.

Water Supply & Infrastructure

  • Did the program result in sustained household water access? Evidence: baseline-to-endline household survey plus post-completion verification at 12 months. Design: outcome evaluation with before-and-after comparison.
  • Are installed water points still functional after project end? Evidence: technical reassessment at 12 and 24 months post-completion. Design: sustainability study or ex-post evaluation.
  • Are water user fee systems covering maintenance costs? Evidence: water committee financial records, comparison against maintenance budget needs. Design: implementation evaluation.

Sanitation Systems

  • Are households actually using the latrines that were constructed? Evidence: observation of latrine condition and use evidence, combined with structured interview. Design: outcome evaluation.
  • Did CLTS triggering result in verified ODF status? Evidence: community-led verification at 3, 6, and 12 months after triggering. Design: outcome evaluation.
  • Was sanitation marketing effective at stimulating self-funded construction? Evidence: comparison of construction rates between subsidized and marketed-only communities. Design: implementation evaluation.

Hygiene Promotion & Behavior Change

  • Did hygiene sessions result in observable handwashing behavior change? Evidence: structured observation at baseline and endline supplemented with proxy indicators (soap, water). Design: outcome evaluation.
  • Do households retain hygiene knowledge over time? Evidence: knowledge assessment at endline plus follow-up 6 to 12 months later. Design: sustainability study.

Institutional WASH

  • Do schools or facilities maintain functional WASH services beyond program end? Evidence: institutional WASH audits at 12 and 24 months. Design: ex-post sustainability study.
  • Did menstrual hygiene support affect school attendance among adolescent girls? Evidence: school attendance records paired with household or school-level survey. Design: outcome evaluation.

Emergency WASH

  • Did the response meet Sphere minimum standards throughout the intervention? Evidence: service-level monitoring data plus affected-population satisfaction surveys. Design: implementation evaluation.
  • Was the transition from emergency to recovery WASH successfully managed? Evidence: transition planning documents, service continuity data, handover agreements. Design: real-time evaluation or after-action review.

The Standards That Matter

WASH has one of the best-aligned standards ecosystems in development and humanitarian M&E. Every credible WASH M&E plan references these.

  • JMP (WHO/UNICEF Joint Monitoring Programme): the global standard for water, sanitation, and hygiene service levels. SDG 6 indicators and most donor reporting aggregate to JMP ladders.
  • JMP Service Ladders: tiered ladders for water (No Service, Unimproved, Limited, Basic, Safely Managed), sanitation, and hygiene. Use these as the backbone of outcome indicators.
  • Sphere Standards: minimum humanitarian standards for WASH in emergency contexts. Referenced by UN clusters and most humanitarian donors.
  • DHS and MICS: household-survey instruments with validated WASH modules and question wording. Using DHS or MICS wording in program surveys maintains comparability with national datasets.
  • GLAAS (Global Analysis and Assessment of Sanitation and Drinking-Water): WHO and UN-Water national-level assessment. Useful for governance-level WASH indicators and system-strengthening programs.
  • WASHCost methodology (IRC): cost-benchmarking framework for WASH interventions.
  • SuSanA (Sustainable Sanitation Alliance): working-group standards for sanitation sustainability, fecal sludge management, and integrated sanitation planning.
  • Water Point Data Exchange (WPDx): global open platform for pump functionality and water point data.
  • WASH Cluster Core Indicators: standardized indicator set for humanitarian WASH responses.
  • Sanitation Safety Planning (WHO): framework for fecal sludge safety assessment across the sanitation service chain.

In practical terms: start with JMP ladders for outcomes; layer in Sphere if humanitarian; use DHS or MICS wording for household-survey questions; reference WPDx for water-point functionality data structures.

Where the Ethics Get Real

WASH M&E is often framed as technically clean: infrastructure, coverage, behavior. The ethics are less visible but real across several axes.

  • Sanitation research and dignity. Questions about defecation carry stigma across most cultural contexts. Direct observation of latrine use is almost always ethically problematic. Indirect measurement (latrine condition inspection, trace evidence, sensor data) and neutral survey language are the right approach.
  • Menstrual hygiene management measurement. Measurement of menstrual practices, product access, pain, and school absenteeism requires culturally informed interviewers, usually women interviewing women in private. School-based MHM surveys require consent protocols for minors and separate spaces for data collection.
  • Water collection observation and gendered labor. Water collection (time, distance, burden) is overwhelmingly women's and girls' labor. Observation requires attention to respondent safety, time burden, and ethical framing that documents gendered labor without reinforcing it.
  • Open defecation stigma. Labeling a community as "open defecation" in reports can cause reputational harm. Aggregate at appropriate levels; use neutral language in published results.
  • CLTS implementation ethics. Community-Led Total Sanitation uses disgust triggering to drive behavior change. Whether this is ethical practice is contested in the sector. M&E staff should understand the role they are playing when documenting CLTS steps.
  • Disclosure of water quality failures. If testing reveals contamination, program staff have obligations to the community and public health authorities. The protocol for adverse findings needs to exist in the MEL plan up-front.
  • ODF verification approach. Walking the village to confirm absence of open defecation can reinforce shaming dynamics if done badly. Community-led verification by community members is the standard, not external audit teams.

Common Mistakes

These are framework-defined errors: each represents a departure from an established WASH measurement protocol or standard.

  • Reporting water access without JMP service-level specification. "Access to improved water" without naming the JMP level breaks donor reporting alignment and typically results in reviewer questions.
  • Using self-report alone for hygiene behavior. Departs from Sphere, WHO, and DHS and MICS guidance, all of which use observation or proxy measurement for hygiene outcomes.
  • Counting constructed latrines as an outcome-level result. Conflates output and outcome per standard results-chain conventions. Use requires observation or interview; ownership alone is not an outcome.
  • Missing post-completion sustainability verification. Sustainability claims require measurement after the implementation period ends. An endline at program closure captures a best-case snapshot, not a sustained outcome.
  • Not disaggregating water collection time by gender. SDG 6.1.1 and most donor frameworks expect disaggregation on the gendered dimension of water collection.
  • Claiming CLTS impact without community-level ODF verification. CLTS outcome measurement protocol requires community-level verification, not household-level latrine counts.
  • Using household-level sanitation indicators for community-level outcomes. One uncovered latrine can undermine community-level sanitation outcomes; household-level data does not capture that dynamic.
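The household-versus-community distinction can be made concrete: community-level ODF is all-or-nothing, so a high household rate can coexist with a failed community outcome. A minimal sketch, with the full verification criteria simplified to a single flag:

```python
# Sketch: household sanitation rate vs. community-level ODF status.
# Real ODF verification follows a community-led protocol; "uses_latrine"
# stands in for the full set of criteria here.

def household_rate(households: list) -> float:
    return sum(h["uses_latrine"] for h in households) / len(households)

def community_odf(households: list) -> bool:
    # All-or-nothing: one openly defecating household fails the community
    return all(h["uses_latrine"] for h in households)

village = [{"uses_latrine": True}] * 19 + [{"uses_latrine": False}]
print(household_rate(village))  # 0.95 — looks strong at household level
print(community_odf(village))   # False — community outcome not achieved
```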
