© 2026 Logic Lab LLC. All rights reserved.

M&E How-to Guide

Evaluation Ethics Checklist: What to Cover Before Fieldwork

Informed consent, confidentiality, risk minimization, ethics approval, and the specific protocols every evaluation needs before a single interview is conducted.

8 checklist domains · 5 consent-script elements · 6 common mistakes
Key Takeaway
Ethics is a design phase, not a review phase
Most ethical failures in M&E are structural: no consent process because no one built one, data collected because collecting it was easier than justifying it, findings published without reviewing participant risk. Ethical evaluations require design-time decisions, not post-hoc review. Work through this checklist before fieldwork, not as a box-tick before publication.

The Eight Ethics Domains

An ethically designed evaluation addresses eight domains before fieldwork begins. Each one maps to a specific failure pattern that has produced real harm to participants and real damage to program credibility.

Domain | What it covers | What fails without it
Ethics approval | Formal review by IRB, ethics committee, or equivalent | Unvetted research practices, donor compliance failure
Informed consent | Documented voluntary agreement to participate | Coerced or uninformed participation
Confidentiality | Protection of participant identity and responses | Data leaks, loss of trust, retaliation against participants
Data protection | Secure handling, storage, transmission of data | Breach, unauthorized access, legal exposure
Risk minimization | Identification and mitigation of participant risks | Harm to participants during or after data collection
Safeguarding and PSEA | Protection from exploitation and abuse by staff and contractors | Abuse, inadequate response, complicity
Power dynamics | Acknowledgment of unequal power in research relationships | Pressured consent, performed responses, extractive research
Findings use | Ethical publication, attribution, and feedback to participants | Misrepresentation, broken promises, extractive outputs

These are not hierarchical; a serious failure in any one domain can end an evaluation or cause real harm. Evaluations for internal program management may not need formal IRB approval but still need every other domain. See the ethics in M&E reference entry for the full principles framework.

When to Seek Formal Ethics Approval

Formal ethics review (IRB, research ethics committee, national ethics board) is not required for every M&E activity. The decision rule is risk and intent.

Ethics review is required when:

  • Findings will be published in peer-reviewed venues or publicly disseminated with identifying detail
  • The evaluation involves vulnerable populations: children under 18, GBV survivors, refugees, displaced persons, detainees, people with diminished capacity for consent
  • The evaluation addresses sensitive topics: health status, legal status, illegal activities, household income, intimate behaviors, violence
  • The evaluation collects biological samples, genetic data, or biometric data
  • Donor policy requires it (USAID, EU Horizon, NIH, many foundations require IRB approval for evaluations involving human subjects)

Ethics review is often not required when:

  • The activity is routine program monitoring using already-collected administrative data
  • Participants are program staff being assessed on professional performance (with institutional approval)
  • The evaluation uses exclusively anonymous, publicly available data

When in doubt, seek review. Obtaining ethics approval usually takes 2-6 weeks and costs a few hundred dollars. The cost of conducting unethical research and having it discovered at publication or during a donor audit is much higher.

Internal ethics advisors, institutional M&E ethics committees, and national ethics boards can all provide review. Many INGOs have internal IRBs for routine evaluation work. For donor-required approvals, ask the donor directly which body they recognize.

Informed Consent: The Five Elements

Informed consent is a documented process where participants understand and voluntarily agree to participate. Five elements must appear in every consent script or form.

  1. What the evaluation is. Clear language naming the study, who is conducting it, and what it is trying to learn. Not the program narrative; the evaluation purpose.

  2. What participation involves. The type of engagement (interview, survey, observation, focus group), how long it will take, where it will happen, what will be asked.

  3. Risks and benefits. Real risks, named honestly. Not "there are no risks to you" if social, emotional, or privacy risks are present. Benefits described accurately: often there is no direct benefit to the participant, and saying so is more ethical than implying compensation that does not occur.

  4. Voluntariness and right to withdraw. Participation is voluntary, declining has no consequence for program access or services, and the participant can stop at any point or skip any question without penalty.

  5. Confidentiality and use of findings. How the data will be stored, who has access, whether responses will be linked to the participant's identity, how findings will be published, and whether the participant will receive any feedback.

The consent script should be read aloud, not handed over silently for reading. Many participants have limited literacy; many others sign forms without reading. Reading the script aloud in the local language, pausing for questions, and documenting the consent (signature, thumbprint, or witnessed verbal agreement) is the standard.

Consent is not a one-time event. Revisit at each interaction if the evaluation involves multiple contacts with the same participant. If the scope of the research changes substantively, seek renewed consent.

For children, consent comes from the caregiver (parent or legal guardian) plus age-appropriate assent from the child themselves. Children can refuse to participate even after caregiver consent.

Confidentiality and Data Protection

Confidentiality failures produce concrete harm: retaliation against participants who criticized a program, social stigma for disclosing sensitive information, legal consequences for participants in restrictive contexts. Four controls form the standard.

Minimize identifier collection. Collect names, addresses, and identifiers only when analytically necessary. Most M&E indicators do not require individual identity; they require demographic disaggregation (age group, sex, location at district or ward level). If the analysis does not need a name, do not collect the name.

Separate identifiers from responses. Where identifiers are needed (for longitudinal follow-up, for beneficiary attribution), store them in a separate file linked to the response data only by an anonymous ID. Access to the link table is restricted to the M&E lead and designated data manager.
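
The link-table pattern above can be sketched in a few lines of code. This is an illustrative sketch only; the field names, file layout, and ID scheme are assumptions, not a prescribed standard.

```python
import secrets

def split_identifiers(records, id_fields):
    """Split raw records into response data (anonymous ID + answers)
    and a restricted link table (anonymous ID + identifiers)."""
    responses, link_table = [], []
    for rec in records:
        anon_id = secrets.token_hex(8)  # random, non-guessable ID
        link_table.append(
            {"anon_id": anon_id, **{f: rec[f] for f in id_fields}})
        responses.append(
            {"anon_id": anon_id,
             **{k: v for k, v in rec.items() if k not in id_fields}})
    return responses, link_table

# Name and phone go to the restricted link table; the analysis file
# keeps only the anonymous ID and the response variables.
raw = [{"name": "A. Example", "phone": "000-0000",
        "district": "North", "score": 4}]
responses, link_table = split_identifiers(raw, id_fields={"name", "phone"})
```

In this arrangement, only the M&E lead and designated data manager hold the link table; the analysis team works from the response file alone.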

Apply access controls. Data storage (server, cloud, paper archive) requires controlled access: named individuals with documented roles, not shared credentials. Every access event should be logged in systems that support audit trails.

Publish only aggregated or anonymized findings. Publication should not include names, exact locations (village-level detail is often too granular in small populations), exact ages that identify specific individuals, or verbatim quotes traceable to a speaker. For sensitive topics, consider k-anonymity standards (no cell in a published table with fewer than 5 participants) or differential privacy techniques.
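
The k-anonymity rule can be checked mechanically before publication. A minimal sketch, assuming survey rows held as dictionaries and an illustrative threshold of k=5:

```python
from collections import Counter

def small_cells(rows, group_cols, k=5):
    """Return disaggregation cells with fewer than k participants;
    these should be suppressed or merged before publication."""
    counts = Counter(tuple(row[c] for c in group_cols) for row in rows)
    return {cell: n for cell, n in counts.items() if n < k}

data = ([{"district": "North", "sex": "F"}] * 3
        + [{"district": "North", "sex": "M"}] * 7)
flagged = small_cells(data, group_cols=["district", "sex"], k=5)
# ("North", "F") has only 3 participants and is flagged for suppression
```

Running this over every published table before release turns the anonymization standard from a hope into a gate.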

See data management and data quality assurance for the technical infrastructure that supports these controls.

Risk Assessment and Minimization

Every evaluation carries some risk to participants. Good ethical design identifies risks before fieldwork and mitigates them deliberately.

Common risk categories in M&E:

Social risk. A participant's responses could damage their relationships, standing, or program access if others knew. Example: a community health worker criticizing program management in an interview. Mitigation: confidentiality, aggregated reporting, no feedback to implicated parties.

Emotional risk. The interview may surface distress, trauma, or unresolved feelings. Example: a GBV survivor asked about their experiences. Mitigation: trained enumerators, referral pathways to support services, right to pause or withdraw, avoidance of unnecessary detail.

Legal or political risk. Participation or disclosure could expose the participant to legal consequences or political retaliation. Example: disclosure of informal economy participation in a restrictive regulatory environment. Mitigation: data stored outside the jurisdiction, exclusion of identifying detail from analysis files, careful framing of questions.

Health or safety risk. The act of participation creates physical risk. Example: a household visit in a context with active insecurity or disease transmission. Mitigation: remote modalities, timing sensitivity, coordination with local security and health actors.

Economic risk. Participation costs the participant time or money. Example: a 90-minute interview during productive work hours. Mitigation: scheduling around participant availability, modest compensation where appropriate, short instruments.

Document the risk assessment in the evaluation design. Each identified risk should have a named mitigation. Review by an ethics advisor before fieldwork, and again during fieldwork if new risks emerge.
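
One way to keep the "named mitigation for each identified risk" rule auditable is to hold the risk register as structured data rather than prose. This sketch (field names and entries are illustrative) flags any risk entered without a mitigation:

```python
risk_register = [
    {"risk": "Critical responses attributed back to participants",
     "category": "social",
     "mitigation": "Aggregate reporting; no feedback to implicated parties"},
    {"risk": "Interview surfaces distress or trauma",
     "category": "emotional",
     "mitigation": "Trained enumerators; referral pathway; right to pause"},
    {"risk": "90-minute interview during productive work hours",
     "category": "economic",
     "mitigation": ""},  # incomplete entry: should be caught below
]

missing = [r["risk"] for r in risk_register if not r.get("mitigation")]
if missing:
    print("Risks without a named mitigation:", missing)
```

A check like this can run as part of design review and again mid-fieldwork when new risks are added.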

Safeguarding and PSEA Obligations

Safeguarding and Protection from Sexual Exploitation and Abuse (PSEA) obligations apply to all staff, consultants, and contractors involved in the evaluation. These are not optional; they are contractual in most donor relationships and ethical baselines in the development sector.

Three elements every evaluation must have:

Code of conduct. All personnel (permanent staff, evaluators, enumerators, drivers, translators) must sign and understand a code of conduct prohibiting exploitation and abuse. The code is not just an HR document; it is a protection against staff misconduct toward participants.

Complaint mechanism. Participants must have a safe, accessible way to report misconduct, confidentiality violations, or concerns about the evaluation. Mechanisms include hotlines, suggestion boxes, named focal points independent from the evaluation team, and partner-organization referrals. The mechanism must be explained during consent.

Response protocol. If a complaint is received, who responds, what is the response timeline, and what protections exist for the complainant? A complaint mechanism without a protocol is performative. Document the protocol before fieldwork.

See do no harm and accountability mechanisms for the broader framework in which PSEA sits.

Power Dynamics and Voluntariness

Evaluations are extractive by default: researchers take time, attention, and information from participants and leave. Ethical design acknowledges this asymmetry and works against it.

Power pressures that undermine voluntariness:

The beneficiary who feels participation is required to maintain program access. The staff member who feels an interview with a donor-commissioned evaluator will affect their performance review. The community leader whose public status depends on visible cooperation. Each represents pressured, not voluntary, consent.

Mitigations:

  • State explicitly in consent that participation is voluntary and declining has no consequence for program access, services, or employment
  • Where the evaluator represents the program, consider using an external evaluator for sensitive topics
  • Schedule participation in ways that respect participant time and livelihood obligations
  • Avoid public venues where non-participation is visible to community members

Feedback to participants. One counter to extractive research is feedback: participants receive summary findings, in accessible language, after the evaluation. This is an ethical commitment, not a nice-to-have. Build it into the evaluation timeline and budget.

Publication and Findings Use

The ethical obligations do not end when data collection ends. Publication and use of findings have their own ethical weight.

Three publication considerations:

Accurate attribution. Findings attributed to participants must reflect what they said, not what the evaluator wishes they had said. Verbatim quotes are powerful but should be paraphrased or aggregated if attribution risks identifying the speaker.

Negative findings. Evaluations sometimes produce findings that are unflattering to the program. Ethical publication commits to including these, not filtering through internal review to publish only favorable conclusions. Donors and external evaluators will generally insist on this; internal evaluations should apply the same standard.

Feedback timing and channels. Summary findings should reach participants through accessible channels (community meetings, printed summaries in local language, radio briefings, SMS) before or concurrent with donor reporting, not months after.

Sector Examples

Health: Consent protocol for a GBV prevalence study, East Africa

A GBV prevalence study across 40 communities used a detailed consent protocol: read aloud in the local language, in a private setting, with a paired-enumerator model led by a trained female interviewer, and a visible referral pathway to protection services. An ethics advisor reviewed the protocol before fieldwork. Of 820 participants approached, 14% declined, a higher rate than comparable household surveys but evidence that consent was meaningful rather than performative. Findings were published with k-anonymity controls (no cells under 10), and community-level feedback sessions were held before donor reporting.

Education: Risk mitigation in a learning outcomes study, South Asia

A 20-school learning assessment initially included student interviews, teacher ratings, and household income questions. Ethics review flagged two risks: critical student responses could be attributed back to individuals because per-school samples were small, and income questions risked distressing households. The design was revised: student responses were aggregated to school-level scores, income questions were replaced with an asset-based wealth proxy, and consent forms were sent home a week before visits. The withdrawal rate was 6%, and no complaints were received.

WASH: Safeguarding protocol in a post-emergency assessment, West Africa

A post-emergency WASH assessment had all enumerators and drivers sign a PSEA code of conduct and complete a half-day safeguarding orientation. A complaint hotline staffed by the program's protection partner was introduced in the consent process. During 10 days, two complaints were received, both on transportation logistics rather than misconduct, resolved within 48 hours. The protocol and response record supported donor compliance reporting.

Livelihoods: Publication ethics in a mid-term evaluation, Southern Africa

A livelihoods mid-term evaluation produced an unflattering finding: 50% of enrollees were above the poverty line that defined eligibility. Program leadership pushed to revise the framing; the evaluator, with ethics committee backing, insisted on accurate reporting. The published report featured the targeting finding prominently and recommended a redesign. The donor cited it as honest self-assessment, and the next funding cycle was approved with a redesigned targeting protocol.

Common Mistakes

Mistake 1: Treating ethics as a documentation task instead of a design task. Ethics is not the set of forms you sign; it is the set of decisions you make about how to treat participants. If the ethics plan is produced after the evaluation design is fixed, it is too late to address structural problems.

Mistake 2: Collecting identifiers you do not need. Default data collection often captures names, phone numbers, and precise addresses that are never used analytically. Every identifier collected is a future breach risk. Start with the analysis question and collect only what answers it.

Mistake 3: Reading consent forms silently. Handing a consent form to a participant for silent reading does not produce informed consent in most M&E contexts. Read aloud in local language, pause for questions, document the consent (verbal, written, or witnessed).

Mistake 4: No complaint mechanism for participants. A PSEA code of conduct without a participant-accessible complaint mechanism is a staff compliance document, not a protection. Build the mechanism, name it in consent, and test that it works.

Mistake 5: Filtering negative findings through internal review. Evaluations that report only favorable findings are not honest evaluations. Commit to publishing the full report, including negative findings and caveats, before fieldwork begins.

Mistake 6: Forgetting participant feedback. Summary findings fed back to participants close the loop and counter the extractive default. Build it into the timeline and budget at design stage, not after donor reporting.

Pre-Fieldwork Ethics Checklist

Run through this before committing to a field schedule and budget. Each unchecked item is a specific risk.

Ethics approval:

  • Ethics approval obtained (or documented rationale for exemption)
  • Donor ethics requirements confirmed in writing
  • Ethics advisor identified for mid-fieldwork consultation if needed

Informed consent:

  • Consent script covers all five elements (what it is, what it involves, risks/benefits, voluntariness, confidentiality)
  • Translated into all languages used, back-translated for accuracy, and field-tested for comprehension
  • Process for caregiver consent + child assent documented where relevant

Confidentiality and data protection:

  • Identifier collection minimized to analytical necessity
  • Identifier and response data separated in storage
  • Access controls and audit logs configured
  • Publication standards defined (k-anonymity, anonymization, quote review)

Risk and safeguarding:

  • Risk assessment documented with named mitigation for each identified risk
  • PSEA code of conduct signed by all personnel
  • Complaint mechanism established, named in consent, and tested before fieldwork
  • Referral pathways identified for emotional, legal, or protection risks

Power and publication:

  • External evaluator option considered for sensitive or conflict-sensitive topics
  • Commitment to including negative findings in publication documented
  • Participant feedback plan (channels, timeline, format) included in budget

For the broader ethics framework, see ethics in M&E and do no harm. For the evaluation planning context, see how to write evaluation TOR and how to conduct a DQA. For an AI-assisted step-by-step workflow, see the Evaluation Plan playbook.
