How to Build AI Governance for Your M&E Function
Every major donor now expects some form of AI governance. This 6-point framework, synthesized from UN, UK, World Bank, and EU requirements, gives you a defensible structure before your first AI pilot.
AI governance for M&E is not a compliance exercise you do after something goes wrong. It is the structure that makes AI use defensible to donors, credible to stakeholders, and safe for the people whose data you handle. The good news: most of it builds on data protection and evaluation quality practices you already have.
The 6-Point AI Governance Stack
Synthesized from the most explicit donor controls: UN system AI principles, UK Government AI Playbook, World Bank privacy and access policies, and EU AI Act requirements. Apply all six before scaling any AI use case.
Risk Screening
Before using AI for any M&E task, assess: What is the potential for harm? Whose rights could be affected? How sensitive is the context? UN system principles are explicit that decisions affecting fundamental rights must never be ceded to AI. If your use case involves vulnerable populations, resource allocation, or evaluative judgments about people, the risk level is high.
Data Governance
Establish the lawful basis for processing data with AI tools. Apply data minimization (only share what is necessary). Secure data in transit and at rest. Assess cross-border transfer risks (especially relevant for cloud-based AI tools). The World Bank privacy policy requires legitimate, fair, and transparent processing with purpose limitation.
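As one illustration of data minimization in practice, a crude redaction pass can strip obvious direct identifiers from transcript excerpts before anything is shared with an external AI tool. This is a sketch only: the patterns, placeholders, and function name are hypothetical, and regex redaction alone will miss names and indirect identifiers, so keep a human review step before any data leaves your systems.

```python
import re

# Hypothetical minimal redaction pass. Real PII removal needs human review
# and ideally a dedicated tool; these patterns only catch obvious cases.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def minimize(text: str) -> str:
    """Replace obvious direct identifiers with placeholders before an
    excerpt is shared with an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

excerpt = "Contact Amina at amina@example.org or +254 712 345 678."
print(minimize(excerpt))
# → Contact Amina at [EMAIL] or [PHONE].
```

Note that the personal name survives the pass: names, locations, and role descriptions need judgment, not pattern matching, which is why minimization complements rather than replaces a DPIA.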
Human Oversight Design
Define where humans review, validate, and can override AI outputs. This is non-negotiable across UN, UK, and World Bank frameworks. World Bank IEG methods papers are explicit: "domain experts must validate and interpret outputs." Design review points into the workflow, not as afterthoughts.
Validation & Quality Control
Establish accuracy checks, peer review processes, and change control procedures. World Bank IEG uses "rigorous quality and change control procedures" to keep AI-assisted analysis robust. For M&E, this means: spot-check AI outputs against source data, have a second reviewer verify AI-assisted analysis, and track output quality over time.
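The validation-sample check described above can be quantified. The sketch below (illustrative function name, not drawn from any cited framework) computes percent agreement and Cohen's kappa between human and AI codes on a shared sample, one code per excerpt:

```python
from collections import Counter

def agreement_stats(human: list[str], ai: list[str]) -> tuple[float, float]:
    """Percent agreement and Cohen's kappa between human and AI codes
    on a validation sample (parallel lists, one code per excerpt)."""
    assert len(human) == len(ai) and human
    n = len(human)
    # Observed agreement: share of excerpts coded identically
    p_o = sum(h == a for h, a in zip(human, ai)) / n
    # Expected chance agreement from each coder's marginal code frequencies
    h_counts, a_counts = Counter(human), Counter(ai)
    p_e = sum(h_counts[c] * a_counts.get(c, 0) for c in h_counts) / n**2
    kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
    return p_o, kappa

human = ["access", "cost", "access", "trust", "cost"]
ai    = ["access", "cost", "trust",  "trust", "cost"]
p_o, kappa = agreement_stats(human, ai)  # p_o = 0.8, kappa ≈ 0.71
```

Kappa corrects raw agreement for chance, which matters when a few codes dominate the sample; reporting both figures in the methodology section makes the validation step auditable.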
Transparency & Documentation
Disclose AI use in evaluation methodology sections and reports. Maintain audit trails of what AI tools were used, for what purpose, with what data, and how outputs were validated. Reproducibility matters: another evaluator should be able to understand and scrutinize your AI-assisted process.
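An audit trail can be as simple as an append-only log of each AI-assisted step. The sketch below shows one possible shape, assuming a JSON-lines file and illustrative field names rather than any donor-mandated schema:

```python
import json
from datetime import datetime, timezone

def log_ai_use(path: str, tool: str, task: str,
               data_shared: str, validation: str) -> dict:
    """Append one JSON line to a running AI-use audit log.
    Field names are illustrative, not a required schema."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "task": task,
        "data_shared": data_shared,
        "validation": validation,
    }
    # Append-only: existing entries are never modified or overwritten
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

For example, `log_ai_use("ai_audit_log.jsonl", "Claude", "first-pass coding of transcripts", "anonymized excerpts only", "20% sample validated against human coding")` appends one line that another evaluator can later read to reconstruct what was done, with what data, and how it was checked.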
Monitoring & Incident Response
AI governance is not a one-time checklist. Monitor AI tool performance over time. Have a process for reporting incidents (unexpected outputs, data breaches, bias detected). Re-assess governance periodically and when tools, data, or contexts change. Include decommissioning criteria.
Governance in Practice
Real M&E scenarios showing the difference between ungoverned and governed AI use.
AI-Assisted Qualitative Coding: Ungoverned
"We used ChatGPT to code 200 interview transcripts. It saved weeks." But: participant data was uploaded to a commercial AI tool with no data processing agreement. No one validated the coding against human coders. The evaluation report does not mention AI use. When the donor asks how themes were identified, you cannot explain the process.
AI-Assisted Qualitative Coding: Governed
"We used an AI tool to assist with initial coding of 200 transcripts. We conducted a DPIA first, used only anonymized excerpts, validated AI codes against human coding on a 20% sample (92% agreement), documented the process in our methodology section, and retained audit logs." Defensible, transparent, credible.
AI-Generated Indicator Frameworks: Ungoverned
"AI generated our indicator framework in an hour instead of three days." But: the AI hallucinated indicator definitions that do not match the donor's standard definitions. Nobody cross-checked against the donor indicator handbook. The logframe now contains plausible-sounding but incorrect measurement approaches that will fail at endline.
AI-Generated Indicator Frameworks: Governed
"We used AI to generate a draft indicator framework, then validated every indicator definition against the donor handbook and our theory of change. We caught 3 hallucinated definitions and 2 misaligned data sources. The AI saved drafting time, but expert validation ensured accuracy." AI accelerates, humans verify.
Procurement of AI Evaluation Tools: Ungoverned
"We bought an AI analytics platform for our M&E team." But: the contract has no transparency clauses. You cannot audit how the tool processes your data. When the tool produces a questionable finding, you cannot explain the methodology. The vendor can change the model without notice.
Procurement of AI Evaluation Tools: Governed
"Before procuring, we required: transparency about model and training data, IP clarity, liability allocation for errors, audit access, notification of model changes, and data deletion on contract end." The UK AI Playbook explicitly recommends addressing IP, transparency, and liability in AI procurement.
5 Governance Principles Every M&E Team Needs
Human oversight is non-negotiable
The UN system, UK government, and World Bank all converge on this: AI must not make autonomous decisions in high-stakes contexts. For M&E, this means every AI output that informs an evaluative judgment, resource allocation, or stakeholder-facing report must have a human review step.
Do a DPIA before your first AI pilot
Data Protection Impact Assessments are standard requirements under UK GDPR and emerging as best practice globally. Assess: what personal data flows to the AI tool, what are the cross-border risks, what is the lawful basis, and what safeguards are in place. Do this proactively, not after an incident.
Disclose AI use in methods sections
Transparency is a requirement across UN, UK, and World Bank frameworks. When AI assisted with data analysis, coding, synthesis, or drafting, say so in your evaluation methodology. Explain what tool, what task, what validation was applied. Silence about AI use is a credibility risk.
Start governance before the first pilot
Do not wait until you have 10 AI use cases to write governance guidelines. The UN model policy framework calls for institutional accountability structures before deployment. Write basic usage guidelines, data sharing rules, and disclosure requirements before anyone uses AI for real M&E work.
Put governance in procurement contracts
The UK AI Playbook recommends addressing transparency, liability, IP, error handling, and audit access in AI procurement. When buying AI tools for M&E, require: model transparency, data deletion rights, notification of model changes, and clear liability for errors.
AI Governance Checklist Generator
Use this prompt to generate a tailored governance checklist for a specific AI use case in your M&E work.
I need you to generate an AI governance checklist for a specific M&E use case. The checklist should cover all 6 governance domains.

Use case: [DESCRIBE: e.g., "Using AI to assist with qualitative coding of 150 key informant interviews for a mid-term evaluation"]

Context:
- Organization: [NGO / UN agency / government / consulting firm]
- Donor(s): [USAID / FCDO / EU / World Bank / UN / other]
- Data sensitivity: [Low: no personal data / Medium: anonymized data / High: identifiable personal data]
- Population: [DESCRIBE target/affected population]
- AI tool: [ChatGPT / Claude / Gemini / custom tool / not yet selected]

For each of the 6 governance domains below, provide:
1. A yes/no checklist of 3-4 items specific to this use case
2. One "red flag" that should stop the process
3. One recommended action if the checklist is not fully met

Domains:
1. Risk Screening (harm potential, rights, context sensitivity)
2. Data Governance (lawful basis, minimization, security, transfers)
3. Human Oversight (review points, override authority, decision boundaries)
4. Validation & QC (accuracy checks, peer review, change control)
5. Transparency & Documentation (disclosure, audit trail, reproducibility)
6. Monitoring & Incident Response (performance tracking, incident process, re-assessment)

Format as a printable checklist with checkboxes.
Take the Next Step
Governance is the foundation. Once your framework is in place, assess your team's readiness and start using AI tools with confidence.
Related Quick Guides
How to Assess Your M&E Team's AI Readiness
A 5-level self-assessment across data, governance, skills, tools, and value.
How to Protect Data Privacy When Using AI
What's safe to share and what to remove before using any AI tool.
How to Write AI Prompts That Actually Work for M&E
The 4Cs Framework for prompts that produce donor-ready outputs.