The Landscape
Across eight major donor groupings, explicit M&E-specific AI rules are still uncommon. Most donors are either publishing system-level responsible AI principles or experimenting with AI in evaluation through pilots, without yet codifying operational standards for AI-assisted M&E.
The most concrete evaluation-practice guidance comes from World Bank IEG, whose methods papers document AI-assisted content analysis and repeatedly emphasize human expert validation. The most explicit controls language comes from the UK Government AI Playbook, covering DPIAs, human oversight, and procurement constraints. The UN system has the most centralized governance through UNEG ethical principles for AI in evaluations plus a model policy framework.
Donor-by-Donor Comparison
| Donor | Key AI Document | M&E-Specific Guidance | Key Requirements |
|---|---|---|---|
| USAID | AI Action Plan (2022); AI in Global Development Playbook (2024) | Unspecified (OIG flags governance gaps) | Federal AI directives apply; specifics unresolved due to 2025-26 reorganization |
| UK FCDO | Digital Development Strategy 2024-2030; UK Gov AI Playbook (2025) | No FCDO-specific manual, but AI Playbook functions as de facto guardrails | Data protection impact assessments (DPIAs) required; human oversight for high-risk decisions; procurement clauses for IP, transparency, liability |
| EU / DG INTPA | EU AI Act (in force Aug 2024, fully applicable Aug 2026); EDPS guidance (2025) | Unspecified in INTPA M&E pages | EU AI Act risk classification; AI literacy obligations; EDPS data protection requirements |
| World Bank IEG | Methods papers (2022, 2023); WBG Privacy Policy (2020) | Explicit: AI accelerates synthesis but domain experts must validate; quality/change control required | Legitimate/transparent processing, minimization, security, transfer controls |
| UN System | UN AI Principles (2022); Model Policy (2024); UNEG AI Ethics (2025) | UNEG principles link AI in evaluation to UNEG ethics; emphasize fairness, transparency, accountability | Human oversight non-negotiable; life/rights-critical decisions must not be ceded to AI |
| Gates Foundation | AI Principles (2023) | Evaluation operationalized through EVAH funding (rigorous evaluation of AI tools) | Privacy/security assessments; informed consent; transparency; stepwise scaling |
| Global Fund | Digital Health page (2025) | Unspecified in M&E framework | "Thoughtful utilization driven by evidence"; align with national data systems |
| Gavi | Digital Health Information Strategy 2022-2025 | Unspecified | Data sourcing/validation via DHIS2; standard digital reporting |
The Minimum Viable Compliance Posture
Even where your specific donor has not published AI-in-M&E rules, a defensible posture can be inferred from the most explicit controls across all donors:
- Explicit scoping of AI use cases: what AI is being used for, with what data, for what purpose
- Impact assessment (DPIA or equivalent) before processing data with AI tools
- Documented human oversight with defined review points and override authority
- Lifecycle monitoring and incident response, not just a launch checklist
- Disclosure and audit trails: AI use documented in methodology; process is reproducible
- Procurement clauses that prevent opacity and unmanaged liability in vendor AI tools
This posture is not gold-plated. It is the intersection of what the UN, UK, World Bank, and EU already require in their most explicit documents.
What Is Coming Next
Three trends to watch:
EU AI Act full applicability (August 2026) will create binding obligations for any organization using AI in EU-funded programming, including risk classification, AI literacy requirements, and compliance documentation.
USAID reorganization has created genuine uncertainty about where AI governance "lives" and whether the 2022 AI Action Plan remains operative. M&E teams working with USAID funding should monitor for updated policy.
Evaluation as AI governance: The Gates Foundation EVAH model, which funds rigorous evaluation of AI tools as a prerequisite for scaling, may set a template other donors replicate. Evaluation capacity is becoming part of responsible AI governance, not just a downstream activity.
Bottom Line
If your donor has not published AI-in-M&E rules, that does not mean anything goes. The six-point compliance posture above is defensible in an audit by any of these eight donors. Start there, document everything, and update as published guidance catches up with donor expectations.
Sources: USAID OIG (2024), UK AI Playbook (2025), EU AI Act (Regulation 2024/1689), World Bank IEG methods papers (2022, 2023), UNEG AI Ethics Principles (2025), UN Model Policy Framework (2024), Gates Foundation AI Principles (2023), EVAH (2026).