Donor Policy

What Major Donors Actually Require for AI in M&E

Most M&E teams assume their donors have clear rules about using AI. The reality: most donors have published principles but almost none have M&E-specific operational guidance. Here is what eight major donors actually require.

Ben Playfair · 4 min read

Tags: donor requirements, AI governance, USAID, FCDO, EU, World Bank, UNEG, Gates Foundation

The Landscape

Across eight major donor groupings, explicit M&E-specific AI rules are still uncommon. Most donors are either publishing system-level responsible AI principles or experimenting with AI in evaluation through pilots, without yet codifying operational standards for AI-assisted M&E.

The most concrete evaluation-practice guidance comes from World Bank IEG, whose methods papers document AI-assisted content analysis and repeatedly emphasize human expert validation. The most explicit controls language comes from the UK Government AI Playbook, covering DPIAs, human oversight, and procurement constraints. The UN system has the most centralized governance through UNEG ethical principles for AI in evaluations plus a model policy framework.

Donor-by-Donor Comparison

| Donor | Key AI Document | M&E-Specific Guidance | Key Requirements |
| --- | --- | --- | --- |
| USAID | AI Action Plan (2022); AI in Global Development Playbook (2024) | Unspecified (OIG flags governance gaps) | Federal AI directives apply; specifics unresolved due to 2025-26 reorganization |
| UK FCDO | Digital Development Strategy 2024-2030; UK Gov AI Playbook (2025) | No FCDO-specific manual, but AI Playbook functions as de facto guardrails | DPIAs required; human oversight for high-risk decisions; procurement clauses for IP, transparency, liability |
| EU / DG INTPA | EU AI Act (in force Aug 2024, fully applicable Aug 2026); EDPS guidance (2025) | Unspecified in INTPA M&E pages | EU AI Act risk classification; AI literacy obligations; EDPS data protection requirements |
| World Bank IEG | Methods papers (2022, 2023); WBG Privacy Policy (2020) | Explicit: AI accelerates synthesis but domain experts must validate; quality/change control required | Legitimate/transparent processing, minimization, security, transfer controls |
| UN System | UN AI Principles (2022); Model Policy (2024); UNEG AI Ethics (2025) | UNEG principles link AI in evaluation to UNEG ethics; emphasize fairness, transparency, accountability | Human oversight non-negotiable; life/rights-critical decisions must not be ceded to AI |
| Gates Foundation | AI Principles (2023) | Evaluation operationalized through EVAH funding (rigorous evaluation of AI tools) | Privacy/security assessments; informed consent; transparency; stepwise scaling |
| Global Fund | Digital Health page (2025) | Unspecified in M&E framework | "Thoughtful utilization driven by evidence"; align with national data systems |
| Gavi | Digital Health Information Strategy 2022-2025 | Unspecified | Data sourcing/validation via DHIS2; standard digital reporting |

The Minimum Viable Compliance Posture

Even where your specific donor has not published AI-in-M&E rules, a defensible posture can be inferred from the most explicit controls across all donors:

  1. Explicit scoping of AI use cases: what AI is being used for, with what data, for what purpose
  2. Impact assessment (DPIA or equivalent) before processing data with AI tools
  3. Documented human oversight with defined review points and override authority
  4. Lifecycle monitoring and incident response, not just a launch checklist
  5. Disclosure and audit trails: AI use documented in methodology; process is reproducible
  6. Procurement clauses that prevent opacity and unmanaged liability in vendor AI tools

This posture is not gold-plated. It is the intersection of what the UN, UK, World Bank, and EU already require in their most explicit documents.
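For teams that want to operationalize the six points, one lightweight option is to keep a machine-readable register of each AI use case and flag which posture points it still misses. The sketch below is illustrative only: the class name, field names, and gap-check logic are my own, not a schema any of these donors prescribes.

```python
from dataclasses import dataclass, field

# Illustrative register entry for one AI use case. Field names map to the
# 6-point posture; nothing here is a donor-mandated schema.
@dataclass
class AIUseCaseRecord:
    use_case: str                           # 1. what AI is used for
    data_sources: list[str]                 # 1. with what data
    purpose: str                            # 1. for what purpose
    dpia_completed: bool = False            # 2. impact assessment done
    human_review_points: list[str] = field(default_factory=list)  # 3. oversight
    monitoring_plan: bool = False           # 4. lifecycle monitoring in place
    disclosed_in_methodology: bool = False  # 5. disclosure / audit trail
    vendor_clauses_reviewed: bool = False   # 6. procurement clauses checked

    def compliance_gaps(self) -> list[str]:
        """Return the posture points this use case does not yet satisfy."""
        gaps = []
        if not self.dpia_completed:
            gaps.append("impact assessment")
        if not self.human_review_points:
            gaps.append("human oversight")
        if not self.monitoring_plan:
            gaps.append("lifecycle monitoring")
        if not self.disclosed_in_methodology:
            gaps.append("disclosure")
        if not self.vendor_clauses_reviewed:
            gaps.append("procurement clauses")
        return gaps

# Hypothetical example: a use case with the DPIA done but nothing else.
record = AIUseCaseRecord(
    use_case="AI-assisted coding of interview transcripts",
    data_sources=["beneficiary interviews"],
    purpose="qualitative synthesis",
    dpia_completed=True,
)
print(record.compliance_gaps())
```

A register like this doubles as the audit trail in point 5: the record itself documents scope, and the gap list shows what remains before the tool is used on real data.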

What Is Coming Next

Three trends to watch:

EU AI Act full applicability (August 2026) will create binding obligations for any organization using AI in EU-funded programming, including risk classification, AI literacy requirements, and compliance documentation.

USAID reorganization has created genuine uncertainty about where AI governance "lives" and whether the 2022 AI Action Plan remains operative. M&E teams working with USAID funding should monitor for updated policy.

Evaluation as AI governance: The Gates Foundation EVAH model, which funds rigorous evaluation of AI tools as a prerequisite for scaling, may set a template other donors replicate. Evaluation capacity is becoming part of responsible AI governance, not just a downstream activity.

Bottom Line

If your donor has not published AI-in-M&E rules, that does not mean anything goes. The 6-point compliance posture above is defensible in an audit by any of these eight donors. Start there, document everything, and update as formal donor guidance catches up with what donors already expect in practice.


Sources: USAID OIG (2024), UK AI Playbook (2025), EU AI Act (Regulation 2024/1689), World Bank IEG methods papers (2022, 2023), UNEG AI Ethics Principles (2025), UN Model Policy Framework (2024), Gates Foundation AI Principles (2023), EVAH (2026).