Gates Foundation's EVAH: When Evaluation Becomes AI Governance

In February 2026, the Gates Foundation launched EVAH, a funding mechanism that makes rigorous evaluation a prerequisite for scaling AI tools in LMIC primary health care. This governance model will likely spread to other donors.

Ben Playfair · 4 min read
Gates Foundation · EVAH · AI evaluation · evidence gate · LMIC health · responsible AI

What EVAH Is

EVAH (Evidence for AI in Health), launched by the Gates Foundation in February 2026, is a funding mechanism that makes rigorous evaluation a prerequisite for scaling AI tools in primary health care across low- and middle-income countries (LMICs).

EVAH explicitly funds:

  • Implementation science studies to understand how AI health tools perform in real LMIC settings
  • Randomized controlled trials to measure impact against counterfactuals
  • Economic analyses to assess cost-effectiveness and sustainability
  • Acceptability and trust assessments to understand whether communities and health workers actually trust and use AI tools

This is unusually explicit. Most donor AI initiatives fund the AI tools themselves. EVAH funds the evaluation of AI tools. The signal is clear: evaluation is not downstream of AI deployment. Evaluation is a precondition for responsible AI scaling.

Why This Matters Beyond Health

EVAH's structure creates what might be called an "evidence gate": AI tools cannot scale past initial pilots without rigorous evaluation evidence that they work, are safe, do not cause harm, and are accepted by the people they affect.

This model is transferable to any sector:

  • Education: AI tutoring tools should not scale without evidence of learning outcomes and equity impacts
  • Livelihoods: AI-assisted targeting should not scale without evidence that it reaches intended beneficiaries without bias
  • Protection: AI analysis tools should not scale without evidence that they do not create re-identification or stigmatization risks
  • Governance: AI-assisted monitoring should not scale without evidence that it produces reliable, actionable data

The "evidence gate" concept reframes the relationship between AI and M&E. Instead of asking "how do we use AI in our work?", M&E teams can ask "how does our M&E expertise make AI deployment responsible?"

The Gates Foundation's AI Principles

EVAH sits within broader Gates Foundation AI principles (published May 2023):

  • Privacy and security assessments before AI use
  • Compliance with relevant regulations and standards
  • Informed consent and opt-out measures for people affected by AI
  • Transparency and accountability measures throughout
  • Stepwise scaling as the evidence base grows

The "stepwise scaling" principle is the most directly relevant to M&E: do not scale AI use faster than your evidence base supports. EVAH operationalizes this principle through funding.

Implications for M&E Practitioners

Evaluation expertise is becoming an AI governance asset. Organizations with strong M&E capacity are uniquely positioned to:

  1. Design rigorous evaluations of AI tools: the RCTs, implementation studies, and acceptability assessments that EVAH funds
  2. Advise on evidence standards for AI scaling decisions: what evidence is sufficient to move from pilot to scale?
  3. Monitor AI tool performance in deployment: ongoing evaluation, not just launch assessment
  4. Assess community trust and acceptability: qualitative evaluation methods applied to AI adoption

If other donors follow the EVAH model, demand for evaluation of AI tools will grow significantly. M&E teams that can design and conduct these evaluations have a competitive advantage.

For M&E teams using AI internally: The same "evidence gate" logic applies. Before scaling AI use from one evaluator's experiment to a team-wide practice, apply your own evaluation standards. What is the evidence that this AI-assisted workflow produces reliable results? What are the risks? What does the quality data show?
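The internal evidence-gate logic described above can be sketched as a simple checklist. Everything here is a hypothetical illustration for a team's own use; the criterion names and the agreement threshold are assumptions, not anything EVAH or the Gates Foundation prescribes:

```python
# A minimal sketch of an internal "evidence gate" for deciding whether an
# AI-assisted M&E workflow may scale from one evaluator's experiment to
# team-wide practice. Criteria and the 0.9 threshold are illustrative.
from dataclasses import dataclass


@dataclass
class EvidenceBase:
    reliability_checked: bool  # AI outputs spot-checked against human review
    risks_assessed: bool       # privacy, bias, and harm review completed
    quality_agreement: float   # share of AI outputs matching human judgment (0-1)


def may_scale(evidence: EvidenceBase, min_agreement: float = 0.9) -> bool:
    """Return True only if every gate criterion is satisfied."""
    return (
        evidence.reliability_checked
        and evidence.risks_assessed
        and evidence.quality_agreement >= min_agreement
    )


# A pilot with strong quality data but no completed risk assessment stays gated:
pilot = EvidenceBase(reliability_checked=True, risks_assessed=False,
                     quality_agreement=0.95)
print(may_scale(pilot))  # False
```

The point of the sketch is the shape of the decision, not the specific criteria: scaling requires every condition to pass, so strong performance on one dimension (here, quality agreement) cannot substitute for a missing risk assessment.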

Bottom Line

EVAH is the clearest signal yet that evaluation capacity is not just compatible with AI adoption; it is a prerequisite for responsible AI adoption. The Gates Foundation is putting money behind the principle that AI tools must earn the right to scale through rigorous evidence. M&E practitioners should pay attention: your skills are becoming more valuable, not less, in an AI-enabled world.


Sources: Gates Foundation EVAH announcement (Feb 2026), Gates Foundation AI Principles (May 2023).