Governance

UNEG's AI Ethics Principles: What They Mean for Your Evaluation

In 2025, the UN Evaluation Group published the first evaluation-specific AI ethics framework. If you conduct evaluations within or for the UN system, these principles now set the bar.

Ben Playfair · 4 min read

UNEG · UN system · AI ethics · evaluation ethics · OIOS · human oversight

The Document and Its Lineage

UNEG's AI principles do not exist in isolation. They build on three layers of UN AI governance:

  1. UN System Ethical AI Principles (September 2022): System-wide principles requiring human autonomy, transparency, accountability, and explicit prohibition on ceding fundamental-rights decisions to AI.
  2. Framework for a Model Policy on Responsible AI Use (October 2024): Operational framework calling for institutional accountability structures, impact assessments, procurement guidelines, audit considerations, and lifecycle management.
  3. UNEG Ethical Principles for Harnessing AI in UN Evaluations (2025): The evaluation-specific layer, linking AI use to existing UNEG ethical guidelines and UN evaluation norms and standards.

This three-layer structure means that AI governance for UN evaluation is unusually centralized compared to other donor institutions. An evaluator working within the UN system is bound by all three layers at once.

Core Requirements

The UNEG principles emphasize four domains:

Fairness: AI-assisted evaluation must not introduce or amplify bias. This is particularly relevant for AI tools used in qualitative coding (which may reflect training data biases) or in targeting/selection decisions (which may systematically disadvantage certain groups).

Transparency: AI use in evaluation must be disclosed. The process must be explainable and reproducible. This goes beyond "we used AI" to "we used [specific tool] for [specific task] with [specific validation approach]."

Accountability: Clear accountability structures must exist for AI-assisted evaluation decisions. Who is responsible if an AI-assisted finding is wrong? Who reviews AI outputs before they enter an evaluation report? The principles require that these questions have answers before AI is used, not after.

Privacy: When AI processes evaluation data, data protection requirements apply with full force. This is especially relevant for evaluations handling sensitive data from vulnerable populations, where AI processing creates additional vectors for re-identification or breach.

The OIOS Reality Check

A useful counterpoint to the principles comes from OIOS (the UN Office of Internal Oversight Services). OIOS audit work has found that some UN entities exploring AI tools did not yet have ethical standards in place to guide safe and responsible use. The audit highlighted UNODC as an example of an entity whose research standards needed alignment with UN AI ethics principles, and noted the absence of clear timelines for completing that alignment.

This gap between principles (published) and practice (uneven) is the reality evaluators operate in. The principles exist. Compliance across the system is still catching up.

The UNEG Data/AI Working Group

UNEG has established a Data/AI working group as a mechanism for knowledge exchange on AI in evaluation, coordinating pilot studies and learning across agencies. For evaluators, this signals that AI use is being actively monitored and that practices will be compared across entities.

Practical Implications

If you evaluate for the UN system:

  • UNEG principles are your compliance baseline for AI use in evaluation
  • Disclose AI use explicitly in methodology sections
  • Ensure human review and validation for all AI-assisted analysis
  • Apply data protection measures consistent with the UN model policy framework
  • Document your AI governance approach in evaluation inception reports

If you evaluate outside the UN system:

  • UNEG's framework signals where the evaluation profession is heading
  • Other evaluation professional bodies (e.g., national evaluation associations) are likely to develop similar frameworks
  • Adopting UNEG-level practices now positions you ahead of requirements that are coming

For AI-assisted evidence synthesis:

  • The UN system is already using AI for evidence mapping and summarization (UNSDG SWEO initiative)
  • This signals institutional acceptance of AI for synthesis tasks, with appropriate governance
  • The bar is clear: AI accelerates synthesis, but human experts interpret and validate the results

Bottom Line

UNEG's principles are the clearest statement from any evaluation professional body on how AI should be used in evaluation work. They are binding for UN evaluations and influential beyond. The core message is consistent with every other major donor framework: AI can accelerate evaluation work, but transparency, human oversight, and accountability are not optional.


Sources: UNEG Ethical Principles for Harnessing AI in UN Evaluations (2025), UN System Ethical AI Principles (2022), UN Model Policy Framework (2024), OIOS A/80/332 Part II (2026).