What the EU AI Act Is
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI regulation. Unlike voluntary principles or guidelines, it creates binding legal obligations backed by enforcement mechanisms. It applies to AI systems placed on the market or put into service in the EU, and to providers and deployers wherever they are established, whenever the system's output is used in the EU.
For development evaluators, this means: if your AI-assisted evaluation serves an EU-funded program, the Act's requirements likely apply.
Timeline That Matters
- August 2024: Act entered into force
- February 2025: Prohibitions on unacceptable-risk AI systems and the AI literacy obligation apply
- August 2025: Obligations for general-purpose AI (GPAI) models apply
- August 2026: Full applicability, including high-risk AI system requirements
The progressive timeline means obligations are accumulating now, not starting in 2026.
Risk Classification: Where M&E AI Use Fits
The Act uses a risk-based approach:
Unacceptable risk (prohibited): Social scoring, real-time remote biometric ID in public spaces, manipulation of vulnerable groups. Unlikely to apply to standard M&E AI use.
High risk: AI systems used in areas like employment, credit, law enforcement, migration. Some M&E use cases could touch high-risk categories, especially if AI outputs inform decisions about resource allocation to vulnerable populations or if AI is used in migration-related program evaluations.
Limited risk: Transparency obligations apply; users must be informed when they are interacting with an AI system. This covers chatbot-based data collection and AI-generated evaluation summaries presented to stakeholders.
Minimal risk: Most standard M&E AI use (drafting, coding, analysis assistance) likely falls here, but only if outputs do not directly determine resource allocation or affect fundamental rights.
The key question for evaluators: Does your AI use case influence decisions that affect people's access to services, resources, or rights? If yes, it may be classified as higher risk than you assume.
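That triage question can be sketched as a simple decision helper. The tier names and rules below are illustrative simplifications for internal screening, not legal definitions from the Act:

```python
# Illustrative triage sketch for mapping an M&E AI use case to the Act's
# risk tiers. The rules and tier labels are simplified assumptions for
# discussion, not legal categories.

def classify_use_case(informs_decisions_about_people: bool,
                      touches_high_risk_domain: bool,
                      user_facing_ai_interaction: bool) -> str:
    """Return a rough self-assessed risk tier for an M&E AI use case."""
    if touches_high_risk_domain:
        # e.g. migration-related evaluations, employment, credit
        return "high"
    if informs_decisions_about_people:
        # outputs feed resource-allocation decisions affecting individuals
        return "high (review required)"
    if user_facing_ai_interaction:
        # chatbots, AI-generated summaries shown to stakeholders
        return "limited (transparency obligations)"
    # drafting, coding, analysis assistance with human review
    return "minimal"

# A chatbot used for data collection lands in the limited-risk tier:
print(classify_use_case(False, False, True))
```

The point of a helper like this is not to settle the legal question but to force the screening to happen before deployment rather than after.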
AI Literacy Obligation
Article 4 of the Act, applicable since February 2025, requires providers and deployers to ensure "a sufficient level of AI literacy" among staff and other persons dealing with the operation and use of AI systems on their behalf. For M&E teams on EU-funded programs, this means:
- Teams using AI tools need training on what AI can and cannot do
- Understanding of risks, limitations, and appropriate use is a compliance requirement, not a nice-to-have
- Donors may begin asking about AI literacy in proposals and evaluation ToRs
EDPS Generative AI Guidance
The European Data Protection Supervisor issued guidance in October 2025 specifically for EU institutions using generative AI:
- Strengthened data protection requirements in the context of rapidly evolving AI
- Applies to EU institutions and bodies; relevant for evaluators contracting directly with them
- Emphasizes that existing data protection frameworks (GDPR) apply fully to generative AI use
What DG INTPA Has Not Done (Yet)
DG INTPA's published monitoring and evaluation pages do not contain AI-specific operational guidance for evaluators. The evaluation framework references methodology (EU Better Regulation Guidelines, OECD DAC criteria) and monitoring systems (OPSYS), but there is no equivalent of the UK AI Playbook or UNEG AI principles for INTPA evaluation practice.
This gap will likely close as the EU AI Act's full applicability date approaches. Evaluators should not wait for INTPA-specific guidance. The Act itself creates the obligations.
Practical Implications for Evaluators
- Assess your AI use cases against the risk classification before deploying on EU-funded work
- Ensure AI literacy on your evaluation team and document this in proposals
- Apply GDPR requirements, including data protection impact assessments (DPIAs), to any AI processing of personal data in EU-funded evaluations
- Watch for INTPA-specific guidance as August 2026 approaches
- Document AI use in methodology sections: the transparency obligations in the Act reinforce what good evaluation practice already requires
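Documenting AI use in a methodology section can be as light as one structured record per tool. The fields below are an illustrative sketch of what such a disclosure might capture, not a prescribed donor or INTPA template:

```python
from dataclasses import dataclass, asdict

# Illustrative disclosure record for AI use in an evaluation methodology
# section. Field names and values are assumptions, not a required format.

@dataclass
class AIUseDisclosure:
    tool: str               # model or product used
    task: str               # what the AI assisted with
    data_processed: str     # categories of data, incl. any personal data
    human_oversight: str    # who reviewed outputs and how
    risk_tier: str          # self-assessed tier under the Act

entry = AIUseDisclosure(
    tool="general-purpose LLM",
    task="first-draft coding of interview transcripts",
    data_processed="pseudonymised interview excerpts",
    human_oversight="evaluator reviewed and corrected all codes",
    risk_tier="minimal",
)

# Serialise for inclusion in an annex or methodology table
print(asdict(entry))
```

Keeping the record structured makes it trivial to answer donor questions later, and it doubles as evidence of the human oversight the Act's transparency logic expects.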
Bottom Line
The EU AI Act makes AI governance a legal requirement, not a voluntary best practice. For development evaluators working with EU funding, the compliance baseline is higher than for any other donor. The good news: if you are already doing DPIAs, documenting AI use, and maintaining human oversight, you are most of the way there.
Sources: EU AI Act (Regulation 2024/1689), EDPS Generative AI Guidance (Oct 2025), DG INTPA monitoring/evaluation pages.