How to Run a Data Protection Impact Assessment for AI in M&E
DPIAs are becoming standard for AI use in evaluation and monitoring. This 4-step process helps you assess risks before uploading any data to an AI tool, not after something goes wrong.
A DPIA is not bureaucratic overhead. It is the process that lets you explain to a donor, a data subject, or a review board exactly why using AI with this data, for this purpose, with these safeguards, is proportionate and safe. For M&E teams handling participant data, beneficiary records, or sensitive evaluation findings, it is increasingly a baseline expectation.
The 4-Step DPIA Process
Based on UK GDPR requirements and adapted for M&E AI use cases. Each step builds on the previous one. Do not skip to mitigation without completing the assessment.
Describe the Processing
Document exactly what AI tool you are using, what data it will process, where the data goes (including cross-border transfers to cloud servers), what the purpose is, and who has access to outputs. For M&E: "We will use [tool] to [task] using [data type] from [source]. Data will be processed on [servers in country/region]."
Assess Necessity & Proportionality
Is AI the least invasive way to achieve this M&E objective? Could you achieve the same result with less data, a local tool (no cloud transfer), or a non-AI approach? If you are uploading identifiable participant data to a commercial AI tool to save 3 hours of manual coding, the risk-benefit calculation may not favor AI.
Identify & Assess Risks
Map risks to data subjects (re-identification, data breach, bias in AI outputs affecting program decisions), to evaluation integrity (hallucinated findings, unreproducible analysis), and to communities (group-level harms from AI-generated patterns). Rate each risk by likelihood and severity.
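Rating by likelihood and severity is easier to keep consistent across assessors with a simple scoring rule. A minimal sketch; the 3-point scales, thresholds, and example risks below are illustrative assumptions, not a GDPR requirement:

```python
# Illustrative risk scoring for a DPIA risk register.
# The 1-3 scales and the thresholds are assumptions; adapt to your policy.

def risk_level(likelihood: int, severity: int) -> str:
    """Combine likelihood (1-3) and severity (1-3) into a risk level."""
    score = likelihood * severity
    if score >= 6:
        return "high"    # e.g. likely + serious harm: mitigate or do not proceed
    if score >= 3:
        return "medium"  # mitigate and document residual risk
    return "low"         # document and accept

register = [
    # (risk, likelihood 1-3, severity 1-3) -- example entries only
    ("Re-identification from quasi-identifiers", 2, 3),
    ("Provider trains models on uploaded data", 3, 2),
    ("Hallucinated findings enter the report", 2, 2),
]

for risk, likelihood, severity in register:
    print(f"{risk}: {risk_level(likelihood, severity)}")
```

Whatever scale you use, write it down in the DPIA so two assessors reading "medium" mean the same thing.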
Mitigate & Document
For each identified risk, define technical measures (anonymization, encryption, local processing, data deletion agreements) and organizational measures (access controls, validation workflows, training, incident response). Document residual risks and acceptance decisions. Review the DPIA at each project phase.
When DPIAs Prevent Real Problems
These scenarios show what happens with and without a DPIA for common M&E AI use cases.
AI-Assisted Survey Analysis: Without a DPIA
"We uploaded our household survey dataset to ChatGPT for analysis." The dataset contains GPS coordinates, household IDs, and income data. The AI tool's terms of service allow using inputs for model training. You have no data processing agreement. If the donor audits your data handling, you cannot demonstrate compliance.
AI-Assisted Survey Analysis: With a DPIA
"Before uploading, our DPIA identified: the dataset contains indirectly identifiable data (GPS + household size could re-identify). We removed GPS coordinates, replaced household IDs with random codes, and used an enterprise AI tool with a data processing agreement that prohibits training on our data. DPIA documented and filed."
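The two mitigations in this scenario, dropping indirect identifiers and replacing household IDs with random codes, are mechanically simple. A minimal sketch; the column names (`hh_id`, `gps_lat`, `gps_lon`) are illustrative assumptions, and the code-to-ID mapping must be stored securely offline and never uploaded:

```python
# Pseudonymization sketch run BEFORE any upload: drop indirect identifiers
# and replace household IDs with random codes. Column names are assumptions.
import secrets

DROP = {"gps_lat", "gps_lon"}   # indirect identifiers to remove
code_map: dict[str, str] = {}   # hh_id -> code; keep offline, never upload

def pseudonymize(rows: list[dict]) -> list[dict]:
    out = []
    for row in rows:
        clean = {k: v for k, v in row.items() if k not in DROP}
        hh = clean["hh_id"]
        if hh not in code_map:
            code_map[hh] = "HH-" + secrets.token_hex(4)  # random, not derived
        clean["hh_id"] = code_map[hh]
        out.append(clean)
    return out

rows = [
    {"hh_id": "A1", "gps_lat": "9.02", "gps_lon": "38.75", "income": "120"},
    {"hh_id": "A1", "gps_lat": "9.03", "gps_lon": "38.74", "income": "90"},
]
safe_rows = pseudonymize(rows)
```

Note that random codes (rather than hashes of the original IDs) prevent anyone from recomputing the mapping; re-identification then requires access to the offline code map.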
Cross-Border Data Transfer: Without a DPIA
"Our field team in [country] collected interview transcripts. We used a US-based AI tool to code them." The transcripts contain sensitive information about conflict-affected populations. Data left the country of collection, transited through servers with no adequacy assessment, and the AI provider's data retention policy is unclear.
Cross-Border Data Transfer: With a DPIA
"Our DPIA identified cross-border transfer as the primary risk. We assessed: the AI tool processes data in the EU (adequate jurisdiction). We have a data processing agreement with standard contractual clauses. Transcripts were pseudonymized before upload. We chose this tool specifically because of its data residency options."
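Pseudonymizing free-text transcripts is harder than pseudonymizing tabular data. A minimal sketch of rule-based redaction; the patterns below catch only obvious identifiers (emails and phone-like numbers) and are illustrative assumptions, so names, places, and contextual details still need human review or a dedicated de-identification pass:

```python
# Rule-based redaction sketch for transcripts before upload. This is a
# first pass only: it catches emails and phone-like numbers, not names or
# contextual identifiers. Patterns are illustrative assumptions.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

print(redact("Contact Amina at amina@example.org or +211 912 345 678."))
```

Treat a pass like this as one technical measure inside the DPIA's mitigation section, not as proof the transcripts are no longer personal data.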
Sensitive Population Data: Without a DPIA
"We used AI to analyze feedback from GBV survivors for our protection evaluation." Deeply sensitive data entered a commercial AI system. No consent was obtained for AI processing (only for collection). No assessment of whether AI processing creates re-identification risk.
Sensitive Population Data: With a DPIA
"Our DPIA determined this data is too sensitive for cloud-based AI processing. We used a local AI model (no data leaves our infrastructure). We assessed community-level risks of AI-generated patterns. We obtained specific consent for AI-assisted analysis. The methodology section discloses the approach and safeguards."
5 DPIA Rules for M&E Teams
Do the DPIA early, not after deployment
The UK AI Playbook emphasizes privacy-by-design across the entire AI lifecycle: design, training/testing, deployment, and monitoring. Start your DPIA when you are evaluating whether to use AI for a task, not after you have already uploaded data.
Always assess cross-border transfer risks
Most commercial AI tools process data on servers outside the country of collection. UK GDPR and EU frameworks require you to assess whether the destination jurisdiction provides adequate data protection. For M&E teams collecting data in LMICs and using US or EU-based AI tools, this is almost always relevant.
Include community-level risks, not just individual
Standard DPIAs focus on individual data subjects. M&E DPIAs should also consider: could AI-generated patterns from aggregated data stigmatize a community? Could AI-assisted targeting recommendations create group-level harms? The Do No Harm principle extends to AI-assisted analysis.
Review at each lifecycle stage
A DPIA is not a one-time document. Review it when you change AI tools, change the type of data processed, scale from pilot to full implementation, or when the AI tool provider changes their terms of service. The UN model policy framework calls for lifecycle governance including re-assessment.
Keep it proportionate
A DPIA for "using AI to format a report template with no personal data" should be a one-page note. A DPIA for "using AI to analyze 500 interview transcripts from conflict-affected populations" should be thorough. Match the depth to the actual risk, not to a generic template.
DPIA Starter Prompt
Use this prompt to generate a draft DPIA for a specific AI use case. Then review and refine with your data protection officer or equivalent.
Help me draft a Data Protection Impact Assessment (DPIA) for an AI use case in our M&E work.

Use case: [DESCRIBE: e.g., "Using Claude to assist with thematic coding of 80 key informant interview transcripts from a food security evaluation in South Sudan"]

Data details:
- Data type: [interview transcripts / survey responses / beneficiary records / monitoring data / other]
- Contains personal data: [yes - identifiable / yes - pseudonymized / no]
- Sensitive categories: [health / conflict / GBV / children / ethnic identity / none]
- Data subjects: [program participants / beneficiaries / staff / community members]
- Country of collection: [country]

AI tool details:
- Tool: [ChatGPT / Claude / Gemini / local model / other]
- Processing location: [US / EU / local / unknown]
- Data processing agreement: [yes / no / unknown]
- Data retention policy: [not used for training / used for training / unknown]

Please generate a structured DPIA with these sections:
1. Processing Description (what, why, who, where)
2. Necessity & Proportionality Assessment
3. Risk Register (risk | likelihood | severity | risk level) for at least 6 risks
4. Mitigation Measures (technical and organizational)
5. Residual Risk Assessment
6. Review Schedule

Flag any areas where the risk level suggests I should reconsider using AI for this task.
Protect Your Data, Use AI Confidently
A completed DPIA is the foundation for responsible AI use. Pair it with governance guidelines and start your AI pilots with confidence.
Related Quick Guides
How to Build AI Governance for M&E
The 6-point governance framework every M&E team needs before scaling AI use.
How to Protect Data Privacy When Using AI
What's safe to share and what to remove before using any AI tool.
How to Assess Your M&E Team's AI Readiness
A 5-level self-assessment across data, governance, skills, tools, and value.